Category: Evening Report

  • MIL-Evening Report: Is owning a dog good for your health?

    Source: The Conversation (Au and NZ) – By Tania Signal, Professor of Psychology, School of Health, Medical and Applied Sciences, CQUniversity Australia

    Pogodina Natalia/Shutterstock

    Australia loves dogs. We have one of the highest rates of pet ownership in the world, and one in two households has at least one dog.

    But are they good for our health?

    Mental health is the second-most common reason cited for getting a dog, after companionship. And many of us say we “feel healthier” for having a dog – and let them sleep in our bedroom.

    Here’s what it means for our physical and mental health to share our homes (and doonas) with our canine companions.

    Are there physical health benefits to having a dog?

    Having a dog is linked to lower risk of death over the long term. In 2019, a systematic review gathered evidence published over 70 years, involving nearly four million individual medical cases. It found people who owned a dog had a 24% lower risk of dying from any cause compared to those who did not own a dog.

    Having a dog may help lower your blood pressure through more physical activity.
    Barnabas Davoti/Pexels

    Dog ownership was linked to increased physical activity. This lowered blood pressure and helped reduce the risk of stroke and heart disease.

    The review found for those with previous heart-related medical issues (such as heart attack), living with a dog reduced their subsequent risk of dying by 35%, compared to people with the same history but no dog.

    Another recent UK study found adult dog owners were almost four times as likely to meet daily physical activity targets as non-owners. Children in households with a dog were also more active and engaged in more unstructured play, compared to children whose family didn’t have a dog.

    Exposure to dirt and microbes carried in from outdoors may also strengthen immune systems and lead to less use of antibiotics in young children who grow up with dogs.

    Children in households with a dog were often more active.
    Maryshot/Shutterstock

    Health risks

    However, dogs can also pose risks to our physical health. One of the most common health issues for pet owners is allergies.

    Dogs’ saliva, urine and dander (the skin cells they shed) can trigger allergic reactions resulting in a range of symptoms, from itchy eyes and runny nose to breathing difficulties.

    A recent meta-analysis pooled data from nearly two million children. Findings suggested early exposure to dogs may increase the risk of developing asthma (although not quite as much as having a cat does). The child’s age, how much contact they have with the dog and their individual risk all play a part.

    Slips, trips and falls are another risk – more people fall over due to dogs than cats.

    Having a dog can also expose you to bites and scratches, which may become infected and pose a risk for those with compromised immune systems. And they can introduce zoonotic diseases into your home, including ringworm and Campylobacter, a bacterium that causes diarrhoea.

    For those sharing the bed, there is an elevated risk of allergies and of picking up ringworm. It may also mean lost sleep, as dogs move around at night.

    On the other hand, some owners report feeling more secure while co-sleeping with their dogs, with the emotional benefit outweighing the possibility of sleep disturbance or waking up with flea bites.

    Proper veterinary care and hygiene practices are essential to minimise these risks.

    Many of us don’t just share a home with a dog – we let them sleep in our beds.
    Claudia Mañas/Unsplash

    What about mental health?

    Many people know the benefits of having a dog are not only physical.

    As companions, dogs can provide significant emotional support, helping to alleviate symptoms of anxiety, depression and post-traumatic stress. Their presence may offer comfort and a sense of purpose to individuals facing mental health challenges.

    Loneliness is a significant and growing public health issue in Australia.

    In the dog park and your neighbourhood, dogs can make it easier to strike up conversations with strangers and make new friends. These social interactions can help build a sense of community belonging and reduce feelings of social isolation.

    For older adults, dog walking can be a valuable loneliness intervention that encourages social interaction with neighbours, while also combating declining physical activity.

    However, if you’re experiencing chronic loneliness, it may be hard to engage with other people during walks. An Australian study found simply getting a dog was linked to decreased loneliness. People reported an improved mood – possibly due to the benefits of strengthening bonds with their dog.

    Walking a dog can make it easier to talk to people in your neighbourhood.
    KPegg/Shutterstock

    What are the drawbacks?

    While dogs can bring immense joy and numerous health benefits, there are also downsides and challenges. The responsibility of caring for a dog, especially one with behavioural issues or health problems, can be overwhelming and create financial stress.

    Dogs have shorter lifespans than humans, and the loss of a beloved companion can lead to depression or exacerbate existing mental health conditions.

    Lifestyle compatibility and housing conditions also play a significant role in whether having a dog is a good fit.

    The so-called pet effect suggests that pets, often dogs, improve human physical and mental health in all situations and for all people. The reality is more nuanced. For some, having a pet may be more stressful than beneficial.

    Importantly, the animals that share our homes are not just “tools” for human health. Owners and dogs can mutually benefit when the welfare and wellbeing of both are maintained.

    Tania Signal does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Is owning a dog good for your health? – https://theconversation.com/is-owning-a-dog-good-for-your-health-238888

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: People don’t like a ‘white saviour’, but does it affect how they donate to charity?

    Source: The Conversation (Au and NZ) – By Robert Hoffmann, Professor of Economics, Tasmanian Behavioural Lab, University of Tasmania

    Shutterstock

    Efforts to redress global inequality are facing an unexpected adversary: the white saviour. It’s the idea that people of colour, whether in the Global South or North, need “saving” by a white Western person or aid worker.

    An eclectic mix of white activists have been publicly accused of being white saviours for trying to help different causes in the Global South. They include celebrities who adopted orphaned children, organised benefit concerts such as Live Aid, or called out rights abuses.

    Others include professional and volunteer charity workers and journalists reporting on poverty in Africa. Even activism at home can earn the white saviour label, like efforts to refine the proposal for the Indigenous Voice to Parliament in Australia.

    We conducted a series of studies with 1,991 representative Australians to find out what people thought made a white saviour, how charity appeal photographs create this impression, and how it affected donations.

    White saviourism and charities

    The concern is that white people’s overseas charity, even when well-meaning, can inadvertently hurt rather than help the cause. It could perpetuate harmful stereotypes of white superiority, disempower local people, or misdirect resources to make helpers feel good rather than alleviating genuine need.

    The fear of being labelled a white saviour could make people think twice about giving time or money to worthy causes. It might stop aid organisations using proven appeals to raise donations they need.

    Médecins Sans Frontières (MSF), for instance, released a video apologising for using photos depicting white people in aid settings and which aren’t representative of the majority local staff they employ.

    Therein lies the dilemma: white donors can relate to photos of white helpers, but this is easily interpreted as white saviourism.

    What makes someone a white saviour?

    Very little research exists into exactly what white saviourism means. Broadly, it seems to describe people in the Global North who support international causes for selfish reasons, to satisfy their own sentimentality and need for a positive image. We wanted to go deeper.

    In the first of our studies, we showed our participants 26 photographs depicting different Global South aid settings with a white helper.

    The helpers that participants thought of as highly “white saviour” typically had these characteristics:

    • they appeared to be privileged and superior

    • they gave help sentimentally and tokenistically

    • they conformed to the colonial stereotype of the helpless local and powerful foreigner.

    Further analysis showed these characteristics boil down to two essential features: ineffectiveness of the help and entitlement of the helpers.

    These two perceptions of the white saviour explain the problem for charity. Behavioural economics research has identified two main reasons for donating, and these perceptions undermine both.

    Why do people donate at all?

    So to see how much white saviourism affects charities, we need to know why people donate in the first place.

    One reason for giving is pure altruism, the desire to help others with no direct benefit to oneself. The effective altruism movement encourages people to make every donated dollar count – getting the maximum bang for the buck in terms of measurable outcomes for those in need.

    The difficulty for effective altruists is in assessing the impact of different charities vying for their donations. There are now websites that list charities by lives saved per dollar donated.

    Alternatively, donors might look at a charity’s appeal images for clues of how effectively it will use their dollars.

    Depicting white people as saviours can create the impression of tokenistic aid that only serves the helper’s sentimental needs. Evidence shows people resent impure motives in others (including organisations) and might try to penalise them.

    Behavioural economics research also shows, as you might expect, that some people are more concerned about themselves than others when giving. This is known as “warm glow” giving.

    Warm glow givers have several self-serving motivations. They include giving to gain self-respect or social status.

    People also have a desire to meet their social obligations. For richer folks this could include charitable giving. And giving can reduce guilt they might feel about their privilege.

    Just like the effective altruist, the warm glow giver could be put off by any sign of white saviourism. They don’t want to be seen to be endorsing it.

    Do people still donate?

    All this suggests that seeing a white saviour depiction in a charitable appeal will make people donate less.

    We examined this in another study, in which participants were shown each of the previous photos. This time they were asked, for every photo, if they were willing to donate to a charity that uses it.

    And as we thought, the photos previously rated as high in white saviourism attracted lower intentions to donate.

    Participants were shown photos of white aid workers in the Global South.
    Shutterstock

    But intentions do not always equal actions, as psychologists have demonstrated for many years.

    To overcome this, we measured real donations in another study. Again participants saw the same photos, but this time they had the chance to donate part of their participation fee to a real charity when seeing them.

    What we found surprised us: the white saviour effect disappeared. How high a photo was on the white saviour scale had no impact on how much participants donated when seeing it.

    Does the end justify the motivation?

    Our results summarise the dilemma. Donors might object to white saviourism by charities, but in the end feel that it’s the help that counts, not the motivation behind it.

    We found some evidence for this when we asked participants about their general views of white saviourism.

    Almost 70% agreed that white saviour motives are common in Western help and that this was problematic for recipients. But interestingly, only 42% thought helpers with these motives deserved criticism.

    Together, this might suggest that people feel white saviour help is better than no help. There are voices in the charity community who echo this sentiment: imposing conditions on charitable giving will serve to reduce it.

    In an interview with the Wall Street Journal, Elise Westhoff, president of the Philanthropy Roundtable in the United States, said “by imposing those ‘musts’ and ‘shoulds’, you really limit human generosity”.

    But this doesn’t mean there are no legitimate concerns. There are, but it’s not hard for charities to address them.

    Our results show that white saviour perceptions do not affect actual donations. Read another way, this suggests charities can safely replace highly white saviour images without losing donations for their causes.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. People don’t like a ‘white saviour’, but does it affect how they donate to charity? – https://theconversation.com/people-dont-like-a-white-saviour-but-does-it-affect-how-they-donate-to-charity-239307


  • MIL-Evening Report: XEC is now in Australia. Here’s what we know about this hybrid COVID variant

    Source: The Conversation (Au and NZ) – By Lara Herrero, Research Leader in Virology and Infectious Disease, Griffith University

    Kateryna Kon/Shutterstock

    Over the nearly five years since COVID first emerged, you’d be forgiven if you’ve lost track of the number of new variants we’ve seen. Some have had a bigger impact than others, but virologists have documented thousands.

    The latest variant to make headlines is called XEC. This omicron subvariant has been reported predominantly in the northern hemisphere, but it has now been detected in Australia too.

    So what do we know about XEC?

    Is COVID still a thing?

    People are now testing for COVID less and reporting it less. Enthusiasm to track the virus is generally waning.

    Nonetheless, Australia is still collecting and reporting COVID data. Although the number of cases is likely to be much higher than the number documented (around 275,000 so far this year), we can still get some idea of when we’re seeing significant waves, compared to periods of lower activity.

    Australia saw its last COVID peak in June 2024. Since then cases have been on the decline.

    But SARS-CoV-2, the virus that causes COVID, is definitely still around.

    Which variants are circulating now?

    The main COVID variants circulating currently around the world include BA.2.86, JN.1, KP.2, KP.3 and XEC. These are all descendants of omicron.

    The XEC variant was first detected in Italy in May 2024. The World Health Organization (WHO) designated it as a variant “under monitoring” in September.

    Since its detection, XEC has spread to more than 27 countries across Europe, North America and Asia. As of mid-September, the highest numbers of cases have been identified in countries including the United States, Germany, France, the United Kingdom and Denmark.

    XEC is currently making up around 20% of cases in Germany, 12% in the UK and around 6% in the US.

    The virus behind COVID continues to evolve.
    Photo by Centre for Ageing Better/Pexels

    Although XEC remains a minority variant globally, it appears to have a growth advantage over other circulating variants. We don’t know why yet, but reports suggest it may be able to spread more easily than other variants.

    For this reason, it’s predicted XEC could become the dominant variant worldwide in the coming months.

    How about in Australia?

    The most recent Australian Respiratory Surveillance Report noted there has been an increasing proportion of XEC sequenced recently.

    In Australia, 329 SARS-CoV-2 sequences collected from August 26 to September 22 have been uploaded to AusTrakka, Australia’s national genomics surveillance platform for COVID.

    The majority of sequences (301 out of 329, or 91.5%) were sub-lineages of JN.1, including KP.2 (17 out of 301) and KP.3 (236 out of 301). The remaining 8.5% (28 out of 329) were recombinants consisting of one or more omicron sub-lineages, including XEC.

    Estimates based on data from GISAID, an international repository of viral sequences, suggest XEC is making up around 5% of cases in Australia, or 16 of 314 samples sequenced.

    Queensland reported the highest rates in the past 30 days (8%, or eight of 96 sequences), followed by South Australia (5%, or five out of 93), Victoria (5%, or one of 20) and New South Wales (3%, or two of 71). WA recorded zero sequences out of 34. No data were available for other states and territories.

    What do we know about XEC? What is a recombinant?

    The XEC variant is believed to be a recombinant descendant of two previously identified omicron subvariants, KS.1.1 and KP.3.3. Recombinant variants form when two different variants infect a host at the same time, which allows the viruses to switch genetic information. This leads to the emergence of a new variant with characteristics from both “parent” lineages.

    KS.1.1 is one of the group commonly known as “FLiRT” variants, while KP.3.3 is one of the “FLuQE” variants. Both of these variant groups have contributed to recent surges in COVID infections around the world.

    The WHO’s naming conventions for new COVID variants often use a combination of letters to denote new variants, particularly those that arise from recombination events among existing lineages. The “X” typically indicates a recombinant variant (as with XBB, for example), while the letters following it identify specific lineages.

    We know very little so far about XEC’s characteristics specifically, and how it differs from other variants. But there’s no evidence to suggest symptoms will be more severe than with earlier versions of the virus.

    What we do know is which mutations this variant has. In the S gene, which encodes the spike protein, we can find a T22N mutation (inherited from KS.1.1) as well as Q493E (from KP.3.3), along with other mutations known to the omicron lineage.

    Will vaccines still work well against XEC?

    The most recent surveillance data doesn’t show any significant increase in COVID hospitalisations. This suggests the current vaccines still provide effective protection against severe outcomes from circulating variants.

    As the virus continues to mutate, vaccine companies will continue to update their vaccines. Both Pfizer and Moderna have updated vaccines to target the JN.1 variant, which is a parent strain of the FLiRT variants and therefore should protect against XEC.

    However, Australia is still waiting to hear which vaccines may become available to the public and when.

    In the meantime, omicron-based vaccines such as the current XBB.1.5 Spikevax (Moderna) or COMIRNATY (Pfizer) are still likely to provide good protection from XEC.

    It’s hard to predict how XEC will behave in Australia as we head into summer, and we’ll need more research to understand this variant as it spreads. But XEC was first detected in Europe during the northern hemisphere’s summer months, which suggests it might be well suited to spreading in warmer weather.

    Lara Herrero receives funding from NHMRC.

    ref. XEC is now in Australia. Here’s what we know about this hybrid COVID variant – https://theconversation.com/xec-is-now-in-australia-heres-what-we-know-about-this-hybrid-covid-variant-239292


  • MIL-Evening Report: What are the greatest upsets in NRL grand final history?

    Source: The Conversation (Au and NZ) – By Wayne Peake, Adjunct research fellow, School of Humanities and Communication Arts, Western Sydney University

    The Penrith Panthers and Melbourne Storm will contest the National Rugby League (NRL) grand final on Sunday.

    Betting markets have them pretty much equal favourites. However, history shows grand finals don’t always go to plan.

    But what are the biggest upsets in NRL grand final history?

    Using a combination of formlines during the season and in finals, betting odds, media coverage and past performances, here are some of the most outlandish upsets in rugby league’s history.

    1944: Balmain 12, Newtown 8

    In 1944, Newtown was the minor premier while Balmain was second.

    Newtown entered the finals series as hot favourite and looked even hotter after destroying third-placed St George 55–7 in the first semi-final.

    However, in the final, Balmain won 19–6. That wasn’t the end of the story, though.

    Under the rules of the day, Newtown, as minor premier, could seek a rematch in a grand final “challenge”.

    Newtown fielded a much stronger side and most expected it to reverse the final result. However, Balmain won again, 12–8.

    1952: Western Suburbs 22, South Sydney 12

    In 1952, Wests were minor premiers, while Souths finished third.

    Souths won the first semi-final 18–10 but Wests, as minor premiers, went straight to the grand final challenge three weeks later anyway. Meanwhile, Souths beat North Sydney to advance.

    According to the Sydney Truth, Wests were “regarded in some quarters as rank outsiders”.

    Then, rumours spread that Wests had “thrown” the first game and the referee assigned to the decider, George Bishop, had placed £400 on them, causing their price to shorten.

    Bishop sent off a player from each team ten minutes into the second half. Souths scored a try with 20 minutes to go to take the lead before Wests scored four tries in the last ten minutes to win.

    Bishop retired after the grand final.

    1963: St George 8, Western Suburbs 3

    In 1963, St George were minor premiers, while Wests were second. However, Wests, who had lost the previous two grand finals to St George, had beaten them twice in the regular rounds and again in the major semi-final, and went into the game as favourites.

    On grand final day, the field deteriorated into a quagmire and led to the famous post-match “gladiators” photograph of captains Arthur Summons and Norm Provan shaking hands while coated in mud.

    The foul conditions contributed to a low-scoring game, which St George won 8–3.

    Once more it was suspected the referee, this time Darcy Lawler, had a financial interest in the outcome. He, too, retired immediately.

    Today we view St George’s victory in the context of their huge streak of consecutive premierships from 1956 to 1966.

    1989: Canberra 19, Balmain 14

    South Sydney had been minor premiers while Balmain finished third, one point clear of Canberra.

    Balmain were generally considered to have been more impressive than Canberra and were favourites for the grand final.

    One media expert, Harry Craven, was so confident Balmain would win he had his “weatherboard” (house) on the Tigers.

    In the grand final, Balmain led 14–8 with 15 minutes to play before Canberra levelled at 14–14 with 90 seconds remaining.

    After 20 minutes of extra time, Canberra won 19–14 and became the first team to win from further back than third in the regular season.

    1995: Canterbury 17, Manly 4

    Possibly the hottest grand final favourites of the past half-century, Manly lost just two games in the regular season and shared the minor premiership with Canberra.

    Canterbury (officially, the “Sydney Bulldogs” in 1995) were sixth and needed to win four straight games to be premier.

    The two sides met once in the regular season, with Manly winning 26–0.

    In the grand final, the Bulldogs led 6–4 at half-time and disaster loomed when Terry Lamb was sin-binned early in the second term.

    Somehow, the Dogs held Manly out until his return, then gained the ascendancy and won comfortably.

    1997: Newcastle 22, Manly 16

    In 1997 we had the first season of the News Limited-funded “Super League”.

    The glamorous Manly side was once more expected to be easy winners over Newcastle, which was contesting its first grand final.

    Only two teams in 70 years had won at their first attempt, while Manly had won its past 11 matches against the Knights.

    The grand final followed its anticipated plot until Newcastle’s Robbie O’Davis evened the score at 16–16. Newcastle missed with two field goal attempts, but after the second, Darren Albert regathered the ball and pierced the Manly defence to score under the posts with six seconds remaining.

    In 1997, the Newcastle Knights secured a maiden title against the Manly Sea Eagles.

    1999: Melbourne 20, St George Illawarra 18

    Odds for the 1999 grand final are unknown but the press anointed St George “hot favourites” while Canterbury champion Ricky Stuart rated them “unbeatable”.

    Melbourne was in just its second year of NRL competition and had never beaten St George.

    Melbourne had pulled off “escapes” against Canterbury and Parramatta to make the decider, but the Saints had been winning with ease and even crushed Melbourne 34–10 in the qualifying final.

    In the decider, St George led 14–0 and was looking good. Then, in the 51st minute, Anthony Mundine kicked the ball to a vacant try line but fumbled it touching down.

    The Melbourne Storm shocked the NRL world when they won the 1999 grand final.

    Nevertheless, St George maintained an 18–6 advantage midway through the second half, before a Storm fightback.

    With minutes remaining, Melbourne received a penalty try which it converted to win the game.

    The biggest upset: 1969, Balmain 11, South Sydney 2

    Most agree the biggest grand final upset is Balmain’s 11–2 defeat of South Sydney in 1969.

    Bookies had Souths as heavy favourites – they had won the previous two grand finals, while Balmain was a young team lacking grand final experience.

    However, the form lines of the two teams were not dissimilar.

    At the end of the regular season, South Sydney was the minor premier with Balmain just one win behind them.

    Souths defeated Balmain by one point in the semi-final, and a week later, Balmain beat Manly by a point to scrape into the grand final.

    Despite Souths’ heavy favouritism, Balmain were not friendless. Of six “experts” whose opinion was sought by one newspaper on the morning of the game, two picked Balmain outright and another conceded them an even-money chance.

    It was perhaps the circumstances of the game, as much as the result, that have lent the 1969 grand final its legend status.

    Souths, noted for their attacking potency, were unable to score a try. Balmain scored a single try early in the second half but then several Balmain players set about disrupting the Souths attack by, allegedly, feigning injuries to give their teammates a breather.

    The game has since become known as the “sit-down grand final”.

    Wayne Peake does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. What are the greatest upsets in NRL grand final history? – https://theconversation.com/what-are-the-greatest-upsets-in-nrl-grand-final-history-239380


  • MIL-Evening Report: How we created a beautiful native wildflower meadow in the heart of the city using threatened grassland species

    Source: The Conversation (Au and NZ) – By Katherine Horsfall, PhD Candidate, School of Agriculture, Food and Ecosystem Sciences, The University of Melbourne

    Matthew Stanton, CC BY-NC

    A city street may seem an unusual place to save species found in critically endangered grasslands. My new research, though, shows we can use plants from these ecosystems to create beautiful and biodiverse urban wildflower meadows. This means cities, too, can support nature repair.

    Species-rich grassy ecosystems are some of the most threatened plant communities on the planet. Occupying easily developed flat land, grassy ecosystems are routinely sacrificed as our cities expand.

    In south-east Australia, the volcanic plains that support Melbourne’s northern and western suburbs were once grasslands strewn with wildflowers, “resembling a nobleman’s park on a gigantic scale”, according to early explorer Thomas Mitchell. But these exceptionally diverse, critically endangered ecosystems have been reduced to less than 1% of their original area. The few remnants continue to be lost to urban development and weed invasion.

    A mix of the seeds used to create the meadow.
    Hui-Anne Tan, CC BY-NC

    Unfortunately, efforts to restore the grasslands around Melbourne have had mixed results. In 2020 the City of Melbourne took matters into its own hands. Recognising it is possible to enrich the diversity of birds, bats and insects by providing low-growing native plants, the council set a goal to increase understorey plants by 20% on the land it manages.

    Creating a large native grassland in inner-city Royal Park would help achieve this goal. Adopting a technique used by wildflower meadow designers, we sowed a million seeds of more than two dozen species from endangered grasslands around Melbourne. All but one of these species established in the resulting native wildflower meadow.

    The recreated native wildflower meadow is close to an inner-city road.
    Matthew Stanton, CC BY-NC

    What were the challenges at this site?

    Existing restoration techniques remove nutrient-enriched topsoils full of weed seeds before sowing native seeds. The target plant community can then establish with less competition from nutrient-hungry weeds.

    However, this approach could not be used at the Royal Park site. Topsoil removal cannot be used on many urban sites where soils are contaminated or there are underground services. Alternative approaches are needed to reduce weed competition while minimising soil disturbance.

    I saw a possible answer in the horticultural approaches used to create designed wildflower meadows.

    Preparing the selected site in Royal Park by raking away mulch.
    Hui-Anne Tan, CC BY-NC

    While still rare in Australia, designed wildflower meadows can increase the amenity and biodiversity of urban environments. They also reduce the costs of managing and mowing turf grass. These meadows are designed to be infrequently mown or burnt.

    Wildflower meadow designers typically use an international suite of species that can be established from seed and persist without fertiliser or regular irrigation. An abundance of flowers makes people more accepting of “messy” vegetation. Recognising this, designers select a mix of species that will flower for as much of the year as possible.

    Seed being spread by hand across the prepared area in April 2020.
    Hui-Anne Tan, CC BY-NC

    To reduce competition from weeds, these meadows are often created on a layer of sand that covers the original site soils. The low-nutrient sand buries weed seeds and creates a sowing surface that resists weed invasion from the surrounding landscape.

    However, the grasslands around Melbourne grow on clay soils, not sand. Would these techniques work for plants from these ecosystems?

    A deep sand layer controls weeds and slugs

    To find out, we sowed more than a million seeds on plots in Royal Park with two depths of sand (10mm and 80mm) and one without a sand layer. Within one year, 26 of the 27 species sown had established to form a dense, flowering meadow across all sand depths. These plants included three threatened species.

    The hoary sunray, Leucochrysum albicans subsp. tricolor, is one of the endangered species in the native wildflower meadow.
    Marc Freestone/Royal Botanic Gardens Victoria, CC BY-NC-SA

    Crucially, the deepest sand layer reduced weed numbers and therefore time spent weeding.

    Interestingly, slugs played a role in determining the diversity of the native meadow. South-east Australia’s grasslands have largely evolved without slugs. As a result, seedlings lack chemical or physical defences against grazing by slugs, which can greatly reduce species diversity in native meadows.

    Again, sand provided a real benefit. Fewer slugs occurred on the deepest sand layer compared to bare soil. The suggestion that sand can deter slugs is consistent with meadow research in Europe.

    By September 2020, seedlings are growing on the prepared plots. The roof tile in the foreground is for monitoring slug numbers.
    Hui-Anne Tan, CC BY-NC

    Now to repair nature in all our cities

    Our research gives us another technique to reinstate critically endangered plant communities. We can use it to bring nature back to city parks and streets.

    Working in urban contexts also unlocks other advantages. There’s ready access to irrigation while the meadow gets established and to communities keen to care for natural landscapes. Creating native wildflower meadows in cities also helps native animals survive, including threatened species that call our cities home.

    People will be able to engage with beautiful native plants that are now rare in cities. Enriching our experience of nature can enhance our health and wellbeing.

    The meadow’s plant community was established by November 2020, six months after sowing.
    David Hannah, CC BY-NC

    My colleagues and I trialled these approaches with the support of the City of Melbourne. We are continuing our research to improve the scale and sustainability of native wildflower meadows in other municipalities.

    Native wildflower meadows and grassland restoration projects could genuinely help Australia meet its commitment to restore 30% of degraded landscapes. But first we need to invest much more in seed production. Reinstating native species on degraded land requires a lot of seed.

    Once seed supply is more certain, we will be able to bring back native biodiversity and beauty to streets, parks and reserves across the country.


    I would like to acknowledge the Traditional Custodians of the land on which the project took place, the Wurundjeri and Bunurong people of the Kulin Nations, and pay my respects to their Elders, past, present and emerging. I also acknowledge my colleagues listed as co-authors on the research paper that formed the basis of this article: urban ecologists Nicholas S.G. Williams and Stephen Livesley, and seed ecologists Megan Hirst and John Delpratt.

    Katherine Horsfall received funding from the City of Melbourne to undertake this research and receives funding from the Australian Research Training Program.

    ref. How we created a beautiful native wildflower meadow in the heart of the city using threatened grassland species – https://theconversation.com/how-we-created-a-beautiful-native-wildflower-meadow-in-the-heart-of-the-city-using-threatened-grassland-species-240332

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: From cheeky thrill to grande dame – the Moulin Rouge celebrates 135 years of scandal and success

    Source: The Conversation (Au and NZ) – By Will Visconti, Teacher and researcher, Art History, University of Sydney

    Henri de Toulouse-Lautrec At the Moulin Rouge – The Dance, 1890 Henri de Toulouse-Lautrec/Wikimedia Commons

    When the Moulin Rouge first opened on October 6 1889, it drew audiences from across classes and countries.

    The Moulin offered an array of fin-de-siècle (end-of-the-century) entertainments to Paris locals and visitors. Located in Montmartre, its name, the “red windmill”, alluded to Montmartre’s history as a rural idyll. The neighbourhood was also associated with artistic bohemia, crime, and revolutionary spirit. This setting added a certain thrill for bourgeois audiences.

    From irreverent newcomer to French institution, the Moulin Rouge has survived scandal and an inferno, and found new ways to connect with audiences.




    Read more:
    How the Eiffel Tower became silent cinema’s icon


    Red and electric

    In 1889, the Moulin Rouge was not the only red landmark to open in Paris. The Eiffel Tower, built as part of the Universal Exhibition and originally painted red, had opened earlier that same year. What set them apart, however, was their popularity.

    The Moulin Rouge was an instant hit, capitalising on the global popularity of a dance called the cancan. Dancers like Moulin Rouge headliner La Goulue (“The Glutton”, real name Louise Weber) were seen as more appropriate emblems for the city than the Tower, which many considered an eyesore.

    In an illustration from Le Courrier Français newspaper, a dancer modelled on a photograph of La Goulue holds her leg aloft, flashing her underwear with the caption “Greetings to the provinces and abroad!”.

    Every aspect of the Moulin spoke to the zeitgeist, from its design to the performances, the use of electric lights that adorned its façade, and its advertising.

    Its managers, the impresario team of Joseph Oller and Charles Harold Zidler, had a string of successful venues and businesses to their names. They recognised the importance of modern marketing, using print media, publicity photographs, and posters to spark public interest.

    Among the most iconic images of the Moulin is Henri de Toulouse-Lautrec’s 1891 poster. At its centre is La Goulue, kicking her legs amid swirling petticoats.

    Henri Toulouse-Lautrec’s 1891 poster.
    Shutterstock

    She certainly can cancan

    Found primarily in working-class dance halls from as early as the 1820s, the cancan became a staple of popular entertainment the world over.

    Part of the dance’s thrill lay in the dancers’ freedom of movement and titillation of spectators, as well as its anti-establishment energy. Women used the cancan to thumb their nose at authority via steps like the coup de cul (“arse flash”) or coup du chapeau (removing men’s hats with a high kick).

    The cancan was not the only attraction at the Moulin. There were themed spaces, sideshows, and variety performances ranging from belly dancers and conjoined twins to Le Pétomane (“The Fartomaniac”), a flatulist who was the venue’s highest-paid performer. People-watching was equally popular.

    Famous farter, Le Pétomane (Joseph Pujol).
    Wikimedia Commons

    Scandals, riots, and royalty

    Over the years, the Moulin has been no stranger to controversy.

    In its early years, it cultivated an air of misbehaviour and featured in pleasure guides for visiting sex tourists.

    In 1893 it hosted the Bal des Quat’z’Arts (Four-Arts Ball) held by students from local studios. Accusations of public indecency were made against the models and dancers in attendance, and violent protests followed after the women were arrested.

    In 1907 the writer Colette appeared onstage at the Moulin in an Egyptian-inspired pantomime with her then-lover, Missy, the Marquise de Belbeuf. When the act culminated in a passionate kiss, a riot broke out.

    Historical footage shows the Moulin Rouge as it was.

    Kicking on and on

    Over time, the Moulin Rouge shows changed their format to keep pace with public taste, though the cancan remained. The venue hosted revues and operettas, and various stars including Edith Piaf, Ella Fitzgerald, Frank Sinatra and Liza Minnelli.

    Famous guests have included British royalty: from Edward VII (while Prince of Wales) to his great-granddaughter, Queen Elizabeth II, and her son, Prince Edward.

    Since its opening, the Moulin’s fortunes have waxed and waned.

    In 1915 the Moulin Rouge burned down but was rebuilt in 1921. Its famous windmill sails fell off overnight earlier this year but were swiftly repaired.

    In the 1930s, it survived the Depression and the rise of cinema (also capturing the attention of several filmmakers). It also survived the Nazi occupation of Paris in the 1940s.

    By the early 1960s, Jacki Clérico was managing the Moulin’s show after his father had revamped the venue as a dinner theatre destination. The younger Clérico oversaw additions like a giant aquarium where dancers swam with snakes, and its now-famous “nude line” – a chorus of topless dancers – in its shows.

    In 1963, the Moulin Rouge struck upon a winning formula: revues, all named by Clérico with titles beginning with the letter “F” – from Frou Frou to Fantastique and Formidable. Since 1999, the revue Féerie (“Fairy”, also a French genre of stage extravaganza) has been performed almost without interruption.

    The Moulin Rouge or ‘red mill’ today, with its famous windmill.
    Rafa Barcelos/Shutterstock

    Ticket sales were boosted thanks to Baz Luhrmann’s 2001 film Moulin Rouge! and more recently Moulin Rouge! The Musical.

    Since COVID, the Moulin Rouge management have diversified. The windmill’s interior has been rented out via Airbnb and the Moulin’s dance troupe has performed on France’s televised New Year’s Eve celebrations. This year, the Moulin Rouge and its dancers were part of the Paris Olympics celebrations, dancing in heavy rain.

    Though people have come to appreciate the Eiffel Tower too, the Moulin Rouge can still argue its status as the pinnacle of live entertainment in the French capital: immediately recognisable, internationally visible, and quintessentially Parisian.

    Will Visconti is the author of Beyond the Moulin Rouge: The Life & Legacy of La Goulue (2022), published by the University of Virginia Press.

    ref. From cheeky thrill to grande dame – the Moulin Rouge celebrates 135 years of scandal and success – https://theconversation.com/from-cheeky-thrill-to-grande-dame-the-moulin-rouge-celebrates-135-years-of-scandal-and-success-239849

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: 71% of Australian uni staff are using AI. What are they using it for? What about those who aren’t?

    Source: The Conversation (Au and NZ) – By Stephen Hay, Senior Lecturer, School of Education and Professional Studies, Griffith University

    Yanz Island/Shutterstock

    Since ChatGPT was released at the end of 2022, there has been a lot of speculation about the actual and potential impact of generative AI on universities.

    Some studies have focused on students’ use of AI. There has also been research on what it means for teaching and assessment.

    But there has been no large-scale research on how university staff in Australia are using AI in their work.

    Our new study surveyed more than 3,000 academic and professional staff at Australian universities about how they are using generative AI.

    Our study

    Our survey gathered responses from 3,421 university staff, mostly from 17 universities around Australia.

    It included academics, sessional academics (who are employed on a session-by-session basis) and professional staff. It also included adjunct staff (honorary academic positions) and senior staff in executive roles.

    Academic staff represented a wide range of disciplines including health, education, natural and physical sciences, and society and culture. Professional staff worked in roles such as research support, student services and marketing.

    The average age of respondents was 44.8 years and more than half the sample was female (60.5%).

    The survey was open online for around eight weeks in 2024.

    We surveyed academic and professional staff at universities around Australia.
    Panitan/Shutterstock

    Most university staff are using AI

    Overall, 71% of respondents said they had used generative AI for their university work.

    Academic staff were more likely to use AI (75%) than professional staff (69%) or sessional staff (62%). Senior staff were the most likely to use AI (81%).

    Among academic staff, those from information technology, engineering, and management and commerce were most likely to use AI. Those from agriculture and environmental studies, and natural and physical sciences, were least likely to use it.

    Professional staff in business development, and learning and teaching support, were the most likely to report using AI. Those working in finance and procurement, and legal and compliance areas, were least likely to use AI.

    Given how much publicity and debate there has been about AI in the past two years, the fact that nearly 30% of university staff had not used AI suggests adoption is still at an early stage.

    What tools are staff using?

    Survey respondents were asked which AI tools they had used in the previous year. They reported using 216 different AI tools, which was many more than we anticipated.

    Around one-third of those using AI had only used one tool, and a further quarter had used two. A small number of staff (around 4%) had used ten tools or more.

    General AI tools were by far the most frequently reported. For example, ChatGPT was used by 88% of AI users and Microsoft Copilot by 37%.

    University staff are also commonly using AI tools with specific purposes such as image creation, coding and software development, and literature searching.

    We also asked respondents how frequently they used AI for a range of university tasks. Literature searching, writing and summarising information were the most common, followed by course development, teaching methods and assessment.

    ChatGPT was the most common generative AI tool used by our respondents.
    Monkey Business Images/ Shutterstock

    Why aren’t some staff using AI?

    We asked staff who had not yet used AI for work to explain their thinking. The most common reason they gave was that AI was not useful or relevant to their work. For example, one professional staff member stated:

    While I have explored a couple of chat tools (Chat GPT and CoPilot) with work-related questions, I’ve not needed to really apply these tools to my work yet […].

    Others said they weren’t familiar with the technology, were uncertain about its use or didn’t have time to engage. As one academic told us plainly, “I don’t feel confident enough yet”.

    Ethical objections to AI

    Others raised ethical objections or viewed the technology as untrustworthy and unreliable. As one academic told us:

    I consider generative AI to be a tool of plagiarism. The uses to date, especially in the creative industries […] have involved machine learning that uses the creative works of others without permission.

    Respondents also raised concerns about AI undermining human activities such as writing, critical thinking and creativity – which they saw as central to their professional identities. As one sessional academic said:

    I want to think things through myself rather than trying to have a computer think for me […].

    Another academic echoed:

    I believe that writing and thinking is fundamental to the work we do. If we’re not doing that, then […] why do we need to exist as academics?

    How should universities respond?

    Universities are at a crucial juncture with generative AI. They face an uneven uptake of the technology by staff in different roles and divided opinions on how universities should respond.

    These different views suggest universities need to have a balanced response to AI that addresses both the benefits and concerns around this technology.

    Despite differing opinions in our survey, there was still agreement among respondents that universities need to develop clear, consistent policies and guidelines to help staff use AI. Staff also said it was crucial for universities to prioritise staff training and invest in secure AI tools.

    Alicia Feldman receives an Australian Government Research Training Program Scholarship and Fee Offset.

    Paula McDonald receives funding from the Australian Research Council.

    Abby Cathcart and Stephen Hay do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. 71% of Australian uni staff are using AI. What are they using it for? What about those who aren’t? – https://theconversation.com/71-of-australian-uni-staff-are-using-ai-what-are-they-using-it-for-what-about-those-who-arent-240337

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: Is big tech harming society? To find out, we need research – but it’s being manipulated by big tech itself

    Source: The Conversation (Au and NZ) – By Timothy Graham, Associate Professor in Digital Media, Queensland University of Technology

    AlexandraPopova/Shutterstock

    For almost a decade, researchers have been gathering evidence that the social media platform Facebook disproportionately amplifies low-quality content and misinformation.

    So it was something of a surprise when in 2023 the journal Science published a study that found Facebook’s algorithms were not major drivers of misinformation during the 2020 United States election.

    This study was funded by Facebook’s parent company, Meta. Several Meta employees were also part of the authorship team. It attracted extensive media coverage. It was also celebrated by Meta’s president of global affairs, Nick Clegg, who said it showed the company’s algorithms have “no detectable impact on polarisation, political attitudes or beliefs”.

    But the findings have recently been thrown into doubt by a team of researchers led by Chhandak Bagchi from the University of Massachusetts Amherst. In an eLetter also published in Science, they argue the results were likely due to Facebook tinkering with the algorithm while the study was being conducted.

    In a response eLetter, the authors of the original study acknowledge their results “might have been different” if Facebook had changed its algorithm in a different way. But they insist their results still hold true.

    The whole debacle highlights the problems caused by big tech funding and facilitating research into their own products. It also highlights the crucial need for greater independent oversight of social media platforms.

    Merchants of doubt

    Big tech has started investing heavily in academic research into its products. It has also been investing heavily in universities more generally. For example, Meta and its chief Mark Zuckerberg have collectively donated hundreds of millions of dollars to more than 100 colleges and universities across the United States.

    This is similar to what big tobacco once did.

    In the mid-1950s, cigarette companies launched a coordinated campaign to manufacture doubt about the growing body of evidence linking smoking with serious health issues, such as cancer. It was not about explicitly falsifying or manipulating research, but about selectively funding studies and drawing attention to inconclusive results.

    This helped foster a narrative that there was no definitive proof smoking causes cancer. In turn, this enabled tobacco companies to keep up a public image of responsibility and “goodwill” well into the 1990s.

    Big tobacco ran a campaign to manufacture doubt about the health effects of smoking.
    Ralf Liebhold/Shutterstock

    A positive spin

    The Meta-funded study published in Science in 2023 claimed Facebook’s news feed algorithm reduced user exposure to untrustworthy news content. The authors said “Meta did not have the right to prepublication approval”, but acknowledged that the Facebook Open Research and Transparency team “provided substantial support in executing the overall project”.

    The study used an experimental design where participants – Facebook users – were randomly allocated into a control group or treatment group.

    The control group continued to use Facebook’s algorithmic news feed, while the treatment group was given a news feed with content presented in reverse chronological order. The study sought to compare the effects of these two types of news feeds on users’ exposure to potentially false and misleading information from untrustworthy news sources.

    The experiment was robust and well designed. But during the short time it was conducted, Meta changed its news feed algorithm to boost more reliable news content. In doing so, it changed the control condition of the experiment.

    The reduction in exposure to misinformation reported in the original study was likely due to the algorithmic changes. But these changes were temporary: a few months later in March 2021, Meta reverted the news feed algorithm back to the original.

    In a statement to Science about the controversy, Meta said it made the changes clear to researchers at the time, and that it stands by Clegg’s statements about the findings in the paper.

    Unprecedented power

    In downplaying the role of algorithmic content curation for issues such as misinformation and political polarisation, the study became a beacon for sowing doubt and uncertainty about the harmful influence of social media algorithms.

    To be clear, I am not suggesting the researchers who conducted the original 2023 study misled the public. The real problem is that social media companies not only control researchers’ access to data, but can also manipulate their systems in a way that affects the findings of the studies they fund.

    What’s more, social media companies have the power to promote certain studies on the very platform the studies are about. In turn, this helps shape public opinion. It can create a scenario where scepticism and doubt about the impacts of algorithms can become normalised – or where people simply start to tune out.

    This kind of power is unprecedented. Even big tobacco could not control the public’s perception of itself so directly.

    All of this underscores why platforms should be mandated to provide both large-scale data access and real-time updates about changes to their algorithmic systems.

    When platforms control access to the “product”, they also control the science around its impacts. Ultimately, these self-research funding models allow platforms to put profit before people – and divert attention away from the need for more transparency and independent oversight.

    Timothy Graham receives funding from the Australian Research Council (ARC) for his Discovery Early Career Researcher Award, ‘Combatting Coordinated Inauthentic Behaviour on Social Media’. He also receives ARC funding for the Discovery Project, ‘Understanding and combatting “Dark Political Communication”’ (2024–2027).

    ref. Is big tech harming society? To find out, we need research – but it’s being manipulated by big tech itself – https://theconversation.com/is-big-tech-harming-society-to-find-out-we-need-research-but-its-being-manipulated-by-big-tech-itself-240110

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: Down and under pressure: US and UK artists are taking over Australian charts, leaving local talent behind

    Source: The Conversation (Au and NZ) – By Tim Kelly, PhD Candidate, University of Technology Sydney

    Shutterstock

    Missy Higgins’ recent ARIA number-one album, The Second Act, represents an increasingly rare sighting: an Australian artist at the top of an Australian chart.

    My recently published analysis of Australia’s best-selling singles and albums from 2000 to 2023 shows a significant decline in the representation of artists from Australia and non-English-speaking countries.

    The findings suggest music streaming in Australia – together with algorithmic recommendation – is creating a monoculture dominated by artists from the United States and United Kingdom. This could spell bad news for our music industry if things don’t change.

    Who dominates Australian charts?

    In 2023, Australia’s recorded music industry was worth about A$676 million, up 10.9% year on year.

    Building a strong local music industry is important, not only to support diverse cultural expression, but also to create jobs and boost Australia’s reputation on a global stage.

    When Australian artists succeed, this attracts global investment, which in turn stimulates all aspects of the local music industry. Conversely, a weak music economy can lead to global disinvestment, thereby disadvantaging local companies, artists and consumers.

    My research shows how the rise of music streaming – which became the dominant format for Australian recorded music sales in 2017 – has had a noticeable impact on the diversity of artists represented in the ARIA top 100 single and album charts.

    In the year 2000, the top 100 singles chart featured hits from 14 different countries. By contrast, only seven countries were represented in 2023.

    The percentage of Australian and New Zealand artists in the top 100 single charts declined from an average of 16% in 2000–16 to around 10% in 2017–23, and just 2.5% in 2023.

    Album share also declined from an average of 29% in 2000–16 to 18% in 2017–23, and 4% in 2023.

    This chart shows changes in diversity in the ARIA top 100 albums chart from 2000 to 2023.
    Author provided

    Similarly, the proportion of artists from outside the Anglo bloc of North America, the UK and Australia/New Zealand declined from an average of 11.1% in 2000–16 to 7.3% in 2017–23 – while album share declined from 5% in 2000–16 to 2.3% in 2017–23.

    My study also found representation of Indigenous artists remained low, but stable, over the period studied – and in line with population ratios.

    Concentration of power

    The findings suggest the declines in Australian and non-Anglo representation in the ARIA top 100 charts are linked.

    Some economists and academics have argued easier access to independent music and global distribution via streaming will lead to greater diversity in music. But this hasn’t been the case in Australia, at least as far as chart-topping artists are concerned.

    The global recorded music industry has consolidated in recent years. In the early 2000s there were five major music labels. Currently there are just three: Universal, Sony and Warner.

    Last year, these three labels were responsible for more than 95% of the Australian top 100 single and album charts. Meanwhile, Spotify, Apple Music and YouTube make up an estimated 97% of the Australian streaming market.

    These concentrations of power allow a handful of record labels and distributors to have a disproportionate influence over music design, production, distribution and governance – thereby limiting opportunities for diversity.

    The need for new policy

    My findings align with European research showing that markets where language acts as a strong cultural differentiator are seeing increased national diversity under streaming.

    However, countries without a distinctive language are being increasingly dominated by global music production. In Australia’s case, we’re becoming reliant on the star-making machinery of the US.

    Recently, Australia’s live music crisis came under scrutiny at a federal government inquiry, which highlighted the significant power imbalance between artists and multinational promoters.

    As I and many others have suggested, targeted cultural policies are necessary to combat our highly concentrated and US-dependent market.

    Relying on labels and streaming platforms will do little to preserve and promote our nation’s unique musical and cultural identity.

    Tim Kelly was previously employed at Sony Music, Universal Music and Inertia Music, and served on the ARIA Chart Committee from 2005 to 2017. His employment at these labels ceased by 2017 and he has no continuing professional relationship with any of these companies.

    ref. Down and under pressure: US and UK artists are taking over Australian charts, leaving local talent behind – https://theconversation.com/down-and-under-pressure-us-and-uk-artists-are-taking-over-australian-charts-leaving-local-talent-behind-239822

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: ADHD prescribing has changed over the years – a new guide aims to bring doctors up to speed

    Source: The Conversation (Au and NZ) – By Brenton Prosser, Professor of Public Policy and Leadership, UNSW Sydney

    Ketut Subiyanto/Pexels

    Attention-deficit hyperactivity disorder (ADHD) is the most diagnosed childhood neurological disorder in Australia.

    Over the years, it has been the subject of controversy about potential misdiagnosis and overdiagnosis. There has also been variation in levels of diagnosis and drug prescription, depending on where you live and your socioeconomic status.

    To address these concerns and improve consistency in ADHD diagnosis and prescribing, the Australasian ADHD Professionals Association has released a new prescribing guide. This will help the health-care workforce to consistently get the right treatment to the right people, with the right mix of medical and non-medical supports.

    Here’s how ADHD prescribing has changed over time and what the new guidelines mean.

    What is ADHD and how is it treated?

    Up to one in ten young Australians experience ADHD. It is diagnosed on the basis of inattention, hyperactivity and impulsivity that have negative effects at home, school or work.

    Psychostimulant medication is a central pillar of ADHD treatment.

    However, the internationally recognised approach is to combine medicines with non-medical interventions in a multimodal approach. These non-medical interventions include cognitive behavioural therapy (CBT), occupational therapy, educational strategies and other supports.

    Medication use has changed over time

    In Australia, Ritalin (methylphenidate) was originally the most prescribed ADHD medication. This changed in the 1990s after the introduction of dexamphetamine, along with the subsequent availability of Vyvanse (lisdexamfetamine).

    Perhaps the most significant change has come with “slow release” versions of the above medications that can last more than eight hours (longer than a school day).

    When following clinical guidelines, prescribing medication for ADHD is safe practice. Yet the use of amphetamines to treat young people with ADHD has caused public concern. This highlights the importance of consistent guidelines for prescribing professionals.

    Medication for ADHD can be combined with other non-drug approaches.
    Caleb Woods/Unsplash

    Growth in diagnosis and prescribing

    Starting from low levels, there was a dramatic rise in diagnosis and drug treatment in the 1990s. Much of this was overseen by a small number of psychiatrists and paediatricians in each state or territory. While this promised the potential of consistency in the early days, it also raised concerns about best practice.

    This led to the development of the first ADHD clinical guidelines by the National Health and Medical Research Council in 1997.

    It was followed by several refinements as prescribing expanded, due to changing diagnostic criteria (which broadened to allow a dual diagnosis with autism) and the need for best practice as prescribing by GPs grew. These guidelines enhanced the consistency of approaches nationally and reduced the likelihood of misdiagnosis or overdiagnosis.

    However, a recent Senate inquiry found diagnosis and drug treatment continued to grow substantially in the five years to 2022. It emphasised the need for a more consistent approach to diagnosis and prescribing.

    First the ingredients, then the recipe

    The most recent clinical guidelines, released by the Australasian ADHD Professionals Association in 2022, outlined a roadmap for ADHD clinical practice, research and policy. They did so by drawing on the lived experience of those with ADHD. They also emphasised broader health questions, such as how to respond to ADHD as a holistic condition.

    It remains difficult to predict individual responses to different medication. So the new prescribing guide offers practical advice about safe and responsible prescribing. This aims to reduce the potential for incorrect prescribing, dosing and adjusting of ADHD medication, across different age groups, settings and individuals.

    To put it another way, the clinical guidelines describe what the ingredients of the cake should be, while the prescribing guidelines provide the step-by-step recipe.

    So what do they recommend?

    An important principle in both these documents is that medication should not be the first and only treatment. Not every drug works the same way for every child. In some cases they do not work at all.

    The possible side effects of medication vary and include poor appetite, sleep problems, headaches, stomach aches, moodiness and irritability. These guidelines assist in adapting medication to reduce these side effects.

    Medication provides an important window of opportunity for many young people to gain maximum value from psychosocial and psychoeducational supports, which can take many forms.

    Support for ADHD can also include parent training. This is not to suggest parents cause ADHD. Rather, they can support more effective treatment, especially since the rigours of ADHD can be a challenge to even the “perfect” parent.

    Getting the right diagnosis

    There have been reports of people seeking to use TikTok to self-diagnose, as well as a rise in people using ADHD stimulants without a prescription.

    However, the message from these new guidelines is that ADHD diagnosis is a complex process that takes a specialist at least three hours. Online sources might be useful to prompt people to seek help, but diagnosis should come from a qualified health-care professional.

    Finally, while we have moved beyond unhelpful past debate about whether ADHD is real to consolidate best diagnostic and prescribing practice, there is some way to go in reducing stigma and changing negative community attitudes to ADHD.

    Hopefully in future we’ll be better able to cherish diversity and difference, and not just see it as a deficit.

    Brenton Prosser is a Board Member of the Council of Academic Public Health Institutions Australasia and affiliated with the School of Population Health at UNSW.

    ref. ADHD prescribing has changed over the years – a new guide aims to bring doctors up to speed – https://theconversation.com/adhd-prescribing-has-changed-over-the-years-a-new-guide-aims-to-bring-doctors-up-to-speed-240313

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: Curious Kids: What does the edge of the universe look like?

    Source: The Conversation (Au and NZ) – By Sara Webb, Lecturer, Centre for Astrophysics and Supercomputing, Swinburne University of Technology

    Greg Rakozy/Shutterstock

    What does the edge of the universe look like?

    Lily, age 7, Harcourt

    What a great question! In fact, this is one of those questions humans will continue to ask until the end of time. That’s because we don’t actually know for sure.

    But we can try and imagine what the edge of the universe might be, if there is one.

    Looking back in time

    Before we begin, we need to go back in time. Our night sky has looked much the same for all of human history. It’s been so reliable that people all around the world turned the patterns they saw in the stars into ways to navigate and explore.

    To our eyes, the sky looks endless. With the invention of telescopes about 400 years ago, humans were able to see farther than our eyes ever could. They continued to discover new things in the sky. They found more stars, and then eventually started to notice a lot of strange-looking cosmic clouds.

    Astronomers gave them the name “nebula” from the Latin word for “mist” or “cloud”.

    It was less than 100 years ago that we first confirmed these cosmic clouds or nebulas were actually galaxies. They are just like the Milky Way, the galaxy our own planet is in, but very far away.

    What is amazing is that in every direction we look in the universe, we see more and more galaxies. In this James Webb Space Telescope image, which is looking at a part of the sky no bigger than a grain of sand, you can see thousands of galaxies.

    It’s hard to imagine there is an edge where all of this stops.

    The edge of the universe

    However, there is technically an edge to our universe. We call it our “observable” universe.

    This is because we don’t actually know if our universe is infinite – meaning it continues forever and ever.

    Unfortunately, we might never know because of one pesky thing: the speed of light.

    We can only ever see light that’s had enough time to travel to us. Light travels at exactly 299,792,458 metres per second. Even at those speeds, it still takes a long time to cross our universe. Scientists estimate the size of the universe is at least 96 billion light years across, and likely even bigger.
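    To get a feel for these distances, here is a rough back-of-the-envelope calculation, using only the two figures quoted above (the speed of light and the 96 billion light-year estimate); the year length is a standard Julian year.

    ```python
    # How far is a light year, and how wide is the universe, in metres?
    SPEED_OF_LIGHT_M_S = 299_792_458          # metres per second (exact)
    SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60  # Julian year, in seconds

    # Distance light covers in one year
    light_year_m = SPEED_OF_LIGHT_M_S * SECONDS_PER_YEAR

    # Lower-bound width of the universe quoted above: 96 billion light years
    universe_m = 96e9 * light_year_m

    print(f"One light year is about {light_year_m:.2e} metres")
    print(f"96 billion light years is about {universe_m:.2e} metres")
    ```

    That works out to roughly 9.5 million billion metres for a single light year, which is why even light takes billions of years to cross the universe.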

    You can learn a little more about that and our universe as a whole in this video below.

    What would we see if there was an edge?

    If we were to travel to the very, very edge of the universe we think exists, what would there actually be?

    Many other scientists and I theorise that there would just be … more universe!

    As I said, there is a theory that our universe doesn’t actually have an edge, and might continue on indefinitely.

    But there are other theories, too. If our universe does have an edge, and you cross it, you might just end up in a completely different universe altogether. (That is best saved for science fiction for now.)

    Even though there isn’t a straightforward answer to your question, it is precisely questions like these that help us continue to explore and discover the universe, and allow us to understand our place within it. You’re thinking like a true scientist.

    Sara Webb does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Curious Kids: What does the edge of the universe look like? – https://theconversation.com/curious-kids-what-does-the-edge-of-the-universe-look-like-233111

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: NSW will remove 65,000 years of Aboriginal history from its syllabus. It’s a step backwards for education

    Source: The Conversation (Au and NZ) – By Michael Westaway, Australian Research Council Future Fellow, Archaeology, School of Social Science, The University of Queensland

    The NSW Education Standards Authority has announced that teaching of the Aboriginal past prior to European arrival will be excluded from the Year 7–10 syllabus as of 2027.

    Since 2012, the topic “Ancient Australia” has been taught nationally in Year 7 as part of the Australian Curriculum. In 2022, a new topic called the “deep time history of Australia” was introduced to provide a more detailed study of 65,000 years of First Nations’ occupation of the continent.

    However, New South Wales has surprisingly dropped this topic from its new syllabus, which will be rolled out in 2027. Instead, students will only learn First Nations’ history following European colonisation in 1788.

    This directly undermines the Alice Springs (Mparntwe) Education Declaration of 2020. This is a national agreement, signed by education ministers from all jurisdictions, which states:

    We recognise the more than 60,000 years [sic] of continual connection by Aboriginal and Torres Strait Islander peoples as a key part of the nation’s history, present and future.

    If the planned change to the syllabus goes through, the only Aboriginal history taught to NSW students would be that which reflects the destruction of traditional Aboriginal society. It also means Aboriginal students in NSW will be denied a chance to learn about their deep ancestral past.

    The significance of Australia’s deep time past

    Bruce Pascoe’s groundbreaking 2014 book Dark Emu (which sold more than 500,000 copies), and the associated documentary, have highlighted an enormous appetite for learning about Australia’s deep time past.

    Hundreds of thousands of Australians engaged with Dark Emu. As anthropologist Paul Memmott notes, the book prompted a debate that encouraged a better understanding of Aboriginal society and its complexity.

    It also generated research that investigated whether terms such as “hunter-gatherers” are appropriate for defining past Aboriginal society and economic systems.




    Read more:
    Farmers or foragers? Pre-colonial Aboriginal food production was hardly that simple


    In schools, teachers have used Pascoe’s book Young Dark Emu to introduce students to sophisticated land and aquaculture systems used by First Peoples prior to colonisation.

    The book raises an important question. If you lived in a country that invented bread and the edge-ground axe, in a culture that independently developed early trade and social living, and did all of this without resorting to wars over land, wouldn’t you want your children to know about it?

    For many students, the history they learn at school is knowledge they carry into their adult lives – and knowledge is the strongest antidote to ignorance. Rather than abandoning the Aboriginal deep time story, schools should be encouraging students to engage with it.

    Learning on Country

    One of the strengths of the current NSW history syllabus is the requirement for students to undertake a “site study” in Years 8 and 9. Currently, NSW is the only jurisdiction that has made this mandatory.

    Site studies are an excellent opportunity for students to learn on Country. Many teachers organise excursions to Aboriginal cultural sites where students can directly engage with local Traditional Owners and Elders.

    New South Wales is brimming with sites of cultural significance to Aboriginal people. The map below highlights some of these, ranging from megafauna sites, to extensive fish traps, to enigmatic rock art galleries and ceremonial engravings (petroglyphs).



    How students will miss out

    The Ngambaa people and archaeologists from the University of Queensland are currently investigating one of the largest midden complexes in Australia. This complex, located at Clybucca and Stuarts Point on the north coast, spans some 14 kilometres and dates back around 9,000 years.

    Middens, or “living sites”, are accumulations of shell that were built over time through thousands of discarded seafood meals. Since the shells help reduce the acidic chemistry of the soil, animal bones and plant remains are more likely to be preserved in middens.

    For instance, the Clybucca-Stuarts Point midden complex contains remains from seals and dugongs. Both of these animals were once part of the local ecosystem, but no longer are.

    The middens also extend back to before the arrival of dingoes, so studying them could help us understand how biodiversity changed once dingoes replaced thylacines and Tasmanian devils on the mainland.

    Local school students, especially Aboriginal students, will be actively participating in this cutting-edge research alongside the Ngambaa people, archaeologists and teachers. Among other things, the students will learn how the Ngambaa people sustainably managed land and sea Country over thousands of years during periods of dramatic environmental change.

    But innovative programs like this will no longer be as relevant if Australia’s deep time history is removed from the NSW syllabus.

    An opportunity for leadership

    The study of First Nations archaeological sites, history and cultures tells us a broader human story of continuity and adaptability over deep time. Indigenising the curriculum – wherein Aboriginal knowledge is braided with historical and archaeological inquiry – is a powerful way to reconcile different approaches to understanding the past.

    The NSW Education Standards Authority’s proposed changes risk sending young people the message that Australia’s “history” before colonisation is not an important part of the country’s historic narrative.

    But there is still time to show leadership – by reversing the decision and by connecting teachers and students to powerful stories from Australia’s deep time past.

    Michael Westaway receives funding from the Australian Research Council and Humanities and Social Science at the University of Queensland.

    Bruce Pascoe is the author of the texts mentioned in this article, Dark Emu and Young Dark Emu: A Truer History. He also has positions on the boards of Black Duck Foods, the Twofold Aboriginal Corporation and First Languages Australia.

    Louise Zarmati receives research funding from the ARC Centre of Excellence of Australian Biodiversity and Heritage.

    ref. NSW will remove 65,000 years of Aboriginal history from its syllabus. It’s a step backwards for education – https://theconversation.com/nsw-will-remove-65-000-years-of-aboriginal-history-from-its-syllabus-its-a-step-backwards-for-education-240111

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: New video shows sharks making an easy meal of spiky sea urchins, shedding light on an undersea mystery

    Source: The Conversation (Au and NZ) – By Jeremy Day, PhD researcher, University of Newcastle

    Author provided

    Long-spined sea urchins have emerged as an environmental issue off Australia’s far south coast. Native to temperate waters around New South Wales, the urchins have expanded their range south as oceans warm. There, they devour kelp and invertebrates, leaving barren habitats in their wake.

    Lobsters are widely accepted as sea urchins’ key predator. In efforts to control urchin numbers, scientists have been researching this predator-prey relationship. And the latest research by my colleagues and me, released today, delivered an unexpected result.

    We set up several cameras outside a lobster den and placed sea urchins in it. We filmed at night for almost a month. When we checked the footage, most sea urchins had been eaten – not by lobsters, but by sharks.

    This suggests sharks have been overlooked as predators of sea urchins in NSW. Importantly, sharks seem to very easily consume these large, spiky creatures – sometimes in just a few gulps! Our findings suggest the diversity of predators eating large sea urchins is broader than we thought – and that could prove to be good news for protecting our kelp forests.

    A puzzling picture

    The waters off Australia’s south-east are warming at almost four times the global average. This has allowed long-spined sea urchins (Centrostephanus rodgersii) to extend their range from NSW into waters off Victoria and Tasmania.

    Sea urchins feed on kelp and in their march south, have reduced kelp cover. This has added to pressure on kelp forests, which face many threats.

    Scientists have been looking for ways to combat the spread of sea urchins. Ensuring healthy populations of predators is one suggested solution.

    Overseas research on different urchin species has focused on predators such as lobsters and large fish. It found kelp cover can be improved by protecting or reinstating these predators.

    Sea urchins feed on kelp.
    Nathan Knott

    In NSW, eastern rock lobsters are thought to be important urchin predators. The species has been over-fished in the past but stocks have significantly bounced back in recent years.

    But despite this, no meaningful reduction in urchin populations, or increase in kelp growth, has been observed in NSW.

    Why not? Could it be that lobsters are not eating urchins in great numbers after all? Certainly, there is little empirical evidence on how often predators eat urchins in the wild.

    What’s more, recent research in NSW suggested the influence of lobsters on urchin populations was low, while fish could be more important.

    Our project aimed to investigate the situation further.

    Eastern rock lobsters are thought to be major urchin predators.
    Flickr/Richard Ling, CC BY

    What we did

    We tied 100 urchins to blocks outside a lobster den off Wollongong for 25 nights. This tethering meant the urchins were easily available to predators and stayed within view of our cameras.

    Then we set multiple cameras to remotely turn on at sunset and turn off after sunrise each day, to capture nocturnal feeding. We used a red-filtered light to film the experiments because invertebrates don’t like the white light spectrum.

    We expected our cameras would capture lobsters eating the urchins. But in fact, the lobsters showed little interest in the urchins and ate just 4% of them. They were often filmed walking straight past urchins in search of other food.

    Sharks, however, were very interested in the urchins. Both crested horn sharks (Heterodontus galeatus) and Port Jackson sharks (H. portusjacksonii) entered the den and ate 45% of the urchins.

    As the footage below shows, sharks readily handled very large urchins (wider than 12 centimetres) with no hesitation.

    Until now, it was thought few or no predators could handle urchins of this size. Larger urchins have longer spines, thicker shells and attach more strongly to the seafloor, making them harder to eat.

    But the sharks attacked urchins from their spiny side, showing little regard for their sharp defences. This approach differs from other predators, such as lobsters and wrasses, which often turn urchins over and attack them methodically from their more vulnerable underside.

    In fact, some sharks were so eager to eat urchins, they started feeding before the cameras turned on at sunset. This meant we had to film by hand.

    Footage captured by the researchers showing crested horn sharks eating sea urchins. Horn sharks generally do not pose a threat to humans.

    A complex food web

    Our experiment showed the effect of lobsters on urchins in the wild is less than previously thought. This may explain why efforts to encourage lobster numbers have not helped control urchin numbers.

    We also revealed a little-considered urchin predator: sharks.

    Lobsters are capable but hesitant predators, whereas sharks seem eager to eat urchins. And crested horn sharks are an abundant, hardy species that is not actively fished.

    When interpreting these findings, however, a few caveats must be noted.

    First, sharks (and lobsters) are not the only animals to prey on urchins. Other predators include bony fishes, and more are likely to be identified in future.

    Second, other factors can control urchin numbers, such as storm damage and the influx of fresh water.

    And finally, it is unsurprising that we found a key predator when we intentionally searched for it by laying out food. Tethering urchins creates an artificial environment. We don’t know if the results would be replicated in the wild.

    And even though we now know some shark species eat sea urchins, we don’t yet know if they can control urchin numbers.

    But our research does confirm predators capable of handling large urchins may be more widespread than previously thought.

    Jeremy Day received funding from the University of Newcastle, the Ecological Society of Australia, the Royal Zoological Society of New South Wales and the Fisheries Research and Development Corporation.

    ref. New video shows sharks making an easy meal of spiky sea urchins, shedding light on an undersea mystery – https://theconversation.com/new-video-shows-sharks-making-an-easy-meal-of-spiky-sea-urchins-shedding-light-on-an-undersea-mystery-240205

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: There’s a renewed push to scrap junior rates of pay for young adults. Do we need to rethink what’s fair?

    Source: The Conversation (Au and NZ) – By Kerry Brown, Professor of Employment and Industry, School of Business and Law, Edith Cowan University

    NT_Studio/Shutterstock

    Should young people be paid less than their older counterparts, even if they’re working the same job? Whether you think it’s fair or not, it’s been standard practice in many industries for a long time.

    The argument is that young people are not fully “work-ready” and require more intensive employer support to develop the right skills for their job.

    But change could be on the horizon. Major unions and some politicians are pushing for reform – arguing “youth wages” should be scrapped entirely for adults.

    Why? They argue the principle of fair pay for equal work, together with economic pressures such as the high cost of living and the ongoing housing crisis, means paying young adults less based on their age is out of step with modern Australia.

    So is there a problem with our current system, and if so, how might we go about fixing it?

    What are youth wages?

    In Australia, a youth wage or junior pay rate is paid as an increasing percentage of an award’s corresponding full adult wage until an employee reaches the age of 21.

    This isn’t the case in every industry – some awards require all adults to be paid the same minimum rates.

    But for those not covered by a specific award, as well as those working under awards such as the General Retail Industry Award, the Fast Food Industry Award and the Pharmacy Industry Award, employees younger than 21 are not paid the full adult rate.
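    The mechanics are simple to sketch. The percentage bands below are hypothetical, chosen only to illustrate how a junior rate scales with age until the full adult rate applies at 21 – actual bands differ between awards.

    ```python
    # Hypothetical junior-rate bands: age -> percentage of the adult rate.
    # Real award percentages vary; these are for illustration only.
    JUNIOR_PERCENT = {16: 50, 17: 60, 18: 70, 19: 80, 20: 90}

    def hourly_rate(age: int, adult_rate: float) -> float:
        """Return the hourly rate for a worker of a given age.

        From 21, the full adult rate applies; below 16 we fall back to
        the lowest band in this illustrative table.
        """
        if age >= 21:
            return adult_rate
        pct = JUNIOR_PERCENT.get(age, min(JUNIOR_PERCENT.values()))
        return adult_rate * pct / 100

    # On a hypothetical $30/hour adult award rate:
    print(hourly_rate(18, 30.0))  # 21.0 (70% band)
    print(hourly_rate(21, 30.0))  # 30.0 (full adult rate)
    ```

    The gap this creates between an 18-year-old and a 21-year-old doing the same job is exactly what the reform push targets.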

    Why pay less?

    Conventionally, junior rates have been thought of as a “training wage”. Younger people are typically less experienced, so as they gain more skills on the job over time, they are paid a higher hourly rate.

    But there are a few key problems with this rationale. Many employers now expect workers to start “job-ready”, and there is little consistency in the training they provide.

    Training up and developing skills is an important part of building any career. But it isn’t always provided by employers.

    Many young adults undergo training prior to starting work and at their own expense.
    Best smile studio/Shutterstock

    Many young workers train themselves in job-related technical education and short courses, often at their own expense and prior to starting work.

    Employers reap the benefit of this pre-employment training and so a “wage discount” for younger workers may be irrelevant in this instance.

    None of this is to say employers aren’t offering something important when they take on young employees.

    Younger workers entering employment relatively early gain more than just a paid job. They become part of a team, with responsibilities and job requirements that support “bigger-picture” life skills.

    Those who employ them may be contributing to their broader social and cultural engagement, something that could be considered part of a more inclusive training package. Whether that justifies a significant wage discount is less clear.




    Read more:
    Why real wages in Australia have fallen while they’ve risen in most other OECD countries


    Calls for a rethink

    There are growing calls for a rethink on the way we compensate young people for their efforts.

    An application by the Shop Distributive and Allied Employees’ Association – the union for retail, fast food and warehousing workers – seeks to remove junior rates for adult employees on three key awards. This action will be heard by the Fair Work Commission next year.

    Sally McManus, Secretary of the Australian Council of Trade Unions, said the peak union body will lobby the government to legislate such changes if this application fails. The Greens have added their support.

    That doesn’t have to mean abolishing youth wages altogether. But 21 years of age is a high threshold, especially given we get the right to major adult responsibilities such as voting and driving by 18.

    A transition strategy could consider gradually lowering this threshold, or increasing the wage percentages over time.

    Lessons from New Zealand

    We wouldn’t be the first to make such a bold change if we did.

    Our geographically and culturally close neighbour, New Zealand, removed its “youth wage” in 2008, replacing it with a “first job” rate and a training wage set at 80% of the full award rate.

    A common argument against abolishing youth wages – and increasing the minimum wage in general – is that it will stop businesses hiring young people and thus increase unemployment.

    But a 2021 study that examined the effects of New Zealand’s experience with increasing minimum wages – including this change – found little discernible difference in employment outcomes for young workers.

    The authors did note, however, that New Zealand’s economic downturn post-2008 had a marked effect on the employment of young workers more generally.

    New Zealand has already taken major steps in reforming junior pay rates.
    Stephan Roeger/Shutterstock

    What’s fair?

    It’s easy to see how we arrived at the case for paying younger adults less. But younger workers should not bear the burden of intergenerational inequity by “losing out” on wages in the early part of their working life.

    The debate we see now echoes the discussions about equal pay for equal work value held in the 1960s and ’70s in relation to women’s unequal pay.

    We were warned that paying women the same as men would cause huge economic dislocation. Such a catastrophe simply did not come to pass.

    Kerry Brown is a member of the National Tertiary Education Union.

    ref. There’s a renewed push to scrap junior rates of pay for young adults. Do we need to rethink what’s fair? – https://theconversation.com/theres-a-renewed-push-to-scrap-junior-rates-of-pay-for-young-adults-do-we-need-to-rethink-whats-fair-240548

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: OECD comparisons reveal an unflattering picture of inequality in NZ – could that change?

    Source: The Conversation (Au and NZ) – By Colin Campbell-Hunt, Emeritus Professor in Business, University of Otago

    Getty Images

    Recent research showing the richest New Zealanders pay less tax than their counterparts in nine similar OECD countries raises, yet again, serious questions about wealth, equality and fairness.

    How unequal is the distribution of income in New Zealand? How do we compare with some of the countries we might benchmark against? And, if we don’t like what we see, can we change it?

    The metric most widely used by economists to measure inequality in incomes is called the Gini coefficient (named after the Italian statistician Corrado Gini who developed it).

    It brings together income data across all households, typically divided into groupings of 10% or 20% of the total. When there is no inequality of incomes between groups, Gini equals zero. When the top group captures all income, Gini equals 1.
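    The definition above translates directly into a calculation: the Gini can be computed as the average absolute difference between every pair of incomes, normalised by twice the mean. A minimal sketch (not from the article; note that with a small number of groups the maximum value is slightly below 1):

    ```python
    def gini(incomes):
        """Gini coefficient of a list of group incomes.

        0 means perfect equality; values approach 1 as one group
        captures all income. Computed as the mean absolute difference
        between all pairs, divided by twice the mean income.
        """
        n = len(incomes)
        mean = sum(incomes) / n
        diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
        return diff_sum / (2 * n * n * mean)

    print(gini([1, 1, 1, 1]))  # identical incomes -> 0.0
    print(gini([0, 0, 0, 4]))  # one group has everything -> 0.75
    ```

    With four groups the “all income to one group” case gives 0.75 rather than 1; the coefficient only reaches 1 in the limit of many groups.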

    Measuring inequality

    The graph below shows Gini coefficients, before taxes and welfare payments (known as “transfers”), for all 37 countries in the OECD in 2019 (before the COVID pandemic disrupted household surveys). Ginis are ranked left to right, from least to most unequal.



    The Gini before taxes and transfers is a measure of the inequality produced by the structures of a country’s economy: the way value chains operate, the markets for products and services, the scarcity of certain skills, rates of unionisation, and so on.

    This gives us a measure of structural inequalities in a country. Governments, however, use taxes and transfers to shift income between households. They take taxes from some and boost incomes of the more disadvantaged.

    Ginis of incomes after taxes and transfers give us a measure of how well members of a society can support similar standards of living. They are shown in the following graph, again from least to most unequal. These give us a measure of social inequalities.



    Focusing just on social inequality, it is no surprise Scandinavian countries are among the least unequal, as well as Canada and Ireland. Neither is it surprising the UK and US approach the highest levels of social inequality in the OECD.

    Inequalities in Australia and New Zealand lie between these, but further from the Scandinavians and closer to the Anglo-Americans.

    Social inequality in NZ

    When we look at the difference between structural and social inequalities, we can see the extent to which taxes and transfers – government redistribution of income – reduce inequality.

    As we can see, New Zealand’s structural inequality, shaped by the economic reforms of the mid-1980s, is middling by comparison to other OECD countries.

    But New Zealand’s social inequality lies near the bottom third of OECD measures. A halving of top income tax rates in the mid-1980s and the rollback of the welfare state in the 1990s (after then finance minister Ruth Richardson’s 1991 “mother of all budgets”) significantly contributed to this.

    The downward columns in the following graph show the effect of government redistributive measures, ranked from most to least active. New Zealand’s redistribution is weaker even than in the laissez-faire economies of the United Kingdom and United States.



    Where does NZ sit?

    How do New Zealand’s inequalities compare with countries we might choose to benchmark against?

    Below, the Scandinavian countries famous for their egalitarian social systems are shown in orange. In green are countries that tolerate slightly higher social inequality: Sweden, Canada and Ireland.

    And the UK and US – exemplars of free-market capitalism that were the models for New Zealand’s reforms of the mid-1980s – are highlighted in grey.



    Reducing inequality

    How hard would it be to change? Could New Zealand, for example, reduce its level of social inequality to match Canada? Absolutely, yes.

    Other OECD data show Canada significantly cut its inequalities between 2010 and 2019. The country moved from a position identical to Luxembourg (haven for Europe’s wealthy) to be roughly level with Sweden.

    To match Canada’s level now, New Zealand would need to reduce structural inequalities further, or redistribute about as much as Norway and Denmark do. It can be done, in other words.

    Indeed, Finland shows government redistributions can transform some of the worst levels of structural inequality to produce outcomes comparable to other Scandinavian countries.

    New Zealand can aspire to goals for social equality matching those in the upper half of OECD countries. Beyond revisions to taxation and transfers, inequalities in health and education would also need to come down to reduce the social and economic costs of poverty and disadvantage that should bring shame to us all.


    The author acknowledges the contribution of data provided by Max Rashbrooke.


    Colin Campbell-Hunt does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. OECD comparisons reveal an unflattering picture of inequality in NZ – could that change? – https://theconversation.com/oecd-comparisons-reveal-an-unflattering-picture-of-inequality-in-nz-could-that-change-239306

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: How can we improve public health communication for the next pandemic? Tackling distrust and misinformation is key

    Source: The Conversation (Au and NZ) – By Shauna Hurley, PhD candidate, School of Public Health, Monash University

    Pexels/The Conversation

    There’s a common thread linking our experience of pandemics over the past 700 years. From the Black Death in the 14th century to COVID in the 21st, public health authorities have put emergency measures such as isolation and quarantine in place to stop infectious diseases spreading.

    As we know from COVID, these measures upend lives in an effort to save them. In both the recent and distant past they’ve also given rise to collective unrest, confusion and resistance.

    So after all this time, what do we know about the role public health communication plays in helping people understand and adhere to protective measures in a crisis? And more importantly, in an age of misinformation and distrust, how can we improve public health messaging for any future pandemics?

    Last year, we published a Cochrane review exploring the global evidence on public health communication during COVID and other infectious disease outbreaks including SARS, MERS, influenza and Ebola. Here’s a snapshot of what we found.




    Read more:
    Why are we seeing more pandemics? Our impact on the planet has a lot to do with it


    The importance of public trust

    A key theme emerging in analysis of the COVID pandemic globally is public trust – or lack thereof – in governments, public institutions and science.

    Mounting evidence suggests higher levels of trust in government were associated with fewer COVID infections and higher vaccination rates across the world. Trust was a crucial factor in people’s willingness to follow public health directives, and is now a key focus for future pandemic preparedness.

    Here in Australia, public trust in governments and health authorities steadily eroded over time.

    Initial information from governments and health authorities about the unfolding COVID crisis, personal risk and mandated protective measures was generally clear and consistent across the country. The establishment of the National Cabinet in 2020 signalled a commitment from state, territory and federal governments to consensus-based policy and public health messaging.

    During this early phase of relative unity, Australians reported higher levels of belonging and trust in government.

    But as the pandemic wore on, public trust and confidence fell on the back of conflicting state-federal pandemic strategies, blame games and the confusing fragmentation of public health messaging. The divergence between lockdown policies and public health messaging adopted by Victoria and New South Wales is one example, but there are plenty of others.

    When state, territory and federal governments have conflicting policies on protective measures, people are easily confused, lose trust and become harder to engage with or persuade. Many tune out from partisan politics. Adherence to mandated public health measures falls.

    Our research found clarity and consistency of information were key features of effective public health communication throughout the COVID pandemic.

    We also found public health communication is most effective when authorities work in partnership with different target audiences. In Victoria, the case brought against the state government for the snap public housing tower lockdowns is a cautionary tale underscoring how essential considered, tailored and two-way communication is with diverse communities.




    Read more:
    What pathogen might spark the next pandemic? How scientists are preparing for ‘disease X’


    Countering misinformation

    Misinformation is not a new problem, but it has been supercharged by the advent of social media.

    The much-touted “miracle” drug ivermectin typifies the extraordinary traction unproven treatments gained locally and globally. Ivermectin is an anti-parasitic drug; there is no evidence it is effective against viral diseases such as COVID.

    Australia’s drug regulator was forced to ban ivermectin prescriptions for anything other than its intended use after a sharp increase in people seeking the drug sparked national shortages. Hospitals also reported patients overdosing on ivermectin and cocktails of COVID “cures” promoted online.

    The Lancet Commission on lessons from the COVID pandemic has called for a coordinated international response to countering misinformation.

    As part of this, it has called for more accessible, accurate information and investment in scientific literacy to protect against misinformation, including that shared across social media platforms. The World Health Organization is developing resources and recommendations for health authorities to address this “infodemic”.

    National efforts to directly tackle misinformation are vital, in combination with concerted efforts to raise health literacy. The Australian Medical Association has called on the federal government to invest in long-term online advertising to counter health misinformation and boost health literacy.

    People of all ages need to be equipped to think critically about who and where their health information comes from. With the rise of AI, this is an increasingly urgent priority.

    Many people turned to unproven treatments for COVID.
    Alina Kruk/Shutterstock

    Looking ahead

    Australian health ministers recently reaffirmed their commitment to the new Australian Centre for Disease Control (CDC).

    From a science communications perspective, the Australian CDC could provide an independent voice of evidence and consensus-based information. This is exactly what’s needed during a pandemic. But full details about the CDC’s funding and remit have been the subject of some conjecture.

    Many of our key findings on effective public health communication during COVID are not new or surprising. They reinforce what we know works from previous disease outbreaks across different places and points in time: tailored, timely, clear, consistent and accurate information.

    The rapid rise, reach and influence of misinformation and distrust in public authorities bring a new level of complexity to this picture. Countering both must become a central focus of all public health crisis communication, now and in the future.

    This article is part of a series on the next pandemic.

    Rebecca Ryan receives funding from the National Health and Medical Research Council through funding to Australian Cochrane entities, and was previously commissioned by the World Health Organization to undertake a rapid evidence review on communication for COVID-19 prevention and control (2020).

    Shauna Hurley does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How can we improve public health communication for the next pandemic? Tackling distrust and misinformation is key – https://theconversation.com/how-can-we-improve-public-health-communication-for-the-next-pandemic-tackling-distrust-and-misinformation-is-key-226718

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Return-to-office mandates may not be the solution to downtown struggles that Canadian cities are banking on

    Source: The Conversation (Au and NZ) – By Alexander Wray, PhD Candidate in Geography, Western University

    In recent months, many Canadian employers in both the public and private sectors have implemented return-to-office mandates, requiring workers who transitioned to remote or hybrid work during the COVID-19 pandemic to work in-person again.

    Employers are justifying these mandates by arguing they improve productivity, build more collaborative teams and improve mentorship for junior employees.

    Employers are not the only group ecstatic about these mandates. Municipalities and business owners are also expressing hope that the presence of office workers will spin off into greater consumer spending at restaurants and other businesses near office buildings. The expectation is that office workers will once again start spending money on coffee, lunch or after-work beverages.

    In 2022, the mayor of Ottawa partially blamed the downtown core’s economic struggles on the fact that federal public service workers were still largely working remotely. Federal workers have since been mandated to return to in-person work three days a week from late fall.

    The Canadian Federation of Independent Business similarly criticized the slow return to offices as a leading factor behind why small and medium-size businesses, especially restaurants and bars, are facing challenges in downtown areas.

    Insight into restaurant success

    During the pandemic, there were predictions that more than half of Canada’s independent restaurants would fail as part of their customer base — office workers — shifted to working from home.

    Our recent study investigated which operational, demographic and land use factors affected restaurant survival during the first year of the pandemic in London, Ont.

    We found no significant differences between restaurants that failed and restaurants that survived based on proximity to office uses. Instead, operational decisions made by restaurants individually were much more predictive of their survival than any geographic factor, including the presence of offices.

    Restaurants are seen along Richmond Street in downtown London, Ontario, in June 2021.
    (Alexander Wray), CC BY-NC-SA

    We found that restaurants located in areas receiving more CERB (Canadian Emergency Response Benefit) payments, and with a higher density of entertainment venues around them, were less likely to survive.

    Restaurants that adapted by offering pickup and delivery options were more likely to survive, though only for those that did their own delivery in-house rather than relying on platforms like UberEats and SkipTheDishes. Restaurants that had drive-thrus, held liquor licenses, or had been established for more than five years were more likely to survive. These older, more established restaurants were likely more resilient because of financial stability and customer loyalty.

    Table-service restaurants fared better than fast food outlets, likely because they could offer large patio dining spaces during the summer. Restaurants with liquor licenses substantially benefited, especially after a regulatory change by the Ontario government that allowed alcohol sales with takeout and delivery — a first for the province.

    In short, restaurant success was driven more by individual business decisions rather than being in a specific location. People working remotely instead of in the office did not significantly affect restaurant survival during the first year of the pandemic.

    Downtown struggles

    As Canadian downtowns look to recover, many face ongoing challenges. Activity levels are down by about 20 per cent from pre-pandemic levels in many places, lagging behind many similarly sized downtowns in the United States.

    This downturn has been partially attributed to a combination of higher office building vacancies and fewer workers downtown. For the first time, downtown office vacancy rates have exceeded suburban rates in the Greater Toronto Area. There has also been tremendous housing growth within many downtown cores.

    At the same time, downtowns have become a highly visible focal point of Canada’s growing addictions, mental health and housing crises. The pandemic fully revealed the deeper social, economic and health challenges happening in Canadian society.

    While violent incidents are rare, the social incivilities and disorder on display (public urination and defecation, open drug use, visible tents and property crime) contribute to a perception that Canadian downtowns are unsafe. This perception, whether accurate or not, affects people’s willingness to engage with their downtowns.

    A way forward

    The damage to the reputation of Canada’s downtowns has been done. Downtown London now has the highest office vacancy rate in the country. The Workplace Safety Insurance Board of Ontario, for instance, recently chose to consolidate its offices in the outskirts of London, rather than downtown.

    Many people now elect to spend their time and money in areas that have embraced the “experience economy.” These are places that provide highly manicured entertainment and shopping destinations, with restaurants as the bedrock of high-quality experiences in these areas.

    Foot traffic is at an all-time high in suburban shopping centres. The downtowns of cities that are widely known as global tourist destinations — Las Vegas, Miami and Nashville — have activity levels close to or higher than their pre-pandemic levels.

    These are places that are developing highly attractive economies that provide people with the safe, fun and exciting experiences they are looking for locally and internationally. Instead of trying to force unwilling workers back to the office, Canadian cities should instead focus on developing downtowns that people genuinely want to visit and experience.

    One potential way to do this is to provide wrap-around support services and direct pathways to stable housing across the entire community, as the City of London has done. By spreading care and outreach services across the entire city, rather than concentrating them exclusively in downtown areas, the negative effects of Canada’s homelessness crisis on urban cores can be reduced.

    This type of strategy will direct those who need help away from downtowns, and may even permanently lift them out of poverty. In turn, Canadian downtowns can return to being places for everyone to shop, eat, relax, and work in comfort.

    Alexander Wray is President of the Town and Gown Association of Ontario, and a Board Member of Mainstreet London.

    Jamie Seabrook, Jason Gilliland, and Sean Doherty do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Return-to-office mandates may not be the solution to downtown struggles that Canadian cities are banking on – https://theconversation.com/return-to-office-mandates-may-not-be-the-solution-to-downtown-struggles-that-canadian-cities-are-banking-on-239682

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: How to help your child return to school after a long illness, new diagnosis or an accident

    Source: The Conversation (Au and NZ) – By Sarah Jefferson, Senior Lecturer in Education, Edith Cowan University

    It is very common for children to have a day or two away from school due to illness. But children can also miss much longer periods of schooling if they have a serious illness or injury.

    This could be a severe episode of mental illness, a diagnosis of Type 1 diabetes or in my family’s case, our youngest child being hit by a car at a pedestrian crossing, requiring months of rehab.

    After the initial shock, treatment and recovery, families then need to navigate a complex return to school – to make things as normal as possible for the student while handling their ongoing medical needs.

    How can families support their child?

    How many students are missing school?

    There are many reasons why children may need to have a significant break from school.

    At least one in every ten children under the age of 14 lives with a chronic health condition.

    These conditions, which can include heart disease, diabetes, asthma, mental illness and cancers, can lead to weeks or months in hospital.

    A 2018 study found 70,000 Australians under 16 are also hospitalised with a serious injury each year.

    Students can end up missing a significant amount of school due to injury or chronic illness.
    moonmovie/Shutterstock

    Come back with a plan

    We know going to school is central to children’s social and emotional wellbeing, as well as their academic progress. So getting back to school is a key part of a student’s ongoing health and wellbeing.

    The Royal Children’s Hospital Melbourne warns children can get mentally and physically tired after a long or serious illness.

    So it recommends returning to school gradually. Students may just go for half days or for a few hours initially.

    To make this as smooth as possible, parents or caregivers should meet with the school before the planned return. This meeting should include the student if possible, relevant teachers (such as class teachers and year-level coordinators) and the school nurse.

    Not all schools have a dedicated nurse. But if there is one available, they can play an important liaison role and manage a child’s medications or situation at school. If there is no nurse, make sure you include the school’s administration team.

    The meeting with the school should set out a clear plan for what new support the student needs and how they will receive it. They may need changes to their uniform, timetable or where they physically go in the school. Students may also need extra time to do work, extra academic help and extra breaks.

    Families may also want to schedule regular catch-ups with the school.

    Students may not initially be able to return to school full time.
    engagestock/Shutterstock

    How is the student feeling?

    Children can be worried about not fitting in, especially if something significant has happened to them that makes them feel different from their peers. They may not want a huge fuss when they come back.

    Arranging time to talk to or see friends before they come back can help ease a student into their new routine.

    Depending on the situation, you could enlist a trusted buddy to help with bags or walk a bit more slowly with them between classes.

    Or students may get special permission to leave class a bit early to avoid crowds, or to be able to go and see the nurse without asking the teacher each time and drawing attention to themselves.

    As your child returns, make sure the focus is not just on catching up academically but catching up with friends as well. If their hours at school are reduced, try to allow for social time (such as recess or lunch) as well as lessons.

    Your child will likely be dealing with a lot, both mentally and physically. So keep talking to them as much as possible about how they are feeling and going as they return.

    Things may have changed for them (and for you), but with time and support, school can feel like a normal part of life again.

    Sarah Jefferson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How to help your child return to school after a long illness, new diagnosis or an accident – https://theconversation.com/how-to-help-your-child-return-to-school-after-a-long-illness-new-diagnosis-or-an-accident-240012

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Limestone and iron reveal puzzling extreme rain in Western Australia 100,000 years ago

    Source: The Conversation (Au and NZ) – By Milo Barham, Associate Professor, Earth and Planetary Sciences, Curtin University

    Limestone pinnacles of the Nambung National Park karst. Matej Lipar

    Almost one-sixth of Earth’s land surface is covered in otherworldly landscapes with a name that may also be unfamiliar: karst. These landscapes are like natural sculpture parks, with dramatic terrain dotted with caves and towers of bedrock slowly sculpted by water over thousands of years.

    Karst landscapes are beautiful and ecologically important. They also represent a record of Earth’s past temperature and moisture levels.

    However, it can be quite challenging to figure out exactly when karst landscapes formed. In our new work published today in Science Advances, we show a new way to find the age of these enigmatic landscapes, which will help us understand our planet’s past in more detail.

    Flowstones, stalactites and caverns within Jenolan Caves, NSW, Australia.
    Matej Lipar

    The challenge

    Karst is defined by the removal of material. The rock towers and caves we see today are what is left after water dissolved the rest during wet periods of the past.

    This is what makes their age hard to determine. How do you date the disappearance of something?

    Traditionally, scientists have loosely bracketed the age of a karst surface by dating the material above and beneath. However, this approach blurs our understanding of ancient climate events and how ecosystems responded.

    Geological clocks

    In our study, we found a way to measure the age of pebble-sized iron nodules that formed at the same time as a karst landscape.

    This method has the technical name of (U/Th)-He geochronology. In it, we measure how much helium is produced by the natural radioactive decay of tiny amounts of the elements uranium and thorium in the iron nodules. By comparing the amounts of uranium, thorium and helium in a sample, we can very accurately calculate the age of the nodules.
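
    The ingrowth arithmetic behind this comparison can be sketched numerically. The snippet below is an illustrative simplification only, not the authors' actual procedure: it assumes a closed system and ignores the alpha-ejection and helium-loss corrections real (U/Th)-He work must apply. The decay constants are standard published values; the isotope quantities in the usage example are invented for illustration.

    ```python
    import math

    # Standard decay constants (per year) for the three parent isotopes.
    LAMBDA_238U = 1.55125e-10
    LAMBDA_235U = 9.8485e-10
    LAMBDA_232TH = 4.9475e-11

    def helium_produced(n238, n235, n232, t_years):
        """Radiogenic 4He atoms accumulated after t_years, counting
        8, 7 and 6 alpha particles per completed decay chain of
        238U, 235U and 232Th respectively (secular equilibrium)."""
        return (8 * n238 * (math.exp(LAMBDA_238U * t_years) - 1)
                + 7 * n235 * (math.exp(LAMBDA_235U * t_years) - 1)
                + 6 * n232 * (math.exp(LAMBDA_232TH * t_years) - 1))

    def solve_age(n238, n235, n232, he_atoms, t_max=1e9):
        """Invert the ingrowth equation for age by bisection;
        helium_produced() increases monotonically with time."""
        lo, hi = 0.0, t_max
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if helium_produced(n238, n235, n232, mid) < he_atoms:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    ```

    For example, a hypothetical sample holding 1e12 atoms of 238U (with 235U at the natural 1/137.88 abundance ratio) and 4e12 atoms of 232Th, measured with the helium expected after 100,000 years, returns an age of about 100,000 years from solve_age().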

    How iron nodules can reveal their age.
    Milo Barham

    We dated microscopic fragments of iron-rich nodules from the iconic Pinnacles Desert in Nambung National Park, Western Australia.

    This world-famous site is renowned for its otherworldly karst landscape of acres of limestone pillars towering metres above a sandy desert plain. The Pinnacles form part of the most extensive belt of wind-blown carbonate rock in the world, stretching more than 1,000km along coastal southwestern WA.

    The Western Australia ThermoChronology Hub (WATCH) ultra-high vacuum gas extraction line for measurements of radiogenic helium.
    Martin Danišik

    We examined multiple microscopic shards of iron nodules that were removed from the surface of limestone pinnacles. These nodules formed in the soil that lay on top of the limestone during the period of intense weathering that created the karst. As a result, they serve as time capsules of the environmental conditions that shaped the area.

    A scanning electron microscope image of iron-rich cement (lighter grey in centre) binding darker grey, rounded quartz sand grains within an analysed nodule.
    Aleš Šoster

    The big wet

    We consistently found an age of around 100,000 years for the growth of the iron nodules. This date is consistent with known ages from the rocks above and beneath the karst surface, supporting the reliability of our new approach.

    At the same time as chemical reactions caused growth of the iron-rich nodules within the ancient soil, limestone bedrock was rapidly and extensively dissolved to leave only remnant limestone pinnacles seen today.

    From examining the entire rock sequence in the area, we think this period of intensive weathering was the wettest time in this part of WA in at least the past half-million years.

    We don’t know what drove this increased rainfall. It may have been changes to atmospheric circulation patterns, or the greater influence of the ancient Leeuwin Current that runs along the shore.

    Such a humid interval is in dramatic contrast to the recent droughts and increasingly dry climate of the region today.

    Implications for our past

    Iron-rich nodules are not unique to the Nambung Pinnacles. They have recently been used to track dramatic past environmental change elsewhere in Australia.

    Dating these iron nodules will help to better document the dramatic fluctuations in Earth’s climate over the past three million years as ice sheets have grown and shrunk.

    Understanding the timing and environmental context of karst formation throughout this time offers profound insights into past climate conditions, environments and the landscapes in which ancient creatures lived.

    Dark iron-rich nodules attached to the side of the base of a limestone pinnacle in the Nambung National Park.
    Matej Lipar

    Climate changes and resulting environmental shifts have been crucial in shaping ecosystems. In particular, they have had a profound influence on our ancient hominin and human ancestors.

    By linking karst formation to specific climatic intervals, we can better understand how these environmental changes may have affected early human populations.

    Looking forward

    The more we know about the conditions that led to the formation of past landscapes and the flora and fauna that inhabited them, the better we can appreciate the evolutionary pressures that shaped the ecosystems we see today. This in turn offers valuable information for preparing for future changes.

    As human-driven climate change accelerates, learning about past climate variability and biosphere responses equips us with knowledge to anticipate and mitigate future impacts.

    The ability to date karst features with greater precision may seem like a small thing – but it will help us understand how today’s landscapes and ecosystems might respond to ongoing and future climate changes.

    Milo Barham has previously received research funding from the Minerals Research Institute of Western Australia.

    Andrej Šmuc, John Allan Webb, Kenneth McNamara, Martin Danisik, and Matej Lipar do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Limestone and iron reveal puzzling extreme rain in Western Australia 100,000 years ago – https://theconversation.com/limestone-and-iron-reveal-puzzling-extreme-rain-in-western-australia-100-000-years-ago-238801

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: More consumption, more demand for resources, more waste: why urban mining’s time has come

    Source: The Conversation (Au and NZ) – By Michael Odei Erdiaw-Kwasie, Lecturer in Sustainability| Business and Accounting Discipline, Charles Darwin University

    Lynda Disher/Shutterstock

    Pollution and waste, climate change and biodiversity loss are creating a triple planetary crisis. In response, UN Environment Programme executive director Inger Andersen has called for waste to be redefined as a valuable resource instead of a problem. That’s what urban mining does.

    We commonly think of mining as drilling or digging into the earth to extract precious resources. Urban mining recovers these materials from waste. It can come from buildings, infrastructure and obsolete products.

    An urban mine, then, is the stock of precious metals or materials in the waste cities produce. In particular, electronic waste, or e‑waste, has higher concentrations of precious metals than many mined ores. Yet the UN Global E‑waste Monitor estimates US$62 billion worth of recoverable resources was discarded as e‑waste in 2022.

    Urban mining can recover these “hidden” resources in cities around the world. It offers sustainable solutions to the problems of resource scarcity and waste management. And it happens in the very cities that are centres of overconsumption and hotspots for the greenhouse gas emissions driving climate change.

    What sort of waste can be mined?

    Materials such as concrete, pipes, bricks, roofing materials, reinforcements and e‑waste can be recovered for reuse. Urban waste can be “mined” for metals such as gold, steel, copper, zinc, aluminium, cobalt and lithium, as well as glass and plastic. Mechanical or chemical treatments are used to retrieve these metals and materials.

    Simply disposing of this waste has high financial and environmental costs. In Australia, about 10% of waste is hazardous. Landfill costs are soaring as cities run out of space to discard their waste.

    The extent of this fast-growing problem is driving the growth of urban mining around the world. Urban mining salvages materials whose supply is finite, while reducing the impacts of waste disposal.

    Many plastics can be recycled and turned into new products.
    MAD.vertise/Shutterstock

    What’s happening globally?

    In Europe, the focus is largely on construction and demolition waste. Europe produces 450 million to 500 million tonnes of this waste each year – more than a third of all the region’s waste. Through its urban mining strategy, the European Commission aims to increase the recovery of non-hazardous construction and demolition waste to at least 70% across member countries by 2030.

    In Asia, urban mining has focused on e‑waste. However, the region recovers only about 12% of its e‑waste stock. Rates of e‑waste recycling vary greatly: 20% for East Asia, 1% for South Asia, and virtually zero for South-East Asia. China, Japan and South Korea are leading the way in Asia.

    Australia is on the right track. Our recovery rate for construction and demolition materials climbed to 80% by 2022 — the highest among all types of waste streams. However, we recover only about a third of the value of materials in our e-waste.

    Africa has also recognised the growing value of urban mining resources. Regional initiatives include the Nairobi Declaration on e‑waste, the Durban Declaration on e‑Waste Management in Africa and the Abuja Platform on e‑Waste.

    Urban mining solves many problems

    The OECD forecasts that global materials demand will almost double from 89 billion tonnes in 2019 to 167 billion tonnes in 2060. The United Nations’ Global Waste Management Outlook 2024 shows the amount of waste and costs of managing it are soaring too. It’s estimated the world will have 82 million tonnes of e‑waste to deal with by 2030.

    These trends mean urban mining is becoming ever more relevant and important.

    Urban mining also helps cut greenhouse gas emissions. Unlocking resources near where they are needed reduces transport costs and emissions. Urban mining also provides resource independence and creates employment.

    In addition, increasing recovery and recycling rates reduce the pressure on finite natural resources.

    Urban mining underpins circular economy alternatives such as the “deposit and return” schemes that give people financial incentives to return e‑waste and containers for recycling in cities such as Singapore, Sydney, Darwin and San Francisco. By 2030, San Francisco aims to halve disposal to landfill or incineration and cut solid waste generation by 15%.

    What more needs to be done?

    Governments have a role to play by adopting and enforcing policies, laws and regulations that encourage recycling through urban mining instead of sending waste to landfill. European Union laws, for example, mandate increased recycling targets for municipal waste overall and for packaging waste, including 80% for ferrous metals and 60% for aluminium.

    In Australia, legislation introduced in 2019 prohibits landfills from accepting anything with a plug, battery or cord, all of which is designated as e-waste.

    Product design is an important consideration. A designer must balance a product’s efficiency with making it easy to recycle. Products that are more efficient and have easy-to-recycle parts are likely to use less energy and generate less waste, and hence require less natural resource extraction.

    Our urban mining research documents a more sustainable approach to product design. Increasing product stewardship initiatives are expected to encourage better product design and standards that promote reuse and recycling, producer responsibility and changes in consumer behaviour.

    Good information about the available resources is essential too. The Urban Mine Platform, ProSUM and Waste and Resource Recovery Data Hub collect data on e‑waste, end-of-life vehicles, batteries and building and mining waste. These centralised databases allow easy access to data on the sources, stocks, flows and treatment of waste.

    Traditional mining is not the only method for extracting raw materials for the green transition. Waste is set to be increasingly recycled, reducing demand for virgin materials. A truly circular economy can become a reality if governments develop and apply an urban mining agenda.

    Michael Odei Erdiaw-Kwasie receives funding from the Foundation for Rural and Regional Renewal (FRRR).

    Matthew Abunyewah receives funding from the Foundation for Rural and Regional Renewal (FRRR) and the Northern Western Australia and Northern Territory Drought Resilience Adoption and Innovation Hub (Northern Hub).

    Patrick Brandful Cobbinah receives funding from Lincoln Institute of Land Policy. He is a member of Planning Institute of Australia.

    ref. More consumption, more demand for resources, more waste: why urban mining’s time has come – https://theconversation.com/more-consumption-more-demand-for-resources-more-waste-why-urban-minings-time-has-come-232484

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Joker: Folie à Deux as ‘ruin porn’ – how the new sequel plays with duplication and disintegration

    Source: The Conversation (Au and NZ) – By Anna-Sophie Jürgens, Senior Lecturer in Science Communication (Pop Culture Studies), Australian National University

    Warner

    Like two-headed playing cards, Joker stories are about dual identity, doubles and duplicity.

    Throughout DC comics and films, the Joker turns others into facsimiles of himself, grinning widely. He shares his state of mind through infectious laughter and mass “clownification”, creating copies as he goes.

    Film sequel Joker: Folie à Deux, directed by Todd Phillips and released in cinemas today, participates in this rich tradition. It also challenges it by introducing a Joker haunted by his own lost futures – the glam clown, homicidal entertainer and irresistible lover he could have become.

    What can we learn from the Joker character about our cultural fascination with duplication and disintegration?

    Madness by imitation

    Doubling, split consciousness and double meanings have been ingredients in Joker stories since the character’s creation in the 1940s.

    He offers different origin stories himself in the 2008 movie blockbuster The Dark Knight (with Heath Ledger as the Joker). He is presented as many in the recent comic series Three Jokers. The Joker shuffles his own “selves like a croupier deals cards” in the 2007 Batman comic The Clown at Midnight.

    Within the DC clowniverse, the Joker turns others into Joker copies and clowns, usually through the use of biological or chemical weapons or poisons, virology, hypnotism or sheer charisma. Joker copies include Joker fans and followers in clown costumes and masks, as in the 2019 film starring Joaquin Phoenix. In comics he is described as having an influence that

    […] affects people, on an almost subconscious, primal level. For most people – regular people – he inspires fear. For the less stable people – he simply inspires.

    For more than 80 years, his laughter has spread like a virus and caused mass-clownification countless times.

    ‘The whole world smiles with you.’ The new Joker sequel plays with dual identity and shadow selves.

    Multiplying his potency

    Joker stories tend to revolve around three scenarios of imitation, doubling and multiplication: several people acting as one (that is, the Joker), one person acting as many (as in Batman: R.I.P., when Batman tries to understand the Joker by experiencing his state of mind like a second consciousness), and a number of personalities nestled within the Joker wreaking havoc. All of these scenarios are powerful reminders that clown laughter and humour need not be funny.

    The Joker character was inspired by famous films from the 1920s and ’30s, including Robert Wiene’s The Cabinet of Dr Caligari (1920), F.W. Murnau’s Nosferatu (1922), Fritz Lang’s Metropolis (1926), Roland West’s The Bat (1926) and Paul Leni’s The Man Who Laughs (1928). Many of these works feature hapless or unhappy (comic) performers, who all struggle with identity.

    The cultural mould to which the Joker belongs is linked with the more than century-old fascination with doppelgangers, male nervousness, violent and involuntary laughter and the loss of agency and sense of the self.

    The Joker has long played with ideas of duality.
    IMDB/Warner

    Haunting through absence

    The new sequel, Joker: Folie à Deux, draws on all these very Joker traditions. Arthur Fleck and his Joker (Phoenix again) struggle with split identities.

    Set two years after the events of the previous film, Fleck is a patient at Arkham State Hospital, where he meets the dual character Lee Quinzel/Harley Quinn (played by Lady Gaga). She wants him to lean into his Joker self.

    Although she is neither the clown nor the scientist she is portrayed as in other stories, she also wants to be a Joker version. Arthur himself wants to be the Joker, but for reasons both external and internal he ends up not really becoming the Joker we recognise from the first film.

    The sequel is ultimately a trick played on the audience. “There is no Joker,” Arthur confirms at the end, just Arthur. Folie à Deux is about a broken dream’s loveliness.

    The Joker is a collective dream that fails to come true. He appears in the form of fantasies. He is the past, but at the same time present and absent. This is how the concept of hauntology has been defined – a split between realities. The film glamorises and exploits disillusion as we watch the Joker and his future possibilities disintegrate.

    In this way, Joker: Folie à Deux is a clown version of ruin porn, inviting us to enjoy the “decay” of a character. It gives us glimpses of a post-double version of the Joker, a non-Joker, left in pieces.

    Joker: Folie à Deux is in cinemas now.

    Anna-Sophie Jürgens does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Joker: Folie à Deux as ‘ruin porn’ – how the new sequel plays with duplication and disintegration – https://theconversation.com/joker-folie-a-deux-as-ruin-porn-how-the-new-sequel-plays-with-duplication-and-disintegration-240311

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Is stress turning my hair grey?

    Source: The Conversation (Au and NZ) – By Theresa Larkin, Associate Professor of Medical Sciences, University of Wollongong

    Oksana Klymenko/Shutterstock

    When we start to go grey depends a lot on genetics.

    Your first grey hairs usually appear anywhere between your twenties and fifties. For men, grey hairs normally start at the temples and sideburns. Women tend to start greying on the hairline, especially at the front.

    The most rapid greying usually happens between ages 50 and 60. But does anything we do speed up the process? And is there anything we can do to slow it down?

    You’ve probably heard that plucking, dyeing and stress can make your hair go grey – and that redheads don’t. Here’s what the science says.

    What gives hair its colour?

    Each strand of hair is produced by a hair follicle, a tunnel-like opening in your skin. Follicles contain two different kinds of stem cells:

    • keratinocytes, which produce keratin, the protein that makes and regenerates hair strands
    • melanocytes, which produce melanin, the pigment that colours your hair and skin.

    There are two main types of melanin that determine hair colour. Eumelanin is a black-brown pigment and pheomelanin is a red-yellow pigment.

    The amount of the different pigments determines hair colour. Black and brown hair has mostly eumelanin, red hair has the most pheomelanin, and blonde hair has just a small amount of both.

    So what makes our hair turn grey?

    As we age, it’s normal for cells to become less active. In the hair follicle, this means stem cells produce less melanin – turning our hair grey – and less keratin, causing hair thinning and loss.

    As less melanin is produced, there is less pigment to give the hair its colour. Grey hair has very little melanin, while white hair has none left.

    Unpigmented hair looks grey, white or silver because light reflects off the keratin, which is pale yellow.

    Grey hair is thicker, coarser and stiffer than hair with pigment. This is because the shape of the hair follicle becomes irregular as the stem cells change with age.

    Interestingly, grey hair also grows faster than pigmented hair, but it uses more energy in the process.

    Can stress turn our hair grey?

    Yes, stress can cause your hair to turn grey. This happens when oxidative stress damages hair follicles and stem cells and stops them producing melanin.

    Oxidative stress is an imbalance of too many damaging free radical chemicals and not enough protective antioxidant chemicals in the body. It can be caused by psychological or emotional stress as well as autoimmune diseases.

    Environmental factors such as exposure to UV, pollution, as well as smoking and some drugs, can also play a role.

    Melanocytes are more susceptible to damage than keratinocytes because of the complex steps in melanin production. This explains why ageing and stress usually cause hair greying before hair loss.

    Scientists have been able to link less pigmented sections of a hair strand to stressful events in a person’s life. In younger people, whose stem cells still produced melanin, colour returned to the hair after the stressful event passed.

    4 popular ideas about grey hair – and what science says

    1. Does plucking a grey hair make more grow back in its place?

    No. When you pluck a hair, you might notice a small bulb at the end that was attached to your scalp. This is the root. It grows from the hair follicle.

    Plucking a hair pulls the root out of the follicle. But the follicle itself is the opening in your skin and can’t be plucked out. Each hair follicle can only grow a single hair.

    It’s possible frequent plucking could make your hair grey earlier, if the cells that produce melanin are damaged or exhausted from too much regrowth.

    2. Can my hair turn grey overnight?

    Legend says Marie Antoinette’s hair went completely white the night before the French queen faced the guillotine – but this is a myth.

    It is not possible for hair to turn grey overnight, as in the legend about Marie Antoinette.
    Yann Caradec/Wikimedia, CC BY-NC-SA

    Melanin in hair strands is chemically stable, meaning it can’t transform instantly.

    Acute psychological stress does rapidly deplete melanocyte stem cells in mice. But the effect doesn’t show up immediately. Instead, grey hair becomes visible as the strand grows – at a rate of about 1 cm per month.

    Not all hair is in the growing phase at any one time, meaning it can’t all go grey at the same time.

    3. Will dyeing make my hair go grey faster?

    This depends on the dye.

    Temporary and semi-permanent dyes should not cause early greying because they just coat the hair strand without changing its structure. But permanent products cause a chemical reaction with the hair, using an oxidising agent such as hydrogen peroxide.

    Accumulation of hydrogen peroxide and other hair dye chemicals in the hair follicle can damage melanocytes and keratinocytes, which can cause greying and hair loss.

    4. Is it true redheads don’t go grey?

    People with red hair also lose melanin as they age, but differently to those with black or brown hair.

    This is because the red-yellow and black-brown pigments are chemically different.

    Producing the brown-black pigment eumelanin is more complex and takes more energy, making it more susceptible to damage.

    Producing the red-yellow pigment (pheomelanin) is simpler and causes less oxidative stress. This means it is easier for stem cells to continue to produce pheomelanin, even as their activity declines with ageing.

    With ageing, red hair tends to fade into strawberry blonde and silvery-white. Grey colour is due to less eumelanin activity, so is more common in those with black and brown hair.

    Your genetics determine when you’ll start going grey. But you may be able to avoid premature greying by staying healthy, reducing stress and avoiding smoking, too much alcohol and UV exposure.

    Eating a healthy diet may also help because vitamin B12, copper, iron, calcium and zinc all influence melanin production and hair pigmentation.

    Theresa Larkin does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Is stress turning my hair grey? – https://theconversation.com/is-stress-turning-my-hair-grey-239100

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Lessons from Cyclone Gabrielle: 5 key health priorities for future disaster response

    Source: The Conversation (Au and NZ) – By Holly Thorpe, Professor in Sociology of Sport and Gender, University of Waikato

    Getty Images

    “The climate crisis is a health crisis.” So says World Health Organization Director-General Tedros Ghebreyesus.

    The World Economic Forum agrees. Its report this year highlighted how climate change is taking a toll on global health due to increasingly frequent extreme weather events.

    These issues are on the official agenda here too, especially since severe tropical cyclone Gabrielle caused extensive damage in the South-west Pacific and northern New Zealand in early 2023.

    Between February 13 and 14 it slammed into Te Tairāwhiti/East Coast and Te Matau a Māui/Hawkes Bay, with disastrous results for the land and its inhabitants. Communities were displaced, homes destroyed, power and telecommunications cut, water systems compromised, and many roads and bridges badly damaged.

    Shortly after Gabrielle hit, Manatū Hauora/Ministry of Health commissioned us to investigate the impacts of adverse weather events on health systems and community health and wellbeing.

    Our community research teams interviewed 143 residents in the two affected regions. They included first responders, health workers, council staff and members of the public. Their stories were emotional, powerful and insightful.

    Our recently published report amplifies these community voices and local knowledge, and offers recommendations about planning for future, inevitable events. Here we offer five key messages.

    1. Prioritise vulnerable people

    Many older people and those with disabilities or existing health conditions were deprioritised or simply forgotten during evacuations and in the days and weeks after the cyclone. As one community responder in Tairāwhiti recalled:

    Some of them couldn’t move out because they were so old and frail. The water was so powerful, they couldn’t move anywhere. Some just stayed in their room until somebody turned up. For instance, there was a lady [who] was stuck in her wheelchair, and by the time people found her, the water was at her neck.

    Our report identified the need for health and social services to work more closely to ensure at-risk, vulnerable older people and those with disabilities or complex needs are prioritised during evacuations, so their medical and physical needs are met during and after an extreme weather event.

    2. Invest in mental health support and trauma recovery

    Those in the most affected communities had high levels of stress, grief and trauma during and after emergencies and evacuations.

    Staff and volunteers in front-line roles during the state of emergency experienced similar mental health effects. Many felt mental health support was not there when they needed it most.

    Almost everyone we spoke to had some negative mental health impacts. These included sleep disruption, rain anxiety and stress from road closures, insurance claims and land instability.

    Māori participants also told of their grief over environmental damage and destruction, highlighting the links between whenua (land) and hauora (health). They described drawing on cultural practices to support whānau recovery. For example, a leader of local volunteer efforts spoke about the personal impact of the cyclone:

    I was not good […] it was seeing the impact on how it was for your own community whānau. I think it hit me quite a bit later on. I fell into depression […] It just built up over time. I’m still in healing therapy for the last probably six to seven months since Gabrielle, just trying to get my wairua [spirit] and my tinana [body] and everything back in place.

    Overall, the research shows a need for greater awareness and investment in weather-related trauma recovery and mental health support.

    3. Ensure medical supplies can reach remote areas

    Rural and isolated communities had heightened health challenges, particularly due to road and communication failures.

    Transporting medical staff into these communities often required creative solutions (driving, using helicopters or hiking through bush and across farmland when roads were damaged, for example).

    Access to medicines was a major concern. It took co-ordinated effort to get pharmaceuticals to such communities. Helicopters were crucial in getting supplies and patients in and out of remote areas. Not everyone who needed attention received it, however.

    The most effective responses involved organisations (such as the NZ Police and Civil Defence) working together with communities. As one police officer told us:

    Our whānau up the coast needed medicine, prescriptions. Getting access from the helicopter to the home was a challenge. So, the police leant in and helped out. We used [an all-terrain vehicle] to get to places and spaces to get medicine in.

    People need to be prepared for power and telecommunications failures.
    Getty Images

    4. Resource and co-ordinate local support networks

    Fiscally challenged health systems were stretched during the emergency and struggled with power and telecommunications outages. But we heard of many health workers going “above and beyond” to care for patients and communities.

    Many continued working even when their own families, homes and communities were directly under threat. Anticipating this and supporting these workers will be important as adverse weather becomes more frequent with climate change.

    We also found marae, schools, local social services and non-profit organisations played key roles after the cyclone, but were often outside the direct ambit of the health system.

    Often the people working in these organisations have strong community relationships and knowledge that is essential to supporting emergency and recovery processes. These connections should be mapped and integrated for future events.

    5. Shift resources and build common will

    Local communities are full of knowledge. Many have learnt from recent events to better prepare their families, workplaces and organisations.

    Whānau told us about the importance of having cash in case of power outages and telecommunications failure. Others identified battery-powered radio as a critical source of information when systems were down. Pharmacists and doctors told of the importance of hard-copy evidence of prescriptions, to be able to dispense when electronic systems are out.

    Checking in on neighbours, sharing resources and making time for a cup of tea were all important for people in the recovery and rebuilding phases. A key lesson is to harness the power of community connections, trust and relationships in climate change resilience and recovery.

    Although knowledge, experience and wisdom lie in the hands of communities, our research highlights how financial resources mostly sit with central government. The challenge is to shift resources and build common will for climate action, before the inevitable next event.

    The report is receiving attention in parliament. We hope local experience can be central to planning around the health impacts of climate change and decision-making at all levels.


    We acknowledge the important contributions of our wider research team and community partners, particularly Manu Caddie (Te Weu Charitable Trust), Josie McClutchie (project lead), Dayna Chaffey, Haley Maxwell and Hiria Philip-Barbara (community researchers) in Tairāwhiti, and Emma Horgan and John Bell (Sustainable HB Centre for Climate & Resilience) in Hawkes Bay.


    Holly Thorpe received support from the Manatū Hauora/Ministry of Health funding secured to conduct this research.

    Fiona Langridge received support from the Ministry of Health funding secured to conduct this research.

    George Laking received funding from The Ministry of Health to conduct the research. He is an Executive Board member of OraTaiao, the New Zealand Climate and Health Council.

    Judith McCool receives funding from the Ministry of Health (Polynesia Health Corridors) and the Health Research Council.

    ref. Lessons from Cyclone Gabrielle: 5 key health priorities for future disaster response – https://theconversation.com/lessons-from-cyclone-gabrielle-5-key-health-priorities-for-future-disaster-response-239392

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: When even fringe festival venues exclude people with disability, cities need to act on access

    Source: The Conversation (Au and NZ) – By Shane Clifton, Associate Professor of Practice, School of Health Sciences and the Centre for Disability Research and Policy, University of Sydney

    Sanit Fuangnakhon/Shutterstock

    It’s about time city councils did more to make our cities accessible. I recently tried to buy tickets to two Sydney Fringe Festival events, only to be told by the box office that the venues were not wheelchair-accessible.

    Sydney remains a place where people with disability feel like they don’t belong. The same is true of other Australian cities. But local councils don’t bear all the blame.

    Event organisers are responsible for selecting venues. In the case of the Fringe Festival, they chose locations inaccessible to wheelchair users and others with mobility challenges. It’s a bitter irony that a fringe festival, which ostensibly empowers artists and creatives on the margins, would exclude people with disability.

    If event organisers (and every one of us) decided never to hire inaccessible venues, then the market might solve the issue. But those of us with disability are realistic enough to know most people don’t care – or don’t give us a thought. The market hasn’t solved the problem, so it’s up to governments.

    The problems go beyond arts venues

    Inaccessible venues are only the tip of the iceberg. Countless restaurants, shops and offices are inaccessible, with steps on entry, inaccessible bathrooms and narrow and cluttered aisles.

    “Spend the day in my wheelchair” programs are sometimes criticised for trivialising the challenge of disability. However, they do unmask how frustrating and alienating our cities and towns can be.

    Google Maps now indicates whether premises are accessible. Those that are bear the universal symbol of disability access – the stylised blue wheelchair. Even then, a person with a disability is just as likely as not to turn up and discover a lift has broken down, a doorway has been blocked off, a bathroom has been used for storage, or a venue is only partially accessible (it’s always the cool spaces that are out of reach).

    The Commonwealth and states brought in disability discrimination laws in the 1990s. These have made some difference, but their many exemptions let businesses off the hook. (See the Disability Royal Commission’s recommendations to amend the Disability Discrimination Act 1992.)

    More than 30 years down the track, our cities and towns remain bastions of exclusion.

    Newtown Hotel is marked as accessible on Google Maps, but the upstairs room used for a Sydney Fringe Festival event was not.
    Slow Walker/Shutterstock



    Read more:
    What does a building need to call itself ‘accessible’ – and is that enough?


    Better access benefits everyone

    Landowners and businesses typically complain providing access for the few affected people is too costly. In reality, making our public spaces accessible often requires little more than determined creative design. The costs are a mere fraction of what we spend on other things we judge as more important.

    We also underestimate the value added by accessible design.

    The Kerb-Cut Effect, for example, describes how designing for people with disability often benefits everyone. The term refers to the impact of activist action in California in the 1970s. Disability advocates in the city of Berkeley poured concrete onto road kerbs to create ramps giving wheelchair users access to footpaths.

    These ramps also proved valuable to parents pushing children in strollers, older people and cyclists. Refined into kerb cuts, they spread rapidly around the world.

    There are many other examples. Television captioning, developed for people who are deaf and hard of hearing, is now widely used by non-disabled people. Audiobooks, developed for people who are blind, are now a common way that many other people enjoy books.

    Accessible venues will not just benefit wheelchair users. Older people, those with impaired mobility and people who push prams and tow suitcases all benefit. Indeed, if we make venues accessible to those on the margins, no one is excluded.

    The UN Convention on the Rights of Persons with Disabilities highlights the importance of universal design. The convention insists on

    the design of products, environments, programs and services to be usable by all people, to the greatest extent possible, without the need for adaptation or specialised design.

    Why use steps that exclude some people when everyone can use a ramp or a lift?

    Kerb cuts are now common since it became obvious how many people benefited from designing ramps into road-crossing points.
    John Robert McPherson/Wikimedia Commons, CC BY-SA

    Why councils must lead the way

    Accessibility in cities is about more than just wheelchairs; it requires a comprehensive approach to urban planning to meet the varied needs of all citizens. This includes providing sensory aids like audio signals, braille signage and visual measures for people who are blind, deaf or hard of hearing. It’s also crucial that information on public services and events is easily available to everyone in formats they can access and understand.

    My focus has been on access to public spaces, but we also need to turn our attention to private homes. Wheelchair users and people with other mobility impairments can’t access most private houses in Australia. There is a drastic lack of accessible housing for people with disability and the cost of retrofitting access is exorbitant.

    New South Wales is yet to follow the lead of other states and territories by signing up to the Silver Liveable Housing Design Standards. These standards are part of the revised National Construction Code. They require new housing developments to offer basic accessibility for all people.

    We can and must do better. Every level of government can contribute to change.

    However, new builds and renovations are often decided upon at the regional level. This means local councils should bear much of the responsibility.

    A determined effort by our mayors and councillors to insist premises are accessible will be better for everyone. From a selfish perspective, it might mean I could go out to dinner or a festival without worrying if I can get in the door.

    Shane Clifton is affiliated with the Centre for Disability Research and Policy at the University of Sydney.

    ref. When even fringe festival venues exclude people with disability, cities need to act on access – https://theconversation.com/when-even-fringe-festival-venues-exclude-people-with-disability-cities-need-to-act-on-access-239937

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Final budget outcome shows 2023-24 surplus of $15.8 billion

    Source: The Conversation (Au and NZ) – By Michelle Grattan, Professorial Fellow, University of Canberra

    The budget surplus for last financial year has come in at $15.8 billion, well exceeding the $9.3 billion that was forecast in the May budget.

    Treasurer Jim Chalmers, just back from talks in Beijing on China’s economic outlook, will announce the result on Monday.

    The government says the better-than-forecast outcome has been driven entirely by lower spending. Revenue was also lower than the budget anticipated. Areas of savings included the National Disability Insurance Scheme, payments to the states, and various grant programs that have since lapsed.

    This is the government’s second consecutive surplus. The May budget predicted deficits for the coming years.

    Across 2022-23 and 2023-24 the budget position has improved by a cumulative $172.3 billion, compared with what was forecast in the official Pre-election Economic and Fiscal Outlook, released immediately before the 2022 election.

    The government says it has made $77.4 billion in savings, including $12.2 billion in 2023-24.

    Payments were 25.2% of GDP in 2023-24, compared to the PEFO forecast of 27.1%.

    Chalmers said this was the “first government to post back-to-back surpluses in nearly two decades”. The surpluses hadn’t come at the expense of cost-of-living relief, he said in a statement.

    Speaking in Beijing on Friday, Chalmers said it remained to be seen whether China’s just-announced stimulus measures would work.

    “But we’ve seen on earlier occasions when the authorities here, the administration here, steps in to support activity in the economy that is typically a good thing for Australia – good for our businesses and workers, our industries, our investors, and good for the global economy as well.

    “Like a lot of people around the world, we have been concerned about the softer conditions here in the Chinese economy. Subject to the details [of measures] that will be made public in good time, any efforts to boost growth and support activity here is a welcome one around the world and especially at home in Australia.”

    Chalmers on Monday is likely to face further questions on the Treasury’s work on negative gearing, news of which leaked out last week.

    Michelle Grattan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Final budget outcome shows 2023-24 surplus of $15.8 billion – https://theconversation.com/final-budget-outcome-shows-2023-24-surplus-of-15-8-billion-240093

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Retraction: why we removed an article about a link between exam results and ceiling height

    Source: The Conversation (Au and NZ) – By Misha Ketchell, Editor, The Conversation

    Today we removed an article titled “Should we ditch big exam halls? Our research shows how high ceilings are associated with a lower score”, because the original research has been found to contain errors and has been retracted by the academic journal that published it.

    The Conversation’s article, published on July 3, 2024, was based on a study published online by The Journal of Environmental Psychology on June 26, 2024. It looked at the impact of ceiling heights on the exam performance of Australian students, and found that even after accounting for other factors such as age or past exam experience, higher ceiling heights were statistically correlated with poorer exam results.

    After the study was published, a query from a reader of the journal article led the authors to review their calculations.

    The authors discovered some honest errors in their work, leading them to conclude that the relationship between ceiling heights and exam score was “more nuanced” than presented in the paper.

    The revised research manuscript was reviewed by the same anonymous peer-reviewers who looked at the original research. One reviewer did not feel comfortable assessing the statistical corrections, one advised against publishing the corrected manuscript, and a third recommended revisions.

    On this basis, the Journal of Environmental Psychology rejected the amended version. The journal’s response can be found here.

    The authors, led by Isabella Bower, apologise for the error, and are working to resubmit their updated research to another journal.

    The Conversation has decided that, in light of the current status of the research, the most appropriate option is to retract our coverage of the study. We are committed to providing accurate and reliable information, and to acknowledging errors in an open and transparent way when they occur.

    ref. Retraction: why we removed an article about a link between exam results and ceiling height – https://theconversation.com/retraction-why-we-removed-an-article-about-a-link-between-exam-results-and-ceiling-height-239930

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Meta has launched the world’s ‘most advanced’ glasses. Will they replace smartphones?

    Source: The Conversation (Au and NZ) – By Martie-Louise Verreynne, Professor in Innovation and Associate Dean (Research), The University of Queensland

    Humans are increasingly engaging with wearable technology as it becomes more adaptable and interactive. One of the most intimate forms gaining acceptance is augmented reality (AR) glasses.

    Last week, Meta debuted a prototype of the most recent version of their AR glasses – Orion. They look like reading glasses and use holographic projection to allow users to see graphics projected through transparent lenses into their field of view.

    Meta chief Mark Zuckerberg called Orion “the most advanced glasses the world has ever seen”. He said they offer a “glimpse of the future” in which smart glasses will replace smartphones as the main mode of communication.

    But is this true or just corporate hype? And will AR glasses actually benefit us in new ways?

    Old technology, made new

    The technology used to develop Orion glasses is not new.

    In the 1960s, computer scientist Ivan Sutherland introduced the first augmented reality head-mounted display. Two decades later, Canadian engineer and inventor Stephen Mann developed the first glasses-like prototype.

    Throughout the 1990s, researchers and technology companies developed the capability of this technology through head-worn displays and wearable computing devices. Like many technological developments, these were often initially focused on military and industry applications.

    In 2013, after smartphone technology emerged, Google entered the AR glasses market. But consumers were uninterested, citing concerns about privacy, high cost, limited functionality and a lack of a clear purpose.

    This did not discourage other companies – such as Microsoft, Apple and Meta – from developing similar technologies.

    Looking inside

    Meta cites a range of reasons why Orion is the world’s most advanced pair of glasses, such as its miniaturised technology with large fields of view and holographic displays. It said these displays provide:

    compelling AR experiences, creating new human-computer interaction paradigms […] one of the most difficult challenges our industry has ever faced.

    Orion also has an inbuilt smart assistant (Meta AI) to help with tasks through voice commands, eye and hand tracking, and a wristband for swiping, clicking and scrolling.

    With these features, it is not difficult to agree that AR glasses are becoming more user-friendly for mass consumption. But gaining widespread consumer acceptance will be challenging.

    A set of challenges

    Meta will have to address four types of challenges:

    1. ease of wearing, using and integrating AR glasses with other glasses
    2. physiological aspects such as the heat the glasses generate, comfort and potential vertigo
    3. operational factors such as battery life, data security and display quality
    4. psychological factors such as social acceptance, trust in privacy and accessibility.

    These factors are not unlike what we saw in the 2000s when smartphones gained acceptance. Just like then, there are early adopters who will see more benefits than risks in adopting AR glasses, creating a niche market that will gradually expand.

    Similar to what Apple did with the iPhone, Meta will have to build a digital platform and ecosystem around Orion.

    This will allow for broader applications in education (for example, virtual classrooms), remote work and enhanced collaboration tools. Already, Orion’s holographic display allows users to overlay digital content on the real world, and because it is hands-free, communication will be more natural.

    Creative destruction

    Smart glasses are already being used in many industrial settings, such as logistics and healthcare. Meta plans to launch Orion for the general public in 2027.

    By that time, AI will have likely advanced to the point where virtual assistants will be able to see what we see and the physical, virtual and artificial will co-exist. At this point, it is easy to see that the need for bulky smartphones may diminish and that through creative destruction, one industry may replace another.

    This is supported by research indicating the virtual and augmented reality headset industry will be worth US$370 billion by 2034.

    The remaining question is whether this will actually benefit us.

    There is already much debate about the effect of smartphone technology on productivity and wellbeing. Some argue that it has benefited us, mainly through increased connectivity, access to information, and productivity applications.

    But others say it has just created more work, distractions and mental fatigue.

    If Meta has its way, AR glasses will solve this by enhancing productivity. Consulting firm Deloitte agrees, saying the technology will provide hands-free access to data, faster communication and collaboration through data-sharing.

    It also claims smart glasses will reduce human errors, enable data visualisation, and monitor the wearer’s health and wellbeing. This will ensure a quality experience, social acceptance, and seamless integration with physical processes.

    But whether or not that all comes true will depend on how well companies such as Meta address the many challenges associated with AR glasses.

    Martie-Louise Verreynne receives funding from the ARC and NHMRC.

    ref. Meta has launched the world’s ‘most advanced’ glasses. Will they replace smartphones? – https://theconversation.com/meta-has-launched-the-worlds-most-advanced-glasses-will-they-replace-smartphones-240023

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Scientists recently studied the body of one of the world’s strongest men. This is what they found

    Source: The Conversation (Au and NZ) – By Justin Keogh, Associate Dean of Research, Faculty of Health Sciences and Medicine, Bond University

    The development of “superhuman” strength and power has long been admired in many cultures across the world.

    This may reflect the importance of these physical fitness characteristics in many facets of our lives from pre-history to today: hunting and gathering, the construction of large buildings and monuments, war, and more recently, sport.

    Potentially, the current peak of human strength and power is demonstrated in the sport of strongman.

    What is strongman?

    Strongman is becoming more common, with competitions now available at regional, national and international levels for men and women of different ages and sizes.




    Strongman training and competitions typically involve a host of traditional barbell-based exercises including squats, deadlifts and presses but also specific strongman events.

    The specific strongman events – such as the vehicle pull, farmer’s walk, sandbag/keg toss or stones lift – often require competitors to move a range of awkward, heavy implements either higher, faster or with more repetitions in a given time period than their competitors.

    Researching one of the greats

    Strongman has enjoyed substantial growth and development since the introduction of the World’s Strongest Man competition in the late 1970s.

    However, from a scientific perspective, there are few published studies focusing on athletes at the elite level.

    In particular, very little is currently known about the overall amount of muscle mass these athletes possess, how their mass is distributed across individual muscles, and to what extent their tendon characteristics differ from those of people who do not train.

    However, a recent study sought to shed some light on these extreme athletes. It examined the muscle and tendon morphology (structure) of one of the world’s strongest ever men – England’s Eddie Hall.

    Measuring an exceptionally strong person such as Hall – who produced a 500kg world record deadlift and won the “World’s Strongest Man” competition in 2017 – provided the opportunity to understand what specific muscle and tendon characteristics may have contributed to his incredible strength.

    Eddie Hall is one of world strongman’s finest competitors.

    What can we learn from a single case study?

    A limited number of athletes reach the truly elite level of strongman and even fewer set world records or win premier events.

    Because it’s so difficult to recruit even a small group of such rare athletes, conducting a case study with one elite strongman provided a unique opportunity to understand more about his muscle and tendon characteristics.

    Case studies have many limitations, including an inability to determine cause and effect or generalise findings to other individuals from the same group.

    However, the study of Hall was insightful, as his muscle and tendon results could be compared directly with various groups from the authors’ earlier published research.

    These groups included untrained people, people who have regularly resistance trained for several years, and competitive track sprinters.

    The inclusion of these comparative populations allowed meaningful interpretation of what makes Hall’s muscle and tendon characteristics so special.

    What they found

    Hall’s lower body muscle size was almost twice that of an untrained group of healthy active young men.

    And the manner in which his muscle mass was distributed across his lower body exhibited a very specific pattern.

    Three long thin muscles, referred to as “guy ropes”, were particularly large (some 2.5 to three times bigger) compared to untrained people.

    The guy rope muscles connect to the shin bone via a shared tendon and provide stability to the thigh and hips by fanning out and attaching to the pelvis at diverse locations.

    Highly developed guy rope muscles would be expected to offer enhanced stability with heavy lifting, carrying and pulling.

    Hall’s thigh (quadriceps) muscle size was more than twice that of untrained people, yet the tendon at the knee that connects to this muscle group was only 30% larger than in an untrained population.

    This finding indicates that, even in this case of extreme quadriceps development, muscle and tendon growth do not occur to the same extent.

    What do the results mean?

    The obvious implication is that the larger the relevant muscles, the greater the potential for strength and power.

    However, sports like strongman and even everyday activities like climbing stairs, carrying groceries and lifting objects off the ground require the coordinated activity of many stabilising muscles as well as major propulsive muscles such as the quadriceps.

    While Hall’s quadriceps were substantially bigger than untrained people, the largest relative differences occurred in the calves and the long thin “guy rope” muscles that help stabilise the hip and knee.

    These results pose a question about whether additional or more specific training for these smaller muscles may further enhance strength and power.

    This could benefit strongman athletes as well as everyday people.

    Also, the relatively small differences in tendon size between Hall and untrained populations suggest tendons do not grow to the same extent as muscles do.

    As muscular forces are transmitted through tendons to the bones, the substantially greater growth of muscle than tendon may mean athletes such as Hall have a greater relative risk of tendon than muscle injury.

    This view is somewhat consistent with the high proportion of tendinitis and strains reported in strength sport athletes, including strongman and weightlifters.

    Justin Keogh is the Associate Dean of Research, Faculty of Health Sciences and Medicine, Bond University, an exercise scientist and a former strongman competitor.

    Tom Balshaw is a Lecturer in Kinesiology, Strength and Conditioning at Loughborough University.

    ref. Scientists recently studied the body of one of the world’s strongest men. This is what they found – https://theconversation.com/scientists-recently-studied-the-body-of-one-of-the-worlds-strongest-men-this-is-what-they-found-238873

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Can Australia prosecute foreigners for genocide overseas? Here’s how our atrocity laws work

    Source: The Conversation (Au and NZ) – By Alister McKeich, Lecturer and Researcher in Law, Criminology and Indigenous Studies, Victoria University

    Shutterstock

    The onslaught in the Middle East has brought to the world’s attention once again the “crime of crimes”, genocide.

    Both the International Court of Justice and International Criminal Court (ICC) have brought allegations of genocide against Israel as a state and Israeli and Hamas leaders as individuals.

    The Australian government’s response to the Gaza crisis has included temporarily freezing A$6 million of funding to the United Nations Relief and Works Agency for Palestine. Though funding has been flowing again since March, Prime Minister Anthony Albanese has been referred to the ICC by a law firm for being “an accessory to genocide”.

    Against this backdrop, Australia’s own genocide legislation is under parliamentary scrutiny. A bill tabled by independent Senator Lidia Thorpe (for whom I work as a casual legal researcher) seeks to change the way Australia deals with genocide.

    So what do our current laws say and what’s the case for changing them?

    What do our laws say?

    Australia ratified the Genocide Convention in 1949.

    Yet it was not until 2002, once the ICC was established, that the Commonwealth Criminal Code was amended to create a new division of atrocity crimes.

    Through this legislation, Australia may prosecute any person accused of a Rome Statute crime (such as genocide) under Australian law.

    At the moment, written consent from the attorney-general is required before legal proceedings about genocide and other atrocity crimes can commence. This is called the “attorney-general’s fiat”.

    Further, the attorney-general’s decision is final. It “must not be challenged, appealed against, reviewed, quashed or called into question”.

    Thorpe’s bill seeks to overturn these two measures.

    The explanatory memorandum to the 2002 amendment did not say why the attorney-general’s consent was necessary.

    Consent from an attorney-general (or similar position) is not an international requirement.

    Australia is one of only a handful of countries (including the United Kingdom, New Zealand and Canada) where the fiat exists.

    Why is it a problem?

    The Australian government has justified the rule on the basis that prosecutions for atrocity crimes against individuals could affect Australia’s international relations and national security.

    However, submissions from legal experts and community groups to a senate inquiry looking at the issue point out flaws.

    They say this rule prevents access to justice for victims and survivors of atrocity crimes. It can also create the potential for government bias.

    Submissions also say the lack of explanation or appeal process ignores fundamental principles of jurisprudence.

    Has the rule been used?

    The attorney-general’s fiat has been used in a limited number of cases.

    In 2009, the Palestinian rights group Australians for Palestine issued a request for consent for the prosecution of former Israeli prime minister Ehud Olmert, who was visiting at the time.

    The Australian Centre for International Justice describes in its submission how then-attorney-general Robert McClelland denied the request. He cited matters of international state sovereignty and the difficulties of pursuing such a case in an overseas jurisdiction.

    Then, in 2011, Arunchalam Jegastheeswaran, an Australian citizen of Tamil background, sought the attorney-general’s consent for the prosecution of then Sri Lankan president Mahinda Rajapaksa, who was due to visit Australia.

    McClelland again denied the request, saying Rajapaksa was protected under “head of state immunity”. This concept is controversial in international law, given it’s often heads of state who commit atrocity crimes.

    Head of state protection was also offered to former Myanmar (Burma) leader Aung San Suu Kyi, who was in government when the 2017 genocide against the Rohingya was committed.

    With Suu Kyi due to be in Australia for an ASEAN conference in 2018, the Australian Rohingya community sought a prosecution. It was denied by then attorney-general Christian Porter.

    And in 2019, retired Sri Lankan general Jagath Jayasuriya visited Australia. Despite concerted efforts to gather evidence to prosecute Jayasuriya for war crimes, delays with the Australian Federal Police meant the case never reached the point of attorney-general consent.

    First Nations plaintiffs such as Paul Coe and Robert Thorpe have also sought to bring cases of genocide before the domestic courts, with no success.

    What would changing the laws mean?

    As it’s unlikely an attorney-general would consent to prosecutions against their own government, submissions to the inquiry argue the rule creates a direct conflict of interest.

    For First Nations people seeking justice for crimes of “ongoing genocide” perpetrated by the Commonwealth, any government is hardly going to rule in their favour.

    Some Indigenous community groups argue the high rates of First Nations children in protection, deaths in custody, hyper-incarceration and cultural, land and environmental damage amount to genocide crimes.

    Submissions to the inquiry recommend instead of requiring the consent of the attorney-general, claims of genocide should be directed to the Commonwealth Director of Public Prosecutions. This would ensure greater independence from government.

    The director has a mandate for this sort of work. It already investigates similar crimes such as people smuggling, human trafficking, slavery and child exploitation.

    Internationally, the implications of this bill, if passed, will be consequential. The Australian Centre for International Justice estimates up to 1,000 Australian citizens have returned to Israel to fight as part of the Israel Defense Forces. Israel has been accused of serious atrocity crimes in Gaza.

    Should any of those citizens return, there could be attempts to mount a case. The government would then have to consider Australia’s political and economic ties with Israel.

    Whether the bill is passed will depend on parliament. But the situation highlights a paradox: the state itself will be deciding whether to remove its own inbuilt protections against charges of genocide.

    Alister McKeich is a casual legal researcher with the office of Senator Lidia Thorpe.

    ref. Can Australia prosecute foreigners for genocide overseas? Here’s how our atrocity laws work – https://theconversation.com/can-australia-prosecute-foreigners-for-genocide-overseas-heres-how-our-atrocity-laws-work-236394

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Online spaces are rife with toxicity. Well-designed AI tools can help clean them up

    Source: The Conversation (Au and NZ) – By Lucy Sparrow, Lecturer in Human-Computer Interaction, The University of Melbourne

    MMD Creative/Shutterstock

    Imagine scrolling through social media or playing an online game, only to be interrupted by insulting and harassing comments. What if an artificial intelligence (AI) tool stepped in to remove the abuse before you even saw it?

    This isn’t science fiction. Commercial AI tools like ToxMod and Bodyguard.ai are already used to monitor interactions in real time across social media and gaming platforms. They can detect and respond to toxic behaviour.

    The idea of an all-seeing AI monitoring our every move might sound Orwellian, but these tools could be key to making the internet a safer place.

    However, for AI moderation to succeed, it needs to prioritise values like privacy, transparency, explainability and fairness. So can we ensure AI can be trusted to make our online spaces better? Our two recent research projects into AI-driven moderation show this can be done – with more work ahead of us.

    Negativity thrives online

    Online toxicity is a growing problem. Nearly half of young Australians have experienced some form of negative online interaction, with almost one in five experiencing cyberbullying.

    Whether it’s a single offensive comment or a sustained slew of harassment, such harmful interactions are part of daily life for many internet users.

    The severity of online toxicity is one reason the Australian government has proposed banning social media for children under 14.

    But this approach fails to fully address a core underlying problem: the design of online platforms and moderation tools. We need to rethink how online platforms are designed to minimise harmful interactions for all users, not just children.

    Unfortunately, many tech giants with power over our online activities have been slow to take on more responsibility, leaving significant gaps in moderation and safety measures.

    This is where proactive AI moderation offers the chance to create safer, more respectful online spaces. But can AI truly deliver on this promise? Here’s what we found.

    ‘Havoc’ in online multiplayer games

    In our Games and Artificial Intelligence Moderation (GAIM) Project, we set out to understand the ethical opportunities and pitfalls of AI-driven moderation in online multiplayer games. We conducted 26 in-depth interviews with players and industry professionals to find out how they use and think about AI in these spaces.

    Interviewees saw AI as a necessary tool to make games safer and combat the “havoc” caused by toxicity. With millions of players, human moderators can’t catch everything. But an untiring and proactive AI can pick up what humans miss, helping reduce the stress and burnout associated with moderating toxic messages.

    But many players also expressed confusion about the use of AI moderation. They didn’t understand why they received account suspensions, bans and other punishments, and were often left frustrated that their own reports of toxic behaviour seemed to be lost to the void, unanswered.

    Participants were especially worried about privacy in situations where AI is used to moderate voice chat in games. One player exclaimed: “my god, is that even legal?” It is – and it’s already happening in popular online games such as Call of Duty.

    Our study revealed there’s tremendous positive potential for AI moderation. However, games and social media companies will need to do a lot more work to make these systems transparent, empowering and trustworthy.

    Right now, AI moderation is seen to operate much like a police officer in an opaque justice system. What if AI instead took the form of a teacher, guardian, or upstander – educating, empowering or supporting users?

    Enter AI Ally

    This is where our second project, AI Ally, comes in: an initiative funded by the eSafety Commissioner. In response to high rates of tech-based gendered violence in Australia, we are co-designing an AI tool to support girls, women and gender-diverse individuals in navigating safer online spaces.

    We surveyed 230 people from these groups, and found that 44% of our respondents “often” or “always” experienced gendered harassment on at least one social media platform. It happened most frequently in response to everyday online activities like posting photos of themselves, particularly in the form of sexist comments.

    Interestingly, our respondents reported that documenting instances of online abuse was especially useful when they wanted to support other targets of harassment, such as by gathering screenshots of abusive comments. But only a few of those surveyed did this in practice. Understandably, many also feared for their own safety should they intervene by defending someone or even speaking up in a public comment thread.

    These are worrying findings. In response, we are designing our AI tool as an optional dashboard that detects and documents toxic comments. To help guide us in the design process, we have created a set of “personas” that capture some of our target users, inspired by our survey respondents.

    Some of the user ‘personas’ guiding the development of the AI Ally tool.
    Ren Galwey/Research Rendered

    We allow users to make their own decisions about whether to filter, flag, block or report harassment in efficient ways that align with their own preferences and personal safety.

    In this way, we hope to use AI to offer young people easy-to-access support in managing online safety while offering autonomy and a sense of empowerment.

    We can all play a role

    AI Ally shows we can use AI to help make online spaces safer without having to sacrifice values like transparency and user control. But there is much more to be done.

    Other, similar initiatives include Harassment Manager, which was designed to identify and document abuse on Twitter (now X), and HeartMob, a community where targets of online harassment can seek support.

    Until ethical AI practices are more widely adopted, users must stay informed. Before joining a platform, check if they are transparent about their policies and offer user control over moderation settings.

    The internet connects us to resources, work, play and community. Everyone has the right to access these benefits without harassment and abuse. It’s up to all of us to be proactive and advocate for smarter, more ethical technology that protects our values and our digital spaces.


    The AI Ally team consists of Dr Mahli-Ann Butt, Dr Lucy Sparrow, Dr Eduardo Oliveira, Ren Galwey, Dahlia Jovic, Sable Wang-Wills, Yige Song and Maddy Weeks.

    Dr Lucy Sparrow receives funding from the eSafety Commissioner’s Preventing Tech-Based Abuse Against Women grant program for the “AI Ally” project.

    Dr Eduardo Oliveira receives funding from the eSafety Commissioner’s Preventing Tech-Based Abuse Against Women grant program for the “AI Ally” project.

    Dr Mahli-Ann Butt receives funding from the eSafety Commissioner’s Preventing Tech-Based Abuse Against Women grant program for the “AI Ally” project.

    ref. Online spaces are rife with toxicity. Well-designed AI tools can help clean them up – https://theconversation.com/online-spaces-are-rife-with-toxicity-well-designed-ai-tools-can-help-clean-them-up-239590

    MIL OSI Analysis – EveningReport.nz