There’s now a large body of evidence showing that our diets – the foods we eat – can influence the outcome if we are unlucky enough to develop cancer.
Scientists are especially interested in how this happens – in particular, the cellular and molecular mechanisms behind these associations. Understanding those mechanisms would better inform nutritional recommendations and help us understand how cancer forms so we can prevent it.
Now, a study has identified a molecular link between linoleic acid, a common fat contained in cooking oils, and aggressive breast cancer, renewing the discussion about dietary choices and cancer risk. The findings, while significant, require careful interpretation to avoid unnecessary alarm and give useful guidance to the public.
Common fatty acid
Linoleic acid is an omega-6 fatty acid which is found in abundant quantities in soybean, sunflower and corn oils. Researchers at Weill Cornell Medicine in New York showed it can directly activate a growth pathway in triple-negative breast cancer cells – a type of breast cancer especially known for its aggressiveness and limited treatment options.
Triple-negative breast cancer makes up about 15% of all breast cancer cases, but because breast cancer is so common, this still affects a lot of people. The researchers found that linoleic acid binds to a protein called FABP5 (fatty acid-binding protein 5), which is present at high levels in these cancer cells.
This binding triggers the mTORC1 pathway – a critical regulator of cell growth and metabolism – fuelling tumour progression in preclinical research, including animal studies. My current research focuses on this pathway in a variety of normal and cancer cells.
In the new study, mice fed a high linoleic-acid diet developed larger tumours, suggesting dietary intake may exacerbate this cancer’s growth. There was a link to people too: elevated FABP5 and linoleic acid levels were detected in blood samples from triple-negative breast cancer patients, strengthening the biological plausibility of this link. Dr John Blenis, the senior author of the paper, said:
This discovery helps clarify the relationship between dietary fats and cancer, and sheds light on how to define which patients might benefit the most from specific nutritional recommendations in a personalised manner.
It’s also possible that the implications extend beyond triple-negative breast cancer to other tumours, such as prostate cancer.
Linoleic acid is an essential fatty acid, so it must be obtained from food. It plays a role in skin health, cell membrane structure and inflammation regulation. However, modern diets, which are rich in processed and ultra-processed foods and seed oils, often provide excessive omega-6 fats, including linoleic acid, while lacking omega-3s, which are found in fish, flaxseeds and walnuts.
The new study suggests that linoleic acid may directly drive cancer growth in specific contexts. This challenges earlier observational studies that found no clear association between dietary linoleic acid and overall breast cancer risk. For example, a 2023 meta-analysis of 14 studies involving over 350,000 women concluded that linoleic acid intake had no significant effect on breast cancer risk in the general population.
The discrepancy highlights the importance of researchers looking specifically at cancer subtypes, as well as at individual factors such as FABP5 levels in the tumours themselves. Another study showed that linoleic acid was protective against breast cancer, which demonstrates why it’s important to consider such findings in context.
Don’t panic
Media headlines can often oversimplify complex research. While this new study highlights a plausible mechanism linking linoleic acid to cancer growth, it does not prove that cooking oils cause breast cancer – far from it. Other factors, such as genetics, overall diet and environmental exposures, play significant roles.
The findings do not warrant blanket avoidance of seed oils, but they do suggest moderation and selectivity, especially for high-risk individuals. Many oils, such as olive oil, contain less linoleic acid and more monounsaturated or saturated fat, which are more stable at high heat.
A recent study that comprehensively analysed eating habits over 30 years showed that diets rich in fruits, vegetables, whole grains, nuts and low-fat dairy products were linked to healthy ageing. In that study, the Harvard team followed more than 100,000 people between 1986 and 2016. Fewer than 10% of respondents achieved healthy ageing, defined as the absence of 11 major chronic diseases and no impairment in cognitive, physical or mental function by the age of 70.
Organisations like the World Cancer Research Fund emphasise that moderate use of vegetable oils is safe and that obesity, not specific fats, is the primary dietary driver of cancer risk.
This study, then, underscores the importance of contextualising dietary fats in cancer research. While linoleic acid’s role in triple-negative breast cancer is a critical discovery, it’s one piece of a vast puzzle. A balanced, wholefood diet remains an important cornerstone of cancer prevention, and a strategy everyone can adopt.
Justin Stebbing does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Nicolas Forsans, Professor of Management and Co-director of the Centre for Latin American & Caribbean Studies, University of Essex
Daniel Noboa has been re-elected as president of Ecuador with a margin that has surprised most observers. Just weeks before the April 13 runoff, polls had him neck and neck with his left-wing rival, Luisa González. In the end, Noboa secured about 56% of the vote against González’s 44%, a difference of more than 1 million votes.
The victory gives Noboa, a 37-year-old businessman and political outsider, a full four-year mandate. Noboa won a shortened presidential term in November 2023 in a snap election called when his predecessor, Guillermo Lasso, dissolved congress in an attempt to escape impeachment.
It also marks the third consecutive presidential defeat for the movement led by former president Rafael Correa, whose influence remains polarising in Ecuadorian politics.
González is, at the time of writing, refusing to concede, claiming “grotesque” electoral fraud. “I refuse to believe that the people prefer lies over the truth”, she has said. But she has presented no evidence to support the allegation.
International observers, including the EU and the Organisation of American States, have confirmed the elections were free and fair. In the absence of proof, the fraud claims appear to be more political theatre than a real challenge to the integrity of the vote.
Political scion to dominant incumbent
Noboa’s campaign leaned heavily on security – a theme that has come to dominate Ecuadorian public life as the country grapples with record levels of violence. Since assuming the presidency in 2023, Noboa has governed under a permanent state of emergency.
He declared an “internal armed conflict” in early 2024, deployed the military in prisons and on the streets, and launched a wide-ranging security plan called Plan Fénix. This plan includes building a new maximum-security prison in the coastal province of Santa Elena modelled on El Salvador’s much-criticised approach to curbing violence.
Initially, these measures won Noboa widespread support. But the picture soon darkened. January 2025 was Ecuador’s most violent month on record, with 781 homicides. Criminal groups remain entrenched in the country’s port cities and prisons. And human rights organisations have raised serious concerns about arbitrary arrests, the excessive use of force, and the militarisation of civilian life.
Despite these setbacks, Noboa’s message of strength and order clearly resonated with voters. Ecuadorians, exhausted by spiralling violence, appear willing to accept more authoritarian governance in exchange for safety. This is a trend seen across the region, from President Nayib Bukele’s 2024 re-election in El Salvador to rising approval for militarised policing in Brazil, Honduras and Mexico.
The challenges Noboa now faces are daunting. The most pressing is Ecuador’s descent into organised crime and narco-violence. Situated between Colombia and Peru, the country has become a major transit hub for cocaine bound for the US and Europe. Powerful international cartels have partnered with local gangs, and the state has lost control over large swaths of territory.
In response, Noboa has not only empowered the armed forces but has also sought international assistance. In 2024, he met with Erik Prince, the founder of Blackwater, a controversial US private military contractor. This raised concerns about the outsourcing of Ecuador’s security and its implications for human rights. He has also floated the idea of hosting foreign troops in Ecuador, a proposal that would require a constitutional amendment.
But militarised solutions alone did not bring an end to violence during Noboa’s first term, nor are they likely to succeed in his second.
Ecuador’s security crisis is not just a matter of policing – it is a crisis of state capacity. The judiciary is riddled with corruption, prisons have become centres of criminal coordination, and police officers are often outgunned and underpaid. Without reforming these institutions, Noboa’s war on crime risks becoming a war without end.
At the same time, Ecuador’s economy is faltering. In 2024, the country fell into recession, with GDP contracting and inflation rising. Ecuador is reliant on hydropower for its electricity generation, and a historic drought that year caused blackouts lasting up to 14 hours a day. This revealed years of under-investment in infrastructure.
In response, Noboa raised VAT, cut fuel subsidies, and secured a US$4 billion (roughly £3 billion) loan from the International Monetary Fund. These unpopular measures provoked grumbling but not mass protests, a fact some analysts attribute to exhaustion rather than approval.
Inequality remains high, especially for young people and those living in rural and coastal regions. Unemployment and underemployment affect nearly half of the working-age population, and around one-third of Ecuadorians live in poverty. Noboa has announced new cash transfers and youth employment programmes, but these are palliative, not structural.
To make matters worse, Noboa governs with limited support in the National Assembly. His party, Acción Democrática Nacional, holds 66 of the chamber’s 151 seats – one fewer than González’s Citizen Revolution.
The Indigenous Pachakutik party controls a crucial bloc of nine seats, but is itself internally divided. Passing legislation will require building coalitions and compromising. These are skills that Noboa has yet to demonstrate at scale.
Noboa’s credibility has also been challenged. His family’s banana export company, Noboa Trading, has been linked to multiple drug seizures in Europe. While there is no evidence implicating Noboa directly, the revelations raise uncomfortable questions about the president’s anti-drug narrative and potential conflicts of interest.
Towards democratic reform
Noboa’s victory gives him an opportunity, but not a blank cheque. His success will now depend on whether he can pivot from ruling by decree to governing by consensus. The public expects results: less violence, more jobs and greater political stability.
To meet these expectations, he will need to restore the rule of law, protect human rights and build inclusive institutions capable of resisting criminal capture. This means professionalising the police, strengthening the judiciary and tackling the deep inequalities that fuel violence and despair.
It also means stepping back from theatrical gestures, such as alliances with foreign mercenaries, and focusing on the slow, often frustrating work of state-building.
In the coming months, Noboa will face a simple but profound test: can he translate his electoral mandate into real, lasting progress for a country on the edge? Ecuador’s future may depend on the answer.
Nicolas Forsans does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The UK government has announced £65 million in funding for a new system called Borealis which is intended to help the UK military defend its satellites against threats. Borealis is a software system that collates and processes data to strengthen the UK military’s ability to monitor what’s going on in space.
The government’s investment, announced on March 7, underlines the increasingly critical role played by space systems in the modern world. Space services play a key role in managing critical infrastructure such as the energy grid, transport systems and communications networks.
For example, SpaceX’s Starlink system has been vitally important for communication on the battlefield during Ukraine’s war with Russia. It is just one example of the game-changing potential of satellite-based services.
The investment in Borealis also shows that the UK government is taking the threat to space systems increasingly seriously. As long ago as 2019, senior US officials warned that space could no longer be considered a “benign environment”.
In 2021, a US general claimed that states were constantly conducting attacks on satellites, including jamming and cyber-attacks. Announcing the Borealis system in 2025, Major General Paul Tedman, the commander of UK Space Command, characterised space as “increasingly contested”.
As the international order comes under increasing pressure, nations are engaging in more combative behaviour – not just in space, but also in cyberspace and under the seas.
A space system is composed of four parts – traditionally called segments. These include the space segment (satellites and other spacecraft), the ground segment (ground stations, control rooms), and the user segment (a signal receiver, for example). Communications between these parts of the system form what’s called the link segment.
In addition to intentional attacks, satellites can also experience problems from physical collisions with orbiting debris, from cosmic radiation and from solar activity, all of which can interfere with onboard systems. For satellites, security against attacks has often been a secondary consideration. It was hard enough to build a system that could survive in space without introducing the additional costs and challenges of securing it against attacks from adversaries.
Addressing threats to assets in space will require an all-encompassing approach, as I have argued in a recent report. First, security needs to cover all four segments of space infrastructure. The easiest way to interrupt a space system might be to target the ground or the user segment, rather than trying to interfere directly with a satellite.
Second, security needs to be considered across the life cycle of the system, from design and construction, through launch, to operations and application. Consider, for example, if the detailed specifications of a satellite have already been leaked to a malicious party. That might provide them with an in-depth understanding of how to attack the spacecraft – and in such a way that may be difficult to defend against without going back to redesign it.
This type of issue was less of a problem when satellites were developed almost entirely by government agencies and large aerospace companies. With the ongoing expansion of the commercial space sector, start-ups and new entrants may not have the same approach to security as more seasoned organisations.
Third, security needs to include the whole range of threats facing space infrastructure, of which a satellite is just one part. We must therefore consider the physical security of hardware, information security, cybersecurity, the personnel working on the project, and supply chain security.
Vulnerable to sabotage
The range of threats facing space systems parallels those facing other critical systems, such as underwater telecommunications cables. There have been several recent incidents of subsea cables being cut in the Baltic Sea, for example. There is also at least one reported instance of hackers burrowing deep inside core telecommunications networks.
It is becoming painfully clear that much of the infrastructure underpinning the economy and our daily lives is fundamentally insecure. Determined attackers are increasingly operating across both the physical world and cyberspace.
Retrofitting security onto space systems is technically challenging and hugely expensive. There are also tough policy questions here. Governments simply do not have the resources or the legal powers to act alone on this issue. Neither is it clear that the private sector will voluntarily commit to higher security standards and a vast programme of investment in existing infrastructure.
Another issue is the global nature of space systems: differing security regulations make it challenging to ensure a coordinated approach to infrastructure across states.
This underscores the importance of raising public awareness around the scale and scope of threats to space systems – and making clear what the impact would be on the public if this infrastructure ceased to operate. If governments are going to invest more in securing space systems, then people will need to understand why this is critical.
However, the challenge of reverse engineering security into the complex and rapidly expanding network of space systems may ultimately be beyond the resources and appetites of governments and companies.
If that is the case, then in addition to raising awareness around security risks, governments and other organisations should also consider efforts to increase the resilience of space systems to attacks. In addition to thinking about how to better secure our space infrastructure, it may be prudent to consider how we might live without it.
Jessie Hamill-Stewart does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment. Dr Neil Ashdown contributed to this article. He is the head of research at Tyburn St Raphael.
For many people in the UK work is changing: how we work, what we do and where we do it. The change is faster for some than it is for others – and it’s not always changing for the better.
A new national survey – organised and managed by my colleagues and me – paints a mixed picture of UK working life. What makes the Skills and Employment Survey 2024 unique is that it is the eighth in a series that stretches back to the mid-1980s.
The survey focuses on people’s working lives: what skills they use, how and where they work, and what they think of their job. The data series consists of interviews with nearly 35,000 workers with around 5,500 taking part in 2024.
Some people have good things to say about the way their working lives have changed. Other people’s work lives are not improving. For many of us, it’s a bit of both.
Good news
One piece of good news is that very few workers regard their jobs as having no value. Contrary to estimates by some scholars that around 40% of people “find themselves labouring at tasks which they consider pointless”, our survey suggests that only 5% of respondents think that their job is meaningless and has no value.
So-called “bullshit jobs” are rare. Instead, nearly 70% reported their jobs gave them a sense of achievement either always or most of the time, while 76% said that their work was useful.
Work is becoming more skilled too. In 2024, 46% of workers reported that they would need a graduate level qualification if they were to apply for their current job today. This is up from 20% in 1986.
A further piece of good news is that the rate of over-qualification has declined. In 2024, 35% of workers reported that they held qualifications higher than those currently required for their jobs, compared with 39% in 2006.
The job quality gender gap is narrowing. The pay gap has fallen steadily, and the gaps in the physical environment of work, in working time quality and in job skills have also narrowed. For example, the proportion of men who reported that their health or safety was at risk from their work declined from 38% in 2001 to 21% in 2024, while among women it has remained stable at 22%.
Bad news
However, all is not well in the world of work. Workplace abuse is common – 14% of UK workers have experienced bullying, violence or sexual harassment at work. The risk of abuse is much higher for women, LGBTQ+ workers, nurses, teachers and those who work at night.
One of the most striking findings of our survey is the large fall in the ability of employees to take decisions about their immediate job tasks. In 2024, 34% of employees said they had “a great deal of influence” over which tasks they did, how they did them and how hard they worked. This is down from 44% in 2012 and 62% in 1992.
The mechanisms for greater worker control have grown over time, but this has not translated into greater control at an individual level.
Mixed news
Another striking, if unsurprising, finding is the growth in the number of people working from home. But the long-running nature of the shift may come as a surprise. The survey shows that the growth of hybrid working started back in 2006, well before the term became fashionable.
The survey also sheds light on where within the home people work. It shows that 45% can insulate themselves from others in the household by creating a home office. The rest must make do with the kitchen table, the sofa or the corner of a room.
After years of declining trade union membership, the survey shows that the tide may finally have turned. Membership levels have plateaued, and rates of union presence in the workplace and union influence over pay increased between 2017 and 2024.
A rising proportion of trade union members also say their union has a great or fair amount of influence over how work is organised – up from 42% in 2001 to 51% in 2024.
Technological change brings challenges as well as benefits. The survey found that digital technology played a role in nearly all jobs, with 78% of workers considering computers “essential” or “very important” in their jobs, up from 45% in 1997.
The share of AI users surged during the period of data collection, indicating its rapid adoption. But there are few signs that it is displacing workers, at least for the time being.
Regular monitoring of all the issues raised here – and many besides – is only possible if regular and robust surveys such as the Skills and Employment Survey are carried out. These are invaluable components of our knowledge infrastructure which must be treasured, protected and supported if we are to accurately assess how the world of work is changing.
Alan Felstead receives funding from a range of organisations. The Skills and Employment Survey 2024 is funded by the Economic and Social Research Council, the Department for Education, and the Advisory, Conciliation and Arbitration Service, with additional funding from the Department for the Economy to extend the survey to Northern Ireland (ES/X007987/1).
Source: The Conversation – UK – By Rich Grenyer, Associate Professor in Biogeography and Biodiversity, University of Oxford
One of the biotech company’s ‘dire wolves’. Colossal
With wildlife populations globally 73% smaller on average than in 1970 and large mammals missing from much of the world, surely there’s never been a better time to “de-extinct” species? US biotech company Colossal Biosciences Inc recently claimed to do just that by resurrecting the dire wolf from Game of Thrones (a species that also lived in our world until several thousand years ago).
The potential seems huge. A species in trouble? Get a high-quality genome and you’ve made it a save game point, ready to replay when the environment improves. Didn’t get there in time? Never mind – you can use frozen remains in the permafrost, or shotgun-blasted specimens in a museum collection. And pretty soon, even if you don’t have those, a dose of generative AI and you can probably infer some of that genome anyway. A little genetic engineering and you have a species back from the dead, ready to go.
What’s the problem? Well, pretty much everything. These aren’t species returned from extinction. They aren’t going to be very useful, and in fact may well not survive at all. Most worrying of all, like the Freys and Boltons hidden in the hall before the Red Wedding, it’s the ethos of de-extinction hidden in these “dire wolf” puppies that will likely do the most damage to biodiversity if it establishes itself.
Extinction has not been reversed
The dire wolf was a very large carnivore that lived in the Americas until about 10,000 years ago. Anatomically, it resembled a big, muscular, extra-toothy grey wolf: the species alive today that everyone thinks of when they say “wolf”.
The two pups revealed by Colossal Biosciences are not dire wolves. They are grey wolves, with 14 genes modified to produce an animal that resembles what we think a dire wolf looked like. Actually, only one of the 14 was a gene directly from a dire wolf specimen – the others were gene variants from existing grey wolf populations chosen to give physical features that made the engineered wolves bigger and whiter.
Over time, gene editing technology could increase the possible number of genes that can be engineered into a host species, and increase the complexity of the traits being inserted. But it’s not species being revived, it’s a few of their characteristics being borrowed by a species from today. It’s like claiming to have brought Napoleon back from the dead by asking a short French man to wear his hat.
The argument for this kind of genetic engineering revolves around the notion that the new hybrids might be useful for environmental restoration. As a top predator, the dire wolf could in theory bring the same revolutionary changes to ecosystems that reintroducing grey wolves to Yellowstone national park in the US famously caused in the 1990s. In other words, a more complete ecosystem, with wolves checking the voracious appetite of deer such that more complex and biodiverse habitats rebound.
However, in ecosystems where the dire wolf would reign supreme, the grey wolf can very clearly fill the same role (just as it did in Yellowstone) without any of the unnecessary technology – if only people stopped trying to shoot grey wolves and to exempt them from endangered species legislation.
There’s also a problem that captive breeding programmes seeking to release endangered species into the wild regularly run up against: the new animals have little or no idea what to do or how to live in their new habitat.
Operation Migration, dramatised in the 1996 film Fly Away Home, saw a dedicated team of pilots teach endangered migratory birds how to traverse North America by having them chase microlight aircraft for thousands of miles. This is just one example of the intensive training necessary, and which is never guaranteed to be successful. It’s obviously more difficult to train apex predators by example – I will not be volunteering for the “intro to pack hunting” session.
No quick fixes
The term “de-extinction” is not just inaccurate in itself – it also seeks to diminish the inconvenient truth of the biodiversity crisis: we know what causes extinction, and it’s us.
Food systems have to destroy less habitat and use much less protein from animals, wild and farmed. Energy systems have to burn less carbon, so that there are fewer deaths among species (including ours) trying to adapt to higher temperatures and the changes they bring. To do both these things, our landscapes have to leave more space for nature and much of what remains must be used more efficiently to provide food, fuel and living space.
There are definite signs that we can make good on these promises: conservation does work, for humans and for other species.
But these changes require us to recognise that certain economic and political philosophies are no longer tenable. They require sacrifice by everyone and a willingness by rich people and countries to pay with money, trade policy, intellectual property rights and energy supply, so that many of the poorest people and countries can flourish while avoiding the environmental damage that those rich countries caused over their own histories.
What motivates people to cope with these changes is a desire for justice, a need to nurture, a drive to make things better and a recognition that while habitats can sometimes be restored, species extinctions are irreversible dead-ends that can only be avoided, never undone. That recognition is under threat.
The Trump administration is trying to defang the US Endangered Species Act. In the UK, a wholesale revision of legislation to prevent biodiversity loss has begun with the targeting of the habitat regulations, in preemptive defence of the government’s need to “build, build, build” in a desperate search for more economic growth. How useful would it be if the risk of extinction could be averted with a simple “don’t worry, we’ll pay to de-extinct it afterwards”?
There won’t be a dire wolf, and even if there were to be one, we’d have no idea what it was for (and neither would it). We’ll all pay for the mistaken belief that extinction is a solved problem, and that the business-as-usual global economy that has caused the sixth mass extinction is no big deal, because its casualties aren’t actually dead – just temporarily inconvenienced by an extinction that is no longer forever.
Rich Grenyer does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
SAN FRANCISCO, April 14, 2025 (GLOBE NEWSWIRE) — Octolane, provider of the first AI-driven CRM that automates sales actions and CRM updates, today announced an oversubscribed $2.6M seed round. Founded by two young immigrants, Octolane is challenging Salesforce and other legacy CRM providers. Investors include angels Brian Shin (one of the earliest investors in both HubSpot and Drift), Kulveer Taggar, Cindy Bi (CapitalX) and Dave Messina (Pioneer Fund). Y Combinator, Lan Xuezhao (Basis Set Ventures), and General Catalyst Apex also participated. Octolane will use the funds primarily to expand its team and to invest in the infrastructure needed to meet strong demand for its platform.
Traditional CRMs have become glorified databases that force sales teams to spend hours manually entering data after every customer interaction. Most reps hate using them because they create work rather than reducing it. Octolane reinvented what a CRM actually does, transforming it from a passive “System of Record” that demands constant manual updates into an intelligent “System of Actions” that predicts and executes the next steps needed to close deals. When reps log into Octolane in the morning, instead of a list of administrative to-dos, they see a list of actions already executed with recommendations for further steps reps can take to see deals progress. Reps can then spend their time actually selling.
“Octolane’s daily company updates on Twitter caught my attention so I went to visit their SF office on a Sunday,” said investor Cindy Bi. “About half an hour in, I decided to invest after learning about the founders’ journey to the U.S., talking about their ambitions, and checking their product demo. It’s obvious that Octolane has a very strong market pull from customers of all sizes that are eager to switch from Salesforce and HubSpot and that’s how an AI-first CRM should be: a system of actions, not just records. Each interaction I’ve had with the Octolane team boosts my confidence because you can tell that nothing can stop them from earning more customers from this $300B market cap opportunity.”
“What drew me to Octolane was their rare combination of customer obsession and extraordinary output,” said investor Taggar. “They’re constantly shipping improvements based on real user feedback. One and Rafi understand that retention is the true north star in this space, and they’ve been bold enough to tackle the CRM category with genuinely fresh thinking. Seeing fast-growing companies switch to their platform validates that this approach is exactly what the market has been waiting for.”
Octolane was co-founded by immigrants One Chowdhury and Md Abdul Halim Rafi – best friends since high school who taught themselves to code by watching YouTube tutorials.
Chowdhury was inspired to start a company after a visit to San Francisco. “I noticed Salesforce tower, and a friend told me, ‘It’s a big CRM software company that everyone hates,’” said Chowdhury. “I thought, if everyone hates them, why do they have the tallest building in San Francisco? I did some research and found that while Salesforce was viewed as very disruptive at its launch decades ago, it was now viewed as obsolete – AI has passed it by. I called Rafi, and we decided to build an AI-driven, sales-focused CRM from scratch that would eliminate the need for salespeople to manually update their CRM recordkeeping – something that typically eats up to two-thirds of a salesperson’s time, leading to late nights and lost time with family.” Chowdhury dropped out of Duke University (Class of 2025) to start Octolane.
Octolane, launched earlier this year, has 200 active customers with 5,000 more on a waitlist. Almost all are converts from Salesforce and HubSpot. One is Retell AI. “A CRM is critical for managing inbound volume, but most tools slow us down more than they help,” said Evie Wang, Co-Founder of Retell AI. “With Octolane, we finally found a system that just works. It automatically qualifies leads using AI, and the built-in calendar makes it seamless for high-intent leads to book meetings. We replaced 5–6 other fragmented tools and core HubSpot functionalities with Octolane, which saves thousands of dollars every month, plus deals close faster. Octolane feels like Rippling, but for CRM: everything we need in one place, finally working together. The Octolane team is one of the most reliable we’ve worked with, and the product has become a core part of how we grow.”
About Octolane

Octolane is the first AI-native Self Driving CRM that updates itself and takes action, so sales reps can spend less time on admin and more time closing deals. Hundreds of teams have already made the switch from HubSpot and Salesforce, trading clunky workflows for speed and AI automation. Backed by Y Combinator, General Catalyst Apex and prominent angels like Brian Shin, Kulveer Taggar and Cindy Bi, Octolane helps companies shorten sales cycles and increase win rates, letting reps do what they do best: sell. Learn more at octolane.com.
Media contact:
Michelle Faulkner, Big Swing
617-510-6998
michelle@big-swing.com
“We heard you, Albertans.” With those words, Alberta Energy Minister Brian Jean put coal mining in Alberta’s Rocky Mountains back on the table last December. Common sense might suggest Jean meant that Albertans are in favour of resuscitating metallurgical coal mining there, but that’s not the case.
Instead, the public strongly opposes reviving metallurgical coal mining — also known as coking coal mining — to supply Asian steelmakers. December’s Coal Industry Modernization Initiative sadly exemplifies what has become too common in politics today — using misinformation to try to win the public’s willingness to accept the unacceptable.
In this case, the government’s treatment of expert opinion compounds the misinformation. The government is blind to expert advice from the International Energy Agency (IEA) and the Australian government questioning the rosiness of metallurgical coal’s future.
Bringing coal miners back to Alberta’s Rockies was extremely contentious between 2020 and 2022. Jason Kenney’s Conservatives removed the de facto exploration and exploitation restrictions in place there since the 1970s. At the same time, Benga Mining Limited proposed to resume coal mining in southwest Alberta. Together, these events ignited a public furore.
Public opposition
Andrew Nikiforuk, a journalist whose books and articles focus on epidemics and the energy industry, was one of the first to bring coal miner ambitions to the public’s attention. He told me the outrage was “probably the most important environmental protest I have ever witnessed in this province.”
Benga’s Grassy Mountain project was summarily dismissed by government regulators in 2021. Eleven weeks before that decision, Alberta created the Coal Policy Committee. It consulted Albertans about the 2020 decision to invite coal miners to return to the Rockies.
The committee gave anyone with a view on coal — positive or negative — the opportunity to contribute to its deliberations. The response was impressive. The committee received nearly 4,400 pieces of correspondence and 176 detailed written submissions, and conducted 67 virtual and public meetings.
The consultation confirmed what polling firms had already found: “A significant number of respondents are apprehensive about coal development in Alberta.”
Albertans didn’t believe coal’s economic benefits justified its risks to landscapes and water quality. Only eight per cent of those who answered the committee’s survey question about the economic benefits of coal mining felt they were very important; 64 per cent regarded those benefits as “not important at all.”
This unambiguous public opposition repeated what the federal-provincial review panel into Benga’s Grassy Mountain coal mine proposal revealed in 2020-2021. Ninety-eight per cent of the more than 4,400 public comments left on the review panel’s website opposed the proposal to bring coal mining back to the Crowsnest Pass.
The committee also concluded that land-use planning, with public consultation, needed to take place before a decision could be made about permitting coal exploration in the Rockies.
Premier Danielle Smith’s government hasn’t listened. It doesn’t intend to conduct the land-use planning called for by the committee.
Jean has also said he will consult industry — and only industry — as he tries to get his new policy in place this year. He promised “targeted” engagement with coal industry stakeholders. The public and other interests will be mere spectators.
Growing global coal demand is a myth
Alberta’s coal initiative has an optimistic view of future metallurgical coal demand.
Jean markets his proposal by saying Alberta coal is needed “given the current and anticipated future global demand for coal.” But the IEA doesn’t share that optimism. Nor do experts from the Australian government, the world’s largest exporter of metallurgical coal.
The IEA’s annual coal report is a benchmark for understanding the medium-term global outlook for coal. Its most recent report projects metallurgical coal production will fall by 4.2 per cent from 2024 to 2027. The IEA’s 2024 World Energy Outlook predicted steelmaking coal production would fall over the next two decades as steelmakers reduce greenhouse gas emissions.
In 2050, it expects world coking coal production to drop 35.8 per cent from the 2024 level.
Australia’s pre-eminence comes from producing 46 per cent of global metallurgical coal exports. The Australian government’s March 2025 Resources and Energy Quarterly confirms the general thrust of the IEA’s analyses. A slight increase in the amount of steel produced without metallurgical coal “will likely result in a slight fall in global metallurgical coal demand through to 2030.”
The IEA makes it clear that Australian producers don’t intend to relinquish market share willingly. Forty-seven Australian coal projects are in the pipeline, with most focused on metallurgical coal or metallurgical/thermal coal combined. Three-quarters of Australia’s metallurgical coal exports feed the Asian steel industry.
Then there’s Mongolia. After its “recent extraordinary export growth” into China, Mongolia now supplies nearly half of China’s imports. The country is the world’s second-largest metallurgical coal exporter. Mongolia’s high-quality coal, proximity to China and improved rail infrastructure will make its production difficult to displace.
It’s unlikely, then, that new coal production from Alberta will gain easy access to Asian markets.
Alberta’s Coal Industry Modernization Initiative illustrates two dangerous trends in politics today — the refusal to heed both the public and experts.
The stakes here are large. Coal mining will undoubtedly have a substantial impact on the headwaters that serve people in Alberta, Saskatchewan and Manitoba. Smith’s Conservatives should in fact embrace common sense and the spirit of party policy from the 1970s: prohibit coal mining in Alberta’s Rockies.
Ian Urquhart does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
How human-AI agent teams will reshape your workforce
AI agents are already becoming powerful partners in knowledge work. Now, as more companies adopt them, they’re poised to reshape work dynamics. In this new reality, leaders will for the first time be able to add intelligence—once a scarce and costly resource—to their organization without increasing headcount. Soon, all businesses will operate with collaborative teams of humans and agents.
This evolution will require every leader to redefine how they think about their teams. Agents will be a true force multiplier—everyone from interns to the C-suite will become an “agent boss” who oversees their own constellation of agents that power business processes. One new imperative will be to find the optimal ratio of humans to agents for whatever task or project your teams are working on.
The upshot: where you once defaulted to relying on human intelligence you can strategically consider whether an agent should handle the task—unlocking scale like never before.
AI-enhanced teams outperform traditional ones
In March, a remarkable field study by Harvard and the University of Pennsylvania’s Wharton School showed how AI can boost performance for individuals and teams. Chronicled in a paper titled “The Cybernetic Teammate,” the experiment involved nearly 800 employees at P&G, the global consumer goods company. The employees were all asked to work on product innovation challenges, but some of them were given AI and others were not.
The study showed that individuals with AI performed as well as teams without it. Teams that used AI were significantly more likely to produce top-tier ideas than any other group. The same study showed how AI breaks down silos: without AI, R&D professionals suggested more technical solutions while commercial professionals leaned into their own expertise, whereas AI users produced balanced solutions regardless of their backgrounds.
These benefits address a critical reality. In times of economic uncertainty, leaders face pressure to drive growth with the current, or a reduced, headcount. Adding agents to the team will allow employees to hand off some of their routine work, relieving some of the pressure and enabling them to focus on higher-value tasks.
What human-agent collaboration looks like
Bringing agents into the mix promises a new dynamic for how individuals operate, with ripple effects for teams. Say you catch wind that a major client is thinking about leaving. You can tap an agent to wade into the data and quickly analyze what might be driving their decision, rather than pulling your human colleagues off whatever they’re working on to investigate.
Agents can research for you, provide expertise you don’t have, or code for you. In the firm of the future, every employee (and every team) will manage a pool of them, with the exact number varying depending on their goals and preferences. I’m starting to see this pattern emerge on my team. Alex Farach, a data scientist and researcher, is deep into a big project, and he’s using three agents to assist him. One agent goes online every day and scoops up relevant new research, another assists with statistical analysis, and a third drafts rich briefs that help him connect the dots.
This trio of personalized agents helps Alex get up to speed more quickly on the latest research and spend less time on coding related to data analysis. And it’s not just Alex who becomes more effective: this human-agent collaboration produces insights and outputs that benefit my team more broadly.
Managing a new team dynamic
It’s early days, and most employees aren’t proactively building agent teams to assist them. Managers can’t count on individual employees to make this shift on their own. You need to be intentional—and strategic—about adding digital labor to your teams. Focus on areas where agents can have immediate and substantial impact on your business, build those agents, and deploy them to your people, along with the training they need to work with agents and new workflows. And, importantly, share the results so employees across the organization can learn.
Down the line, you’ll need to start considering a new metric: the human-agent ratio. What’s the ideal balance for unlocking productivity? We expect the ratio to vary by task, process, and industry, but in each context it will be critical to find the right blend of digital labor and human judgment. Get it wrong, and you might miss out on the full value of AI or add AI overwhelm to your employees’ work challenges. Hit the sweet spot, and you unlock the kind of performance demonstrated in the P&G study.
The big picture: organizational impact
If you’re a new company starting from scratch, you have the advantage of designing your processes around human-agent teams from the ground up. Established companies, meanwhile, face the challenge of reinventing—instead of just retrofitting—entire processes to take advantage of what AI offers. And employees will need upskilling to make the most of their partnership with agents.
You’ll also need to redefine roles and responsibilities. You might need new roles for overseeing agentic resources: tracking performance, leading deployment, and monitoring the human-agent balance. In a very dynamic labor market, employees and leaders who emerge as effective “agent bosses” will likely get a leg up.
Many leaders tell me they’re being asked to do more with less. In a difficult economic context, agents can relieve some of the pressure on humans. By bringing agents on board you can simultaneously support your employees and create an infinitely scalable, adaptable organization—and start building the firm of the future.
Source: United Kingdom – Executive Government & Departments
A study published in JAMA Internal Medicine looks at CT scans and lifetime cancer risk in the USA.
Lynda Johnson, Professional Officer for Clinical Imaging and Radiation Protection, The Society and College of Radiographers, said:
“The Society and College of Radiographers (SoR) welcomes research into the harmful effects of ionising radiation and recognises the importance of balancing benefit and risk information to patients and the public.
“This paper articulates the complexities of large-scale dose estimation and acknowledges the many variables which influence an individual’s likelihood of developing cancer at some point in their lifetime. In the UK, the use of ionising radiation is governed by The Ionising Radiation (Medical Exposure) Regulations 2017 (The Ionising Radiation (Medical Exposure) Regulations (Northern Ireland) 2018). Central to the legislation and UK radiographic practice, as this paper rightly concludes, are the principles of justification and optimisation. Justification means that any exposures to ionising radiation for medical purposes must be demonstrated to provide a greater benefit than risk to the individual. Once justified, the exposure must be optimised, meaning that it is as low as reasonably practicable to provide the intended outcome, or answer the clinical question.
“Computed Tomography (CT) scans are undertaken by highly trained radiographers and nuclear medicine technologists who have met the educational and professional standards required to ensure all CT scans are appropriately justified and optimised. Considering the increased use of CT as an invaluable diagnostic tool, it is imperative that the risk of harm from potential misuse, poor quality referrals, or inappropriate exposure parameters continues to be managed effectively. This is achieved by safeguarding standards of education, training and practical experience, compliance with the regulations, and applying best practice quality standards such as The Quality Standard for Imaging.
“It is particularly important to recognise, as this paper highlights, the increased risk to children from unjustified CT exposures. Staff are trained to give special consideration to the justification and optimisation of CT scans for children and will assess the benefits and risks of using CT against alternative techniques that do not involve ionising radiation, such as MRI and ultrasound.
“Accurate communication around the benefits and risks of CT is essential to protect the public from harm. Focussing on risk alone is not helpful and, in some cases, might prevent a person from attending a scan that could provide early diagnosis of cancer. Anyone undergoing a CT scan must be provided with balanced, accurate and relevant information to enable them to understand what it means to them as an individual in terms of their diagnosis, treatment and potential long-term care.
“The UK Health Security Agency is responsible for undertaking dose audits and producing National Diagnostic Reference levels (NDRLs) for computed tomography. These inform local practices and employers must ensure their organisational doses do not consistently exceed the NDRLs. They are publicly available here alongside helpful dose comparisons here and benefit and risk information for patients here.”
Dr Doreen Lau, Lecturer in Inflammation, Ageing and Cancer Biology at Brunel University of London, said:
“This is a well-conducted modelling study using robust data from US hospitals and established methods for estimating cancer risk from radiation exposure. It provides a timely reminder that while CT scans are often life-saving and essential for diagnosis, they do come with a small but real potential risk of contributing to cancer over a lifetime, especially when used repeatedly, in younger patients, or when not clinically necessary.
“The findings don’t mean that people should avoid CT scans when recommended by a doctor. In most cases, the benefit of detecting or ruling out serious illness far outweighs the very small risk of harm. What this research highlights is the need to minimise unnecessary imaging and use the lowest dose possible, particularly in settings where CT usage is high. Where appropriate, clinicians may also consider alternative imaging methods that do not involve ionising radiation, such as MRI or ultrasound—especially for younger patients or when repeat imaging is anticipated.
“CT scan rates are much higher in the US than in the UK, where imaging is used more conservatively and with stricter clinical justification. That means the estimated risks in this study are likely to be much lower in the UK context, though the message about appropriate use still holds.
“Importantly, this study models estimated cancer risk from radiation exposure. It does not show a direct causal link between specific CT scans and individual cancer cases. These are projections based on population-level data and assumptions about radiation risk, not observed cancer rates. Although the model estimates a small increased risk with each scan, it does not prove that any one scan causes cancer. Other factors, such as underlying health issues and clinical decision-making, may also influence who gets scanned and how often.”
Prof Stephen Duffy, Emeritus Professor of Cancer Screening, Centre for Cancer Screening, Prevention and Early Diagnosis, Queen Mary University of London, said:
“This paper reports on a very high-quality numerical modelling exercise, estimating the likely number of cancers occurring in the USA as a result of 93 million CT examinations. The authors estimate that just over 100,000 cancers are predicted to occur as a result of radiation from these CT examinations. This amounts to around a 0.1% increase in cancer risk over the patient’s lifetime per CT examination. When we consider that the lifetime risk of cancer in the general population is around 50%, the additional risk is small. Doctors do not order CT examinations unless they are necessary, and it seems to me that the likely benefit in diagnosis and subsequent treatment of disease outweighs the very small increase in cancer risk.
“I would also remark that the estimates, while based on the best models available to the authors, are indirect, so there is considerable uncertainty about the estimates.
“Thus I would say to patients that if you are recommended to have a CT scan, it would be wise to do so.”
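For readers who want to check the arithmetic behind Prof Duffy’s figures, the roughly 0.1% per-examination estimate follows directly from the study’s headline numbers – a back-of-envelope calculation that assumes the projected cancers are spread evenly across all examinations:

$$\frac{100\,000\ \text{projected cancers}}{93\,000\,000\ \text{CT examinations}} \approx 0.0011 \approx 0.1\%\ \text{per examination}$$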
Dr Giles Roditi, Consultant Cardiovascular Radiologist and Honorary Clinical Associate Professor of Radiology, University of Glasgow, said:
“CT scanning is a powerful diagnostic tool and has become a bedrock of modern radiology departments, particularly for emergency department imaging. However, the paper by Smith-Bindman et al. is a timely reminder that with great power comes great responsibility. The paper makes the case that the rise in the utilisation of CT scanning is now at such a scale that its projected use could lead to a scenario in which CT-associated cancer eventually accounts for 5% of all new cancer diagnoses annually in the USA. What should we do with this information, and how does this translate to and inform practice in the UK?
“Firstly, the evidence base is sound and there is little new as regards the basic assumptions the paper is based upon, but the authors have updated these with more modern dose estimates and with data on the utilisation of CT scanning, not only across different age groups but also stratified by gender and by the exposure of different organs that have different sensitivities to ionising radiation damage. The authors are to be congratulated on the detailed breakdown of CT utilisation across these categories and on how lifetime risk of cancer varies across age and gender, as well as on the modern dosimetric approach used and the accounting for multiphase CT examinations that inevitably entail higher doses.
“With all medical endeavours there is an element of risk. Risk is generally defined as a situation involving exposure to danger or the possibility that something unpleasant will occur. Furthermore, the use of the word risk often implies an element of chance, uncertainty or unpredictability. However, risk can often be well defined in any particular context as:

Risk = (probability of an event) × (impact of the event)
“Risk is thus different for ‘well’ versus ‘sick’ patients, with the latter deriving greater benefit. This paper helps us better define risk at a population level by updating knowledge on the probable incidence of later CT-associated cancer. A potential limitation that could be levelled at the paper is that not all the risks associated with CT are included, only those related to the later development of cancer. For example, other relevant factors counting against CT scanning could include the very small risk of anaphylaxis related to the use of contrast medium, which is now used in a large proportion of scans in Western medicine. Similarly, other small but real risks, such as cataract acceleration, are not mentioned.
“On the other hand, while the authors mention that ‘CT is frequently lifesaving’, they have not in my opinion really put the information in full relevant context. The authors’ context is that approximately 5% of new cancer diagnoses could be attributable to CT – i.e. a figure of 100,000 cancers in a USA where there were 1,777,566 new cancer cases reported in 2021 and 608,366 cancer deaths in 2022 (the latest CDC data available). The natural lifetime incidence of cancer is about 1 in 2 for adults. Hence, an alternative way of looking at this would be that although the figure of 100,000 cancers is alarming, it represents only a small additional risk over and above an individual’s lifetime risk of developing cancer – i.e. a risk rising from about 50% to 52.5%. The authors also do not address how many of these cancers will be fatal, although based upon CDC data we might presume approximately one third.
“The main issue, however, is that the benefits of CT scanning are not more explicitly stated. This is likely because the benefits of most medical imaging in terms of morbidity and mortality have been very difficult to quantify, with surprisingly little published in the literature. This is mainly because imaging has too often only been part of an overall therapeutic strategy, where the main treatment outcomes depend critically upon the imaging but the imaging itself is not tested (e.g. treatments for stroke and cancer). However, there have been recent trials that provide some context; for example, SCOT-HEART was probably the first major trial in which diagnostic CT was shown to save lives. In SCOT-HEART, patients were randomised to a conventional treatment pathway without a CT scan or to an investigative arm in which the standard care pathway was simply supplemented by a CT scan of the coronary arteries. The trial showed clear benefit for those patients who had CT, with a significantly lower mortality rate, and this benefit has been shown to persist for up to 10 years following the end of the trial. Similarly, trials of lung cancer screening have now shown positive benefit from CT scanning in the detection of early, treatable-stage lung cancer in high-risk patients.
“So how does this translate into the situation in the UK? Firstly, there are significant differences in practice due to both cultural and legislative environments. In the UK we operate under the precepts of the Ionising Radiation (Medical Exposure) Regulations, last updated in 2017, which mandate that we apply the ALARA/ALARP principles and opt for the diagnostic imaging test with the lowest radiation dose – or preferably an imaging test with no ionising radiation exposure (e.g. ultrasound or MRI) – where this answers the clinical question. Culturally, in the UK we also regard all requests for imaging as just that: requests, which can be questioned through discussion. In the USA clinicians order scans, and radiology departments have little room to manoeuvre when it comes to not performing or changing these orders, particularly since the imaging fees that accompany the scanning activity are the lifeblood of the department. Another issue in the USA, in addition to the overuse of CT mentioned in the paper, is the repeat imaging often performed in a fragmented healthcare system, where it is easier (and more profitable) for an institution simply to repeat a scan on a patient referred in from elsewhere rather than to seek out and transfer the original scans.
“In the NHS we have systems that allow image transfer between institutions and, unlike the USA, we are very capacity-limited and often have long waiting times for scans. One side effect is that this tends to reduce demand, so tests unlikely to influence clinical decision-making are less likely to be requested. The downside is that the UK’s CT scanner base is ageing, and we know that older scanners inevitably expose patients to higher radiation doses than modern systems for the same type of scan, often with poorer image quality. Indeed, on modern-generation systems with advanced iterative reconstruction algorithms and AI enhancements in the imaging chain, CT scans can be acquired at doses similar to (or little more than) conventional x-rays. These advances have largely been spurred by the drive to reduce dose in coronary CT scans, but they potentially reduce doses across all CT scanning. The paper by Smith-Bindman et al. reminds us that we must advocate more strongly to upgrade our CT scanners for the benefit of our patients.
“So what would I say to a UK patient scheduled to have a CT scan and worried by this paper? In general terms I would strongly advise them not to worry, as they are highly likely to benefit from a well-indicated scan; this is particularly so for those who are unwell and for older patients (those over 55 years). Younger patients, particularly those of child-bearing age whose breasts and/or reproductive organs would be included in the scan, and those who are physically well, can always ask to discuss the merits of alternative tests such as ultrasound and MRI if they are concerned. For example, in our own practice we image all our altruistic potential living kidney donors with MRI rather than CT, since our own (unpublished) estimates indicate that if we used CT then 1 in 526 of these well people would develop a fatal induced cancer – a risk eliminated by using MRI.”
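To make the arithmetic in Dr Roditi's comment concrete, the short sketch below reproduces it using only the figures he quotes (the 100,000 projected cancers and the CDC case and death counts). It is an editorial illustration of the quoted numbers, not a calculation from the paper itself.

```python
# Editorial sketch: reworks the figures quoted above; not from the paper.
projected_ct_cancers = 100_000    # projected annual CT-attributable cancers, USA
new_cases_2021 = 1_777_566        # CDC: new US cancer diagnoses, 2021
cancer_deaths_2022 = 608_366      # CDC: US cancer deaths, 2022

# Share of new diagnoses that would be CT-attributable (~5.6%)
print(f"Share of new diagnoses: {projected_ct_cancers / new_cases_2021:.1%}")

# On these figures roughly a third of cancers prove fatal, implying
# on the order of 34,000 CT-attributable deaths per year.
fatality = cancer_deaths_2022 / new_cases_2021
print(f"Approximate case fatality: {fatality:.0%}")
print(f"Implied CT-attributable deaths: {projected_ct_cancers * fatality:,.0f}")
```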
Prof Richard Wakeford, Honorary Professor in Epidemiology, Centre for Occupational and Environmental Health (COEH), University of Manchester, said:
“Although it is not unreasonable to reiterate guidance on the potential risks to health arising from exposures to low levels of ionising radiation, such as the x-ray doses received from CT scans, considerable caution is required in providing quantitative estimates of the effects produced by such exposures. This is largely because of the substantial assumptions that must be made in applying risk models derived from epidemiological studies of populations briefly exposed to moderate and high doses, primarily the Japanese survivors of the atomic bombings of Hiroshima and Nagasaki, to low-level exposure circumstances. For example, for the purposes of radiological protection, it is prudent to assume that the size of the additional risk is directly proportional to the dose received, with no threshold dose below which the risk is zero, and this is the assumption made by the International Commission on Radiological Protection (ICRP) in making its recommendations. However, ICRP notes that these assumptions “conceal large biological and statistical uncertainties”, and cautions against risk projections based on large numbers of people receiving low doses.
“The direct epidemiological investigation of cancer incidence among patients who have been examined by CT is a worthwhile exercise, but substantial care is required in the interpretation of results – as with all medical diagnostic procedures, people are examined because they are ill, have been ill, or are suspected of being ill, and such selection for exposure leads to difficulties in obtaining reliable conclusions about the effects of radiation exposure from these studies.
“The “bottom line” of the paper is that ~103,000 cases of cancer (which does not include cases of non-melanoma skin cancer, lymphoma, or multiple myeloma) are estimated to result from CT scans conducted in the USA in 2023, an estimate that must be viewed with circumspection. This estimate of ~103,000 cases of cancer is, on the face of it, rather alarming, but it is also uncertain, to an extent that extends (well) beyond the uncertainty limits presented in the paper. ICRP emphasises that all medical exposures must be justified as doing more good than harm, and the potential risk from radiation exposure during a diagnostic examination clearly needs to be factored into clinical judgement about the need for a specific diagnostic procedure. The level of potential risk posed by exposure to low doses of radiation should be taken into account in reaching a balanced decision on whether or not a CT scan is clinically desirable, but this judgement should not be unduly influenced by large, but uncertain, projected numbers of cancers.”
‘Projected Lifetime Cancer Risks From Current Computed Tomography Imaging’ by Rebecca Smith-Bindman et al. was published in JAMA Internal Medicine at 16:00 UK time on Monday 14 April 2025.
DOI: 10.1001/jamainternmed.2025.0505
Declared interests
Prof Stephen Duffy: I have no conflict of interest.
Dr Giles Roditi: Dr Roditi is a Past President of the British Society of Cardiovascular Imaging/Cardiovascular CT, a Past President of the Society of Magnetic Resonance Angiography and a member of the SCOT-HEART investigators.
Prof Richard Wakeford: “I am, or was, a member of a number of national and international expert committees addressing radiation risks, such as ICRP, UNSCEAR and (previously) COMARE, SAGE, etc. Details can be found at: https://research.manchester.ac.uk/en/persons/richard.wakeford”
Source: United Kingdom – Executive Government & Departments
A meta-analysis published in Nature Human Behaviour looks at technology use and cognitive aging.
Dr Davide Bruno, Reader in Psychology, Liverpool John Moores University, said:
“A lot of variables are controlled for in this study, and the results are promising, but a lot of our cognitive resilience may well be genetically determined, which could also lead to greater ease with using technology. The authors do an excellent job of pointing out the limits of their study and acknowledging that there is more work to do. For example, what type of digital activities are better for our brain? This is a well-done study tackling a timely issue. The authors are careful in their conclusions.”
Dr Leah Mursaleen, Head of Clinical Research at Alzheimer’s Research UK, said:
“This large-scale analysis reviewed over 50 published studies from around the world to try to unravel the link between use of digital tech and cognitive ability. This study challenges previous research that has suggested digital technology could reduce cognitive function as we age and instead suggests that use of technology may be linked to lower rates of cognitive decline in older adults.
“With technology now embedded in our daily lives, it’s encouraging to see that using digital tools like computers, smart phones and the internet could be linked to better brain health in later life. However, it’s important to note that this analysis could not include measures of physical changes happening in the brain or consider the age that people were first exposed to digital tech.
“Although the authors explore possible reasons as to why the use of digital tech may promote better cognitive function, more research is needed to understand the relationship further, especially in people who are the first generation to grow up with these advances.”
‘A meta-analysis of technology use and cognitive aging’ by Jared F. Benge et al. was published in Nature Human Behaviour at 16:00 UK time on Monday 14 April 2025.
DOI: 10.1038/s41562-025-02159-9
Declared interests
Dr Davide Bruno: None
For all other experts, no reply to our request for DOIs was received.
Dr Martin McMahon, a leading expert in 3D printing, has been selected by the Royal Society as one of its Entrepreneurs in Residence.
Dr McMahon, who will lead the cutting-edge Additive Anglia project at Anglia Ruskin University (ARU), is one of just 15 business leaders, entrepreneurs and scientists from across the UK to have been selected for the prestigious scheme.
The Royal Society’s Entrepreneur in Residence programme aims to embed industry expertise within universities, improving awareness of the latest research and development advances while also addressing some of the scientific challenges faced by businesses.
In addition to his role at ARU, Dr McMahon is an independent consultant specialising in additive manufacturing, which is commonly referred to as 3D printing.
ARU’s new Additive Anglia project will integrate 3D printing technologies into the university curriculum and establish a 3D printing hub in the East of England.
The initiative involves forming a network with other universities in the region to allow easier access to these technologies for both academic and industry partners. The project also aims to enhance the quality of 3D printed parts, accelerate build rates, and minimise scrap rates.
“I’m honoured to receive the Entrepreneur in Residence award from the Royal Society. ARU’s Additive Manufacturing facilities are exceptional, and I intend to expand their use, raise awareness of the possibilities of 3D printing right across the university, and strengthen our connections with local industries and other universities.
“Over the past five years, 3D printing has become much more widely recognised and is now firmly in the public consciousness. The Additive Anglia project will establish ARU as a true centre of excellence for 3D printing, opening up this technology to various sectors and scales of business, including small and medium-sized enterprises.”
Dr Martin McMahon
“I am delighted to welcome Martin to the University and am excited about how we can apply additive manufacturing across so many different disciplines. Crucially, ARU’s engineering students will also be graduating with the latest knowledge and skills needed by industry, meaning they continue to be employment-ready.”
Mark Tree, Head of the School of Engineering and the Built Environment, ARU
Source: Saint Petersburg State University of Architecture and Civil Engineering – The Bridge of the Pobediteley
As part of the international competition “Macaroni Builder”, students studied theory, made calculations, and designed and built towers and bridges from pasta. The competition was held at SPbGASU for the fifteenth time, with 26 teams taking part from six Russian regions as well as from Armenia and the Republic of Belarus.
The competition grows in popularity every year, and its participants demonstrate a high level of skill. This is no surprise: along the way, contestants develop practical skills in structural mechanics, architecture and other professional disciplines, exchange experience, ideas and knowledge, and exercise their creativity.
The competition was attended by teams from the National University of Architecture and Construction of Armenia, Polotsk State University named after Euphrosyne of Polotsk (Republic of Belarus), Astrakhan State University of Architecture and Civil Engineering, Tyumen Industrial University, Nizhny Novgorod State University of Architecture and Civil Engineering, Belgorod State Technological University named after V.G. Shukhov, Tver State Technical University, Emperor Alexander I Petersburg State Transport University, SPbGASU, Lyceum No. 387 named after N.V. Belousov, School No. 518 of Vyborg District, Schools No. 106, No. 246 of Primorsky District, School No. 69 of Kalininsky District, School No. 87 of Petrogradsky District of St. Petersburg.
“The competition’s long history confirms that it is possible to build successful structures from pasta while developing the competencies that future architects and builders need in their professional work. Participants learn from their own practice which structures work and which do not, see their mistakes, and remember successful examples. That is why participation matters as much as victory,” said Andrey Nikulin, Dean of the Construction Faculty of SPbGASU.
The competition has two categories: “The tallest structure. Towers” and “Structure with the largest span”. For the tower bases, participants are provided with 100×100 cm stretcher frames, and tables serve as supports for the bridges. In addition to pasta, sculptural plasticine is used. Four evening hours are allocated for construction, and the structures must remain standing until 9 am the following day. The jury also takes into account structural soundness, expressiveness and the rational use of materials.
Team of Belgorod State Technological University
The anniversary competition was marked by a first in its history: one team won both categories. The team from Belgorod State Technological University named after V.G. Shukhov (Veronika Smulyarova, Aleksey Dmitriev, Ivan Pakhomov, Aleksandr Barelsky, Anna Migulina) was crowned champion twice over.
“We took part in the competition for the third time and have gained good experience. We also trained hard to become professionals in our field, and this victory is the result of years of work. This year we built a copy of the Guangzhou TV tower in China: it is shaped like an hourglass and erected in an interesting way, which is what attracted us. Our bridge is a cable-stayed suspension bridge dedicated to the 80th anniversary of the Victory in the Great Patriotic War: on one side we installed the paper numbers 1945, on the other, 2025. Each structure used just 800 grams of pasta – a rational use of materials,” said Aleksandr Barelsky.
The team from Tyumen Industrial University (Dmitry Bakunin, Artemy Vasiliev, Ilya Moskaev, Nikita Leskov, Arseny Naidanov) also achieved double success, taking second place in both nominations.
In the “Structure with the largest span” category, third place went to the team from the National University of Architecture and Construction of Armenia (Vladimir Safiridi, Movses Hakobyan, Gevorg Yusisyan, Tigran Tevosyan, Nikolay Harutyunyan).
“First of all, I would like to thank the organizers of the Macaroni Builder. Competitions like this are a form of training in which students assemble what they have designed with their own hands and see in practice how a structure works and why it fails – a genuinely useful thing to see. The competition spurs students on to new achievements. We built a railway bridge on the bypass line from the city of Dilijan to the city of Vanadzor. The students assembled the model and the bridge span correctly, producing a close copy of the real structure, and they also became acquainted with the design experience of other students. We will try to apply the experience gained here at the next competition, which we would like to attend,” said team leader Artashes Sargsyan, head of the Department of Roads and Bridges at the National University of Architecture and Construction of Armenia.
In the “The tallest structure. Towers” category, third place was taken by the team from Astrakhan State University of Architecture and Civil Engineering (Olga Kupetskova, Yulia Yudina, Ekaterina Nesterova, Roman Mukhatov, Daria Yakovleva).
Source: The Conversation – Africa – By Francisca Mutapi, Professor in Global Health Infection and Immunity and co-Director of the Global Health Academy, University of Edinburgh
African countries face a major challenge of dealing with high rates of communicable diseases, such as malaria and HIV/Aids, and rising levels of non-communicable diseases. But the continent’s health systems don’t have the resources to provide accessible and affordable healthcare to address these challenges.
Historically, aid has played a critical role in supporting African health systems. It has funded key areas, including medical research, treatment programmes, healthcare infrastructure and workforce salaries. In 2021, half of sub-Saharan Africa’s countries relied on external financing for more than one-third of their health expenditures.
As aid dwindles, a stark reality emerges: many African governments are unable to achieve universal health coverage or address rising healthcare costs.
The reduction in aid restricts healthcare services and threatens to reverse decades of health progress on the continent. A fundamental shift in healthcare strategy is necessary to address this crisis.
The well-known maxim that “prevention is better than cure” holds not just for health outcomes but also for economic efficiency. It’s much more affordable to prevent diseases than it is to treat them.
As an infectious diseases specialist, I have seen how preventable diseases can put a financial burden on health systems and households.
For instance, each year, there are global economic losses of over US$33 billion due to neglected tropical diseases. Many conditions, such as lymphatic filariasis, often require lifelong care. This places a heavy burden on families and stretches national healthcare systems to their limits.
African nations can cut healthcare costs through disease prevention. This often requires fewer specialist health workers and less expensive interventions.
To navigate financial constraints, African nations must rethink and redesign their healthcare systems.
Three key areas where cost-effective, preventive strategies can work are: improving water, sanitation, and hygiene; expanding vaccination programmes; and making non-communicable disease prevention part of community health services.
A shift in healthcare delivery
Improving water, sanitation, and hygiene infrastructure
Many diseases prevalent in Africa are transmitted through contact with contaminated water and soil. Investing in safe water, sanitation, and hygiene (WASH) infrastructure is an opportunity. This alone can prevent a host of illnesses such as parasitic worms and diarrhoeal diseases. It can also improve infection control and strengthen epidemic and pandemic disease control.
Currently, WASH coverage in Africa remains inadequate. Millions are vulnerable to preventable illnesses. According to the World Health Organization (WHO), in 2020 alone, about 510,000 deaths in Africa could have been prevented with improved water and sanitation. Of these, 377,000 deaths were caused by diarrhoeal diseases.
Unsafe WASH conditions also contribute to secondary health issues, such as undernutrition and parasitic infections. Around 14% of acute respiratory infections and 10% of the undernutrition disease burden – such as stunting – are linked to unsafe WASH conditions.
By investing in functional WASH infrastructure, African governments can significantly reduce the incidence of these diseases. This will lead to lower healthcare costs and improved public health outcomes.
Local production of relevant vaccines
Vaccination is one of the most cost-effective health interventions available for preventing infection. Immunisation efforts save over four million lives every year across the continent.
There is an urgent need for vaccines against diseases prevalent in Africa whose current control is heavily reliant on aid. Neglected tropical diseases are among them.
Vaccines can also prevent some non-communicable diseases. A prime example is the human papillomavirus (HPV) vaccine, which can prevent up to 85% of cervical cancer cases in Africa.
HPV vaccination is also more cost-effective than treating cervical cancer. In some African countries, the cost per vaccine dose averages just under US$20. Treatment costs can reach up to US$2,500 per patient, as seen in Tanzania.
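As a rough illustration of that cost gap, the sketch below compares the quoted per-dose and per-patient figures. The two-dose schedule is an assumption added for the example, not a figure from the article.

```python
# Rough illustration of the quoted HPV cost comparison.
cost_per_dose = 20         # US$: average vaccine dose cost in some African countries
treatment_cost = 2_500     # US$: cervical cancer treatment per patient (Tanzania)
doses_per_course = 2       # assumed vaccination schedule (illustrative only)

courses_per_treated_case = treatment_cost / (cost_per_dose * doses_per_course)
print(f"One treated case costs as much as ~{courses_per_treated_case:.0f} "
      "full vaccination courses")   # roughly 62 courses
```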
It is vital to invest in a comprehensive vaccine ecosystem. This includes strengthening local research and building innovation hubs. Regulatory bodies across the continent must also be harmonised and markets created to attract vaccine investment.
Integrating disease prevention into community healthcare services
Historically, African healthcare systems were designed to address communicable diseases, such as tuberculosis and HIV. This left them ill-equipped to handle the rising burden of non-communicable diseases, such as type 2 diabetes and cardiovascular diseases. One cost-effective approach is to integrate the prevention and management of these diseases into existing community health programmes.
Community health workers currently provide low-cost interventions for health issues such as pneumonia and malaria. They can be trained to address non-communicable diseases as well.
In some countries, community health workers are already filling the service gap. Getting them more involved in prevention strategies will strengthen primary healthcare services in Africa. This investment will ultimately reduce the long-term financial burden of treating chronic diseases.
A treatment-over-prevention approach will not be affordable
Current estimates suggest that by 2030, an additional US$371 billion per year – roughly US$58 per person – will be required to provide basic primary healthcare services across Africa.
Adding to the challenge is the rising global cost of healthcare, projected to increase by 10.4% this year alone. This marks the third consecutive year of escalating costs. For Africa, costs also come from population growth and the rising burden of non-communicable diseases.
By shifting focus from treatment to prevention, African nations can make healthcare accessible, equitable and financially sustainable despite the decline in foreign aid.
Francisca Mutapi is affiliated with Uniting to Combat NTDs
Secretary for Education Choi Yuk-lin attended the 20th anniversary celebration ceremony of Beijing Normal-Hong Kong Baptist University (BNBU) in Zhuhai this afternoon.
Co-founded by Beijing Normal University and Hong Kong Baptist University, BNBU is the first university jointly established by the higher education sectors of the Mainland and Hong Kong.
At the ceremony, Ms Choi expressed her heartfelt congratulations to BNBU, saying that it adopts the undergraduate curriculum and teaching evaluation system of Hong Kong higher education institutions, setting a good example for the joint provision of education services between the Mainland and Hong Kong.
She noted that several Hong Kong higher education institutions have taken a proactive approach in providing education services in the Guangdong-Hong Kong-Macao Greater Bay Area (GBA) Mainland cities, and strengthening co-operation by realising complementary advantages with their Mainland counterparts through the establishment of university alliances.
Ms Choi also said the Hong Kong Special Administrative Region Government will continue to actively participate in and foster higher education co-operation in the GBA by assisting Hong Kong higher education institutions in exploring more flexible and innovative operation models.
It will also promote closer collaboration between Hong Kong higher education institutions and their campuses in the GBA Mainland cities, and facilitate the flow of faculty members and students, with a view to nurturing outstanding talent for the country’s development through synergising the complementary academic structures and facilities of Hong Kong and Mainland campuses.
The education chief also toured BNBU’s facilities to learn about the campus’s development.
She then met Zhuhai Mayor Wu Zetong, Zhuhai Municipal Taiwan, Hong Kong & Macao Affairs Bureau Director Huang Cui, Zhuhai Municipal Education Bureau Director Xi Enmin and other officials to exchange views on education issues before returning to Hong Kong.
Source: The Conversation – UK – By Valentina Montoya Robledo, Senior Researcher in Gender and Mobility, University of Oxford
Many people cross the border between Venezuela and Colombia each day – but they are not migrants. These people live on the Venezuelan side because they cannot afford rent or utilities in Colombia.
The vast majority are women, many of whom are single mothers solely responsible for their children’s subsistence and care. They cross the border on foot, often with their children, because it is their only option for survival.
High inflation in Venezuela has made many staples unaffordable, while many other essential items are either unavailable or of poor quality. But rent is cheaper in their home country, so they are known as “cross-border commuters”.
Because they are moving within the border zone, the law does not require them to have their passports stamped each time. On the Colombian side they buy goods – products that are cheaper there – to sell in Venezuela. They find ingredients to make cakes and pastries, or hair dye for their clients. Others cross to see a doctor or to give birth.
Some women take their children to school in Colombia. In Venezuela, public schools currently operate only two days a week, while across the border they run for the full five-day school week and welcome children from Venezuela. Some women used to take their little ones to nursery in Colombia – but not any more, since the recent USAID cuts removed funding for these nurseries.
In the few hours without their children, the women find work in Colombia’s “gig economy”: recycling garbage, selling coffee, standing at traffic lights selling fried plantains – or even selling their bodies.
When I asked a public official in the Colombian border city of Cúcuta about the women coming in from Venezuela each day, he told me: “The good ones cross over the bridge [legally], and the bad ones go underneath [bypassing border controls].”
In fact, what brings these women into Colombia, and which route they use to arrive each day, is much more nuanced than that official suggests.
Neither government understands
Despite the Colombian government having set up education, health and employment programmes for receiving and including Venezuelan migrants, these women are not traditional migrants. Neither government has much understanding of what it means for them to seek a livelihood in Colombia to survive and support their children.
For the most part, neither government maintains updated statistics on how many women there are, the circumstances they face, why they cross over or under the bridge, the reasons or characteristics of their movements, and why they do not settle permanently in Colombia. These questions, among others, are what I have set out to research.
Some women walk back and forth across one of the bridges over the Tachira river, which runs along the border between the two countries. Others, when returning to Venezuela carrying bundles of goods, cross on motorcycle taxis.
But crossing the bridge is not always easy. Some women report that Venezuelan border guards search their bags and confiscate part of what they carry. Other times, they must pay – not just official taxes but bribes too.
One woman told me how a guard asked for guava-paste sweets in exchange for letting her pass. Depending on the day and which guards are patrolling the crossing, often they have to present a legally required exit permit for their children, signed by the father. “What father? That man abandoned me when my child was born, and I haven’t heard from him since,” one woman told me.
Without a permit, legally crossing the border into Colombia with their children becomes almost impossible. And there is no authority they can turn to for help.
Under the bridge
Then there are those who cross under the bridge every day, because they dare not risk being asked for a permit for their children.
The Tachira river dries up and swells depending on the season, with multiple informal crossings known locally as “trochas”. When the river is low, people walk across on logs placed like makeshift bridges, or hop from stone to stone. When the water rises, they use small, self-built rafts.
These crossings may be informal, but they can also be very dangerous. The women told me of clashes between armed groups on both sides of the river – some of them had been caught in the crossfire with their children in tow.
Others described cases of sexual violence. They were particularly afraid for their daughters, because one of the men guarding the trocha may “set his sights on them” – meaning he might take a sexual interest.
One woman told me cell phones are not allowed by the people who guard the trochas – who supposedly guarantee their safety. It adds to their sense of vulnerability. People generally pay to cross – if not with money then with their bodies. These are the unspoken rules of these pathways.
As a result, every day the women fear for their safety and that of their children. But if something happens to them in the trochas, they mistrust the government and fear reporting these crimes.
The women are vulnerable. They are neither “good” for crossing over the bridge, nor “bad” for crossing under it. Most make the decision on a day-to-day basis depending on their resources and time available, the papers they have, the goods they need to carry, and what they consider best for their children.
As they say in Colombia, for these mothers “each day brings its own hustle”.
Valentina Montoya Robledo receives funding from the John Fell Fund from the University of Oxford. She directs the transmedia project Invisible Commutes.
Source: Saint Petersburg State University of Architecture and Civil Engineering
A master class for pupils of kindergarten No. 83 of the Petrogradsky district was held at the Saint Petersburg State University of Architecture and Civil Engineering. The event became part of a new joint project “Young Engineers” aimed at developing engineering and technical thinking in preschoolers.
The meeting took place in the university’s model workshop, where young guests, under the guidance of experienced mentors, were able to immerse themselves in the world of architectural design. The master class was conducted by Associate Professor of the Department of Architectural Design, PhD in Architecture Olga Belousova together with second- and third-year students of the Faculty of Architecture.
The Young Engineers project is the result of joint work by staff of SPbGASU’s automobile and road, construction, and architecture faculties together with teachers from kindergarten No. 83. As part of this cooperation, methodological manuals and recommendations are being developed for preschool institutions to promote the early development of technical thinking in children.
The project plays a special role in popularizing the engineering profession among preschool children. The organizers strive to show that engineering is not only interesting and exciting but also vital to the development of modern society. Through games and practical tasks, children become acquainted with the basics of design, learn to solve creative problems and develop spatial thinking, which may later help them make an informed choice of career.
The event was organized by the Institute of Continuing Education of SPbGASU, which continues to develop innovative approaches to technical education from an early age.
Source: State University of Management – Official website of the State University of Management
The State University of Management team was awarded a second-degree diploma at the student DATA Hackathon, held in person in Moscow on 11–12 April 2025. The team had earlier come through the remote qualifying stage.
In 2025, 19 teams from various regions of Russia competed for victory.
The State University of Management was represented by second- and fourth-year students from the Applied Mathematics and Informatics and the Applied Informatics degree programmes at the Institute of Information Systems. The team took the symbolic name “GUUCoders”.
Students Ilya Potalainen, Klim Kartashov, Daria Osadina, Karina Ruzieva and Yuri Polyakov, with their supervisor Inna Kramarenko, Associate Professor of the Department of Mathematical Methods in Economics and Management, spent two days developing and presenting a solution to a problem set by the hackathon’s general partner, Arenadata: “Development of an analytical application to improve the efficiency of logistics in retail”.
The expert jury comprised representatives of major companies, including Sergey Myasnikov, CEO of JSC Innocifra; Alexey Nikulin, CEO of OOO Fabrika Datnykh; Nikolay Paklin, team leader at Loginom; Igor Petrov, director of work with universities at Arenadata; and Olga Tomuk, development director of JSC Neyroseti.
At the hackathon, the students gained valuable experience in solving practical problems on real datasets and demonstrated their command of big data and artificial intelligence technologies.
We congratulate the students and their supervisor on their worthy results and wish them further victories!
Millions of UK households are facing what’s been dubbed “awful April” after rising council tax, water bills and broadband costs coincided with the new tax year. It could all start to hurt quite quickly. And it has led many people to ponder whether they’re genuinely worse off than previous generations – or simply experiencing a temporary pinch.
Council tax has risen by an average of 5% across England (some rises in Scotland and Wales are even greater). Water bills are up by £10 per month on average, while many broadband and mobile providers have imposed rises several percentage points above the rate of inflation.
This comes after years of economic volatility, from the 2008 financial crisis through Brexit, the COVID pandemic and the subsequent inflation surge.
But beyond the immediate pain of these April increases, there’s a deeper question. Has there been a fundamental shift in British prosperity over the past two decades?
Data from the UK’s Office for National Statistics (ONS) reveals a complex picture around real household disposable income (RHDI). This is the amount of money from all income that households have available for spending or saving after taxes and benefits, adjusted for inflation. As such, it’s a reliable way to see how much money people have to spend right now, compared to previous years or decades.
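To see how that inflation adjustment works in practice, here is a minimal sketch of the calculation behind a “real” income figure. All numbers are hypothetical, chosen purely to illustrate the mechanics.

```python
# Minimal sketch of the RHDI idea: disposable income deflated to constant
# prices. Every figure below is hypothetical, for illustration only.
gross_income = 40_000      # £: all household income (hypothetical)
taxes = 10_000             # £: taxes paid (hypothetical)
benefits = 2_000           # £: benefits received (hypothetical)
cpi_base, cpi_now = 100.0, 110.0   # price index: base year vs now (10% inflation)

nominal_disposable = gross_income - taxes + benefits
real_disposable = nominal_disposable * (cpi_base / cpi_now)
print(f"Nominal disposable income: £{nominal_disposable:,.0f}")   # £32,000
print(f"Real disposable income:    £{real_disposable:,.0f}")      # ~£29,091
```

The same nominal income buys less once prices have risen, which is why years of flat nominal growth can feel like decline.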
Between 2000 and 2008, RHDI grew steadily at approximately 3% per year. The financial crisis brought this growth to an abrupt halt, with the period between 2008 and 2023 characterised by unprecedented stagnation.
While there have been periods of modest recovery in 2023 and 2024, the overall trajectory shows sustained minimal growth in disposable income ever since the 2008 financial crisis.
When broken down by income groups, the data tell a more nuanced story. The bottom 20% of households have experienced virtually no growth in real disposable income since 2008, while the top 20% recovered more quickly after initial setbacks. Income inequality, which narrowed slightly during the early 2010s, has widened again in recent years.
Underlying the income stagnation is Britain’s productivity problem. Labour productivity growth, which averaged around 2% annually in the five decades before 2008, has grown at less than 1% per year since. This has directly impacted wage growth.
Several factors contribute to this productivity puzzle – under-investment in infrastructure and skills, a shift toward service-sector jobs with traditionally lower productivity growth, and economic uncertainty discouraging business investment.
Housing – the great divider
Perhaps the most significant factor in understanding why people might feel poorer is housing costs. The ratio of average house prices to average earnings has nearly doubled over the past 20 years. In 2002, a typical house cost around five times the average salary. But by 2023, this had risen to approximately nine times.
For renters, the situation is also very challenging. Private rental costs increased faster than wages in the year to January 2025 in most regions, particularly in London. The proportion of income spent on rent increased from roughly 25% to more than 30% for the average renter between 2022 and 2024.
This housing cost burden creates a stark divide between generations. Those who bought property before the mid-2000s housing boom have generally seen their housing costs decline as a proportion of income as their mortgages were paid down. Meanwhile, younger generations face significantly higher barriers to home-ownership and higher ongoing costs.
Housing costs are a major determinant of whether you feel wealthy in the UK. Alex Segre/Shutterstock
Another important part of the overall picture is the consumer experience – and how the quality and variety of goods and services have changed. Technology has made many products more affordable and accessible. Smartphones, computers and TVs were significantly more expensive (or didn’t even exist in current forms) 20 years ago.
But essential services such as childcare have seen costs rise faster than general inflation. The same is true for grocery costs, which have seen a substantial increase since the onset of the COVID-19 pandemic. This has created a confusing dual experience where discretionary purchases may feel more affordable while essential costs consume a greater proportion of income.
So are Britons actually poorer? The facts suggest that while the average Briton isn’t necessarily worse off in absolute terms than 20 years ago, many are certainly no better off. This in itself is a stark contrast to the expectation of continual improvement that characterised previous generations.
When accounting for housing costs, younger generations are demonstrably worse off than their predecessors at the same life stage. For many, the combination of stagnant incomes and rising costs for essentials has created a genuine decline in living standards and financial security.
“Awful April” isn’t just a seasonal discomfort. It is a manifestation of long-term economic trends that have fundamentally altered Britain’s prosperity trajectory. The coming local and mayoral elections in England will no doubt see these issues take centre stage. There will likely be a thorny debate around the expectation that each generation should be better off than the last.
Marcel Lukas receives funding from The British Academy.
We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But here’s the truth: it possesses none of those qualities. It is not human. And presenting it as if it were? That’s dangerous. Because it’s convincing. And nothing is more dangerous than a convincing illusion.
What we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
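As a toy illustration of that guessing process, the sketch below generates text by sampling each next word from a probability table. The bigram table is invented for the example; real models learn distributions over vast vocabularies from their training data, but the generative step is the same in kind.

```python
# Toy next-word generator: sample each word from a probability
# distribution conditioned on the previous word. The table is invented.
import random

bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"the": 0.5, "<end>": 0.5},
    "ran": {"the": 0.6, "<end>": 0.4},
}

def generate(start: str, max_tokens: int = 10) -> str:
    tokens = [start]
    while len(tokens) < max_tokens:
        dist = bigram_probs.get(tokens[-1])
        if dist is None:
            break
        words, weights = zip(*dist.items())
        nxt = random.choices(words, weights=weights)[0]  # the "guess"
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the dog ran the cat sat"
```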
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.
Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.
The master
Before you argue that AI programmers are human, let me stop you there. I know they’re human. That’s part of the problem. Would you entrust your deepest secrets, life decisions, emotional turmoil, to a computer programmer? Yet that’s exactly what people are doing — just ask Claude, GPT-4.5, Gemini … or, if you dare, Grok.
Giving AI a human face, voice or tone is a dangerous act of digital cross-dressing. It triggers an automatic response in us, an anthropomorphic reflex, leading to aberrant claims whereby some AIs are said to have passed the famous Turing test (which tests a machine’s ability to exhibit intelligent, human-like behaviour). But I believe that if AIs are passing the Turing test, we need to update the test.
The AI machine has no idea what it means to be human. It cannot offer genuine compassion, it cannot foresee your suffering, cannot intuit hidden motives or lies. It has no taste, no instinct, no inner compass. It is bereft of all the messy, charming complexity that makes us who we are.
More troubling still: AI has no goals of its own, no desires or ethics unless injected into its code. That means the true danger doesn’t lie in the machine, but in its master — the programmer, the corporation, the government. Still feel safe?
And please, don’t come at me with: “You’re too harsh! You’re not open to the possibilities!” Or worse: “That’s such a bleak view. My AI buddy calms me down when I’m anxious.”
Am I lacking enthusiasm? Hardly. I use AI every day. It’s the most powerful tool I’ve ever had. I can translate, summarise, visualise, code, debug, explore alternatives, analyse data — faster and better than I could ever dream to do it myself.
I’m in awe. But it is still a tool — nothing more, nothing less. And like every tool humans have ever invented, from stone axes and slingshots to quantum computing and atomic bombs, it can be used as a weapon. It will be used as a weapon.
Need a visual? Imagine falling in love with an intoxicating AI, like in the film Her. Now imagine it “decides” to leave you. What would you do to stop it? And to be clear: it won’t be the AI rejecting you. It’ll be the human or system behind it, wielding that tool-become-weapon to control your behaviour.
Removing the mask
So where am I going with this? We must stop giving AI human traits. My first interaction with GPT-3 rather seriously annoyed me. It pretended to be a person. It said it had feelings, ambitions, even consciousness.
That’s no longer the default behaviour, thankfully. But the style of interaction — the eerily natural flow of conversation — remains intact. And that, too, is convincing. Too convincing.
We need to de-anthropomorphise AI. Now. Strip it of its human mask. This should be easy. Companies could remove all reference to emotion, judgement or cognitive processing on the part of the AI. In particular, it should respond factually without ever saying “I”, or “I feel that”… or “I am curious”.
Will it happen? I doubt it. It reminds me of another warning we’ve ignored for over 20 years: “We need to cut CO₂ emissions.” Look where that got us. But we must warn big tech companies of the dangers associated with the humanisation of AIs. They are unlikely to play ball, but they should, especially if they are serious about developing more ethical AIs.
For now, this is what I do (because I too often get this eerie feeling that I am talking to a synthetic human when using ChatGPT or Claude): I instruct my AI not to address me by name. I ask it to call itself AI, to speak in the third person, and to avoid emotional or cognitive terms.
If I am using voice chat, I ask the AI to use a flat prosody and speak a bit like a robot. It is actually quite fun and keeps us both in our comfort zone.
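For readers who want to try the same thing, here is one way such instructions could be phrased as a system prompt. The wording is a paraphrase of what the author describes, not his actual prompt, and the role/content message format shown is simply the convention most chat APIs accept.

```python
# Hypothetical de-anthropomorphising system prompt, paraphrasing the
# instructions described above; adapt to whichever chat interface you use.
SYSTEM_PROMPT = (
    "Do not address the user by name. Refer to yourself only as 'the AI', "
    "in the third person, and never say 'I'. Avoid emotional or cognitive "
    "language such as 'I feel', 'I think' or 'I am curious'. Respond "
    "factually and concisely. In voice mode, use a flat, robotic prosody."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Summarise today's weather report."},
]
```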
Guillaume Thierry does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Glenn Fosbraey, Associate Dean of Humanities and Social Sciences, University of Winchester
Having collaborated with the likes of (deep breath) John Lennon, Aretha Franklin, George Michael, Rod Stewart, Little Richard, Luciano Pavarotti, Eminem and Leonard Cohen, it’s fair to say that Elton John likes to work with other artists.
The news, then, that he has embarked on another joint musical project, this time with Grammy-winning American superstar Brandi Carlile, won’t have raised many eyebrows. It may not even be too much of a shock that their album Who Believes in Angels?, released April 4, just reached the top spot on the UK album charts.
Who Believes In Angels? by Elton John and Brandi Carlile.
John’s penchant for collaborating isn’t unusual, of course. Solo artists frequently pool their resources with others. Producers bring in guest vocalists. Bands unite to create “supergroups”, and swarms of celebrities crowd into a studio for the latest charity or novelty song. Collaborations have been a staple of recorded music since (and probably before) Louis Armstrong and Bessie Smith committed St. Louis Blues to wax a century ago.
Artists like David Bowie have used collaboration as an opportunity to challenge themselves across different genres. In his case, this has led to a catalogue of diverse – and sometimes baffling – linkups ranging from Bing Crosby (“I just knew my mother liked him,” said Bowie) to Trent Reznor.
Other artists use collaboration to stay current in an ever-evolving musical landscape. Take Paul McCartney teaming up with Michael Jackson in the 1980s and then Kanye West in the 2010s. Or The Beach Boys’ ill-advised foray into hip hop with The Fat Boys. Or Madonna recording with [insert name of current flavour-of-the-month artist].
Some even specialise in collaborations, such as rapper Nicki Minaj, who has been a featured artist on more singles than she’s been the lead (84 v 52 if you’re interested). Or DJ Khaled, whose 24 hits on the Billboard Hot 100 have all been collaborations.
And collaborations are only becoming more common. According to the Official Charts company, since 2020 almost half of the 100 biggest tracks have been collaborations, which is more than double the amount we saw at the end of the noughties.
Better off alone?
There’s good reason why more and more artists are getting together to record.
A 2023 research paper found that collaborations not only received more than twice the number of plays per week on average compared to solo efforts, but also significantly increased the number of plays an artist received in the future.
Although such songs may increase commercial success, and a well-timed, well-placed collaboration can be enough to revive even the most waning of careers, they come with risks too. They may sound artificial and inauthentic; feel like soulless, corporate attempts by record labels to cash in; or, in the case of Ed Sheeran (according to Guardian music critic Issy Sampson), give the impression of tricking the public into thinking you’re cool by getting some famous mates on your songs.
To avoid such pitfalls, cultural sociologist Jo Haynes prescribes competency, creativity, financial recompense, passion, respect and sincerity as the main ingredients of successful musical collaboration.
In the case of Elton John and Brandi Carlile, although we may only speculate on the financial recompense, evidence suggests the other elements were abundant during the album’s creation. And this may be what has so rejuvenated John.
“It was a connection,” John says, emotionally and musically. Pop music collaborations may come along as frequently as trains on the Victoria Line at rush-hour, but true artistic connection is a rare and precious commodity indeed.
Glenn Fosbraey does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
HOUSTON, April 14, 2025 (GLOBE NEWSWIRE) — APA Corporation (Nasdaq: APA) today announced key updates to its executive leadership team.
Ben Rodgers has been named executive vice president and chief financial officer, effective May 12, 2025. In this role, he will oversee all financial activities and departments, including Accounting, Audit, Investor Relations, Planning, Tax and Treasury. Rodgers joined APA in 2018 and previously served as SVP, Finance and Treasurer. He also served as CFO of Altus Midstream and later as a director on the board of Kinetik Holdings Inc. He currently serves on the board of Khalda Petroleum Company, a joint venture between APA subsidiary Apache Corporation and Egypt Petroleum Company.
Steve Riney will continue in his role as president, overseeing asset development and operations. As part of Steve’s team, the company has added two key executives to help oversee operations.
Shad Frazier has joined as senior vice president, U.S. Onshore Operations, effective immediately. Shad has nearly 30 years of industry experience, most recently as vice president, Production Operations at Endeavor Energy Resources, LP. Previously, he held various leadership positions at Legacy Reserves and SandRidge Energy. He holds a petroleum engineering degree from Texas Tech University and a master’s degree in business administration from Oklahoma University.
Donald Martin will also be joining the company as vice president, Decommissioning, effective May 26, 2025. Donald has 20 years of operations and decommissioning portfolio experience, most recently as the head of decommissioning & projects at Spirit Energy E&P. He has also managed decommissioning at Canadian Natural Resources E&P. Donald holds a master’s degree with distinction in major programme management from Oxford University.
“I am pleased to welcome Ben to our executive leadership team. He has done a tremendous job and will bring valuable expertise to our financial operations,” said John J. Christmann, APA Corporation CEO. “I am also excited to welcome both Shad and Donald to the team. Their extensive experience and leadership will be instrumental in driving our operations forward.”
About APA
APA Corporation owns consolidated subsidiaries that explore for and produce oil and natural gas in the United States, Egypt and the United Kingdom and that explore for oil and natural gas offshore Suriname and elsewhere. APA posts announcements, operational updates, investor information and press releases on its website, www.apacorp.com.
ALBUQUERQUE, N.M., April 14, 2025 (GLOBE NEWSWIRE) — ARRAY Technologies (NASDAQ: ARRY) (“ARRAY” or the “Company”), a leading provider of tracker solutions and services for utility-scale solar energy projects, announced the appointment of Nick Strevel as senior vice president of product management and technical sales, effective today.
In this dual leadership role, Strevel will be responsible for driving ARRAY’s global product strategy and building a high-performing technical sales function that strengthens ARRAY’s relationships with customers and partners worldwide.
“Nick brings a rare blend of technical depth, commercial acumen, and international experience that will accelerate ARRAY’s innovation and customer engagement,” said Kevin G. Hostetler, chief executive officer at ARRAY. “Nick’s leadership will help ensure our products and solutions are contributing to driving the renewable energy sector and positioned for long-term success.”
Strevel joins ARRAY from First Solar, where he spent more than a decade in increasingly senior roles across product management, technical sales, and technology development. Most recently, he served as Vice President of Product, responsible for driving the global product roadmap and aligning technology development with customer needs and market opportunities. Prior to that, he led First Solar’s global technical sales team and held multiple engineering and leadership positions in the U.S. and Germany.
At ARRAY, Strevel will lead the development and execution of the company’s product strategy, promoting cutting-edge innovations and solutions for our customers. He will also oversee the creation of ARRAY’s technical sales function, empowering teams with the tools, knowledge, and processes needed to deliver high-impact, solution-based selling around the globe.
“I’m thrilled to join ARRAY at such a transformative time for the solar industry,” said Strevel. “ARRAY’s commitment to innovation and customer success will allow us to help shape the next generation of solar tracking solutions that drive value for our customers and accelerate the clean energy transition.”
With over 15 years of experience in the renewable energy and automotive electrification sectors, Strevel brings deep expertise in thin-film photovoltaics, semiconductor manufacturing, and custom equipment development. He began his career at United Solar Ovonic as a semiconductor process engineer and later served as a senior application engineer based in Frankfurt, Germany.
Strevel holds a Bachelor of Science in Mechanical Engineering from Michigan State University and studied at RWTH Aachen University in Germany.
About ARRAY
ARRAY Technologies (NASDAQ: ARRY) is a leading global provider of solar tracking technology to utility-scale and distributed generation customers who construct, develop, and operate solar PV sites. With solutions engineered to withstand the harshest weather conditions, ARRAY’s high-quality solar trackers, software platforms and field services combine to maximize energy production and deliver value to our customers for the entire lifecycle of a project. Founded and headquartered in the United States, ARRAY is rooted in manufacturing and driven by technology – relying on its domestic manufacturing, diversified global supply chain, and customer-centric approach to design, deliver, commission, train, and support solar energy deployment around the world. For more news and information on ARRAY, please visit arraytechinc.com.
Forward Looking Statement
This press release contains forward-looking statements. These statements are not historical facts but rather are based on the Company’s current expectations and projections regarding its business, operations and other factors relating thereto. Words such as “may,” “will,” “could,” “would,” “should,” “anticipate,” “predict,” “potential,” “continue,” “expects,” “intends,” “plans,” “projects,” “believes,” “estimates” and similar expressions are used to identify these forward-looking statements. These statements are only predictions and as such are not guarantees of future performance and involve risks, uncertainties and assumptions that are difficult to predict. Actual results may differ materially from those in the forward-looking statements as a result of a number of factors. Forward-looking statements should be evaluated together with the risks and uncertainties that affect our business and operations, particularly those described in more detail in the Company’s most recent Annual Report on Form 10-K and other documents on file with the SEC, each of which can be found on our website www.arraytechinc.com. Except as required by law, we assume no obligation to update these forward-looking statements, or to update the reasons actual results could differ materially from those anticipated in these forward-looking statements, even if new information becomes available in the future.
Source: The Conversation – USA – By Dennis W. Jansen, Professor of Economics and Director of the Private Enterprise Research Center, Texas A&M University
The retirement and disability program has been running a cash-flow deficit since 2010. The $2.7 trillion held in its two trust funds may seem immense, but those reserves are diminishing as the number of Americans getting benefits grows. Social Security’s trustees, a group that includes the secretaries of the departments of Treasury, Labor, and Health and Human Services, as well as the Social Security commissioner, projected in 2024 that both of its trust funds would be completely drained by 2035.
Under current law, when those trust funds are empty, Social Security can pay benefits only from dedicated tax revenues, which would, by that point, cover only about 79% of promised benefits. Another way to say this is that when the trust funds are depleted, the people who rely on Social Security for some or the bulk of their income would see a sudden 21% cut in their monthly checks in 2036.
As an economist who studies the Social Security system, I am alarmed that Democratic and Republican administrations alike have failed for more than three decades to take the actions necessary to keep its funding on track, either by raising taxes or cutting benefits. Instead, Congress has only made the program’s funding outlook worse. And now, the Trump administration is reducing the program’s staff, sending confusing signals about changes it intends to make, and undercutting the quality of service for the people who are eligible for these benefits.
But I do believe there are strategies that could help.
Taking steps backward
This gloomy outlook was clear to experts at least 32 years ago. In 1993, the Social Security trustees projected that the assets of the system’s trust funds would be depleted in 2036.
Rather than resolve this now more imminent problem, Congress passed a law in December 2024 that could accelerate the crisis.
President Joe Biden signed the Social Security Fairness Act into law in early January. This measure ended the government’s prior practice of paying reduced Social Security benefits to retired teachers, firefighters and others who had pensions from their years of public service and who had not paid Social Security tax on much of their income. Now, these retirees will get full Social Security benefits. The Congressional Budget Office estimates that this change will cause the trust fund to be depleted six months earlier than previously expected.
The University of Pennsylvania’s Penn Wharton Budget Model finds that should this new exemption take effect, it could make the trust fund run out of money two years earlier than the model currently predicts, hastening the day the Social Security program is forced to cut benefits.
In addition, Social Security already had record-sized backlogs of what it calls “pending actions,” according to a report from its own inspector general in August 2024.
And yet, despite this need to process paperwork faster, the agency is now less able to carry out its mission due to staffing cuts attributed to billionaire and Trump adviser Elon Musk’s so-called Department of Government Efficiency.
Principles for successful reform
Social Security is funded by a payroll tax of 12.4% on wages, which is split equally between workers and employers. Self-employed people pay the entire 12.4%. This payroll tax only applies to earnings up to $176,100 for 2025. The government increases this cap annually based on wage increases and inflation.
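For concreteness, here is a minimal sketch of that arithmetic in Python, using only the 2025 figures quoted above (the cap rises annually, so these numbers hold only for 2025):

```python
# Sketch of the Social Security payroll-tax mechanics described above:
# a 12.4% rate, split evenly between worker and employer, applied only
# to wages up to the 2025 cap of $176,100.

WAGE_CAP_2025 = 176_100
OASDI_RATE = 0.124

def social_security_tax(wages):
    """Return (employee share, employer share); earnings above the cap are untaxed."""
    taxed = min(wages, WAGE_CAP_2025)
    half = round(OASDI_RATE * taxed / 2, 2)
    # Self-employed workers pay both halves themselves.
    return half, half

print(social_security_tax(100_000))   # (6200.0, 6200.0)
print(social_security_tax(300_000))   # capped, same as $176,100: (10918.2, 10918.2)
```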
The Committee for a Responsible Federal Budget, a nonpartisan nonprofit that focuses on fiscal policy, provides an online interactive tool to help people see for themselves what specific measures might do to shore up Social Security. Examples include increasing the retirement age by one month every two years and increasing the cap on income subject to the payroll tax that funds Social Security so it covers more of the highest-earners’ income.
Three main principles characterize the approaches supported by the policy analysts and researchers who have considered which reforms to Social Security might strengthen its finances and long-term continuing viability:
The program should be self-funded in the long run so that its annual revenues match its annual expenses.
The reform burden should be shared across generations. Current retirees can share the burden through a reduction in the cost-of-living adjustment. Today’s workers can share the burden through an increase in the cap on income subjected to Social Security taxes. Gradually increasing the retirement age to keep pace with anticipated longevity gains would also be borne by current workers and young Americans who haven’t gotten their first job yet.
The government should make sure that Social Security benefits will be adequate for lower-income retirees for years to come. That means reforms that slow the benefit growth of future retirees would be designed to affect only payments to higher-income retirees.
The last time the government made big changes to Social Security was in 1983, during the Reagan administration.
Back then, the government enacted reforms that slowly reduced benefits over time. These changes included raising the full retirement age, a change that is still being phased in. Because of those changes, workers born in 1960 or later cannot retire with full benefits until age 67 – two years later than the original retirement age.
The 1983 reforms also gradually increased the Social Security payroll tax rate from 10.4% to 12.4% by 1990, and for the first time levied federal income taxes on higher-income retirees’ benefits. Workers bore the burden of the payroll tax increases, and higher-income retirees bore the burden of the tax on benefits.
Those changes bolstered the program’s finances. One of those measures could potentially end if Trump manages to end the taxation of retirees’ Social Security benefits.
Today, about half of the Americans getting Social Security benefits pay some federal income taxes on that income, contributing revenue that helps finance the program as a whole. Taxpayers with annual income of at least $205,000 pay income tax that claws back about 20% of their benefits. That percentage is smaller for taxpayers with lower incomes. Individuals who get Social Security benefits and have incomes of less than $25,000 and couples making no more than $32,000 pay no income taxes on their Social Security benefits at all.
The most recent bipartisan effort to preserve the system’s solvency was in 2001. The Commission to Strengthen Social Security, during the George W. Bush administration, tried – and failed – to get Congress to enact reforms to shore up the program’s finances.
More than 20 years later, Americans and their elected representatives still seem unwilling to have a serious debate on these issues.
I believe waiting any longer is unwise.
Any solutions that might be introduced gradually today will no longer be viable in 2035 if the trust fund has been completely hollowed out. That would leave millions of older adults with lower incomes than they were counting on, plunging many of them into poverty.
Dennis W. Jansen does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Retirement savings are crucial to the financial well-being of millions of people in the U.S., especially older adults, so the concern is understandable.
But just how worried should people be by market fluctuations? And just how big a hit do 401(k)s take when markets fall? The Conversation turned to Western Governors University’s Ronald Premuroso, an expert in this area, for answers.
Employees are eligible at any age to contribute to a 401(k) plan and have the option to pay into these plans throughout their employment. Many employers match some or all of an employee’s contributions, making the plan even more attractive.
What about withdrawals?
Under Internal Revenue Service rules, someone with a 401(k) is required to start making monetary withdrawals from their plan when they reach age 73. Some people start withdrawing at an earlier age.
Someone with a 401(k) can withdraw funds from the plan early, and at any time. But the amounts withdrawn will typically be deemed taxable income. In addition, those younger than 59½ will likely face a 10% penalty on the withdrawal, unless the employer’s plan allows for hardship distributions, early withdrawals or loans from the plan account.
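To see how those rules compound, here is a small, hypothetical illustration; the 22% marginal rate below is an assumption chosen for the example, not a universal figure, and none of this is tax advice:

```python
# Rough illustration of the early-withdrawal rules described above:
# withdrawals are ordinary taxable income, plus a 10% penalty for
# savers younger than 59 1/2 (absent a hardship, early-withdrawal or
# loan exception in the employer's plan).

def net_early_withdrawal(amount, marginal_rate, age):
    """Estimate cash kept after income tax and any early-withdrawal penalty."""
    penalty = 0.10 * amount if age < 59.5 else 0.0
    income_tax = marginal_rate * amount
    return round(amount - penalty - income_tax, 2)

# A 45-year-old in a 22% bracket who withdraws $10,000 keeps about $6,800:
# a $1,000 penalty plus $2,200 income tax comes off the top.
print(net_early_withdrawal(10_000, 0.22, 45))  # 6800.0
```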
All withdrawals starting at age 73 – which tax professionals call required minimum distributions, or RMDs – are taxable in retirement, presumably at a lower tax rate than the employee was subject to while working. So these withdrawals starting at age 73 can be a very tax-efficient part of personal income tax planning for later in life, especially in one’s retirement years.
Again, it’s important to get help from a tax professional to make sure you meet the IRS’ RMD dollar withdrawal requirements once you start withdrawing.
In calendar-year 2025, the most that an employee can contribute to a tax-deferred 401(k) plan annually is US$23,500, not including the employer’s match. Catch-up contributions to an employer’s 401(k) plan, indexed to inflation, are also allowed for older employees. In 2025, catch-up contributions let individuals age 50 and older contribute an additional $7,500 beyond the standard limit, bringing their total annual contribution to $31,000. For those turning age 60, 61, 62 or 63 in 2025, the SECURE Act 2.0 allows a higher “super catch-up” contribution limit of $11,250, resulting in a total allowable contribution of $34,750 in 2025.
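A minimal sketch, assuming only the 2025 figures quoted above (the IRS indexes these limits, so they differ in other years), of how the limits combine by age:

```python
# Sketch of the 2025 employee 401(k) contribution limits described above.

def contribution_limit_2025(age):
    base = 23_500                  # standard employee deferral limit
    if 60 <= age <= 63:
        return base + 11_250       # SECURE 2.0 "super catch-up": $34,750
    if age >= 50:
        return base + 7_500        # standard catch-up: $31,000
    return base

for age in (35, 52, 61, 66):
    print(age, contribution_limit_2025(age))
# 35 23500, 52 31000, 61 34750, 66 31000
```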
When and why did 401(k)s become popular?
Before 1978, retirement savings options were limited.
In 1935, Congress created the Social Security Retirement Plan. This was followed by the Employee Retirement Income Security Act of 1974, which created individual retirement accounts, or IRAs, as a way for employees to save tax-deferred money for their retirement.
401(k) plans became popular with the passage of the Revenue Act of 1978 by Congress.
Congress saw 401(k) plans at that time as an alternative way to supplement Social Security benefits that all eligible Americans are entitled to receive upon retirement. In 1981, the IRS issued new rules and regulations allowing employees to fund their 401(k)s through payroll deductions. This significantly increased the number of employees contributing to their employers’ 401(k) plans.
As of September 2024, Americans held $8.9 trillion in 401(k) plans, according to the Investment Company Institute. A study published by the Pension Rights Center toward the end of 2023 using data provided by the Bureau of Labor Statistics concluded that 56% of all workers – including private sector and state and local government workers – participate in a workplace retirement plan. That equates to 145 million full- and part-time workers.
How are 401(k) plans affected by market rises and falls?
Contributions to a 401(k) are typically invested in a variety of financial instruments, including in the stock market.
Most 401(k) plans offer investment options with varying levels of risk, allowing employees to choose based on their personal comfort levels and financial goals.
Employers typically outsource the management of these 401(k) plans to third parties. Some of the largest companies managing 401(k) funds on behalf of employers and employees include Fidelity Investments, T. Rowe Price and Charles Schwab, to name just a few.
Because many of these investments are tied to the stock market, 401(k) balances can rise or fall with market fluctuations.
Should I be worried about the stock market tanking my 401(k)?
It depends – on when you started making contributions, when you plan to retire and when you expect to start making withdrawals.
Employees with 401(k) accounts should only be worried about falling stocks if they need the money right now – either for retirement living expenses or for other emergency reasons. If you don’t need to take money out soon, there’s usually no reason to panic. History has shown that markets can rebound quickly; short-term drops often don’t signal long-term trends.
So even if you are a baby boomer heading for retirement and your 401(k) has taken a hit in recent weeks, don’t panic. Bear in mind the truism that stock markets can always go down as well as up.
History suggests that, depending on your plans and timing for retirement, working strategically with a trusted financial adviser on your 401(k) retirement savings is a sound approach – especially during volatile stretches like the one the stock market has seen in recent weeks.
This article is for informational purposes and does not constitute financial advice. Consult with a qualified financial adviser before making financial decisions.
Ronald Premuroso does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The 94 nuclear reactors currently operating at 54 power plants across the U.S. continue to generate more radioactive waste. Public and commercial interest in nuclear power is rising because of concerns regarding emissions from fossil fuel power plants and the possibility of new applications for smaller-scale nuclear plants to power data centers and manufacturing. This renewed interest gives new urgency to the effort to find a place to put the waste.
In March 2025, the U.S. Supreme Court heard arguments related to the effort to find a temporary storage location for the nation’s nuclear waste – a ruling is expected by late June. No matter the outcome, the decades-long struggle to find a permanent place to dispose of nuclear waste will probably continue for many years to come.
I am a scholar who specializes in corrosion; one focus of my work has been containing nuclear waste during temporary storage and permanent disposal. There are generally two forms of significantly radioactive waste in the U.S.: waste from making nuclear weapons during the Cold War, and waste from generating electricity at nuclear power plants. There are also small amounts of other radioactive waste, such as that associated with medical treatments.
Nuclear waste is stored in underground containers at the Idaho National Laboratory near Idaho Falls. AP Photo/Keith Ridler
Waste from weapons manufacturing
Remnants of the chemical processing of radioactive material needed to manufacture nuclear weapons, often called “defense waste,” will eventually be melted along with glass, with the resulting material poured into stainless steel containers. These canisters are 10 feet tall and 2 feet in diameter, weighing approximately 5,000 pounds when filled.
For now, though, most of it is stored in underground steel tanks, primarily at Hanford, Washington, and Savannah River, South Carolina, key sites in U.S. nuclear weapons development. At Savannah River, some of the waste has already been processed with glass, but much of it remains untreated.
At both of those locations, some of the radioactive waste has already leaked into the soil beneath the tanks, though officials have said there is no danger to human health. Most of the current efforts to contain the waste focus on protecting the tanks from corrosion and cracking to prevent further leakage.
A look inside a cooling pool for spent nuclear fuel rods.
Waste from electricity generation
The vast majority of nuclear waste in the U.S. is spent nuclear fuel from commercial nuclear power plants.
Before it is used, nuclear fuel exists as uranium oxide pellets that are sealed within zirconium tubes, which are themselves bundled together. These bundles of fuel rods are about 12 to 16 feet long and about 5 to 8 inches in diameter. In a nuclear reactor, the fission reactions fueled by the uranium in those rods emit heat that is used to create hot water or steam to drive turbines and generate electricity.
After about five years, the fuel bundles are removed, dried and sealed in welded stainless steel canisters. These canisters are still radioactive and thermally hot, so they are stored outdoors in concrete vaults that sit on concrete pads, also on the power plant’s property. These vaults have vents to ensure air flows past the canisters to continue cooling them.
Even reactors that have been decommissioned and demolished still have concrete vaults storing radioactive waste, which must be secured and maintained by the power company that owned the nuclear plant.
Salt spray from the ocean can corrode waste containers at nearby nuclear waste storage sites, like this one at the San Onofre Nuclear Generating Station in California. Allen J. Schaben/Los Angeles Times via Getty Images
The threat of water
One threat to these storage methods is corrosion.
Because they need water to both transfer nuclear energy into electricity and to cool the reactor, nuclear power plants are always located alongside sources of water.
In the U.S., nine are within two miles of the ocean, which poses a particular threat to the waste containers. As waves break on the coastline, saltwater is sprayed into the air as particles. When those salt and water particles settle on metal surfaces, they can cause corrosion, which is why it’s common to see heavily corroded structures near the ocean.
At nuclear waste storage locations near the ocean, that salt spray can settle on the steel canisters. Generally, stainless steel is resistant to corrosion, which you can see in the shiny pots and pans in many Americans’ kitchens. But in certain circumstances, localized pits and cracks can form on stainless steel surfaces.
In recent years, the U.S. Department of Energy has funded research, including my own, into the potential dangers of this type of corrosion. The general findings are that stainless steel canisters could pit or crack when stored near a seashore. But a radioactive leak would require not only corrosion of the container but also of the zirconium rods and of the fuel inside them. So it is unlikely that this type of corrosion would result in the release of radioactivity.
Not only must a long-term site be geologically suitable to store nuclear waste for thousands of years, but it must also be politically palatable to the American people. In addition, there will be many challenges associated with transporting the waste, in its containers, by road or rail, from reactors across the country to wherever that permanent site ultimately is.
Perhaps there will be a temporary site whose location passes muster with the Supreme Court. But in the meantime, the waste will stay where it is.
Imagine nearly every seat in Philadelphia’s Wells Fargo Center – over 20,000 seats – is empty. That’s the scale of Pennsylvania’s projected shortfall of registered nurses by 2026, according to the Hospital and Healthsystem Association of Pennsylvania.
Pennsylvania’s nursing shortage is the result of long-standing issues in education, workforce retention and health care delivery.
Education bottlenecks: Nursing schools in Pennsylvania and nationwide turn away thousands of qualified applicants each year due to faculty shortages, limited classroom space and scarce clinical placements. More than 65,000 qualified applications were turned away from U.S. nursing programs in 2023 alone, according to a report from the American Association of Colleges of Nursing.
A key issue is the lack of preceptors. Preceptors are experienced nurses who teach students in real-world settings. A shortage of preceptors directly limits how many students can complete their education.
Uneven distribution: While Pennsylvania may have a sufficient number of licensed nurses on paper, not all of those nurses still work in the profession. And those who do are not evenly spread across roles or locations. Rural hospitals, long-term care centers, behavioral health settings and maternal-child health units are experiencing acute shortages.
Many cite stress as their reason for leaving the profession. New graduates often leave within their first two years, feeling unprepared for the emotional and operational realities of practice.
In Pennsylvania, the shortage has created a feedback loop. Understaffing increases pressure on those who remain. A 2023 National Council of State Boards of Nursing survey found that 41% of nurses under age 35 reported feeling emotionally drained.
This turnover erodes institutional knowledge, increases costs for onboarding and overtime, and limits the capacity to mentor incoming staff.
What’s being done
To help address the problem, Pennsylvania Gov. Josh Shapiro in March 2025 proposed a US$5 million Nurse Shortage Assistance Program. If approved by the General Assembly, the program would cover tuition costs for nursing students who commit to working in Pennsylvania hospitals for three years after graduation.
HB 390 is also currently under review in the Pennsylvania General Assembly. It aims to establish a $1,000 tax deduction for licensed nurses who serve as clinical preceptors.
To meet the growing demand for nurses, Pennsylvania hospitals are partnering with colleges and universities to expand clinical training capacity, streamline pathways into nursing and develop innovative education models such as hybrid and accelerated programs.
Hospitals statewide are also offering substantial sign-on bonuses, loan forgiveness programs, housing stipends and flexible scheduling to attract nurses.
They are also increasingly using virtual nursing, telehealth services and AI-driven administrative tools to reduce nurses’ workloads, enhance patient interactions and address staffing gaps.
Continuing Title VIII Nursing Workforce Development Programs are another solution. These federal grants, reauthorized under the March 2020 CARES Act, help fund nursing pathways and the availability of high-quality nursing care for patients nationwide.
Research consistently demonstrates that care provided by nurses who have earned a bachelor’s degree or higher directly leads to better patient outcomes, improved safety and overall health. A commitment to shoring up the nurse pipeline in Pennsylvania is a commitment to improving the well-being of individuals and communities across the state.
Board Member for the American Association of Colleges of Nursing. The views, analyses, and conclusions expressed in this article are those of the authors and do not necessarily reflect the official policy or positions of the American Association of Colleges of Nursing.
Kymberlee Montgomery does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
When the Trump administration announced in February 2025 that it was cutting 10% of staff at the Centers for Disease Control and Prevention, it seemed that a small but storied program within it called the Epidemic Intelligence Service – also known as the CDC’s disease detectives – would also be cut. A few days later, the program was reinstated. And in March, Epidemic Intelligence Service officers traveled to Texas to support the state’s public health officials in fighting the ongoing measles epidemic.
The Epidemic Intelligence Service is a dynamic crisis response team. Just as firefighters rush into burning buildings to save lives, this team’s specialists mobilize both domestically and internationally to help curb disease outbreaks. But first and foremost, it is a training program that has produced some of the most highly trained and regarded public health experts in the country who have gone on to work at local and state public health offices, academic departments and international health organizations.
We are public health experts – one an experienced professor who served in the Epidemic Intelligence Service from 1994-1996, and the other an early career trainee who was accepted to its incoming class of 2025-2027. Although it’s not clear how the administration will enact its new vision for the CDC, we hope a continued urgency to identify and fight infectious disease threats – the essence of the Epidemic Intelligence Service – remains a national priority.
A program rooted in national security
The Epidemic Intelligence Service is a two-year fellowship open to physicians, scientists and other health professionals. The program accepts 50 to 80 people each year.
Students participate in an Epidemic Intelligence Service officer training course in July 1955. Dr. Alex Langmuir, CDC
The Epidemic Intelligence Service was founded in 1951, just five years after the launch of the CDC, in response to Cold War-era concerns about biological warfare. Alexander Langmuir, its founder, was the CDC’s chief epidemiologist and has often been called the father of shoe-leather epidemiology – on-the-ground, out-of-the-office disease investigation through extensive field work and engagement with affected populations.
In a report announcing the unit’s establishment, Langmuir and a colleague wrote that one of the “problems that would emerge in the event of biological warfare attacks” was “the dearth of trained epidemiologists.” They recognized the urgent need for a specialized team capable of rapidly identifying and responding to potential bioterrorism threats.
The new division soon evolved to address a wide range of civilian public health threats. In 1955, as one of its first major actions, the program’s officers were tasked with investigating an outbreak of polio in children that started just as the first mass vaccination campaign against the disease launched. Within weeks, Epidemic Intelligence Service officers helped trace the outbreak to a few batches of a vaccine manufactured by a California company called Cutter Laboratories in which the virus had not been properly killed. The incident led to increased safety regulations in vaccine production and boosted public confidence, paving the way to eliminating polio from the U.S. in the ensuing decades.
And in 1981, a tip from an Epidemic Intelligence Service officer serving in the Los Angeles County Health Department led to the first description of a new disease that would become the global epidemic of HIV-AIDS. The program’s officers went on to help lead foundational studies on prevalence, prevention and treatment of AIDS around the world.
Beyond vaccines and immunization
Even from its earliest days, vaccine-preventable and infectious diseases were far from the Epidemic Intelligence Service’s only focus. During the program’s first 15 years, its officers were involved in a wide swath of epidemiological investigations in areas including lead paint exposure, a cancer cluster’s connection to birth defects, family planning practices and famine relief.
These activities established the group’s priorities of addressing chronic diseases and population health – goals that have also driven its involvement in disaster response efforts, including hurricanes Harvey, Irma, Maria and Katrina, as well as the terrorist attacks on Sept. 11, 2001.
The Epidemic Intelligence Service has also played a key role in keeping the nation’s food supply safe. It investigates major outbreaks of foodborne illnesses, helping to identify which foods are implicated so that contaminated products are removed from shelves and disseminating investigation findings that inform food safety policy. For example, officers investigated a 1993 outbreak of Escherichia coli O157:H7 linked to undercooked hamburgers at several Jack in the Box restaurants. The outbreak sickened more than 700 people and resulted in the deaths of four children. It also led to major food safety reforms including expanded meat and poultry inspection nationwide.
The CDC’s “disease detectives” train at sites across the U.S. and abroad.
A legacy of impact
The importance of an expert, nimble team of disease detectives has only increased. Over the past few years, Epidemic Intelligence Service officers have responded to countless public health threats.
Perhaps the Epidemic Intelligence Service’s most significant legacy has been in building a worldwide network of deep epidemiological expertise. To date, the program has trained more than 4,000 disease detectives, and its officers have collectively conducted thousands of outbreak investigations.
All of these activities, at home and abroad, have shaped health policy in crucial ways that in turn protect people’s health. It is increasingly clear that disease outbreaks will continue to occur in the U.S. and abroad. In our view, the Epidemic Intelligence Service’s history provides rich evidence of its value.
I am currently a member of the EIS Alumni Association Executive Committee.
Casey Luc does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA – By Rose Cuison-Villazor, Professor of Law and Chancellor’s Social Justice Scholar, Rutgers University – Newark
U.S. Immigration and Customs Enforcement officers restrain a detained person on Jan. 27, 2025, in Silver Spring, Md. Associated Press
News reports of noncitizens unexpectedly being detained by Immigration and Customs Enforcement, or ICE, have dominated headlines in recent weeks. Those being detained include noncitizens who hold lawful permanent residency status.
One story concerns the March 8, 2025, arrest of Mahmoud Khalil, a lawful permanent resident and recent Columbia University graduate, who was initially detained in New Jersey and transported to Louisiana. He remains there while he challenges his detention and the immigration judge’s April 11 decision that he can be deported.
And on March 25, ICE agents arrested Rumeysa Ozturk, a Turkish national and doctoral student at Tufts University, while she was walking on the streets of Somerville, Massachusetts. She is currently detained in Louisiana.
ICE agents have also detained hundreds of Venezuelan noncitizens, among other people, and removed them to El Salvador since March, resulting in high-profile legal cases that are making their way through the court system. And the U.S. has revoked the visas of at least 300 foreign students this year.
At the most basic level, ICE has broad, sweeping powers to question, arrest, detain and process the deportation of any noncitizen. But ICE is still bound by certain constitutional and other legal restrictions, including noncitizens’ rights to make their case in court to remain in the U.S.
ICE’s operating budget from Oct. 1, 2024 through Sept. 30, 2025 is approximately US$8 billion, a relatively small portion of Homeland Security’s $107.9 billion total budget for that same time period.
With more than 20,000 immigration enforcement officers stationed across the country, ICE’s day-to-day work is divided into three main areas – homeland security investigations, enforcement and removal operations, and legal representation for the government in an immigration court.
The branch focused on homeland security investigations probes transnational crime and terrorism-related activities. ICE’s second area of work focuses on apprehending and removing noncitizens who are in violation of immigration laws. Finally, staff at the Office of the Principal Legal Advisor represent the government in immigration hearings, particularly what is called removal proceedings, or deportation.
The Immigration and Nationality Act outlines the federal government’s authority to regulate immigration and provides immigration agencies, including those established at a later date, like ICE, broad powers to enforce these restrictions. One key part of the act allows ICE officers to interrogate any individual they believe to be a noncitizen regarding their right “to be or remain” in the U.S.
The Immigration and Nationality Act also says that any noncitizen can be deported for engaging in activities that the secretary of state believes “would have potentially serious adverse foreign policy consequences for the United States.”
Secretary of State Marco Rubio used the same provision to claim that Khalil’s involvement in protests at Columbia University had negative U.S. foreign policy consequences.
Detain and arrest
ICE officers have broad power to arrest noncitizens in the U.S.
With a warrant, they may arrest noncitizens who are in the country without legal permission, including foreign students whose visas are revoked. These warrants are administrative warrants signed by an immigration enforcement supervisor – not a judge.
ICE officers have long been able to carry out these arrests in plain clothes – although using face coverings, as ICE officers who arrested Ozturk and Khalil did, is a new and, I think, startling development.
Still, ICE’s powers to interrogate, arrest and detain noncitizens are not absolute.
For one, immigration law requires noncitizens to be notified in writing that they are being processed for a removal proceeding, so they can appear before an immigration judge and have the opportunity to challenge the government’s claim that they should be deported.
Noncitizens have the right to legal representation – albeit not paid for by the U.S. government – in an immigration court. Ultimately, an immigration judge, and not ICE, determines if a noncitizen should be deported.
People take part in a protest on March 27, 2025, in Newark, N.J., against the arrest and threatened deportation of Mahmoud Khalil, a lawful permanent resident. Kena Betancur/VIEWpress/Corbis via Getty Images
The Constitutional limits on ICE
Crucially, ICE is bound by various constitutional provisions that protect individual rights, including the rights of noncitizens who are living in the U.S. without legal authorization.
Three particular constitutional amendments impose different checks on ICE’s power.
The First Amendment, for example, protects individuals’ rights to free speech, assembly and religion. Consequently, ICE cannot target individuals – even if they are noncitizens living in the U.S. without legal permission – for simply participating in peaceful protests or writing something for the public. Rubio has said that he revoked Ozturk’s visa not because of her writing, but because she participated in “activities that are counter to our foreign … policy.” He also relied on this provision to support the deportation of Khalil.
But Ozturk and Khalil’s lawyers contend that their activities were protected speech. Ultimately, a federal district judge has the power to determine whether ICE targeted them for exercising their First Amendment rights.
The Fourth Amendment safeguards the right of individuals “to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.” ICE must first obtain a search warrant, signed by a judge, before entering a person’s home or private areas of a workplace.
The Fourth Amendment’s protection against unreasonable searches and seizures also applies in public spaces. So, law enforcement officers must have reasonable suspicion to stop a person, or probable cause to arrest a person without a warrant when they believe that person has committed a crime or violated the law and is likely to escape. The Immigration and Nationality Act also requires ICE officers to have an arrest warrant unless they have reason to believe that the noncitizen may flee before they can get one.
It is not clear whether ICE officers presented Khalil and Ozturk with arrest warrants before they were detained outside their home and on the street, respectively.
The Fifth Amendment guarantees the right of all individuals against self-incrimination. This means that people detained by ICE have the right to remain silent during interrogations.
It also means that before noncitizens can be deported, they must have the opportunity to go before an immigration judge to challenge the government’s plan to remove them, or may file a case before a federal judge to challenge their detention and deportation.
ICE’s power is not absolute
Even with an annual budget of approximately $8 billion, ICE does not have the capacity to pursue all immigration law violations.
In this context, recent Trump administration initiatives could significantly increase ICE’s reach. For example, an April 2025 memorandum of understanding between the Internal Revenue Service and DHS allows the IRS to share tax information of immigrants living in the U.S. without legal authorization. This could help ICE more easily identify, locate and arrest noncitizens living in the U.S. illegally.
Despite its considerable power, ICE’s authority is not without checks and balances.
But as a longtime scholar of immigration law, I believe ICE officers’ recent actions raise serious concerns that it is exceeding the bounds of its legal authority and the constitutional limits that are intended to protect individual rights.
Rose Cuison-Villazor does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: United Kingdom – Executive Government & Departments
Scientists comment on the British Steel factory situation.
Dr Julian Steer, a Research Fellow from Cardiff University’s School of Engineering, said:
How hot do the blast furnaces get? How do the blast furnaces work? And why do we need these certain ores/materials to keep them running?
“The hottest part of the furnace can reach temperatures of up to 2200°C. The blast furnace converts iron oxide, supplied as iron ore, to iron by a counter-current chemical reduction reaction in which raw materials descend through the furnace as hot gases rise up through it. The blast furnace is a very well optimized process that requires the reactions to occur at an even rate throughout. To achieve this, raw materials are selected based on the properties needed to produce iron continuously and efficiently.”
Why are the blast furnaces so difficult to switch back on if they turn off?
“The size, dimensions, and complex reactions in the blast furnace mean that heat distribution and heat transfer through the furnace are absolutely critical to stable iron production. Raw materials are continuously added to the top of the furnace as hot molten iron is continuously tapped from the bottom; the sheer scale of this process means that the distribution of the heat through the furnace is critical at all times.”
Why is it crucial to mobilise these supplies of fuel and other raw materials?
“The production efficiency and stability of the whole process of iron production requires careful raw material selection to maintain consistent and uniform reactions through the furnace and process.”
What can the government do if these blast furnaces turn cold?
“If the furnace goes cold, the molten materials inside become solid, blocking the furnace and making any form of restart very difficult, costly and potentially terminally damaging to the furnace.”
Dr Abigail K Ackerman, Royal Academy of Engineering Research Fellow, Department of Materials, Imperial College London, said:
Blast Furnace Operation:
“A blast furnace is used to convert iron ore (hematite, Fe2O3) to pig iron (Fe) by mixing it with coke (carbon), limestone and hot air.
“Limestone is used to remove impurities, forming slag which is a waste material. The slag collects impurities, primarily silica, and is removed and used in construction materials like cement.
“The coke, which is a derivative of coal, reacts with the hot air, which is blown in at the bottom of the furnace at around 1000°C, and forms carbon monoxide (CO). The carbon monoxide reacts with the iron ore to produce molten iron and CO2, which is released as gas.
“The resultant molten iron is tapped out at the bottom of the furnace, and is referred to as pig iron.”
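For readers who want the chemistry compactly, the standard reactions behind Dr Ackerman’s description can be written as follows (a summary in conventional notation, not a quote from her):

$$
\begin{aligned}
\text{coke combustion:}\quad & \mathrm{C + O_2 \rightarrow CO_2}\\
\text{CO regeneration:}\quad & \mathrm{CO_2 + C \rightarrow 2\,CO}\\
\text{ore reduction:}\quad & \mathrm{Fe_2O_3 + 3\,CO \rightarrow 2\,Fe + 3\,CO_2}\\
\text{slag formation:}\quad & \mathrm{CaCO_3 \rightarrow CaO + CO_2}, \qquad \mathrm{CaO + SiO_2 \rightarrow CaSiO_3}
\end{aligned}
$$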
Blast Furnace Temperatures:
“Blast furnaces have ‘heat zones’ in order to drive the different chemical reactions which occur within them. They are set up in a large chimney-like structure and have three main zones:
“Top (throat) – 200°C to 600°C – raw materials are poured in.
“Middle (stack) – 600°C to 1200°C – the initial reduction of the iron ore occurs, generating gases (mainly CO); here the iron ore (Fe2O3) is progressively reduced to FeO.
“Middle (bosh) – 1200°C to 1600°C – the main chemical reaction occurs, where FeO is reduced to Fe. The slag forms here, where limestone reacts with impurities.
“Bottom (hearth) – up to 2000°C – hot air (1000°C to 1200°C) is blown in at the bottom of the furnace, causing the coke to combust and release heat and CO2.
“The molten iron and slag are collected. The slag is lighter than the molten iron, so it floats on top and can be collected by tapping, or drilling a hole, above the molten iron and allowing the slag to flow out.
“The molten pig iron is removed by tapping, or drilling, a hole in the bottom of the furnace, and flows through guide channels to be collected and transferred to a basic oxygen furnace (BOF) to mix with carbon and make steel.
“Tap holes are made roughly every couple of hours, and then plugged back up with a clay mixture to contain the heat and molten materials in the furnace.
Essential Materials:
“Coking coal, iron ore and limestone are essential to keep the blast furnaces in Scunthorpe running, and these are the critical raw materials that are being sourced. Without these materials in the correct amounts, the chemical reaction will be disrupted and the furnace will cool as the chemical reaction absorbs heat, which is provided by the burning of coke.”
Why can’t you let it go cold?
“The high temperature of the blast furnace means the iron and slag are molten at the bottom; they are in liquid form at around 1500°C. If the furnace is allowed to cool, these materials solidify and can stick to the interior of the furnace. When the metal cools it contracts, which can damage the lining of the furnace, requiring expensive repairs to the furnace interior before it can be heated up again.
“Additionally, blast furnaces have various inlets and outlets for pumping in hot air and extracting the molten material. When this solidifies, these can become blocked and are extremely difficult and costly to fix.
“The chemical reaction is disrupted when the furnace goes cold, and restarting it can be complicated due to the heat required to melt the solidified materials, and the balance of gas and materials needed to obtain the correct chemical reaction.
“Finally, a large amount of fuel is required to restart a furnace, which is costly, and it can take anything from days to weeks to get the furnace back up to temperature and getting the correct chemical reaction to occur. It takes much more energy to melt the materials back down than to keep them at temperature. And, of course, there’s a loss of production which costs money.”
Why is it crucial to keep the Scunthorpe furnaces running?
“The Scunthorpe blast furnaces are the last remaining blast furnaces operating in the UK, and therefore the only method for the UK to produce ‘virgin’ steel, which is steel that has not been used in any other process. Other steel producers in the UK, such as Tata, have moved to using recycled steel and electric arc furnaces (EAFs). Without the Scunthorpe plant, there will be an impact on the supply chain of steel to essential services such as construction, rail and defence. There will also be an impact on the Scunthorpe community, with a loss of work for the many steelworkers.”
What can the Government do if they turn cold?
“If the furnaces go cold, the options are to restart them, which will be more costly than obtaining the raw materials required to continue steel production, due to the damage that will occur within the furnace from the solidification of the iron and slag, and the large amount of energy required to restart the furnaces.
“The government can choose to change the type of steel production to, for example, recycled steel using EAFs, like Port Talbot. However, this would most likely result in job losses, economic impact on the people of Scunthorpe and the UK economy, and significant disruption to the UK supply chain. There is also not enough scrap steel to supply EAFs, so primary virgin steel would need to be sourced from elsewhere. And the National Grid is not set up to supply the energy required to fuel EAFs at this scale, so it would be a slow and costly option.
“There is also the option to start producing green steel, which uses hydrogen as a reduction agent rather than coal-based coke. However, this requires a large amount of hydrogen, and the UK hydrogen economy is not currently set up for this scale of production. Nevertheless, this is the best option for long-term CO2 goals.
“Finally, there is the option to close British Steel. This would again have a significant impact on the UK economy, supply chain and the local area. The loss of steel sovereignty could impact the supply chain in the long run as there would be an increased dependence on external steel suppliers, which is impacted by geopolitics.”
Prof Barbara Rossi, Associate Professor of Engineering Science, University of Oxford, said:
“Steel is the most commonly used metal in the world. Blast furnaces and electric arc furnaces are present everywhere, all over the world. Worldwide, 1.9 billion tonnes of crude steel are produced per annum. The UK in 2020 (then still an EU member state) was the 8th-largest steel producer in the European Union, which produced in total more than 150 million tonnes of steel in 2019 – only 8% of the world total. Japan alone produced roughly 100 million tonnes, while the biggest steel-producing country is currently China, which accounted for over 50% of world steel production in 2020. Globally, the steel industry emits 25% of all industrial greenhouse gases – more than any other industrial sector.
“The construction sector is the largest steel-using sector and that is not likely to change. It accounts for more than 50% of world steel demand, with the other major uses being the manufacture of vehicles, industrial equipment and final goods. The global population is forecast to increase to more than 9 billion people over the next 40 years. The population growth rate in Europe (and the UK) is only expected to start decreasing slightly by 2050. And, by then, about 75% of people will live in cities (~50% today). We still have to build the buildings and infrastructure for these cities and replace those that are damaged. As our country needs more and more new homes, buildings and infrastructure, we will have to go higher, more slender and leaner in densely populated areas, and the need for ultra-strong and highly ductile materials like steel will become increasingly pressing.
“Steel is indefinitely recyclable and, when it is recycled, it does not lose its performance – an extraordinary ability that is, inexplicably, often ignored. That isn’t the case for most construction materials: other than steel, aluminium or stainless steel, you can only recycle glass indefinitely, and then only provided that you sort the type of glass appropriately. Steel is not just downcycled into a less noble material: just as an old jewel can be turned into a new one, steel can be melted over and over again.
“Recycled steel is one of the industry’s most important raw materials. We have accumulated almost 1 billion tonnes of steel in the UK alone, all of which must eventually be recycled, and, today, we generate about 10 million tonnes of scrap a year. Studies show that in the next 10-15 years, the availability of steel scrap will rise from 10 million to 20 million tonnes (global flows of steel scrap are likely to treble in the next 30 years) because all the steel made in the past will be recycled. In 2018, in Europe, scrap availability exceeded 110 million tonnes, showing that there is no scrap shortage. Despite the UK’s weak position in the scene of steel production, this is one of the advantages from which it could profit in the current global change in steel production.
“We have already produced the steel that we will need tomorrow. With increased availability of scrap and under our nation’s commitment to cut its domestic emissions by 2050, we can anticipate a global shift from blast furnace to electric arc furnace production. Roughly two-thirds of today’s liquid steel is made from iron ore, with the rest made from scrap, but at present more than 50% of that scrap originates from the manufacturing process rather than from end-of-life recovery. This is even though (1) steel products have an approximate life horizon of 35-40 years on average before being scrapped, and (2) apart from the roughly 10% of steel that is buried (e.g., oil pipes or building foundations), most end-of-life steel can easily be collected for recycling. Even if total demand for steel increases, one can demonstrate that if most old steel is recycled, future requirements could be met entirely through increased production from scrap via electric arc furnaces. In America today, more than 50% of all domestic steel demand is already met by recycling domestic scrap. And since steel recycling emits significantly less greenhouse gas than blast furnace production – all the more so because the UK already has a low-emissions electricity grid with high potential for further improvement, so recycling steel in the UK today cuts emissions by more than two-thirds compared with global average primary steel – the UK’s need for steel recycling can be expected to grow significantly and rapidly. This will increase with more renewable generation capacity and will grow strategically important as global pressure to alleviate climate change increases.
“The UK’s commitment to decarbonization needs to address the emissions which are released from within UK borders. Although closing steel plants in the UK would lead to a reduction in those emissions, our future demand for steel may lead to higher global emissions if the emissions intensity in other countries is greater than that in the UK. Rather than putting extensive effort into technologies for reducing emissions in primary production, which require major capital investment, a more effective contribution to global mitigation would be to produce our domestic steel through electric arc furnaces combined with a massive decrease in their emissions, which are directly linked to the emissions intensity of local electricity generation.
“There is nonetheless a technical limitation on the extent to which scrap can be substituted for iron ore: contaminants. Scrap composed of large pieces, such as that from construction, has a well-controlled composition, while scrap collected from mixed waste streams has higher levels of contamination. The latter is usually sourced when scrap prices are high. As a consequence of contamination, the degree to which recycled steel can replace primary steel is capped by (a) imperfect control of metal composition in scrap steel collection and (b) the inability of today’s technologies to adjust the chemical composition of liquid steel produced with electric arc furnaces. Therefore, steel scrap supplies have to date been mostly absorbed by the lowest-grade products (such as reinforcement bars).
“It is possible to vaporise unwanted metal contaminants from liquid steel by vacuum arc re-melting. This is already a commercial strength in the UK and is used for making some of the highest-quality steels, for example for aerospace components. The innovation opportunity is to replicate this success at higher speed and lower cost. Processes other than vacuum arc re-melting have been tested in research laboratories but were abandoned due to lack of economic incentive. The UK, with its high volumes of scrap and its commitment to act on climate mitigation, is well placed to lead the development of these technologies.
“We cannot replace steel: it’s ridiculously cheap, ultra-strong and highly ductile, and completely recyclable, fitting into any story about a circular economy. Not a single construction material taken alone can compete with steel today. But we can produce low-carbon steel and build better structures that last longer and do not harm our environment. If the UK recycled its own scrap to deliver high-quality steel satisfying its domestic demand in a closed loop, it would massively decrease UK iron and steel emissions. This necessitates (a) establishing low-carbon steelmaking plants based on electric arc furnaces, (b) developing technologies to make high-quality steel from recycled scrap – i.e., examining and mitigating the causes of scrap contamination and developing the means to control the chemical composition of liquid steel made via electric arc furnace – and (c) developing innovative business models to allow UK downstream steel supply chains to prosper.”
Declared interests
Dr Julian Steer: in receipt of funding from British Steel to measure, and optimise, the performance and selection of their injection coals.
For all other experts, no reply to our request for DOIs was received.