By Associate Professor Grant Blashki, University of Melbourne
From job fears to existential dread, artificial intelligence is triggering a new kind of anxiety. Here's why you're not alone in feeling unsettled – and what actually helps.
A patient said to me the other day, half-smiling but clearly unsettled: “I think I’ve got anxiety about AI.”
They weren’t having a panic attack or describing clinical anxiety. What they were expressing was a persistent sense of unease that many of us are feeling right now.
A sense that the world is changing very quickly, that the systems we live within are being redesigned around us and that most of us don’t feel particularly consulted or prepared for a life increasingly immersed in artificial intelligence (AI).
If you’re feeling that way, you’re not alone.
A recent survey shows that the general public is more concerned about AI than AI experts – particularly around jobs and human connection – while both groups express strong concern about misinformation.
AI anxiety isn’t a single fear. It’s a cluster of related concerns that vary by a person’s stage of life, technical literacy, work and values. However, many of these worries tend to fall into a few common themes.
Fear of economic and identity disruption
Understandably, one of the biggest concerns is job disruption.
AI is increasingly capable of performing tasks that were once the preserve of humans – drafting text, analysing data, writing code, summarising meetings, interpreting images, handling customer interactions and even helping me fix my barbecue with what appears to be impressive competence.
Whether AI will ‘take our jobs’ is a more complex question.
The International Monetary Fund estimates that almost 40 per cent of jobs globally will be affected by AI, with advanced economies facing higher exposure because more work involves cognitive tasks.
Meanwhile, the World Economic Forum’s Future of Jobs Report 2025 projects substantial labour-market churn by 2030, with 170 million jobs created and 92 million displaced – net growth overall, but major transition pressures along the way.
Although these projections suggest overall job growth, other analyses point to AI-driven job losses, especially for young people. Regardless, the lived experience for many people is often one of disruption – local, personal and immediate.
For many of us, work provides more than income. It provides identity, purpose and social connection. Anxiety arises not only from fear of unemployment, but from uncertainty about relevance and value in a world increasingly shaped by AI.
Loss of control
A second major concern is what might be called the ‘Big Brother’ effect – the growing role of AI systems in informing, and sometimes making, decisions that affect people’s lives.
These include hiring, credit, insurance, welfare compliance and even healthcare prioritisation.
The worry is not simply that AI systems may be wrong. It’s that decisions may be opaque, difficult to challenge and poorly explained – effectively occurring inside a black box.
Although the Organisation for Economic Co-operation and Development’s (OECD) AI Principles explicitly emphasise human agency and oversight, transparency and accountability as core requirements for trustworthy AI systems, global trends suggest that guardrails are often weakened as companies and nations race for dominance in a competitive AI market.
Misinformation and manipulation
AI has dramatically lowered the cost of producing highly realistic – but entirely false – content. While this can be entertaining at times, it becomes serious when convincing images, audio and video are used to influence people’s decisions.
Deepfake technology is now used for fraud, impersonation and misinformation.
Recent reporting by The Guardian described deepfake scams as occurring on an “industrial scale”, with high-quality fake content accessible to non-experts.
In Australia, this concern has become concrete.
ABC News reported deepfake advertisements impersonating a leading diabetes specialist, promoting unproven supplements and discouraging evidence-based treatment – a clear public-health risk.
When people can no longer reliably distinguish authentic information from synthetic content, trust in institutions, expertise and online information degrades – and anxiety follows.
Privacy and surveillance
Privacy has been gradually eroding for years, and this is amplified by AI’s ability to analyse large volumes of personal data – including behavioural, biometric and location data – often without us fully understanding how that data is used.
Pew Research shows persistent public concern about data misuse, impersonation and loss of privacy associated with AI systems.
This anxiety is not limited to government surveillance; it also reflects unease about corporate data practices, profiling, targeted persuasion and information asymmetries between people and institutions.
AI agents ‘taking over’
For most people, fears of an AI apocalypse are not everyday concerns, but they surface during high-profile stories about AI behaving unpredictably or operating autonomously.
One recent example is Moltbook, a platform marketed as a social network for AI agents, which attracted widespread commentary about some weird and disturbing interactions between AI systems.
Reuters reported that the platform had a major security vulnerability that exposed private messages, thousands of email addresses, and over a million credentials, highlighting basic governance and security failures.
These episodes often attract dystopian interpretations reminiscent of science-fiction narratives. But the more immediate risks tend to be practical rather than dramatic: poor security, weak oversight, unclear responsibility and premature deployment.
Concentration of power
Another source of anxiety is the concentration of AI capability among a small number of firms and countries. Many people worry about a future in which a handful of technology giants hold disproportionate power and wealth.
The OECD has noted that generative AI markets may exhibit a ‘winner takes all or most’ dynamic, reinforcing market power and potentially increasing inequality.
When powerful technologies are perceived as unavoidable and foundational, people reasonably ask who benefits, who bears the risk and how accountability is maintained.
Education integrity under pressure
AI anxiety is particularly evident in education.
The concern is not only academic misconduct, but whether assessment continues to measure understanding, reasoning and learning when high-quality outputs can be generated instantly.
A recent article in The Australian reported widespread student use of AI in higher education, including an experiment in which around 80 per cent of 40 student assignments had a high probability of being AI-generated.
This is a global issue.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) has warned that generative AI is advancing faster than institutional readiness, raising concerns about privacy, equity and the long-term future of education.
At its core, this anxiety reflects concern about the purpose of education itself – not just credentialing, but the development of judgment, critical thinking and intellectual independence.
Authenticity and meaning
Finally, there is a quieter concern about authenticity and meaning.
As generative AI becomes capable of producing fluent writing, images and conversation at scale, some people worry less about being replaced and more about being diminished.
They question whether human creativity, effort and connection will continue to be recognised and valued when machine-generated output is ubiquitous.
Research from the Pew Research Center captures this unease.
Many people express concern that AI may reduce human interaction, weaken social connection and devalue human skills and creativity, even while acknowledging its potential benefits.
These concerns are not anti-technology; they reflect a desire for a future in which human contribution remains visible and meaningful alongside increasingly capable machines.
What actually helps if you’re worried about AI
When people raise anxiety with me – whether it’s about their health, the environment or technology – it usually eases once concerns become specific and actionable. AI is no different.
So, here are some tips:
- Name the worry: ‘AI’ is too broad to be useful. Are you worried about your job, misinformation, privacy, education or decision-making without oversight?
- Clean up your information diet: AI anxiety is often driven by headlines rather than evidence. Limit sensationalist coverage, be cautious with viral screenshots and rely on a small number of trusted sources.
- Build your AI literacy: You don’t need to be technical, but you do need to understand how AI is used in your own field – where it helps, where it fails and how outputs should be checked.
- Ask for guardrails: Anxiety rises when accountability is unclear. Ask who is responsible when AI is used, how errors are handled and what safeguards exist. Support regulation – even at your workplace or in your home – that focuses on transparency, safety and fairness.
At this moment in history, AI anxiety is not irrational. It reflects rapid change intersecting with our work, education, relationships and identity.
Neither denial nor panic is helpful. Engagement, understanding and shared responsibility are.
Note: This article was originally published on the University of Melbourne’s Pursuit research news website and republished on DIW with permission. We have been informed that no AI tools were used in creating the text.
Reviewed by Ayaz Khan.
