AI-Powered Cyber Attacks Are Escalating, But Most IT Teams Aren’t Ready

A finance employee joins a routine video call. The CEO is on the screen. So, too, is the CFO. They authorize a transfer of more than $25 million. The finance employee complies. Later, a startling truth comes to light: neither executive was ever on the call. Instead, the entire meeting was a deepfake, engineered with AI tools that mimicked the executives’ faces and voices in real time.

Unfortunately, this isn’t a hypothetical scenario. It happened to a multinational company in Hong Kong. And, according to new data from identity and access platform Frontegg, it might be a glimpse into the near future for thousands of organizations that currently rely on outdated cybersecurity playbooks.

In its latest report, Frontegg surveyed 1,019 IT professionals to gauge how organizations are responding to the rapid emergence of AI-driven threats. The findings are unsettling. On one hand, generative AI is supercharging the speed, scale, and sophistication of cyberattacks. On the other, a majority of IT teams admit they are neither equipped for nor actively preparing to counter these augmented attacks.

The Changing Nature of Cyber Threats

The past two years have witnessed tremendous advances in generative adversarial networks (GANs), large language models (LLMs), and multimodal AI. These technologies are now widely accessible and, in some cases, weaponized. Today, attackers often use them to produce realistic fake media, crack passwords at scale, or conduct phishing campaigns that are indistinguishable from legitimate communications.

According to Frontegg’s research:

  • 35% of IT professionals say their organization has experienced a rise in cyberattacks in the past year.
  • Of those, 51% attribute the increase directly to AI-enhanced tools.
  • 44% report that generative AI has enabled deepfake impersonation attacks (e.g., voices, faces, even live video).
  • 42% say AI has accelerated password cracking, automating brute-force methods at speeds humans can’t match.

This isn’t just an emerging issue. More than one in five IT professionals say they have personally witnessed over 10 AI-driven cyberattacks in the past year alone.

As attackers adopt AI to scale operations and bypass traditional defenses, the rules of cybersecurity are rapidly shifting. Unfortunately, this shift is not in favor of the defenders.

When Your CEO Becomes the Threat Vector

One of the most alarming trends is the rise in impersonation using AI-generated media. Over a third of IT professionals report phishing emails that spoof their company’s leadership, sometimes using synthetic voice or video.

A widely reported case involved scammers cloning the digital likeness of top executives to execute a multi-million-dollar heist through a convincingly faked Zoom meeting. And the trend is growing: as Frontegg’s report notes, 34% of IT teams encountered phishing attempts that featured their CEO’s face or voice.

The FBI has echoed similar concerns. It warns that cybercriminals are increasingly using AI to craft persuasive social engineering attempts. These schemes range from fake hostage videos to deepfake messages from government officials. While trust was once a defensive bulwark in corporate communications, it is now one of the more exploited attack surfaces.

Authentication: The Achilles’ Heel

Most authentication systems still rely on passwords, despite years of warnings about their vulnerabilities. AI is exploiting that gap.

Frontegg’s survey reveals:

  • 51% of IT professionals see passwords as the weakest link in their security architecture.
  • 57% report delays in implementing passwordless systems, citing complexity (34%), cost (27%), and internal resistance (19%).
  • Only 32% have implemented passwordless authentication at all.

Even CAPTCHA challenges are faltering. Nearly half of respondents believe CAPTCHA is no longer effective against AI-driven bots, and only a third still trust it.

Traditional login systems weren’t built to defend against intelligent automation. Yet many teams remain stuck, whether because of legacy systems, cost considerations, or a lack of executive buy-in.

The Readiness Gap

Awareness is growing, but preparation is not. That’s one of the most troubling findings of the report.

  • Just 33% of organizations have run “red team” exercises to test defenses against AI-enabled threats.
  • A staggering 66% admit their teams don’t dedicate any time each month to reviewing protocols or updating practices in response to AI.
  • Half of IT professionals believe their current authentication stack would fail in the face of a sophisticated AI-powered attack.

This readiness gap is both technological and psychological. Indeed, Frontegg’s study found that 50% of IT professionals report rising stress levels from tracking and responding to AI-driven incidents. Defending against human adversaries has already proven difficult; defending against algorithms that scale without limit is becoming, for some, a daunting and even exhausting burden.

A Better Path Forward: From Reactive to Resilient

What does adapting to the multi-pronged threat of AI look like? According to Frontegg, it starts with rethinking authentication: instead of framing it as a one-time gatekeeping task, treat it as a dynamic, context-aware process.

That could include (see the sketch after this list):

  • Phishing-resistant authentication, such as passkeys or hardware tokens.
  • Behavioral analytics and contextual login flows to detect anomalies in real time.
  • Segmented access controls so that high-risk actions require additional validation.
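
To make the idea concrete, here is a minimal TypeScript sketch of what a context-aware login decision might look like. Everything in it (the signal names, the scoring weights, the thresholds) is an illustrative assumption, not any vendor’s actual API:

    // Minimal sketch of context-aware authentication. All names and
    // weights below are illustrative assumptions, not a real API.

    interface LoginContext {
      deviceId: string;
      geoCountry: string;
      localHour: number; // 0-23, local time of the login attempt
    }

    interface UserBaseline {
      knownDevices: Set<string>;
      usualCountries: Set<string>;
      typicalHours: [number, number]; // e.g. [7, 22]
    }

    type Decision = "allow" | "step-up" | "deny";

    // Score each anomaly and map the total to a decision. A production
    // system would use far richer signals (typing cadence, IP velocity).
    function evaluateLogin(ctx: LoginContext, baseline: UserBaseline): Decision {
      let risk = 0;
      if (!baseline.knownDevices.has(ctx.deviceId)) risk += 2;     // new device
      if (!baseline.usualCountries.has(ctx.geoCountry)) risk += 2; // unusual location
      const [start, end] = baseline.typicalHours;
      if (ctx.localHour < start || ctx.localHour > end) risk += 1; // odd hour

      if (risk >= 4) return "deny";
      if (risk >= 2) return "step-up"; // e.g. demand a passkey assertion
      return "allow";
    }

    // Segmented access in action: a wire transfer is high-risk, so it
    // forces step-up even when the contextual score alone would allow it.
    function authorizeTransfer(ctx: LoginContext, baseline: UserBaseline): Decision {
      const decision = evaluateLogin(ctx, baseline);
      return decision === "allow" ? "step-up" : decision;
    }

The design point is that the riskiness of the action, not just of the login, drives how much proof of identity is demanded. A deepfaked executive on a video call cannot satisfy a passkey challenge bound to the real user’s device.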

It also means restructuring teams so that cybersecurity is not siloed. For instance, one approach that’s gaining traction is allowing product, information security, and customer success teams to manage user access without depending entirely on developers. This approach distributes responsibility across departments. That flexibility is becoming critical in defending against threats that evolve too fast for linear decision-making.
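
In practice, distributing that responsibility often means expressing access rules as declarative policy that non-developers can edit through an admin console rather than through code changes. The shape below is a hypothetical illustration, not Frontegg’s actual configuration schema:

    // Hypothetical declarative access policy; field names are invented
    // for illustration and do not reflect any real product's schema.

    interface AccessPolicy {
      team: string;
      canManageRoles: string[]; // roles this team may grant or revoke
      stepUpRequired: boolean;  // force re-authentication for changes
    }

    const policies: AccessPolicy[] = [
      { team: "product",          canManageRoles: ["beta-tester"],       stepUpRequired: false },
      { team: "customer-success", canManageRoles: ["end-user"],          stepUpRequired: false },
      { team: "infosec",          canManageRoles: ["admin", "end-user"], stepUpRequired: true  },
    ];

    // A simple guard: may a member of `team` assign `targetRole`?
    function canAssign(team: string, targetRole: string): boolean {
      const policy = policies.find((p) => p.team === team);
      return policy !== undefined && policy.canManageRoles.includes(targetRole);
    }

Because the policy is data rather than code, a security or customer success team can tighten it without waiting on a developer sprint, which is exactly the kind of agility the report argues for.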

A Problem of Technology and Trust

The stakes go beyond dollars lost or systems breached. They extend into user trust, data integrity, and long-term business viability. In recent months, Digital Information World has reported on growing concerns around user authentication. This includes consumers abandoning apps after frustrating password resets or privacy-violating login policies.

AI-driven threats exploit both code and human confidence. When a phishing attack succeeds by mimicking your CEO’s voice, or a fake login form captures credentials with uncanny precision, users become less likely to trust the digital spaces they interact with. That loss of trust is harder and more expensive to repair than any technical system.

Why Most Organizations Will Stay Vulnerable

Why are so many IT teams still underprepared if the dangers are clear?

Frontegg’s data points to three overlapping blockers:

  1. Legacy systems that can’t easily integrate new authentication technologies.
  2. Cost pressures, especially in sectors like healthcare and education.
  3. Cultural inertia, as security practices that were “good enough” five years ago prove hard to dislodge.

These aren’t trivial challenges. But they are solvable. And as the report emphasizes, the price of inaction is rising.

Looking Ahead

The future of cybersecurity will be shaped by how effectively organizations respond to the AI threat. Organizations must move beyond patching holes to redesigning the way digital trust is established and maintained in the first place. That means rethinking what authentication means in a world where identity can be cloned, where phishing emails no longer have telltale signs, and where automation makes it cheap to attack at scale.

It also means giving IT teams the resources, autonomy, and support they need to implement next-generation protections. Too often, though, organizations continue to ask IT teams to do more with the same tools and shrinking budgets.

The story the Frontegg data tells reflects an urgent reality: AI in the hands of attackers has already changed the game. The question is whether defenders will catch up or keep playing by rules that have already been rewritten.

AI-deepfake Zoom scam cost $25M; hackers cloned executives, exposing global gaps in authentication defenses.

Methodology: This analysis is based on a May 2025 survey of 1,019 IT professionals conducted by Frontegg to assess how AI is influencing cybersecurity trends, particularly in the realm of authentication and access management.
