Although nearly all large organizations are working with artificial intelligence in some form, very few are ready to expand its use in a reliable and secure way. A new report from F5, which draws from insights across hundreds of enterprises worldwide, shows that just 2 percent of firms meet the criteria for high AI readiness. While most companies have started deploying machine learning tools, many lack the underlying security, governance, and infrastructure alignment needed to support AI at scale.
Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.
Read next: AI Chatbots Often Overconfident Despite Errors, Researchers Say
AI Usage Varies Widely Across Readiness Levels
The data shows that about a quarter of all business applications now include AI capabilities. In highly prepared organizations, AI use tends to stretch across nearly every part of the technology stack. In contrast, companies with lower readiness still operate in fragmented ways, often keeping AI in pilot phases or restricted to isolated projects. Those in the middle range, which make up the majority, have AI present in roughly one out of every three apps, showing forward movement but not yet enterprise-wide saturation.

Security Remains a Core Obstacle for AI Maturity
More than 70 percent of companies already rely on AI to strengthen their cybersecurity posture, but only a fraction have invested in tools designed specifically for AI protection. Among those classified as moderately ready, fewer than one in five have deployed AI firewalls, though almost half plan to do so within a year. Data transparency also remains limited. Only 24 percent of all surveyed firms are actively labeling their data on a continuous basis, a gap that increases the risk of hidden vulnerabilities and flawed model behavior.

Most Use Several Models but Few Have Strategic Integration
Across the board, organizations are adopting a mix of paid and open-source models. Roughly two-thirds are using at least three different systems, and many run them in multiple environments. Popular choices include GPT-4, Meta’s Llama family, Mistral variants, and Google’s Gemma. Those who favor open-source tools often maintain tighter control over how data is labeled, whereas firms relying on commercial models report stronger confidence in their governance frameworks. However, this diversity doesn’t always mean maturity. Many companies still lack a clear roadmap for how different models should be deployed, secured, or scaled together.

Low-Readiness Organizations Are Still in Early Stages
A significant share of firms remain in the low-readiness tier, where AI is used in fewer than a quarter of their apps and is often limited to experiments. Most of these organizations are working with only one model and have not yet moved beyond the planning or testing stage. Just over a third have brought generative AI into live production. While interest is growing, operational gaps continue to hold them back.

Clear Gaps Separate Leaders from Laggards
The report emphasizes that high-readiness firms don’t just deploy AI more widely; they also build it into their operations in a way that aligns with security, compliance, and long-term scalability. Their advantage lies not in using AI tools alone but in embedding those tools into a coordinated structure. For other companies, catching up will likely depend on whether they treat AI as a core business system rather than a feature layered on top of legacy processes.
