Artificial intelligence is revolutionizing vulnerability discovery, enabling both defenders and attackers to identify weaknesses faster than ever before. While this advancement promises to harden software over time, it also creates a dangerous transition period where malicious actors can leverage AI to find and exploit vulnerabilities before organizations can patch them. The following Q&A addresses key concerns and strategies for enterprises facing this new reality, including the evolving threat landscape and practical steps to strengthen defenses.
How are AI models transforming vulnerability discovery?
General-purpose AI models have demonstrated remarkable proficiency in identifying security flaws, even without being specifically designed for that task. Unlike traditional tools that rely on predefined signatures, these models can analyze code and system behaviors to uncover novel vulnerabilities with greater speed and accuracy. This capability is not limited to discovery—AI can also assist in generating functional exploit code, dramatically reducing the time and expertise required. For example, threat actors are already using large language models (LLMs) to craft exploits, and underground forums now advertise AI-driven tools for this purpose. This shift compresses the traditional attack timeline from months to days or even hours, forcing organizations to accelerate their response strategies.

What is the critical window of risk for enterprises?
As AI becomes integrated into development cycles, software will eventually become more resilient to exploitation. However, the transition period creates a dangerous gap: while defenders are still hardening existing code, attackers are already using AI to discover and exploit vulnerabilities. This means that even as security improves in the long run, the near future will see a spike in attacks targeting unhardened systems. Enterprises face two urgent tasks: rapidly patching known weaknesses in their software stacks and preparing to defend components that have not yet been updated. Without immediate action, organizations risk falling behind adversaries who are already operationalizing AI for offense.

How is the adversary lifecycle changing with AI?
Historically, discovering novel vulnerabilities and developing zero-day exploits required significant time, specialized human expertise, and substantial resources. AI now lowers the barrier, making exploit development achievable for threat actors of all skill levels. This compresses the entire adversary lifecycle—from discovery to deployment—into a much shorter window. Furthermore, advanced adversaries like PRC-nexus espionage groups have already demonstrated the ability to rapidly distribute exploits among separate threat clusters, further shrinking the gap between first discovery and mass exploitation. The result is an environment where attacks can be mounted with unprecedented speed, requiring defenders to monitor evolving threat intelligence continuously.

What trends are emerging among threat actors using AI?
According to Google's Threat Intelligence Group (GTIG), threat actors are already incorporating LLMs into their workflows for vulnerability research and exploit generation. Underground marketplaces now advertise AI-powered tools built for offensive security tasks, putting advanced capabilities within reach of less-skilled criminals. This democratization of exploitation is fueling mass ransomware and extortion campaigns, as well as increased activity from actors who previously conserved zero-day exploits for high-value targets. The economic calculus has shifted: capabilities that were once closely guarded are now used routinely, driving up both the volume and velocity of attacks. Enterprises must update their defensive playbooks accordingly.

What defensive strategies should enterprises adopt now?
Enterprises must act immediately along two parallel tracks. First, accelerate the hardening of existing software by integrating AI into security programs: use machine learning to detect anomalies, prioritize patches, and automate incident response. Second, prepare to defend systems that remain unhardened by reducing exposure through segmentation, least-privilege access, and rigorous monitoring. The time to strengthen playbooks is now: incorporate AI-based threat detection, run regular red-team exercises that use AI tools, and collaborate with industry sharing groups to stay ahead of emerging exploit techniques. The goal is to shrink the window of opportunity for attackers before AI-augmented threats become the norm.
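As an illustrative sketch of the patch-prioritization step (not a prescribed tool), a triage score can combine an exploitability signal such as EPSS with severity and asset exposure. The field names, the 1.5x exposure multiplier, and the sample scores below are assumptions for demonstration only:

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    epss: float           # exploit-prediction score, 0..1 (e.g. from the EPSS feed)
    cvss: float           # base severity, 0..10
    internet_facing: bool # exposure signal from asset inventory

def priority(v: Vuln) -> float:
    """Toy priority: likelihood x normalized impact, boosted for exposed assets.
    The 1.5x multiplier is an illustrative assumption, not a standard weight."""
    score = v.epss * (v.cvss / 10.0)
    return score * 1.5 if v.internet_facing else score

def triage(vulns: list[Vuln]) -> list[Vuln]:
    # Patch the highest-priority findings first.
    return sorted(vulns, key=priority, reverse=True)

if __name__ == "__main__":
    backlog = [
        Vuln("CVE-A", epss=0.02, cvss=9.8, internet_facing=False),
        Vuln("CVE-B", epss=0.90, cvss=7.5, internet_facing=True),
        Vuln("CVE-C", epss=0.40, cvss=8.1, internet_facing=False),
    ]
    for v in triage(backlog):
        print(f"{v.cve_id}: {priority(v):.3f}")
```

Note the design choice: a high-severity but rarely exploited flaw (CVE-A) ranks below a moderate-severity flaw that is actively exploited and internet-facing (CVE-B), which reflects the shift toward exploit-likelihood-driven patching the answer describes.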

How will the economics of zero-day exploitation shift?
AI is fundamentally altering the cost-benefit analysis of zero-day exploitation. Previously, developing a high-quality exploit required rare expertise and significant investment, so actors used these capabilities sparingly—often only for espionage or high-value targets. Now, AI reduces both the cost and skill required, enabling mass exploitation campaigns that target a broader range of organizations. This economic shift means that ransomware operators, cybercriminals, and even nation-state actors can now incorporate zero-day exploits into routine operations without the historical constraints. As a result, enterprises that once considered themselves unlikely targets due to limited exposure may become frequent victims. Defending against this new dynamic requires a proactive approach to vulnerability management and a commitment to continuous improvement.
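A back-of-the-envelope model makes the shift concrete. The budget, cost, and success-rate figures below are illustrative assumptions, not measured data; the point is only how lowering per-exploit cost changes both campaign volume and the minimum target worth attacking:

```python
def exploits_per_budget(budget: float, cost_per_exploit: float) -> int:
    """How many distinct exploits an actor can field on a fixed budget."""
    return int(budget // cost_per_exploit)

def min_viable_target_value(cost_per_exploit: float, success_rate: float) -> float:
    """Smallest expected payout that makes developing an exploit worthwhile."""
    return cost_per_exploit / success_rate

# Illustrative assumptions: a fixed operating budget, with AI cutting
# per-exploit development cost by an order of magnitude.
BUDGET = 1_000_000.0
MANUAL_COST = 250_000.0      # specialist team, months of work
AI_ASSISTED_COST = 25_000.0  # assumed AI-accelerated cost

print(exploits_per_budget(BUDGET, MANUAL_COST))       # 4: reserved for high-value targets
print(exploits_per_budget(BUDGET, AI_ASSISTED_COST))  # 40: enough for mass campaigns

# With a 50% assumed success rate, the break-even target value drops tenfold,
# pulling previously "unlikely" victims into routine campaigns.
print(min_viable_target_value(MANUAL_COST, 0.5))      # 500000.0
print(min_viable_target_value(AI_ASSISTED_COST, 0.5)) # 50000.0
```

Under these toy numbers, the same budget funds ten times as many exploits, and organizations worth a tenth as much to an attacker become economically viable targets, which is the dynamic the answer describes.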

What does the future hold for exploit deployment speed?
The acceleration of exploit deployment is already evident among advanced adversaries. In the 2025 Zero-Days in Review report, researchers noted that PRC-nexus espionage operators have become adept at rapidly sharing and deploying exploits across otherwise separate threat groups. This trend is expected to grow as AI models improve, reducing the time from vulnerability disclosure to functional exploit from weeks to days or even hours. The traditional gap between private discovery and public release is shrinking, forcing defenders to adopt real-time patching cycles and rely on AI-driven threat intelligence. Enterprises must prepare for a future where the speed of attack often outpaces manual response, making automation and AI integration essential components of any security strategy.