Darhost

2026-05-17 13:51:24

7 Critical Insights into the Intersection of Cloud Secrets and AI Risk

Explore 7 critical insights from SentinelOne's report on cloud secrets and AI risk, including the 140% surge in AI credentials and shadow AI threats.

In 2025, the enterprise risk landscape underwent a paradigm shift as AI and large language models (LLMs) became the primary driver of cloud risk. With nearly 88% of organizations now leveraging AI in at least one business function, traditional security guardrails are being outpaced by AI-related threats, creating a highly complex and interconnected attack surface. SentinelOne’s AI and Cloud Verified Exploit Paths and Secrets Scanning Report, drawing on telemetry from over 11,000 anonymized customer environments, reveals how threat actors are actively exploiting modern cloud and AI infrastructures. Here are seven critical insights from that report that every security professional needs to know.


1. The Surge of AI-Specific Secrets

The most striking finding from the 2026 report is the explosive growth of AI-specific credentials. In just one year, AI-related secrets—such as OpenAI API keys and Azure OpenAI API keys—surged by approximately 140%. This increase directly correlates with the rapid embedding of AI into customer support systems, internal tooling, financial platforms, and product experiences. As businesses rush to integrate AI, they generate a vast number of authentication keys that become scattered across environments, often without proper oversight. This proliferation makes AI secrets a prime target for attackers, as they can unlock access to sensitive models and data. Unlike traditional cloud credentials, these keys are frequently reused and stored in insecure locations, amplifying the risk of widespread compromise.
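Scattered keys like these are exactly what secrets-scanning tools hunt for. As a rough illustration of the idea (not SentinelOne's actual detection logic), a minimal scanner can sweep a repository for strings shaped like AI provider credentials; the patterns below are simplified heuristics, and real scanners layer provider-specific validation and entropy checks on top:

```python
import re
from pathlib import Path

# Illustrative patterns only. Production scanners combine many rules
# with entropy analysis and provider-side key validation.
PATTERNS = {
    "openai_api_key": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "azure_openai_key": re.compile(r"\b[0-9a-f]{32}\b"),  # weak heuristic
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_string) pairs found in a text blob."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

def scan_tree(root: str) -> list[tuple[str, str, str]]:
    """Walk a directory tree and report suspected secrets per file."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than fail the sweep
        for rule, match in scan_text(text):
            findings.append((str(path), rule, match))
    return findings
```

Even a crude sweep like this, run in CI, catches the most common failure mode the report describes: a working key committed to a repository in plaintext.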

Source: www.sentinelone.com

2. The Rise of Shadow AI

Ubiquitous AI deployment has given rise to a pervasive organizational pattern known as “shadow AI”—the unsanctioned use of AI tools without formal IT approval or security oversight. In practice, developers and internal teams often use unmanaged or personal LLM keys to process corporate data outside sanctioned channels. Since these AI integrations span numerous internal applications, the same API keys are duplicated and stored in code repositories, SaaS configurations, and development scripts. Compounding the issue, these credentials frequently lack proper access controls or routine rotation schedules. The sprawl of shadow AI credentials renders them difficult to track via standard secrets management protocols, creating a blind spot that threat actors are quick to exploit. Organizations urgently need centralized governance over how AI keys are issued and used.

3. Distinct Risk Vectors of Unmanaged AI Credentials

Unlike traditional cloud credentials that primarily enable resource manipulation, compromised AI keys introduce unique risk vectors. AI services operate at the intersection of various enterprise systems—including CRM platforms, ticketing systems, and analytics tools—meaning a single exposed LLM API key can give an attacker broad visibility into diverse datasets. The risks fall into two primary categories: data exposure and leakage; and prompt injection and data poisoning. These vectors are distinct because they target not just infrastructure but the models themselves, potentially leading to manipulation of AI behaviors or extraction of proprietary training data. As AI becomes more embedded across the enterprise, the attack surface expands beyond conventional boundaries, requiring specialized defenses that address the unique nature of AI workloads.

4. Data Exposure and Leakage via AI Keys

Unauthorized access via compromised AI keys can expose sensitive or proprietary datasets processed by models, along with embedded business logic and internal user prompts and outputs. Attackers can harvest sensitive corporate conversations at scale, gaining insights into strategic decisions, customer interactions, or proprietary algorithms. Since AI systems often maintain context across sessions, a compromised key can unlock months of historical data. This risk is compounded by the fact that many organizations store AI keys in plaintext within code repositories or configuration files, making them easy targets for automated scraping tools. To mitigate exposure, enterprises must adopt robust secrets management practices, including encryption, access controls, and regular key rotation, as well as monitoring for unusual usage patterns that may indicate a breach.
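The simplest of those mitigations is keeping keys out of source code entirely. A minimal sketch of the pattern (variable name and error handling are illustrative, not from the report): read the key from the environment, which a secrets manager can populate at deploy time, and fail fast rather than fall back to anything hardcoded:

```python
import os

def get_api_key(var_name: str = "OPENAI_API_KEY") -> str:
    """Fetch an API key from the environment rather than source code.

    In production this variable would typically be injected at deploy
    time by a secrets manager, so the key never lands in a repository
    or configuration file where scrapers can harvest it.
    """
    key = os.environ.get(var_name)
    if not key:
        # Failing fast is deliberate: a missing key should abort the
        # process, never trigger a hardcoded or shared fallback.
        raise RuntimeError(f"{var_name} is not set; refusing to continue")
    return key
```

The design choice worth noting is the hard failure: silent fallbacks are how "temporary" plaintext keys end up living in repositories for months.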

5. Prompt Injection and Data Poisoning Risks

Unmanaged AI keys also enable threat actors to actively manipulate AI models through prompt injection and data poisoning attacks. In a prompt injection, an attacker crafts malicious inputs that trick the model into bypassing its safeguards or leaking sensitive information. Data poisoning involves corrupting the training data or fine-tuning process to introduce backdoors or biases. These techniques can have cascading effects across all applications reliant on the compromised model, from chatbots to content generators. Unlike traditional exploits that target software vulnerabilities, these attacks exploit the inherent trust placed in AI outputs. The result is a new class of risk where the integrity of the AI system itself is undermined, potentially leading to reputational damage, financial loss, or regulatory violations. Continuous monitoring and input validation are critical defenses.
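One piece of the input-validation defense can be sketched as a pre-filter that flags common instruction-override phrasings before they reach the model. The marker list below is purely illustrative, and keyword matching alone is easy to evade; real defenses layer model-side guardrails, output filtering, and privilege separation on top:

```python
import re

# Naive indicators of instruction-override attempts. This is a
# first-line heuristic, not a complete prompt-injection defense.
INJECTION_MARKERS = [
    r"ignore (all )?previous instructions",
    r"disregard (the )?system prompt",
    r"reveal your (system )?prompt",
]
_MARKER_RE = re.compile("|".join(INJECTION_MARKERS), re.IGNORECASE)

def looks_like_injection(user_input: str) -> bool:
    """Flag inputs containing common prompt-injection phrasings."""
    return bool(_MARKER_RE.search(user_input))
```

Flagged inputs can be rejected, logged, or routed to a sandboxed model with no access to sensitive context, depending on the application's risk tolerance.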


6. The Need for Centralized Governance

The sprawl of AI credentials and the rise of shadow AI underscore a pressing need for centralized governance over how AI keys are issued, used, and retired. Standard secrets management protocols are often insufficient because they treat AI keys like any other credential, ignoring their unique role as gateways to model access and data processing. Organizations must implement policies that require all AI integrations to be registered and audited, enforce least-privilege access, and mandate periodic rotation of API keys. Additionally, discovery tools can help identify unauthorized AI usage and exposed secrets in real time. By bringing order to the chaos of decentralized AI deployments, centralized governance reduces the attack surface and provides clear accountability. Without it, enterprises remain vulnerable to the compounding risks of unmanaged AI credentials.
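A registration-and-audit policy like the one described can be reduced to a small inventory check. The record fields and 90-day rotation window below are hypothetical, chosen for illustration rather than drawn from the report or any specific product:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AIKeyRecord:
    owner: str           # accountable team or service
    provider: str        # e.g. "openai", "azure-openai"
    created_at: datetime
    registered: bool     # went through the approval workflow

MAX_KEY_AGE = timedelta(days=90)  # illustrative rotation policy

def audit(keys: list[AIKeyRecord], now: datetime) -> list[str]:
    """Return human-readable policy violations for an AI key inventory."""
    violations = []
    for k in keys:
        if not k.registered:
            violations.append(
                f"{k.owner}/{k.provider}: unregistered key (shadow AI)"
            )
        if now - k.created_at > MAX_KEY_AGE:
            violations.append(
                f"{k.owner}/{k.provider}: key older than {MAX_KEY_AGE.days} days"
            )
    return violations
```

Run on a schedule, a check like this turns the two recurring findings in the report—unregistered shadow-AI keys and keys that never rotate—into actionable tickets with a named owner.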

7. The Evolving Attack Surface

The convergence of cloud secrets and AI risk has fundamentally transformed the modern attack surface. Traditional security controls that focus on network boundaries or endpoint protection are no longer sufficient when AI keys can grant access to models that traverse multiple internal systems. Threat actors are actively exploiting these connections to pivot from compromised secrets to sensitive data and AI models. The SentinelOne report highlights that the complexity of this attack surface is accelerating faster than most organizations can adapt. Security teams must adopt a holistic approach that integrates secrets management, AI governance, and continuous threat monitoring. As AI adoption continues to grow, so too will the sophistication of attacks targeting AI infrastructure. The time to act is now—before the next wave of exploits catches you unprepared.

In conclusion, the fusion of AI and cloud technologies is driving unprecedented risk. The seven insights above underscore the urgent need for organizations to reassess their security posture in light of the 140% surge in AI secrets, the prevalence of shadow AI, and the unique risk vectors of unmanaged credentials. By implementing centralized governance, robust secrets management, and continuous monitoring, enterprises can better defend against the evolving threat landscape. The key takeaway: AI risk is cloud risk, and managing it requires a unified strategy that prioritizes visibility, control, and proactive defense.