Employees are adopting Artificial Intelligence (AI) tools to enhance their productivity, but they rarely consider the security implications of doing so. When an employee pastes sensitive customer data into an unapproved AI tool, that data is processed by a third-party model outside the organization’s control, often leaving no audit trail for security teams to review.
According to Microsoft’s 2024 Work Trend Index Annual Report, 78% of employees reported using their own AI tools at work. This unapproved use of AI tools highlights how widespread shadow AI has become. Identity security provides the foundation to address this challenge by helping organizations establish visibility into who accesses AI tools and under what conditions, giving security teams the control they need to govern AI usage.
Continue reading to learn more about shadow AI, why it’s a major identity security risk and how to govern identity-centric shadow AI.
Shadow IT vs shadow AI
Shadow AI expands on the existing risks of shadow IT but introduces more modern, complex threats. Shadow IT refers to the unauthorized use of software or systems within an organization. For example, an employee may use their personal email account to share work files, creating access control and visibility gaps. Shadow AI takes this threat a step further because AI tools not only store data but also actively process it and may retain it. This creates a new level of data exposure, where sensitive information can become deeply embedded in external models beyond an organization’s control. Two factors make shadow AI especially difficult to govern:
- Use of personal accounts or devices: Employees accessing AI tools outside company-provisioned environments via personal accounts and devices disconnect their activity from their organizational identity, eliminating transparency and traceability.
- Browser-based AI tools: Browser-based AI tools require no installation, making them harder to detect in environments that rely on endpoint-based controls alone.
This combination of data exposure at such a large scale and tools that are capable of evading traditional detection makes shadow AI a particularly challenging security problem.
Why shadow AI is an identity security issue
When employees use unapproved AI tools, security teams have no visibility into what data was shared, who accessed the tool or what the tool does with that data. This lack of identity visibility is the main reason why shadow AI is so challenging to detect, let alone manage. Traditional Identity and Access Management (IAM) solutions were designed for human users with predictable behavior and defined roles, but modern organizations must adjust their security strategies to account for Non-Human Identities (NHIs), including AI agents and service accounts. These machine identities can access systems and execute tasks across multiple critical systems autonomously, and they are growing in popularity within enterprises. In fact, according to senior leaders surveyed in McKinsey’s 2025 State of AI Global Survey, 62% reported that their organizations were at least experimenting with AI agents. Unlike human users, AI agents can operate continuously, scale rapidly and interact across many systems at once. Without identity security controls that can govern both human and machine identities, organizations lose control over how their data is accessed and used.
Top identity-related shadow AI risks
Weak identity security not only makes shadow AI harder to detect but also exacerbates the damage shadow AI can cause.
Unmonitored data access
Employees who share sensitive data with unapproved AI tools create data exposure that traditional monitoring systems may not detect. Data loss prevention tools can only monitor channels they have visibility into; shadow AI operates outside those boundaries. If an employee uses an unapproved AI tool through an unmonitored personal account, organizations have no way to monitor, record or log the activity. If privileged data or credentials are exposed, cybercriminals can gain access to critical systems without a clear audit trail.
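One way teams partially close this gap is to scan outbound text for secret-like patterns before it reaches an unapproved tool. The sketch below is a minimal, hypothetical illustration of that idea; the pattern names and regexes are assumptions for demonstration, and real data loss prevention products use far broader rule sets.

```python
import re

# Illustrative secret patterns (assumptions, not a complete DLP rule set)
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(r"(?i)\bapi[_-]?key\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{20,}"),
}

def scan_outbound_text(text: str) -> list[str]:
    """Return the names of secret patterns found in text bound for an AI tool."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize this config: api_key = 'sk_live_abcdefghijklmnopqrstuvwxyz'"
print(scan_outbound_text(prompt))  # → ['generic_api_key']
```

Note that this only works on channels the organization can see; traffic from personal accounts and devices bypasses it entirely, which is why identity-level visibility matters more than content scanning alone.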
Machine identity sprawl
Unlike human employees who undergo formal onboarding and offboarding processes, AI agents and service accounts often lack structured lifecycle management. As a result, a growing number of machine identities operate across multiple environments with excessive permissions and limited oversight, leading to machine identity sprawl. Without visibility into machine identities, organizations cannot verify what systems are accessed or whether they have been compromised.
Compliance and audit gaps
Regulatory frameworks like the GDPR, HIPAA and PCI DSS require organizations to track how sensitive data is accessed and processed, regardless of whether a human or machine is responsible. If identity security only covers human users, organizations cannot produce complete audit trails accounting for AI activity, exposing them to regulatory penalties and audit findings that are increasingly difficult to remediate after the fact.
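A complete audit trail means recording the same structured event whether a person or a machine touched the data. The sketch below shows one way to shape such a record; the field names are illustrative assumptions, not a specific compliance schema.

```python
import json
from datetime import datetime, timezone

def audit_event(identity_id, identity_type, resource, action):
    """Build a structured audit record covering human and machine identities alike.
    Field names are illustrative, not a mandated compliance format."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "identity_id": identity_id,
        "identity_type": identity_type,  # "human" or "machine"
        "resource": resource,
        "action": action,
    }

# A human analyst and an AI agent leave the same shape of trail:
log = [
    audit_event("alice@example.com", "human", "customer-db", "read"),
    audit_event("ai-agent-7", "machine", "customer-db", "read"),
]
print(json.dumps(log, indent=2))
```

Tagging every event with an identity type is what lets auditors answer "which of these accesses were made by AI agents?" without reconstructing it after the fact.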
How to govern identity-centric shadow AI
Managing shadow AI should not mean blocking all AI tools outright; it should start with gaining full visibility into who is accessing critical systems and data. Here are some key steps your IT and security teams should follow to take an identity-centric approach to governing shadow AI:
- Establish visibility across all identities: Use a strong IAM solution to identify who is accessing both approved and unapproved AI tools. Without this, security teams can only react to shadow AI after data has been exposed. Organizations must have the visibility necessary to govern both human and machine identities, so they can determine which tools to allow and where access policies need to become more precise.
- Extend governance to NHIs: Combine IAM with Identity Governance and Administration (IGA) to manage access for both human and machine identities. AI agents and service accounts should be subject to the same access reviews, least-privilege policies and offboarding procedures as human users.
- Enforce zero-trust security: Every access request must be verified, regardless of whether it comes from a human or machine identity. Require Multi-Factor Authentication (MFA), use Single Sign-On (SSO) and enforce least-privilege access to limit all identities to only what is required for a specific task.
- Monitor and record privileged sessions in real time: Connect all AI-related activity to authorized identities, and record sessions involving access to critical systems. Tools like KeeperAI analyze sessions as they occur, automatically flagging suspicious behavior against administrator-defined risk thresholds, providing security teams with detailed audit trails and the ability to intervene before a threat escalates.
- Implement Identity Threat Detection and Response (ITDR): ITDR continuously monitors behavioral anomalies across both human and machine identities, detecting suspicious activity such as privilege escalation attempts and credential abuse before they develop into larger incidents. By extending threat detection beyond perimeter controls to the identity layer, ITDR directly addresses the blind spots created by shadow AI.
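The zero-trust and least-privilege steps above can be sketched as a single access decision that treats every request, human or machine, as untrusted until verified. The policy table and field names below are assumptions for illustration, not a specific product's API.

```python
# Hypothetical least-privilege policy: identity -> allowed (resource, action) pairs
POLICY = {
    "alice@example.com": {("crm", "read")},
    "ai-agent-7": {("vault", "read")},
}

def authorize(identity, resource, action, mfa_passed, is_machine=False):
    """Zero trust: verify every request; human identities must also pass MFA.
    Machine identities are assumed to authenticate via workload credentials."""
    if not is_machine and not mfa_passed:
        return False  # deny human access without MFA
    allowed = POLICY.get(identity, set())  # unknown identities get no access
    return (resource, action) in allowed

print(authorize("alice@example.com", "crm", "read", mfa_passed=True))   # → True
print(authorize("alice@example.com", "crm", "write", mfa_passed=True))  # → False
print(authorize("ai-agent-7", "vault", "read", mfa_passed=False, is_machine=True))  # → True
```

The key design point is the default deny: an identity absent from the policy, or a request outside its granted scope, fails closed rather than falling back to implicit trust.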
Secure human and machine identities with Keeper®
As AI adoption grows, shadow AI will grow with it. Organizations need an identity security platform that delivers visibility, control and governance across every identity — both human and machine.
Keeper secures privileged access for both human users and machine identities, enforces least-privilege policies and provides real-time session monitoring across critical systems. It governs the infrastructure secrets and API keys that AI agents rely on, ensuring that NHIs operate within defined boundaries and that credentials are rotated automatically. KeeperAI enhances this visibility by analyzing privileged sessions in real time and surfacing high-risk activity as it happens. Built on a zero-trust, zero-knowledge architecture, Keeper provides the audit trails and access controls organizations need to govern AI usage without blocking the productivity it enables.
Start a free trial of Keeper today to gain full visibility and control over every identity in your environment.