Enterprise security infrastructure was engineered for a world where humans made every decision. Today, AI agents are making those decisions autonomously — reasoning through tasks, calling APIs, accessing sensitive data, and taking actions across organizational boundaries, all without a human in the loop. The identity systems built to govern human behavior are not equipped to handle this shift, and the gap is creating a structural vulnerability at the heart of every enterprise AI strategy.
The problem isn’t that AI agents are untrustworthy by nature. The problem is that they inherit the full privileges of the user who launched them, operate non-deterministically, and leave security teams with no visibility into what actions were taken, on whose behalf, and with what authority. Without a new identity model purpose-built for autonomous agents, enterprises face an impossible choice: restrict agents so heavily they deliver no value, or let them run free and accept the security risk.
The Guest: Ian Livingstone, Co-Founder and CEO at Keycard
Key Takeaways
- AI agents inherit full user privileges by default — there is no native mechanism in current IAM systems to scope agent access below the level of the delegating human
- Consent fatigue is a real production risk: when agents prompt humans to approve every tool call, users say yes to everything, creating the conditions for accidental privilege escalation
- Execution-time access control shifts enforcement from token issuance to the moment of action — factoring in prompt intent, agent identity, task context, and organizational policy
- Agent identity is a new category: not human identity, not traditional workload identity — but a hybrid that is stable, delegatory, and dynamically scoped per task
- Security and platform leaders need to look for shadow IT signals now: unusual access patterns, high-velocity user actions, and IT ticket floods are early indicators that agents are already operating without governance
***
In this exclusive interview with Swapnil Bhartiya at TFiR, Ian Livingstone, Co-Founder and CEO at Keycard, discusses why the identity and access models built for human-native computing fundamentally break under agentic workloads — and what a new, dynamic, execution-time authorization model looks like in practice.
Why AI Agents Break Traditional Identity and Access Models
Traditional identity infrastructure operates in two distinct modes: human identity, which grants broad access based on trust in human judgment, and workload identity, which scopes permissions narrowly because every line of code is deterministic and human-reviewed. AI agents fit neither category. They require broad access to many systems like a human, but their behavior is non-deterministic and probabilistic — a feature, not a bug, of how large language models operate. The result is a structural mismatch between how agents actually function and how existing IAM systems model trust.
Q: What changes when the actor is not a human but an agent that can reason, adapt, and take action on its own?
Ian Livingstone: “Agents operate a lot like humans — they need broad-based access to many different systems. But at the same time, when you send them off to do a piece of work, you want them to have very scoped access based on the stage of work. If I say, please help me figure out which database contains this type of information, I don’t want it to go and take the information and send it to some other system. And I certainly don’t want it to have the ability to delete that data or make any modifications. The challenge today is that when the user is interacting with that agent, that agent is inheriting all the privileges of the user — and there is no capacity for that user to control what this agent is actually capable of doing, because the underlying systems don’t support it.”
Q: Why do coding agents and autonomous systems break traditional identity and access models?
Ian Livingstone: “Agents are not static: they don’t carry the same fixed permissions from one task to the next. Human identity systems grant broad access because we generally trust that a person has good judgment. Workload identity systems are built for a very static-scope world, where a given service only ever needs to, say, read data from the user’s database. In reality, agents operate like humans in needing broad access, but they also need dynamically scoped access based on the specific task they are completing at that moment. Existing identity systems are built first and foremost for authentication, not authorization, and they assume a static model of the world. Agents are hyper-dynamic across all the different tasks they do.”
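To make the contrast concrete, here is a minimal Python sketch of the dynamic-scoping idea Livingstone describes: instead of the agent inheriting everything the user holds, a short-lived grant is derived per task. The scope names, the `TaskGrant` shape, and the helper function are illustrative assumptions, not Keycard’s API.

```python
from dataclasses import dataclass

# Hypothetical illustration: the user's broad, human-identity grants
# versus the narrow, per-task scope an agent should actually receive.
USER_SCOPES = {
    "db:read", "db:write", "db:delete",
    "crm:read", "crm:write",
}

@dataclass(frozen=True)
class TaskGrant:
    """A short-lived, task-scoped grant delegated to one agent."""
    agent_id: str        # stable identifier for the agent, not the user
    delegated_by: str    # the human on whose behalf the agent acts
    task: str            # what the agent was asked to do
    scopes: frozenset    # minimal scopes for this task only

def downscope_for_task(user_scopes: set, task: str, needed: set,
                       agent_id: str, user: str) -> TaskGrant:
    """Grant the intersection of what the user holds and what the
    task needs -- never the user's full privilege set."""
    return TaskGrant(
        agent_id=agent_id,
        delegated_by=user,
        task=task,
        scopes=frozenset(user_scopes & needed),
    )

# "Figure out which database contains this information" is discovery,
# so the derived grant is read-only.
grant = downscope_for_task(
    USER_SCOPES,
    task="locate database containing customer PII",
    needed={"db:read"},
    agent_id="agent:claude-code-7f2",
    user="alice@example.com",
)
assert "db:delete" not in grant.scopes   # the agent cannot delete data
```

The key property is that the grant is computed from the task, not copied from the user.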
The Real-World Failure Scenarios Security Teams Face
The failure modes for enterprises that deploy agents without purpose-built identity governance are not theoretical. They are already appearing in production environments across industries. Ian Livingstone draws on recent supply chain attacks and the day-to-day experience of security teams struggling to manage an invisible attack surface that grows every time an employee connects a new AI tool to internal systems.
Q: What actually goes wrong in the real world when teams use static credentials and pre-scoped permissions for agents?
Ian Livingstone: “The security team at a company doesn’t actually know what those agents can do. What they see is a user performing actions — they don’t know whether that’s the user or the agent. So if something goes wrong, they can’t determine whether it’s because of the agent or because the user approved that action. A good example would be the Axiom supply chain attack that happened a few weeks ago — a developer using a coding agent installs a package that has a vulnerability in it. That coding agent, because of where it’s running and the secrets it has access to, allows that vulnerable package to exfiltrate those credentials. Now you have those credentials available to an attacker in a way you couldn’t control or didn’t anticipate.”
Q: What are the three primary pain points security teams experience around agent access?
Ian Livingstone: “The first pain is consent fatigue. Because agents don’t have fine-grained permissions, they rely on humans to say yes or no to every tool call. As a result, humans start saying yes to everything, and that’s how accidents happen. The second problem is that there is no record of what agents are operating on and no way to take corrective action. If an agent becomes malicious, how do you know what access it actually has, and how do you revoke it? And the third challenge is actually the hardest: how do I enable my team to use agents with approved tools and approved access, in a way that makes it easy for them to get up and running without taking on a ton of risk?”
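As an illustration of how fine-grained permissions could relieve the first two pains, here is a hedged Python sketch: approvals are reserved for destructive actions so users are not prompted on every read, and revocation is keyed to a stable agent identity. The risk classification, grant store, and function names are invented for the example and say nothing about how Keycard implements this.

```python
# Actions whose names suggest mutation or exfiltration escalate to a human.
DESTRUCTIVE_PREFIXES = ("delete", "drop", "update", "transfer", "send")

# Toy grant store, keyed by agent identity rather than user identity.
active_grants: dict[str, set[str]] = {
    "agent:claude-code-7f2": {"db:read", "crm:read"},
}

def requires_human_approval(tool_call: str) -> bool:
    """Only escalate to a human for mutating or exfiltrating actions."""
    return tool_call.split(":")[-1].startswith(DESTRUCTIVE_PREFIXES)

def revoke_agent(agent_id: str) -> None:
    """Because the agent has its own identity, one revocation removes
    every grant it holds -- no hunting through user credentials."""
    active_grants.pop(agent_id, None)

assert not requires_human_approval("db:read")       # auto-allowed
assert requires_human_approval("db:delete_rows")    # human must confirm
revoke_agent("agent:claude-code-7f2")
assert "agent:claude-code-7f2" not in active_grants
```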
What Execution-Time Access Control Actually Means
Keycard’s approach to solving the agent identity problem centers on a shift in where authorization enforcement happens. Rather than granting permissions at the moment a token is issued — a model inherited from human and static workload identity — Keycard enforces access decisions at the moment of action, using the full context of what the agent is trying to do, who directed it, and what organizational policy allows. This is what Ian Livingstone calls execution-time access control.
Q: We hear a lot about making access decisions at execution time using full context. What does that mean in practice?
Ian Livingstone: “The identity system needs to be able to interpret the intent of the agent, understand who it’s acting on behalf of, and then use that intent to determine whether or not a decision or API call should be made. Relative to traditional systems, we move the enforcement point from token issuance, which asks whether this user has access to this system, to the moment of action: in the context of the session and the task being done, can this agent actually perform this action right now? And if it needs additional permissions, how does the agent actually get them so it can perform what it’s trying to do?”
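A minimal sketch of that shift, with hypothetical names throughout: authorization is evaluated per action, against the session’s task, rather than once when a token is issued.

```python
from dataclasses import dataclass

@dataclass
class ActionContext:
    agent_id: str      # which agent is acting
    on_behalf_of: str  # the human who delegated the task
    task: str          # the task the session was opened for
    action: str        # the concrete call being attempted right now

# A per-task allow-list stands in for a real policy engine.
TASK_POLICY = {
    "locate database containing customer PII": {"db:list_schemas", "db:read"},
}

def authorize(ctx: ActionContext) -> bool:
    """Decide per action, with session context, not once at token issuance."""
    return ctx.action in TASK_POLICY.get(ctx.task, set())

ctx = ActionContext(
    agent_id="agent:claude-code-7f2",
    on_behalf_of="alice@example.com",
    task="locate database containing customer PII",
    action="db:delete_rows",
)
assert authorize(ctx) is False  # denied at execution time, not at login
```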
Q: What signals feed into that contextual access decision?
Ian Livingstone: “Minimally, it is what the user said to the system — the prompt. When we look at that prompt, we have to interpret it and ask: what is reasonable access for that agent to have, based on what that agent’s job is in the organization? Is it ChatGPT operating on behalf of someone on our sales team or our development team? And then using which application — our CRM or GitHub? Understanding that intent, what the task is trying to do, what access it’s requesting — is that aligned with our security policies and with what the user actually wants from this interaction?”
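The sketch below lines those signals up as inputs to a single decision, assuming a toy mapping from team and application to “reasonable” access; a production system would interpret the prompt with an intent model rather than a lookup table.

```python
# Invented mapping for the sketch: what access is reasonable for an
# agent serving a given team against a given application.
REASONABLE_ACCESS = {
    ("sales", "crm"): {"crm:read", "crm:write_notes"},
    ("development", "github"): {"repo:read", "repo:write"},
}

def decision_inputs(prompt: str, team: str, application: str,
                    requested: set) -> dict:
    """Bundle the signals Livingstone lists into one decision record."""
    reasonable = REASONABLE_ACCESS.get((team, application), set())
    return {
        "prompt": prompt,                     # what the user actually said
        "requested": requested,               # what the task is asking for
        "reasonable": reasonable,             # what policy deems sensible
        "aligned": requested <= reasonable,   # does intent match policy?
    }

result = decision_inputs(
    prompt="Summarize this quarter's pipeline for my weekly report",
    team="sales",
    application="crm",
    requested={"crm:read"},
)
assert result["aligned"] is True   # read-only CRM access fits the intent
```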
How Enterprises Should Respond: Building the Agent Identity Foundation
Ian Livingstone frames the current moment in enterprise AI adoption as analogous to the early explosion of enterprise SaaS — a period when user management became a critical infrastructure problem almost overnight. The same dynamic is playing out with agents, but the technical requirements are different and the stakes are higher. The enterprises that move now to establish governance frameworks will be positioned to scale agent adoption safely; those that wait will face the dual risk of security exposure and competitive disadvantage.
Q: How should identity itself evolve when software is operating across multiple environments and making decisions at machine speed?
Ian Livingstone: “Agent identity is different. You have something that is inherently delegatory. You need the ability for an agent to have a concrete identity that is distinct from anyone else in the system: a stable identifier. You have to think of agents as something like apps, and then you have to think about delegation: these types of users are able to delegate this type of access to this type of agent for these types of systems. And then you need to build in policy, from a security perspective, that determines when a human must be involved in a decision an agent is making on their behalf, and when that’s not required.”
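Read literally, that implies three constructs: a stable agent identity, a delegation rule, and a human-in-the-loop policy. The sketch below encodes them with assumed field names; it is one plausible data model, not Keycard’s schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str    # stable, distinct from any user or service
    owner_team: str
    kind: str        # e.g. "coding-agent", "sales-assistant"

@dataclass(frozen=True)
class DelegationRule:
    user_role: str                # which users may delegate
    agent_kind: str               # to which class of agent
    systems: frozenset            # for which systems
    max_scopes: frozenset         # ceiling on delegable access
    human_in_loop_for: frozenset  # actions a human must co-approve

RULES = [
    DelegationRule(
        user_role="developer",
        agent_kind="coding-agent",
        systems=frozenset({"github"}),
        max_scopes=frozenset({"repo:read", "repo:write"}),
        human_in_loop_for=frozenset({"repo:force_push", "repo:delete"}),
    ),
]

def needs_human(rule: DelegationRule, action: str) -> bool:
    """Policy decides when the delegating human must be in the loop."""
    return action in rule.human_in_loop_for

assert needs_human(RULES[0], "repo:delete")    # human must approve
assert not needs_human(RULES[0], "repo:read")  # agent proceeds alone
```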
Q: What are the early warning signals that a current identity stack is not ready for agents, and what are the first steps security leaders should take?
Ian Livingstone: “The first signal is users asking for access to systems they typically don’t need, or things accessing your systems that you didn’t see before — what people call shadow IT. The second signal is a high velocity of actions that look atypical in your logs. Once you know you have agents operating, the question is: how do we enable people to actually use agents with approved tools and approved access, so we can govern what type of agents are operating in our ecosystem and what tools they have access to? And then you want to look for a solution that enables real governance control over agents across all these different systems. But the most important thing when you’re evaluating that: this is a very early space, and you’re really making a bet on a team and a technology that’s going to take you to where you want to go over time. How does it integrate with my pre-existing human identity systems, my customer identity systems, and my workload identity systems to create a whole end-to-end solution?”
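Both signals are detectable from ordinary audit logs. The heuristic below is a rough sketch, with an assumed log shape and an arbitrary rate threshold, of how a security team might flag activity that looks agent-driven.

```python
from collections import Counter

def flag_agent_activity(events: list[dict],
                        known_systems: dict[str, set],
                        max_human_rate: int = 30) -> set[str]:
    """Return user ids whose activity looks agent-driven."""
    flagged: set[str] = set()
    per_minute: Counter = Counter()
    for e in events:
        user, system, minute = e["user"], e["system"], e["ts"] // 60
        # Signal 1: a sustained action rate no human reaches by hand.
        per_minute[(user, minute)] += 1
        if per_minute[(user, minute)] > max_human_rate:
            flagged.add(user)
        # Signal 2: access to a system this user has never touched.
        if system not in known_systems.get(user, set()):
            flagged.add(user)
    return flagged

# 40 actions by one user inside a single minute, against a new system.
events = [{"user": "bob", "system": "billing-db", "ts": t} for t in range(40)]
assert "bob" in flag_agent_activity(events, known_systems={"bob": set()})
```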