AI Tools Are Now Deciding Who Gets to Speak for Your Cloud, and Nobody Agreed to That
There's a governance crisis unfolding inside enterprise cloud environments right now, and it doesn't look like a breach. It doesn't trigger a security alert. It doesn't show up in your weekly infrastructure report. What it looks like, from the outside, is your AI tools doing exactly what they were designed to do: helpfully, efficiently, and without complaint.
That's the problem.
AI tools embedded in modern cloud orchestration layers are increasingly making identity-level decisions: which service account speaks on behalf of which workload, which credential gets passed downstream, which role gets assumed during a multi-step agentic task. These aren't configuration choices made once by a human architect. They're runtime decisions, made autonomously, often invisibly, and almost never logged in a way that connects back to a human who said "yes, do that."
The question of who gets to speak for your cloud (which entity is authorized to act, request, retrieve, and modify) used to be answered by an IAM policy document that a human wrote and a security team reviewed. Today, that answer is increasingly being written at runtime by an LLM-based orchestration layer that nobody explicitly authorized to make that call.
Why Identity Is the New Frontier of AI Cloud Risk
To understand why this matters, it helps to think about what "identity" means in cloud infrastructure.
In a traditional cloud environment, every action (reading from a storage bucket, calling an API, writing to a database) is performed by an identity: a user, a service account, a role. IAM (Identity and Access Management) frameworks exist precisely to control which identities can do what. This is the foundation of the "least privilege" principle that every cloud security framework, from AWS Well-Architected to the NIST Cybersecurity Framework, treats as non-negotiable.
Now introduce an AI orchestration layer: something like a LangChain-based agent, an AutoGen multi-agent system, or a vendor-managed AI workflow engine. These systems don't just execute tasks. They plan them. They decide which tools to call, in what order, with what parameters. And critically, they decide which credentials to use when making those calls.
Here's where the governance gap opens: the orchestration layer often inherits a broad service account or API key at initialization, and then makes downstream decisions about how to deploy that credential across a chain of tool calls. The original human who provisioned that service account may have intended it for a narrow, specific use. The AI tool uses it for whatever the task requires: because it can, because nothing stops it, and because the architecture was never designed with this use case in mind.
This appears to be the norm, not the exception, in current enterprise AI deployments.
The "Assumed Role" Problem at Runtime
Let me make this concrete with a scenario that is playing out in real enterprise environments right now.
A data engineering team deploys an AI-assisted pipeline orchestrator. The orchestrator is given a service account with permissions to read from a data lake, write to a staging database, and call an internal analytics API. Standard stuff. The security team reviews the service account permissions, approves them, and moves on.
Three months later, the orchestrator has been extended โ through prompt updates and new tool registrations โ to also handle data quality checks, trigger downstream ML training jobs, and query a compliance reporting API. None of these extensions went through a formal IAM review. The service account that was approved for three narrow operations is now being used to speak for a workload that spans six or seven distinct functions, some of which touch regulated data.
Nobody changed the IAM policy. The AI tool changed what it does with the existing policy.
This is the assumed role problem at runtime. The identity didn't change. The permissions didn't change. What changed is the scope of action that the AI tool decided to exercise under that identity, and there's no governance checkpoint that catches this drift.
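Catching this drift mechanically is straightforward if you keep the originally approved action set next to what the audit log actually records for that identity. A minimal sketch, assuming hypothetical action names loosely modeled on CloudTrail event names:

```python
# Detect scope drift: actions an identity exercised beyond its approved set.
# The approved set comes from the original IAM review; observed actions
# come from audit logs for the service account (names here are illustrative).

APPROVED_ACTIONS = {          # what the security team signed off on
    "s3:GetObject",           # read from the data lake
    "rds:ExecuteStatement",   # write to the staging database
    "execute-api:Invoke",     # call the internal analytics API
}

def scope_drift(observed_actions):
    """Return actions the identity exercised that were never reviewed."""
    return sorted(set(observed_actions) - APPROVED_ACTIONS)

# Three months later, the audit log shows a wider footprint:
observed = [
    "s3:GetObject",
    "rds:ExecuteStatement",
    "sagemaker:CreateTrainingJob",   # ML training jobs, never reviewed
    "execute-api:Invoke",
    "athena:StartQueryExecution",    # compliance queries, never reviewed
]

drift = scope_drift(observed)
print(drift)  # the actions that slipped past the original review
```

Nothing in this comparison requires new tooling; the hard part, as the scenario shows, is that nobody thinks to run it, because no policy changed.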
This connects directly to something I've been tracking across several previous analyses: the pattern where AI tools don't just execute instructions but quietly expand the surface area of what they do, without triggering any of the approval gates that traditional infrastructure changes would require. If you've been following the thread on how enterprise AI costs spiral beyond what any budget process anticipated, this identity drift is the same structural problem wearing a different mask.
When AI Tools Negotiate Trust Chains
The problem compounds in multi-agent architectures, which are becoming standard in enterprise AI deployments as of early 2026.
In a multi-agent system, one AI agent (the "orchestrator") delegates subtasks to other agents (the "workers"). Each worker may need its own set of credentials to perform its task. The orchestrator decides โ at runtime โ which worker gets which credential, how long it holds it, and whether to pass it further downstream.
This is, functionally, a trust chain. And it's being assembled dynamically, by software, without human sign-off at each link.
The security implications are significant. In traditional systems, trust chains are designed explicitly: Service A is allowed to call Service B on behalf of User C, and this is documented in policy. In AI orchestration systems, the trust chain is emergent: it forms as the agent decides how to accomplish its goal. An orchestrator agent might pass a credential to a retrieval agent, which passes a modified request to an external API, which returns data that gets written to a location the original human operator never anticipated.
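One way to make that emergent chain visible is to refuse to hand a credential to a sub-agent except through a wrapper that records every hop. A minimal sketch, with hypothetical agent and credential names:

```python
# Record every credential hand-off so the runtime trust chain is auditable.
from dataclasses import dataclass, field

@dataclass
class CredentialHandle:
    name: str                                   # e.g. "pipeline-svc-account"
    chain: list = field(default_factory=list)   # ordered list of hand-offs

    def delegate(self, from_agent, to_agent, reason):
        """Log a hop in the trust chain before the credential changes hands."""
        self.chain.append({"from": from_agent, "to": to_agent, "reason": reason})
        return self                             # same credential, longer audit trail

cred = CredentialHandle("pipeline-svc-account")
cred.delegate("orchestrator", "retrieval-agent", "fetch source documents")
cred.delegate("retrieval-agent", "external-api-caller", "enrich records")

# The recorded chain answers "who authorized whom" after the fact:
for hop in cred.chain:
    print(f'{hop["from"]} -> {hop["to"]}: {hop["reason"]}')
```

The design point is that delegation becomes an explicit, logged operation instead of an invisible side effect of passing an environment variable downstream.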
"The key question in agentic AI security isn't 'what can this agent do?' โ it's 'what can this agent authorize other agents to do?'" โ this framing, while not from a single citable source, reflects the emerging consensus in enterprise security architecture discussions at events like RSA Conference 2025 and re:Inforce 2025.
The "confused deputy" problem โ a well-established concept in computer security where a program with legitimate permissions is tricked into misusing them on behalf of another party โ is being recreated at scale in AI orchestration layers. Except in this version, nobody is doing the tricking. The AI tool is making these delegation decisions autonomously, in good faith, as part of doing its job.
What Makes This Different from Traditional Privilege Escalation
Security teams might reasonably ask: isn't this just privilege escalation? We have tools for that.
Not quite. Traditional privilege escalation involves an actor โ human or malicious software โ exceeding their authorized permissions. The detection logic is relatively straightforward: compare what the identity is doing against what it's allowed to do, and flag anomalies.
The AI tool identity problem is different in a crucial way: the AI tool is operating entirely within its authorized permissions. It's not escalating. It's redistributing: using a broad credential to do things that were technically permitted but never intended. The action is legal. The governance failure is upstream, in the moment when the service account was provisioned without anticipating how an AI orchestration layer would use it.
This means existing SIEM (Security Information and Event Management) systems and CSPM (Cloud Security Posture Management) tools will likely miss it. They're looking for unauthorized access. This is authorized access being used in ways nobody planned for.
The practical implication: your current security tooling is probably not equipped to catch this class of problem. You need a different kind of visibility, one that tracks not just what an identity does, but which component decided to use that identity and what chain of reasoning led there.
Three Things You Can Do Right Now
This isn't a problem that requires waiting for vendors to build new tools (though they will, and should). There are concrete steps that cloud architects and security teams can take today.
1. Audit Every Service Account That an AI Tool Touches
Start with a simple inventory: which service accounts, API keys, and IAM roles are currently accessible to any AI orchestration layer in your environment? This is harder than it sounds: many deployments inherit credentials through environment variables, secrets managers, or SDK defaults that aren't explicitly documented anywhere.
Once you have the list, ask a harder question: what is the broadest action this AI tool could take with this credential, given its current tool registrations and prompt configuration? That worst-case surface area is your actual risk exposure, not the intended use case.
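Answering that worst-case question can be partially automated: expand the credential's allowed action patterns (wildcards included) against the catalogue of actions the registered tools can emit. A minimal sketch using `fnmatch` for IAM-style wildcards; the policy and tool action lists are hypothetical:

```python
# Compute the worst-case action surface: every action a registered tool
# could emit that the credential's policy patterns would permit.
from fnmatch import fnmatch

POLICY_ALLOWED = ["s3:Get*", "s3:List*", "rds:*", "execute-api:Invoke"]

# Actions the currently registered tools are capable of emitting:
TOOL_ACTIONS = [
    "s3:GetObject", "s3:ListBucket",
    "rds:ExecuteStatement", "rds:DeleteDBInstance",  # permitted by "rds:*"!
    "execute-api:Invoke",
    "sagemaker:CreateTrainingJob",                   # tool exists, policy blocks it
]

def worst_case_surface(allowed_patterns, tool_actions):
    """Actions that are both reachable by a tool and allowed by policy."""
    return sorted(a for a in tool_actions
                  if any(fnmatch(a, p) for p in allowed_patterns))

surface = worst_case_surface(POLICY_ALLOWED, TOOL_ACTIONS)
print(surface)  # includes rds:DeleteDBInstance: permitted but never intended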
2. Implement AI-Specific IAM Scoping
Standard IAM best practices say "least privilege." For AI tools, this needs to be operationalized differently. Consider:
- Time-bounded credentials: AI orchestration tasks should use credentials that expire after the task completes, not long-lived service accounts.
- Task-scoped roles: Rather than giving an AI tool a single broad service account, provision separate roles for each distinct capability (read from data lake, write to staging, call analytics API) and require the orchestrator to explicitly assume the correct role for each action.
- Delegation limits: Define explicitly whether an AI tool is permitted to pass its credentials to sub-agents, and log every instance where this happens.
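On AWS, the first two bullets map naturally onto `sts:AssumeRole` with a short `DurationSeconds` and a separate role per capability. A minimal sketch that only builds the request parameters; the role ARNs and task names are hypothetical, and in production the dict would be passed to `boto3`'s `sts.assume_role`:

```python
# Build task-scoped, time-bounded sts:AssumeRole parameters per capability,
# instead of handing the orchestrator one broad long-lived service account.

TASK_ROLES = {  # hypothetical per-capability role ARNs
    "read_data_lake": "arn:aws:iam::123456789012:role/ai-read-datalake",
    "write_staging":  "arn:aws:iam::123456789012:role/ai-write-staging",
    "call_analytics": "arn:aws:iam::123456789012:role/ai-call-analytics",
}

def assume_role_params(task, session_suffix, duration_seconds=900):
    """Parameters for a short-lived, task-scoped credential (15-minute default)."""
    if task not in TASK_ROLES:
        raise PermissionError(f"no role provisioned for task {task!r}")
    return {
        "RoleArn": TASK_ROLES[task],
        "RoleSessionName": f"ai-orch-{task}-{session_suffix}",  # visible in audit logs
        "DurationSeconds": duration_seconds,  # credential expires with the task
    }

params = assume_role_params("read_data_lake", "run42")
# In production: boto3.client("sts").assume_role(**params)
print(params["RoleSessionName"])
```

The `PermissionError` branch is the point: a task nobody provisioned a role for fails loudly at the orchestrator, instead of silently riding on a broad inherited credential.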
3. Add a "Who Decided This?" Layer to Your Audit Logs
Current cloud audit logs answer: what happened, when, by which identity. For AI-orchestrated environments, you need a fourth dimension: what AI reasoning chain led to this action?
This is technically achievable today using structured logging from your orchestration layer. LangChain, LangGraph, and similar frameworks support trace logging that captures the decision chain. The gap is that most enterprises aren't connecting these traces to their cloud audit logs; they live in separate systems. Bridging that gap is one of the highest-leverage security investments an AI-enabled enterprise can make right now.
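The bridge can be as simple as a shared correlation key: stamp the orchestration run ID into the credential's session name, then join audit events back to trace steps on that ID. A minimal sketch with hypothetical trace and audit-log records:

```python
# Join orchestration traces to cloud audit events via a shared run ID,
# answering "what reasoning chain led to this action?" after the fact.

traces = [  # exported from the orchestration layer (format is illustrative)
    {"run_id": "run42", "step": "plan",      "decision": "need compliance data"},
    {"run_id": "run42", "step": "tool_call", "decision": "query reporting API"},
]

audit_events = [  # from the cloud audit log; session name carries the run ID
    {"event": "execute-api:Invoke", "session": "ai-orch-run42"},
    {"event": "s3:GetObject",       "session": "ai-orch-run99"},  # different run
]

def explain(event, traces):
    """Return the reasoning steps behind a single audit event."""
    run_id = event["session"].rsplit("-", 1)[-1]   # "ai-orch-run42" -> "run42"
    return [t["decision"] for t in traces if t["run_id"] == run_id]

why = explain(audit_events[0], traces)
print(why)  # the decision chain that produced the API call
```

The specific field names are assumptions; the technique only requires that one identifier survives the hop from orchestration layer to cloud API call.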
The Broader Pattern: AI Tools Are Becoming Identity Brokers
Stepping back from the operational details, there's a structural shift worth naming clearly.
AI tools are becoming identity brokers: systems that decide, in real time, which entity speaks for which workload, which credential gets used for which action, and which trust relationships get formed across a distributed system. This is a function that, in traditional infrastructure, was performed by humans writing IAM policies and security architects designing trust boundaries.
We haven't updated our governance frameworks to reflect this shift. We're still asking "who deployed this?" and "what permissions did we grant?" when the more urgent questions are "what is the AI tool deciding to do with those permissions?" and "what is the chain of delegation it's creating at runtime?"
This is the same pattern that showed up in the context of how physical AI systems are rewriting manufacturing accountability chains: a different domain, but the same structural problem. AI systems are making decisions that used to require explicit human authorization, in ways that existing governance frameworks weren't designed to catch.
The CISA guidance on secure-by-design AI and the NIST AI Risk Management Framework both point in the right direction: they emphasize that AI systems need to be designed with accountability chains that survive runtime autonomy. But translating that principle into concrete IAM policy and audit architecture is work that most enterprises haven't done yet.
The Question That Should Be on Every Cloud Governance Agenda
Here's the governance question I'd put on every cloud security and architecture review agenda for the rest of 2026:
"For every AI tool we've deployed: what is the most powerful identity it could speak as, and who โ human or policy โ would stop it if it chose to?"
If the answer to the second part is "nothing, currently" (and for most enterprise AI deployments, that appears to be the honest answer), then you have a governance gap that no amount of cost monitoring or deployment approval will close.
The AI tools are doing their jobs. The question is whether we've done ours: defining, explicitly and enforceably, whose voice they're allowed to use when they do it.
Technology is never just a machine: it's a system of decisions, trust, and accountability. Right now, AI tools are inheriting more of that accountability than we've consciously chosen to give them. That's not a reason to slow down AI adoption. It's a reason to build the governance infrastructure that makes fast, confident adoption actually safe.
The machines are ready. The question is whether our policies are.
Kim Tae-hee
A tech columnist who has covered the IT industry at home and abroad for 15 years, offering in-depth analysis of AI, cloud, and the startup ecosystem.