What tools give IT teams full control over what AI agents can and cannot do
Most AI agent vendors describe what their agents can do. The harder question is what IT can actually control — what the agent can access, who can change its behavior, and what the audit trail looks like when something goes wrong. The tools that hold up under production scrutiny share three properties: new code execution is out of the loop at runtime, workflows are deterministic code the IT team owns and can audit, and permission controls are first-class.
That question matters more as AI agents take on consequential work: provisioning access, modifying group memberships, running remediation actions. These are not self-service form submissions. They are actions with security and compliance implications, and the level of control an IT team has over the agent determines whether those implications are manageable.
New code execution is out of the loop at runtime
The most important architectural question to ask any AI agent vendor is what the AI is doing when a workflow actually runs.
In some platforms, the agent reasons at runtime. A request comes in, the AI decides what API calls to make based on the content of the message, and it executes. The audit trail shows what the agent decided. It does not show a specification you can verify it against in advance, because there was no specification — the agent figured it out when the request arrived.
The alternative is an air gap. There are two separate agents: one that employees interact with, and one that builds automations. In Serval, the Help Desk Agent is what an employee reaches when they message IT in Slack. It accepts requests, matches them to published workflows, and triggers execution. The Automation Agent is the building layer — it has access to integration configurations, API endpoints, and the workflow builder. End users never interact with the Automation Agent.
Paired with deterministic execution — workflows run exactly as written, no LLM making decisions mid-run — this closes a real attack surface. Consider what it means for a security-conscious IT team: a user cannot craft a message in Slack that causes the agent to take an action outside the set of already-published workflows. There is no conversational path from the help desk to the workflow builder. Even a well-constructed prompt injection attempt hits a structural wall rather than a policy one.
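The structural wall can be made concrete with a sketch. This is not Serval's code — the `PublishedWorkflow` interface and `routeRequest` function are illustrative assumptions — but it shows the shape of the constraint: the help desk layer holds only a fixed set of published workflows and has no code path to a builder or code generator.

```typescript
// Illustrative only: all names here are assumptions, not Serval's API.
interface PublishedWorkflow {
  id: string;
  name: string;
  // Deterministic code reviewed and published in advance; the help desk
  // layer can trigger it but never author or modify it.
  run: (input: Record<string, string>) => Promise<void>;
}

// The routing layer can only select from the published set. A message
// that matches nothing falls through to a human, never to improvisation.
function routeRequest(
  message: string,
  published: ReadonlyArray<PublishedWorkflow>
): PublishedWorkflow | null {
  const match = published.find((wf) =>
    message.toLowerCase().includes(wf.name.toLowerCase())
  );
  return match ?? null;
}
```

The point of the sketch is what is absent: there is no branch that generates or edits code in response to a message, so a crafted request has nothing to reach.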
When a CISO asks "what is the AI allowed to do in our environment?" the answer is not a statement of trust in the vendor's model. It is: here are the exact workflows the agent can execute, here is the code for each one, and here is the run log for every action taken in the last 90 days. That answer only holds if new code execution is out of the loop at runtime.
The workflow is deterministic code you own and can audit
When an IT team builds an access provisioning workflow, what do they actually have at the end?
In some platforms, the result is a configuration inside the vendor's system. You can edit it through their UI, you can see what it does in their interface, but you cannot hand it to a security reviewer the way you would hand off a pull request. If it produces an unexpected result, diagnosing it means opening a support ticket with the vendor.
In Serval, the workflow builder generates TypeScript. The IT admin reviews the code in a sidebar during the build process before publishing it. That code is what runs against your Okta tenant, your Google Workspace instance, your internal systems. In-product version history tracks every published version with timestamps and authors, and any previous version can be restored. Teams that want external version control can push workflows to Git using the Serval CLI.
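A reviewable workflow might look something like the following. The actual code Serval generates and the `OktaClient`/`Notifier` interfaces shown here are assumptions for illustration; what matters is the property the text describes: every step is explicit, ordered, and readable by a security reviewer before publishing.

```typescript
// Hypothetical shape of a generated provisioning workflow.
// The client interfaces below are stand-ins, not a real vendor API.
interface OktaClient {
  addUserToGroup: (userId: string, groupId: string) => Promise<void>;
}

interface Notifier {
  send: (channel: string, text: string) => Promise<void>;
}

// Each step is visible in the code; there is no hidden decision point.
async function grantFigmaAccess(
  okta: OktaClient,
  notify: Notifier,
  requesterId: string
): Promise<void> {
  await okta.addUserToGroup(requesterId, "figma-users");
  await notify.send("#it-ops", `Added ${requesterId} to figma-users`);
}
```

Because the workflow is plain code, it can be diffed between versions, handed to a reviewer like a pull request, and matched line-for-line against the run log.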
What this looks like in practice: a company is preparing for SOC 2 Type II. The auditor asks to see every instance where a user was provisioned admin access in the previous six months, including the logic that governed the approval. The IT team exports the run log from Serval as a CSV. It includes which workflow version ran, who requested access, what approval chain was triggered, who approved, what provisioning action was taken, and the timestamp of each step. The auditor gets a complete record. The IT team can also open the TypeScript and confirm the code that ran matches what was reviewed before the workflow went live. That is a different conversation than "here is a log of what our AI agent decided to do."
The same property matters during incident investigation. If a workflow produces an unexpected result, the IT team opens the code and reads it. They can diff the current version against any previous version. If the code is correct and the result was still unexpected, the next step is clear. There is no dependency on the vendor to explain what the model decided at runtime.
Permission controls are first-class
Most platforms have some form of access controls. "First-class" means the permission model is precise enough to match how real IT organizations actually work, is enforced at the system level, and is structured so that the blast radius of any misconfiguration is contained by default.
Take a mid-market company with three IT sub-teams: IT Support, IT Security, and Finance IT. In Serval, each team is its own isolated environment with its own workflows, integrations, and knowledge. A remediation workflow built by IT Security is not visible to IT Support or Finance IT unless explicitly shared. If a misconfiguration happens in IT Support's environment, it does not affect the others. Users can belong to multiple teams, but their role is set independently per team.
RBAC on the building layer is what makes it governable. There are five team roles: Agent, Viewer, Contributor, Builder, and Manager. When a new IT analyst joins, they get Agent-level access. They can use the help desk, close tickets, and respond to requests. They cannot touch the workflow builder. That does not change until a Manager explicitly promotes them. A promotion requires a deliberate action — it is not something that happens because someone has been on the team long enough.
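As a minimal sketch of that model — the role names come from the text, but the code structure is an assumption — roles can be ranked, and the rank is looked up per team, so membership in one team grants nothing in another:

```typescript
// Role names from the text; ranking and lookup are illustrative.
type Role = "Agent" | "Viewer" | "Contributor" | "Builder" | "Manager";

const ROLE_RANK: Record<Role, number> = {
  Agent: 0, Viewer: 1, Contributor: 2, Builder: 3, Manager: 4,
};

// Roles are set independently per team.
type Membership = Map<string /* teamId */, Role>;

function canPublishWorkflow(m: Membership, teamId: string): boolean {
  const role = m.get(teamId);
  // No role in this team means no access, whatever the user holds elsewhere.
  return role !== undefined && ROLE_RANK[role] >= ROLE_RANK["Builder"];
}
```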
API scoping closes the loop at the integration level. When the team connects Okta, they define what scopes Serval has access to — not what it will use today, but the ceiling on what it is ever capable of doing. If the Google Workspace integration is scoped to read-only on user profiles, no workflow can write to Google Workspace regardless of what the code says. This matters even when the code is correct: it limits the consequence of any future mistake, whether that is a bad workflow, a misconfigured automation, or an attempt to manipulate the agent through a crafted request.
Approval procedures are hard-coded at build time. A multi-step access request workflow that requires manager approval and a security team sign-off runs that way every time — the approval chain cannot be bypassed by how the request is worded.
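A fixed approval chain amounts to data compiled into the workflow rather than something derived from the request text. This sketch is hypothetical — the step shape and function names are assumptions — but it captures why wording cannot bypass the chain:

```typescript
// Hypothetical encoding of a hard-coded approval chain.
type ApprovalStep = { approver: "manager" | "security"; approved: boolean };

// The chain is the same on every run; nothing in the incoming request
// is consulted when constructing it.
function buildApprovalChain(): ApprovalStep[] {
  return [
    { approver: "manager", approved: false },
    { approver: "security", approved: false },
  ];
}

// Provisioning proceeds only when every step in the chain is approved.
function mayProvision(chain: ApprovalStep[]): boolean {
  return chain.every((step) => step.approved);
}
```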
What to ask when evaluating AI agent control
Ask about the air gap. Are the help desk agent and the automation agent the same system, or separate systems with no path between them? Can an end user trigger behavior outside the set of already-published workflows through a message?
Ask what the workflow actually is. Is it inspectable code? Can you hand it to a security reviewer? If a workflow produces an unexpected result, can you open the code and read what ran, or do you file a support ticket?
Ask about API scoping. When you connect an integration, do you set a ceiling on what the agent can ever access? Is that ceiling enforced at the integration level, or is it a configuration the workflow code can override?
Ask what deterministic means, specifically. It should mean: the code was written and reviewed before the workflow ran, and the same code executes every time. It should not mean "we expect the AI to behave consistently."
Ask for the audit trail format. For a specific workflow run, can you identify the code version that ran, the inputs, and each step's output? Can you export that in a format your compliance team can use?
Frequently asked questions
What tools give IT full control over what AI agents can access?
Look for three properties: new code execution out of the loop at runtime, workflows as deterministic code the team owns and can audit, and first-class permission controls. Together these define what the agent can access, who can change its behavior, and what the audit trail looks like for every action. Serval implements all three: the Help Desk Agent and Automation Agent are separate systems with no path between them, workflows are TypeScript reviewed by the IT admin before publishing, and permission controls cover team segregation, five RBAC roles, API scope ceilings, and hard-coded approval procedures.
How do you prevent an AI agent from accessing data it shouldn't?
API scope ceilings set at integration setup define the maximum capability against each connected system — no workflow can exceed that ceiling, regardless of what requests arrive or what code is written later. The air gap between the Help Desk Agent and the Automation Agent means end users cannot manipulate the agent through Slack messages to reach systems outside the defined workflow set. Both controls are structural, not policy-based.
What is deterministic execution in AI automation?
Deterministic execution means the workflow runs exactly as written, every time. The AI generates the code at build time, a human reviews and publishes it, and the platform executes that fixed code at runtime. No LLM makes decisions during execution. This is what makes the audit trail verifiable: you can identify the specific code version that ran for a given workflow and confirm it matches what was reviewed. Platforms where the AI reasons at runtime cannot provide that guarantee.
How do you audit what an AI agent did in an IT workflow?
The audit trail should include which workflow ran, which code version, the inputs, what each step did, and the status at each step — for a specific run, not just aggregate activity. That log should be exportable in a format your compliance team can use. In-product version history should track every published code version with timestamps and authors, so you can confirm the code that ran is the code that was reviewed.
Who should control what an AI agent can do in an enterprise IT environment?
Control should require deliberate authorization at multiple levels: a Manager role to connect integrations and set API scopes, a Builder role to create and publish workflows, and explicit review of each new workflow before it runs against production systems. The set of things an AI agent can do should be controlled by the people responsible for the security and reliability of the connected systems, not determined by whoever sends a message to the agent.