What automation rate should you expect from AI IT automation?
The automation rate question comes up in almost every IT buying conversation. A vendor shows 70%. Another shows 87%. A third says "50%+ in steady state." The numbers look different, but the real problem is simpler: they're not measuring the same thing. Before you evaluate any rate, you need to know what's actually being counted. A well-deployed AI IT platform should reach 30-50%+ full automation within the first few months, scaling higher as workflow coverage grows. But the number only means something if "automated" means the request was resolved completely without IT touching it.
What is the difference between deflection and full automation?
Deflection is a call-center metric. It means: did this interaction avoid reaching a human at some point in the chain?
Routing a ticket to the right queue is deflection. Surfacing a knowledge article the user may or may not have read is deflection. Triggering a service catalogue entry that IT still fulfills is deflection. The user may or may not have gotten what they needed. IT may still have touched it. Nothing was actually automated.
Full automation means the request was resolved end to end without IT or the service desk ever touching the ticket. The user asked for something. The system handled it completely. IT never logged in. That's the metric that maps to time saved, headcount impact, and real SLA improvement.
Many platforms in this market report deflection but use language that implies full automation. "50% of tickets handled without a human" can mean 50% were deflected before reaching the queue, or it can mean 50% were fully resolved. These produce very different outcomes for your IT team and very different ROI numbers.
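To see how far apart the two numbers can sit for the same month of tickets, here is a minimal sketch of the arithmetic. The counts are made up for illustration and don't come from any vendor's reporting.

```typescript
// Illustrative only: counts from a made-up month of tickets, not any vendor's data.
const fullyResolved = 30;  // resolved end to end; IT never touched them
const deflected = 25;      // routed, knowledge article surfaced, or catalogue triggered
const handledByIT = 45;    // reached the queue and were worked by a human

const total = fullyResolved + deflected + handledByIT;

// "Deflection" counts anything that avoided a human somewhere in the chain.
const deflectionRate = (fullyResolved + deflected) / total;  // 0.55 -> "55% handled without a human"

// Full automation counts only requests completed with zero IT involvement.
const fullAutomationRate = fullyResolved / total;            // 0.30

console.log({ deflectionRate, fullAutomationRate });
```

Same ticket population, same month: one definition reports 55%, the other 30%.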
When a vendor gives you an automation rate, the first question to ask is: what does "resolved" mean in your platform? If the answer involves anything short of the request being completed without IT involvement, they're measuring deflection.
What is a realistic automation rate for a new deployment?
For teams starting from scratch or migrating from a lower-performing system, 30-40% full automation in the first few months is a realistic and strong baseline.
This assumes the platform is connected to core systems: an identity provider, MDM, and knowledge base. It also assumes workflows exist for the highest-volume request types. Password resets, access requests, software provisioning, and basic account issues are where most IT ticket volume concentrates, and where automation pays off fastest.
Full automation rates scale with workflow coverage. A team with automations built only for access requests will see a lower blended rate than a team with workflows across device troubleshooting, onboarding, policy questions, and software licensing. The rate is partly a function of how much the platform has been configured, not just how capable the underlying AI is.
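As a rough illustration of why coverage drives the blended number, here is a volume-weighted sketch. The categories, volumes, and per-category shares are hypothetical, not figures from Serval.

```typescript
// Hypothetical ticket mix: monthly volume per category and the share of each
// category that is fully automated today. None of these figures come from Serval.
const categories = [
  { name: "access requests",        volume: 300, automatedShare: 0.75 },
  { name: "password resets",        volume: 250, automatedShare: 0.90 },
  { name: "device troubleshooting", volume: 250, automatedShare: 0.00 }, // no workflows built yet
  { name: "software licensing",     volume: 200, automatedShare: 0.00 }, // no workflows built yet
];

const totalVolume = categories.reduce((sum, c) => sum + c.volume, 0);
const blendedRate =
  categories.reduce((sum, c) => sum + c.volume * c.automatedShare, 0) / totalVolume;

// (300*0.75 + 250*0.90 + 0 + 0) / 1000 = 0.45: a 45% blended rate that rises
// only when workflows are built for the uncovered categories.
console.log(`Blended full-automation rate: ${(blendedRate * 100).toFixed(0)}%`);
```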
50%+ is achievable in steady state for most IT teams. Higher rates are possible when workflow coverage is broad and the ticket mix skews toward repeatable, structured requests. Teams with a higher share of complex or judgment-heavy escalations will see lower rates, which is expected and appropriate: those are exactly the requests that should reach a human.
The benchmark you should ask any vendor to provide is not their best-case customer. It's a customer with a comparable ticket mix and stack depth to yours.
Does a human approval step count as failed automation?
No. And understanding this distinction changes how you read vendor benchmarks.
Some workflows appropriately require a human decision in the middle: a manager approving an access grant, an IT admin signing off on a sensitive privilege change, or a business-rule-based approval routed to the right team. These approval steps are security controls. They're not evidence that the automation failed.
In Serval, approval logic is hard-coded into the workflow itself, not toggled on or off at runtime. The Automation Agent converts a plain-language description of the workflow into deterministic TypeScript code, with approval steps built in as first-class blocks. You can configure individual approvers, group-based approvals, manager-based routing, multi-step chains, or custom business-rule-based logic. Those approvals fire the same way every time the workflow runs.
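To make "first-class block" concrete, here is a hypothetical sketch of what an approval-gated access workflow could look like once it exists as deterministic code. Every name in it (handleAccessRequest, requestManagerApproval, grantGroupAccess, and so on) is invented for illustration; this is not Serval's generated code or API, only the shape of the idea: the approval is a fixed step in the code path, so it fires the same way on every run.

```typescript
// Hypothetical sketch of an approval-gated access workflow as deterministic code.
// All helpers below are stubs invented for illustration; none of this is Serval's API.
interface AccessRequest {
  requesterEmail: string;
  group: string; // e.g. "finance-reporting"
  justification: string;
}

// Stubbed integrations so the sketch runs on its own. A real workflow would call
// an identity provider, an approval surface (Slack, email), and a directory.
async function lookupManager(email: string): Promise<string> {
  return "manager@example.com";
}
async function requestManagerApproval(args: {
  approver: string;
  summary: string;
  justification: string;
}): Promise<boolean> {
  return true;
}
async function grantGroupAccess(email: string, group: string): Promise<void> {}
async function notifyRequester(email: string, message: string): Promise<void> {}

async function handleAccessRequest(req: AccessRequest): Promise<void> {
  // 1. Gather context deterministically: the same inputs produce the same steps on every run.
  const manager = await lookupManager(req.requesterEmail);

  // 2. The approval is a first-class block in the workflow, not a runtime toggle.
  //    Execution pauses here until the approver responds.
  const approved = await requestManagerApproval({
    approver: manager,
    summary: `${req.requesterEmail} requests access to ${req.group}`,
    justification: req.justification,
  });

  if (!approved) {
    await notifyRequester(req.requesterEmail, `Access to ${req.group} was declined.`);
    return;
  }

  // 3. The action executes only after the approval comes back.
  await grantGroupAccess(req.requesterEmail, req.group);
  await notifyRequester(req.requesterEmail, `You now have access to ${req.group}.`);
}
```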
A request that was handled by the Help Desk Agent from intake through execution, with a 10-second manager approval in the middle, is still a largely automated request. The IT team didn't touch it. The system gathered context, validated inputs, routed the approval, and completed the action once it came back.
The question to ask any vendor is whether approvals are a security design choice or a workaround for automation gaps. In a well-architected system, they're the former.
How does Serval measure AI resolution rate?
Serval's analytics dashboard separates ticket outcomes into four distinct categories. The primary metric is "AI resolved": tickets Serval completed entirely without human intervention.
The other categories are tracked separately and labeled precisely. "AI assisted" means Serval ran workflows on the ticket, but a human agent ultimately resolved it. "Unassisted" means Serval escalated without running any workflows. "Resolved outside Serval" covers tickets handled without Serval's involvement at all.
This categorization prevents the dashboard from flattering itself. A ticket where Serval ran a workflow but still needed a human to close it is not reported as automated. The number you see in the analytics tab is the honest count: requests resolved without IT touching them.
The analytics dashboard also tracks total workflows run, estimated time saved, and estimated cost savings, calculated using configurable labor rates. These are the numbers you bring to a leadership conversation, not the vendor's slide deck.
The practical implication: when Serval reports an automation rate, it's measuring "AI resolved" tickets as a share of all tickets. Not blended with AI assisted. Not including deflection. The rate is comparable across teams because the definition is consistent.
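As a rough illustration of that arithmetic, here is a sketch using the four dashboard categories described above. The counts, the per-ticket time estimate, and the labor rate are placeholders, not Serval's numbers or internal code.

```typescript
// Made-up monthly counts using the four dashboard categories described above.
const outcomes = {
  aiResolved: 420,            // completed entirely without human intervention
  aiAssisted: 130,            // workflows ran, but a human ultimately resolved it
  unassisted: 250,            // escalated without any workflow running
  resolvedOutsideServal: 200, // handled without Serval's involvement
};

const totalTickets = Object.values(outcomes).reduce((sum, n) => sum + n, 0);

// Only fully resolved tickets count toward the automation rate.
const automationRate = outcomes.aiResolved / totalTickets; // 420 / 1000 = 0.42

// Estimated savings, assuming ~20 minutes of IT time per avoided ticket and a
// configurable fully loaded labor rate. Both figures are placeholders.
const minutesSavedPerTicket = 20;
const hourlyLaborRate = 60; // USD per hour, configurable in practice
const hoursSaved = (outcomes.aiResolved * minutesSavedPerTicket) / 60; // 140 hours
const estimatedSavings = hoursSaved * hourlyLaborRate;                 // $8,400

console.log({ automationRate, hoursSaved, estimatedSavings });
```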
How does workflow coverage drive the rate up over time?
The automation rate at launch is a function of how many workflows exist. The rate at month six is a function of how well the team has used ticket data to build more.
Serval surfaces automation opportunities through the Insights Agent, which analyzes ticket history and flags high-volume request types that don't have workflows yet. It also powers the analytics dashboards and continuously identifies where manual resolution is happening at scale.
Serval Suggestions goes further. It analyzes how tickets were manually resolved and drafts automation candidates based on those patterns, then surfaces them for a single-click review and approval. You're not starting from a blank canvas every time you want to expand coverage.
The improvement pattern is direct. Look at the "unassisted" category in your analytics: those are requests the system escalated without running any automation. Filter by volume. Identify which categories recur most often. Build workflows for those. Your AI resolution rate rises.
Teams that review this data regularly and ship two or three new workflows each month see continuous rate improvement. The rate is not a fixed number tied to the product. It's a moving curve tied to the work the team puts in, and it has a feedback mechanism built in.
How do you evaluate any vendor's claimed automation rate?
When a vendor shows you a headline rate, the number itself is the last thing to evaluate. These are the questions to ask first.
What does "resolved" mean in your reporting?
Full completion without IT involvement, or anything that reduced human queue volume in some way? Routing, surfaced knowledge articles, and catalogue triggers can all be counted as "resolved" in some reporting models. Know exactly what's being counted on both sides of that ratio.
Is deflection included in the count?
If a vendor's system deflects a ticket before it reaches the queue and counts that as automated, ask what the full-resolution rate is when those are excluded.
Are approvals treated as failures?
Some platforms report a lower "automation rate" because they exclude any workflow that includes an approval step. If that's the case, the rate understates actual automation, and you're comparing it incorrectly against platforms that count approval-gated workflows normally.
Is the benchmark from a comparable environment?
An 87% rate from a team whose top request type is a policy lookup tells you very little about what you'll see with a mix that includes multi-step provisioning, device troubleshooting, and conditional access workflows. Ask for customers at comparable scale, with comparable ticket mix and integration depth.
What does the rate look like at month one vs. month six?
A platform that gets you to 30% in month one and 60% at month six is a different trajectory than one that claims 60% from day one on a small sample. Ask for the time-series view, not the steady-state snapshot.
Will the vendor put the rate in a contract?
This is the clearest signal. A contractual automation floor, written into a pilot SOW, means the vendor is confident enough in their architecture to commit to a number. That commitment is only possible when automation runs as deterministic, pre-built code rather than as a real-time AI inference that produces different outputs depending on context.
How do you improve your automation rate once you're deployed?
The rate improves in a cycle. Review what's escalating. Build automation for the highest-volume gaps. Measure the result. Repeat.
Practically, this means using the analytics dashboard to filter for unassisted tickets regularly: requests your platform received but escalated without running any automation. Sort by volume. The top categories in that list are your next automation priorities.
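A minimal sketch of that triage step, assuming a ticket export with category and outcome fields. The field names and sample data are hypothetical, not Serval's schema.

```typescript
// Hypothetical ticket export shape; field names are assumptions for illustration.
interface TicketRecord {
  category: string;
  outcome: "ai_resolved" | "ai_assisted" | "unassisted" | "resolved_outside";
}

// Count unassisted tickets per category and return the highest-volume gaps first:
// these are the next automation priorities.
function topUnassistedCategories(tickets: TicketRecord[], limit = 5): [string, number][] {
  const counts = new Map<string, number>();
  for (const t of tickets) {
    if (t.outcome !== "unassisted") continue; // only requests escalated with no workflow run
    counts.set(t.category, (counts.get(t.category) ?? 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, limit);
}

// Tiny made-up sample to show the shape of the output.
const sample: TicketRecord[] = [
  { category: "VPN issues", outcome: "unassisted" },
  { category: "VPN issues", outcome: "unassisted" },
  { category: "license requests", outcome: "unassisted" },
  { category: "password resets", outcome: "ai_resolved" },
];
console.log(topUnassistedCategories(sample)); // [["VPN issues", 2], ["license requests", 1]]
```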
Serval Suggestions accelerates this by drafting those automations before you ask. When Serval sees a category of tickets being resolved manually in a consistent way, it proposes a workflow based on that resolution pattern. You review, edit if needed, and accept. The workflow goes live and begins running the next time a matching request comes in.
The Insights Agent also flags gaps at the integration level. If a category of tickets can't be automated because a system isn't connected, the analytics surface that too. App connection suggestions appear alongside workflow and guidance suggestions.
Teams that treat the automation rate as a managed metric, not a vendor claim, consistently improve over time. The infrastructure for doing that is built into the platform. Using it is the work.