
How AI automates service desk operations

AI automates service desk operations through two distinct layers: a language model that interprets employee requests and a workflow execution engine that carries out the corresponding actions across your IT stack. Whether that automation is reliable, auditable, and safe depends almost entirely on how the execution layer is built. The architectural question is whether the AI generates actions at runtime or triggers pre-built, pre-approved code that IT controls. That distinction is not subtle, and it shapes whether automation creates trust or creates risk.

What does it actually mean for AI to "automate" a service desk request?

Every modern AI service desk makes roughly the same claim: employees ask for something, AI figures out what they need, and the request gets resolved automatically. That description is accurate at the surface. It also leaves out the decision that determines whether the system is trustworthy in production.

Two very different architectures fit that description.

In one model, the AI interprets the request and then generates the actions to take in real time. The model decides at runtime which systems to touch, in what order, and with what parameters. The result may be correct. It may not be. If it isn't, there's no fixed code to review, no repeatable logic to test, and no stable definition of what the automation was supposed to do.

In the other model, IT builds the workflows ahead of time. The AI's job is to understand the request and match it to a workflow that already exists, has already been tested, and has already been published by someone with the authority to do so. At runtime, the AI routes. The pre-built code executes.
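The routing model can be sketched as a lookup against a fixed catalog. This is an illustrative sketch, not Serval's actual API; all names here are hypothetical:

```typescript
// Hypothetical sketch: the AI only classifies; execution is fixed code.
type WorkflowId = "wf_password_reset" | "wf_grant_app_access";

// Pre-built, pre-approved workflows: plain code IT wrote and published.
const catalog: Record<WorkflowId, (user: string) => string> = {
  wf_password_reset: (user) => `reset link sent to ${user}`,
  wf_grant_app_access: (user) => `access request opened for ${user}`,
};

// Stand-in for the language model: it returns a workflow id or null.
// In the generate-at-runtime model, this function would instead emit
// arbitrary API calls -- which is exactly what this design forbids.
function classify(request: string): WorkflowId | null {
  if (/password/i.test(request)) return "wf_password_reset";
  if (/access/i.test(request)) return "wf_grant_app_access";
  return null; // unmatched requests escalate to a human
}

function handle(request: string, user: string): string {
  const id = classify(request);
  if (id === null) return "escalated to service desk";
  return catalog[id](user); // the same code runs for every matching request
}
```

The AI's only output is a workflow id from a closed set; everything that touches a production system is code that existed, and was reviewed, before the request arrived.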

These two approaches produce different behavior, different failure modes, and very different answers when an auditor asks what happened.

Why does the execution model change what you can prove to an auditor?

IT teams get asked to prove things. SOC 2 auditors want evidence that access was granted through an approved process, with documented approvers and timestamps. Security teams investigating an incident want to know exactly what changed. Compliance teams want a log they can produce on request, not reconstruct from memory.

If the execution layer is a live AI generating steps on the fly, there's no fixed definition of what should have happened. There's a log of what did happen, but those are different things, and auditors know the difference. If the logic is probabilistic and generated at runtime, you can't assert to an auditor that the same request always produces the same action. You can only say it usually does.

If the execution layer is pre-built code, you can show an auditor exactly what the workflow does, who approved it for production, what approval steps are required before it runs, which API calls it makes, and what the complete run history looks like. The code is reviewable, diffable, and portable. You can hand it to a security team the same way you'd hand them application code for review.

Serval's Automation Agent builds workflows as TypeScript. The code is visible in the workflow builder, editable directly, and version-controlled with a full change history that records who made each change and when. Once a workflow is published, the Help Desk Agent triggers it when a matching request comes in. No code is generated or modified at runtime. The workflow runs exactly as written, for employee one and employee 10,000.

This is what "deterministic by design" means in the Serval docs: the same request produces the same action, every time, because the same code runs every time. Not because the AI makes the same judgment call each time.

How do approval requirements get enforced in an automated workflow?

Full automation doesn't mean no human involvement. It means IT isn't touching tickets they don't need to touch. Approvals are a security feature, not a slowdown.

The question is whether approval requirements are enforced at the code level or suggested to the AI at runtime. If they're suggestions, the system may follow them. If they're hard-coded into the workflow, they're not optional.

Serval workflows support individual approvers, group-based approvals, manager approvals, multi-step chains, and custom business rules that automate the approval decision itself. Those requirements are configured into the workflow at build time. They're enforced every time the workflow runs, regardless of how the employee phrases the request. You can't talk your way past an approval gate that's built into the code.
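What "enforced at the code level" looks like can be sketched as a gate compiled into the workflow itself. This is a hypothetical illustration, not Serval's workflow syntax:

```typescript
// Hypothetical sketch: a multi-step approval chain baked into the workflow.
interface RunContext {
  requester: string;
  managerApproved: boolean;
  securityApproved: boolean;
}

// Every gate must pass before the action runs. There is no code path that
// grants access without these checks, so no phrasing of the employee's
// request can route around them.
function grantAdminAccess(ctx: RunContext): string {
  if (!ctx.managerApproved) return "pending: manager approval";
  if (!ctx.securityApproved) return "pending: security approval";
  return `admin access granted to ${ctx.requester}`;
}
```

A runtime-generated plan can skip a step; a hard-coded gate cannot be skipped without changing the published code, which is itself a reviewed, logged event.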

When a workflow runs, every step is logged: what ran, what the inputs and outputs were, who approved it, how long approval took, and when each step completed. That log is available for export and is ready for audit queries like "show me every access provisioning event from the last 90 days, with approvers and timestamps." You can build that export as a scheduled workflow that runs every Monday and posts to your IT ops Slack channel automatically.
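The 90-day audit query above amounts to a filter over structured run logs. The record shape here is a hypothetical sketch of that idea, not Serval's export format:

```typescript
// Hypothetical sketch: an audit query over structured workflow run logs.
interface RunLogEntry {
  workflow: string;
  approver: string;
  completedAt: Date; // step completion timestamp
}

// "Show me every access provisioning event from the last 90 days,
// with approvers and timestamps."
function accessEventsSince(log: RunLogEntry[], days: number, now: Date): RunLogEntry[] {
  const cutoff = new Date(now.getTime() - days * 24 * 60 * 60 * 1000);
  return log.filter(
    (e) => e.workflow.startsWith("access_") && e.completedAt >= cutoff
  );
}
```

Because the log is structured per step rather than reconstructed from ticket notes, the same query answers an auditor in minutes instead of days.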

What is the difference between full automation and deflection, and why does it matter?

Deflection is a call-center concept that migrated into ITSM. A ticket that gets routed to a knowledge base article and then closed by the employee is a deflected ticket. It avoided a technician, but not because the system resolved the problem. The employee resolved it, often after several failed searches.

Full automation means the workflow completed the action and the ticket closed without IT ever touching it. The employee asked. The AI matched the request to a workflow. The workflow ran. The employee got a confirmation. IT saw zero of it.

That distinction matters when you're reporting to a VP of IT or a CFO. "We deflect 60% of tickets" means 60% of employees were redirected. "We fully automate 60% of tickets" means 60% of requests were resolved without any IT action. Those two numbers can come from the same system and look identical on a dashboard if the vendor defines the metric loosely.

Serval measures full automation rate as the primary metric: the percentage of requests resolved without any IT or service desk member ever touching the ticket. Approvals may require human input, but that's a security feature. What doesn't count is a ticket that got routed to a human who then resolved it.
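The gap between the two metrics is easiest to see in code. This hypothetical sketch counts the same tickets two different ways:

```typescript
// Hypothetical sketch: full automation rate vs. deflection rate computed
// from the same ticket data, using different definitions of "handled."
interface Ticket {
  resolvedByWorkflow: boolean; // closed by a workflow run, no IT touch
  deflectedToKb: boolean;      // employee redirected to a KB article
}

function fullAutomationRate(tickets: Ticket[]): number {
  const auto = tickets.filter((t) => t.resolvedByWorkflow).length;
  return auto / tickets.length;
}

function deflectionRate(tickets: Ticket[]): number {
  // A loose vendor definition counts KB redirects as "handled" too.
  const counted = tickets.filter((t) => t.resolvedByWorkflow || t.deflectedToKb).length;
  return counted / tickets.length;
}
```

On a batch where one ticket in four is resolved by a workflow and another is redirected to an article, the deflection rate reads twice as high as the full automation rate, which is exactly why the metric's definition is worth pinning down before comparing vendors.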

What controls who can build and publish workflows?

The employee-facing layer and the workflow-building layer are completely separate in Serval. Employees interact with the Help Desk Agent through Slack, Teams, email, phone, or a web portal. IT teams build, configure, and publish workflows through a separate interface with its own role-based access controls.

Workflow execution scope is a per-workflow setting. Workflows marked as available to anyone in the organization are discoverable by the Help Desk Agent when handling employee requests. Workflows marked as team-only are restricted to team members and don't surface to the general employee population.

At the team level, roles determine who can do what. Only Builders and above can create and edit workflows. Only Managers can configure integrations and view audit logs. A standard IT agent working the help desk queue can't touch the workflow builder or modify what the automation does.

When an integration is connected, the API scope is set at connection time. That scope is the ceiling of what Serval can ever access for any workflow running on that team. Individual workflows can't exceed it, regardless of what a prompt asks for. And each team is an isolated environment: a workflow built for IT Security isn't visible to IT Support unless it's explicitly shared.
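The scope ceiling reduces to a subset check: a workflow can run only if every scope it needs was granted at connection time. A minimal sketch, with hypothetical scope names:

```typescript
// Hypothetical sketch: connection-time API scopes are a hard ceiling.
// A workflow's required scopes can never exceed what the integration
// was granted when it was connected, regardless of any prompt.
function canRun(connectionScopes: Set<string>, workflowScopes: string[]): boolean {
  return workflowScopes.every((s) => connectionScopes.has(s));
}
```

The check lives in the platform, not in the model, so a cleverly phrased request can't widen what the workflow is allowed to touch.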

This is the security architecture that makes enterprise deployment workable. The permission controls aren't advisory. They're enforced at the platform level.

How does a workflow actually get built and shipped to production?

Serval's Automation Agent builds workflows from a plain-language description. You describe what you want, the agent generates TypeScript and a visual step diagram showing exactly what the workflow does, and you review both before publishing. You can edit the generated code directly, add approval steps through the builder interface, or refine the workflow through follow-up prompts without touching code.

Before a workflow goes to production, you test it against real integrations. When it passes, you publish it. From that point, the Help Desk Agent can trigger it automatically when it recognizes a matching employee request, or team members can run it manually from the workflow catalog.

You don't have to build every workflow from scratch. Serval includes installable workflows for common request types: password resets, software access requests, onboarding and offboarding tasks, and more. These are starting points you can customize, not black-box automations with hidden logic.

Serval Suggestions accelerates the building process further. As your team closes tickets manually, the system identifies patterns in those resolutions and proposes fully built workflows based on real ticket data from your environment. You review the suggestion and publish it with a single click. The proposals aren't generic templates. They're grounded in what your team is actually doing repeatedly.

The Insights Agent surfaces which request types are highest volume and which are already automatable so IT can prioritize what to build next. Ticket data drives workflow decisions, workflow execution reduces ticket volume, and analytics show what's left.

What does an IT automation platform need to get right before you can trust it in production?

Any AI service desk can claim to automate requests. The question worth asking before you deploy one is: what exactly is the AI doing at the moment it takes action on a production system?

If the answer involves the model generating API calls or steps in real time, execution is probabilistic. The system may work most of the time. It may be fast and often correct. But "most of the time" is not an answer you can give to an auditor, and it's not a risk profile most IT teams can accept for actions like provisioning access, modifying user accounts, or triggering offboarding flows.

If the answer is that the AI matches the request to a pre-built workflow and triggers it, execution is deterministic. The same request produces the same action. Every action is logged with full step-level detail. Every approval chain is enforced. Every workflow can be reviewed, version-diffed, and handed to a security team.

There are five things a foundation model alone doesn't solve when you're automating IT operations at scale: reliable execution across systems when things fail mid-run; enforcement of approval chains; a complete audit trail per run; API scope controls per team; and role-based permissions on who can build and publish automations in the first place. Those are infrastructure problems, not AI problems. They require the platform built around the model, not just the model.
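The first item on that list, surviving mid-run failure, is ordinary infrastructure code rather than model behavior. A minimal sketch of one piece of it, retrying a flaky step:

```typescript
// Hypothetical sketch: retrying a transient step failure so a multi-system
// run can recover instead of half-completing. Real platforms layer on
// idempotency and step checkpointing; this shows only the retry core.
function withRetry<T>(step: () => T, attempts: number): T {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return step();
    } catch (err) {
      lastErr = err; // record the failure, then re-run the same fixed step
    }
  }
  throw lastErr;
}
```

Note what makes this safe to retry at all: the step is the same deterministic code every attempt. Retrying a plan the model regenerates each time would be retrying a different action.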

The architectural decision that separates automation IT teams can govern from automation they can only observe after the fact is whether the AI is the executor or the router. In a trustworthy service desk automation platform, the AI understands requests and picks workflows. Deterministic code does the work.

What will you build?

Book a demo
