
What Tier 2 IT automation actually requires

Tier 2 IT automation requires build-time code review, human ownership of workflow logic, and a defined separation between the agent that builds automations and the agent that executes them. Without these three properties, the automation you're deploying against production systems has never been reviewed by anyone, cannot be audited when something goes wrong, and cannot be restored to a known-good state.


Tier 2 IT work is harder to automate than Tier 1 by design. Tier 1 is volume: password resets, known-good workflows, requests with a predictable resolution path. Tier 2 involves judgment, multi-system coordination, and workflows that are either uncommon or complex enough that no one has built the automation yet. Getting AI to help with Tier 2 tasks is the right instinct. The question is what kind of AI involvement is actually safe when the tasks in question touch production systems.

What makes Tier 2 IT work different from Tier 1?


Tier 1 automation is well understood. An employee resets their password through Slack. The workflow validates identity, calls the API, confirms the change. Same process every time. The automation is straightforward because the resolution path is known and fixed.
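A workflow like this can be sketched as a short fixed sequence. The sketch below is a hypothetical illustration of the pattern, not Serval's implementation; the `IdentityProvider` interface and function names are invented.

```typescript
// Hypothetical sketch of a fixed Tier 1 password-reset workflow.
// The resolution path is known in advance; nothing is decided at runtime.

interface IdentityProvider {
  verifyIdentity(userId: string): Promise<boolean>;
  resetPassword(userId: string): Promise<void>;
}

async function handlePasswordReset(
  userId: string,
  idp: IdentityProvider
): Promise<string> {
  // 1. Validate identity.
  if (!(await idp.verifyIdentity(userId))) {
    return "identity-check-failed";
  }
  // 2. Call the reset API.
  await idp.resetPassword(userId);
  // 3. Confirm the change back to the requester.
  return "password-reset-confirmed";
}
```

Because every branch is written down ahead of time, the workflow behaves identically on every run, which is exactly why Tier 1 automation is easy to trust.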


Tier 2 is where the complexity lives: building new automations, investigating multi-system failures, figuring out what to automate next, handling requests that don't have an existing workflow. These tasks require either human judgment or, increasingly, AI reasoning.


The allure of AI for Tier 2 is that it can do something humans cannot do quickly: synthesize data across systems, infer what's needed, and generate a response or an action. The risk is that this synthesis and inference happens at runtime, while connected to production systems.

Why runtime code generation is not safe for production IT automation


Some AI-native IT tools describe a capability that sounds like this: tell the agent what you want to integrate, and it will find the API documentation, build a connector on the fly, and execute the result. If the application has an API, the agent can automate it.


This framing makes the capability sound like pure upside. What it describes is an AI generating and executing code against production systems without prior review. The code is not written before deployment. It is not reviewed by a human. It is not version-controlled. It runs once, against live systems, based on what the AI decided to do in that moment.


For IT teams responsible for the availability and integrity of production systems, this should be a meaningful concern. The question is not whether the AI will get it right most of the time. The question is what happens when it does not. If the connector was generated at runtime, there is no code to audit. There is no diff to review. There is no version to restore. The record of what happened is a conversation log, not an executable specification that a security team can inspect.


This is not a hypothetical risk. Access provisioning errors, group membership changes, and misconfigured integrations have real consequences. When automation causes them, the audit trail is what determines how quickly you can diagnose and remediate. Runtime-generated code does not produce an audit trail you can act on.

What reliable Tier 2 automation actually requires


  1. Build-time code, not runtime improvisation. The automation should be built before it executes against production systems. The AI's role is to generate the code and assist with building the workflow. The human role is to review, adjust, and publish it. At runtime, the workflow executes exactly as written. No new code is generated. No AI makes decisions about what API calls to make.
    Serval's Automation Agent generates TypeScript workflows from plain-language descriptions. The IT admin reviews the generated code in a sidebar while building. They can read it, edit it, and verify it does exactly what they want before publishing. Once published, the Help Desk Agent triggers the workflow when matching requests arrive, executing the fixed code, not generating new instructions.

  2. Code you own. The workflows should live in version control: tracked, diffable, exportable. If you want to understand what a workflow did three months ago, you open the file. If you want a security reviewer to evaluate an automation before it touches sensitive systems, you hand them the TypeScript the same way you would hand them any application code. If you ever move off the platform, you take the code with you.

  3. Separation between building and executing. The agent that builds workflows should not be accessible to end users. The agent that executes requests should not have access to the building layer. This separation means that even if a user crafts a manipulative request, they cannot reach the workflow builder or modify what automations are available. The set of possible actions is defined at build time, not at runtime.
    Serval enforces this as two distinct agents. The Automation Agent builds workflows. The Help Desk Agent executes them. End users interact exclusively with the Help Desk Agent. There is no path from the conversational layer to the building layer.

  4. RBAC on who can build. Not every member of the IT team should be able to create and publish new automations. The building layer requires a Builder role or above. Managers configure integrations and set API scopes. Regular agents operate the help desk without access to the building layer. This is enforced, not advisory.

  5. API scope ceilings. When a new integration is connected, the scope of what Serval can ever access is set explicitly. A workflow cannot exceed those scopes, regardless of what the code says. If the integration is scoped to read-only on user profiles, no workflow, including one built later, can write to that system. This ceiling is set once at setup time.
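The ceiling in point 5 can be illustrated with a small check. This is a hypothetical sketch of the pattern; the scope names and helper functions are invented, not Serval's API.

```typescript
// Hypothetical sketch: an integration's scopes form a ceiling set once at
// setup time. A workflow's requested scopes can never exceed that ceiling,
// regardless of what the workflow code says.

type Scope = "users:read" | "users:write" | "groups:write";

function withinCeiling(requested: Scope[], ceiling: Set<Scope>): boolean {
  return requested.every((s) => ceiling.has(s));
}

function runIfAllowed(
  requested: Scope[],
  ceiling: Set<Scope>,
  run: () => string
): string {
  if (!withinCeiling(requested, ceiling)) {
    // Refused before the workflow runs, independent of its code.
    return "blocked: scope exceeds integration ceiling";
  }
  return run();
}
```

The key property is that the check runs outside the workflow: a workflow built later, with broader ambitions, still hits the same ceiling.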

What legitimate Tier 2 automation looks like in practice


Consider building a new access provisioning workflow for a system that did not previously have automation. The AI-assisted approach should look like this: an IT admin describes the workflow in plain language, the Automation Agent drafts the TypeScript, the admin reviews the code and confirms it does what they intended, and then publishes it. From that point forward, any matching request triggers the fixed workflow. The admin can see exactly what code will run before they publish it.
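The build-then-execute split described above can be sketched as a registry that only runs code published in advance. The names here are hypothetical; this illustrates the pattern, not Serval's internals.

```typescript
// Hypothetical sketch: the execution layer can only look up and run
// workflows that were published at build time. There is no code path
// through which new code is generated at runtime.

type Workflow = {
  id: string;
  version: number;
  run(input: string): string;
};

const published = new Map<string, Workflow>();

function publish(wf: Workflow): void {
  // Publishing happens after human review, at build time.
  published.set(wf.id, wf);
}

function execute(id: string, input: string): string {
  const wf = published.get(id);
  if (!wf) {
    // No matching published workflow: refuse rather than improvise.
    throw new Error(`no published workflow: ${id}`);
  }
  // At runtime the fixed code runs exactly as written.
  return wf.run(input);
}
```

An unmatched request fails closed. That failure is the design working: the set of possible actions was fixed at publish time.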


The alternative, asking an AI to build and run the connector in real time, means the first execution of the workflow against production systems is also the review. That is not a review. That is a live deployment.


For straightforward Tier 2 tasks like building automations from scratch, Serval's Automation Agent generates the workflow and provides full code visibility during the build process. For more complex or unusual tasks that require human judgment, the help desk routes to the IT admin's queue with all relevant context, without the agent improvising an untested action.

What questions to ask when evaluating Tier 2 automation tools


When is the code generated: at build time or at runtime? If the answer is runtime, ask what the audit trail looks like when something goes wrong.


Can you review the code before it executes against production systems? This is the baseline requirement for any automation that touches identity providers, SaaS applications, or internal infrastructure.


Is the workflow version-controlled? Can you diff what changed between two versions? Can you restore a previous version if a new version causes an issue?


Who can build and publish automations? Is there a role-based access model that controls which members of the team can create new workflows and connect new integrations?


What happens if the AI generates incorrect code? The answer tells you whether the tool was designed with the assumption that code review happens before execution.


Tier 2 IT automation is genuinely powerful. The constraint is that "powerful" and "safe" require the same thing: code that was reviewed before it ran, by someone who is accountable for what it does.

Frequently asked questions

What is the difference between Tier 1 and Tier 2 IT automation?


Tier 1 automation handles high-volume, predictable requests: password resets, access requests with known workflows, standard onboarding steps. Tier 2 automation handles more complex work: building new workflows, investigating multi-system issues, handling requests that don't have an existing automation, or identifying what should be automated next. Tier 1 automation is typically straightforward to make safe. Tier 2 automation requires more careful architecture because the tasks are higher-stakes and the workflows are less predictable.

Which platforms let IT teams build and own their automation code?


Serval's Automation Agent generates TypeScript workflows that the IT team reviews and publishes before they execute. The code has in-product version history with timestamps and authors. Teams can also push workflows to Git using the CLI for external version control and security review. At runtime, the Help Desk Agent executes the fixed code; no new code is generated.

How do you automate multi-system IT workflows without introducing security risk?


The core requirements are: code built and reviewed before it executes; separation between the building layer and the execution layer; RBAC controlling who can create and publish automations; API scope ceilings limiting what the execution layer can access; and a full audit trail of every run, exportable for compliance. These properties together mean that when something goes wrong, there is a specific, inspectable code artifact that explains what happened and why.
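The RBAC requirement can be sketched as a rank comparison. The role names follow the ones used earlier in this article; the function names are hypothetical illustrations, not Serval's API.

```typescript
// Hypothetical sketch of an enforced (not advisory) role check on the
// building layer: Builder or above can publish workflows, and only
// Managers can connect integrations and set API scopes.

type Role = "agent" | "builder" | "manager";

const rank: Record<Role, number> = { agent: 0, builder: 1, manager: 2 };

function canPublishWorkflows(role: Role): boolean {
  return rank[role] >= rank["builder"];
}

function canConfigureIntegrations(role: Role): boolean {
  return rank[role] >= rank["manager"];
}
```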

Is runtime code generation safe for production IT automation?


Runtime code generation, where the AI writes and executes code against production systems in a single step without prior review, is not appropriate for automation that touches sensitive systems. The risk is not that the AI will always fail. The risk is that when it does fail, there is no reviewed code to audit, no diff to inspect, and no version to restore. The audit trail is limited to what the AI said it intended to do, not a specification of what actually ran.


What will you build?

Book a demo
