How an AI IT workflow builder actually works
An AI IT workflow builder converts a plain-language description into deterministic code. The AI's job is to write that code. Once you review, test, and publish it, the AI's job is done. At runtime, the published code executes exactly as written, every time, with no AI involvement. That separation between building and executing is what makes the automation safe to trust at scale.
Most explanations of "AI workflow builders" skip that distinction entirely. They describe the input (a natural language prompt) and the output (an automated task), and leave the middle as a black box. But the middle is everything. Whether you can audit what the automation did, whether you can verify what it will do before it runs, and whether you can hand the implementation to your security team for review all depend on what exists at execution time.
What does an AI workflow builder actually produce?
The term "workflow builder" has been stretched to cover a wide range of tools. At one end: drag-and-drop rules engines that require no AI and produce a set of conditional logic blocks. At the other: systems where a language model interprets each request at runtime and decides what to do, with no stable artifact in between.
Most IT teams don't realize this spectrum exists until they ask the audit question: "What exactly ran, and can I see it?"
In Serval, the answer is TypeScript code. The Automation Agent converts your plain-language description into a TypeScript workflow in real time. You can watch it build. When it finishes, you see three things: a visual diagram of each step, the generated TypeScript, and a summary of what the workflow does. The code is readable. A developer can review it. Your security team can review it the same way they review application code.
This is documented directly in Serval's technical FAQ: "TypeScript. Serval generates workflow logic as TypeScript. If you use the Serval CLI to pull workflows locally, you'll see the same in each workflow's `index.ts` file."
That matters for a simple reason. When an auditor asks what happened, you point to the code and the run logs. When a security reviewer asks what access this automation can take, you show them the code. There's no black box to defend.
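To make the "reviewable artifact" claim concrete, here is a sketch of what a workflow's `index.ts` could look like. The integration client shapes (`OktaClient`, `SlackClient`) and method names are illustrative assumptions for this article, not Serval's actual SDK; the point is that every step is an explicit, readable call.

```typescript
// Illustrative sketch of a reviewable workflow artifact.
// Client interfaces below are hypothetical stand-ins, not Serval's SDK.

interface OktaClient {
  resetPassword(userId: string): Promise<{ tempLink: string }>;
}

interface SlackClient {
  sendDm(userId: string, text: string): Promise<void>;
}

interface WorkflowInput {
  userId: string;
  slackUserId: string;
}

// The automation is plain TypeScript: a reviewer can read exactly
// which calls will run, in which order, with which data.
export async function resetPasswordWorkflow(
  input: WorkflowInput,
  okta: OktaClient,
  slack: SlackClient
): Promise<string> {
  const { tempLink } = await okta.resetPassword(input.userId);
  await slack.sendDm(
    input.slackUserId,
    `Your password was reset. Set a new one here: ${tempLink}`
  );
  return tempLink;
}

// Stub clients so the sketch runs without real integrations.
const calls: string[] = [];
const okta: OktaClient = {
  async resetPassword(userId) {
    calls.push(`okta.resetPassword:${userId}`);
    return { tempLink: "https://example.okta.com/reset/abc" };
  },
};
const slack: SlackClient = {
  async sendDm(userId, _text) {
    calls.push(`slack.sendDm:${userId}`);
  },
};

resetPasswordWorkflow({ userId: "u1", slackUserId: "s1" }, okta, slack)
  .then((link) => console.log(calls.join(" -> "), link));
```

Because the artifact is ordinary code, a security reviewer can trace the data flow the same way they would in any pull request.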
How does the Automation Agent convert plain language to TypeScript?
You start by describing the automation in whatever terms make sense:
"Reset a user's password in Okta and send them a Slack DM with instructions"
"Every Monday at 9am, list open tickets older than 7 days and post to #it-ops"
"When a user is added to the Engineering group in Okta, add them to the engineering GitHub team"
The Automation Agent interprets the description, asks follow-up questions if it needs clarification, and generates the TypeScript live. You don't fill out a form. You don't configure fields. You describe what you want, and the builder constructs the code.
After the initial build, you can refine conversationally: "Add an approval step before resetting the password." "Also send a confirmation email to the user's manager." The Automation Agent updates the code accordingly. Each change is visible. Nothing is locked. The code is yours to read, edit, and reject.
You can also edit the TypeScript directly in the builder, or pull it locally via the Serval CLI and work in your own editor. That's optional. Most IT admins who aren't developers never open the code. The natural language iteration loop is enough to get where they need to go.
Who can build workflows, and what stops the wrong person from publishing?
This is where the permission model matters.
Serval uses a five-level team role system: Agent, Viewer, Contributor, Builder, and Manager. Only users at the Builder level and above can create and edit custom workflows. Managers can additionally configure integrations and view audit logs. Regular IT agents can run workflows, but they can't touch the building layer.
That's not advisory. It's enforced. A user with Agent-level access can trigger a published password reset workflow. They cannot open the workflow builder, modify the code, or publish a new version.
The result: building workflows is scoped to IT admins with explicit permission. Running workflows is scoped to whoever the published workflow is configured for. Those are two separate gates.
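The two gates can be sketched as a simple role check. The role names match the article; the ascending order and the enforcement logic here are illustrative assumptions, not Serval's implementation.

```typescript
// Minimal sketch of the five-level role gate described above,
// assuming the listed order (Agent -> Manager) is ascending.

const ROLE_ORDER = ["Agent", "Viewer", "Contributor", "Builder", "Manager"] as const;
type Role = (typeof ROLE_ORDER)[number];

function atLeast(role: Role, required: Role): boolean {
  return ROLE_ORDER.indexOf(role) >= ROLE_ORDER.indexOf(required);
}

// Gate 1: building is scoped to Builder level and above.
export function canEditWorkflows(role: Role): boolean {
  return atLeast(role, "Builder");
}

// Manager adds integration configuration and audit-log access.
export function canConfigureIntegrations(role: Role): boolean {
  return atLeast(role, "Manager");
}

// Gate 2: running is scoped to whoever the published workflow allows,
// independent of the building gate.
export function canRunWorkflow(role: Role, allowedRoles: Role[]): boolean {
  return allowedRoles.includes(role);
}
```

Note that an Agent can satisfy the run gate without ever satisfying the build gate; the two checks never collapse into one.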
The Automation Agent that builds is separated from the Help Desk Agent that runs. End users interact only with the Help Desk Agent. There's no path from the employee-facing layer back to the building layer.
What happens before a workflow goes live?
Publishing is the gate. Nothing runs until you publish, and before publishing you complete three mandatory steps.
Configure approvals. You specify who must approve workflow runs before execution. Options include individual users, groups synced from your identity provider, the requester's manager, sequential multi-step chains, or a custom approval workflow that auto-approves or auto-denies based on business logic. A workflow requiring manager approval for an Okta password reset will always require manager approval. The Help Desk Agent can't bypass it.
Configure execution scope. Workflows can be restricted to team members only or made available to anyone in the organization. A Security team workflow doesn't surface to IT Support unless it's explicitly configured to. Execution scope is set at build time and respected at runtime.
Test against your real stack. The test panel shows each step's execution status, the input and output data at each step, and any errors with details. You're testing against real integrations, not a sandbox environment.
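The three gates can be modeled as data the publish step validates. The field names and the `ApprovalPolicy` shapes below are assumptions for illustration, not Serval's schema; they mirror the approval options and scopes described above.

```typescript
// Sketch of the publish gate as data: a workflow can't go live
// without an approval policy, an execution scope, and a passing test.
// Field names are illustrative assumptions, not Serval's schema.

type ApprovalPolicy =
  | { kind: "user"; userId: string }
  | { kind: "group"; groupId: string }
  | { kind: "requesterManager" }
  | { kind: "sequential"; steps: ApprovalPolicy[] }
  | { kind: "custom"; ruleName: string };

interface PublishConfig {
  approval: ApprovalPolicy | null; // who must approve each run
  scope: "team" | "organization";  // who can trigger the workflow
  tested: boolean;                 // passed the test panel
}

export function canPublish(cfg: PublishConfig): boolean {
  // Approval and testing must be explicit; scope is already
  // constrained to one of the two valid values by the type.
  return cfg.approval !== null && cfg.tested;
}
```

A configuration that skips any gate simply never reaches the Help Desk Agent.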
When you publish, the workflow becomes available to the Help Desk Agent immediately. Scheduled workflows activate their schedule. Version history records the publication with your name and a timestamp.
How does the Help Desk Agent trigger a workflow without modifying it?
This is the architecture question that most tools don't answer clearly.
When an employee messages the IT Slack channel asking to reset their password, the Help Desk Agent receives the request, matches it to the right published workflow, and triggers it. The Help Desk Agent doesn't write code. It doesn't adapt the workflow to the specific request. It executes the TypeScript exactly as written.
If the workflow requires manager approval, the Help Desk Agent pauses execution, routes the approval request to the configured approver, and waits. Once approved, it continues. The approval gate is hard-coded into the workflow. It's not a runtime decision.
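A hard-coded approval gate looks something like the sketch below: every execution path to the sensitive action passes through the approval await, so there is no branch the runtime can take around it. The `requestApproval` shape is an assumption for illustration.

```typescript
// Sketch of an approval gate compiled into the workflow itself.
// The Approver interface is a hypothetical stand-in.

type Decision = "approved" | "denied";

interface Approver {
  requestApproval(summary: string): Promise<Decision>;
}

export async function resetWithApproval(
  user: string,
  approver: Approver,
  reset: (user: string) => Promise<void>
): Promise<Decision> {
  // The gate is part of the code, not a runtime judgment call:
  // execution pauses here until the configured approver responds.
  const decision = await approver.requestApproval(
    `Reset password for ${user}?`
  );
  if (decision === "approved") {
    await reset(user);
  }
  return decision;
}
```

Because the gate is in the artifact, a reviewer can verify before publishing that a denial can never reach `reset()`.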
The same code runs for employee 1 and employee 10,000. The Help Desk Agent's job is to match requests to published workflows and trigger them. The Automation Agent's job is to write those workflows. Those two agents don't share a layer.
Serval describes this distinction as deterministic by design: "Once configured, workflows always run the exact code you define, meaning no hallucinations or surprises." That's not a feature description. It's an architectural constraint.
What does the audit trail actually cover?
Every workflow run produces a complete execution log: each step by name, its status, how long it took, the input passed to it, and the output it returned. You can see exactly which API calls the workflow made, what data it operated on, and what it returned at each step.
Workflow configuration changes are versioned separately from run history. Every published version shows the publisher's name and timestamp. You can view a previous version, compare it to the current one, and restore it if needed. Restoring creates a new version in the history rather than overwriting anything.
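The restore-creates-a-new-version behavior amounts to an append-only history. A minimal sketch, with shapes assumed for illustration rather than taken from Serval's storage model:

```typescript
// Sketch of append-only version history: restoring an old version
// appends a copy as the newest entry; nothing is overwritten.

interface Version {
  code: string;
  publishedBy: string;
  publishedAt: Date;
}

export function restore(
  history: Version[],
  index: number,
  by: string
): Version[] {
  const old = history[index];
  // Return a new history with the old code re-published at the tip.
  return [
    ...history,
    { code: old.code, publishedBy: by, publishedAt: new Date() },
  ];
}
```

Append-only semantics are what make the history usable as audit evidence: the record of who published what, and when, survives every restore.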
Run history is exportable as CSV or JSON. Compliance use cases documented in Serval's audit docs include SOC 2 reporting (evidence of access changes with timestamps and approvers), change management (all workflow modifications with version history), and incident response (reviewing workflow activity during security investigations).
You can also build audit workflows directly in the workflow builder. For example: "Create a scheduled workflow that runs every Monday at 9am. List all workflows in the team, get the run history for each workflow from the past 7 days, and post a CSV report to #it-ops with total runs, successful runs, and failed runs." The audit infrastructure is a set of workflows, not a separate dashboard you wait for a vendor to ship.
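The aggregation step of that Monday report could reduce to a small function like this. The `RunRecord` shape and CSV column names are assumptions for illustration:

```typescript
// Sketch of the weekly report aggregation: group run history by
// workflow and count totals, successes, and failures as CSV rows.

interface RunRecord {
  workflow: string;
  status: "success" | "failure";
}

export function summarize(runs: RunRecord[]): string {
  const rows = new Map<string, { total: number; ok: number; failed: number }>();
  for (const r of runs) {
    const row = rows.get(r.workflow) ?? { total: 0, ok: 0, failed: 0 };
    row.total += 1;
    if (r.status === "success") row.ok += 1;
    else row.failed += 1;
    rows.set(r.workflow, row);
  }
  const lines = ["workflow,total_runs,successful,failed"];
  for (const [name, row] of rows) {
    lines.push(`${name},${row.total},${row.ok},${row.failed}`);
  }
  return lines.join("\n");
}
```

The rest of the described workflow is the same pattern as any other: list workflows, fetch run history, post the CSV to Slack.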
How long does it take to build and ship a workflow?
Most workflows ship the same day. The loop is: describe the automation, review the generated TypeScript, configure approvals and execution scope, test, publish. There's no professional-services engagement, no implementation scoping, no sandbox-only pilot phase.
More complex workflows that span multiple systems take longer to test than to build. An offboarding flow that disables an Okta account, removes GitHub access, and posts a summary to Slack is a single natural-language description. The iteration loop is fast: describe the change you want, the Automation Agent updates the TypeScript, test again.
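A multi-system flow like that offboarding example is still a linear sequence of named steps, which is also what makes its run log readable: each step's name and output appear in order. A sketch, with step contents stubbed out for illustration:

```typescript
// Sketch of a multi-system flow as sequential named steps; each
// step's name and output would appear in the run log in order.
// Step bodies are stubs standing in for real integration calls.

type Step = { name: string; run: () => Promise<string> };

export async function runSteps(
  steps: Step[]
): Promise<Array<{ name: string; output: string }>> {
  const log: Array<{ name: string; output: string }> = [];
  for (const step of steps) {
    log.push({ name: step.name, output: await step.run() });
  }
  return log;
}

export function offboardingSteps(user: string): Step[] {
  return [
    { name: "disable-okta", run: async () => `disabled ${user} in Okta` },
    { name: "remove-github", run: async () => `removed ${user} from GitHub org` },
    { name: "post-slack-summary", run: async () => `posted summary for ${user}` },
  ];
}
```

Testing is the slow part precisely because each of those steps touches a real system; the code itself stays this simple.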
Pre-built installable workflows are available for 30+ integrations, including Okta, Google Workspace, GitHub, Slack, AWS, and Jira. You install one, customize inputs, approvals, and execution scope, and publish. The underlying model is identical: TypeScript code that you review and publish before it runs. Installable workflows are a starting point, not a ceiling.
For recurring discovery, the Insights Agent analyzes your help desk patterns and surfaces the highest-volume requests still being handled manually. It suggests which workflows to build next based on your actual ticket history. You don't have to audit your own tickets to know what to automate.
What is the right question to ask any AI workflow tool?
The question isn't whether AI is involved. Every tool in this category involves AI at some point. The question is: what exists at execution time, and can you see it?
A tool that generates and runs logic in the same step doesn't produce a stable artifact. You get a result and a log. The log tells you what happened. It doesn't tell you why the AI made the decision it made, or whether it will make the same decision next time.
A tool that generates TypeScript code at build time produces an artifact that you can read, review, test, and version-control. The AI's decisions are captured in code. The code runs the same way every time. When an auditor asks "what did this automation do, and why," the answer is in the code and the run logs, not in a vendor's support portal.
That distinction determines whether IT leaders can govern automation at scale. A governance question isn't "did this work?" It's "can I prove what it did, to whom, and under what approval?" The execution model either supports that answer or it doesn't.
Serval's Automation Agent writes TypeScript. The Help Desk Agent triggers it. Publishing is the gate between them. That's the architecture. Everything else follows from it.