Assembly Line Protocol
2026-03-30 · Draft
An open protocol for AI Agent Assembly Lines
ALP defines how work flows through a sequence of AI-powered Stations and the contracts between Server, Runner, Operator, and Agent. Inspired by GitHub Actions self-hosted runners and the Model Context Protocol, it standardises the plumbing so you can focus on what your agents actually do.
Server
Manages Assembly Lines, the task queue, and the Runner registry. The authoritative source of truth.
Runner
Polls for jobs, claims tasks, delegates execution to a Station Operator, and reports results back.
Operator
Spawned per job to set up the workspace, run the Agent, stream output, and detect completion.
Agent
The AI that does the actual work at each Station. Reads the prompt, does the work, exits with a result.
Assembly Line Protocol (ALP)
A protocol for running AI agent pipelines at scale — inspired by GitHub Actions self-hosted runners and the Model Context Protocol (MCP).
What Is ALP?
The Assembly Line Protocol (ALP) is an open protocol that describes how AI tasks flow through a sequence of automated Stations, each powered by an AI Agent. It defines the contracts between four roles so that anyone can build a compatible Server or Runner:
- Server — manages Assembly Lines, the task queue, and the Runner registry
- Runner — polls the Server for jobs, delegates execution to a Station Operator, and reports results back
- Operator — spawned by the Runner to manage a single Station's execution (sets up the workspace, runs the Agent, streams output)
- Agent — the AI that does the actual work at each Station (e.g. Claude Code)
The key inspiration is GitHub self-hosted runners: just as any machine can register as a runner and execute GitHub Actions workflows, any ALP Runner can register with a compliant ALP Server and execute Stations. You can run your own Runner against the agentics.dk Server, or build your own ALP Server that pks-cli can connect to.
Why ALP Exists
Today, agentics.dk runs Assembly Lines internally. ALP exists so that:
- Anyone can build a Runner — connect your own runner/operator/agent stack to the agentics.dk server or any other ALP-compliant server
- Anyone can build a Server — implement the ALP server API and route tasks to any Runner that speaks the protocol
- The ecosystem grows — just as MCP unlocked a marketplace of AI tools, ALP should unlock a marketplace of AI production pipelines
The reference implementations are:
- Server: agentics.dk — closed source
- Runner: pks-cli — open source, C#
- Operator: vibecast — open source, Go
Who Should Read This
| You want to... | Read... |
|---|---|
| Understand the big picture | spec/01-overview.md |
| Learn the terminology | spec/02-concepts.md |
| Understand Assembly Lines | spec/03-assembly-lines.md |
| Understand Stations | spec/04-stations.md |
| Understand Tasks | spec/05-tasks.md |
| Understand flow control | spec/06-transition-rules.md |
| Build a Server | spec/07-server.md |
| Build a Runner | spec/08-runner.md |
| Build an Operator | spec/09-operator.md |
| Understand Agents | spec/10-agent.md |
| See a real-world example | examples/vibecheck.md |
| Start minimal | examples/hello-world.md |
Spec Status
This specification is an active draft. Open questions are tracked in todo.md; sections with remaining open questions are marked > ❓.