Build a Unified TypeScript Action Pipeline for Better DX

We've all stared at our backend orchestrator firing off 50 uncoordinated API calls for no reason while downing coffee, right? Every third-party service you integrate has a completely different syntax. One uses a webhook, another requires polling, and a third needs a deeply nested payload that makes your eyes cross.
Before you know it, your clean Node.js backend looks like a bowl of spaghetti, and your Developer Experience (DX) has plummeted to zero.
Shall we solve this beautifully together? ✨
Today, we are going to build a TypeScript action pipeline. Inspired by lightweight registration frameworks, we will create an embeddable, opinionated infrastructure layer that standardizes how every function, API call, and task is registered and executed.
Instead of focusing purely on execution speed, we are going to focus on how much earlier this lets us go home. We want auto-generated types, built-in validation, and a pipeline runner that feels like magic to use.
The Mental Model
Imagine your API integrations as a bustling, chaotic kitchen. Right now, every chef (function) speaks a different language, uses different measuring cups, and throws ingredients into the pan at random times. It's stressful to watch, and even harder to manage.
Now, visualize a standardized shipping container system. Every piece of data that enters your system is placed into an identical, clearly labeled box (our Schema). It travels along a predictable conveyor belt (our Pipeline Runner). If a box is the wrong size, it gets rejected at the loading dock (Input Validation), long before it clogs up your internal systems.
By building a unified TypeScript action pipeline, we enforce a strict contract. Every action will automatically validate its inputs, handle retries, and provide perfect TypeScript autocomplete for the next developer who touches the code.
Prerequisites
Before we start writing our pipeline, ensure you have the following ready:
- Node.js 18+ installed on your machine.
- A TypeScript project initialized (`tsc --init`).
- Zod installed (`npm install zod`). We will use Zod for our runtime schema validation because its TypeScript inference is best-in-class.
Let's get our hands dirty and build this!
Step 1: Defining the Standard Action Interface
The first step to restoring order is defining what an "Action" actually is. We need a structure that holds the execution logic, the expected input schema, and metadata for logging.
Create a file named pipeline.ts and add the following core types:
```typescript
import { z } from "zod";

// 1. Define the blueprint for every action in our system
export interface ActionDef<TInput extends z.ZodTypeAny, TOutput> {
  name: string;
  description: string;
  schema: TInput;
  execute: (input: z.infer<TInput>) => Promise<TOutput>;
  retries?: number;
}

// 2. A wrapper class to hold the instantiated action
export class PipelineAction<TInput extends z.ZodTypeAny, TOutput> {
  constructor(public readonly def: ActionDef<TInput, TOutput>) {}

  async run(input: unknown): Promise<TOutput> {
    // Runtime validation happens here, not in the business logic!
    const parsedInput = this.def.schema.parse(input);
    return this.def.execute(parsedInput);
  }
}
```
Why this code is better:
Notice how we decoupled the validation from the execution. In a typical messy codebase, the first 15 lines of every function are just `if (!input.id) throw new Error(...)` checks. By moving validation into the `PipelineAction.run` method via Zod, our actual business logic (`execute`) stays incredibly lean and focused purely on the task at hand.
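One loose end: `ActionDef` declares an optional `retries` field, but the `run` method shown above never consults it. Here is a minimal sketch of a standalone helper that `run` could delegate to — the fixed 100 ms backoff is my assumption, not part of the original design:

```typescript
// Sketch: a retry helper that PipelineAction.run could wrap around execute().
// `retries` is the number of re-attempts allowed after the first failure.
async function withRetries<T>(
  fn: () => Promise<T>,
  retries: number,
  delayMs = 100 // assumed fixed backoff; tune (or make exponential) for real APIs
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < retries) {
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError;
}
```

Inside `run`, the last line would then become something like `return withRetries(() => this.def.execute(parsedInput), this.def.retries ?? 0)`.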
Step 2: The Registration Factory
Now we need a DX-friendly way to create these actions. We don't want developers manually instantiating classes everywhere. We want a beautiful, inferred factory function.
Add this to your pipeline.ts:
```typescript
export function createAction<TInput extends z.ZodTypeAny, TOutput>(
  options: ActionDef<TInput, TOutput>
): PipelineAction<TInput, TOutput> {
  return new PipelineAction(options);
}
```
Let's see how a developer actually uses this in practice. Imagine we are integrating a weather API.
```typescript
const getWeather = createAction({
  name: "get_weather",
  description: "Fetches the current weather for a city",
  schema: z.object({
    city: z.string().min(1),
    units: z.enum(["metric", "imperial"]).default("metric"),
  }),
  execute: async ({ city, units }) => {
    // ✨ DX Magic: 'city' and 'units' are perfectly typed here!
    const res = await fetch(
      `https://api.weather.com/v1?q=${city}&units=${units}`
    );
    return res.json();
  },
});
```
Why this code is better:
Look at the `execute` function. Because we passed a Zod schema into the `schema` property, TypeScript's inference engine automatically knows that `city` is a string and `units` is either `"metric"` or `"imperial"`. You get full autocomplete without having to write a separate TypeScript interface. You write the schema once, and you get both runtime validation and compile-time types. That is the essence of a great Developer Experience.
Step 3: The Pipeline Runner
Individual actions are great, but the real power comes from chaining them together. We need a runner that takes an initial input, passes it to the first action, and feeds the output to the next.
Let's build a lightweight pipeline runner:
```typescript
export class PipelineRunner {
  private steps: PipelineAction<any, any>[] = [];

  pipe<TInput extends z.ZodTypeAny, TOutput>(
    action: PipelineAction<TInput, TOutput>
  ) {
    this.steps.push(action);
    return this; // Enable fluent chaining
  }

  async execute(initialPayload: unknown) {
    let currentPayload = initialPayload;
    for (const [index, step] of this.steps.entries()) {
      console.log(`[Pipeline] Executing step ${index + 1}: ${step.def.name}`);
      try {
        // The output of the current step becomes the input of the next
        currentPayload = await step.run(currentPayload);
      } catch (error) {
        console.error(`[Pipeline] Failed at step ${step.def.name}:`, error);
        throw error;
      }
    }
    return currentPayload;
  }
}
```
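The runner above accepts any action in `pipe()`, so a mismatch between one step's output and the next step's input only surfaces at runtime via Zod. If you also want compile-time guarantees, the chaining can be threaded through generics. Here is a minimal sketch of the idea — `TypedPipeline` and its method names are my own, not part of the article's API:

```typescript
// Sketch: each pipe() call returns a new pipeline whose output type
// becomes the required input type of the next step.
class TypedPipeline<TIn, TOut> {
  private constructor(private readonly runAll: (input: TIn) => Promise<TOut>) {}

  // Start an identity pipeline for a given input type
  static start<T>(): TypedPipeline<T, T> {
    return new TypedPipeline(async (x) => x);
  }

  // The next step MUST accept the previous step's output type
  pipe<TNext>(fn: (input: TOut) => Promise<TNext>): TypedPipeline<TIn, TNext> {
    return new TypedPipeline(async (input) => fn(await this.runAll(input)));
  }

  execute(input: TIn): Promise<TOut> {
    return this.runAll(input);
  }
}
```

With this shape, piping a step whose input type doesn't match the previous output is a compile error rather than a runtime `ZodError`.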
Performance vs DX
When architects evaluate a pattern like this, they usually weigh the overhead of the abstraction against the benefits. Let's break it down comprehensively.
Developer Experience (DX)
From a DX perspective, this pattern is a massive win.
- Self-Documenting: New team members don't need to guess what an API integration requires. The Zod schema serves as living, executable documentation.
- Fail-Fast Mechanism: Because validation happens at the boundary of every action, you never end up in a situation where a malformed string causes a database crash 4 layers deep into your application.
- Reduced Boilerplate: You no longer need to write `try/catch` blocks and validation logic in every single service file. The pipeline handles it.
Backend Performance
Does adding a class wrapper and runtime validation slow things down? Technically, yes: Zod parsing adds a small per-call cost (typically microseconds, or low milliseconds for very large payloads). However, the macro-level performance gains vastly outweigh this micro-level overhead:
- Memory Optimization: By rejecting malformed requests before they reach complex business logic, you save significant CPU cycles and memory allocations.
- Predictable Execution: The pipeline structure allows you to easily implement a caching layer between steps. If Step 1 (fetching heavy data) was already computed for a specific input, the pipeline runner can return the cached result instantly.
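The caching idea above can be sketched as a small wrapper around any step's async function. This is a minimal in-memory sketch, assuming inputs are small and JSON-serializable; a production version would want LRU/TTL eviction and a key function that is stable across object-key ordering:

```typescript
// Sketch: memoize an async step by its JSON-serialized input.
function withCache<TIn, TOut>(
  fn: (input: TIn) => Promise<TOut>
): (input: TIn) => Promise<TOut> {
  const cache = new Map<string, Promise<TOut>>();
  return (input: TIn) => {
    const key = JSON.stringify(input);
    const hit = cache.get(key);
    if (hit) return hit;
    // Cache the promise itself so concurrent calls for the same
    // input share one in-flight request. (Note: in this sketch a
    // rejected promise stays cached — evict on failure in real code.)
    const result = fn(input);
    cache.set(key, result);
    return result;
  };
}
```

A pipeline runner could apply `withCache` to expensive fetch-style steps so repeat executions with the same payload return instantly.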
Verification
Let's confirm our pipeline works by chaining two actions together. We'll fetch user data, and then format it into a greeting.
```typescript
// Action 1: Fetch User
const fetchUser = createAction({
  name: "fetch_user",
  description: "Gets user by ID",
  schema: z.object({ userId: z.number() }),
  execute: async ({ userId }) => {
    // Simulating a database call
    return { id: userId, name: "Chloe", role: "Architect" };
  },
});

// Action 2: Format Greeting
const formatGreeting = createAction({
  name: "format_greeting",
  description: "Creates a welcome message",
  // Notice how the input schema matches the output of fetchUser!
  schema: z.object({
    id: z.number(),
    name: z.string(),
    role: z.string(),
  }),
  execute: async (user) => {
    return `Welcome aboard, ${user.name} the ${user.role}!`;
  },
});

// Run the pipeline
async function testPipeline() {
  const pipeline = new PipelineRunner()
    .pipe(fetchUser)
    .pipe(formatGreeting);

  const result = await pipeline.execute({ userId: 1 });
  console.log(result); // "Welcome aboard, Chloe the Architect!"
}

testPipeline();
```
If you run this code, you should see the beautifully formatted greeting in your console, along with the automated execution logs from our runner.
Troubleshooting
If you are implementing this in your own codebase, you might run into a few common pitfalls:
- TypeScript Inference Errors: If `z.infer` is returning `any`, ensure you have `"strict": true` enabled in your `tsconfig.json`. TypeScript needs strict mode to properly infer complex generics.
- Pipeline Type Mismatches: Currently, our `PipelineRunner.pipe()` method accepts any action. If you want strict enforcement where Action A's output must match Action B's input, you would need to implement advanced generic chaining on the `PipelineRunner` class. For most teams, runtime Zod validation is sufficient and much easier to read.
- Async Timeouts: If an action hangs indefinitely (like a bad API call), the whole pipeline stalls. Consider adding a `Promise.race()` timeout wrapper inside the `PipelineAction.run` method to enforce strict execution limits.
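The timeout idea from the last bullet can be sketched as a small helper that `run` would wrap around `execute`. One caveat worth labeling: `Promise.race` rejects the caller after the deadline but cannot abort the underlying work — pair it with `AbortController` if you need true cancellation:

```typescript
// Sketch: enforce a per-action execution limit with Promise.race.
function withTimeout<T>(
  promise: Promise<T>,
  ms: number,
  label: string
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`[Pipeline] Action "${label}" timed out after ${ms}ms`)),
      ms
    );
  });
  // Whichever settles first wins; always clear the timer afterwards
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

Inside `run`, the call would look like `withTimeout(this.def.execute(parsedInput), 5000, this.def.name)` — the 5-second limit is an arbitrary example value.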
What You Built
You just transformed a messy, unpredictable web of API calls into a highly structured, strongly-typed TypeScript action pipeline. You defined a standard interface, built a DX-friendly registration factory, and orchestrated it all with a sequential runner.
Your components and services are way leaner now, and your fellow developers will thank you for the autocomplete. Happy Coding! 🚀
FAQ
Can I use this with asynchronous event-driven architectures?
Yes! While our runner is sequential, you can modify the `PipelineRunner` to publish the output of an action to a message broker (like RabbitMQ or Kafka) instead of immediately passing it to the next function.
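As a rough illustration of that shape, here is a sketch using Node's built-in `EventEmitter` as an in-process stand-in for a real broker — the topic name `user.fetched` and the payload shape are my own examples, not part of the article's API:

```typescript
import { EventEmitter } from "node:events";

// Sketch: steps communicate via published events instead of direct calls.
const broker = new EventEmitter();

// Subscriber: the "format_greeting" step consumes the "fetch_user" output
broker.on("user.fetched", (user: { name: string; role: string }) => {
  console.log(`Welcome aboard, ${user.name} the ${user.role}!`);
});

// Publisher: an action's run() would end with a publish instead of a return
broker.emit("user.fetched", { id: 1, name: "Chloe", role: "Architect" });
// prints "Welcome aboard, Chloe the Architect!"
```

Swapping `EventEmitter` for a RabbitMQ channel or Kafka producer keeps the same mental model while letting steps run on different machines.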