Create agents to accomplish specific tasks with tools inside a network.
Agents are defined with a name, a system prompt, and a model. All configuration options are detailed in the createAgent reference.

Here is a simple agent created using the createAgent function:
import { createAgent, openai } from '@inngest/agent-kit';

const codeWriterAgent = createAgent({
  name: 'Code writer',
  system:
    'You are an expert TypeScript programmer. Given a set of asks, you think step-by-step to plan clean, ' +
    'idiomatic TypeScript code, with comments and tests as necessary. ' +
    'Do not respond with anything else other than the following XML tags: ' +
    '- If you would like to write code, add all code within the following tags (replace $filename and $contents appropriately): ' +
    "  <file name='$filename.ts'>$contents</file>",
  model: openai('gpt-4o-mini'),
});
While system prompts can be static strings, they are more powerful when defined as dynamic system prompts: callbacks that can add additional context at runtime.

To run an agent, call run() with a user prompt. This performs an inference call to the model with the system prompt as the first message and the input as the user message.
const { output } = await codeWriterAgent.run(
  'Write a typescript function that removes unnecessary whitespace',
);

console.log(output);
// [{ role: 'assistant', content: 'function removeUnnecessaryWhitespace(...' }]
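Because the system prompt above instructs the model to respond only with <file> tags, a typical next step is to parse those tags out of the assistant message. Below is a minimal regex-based sketch; the parseFiles helper is hypothetical and not part of AgentKit:

```typescript
// Hypothetical helper: extracts the <file name='...'>...</file> tags that
// the Code writer agent's system prompt asks the model to emit.
function parseFiles(content: string): Record<string, string> {
  const files: Record<string, string> = {};
  const re = /<file name='([^']+)'>([\s\S]*?)<\/file>/g;
  for (const match of content.matchAll(re)) {
    files[match[1]] = match[2];
  }
  return files;
}

const files = parseFiles(
  "<file name='trim.ts'>function removeWhitespace() {}</file>",
);
console.log(Object.keys(files)); // one key: 'trim.ts'
```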
When using Agents within Networks, a description is required. Learn more about using Agents in Networks here.

When an Agent runs (i.e. when calling run()), Tools are included in calls to the language model through features like OpenAI’s “function calling” or Claude’s “tool use.”
Tools are defined using the createTool
function and are passed to agents via the tools
parameter:
import { createAgent, createTool, openai } from '@inngest/agent-kit';
import { z } from 'zod';

const listChargesTool = createTool({
  name: 'list_charges',
  description:
    "Returns all of a user's charges. Call this whenever you need to find one or more charges between a date range.",
  parameters: z.array(
    z.object({
      userId: z.string(),
    }),
  ),
  handler: async (output, { network, agent, step }) => {
    // output is strongly typed to match the parameter type.
  },
});

const supportAgent = createAgent({
  name: 'Customer support specialist',
  system: 'You are a customer support specialist...',
  model: openai('gpt-3.5-turbo'),
  tools: [listChargesTool],
});
When run() is called, any tool that the model decides to call is immediately executed before the output is returned. Read the “How Agents work” section for additional information.
Learn more about Tools in this guide.
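A tool's handler receives the model's arguments as typed output; what it does with them is ordinary application code. As a standalone sketch of the filtering such a handler might perform, where the Charge shape and filterCharges helper are hypothetical and not part of AgentKit:

```typescript
// Hypothetical data shape and helper illustrating what a list_charges
// handler body might do; neither is part of AgentKit.
type Charge = { userId: string; amount: number; created: string };

function filterCharges(
  charges: Charge[],
  userId: string,
  from: string,
  to: string,
): Charge[] {
  // ISO date strings compare correctly with plain string comparison.
  return charges.filter(
    (c) => c.userId === userId && c.created >= from && c.created <= to,
  );
}

const matches = filterCharges(
  [
    { userId: 'u1', amount: 100, created: '2024-01-15' },
    { userId: 'u2', amount: 50, created: '2024-01-20' },
  ],
  'u1',
  '2024-01-01',
  '2024-01-31',
);
console.log(matches.length); // 1
```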
When you call run(), several steps happen:

1. Preparing the prompts: The initial messages are created from the system prompt, the run() user prompt, and Network State, if the agent is part of a Network. You can modify these prompts before the inference call using the onStart lifecycle hook.
2. Inference call: An inference call is made to the provided model using Inngest's step.ai. step.ai automatically retries on failure and caches the result for durability. The result is parsed into an InferenceResult object that contains all messages, tool calls, and the raw API response from the model. You can inspect or modify this result using the onResponse lifecycle hook.
3. Tool calling: If the model decides to call one of the agent's tools, the Tool is automatically called. Afterwards, the onFinish lifecycle hook is called with the updated InferenceResult. This enables you to modify or inspect the output of the called tools.
4. Complete: The loop is complete and the result is returned.

Lifecycle hooks allow you to intercept and modify each of these steps. For example:
import { createAgent, openai } from '@inngest/agent-kit';

const agent = createAgent({
  name: 'Code writer',
  description: 'An expert TypeScript programmer which can write and debug code.',
  system: '...',
  model: openai('gpt-3.5-turbo'),
  lifecycle: {
    onStart: async ({ prompt, network: { state }, history }) => {
      // Dynamically alter prompts using Network state and history.
      return { prompt, history };
    },
  },
});
Each lifecycle hook is defined within the lifecycle options object.

The system prompt can also be created dynamically at runtime using Network state:
const agent = createAgent({
  name: 'Code writer',
  description:
    'An expert TypeScript programmer which can write and debug code.',
  // The system prompt can be dynamically created at runtime using Network state:
  system: async ({ network }) => {
    // A default base prompt to build from:
    const basePrompt =
      'You are an expert TypeScript programmer. ' +
      'Given a set of asks, think step-by-step to plan clean, ' +
      'idiomatic TypeScript code, with comments and tests as necessary.';

    // Inspect the Network state, checking for existing code saved as files:
    const files: Record<string, string> | undefined = network.state.data.files;
    if (!files) {
      return basePrompt;
    }

    // Add the files from Network state as additional context automatically
    let additionalContext = 'The following code already exists:';
    for (const [name, content] of Object.entries(files)) {
      additionalContext += `<file name='${name}'>${content}</file>`;
    }
    return `${basePrompt} ${additionalContext}`;
  },
});
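The context-building loop in the dynamic system prompt is plain string concatenation. Pulled out as a standalone function (the buildFileContext name is hypothetical), it behaves like this:

```typescript
// Hypothetical standalone version of the loop in the dynamic system
// prompt above: serializes saved files into <file> tags for extra context.
function buildFileContext(files: Record<string, string>): string {
  let additionalContext = 'The following code already exists:';
  for (const [name, content] of Object.entries(files)) {
    additionalContext += `<file name='${name}'>${content}</file>`;
  }
  return additionalContext;
}

console.log(buildFileContext({ 'index.ts': 'export {};' }));
// The following code already exists:<file name='index.ts'>export {};</file>
```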
If the system prompt does not need runtime context, a static string works:

const copyEditorAgent = createAgent({
  name: 'Copy editor',
  system:
    `You are an expert copy editor. Given a draft article, you provide ` +
    `actionable improvements for spelling, grammar, punctuation, and formatting.`,
  model: openai('gpt-3.5-turbo'),
});
Similar to how Tools have a description that enables an LLM to decide when to call it, Agents also have a description parameter. This is required when using Agents within Networks. Here is an example of an Agent with a description:
const codeWriterAgent = createAgent({
  name: 'Code writer',
  description:
    'An expert TypeScript programmer which can write and debug code. Call this when custom code is required to complete a task.',
  system: `...`,
  model: openai('gpt-3.5-turbo'),
});
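To see why the description matters, consider how a Network's router chooses between agents. AgentKit routing is typically model-driven, but this toy keyword matcher (entirely hypothetical, not AgentKit code) shows the underlying idea: the description is the signal a router reads when deciding which agent fits a task:

```typescript
// Toy illustration only: real Networks usually route with an LLM, but the
// agent descriptions are still what the router uses to choose.
type AgentInfo = { name: string; description: string };

function pickAgent(task: string, agents: AgentInfo[]): string | undefined {
  const words = task.toLowerCase().split(/\W+/);
  let best: { name: string; score: number } | undefined;
  for (const a of agents) {
    const desc = a.description.toLowerCase();
    // Score by how many meaningful task words appear in the description.
    const score = words.filter((w) => w.length > 3 && desc.includes(w)).length;
    if (score > 0 && (!best || score > best.score)) {
      best = { name: a.name, score };
    }
  }
  return best?.name;
}

const chosen = pickAgent('write custom typescript code', [
  {
    name: 'Code writer',
    description:
      'An expert TypeScript programmer which can write and debug code.',
  },
  {
    name: 'Copy editor',
    description: 'Improves spelling, grammar, and punctuation in draft articles.',
  },
]);
console.log(chosen); // Code writer
```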