# Next.js App Router Quickstart
The AI SDK is a powerful TypeScript library designed to help developers build AI-powered applications.
In this quickstart tutorial, you'll build a simple AI chatbot with a streaming user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects.
If you are unfamiliar with the concepts of [Prompt Engineering](/docs/advanced/prompt-engineering) and [HTTP Streaming](/docs/advanced/why-streaming), you can optionally read these documents first.
## Prerequisites
To follow this quickstart, you'll need:
- Node.js 18+ and pnpm installed on your local development machine.
- An OpenRouter API key.
If you haven't obtained your OpenRouter API key, you can do so by [signing up](https://openrouter.ai/) on the OpenRouter website and visiting [https://openrouter.ai/settings/keys](https://openrouter.ai/settings/keys).
## Create Your Application
Start by creating a new Next.js application. This command will create a new directory named `my-ai-app` and set up a basic Next.js application inside it.
<div className="mb-4">
<Note>
Be sure to select yes when prompted to use the App Router and Tailwind CSS.
If you are looking for the Next.js Pages Router quickstart guide, you can
find it [here](/docs/getting-started/nextjs-pages-router).
</Note>
</div>
<Snippet text="pnpm create next-app@latest my-ai-app" />
Navigate to the newly created directory:
<Snippet text="cd my-ai-app" />
### Install dependencies
Install `ai`, `@ai-sdk/react`, `@openrouter/ai-sdk-provider`, and `zod`: the core AI SDK package, the AI SDK's React hooks, the OpenRouter provider, and a schema validation library you'll use later to define tool inputs.
<Note>
The AI SDK is designed to be a unified interface to interact with any large
language model. This means that you can change model and providers with just
one line of code! Learn more about [available providers](/providers) and
[building custom providers](/providers/community-providers/custom-providers)
in the [providers](/providers) section.
</Note>
<div className="my-4">
<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
<Tab>
<Snippet text="pnpm add ai @ai-sdk/react @openrouter/ai-sdk-provider zod" dark />
</Tab>
<Tab>
<Snippet text="npm install ai @ai-sdk/react @openrouter/ai-sdk-provider zod" dark />
</Tab>
<Tab>
<Snippet text="yarn add ai @ai-sdk/react @openrouter/ai-sdk-provider zod" dark />
</Tab>
<Tab>
<Snippet text="bun add ai @ai-sdk/react @openrouter/ai-sdk-provider zod" dark />
</Tab>
</Tabs>
</div>
### Configure OpenRouter API key
Create a `.env.local` file in your project root and add your OpenRouter API key. This key is used to authenticate your application with OpenRouter.
<Snippet text="touch .env.local" />
Edit the `.env.local` file:
```env filename=".env.local"
OPENROUTER_API_KEY=sk-or-v1-xxxxxxxxx
OPENROUTER_MODEL=openai/gpt-5-mini
```
Replace the API key with your actual OpenRouter API key from [https://openrouter.ai/settings/keys](https://openrouter.ai/settings/keys).
<Note className="mb-4">
You can browse available models at [https://openrouter.ai/models](https://openrouter.ai/models) and set your preferred model via the `OPENROUTER_MODEL` environment variable.
</Note>
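Since a missing key only surfaces as an authentication error at request time, you may optionally fail fast instead. A minimal sketch (the error wording is just a suggestion):
```tsx
// Optional guard for a shared config module or the top of a route handler:
// fail fast with a clear message if the key was never configured.
if (!process.env.OPENROUTER_API_KEY) {
  throw new Error(
    "OPENROUTER_API_KEY is not set. Add it to .env.local before starting the app.",
  );
}
```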
## Create a Route Handler
Create a route handler, `app/api/chat/route.ts` and add the following code:
```tsx filename="app/api/chat/route.ts"
import { createOpenRouter } from "@openrouter/ai-sdk-provider";
import { streamText, UIMessage, convertToModelMessages } from "ai";

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  // Initialize OpenRouter with API key from environment
  const openrouter = createOpenRouter({
    apiKey: process.env.OPENROUTER_API_KEY,
  });

  const result = streamText({
    model: openrouter(process.env.OPENROUTER_MODEL || "openai/gpt-5-mini"),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
```
Let's take a look at what is happening in this code:
1. Define an asynchronous `POST` request handler and extract `messages` from the body of the request. The `messages` variable contains a history of the conversation between you and the chatbot and provides the chatbot with the necessary context to make the next generation. The messages are of the `UIMessage` type, which is designed for use in application UI; they contain the entire message history and associated metadata like timestamps.
2. Call [`streamText`](/docs/reference/ai-sdk-core/stream-text), which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider (created using `createOpenRouter` from `@openrouter/ai-sdk-provider`) and `messages` (defined in step 1). You can pass additional [settings](/docs/ai-sdk-core/settings) to further customize the model's behavior. The `messages` key expects a `ModelMessage[]` array. This type differs from `UIMessage` in that it does not include metadata, such as timestamps or sender information. To convert between these types, we use the `convertToModelMessages` function, which strips the UI-specific metadata and transforms the `UIMessage[]` array into the `ModelMessage[]` format that the model expects.
3. The `streamText` function returns a [`StreamTextResult`](/docs/reference/ai-sdk-core/stream-text#result-object). This result object contains the [`toUIMessageStreamResponse`](/docs/reference/ai-sdk-core/stream-text#to-data-stream-response) function, which converts the result to a streamed response object.
4. Finally, return the result to the client to stream the response.
This Route Handler creates a POST request endpoint at `/api/chat`.
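To make the conversion in step 2 concrete, here is a rough illustration of the two message shapes. The objects below are simplified sketches, not the SDK's exact internal types:
```tsx
// Simplified sketch: a UIMessage carries an id, a role, and an ordered
// parts array designed for rendering in the UI...
const uiMessage = {
  id: "msg_1",
  role: "user" as const,
  parts: [{ type: "text" as const, text: "Hello!" }],
};

// ...while the equivalent ModelMessage keeps only what the model needs.
const modelMessage = {
  role: "user" as const,
  content: "Hello!",
};
```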
## Wire up the UI
Now that you have a Route Handler that can query an LLM, it's time to set up your frontend. The AI SDK's [UI](/docs/ai-sdk-ui) package abstracts the complexity of a chat interface into one hook, [`useChat`](/docs/reference/ai-sdk-ui/use-chat).
Update your root page (`app/page.tsx`) with the following code to show a list of chat messages and provide a user message input:
```tsx filename="app/page.tsx"
"use client";

import { useChat } from "@ai-sdk/react";
import { useState } from "react";

export default function Chat() {
  const [input, setInput] = useState("");
  const { messages, sendMessage } = useChat();

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map((message) => (
        <div key={message.id} className="whitespace-pre-wrap">
          {message.role === "user" ? "User: " : "AI: "}
          {message.parts.map((part, i) => {
            switch (part.type) {
              case "text":
                return <div key={`${message.id}-${i}`}>{part.text}</div>;
            }
          })}
        </div>
      ))}

      <form
        onSubmit={(e) => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput("");
        }}
      >
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={(e) => setInput(e.currentTarget.value)}
        />
      </form>
    </div>
  );
}
```
<Note>
Make sure you add the `"use client"` directive to the top of your file. This
allows you to add interactivity with JavaScript.
</Note>
This page utilizes the `useChat` hook, which will, by default, use the `POST` API route you created earlier (`/api/chat`). The hook provides the following utility functions and state variables for handling user input and form submission:
- `messages` - the current chat messages (an array of objects with `id`, `role`, and `parts` properties).
- `sendMessage` - a function to send a message to the chat API.
The component uses local state (`useState`) to manage the input field value, and handles form submission by calling `sendMessage` with the input text and then clearing the input field.
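Because `sendMessage` is a plain function, you are not limited to form submission. For example, you could wire it to a suggested-prompt button. A small illustrative sketch, not part of the tutorial's page:
```tsx
// Illustrative sketch: a suggestion button that sends a canned prompt
// through the same sendMessage function useChat provides.
function SuggestionButton({
  sendMessage,
}: {
  sendMessage: (message: { text: string }) => void;
}) {
  return (
    <button
      type="button"
      onClick={() => sendMessage({ text: "Tell me a fun fact." })}
    >
      Fun fact
    </button>
  );
}
```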
The LLM's response is accessed through the message `parts` array. Each message contains an ordered array of `parts` that represents everything the model generated in its response. These parts can include plain text, reasoning tokens, and more, as you'll see later. The `parts` array preserves the sequence of the model's outputs, allowing you to display or process each component in the order it was generated.
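As a rough sketch (illustrative shapes only, not the SDK's exact types), an assistant message with multiple ordered parts might look like this:
```tsx
// Illustrative only: real message objects come from the useChat hook.
// A response may carry several ordered parts, for example reasoning
// followed by the final text.
const assistantMessage = {
  id: "msg_2",
  role: "assistant" as const,
  parts: [
    { type: "reasoning" as const, text: "The user asked about France..." },
    { type: "text" as const, text: "Paris is the capital of France." },
  ],
};
```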
## Running Your Application
With that, you have built everything you need for your chatbot! To start your application, use the command:
<Snippet text="pnpm run dev" />
Head to your browser and open http://localhost:3000. You should see an input field. Test it out by entering a message and watching the AI chatbot respond in real time. The AI SDK makes it fast and easy to build AI chat interfaces with Next.js.
## Enhance Your Chatbot with Tools
While large language models (LLMs) have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather). This is where [tools](/docs/ai-sdk-core/tools-and-tool-calling) come in.
Tools are actions that an LLM can invoke. The results of these actions can be reported back to the LLM to be considered in the next response.
For example, if a user asks about the current weather, without tools, the model would only be able to provide general information based on its training data. But with a weather tool, it can fetch and provide up-to-date, location-specific weather information.
Let's enhance your chatbot by adding a simple weather tool.
### Update Your Route Handler
Modify your `app/api/chat/route.ts` file to include the new weather tool:
```tsx filename="app/api/chat/route.ts" highlight="2-3,17-31"
import { createOpenRouter } from "@openrouter/ai-sdk-provider";
import { streamText, UIMessage, convertToModelMessages, tool } from "ai";
import { z } from "zod";

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const openrouter = createOpenRouter({
    apiKey: process.env.OPENROUTER_API_KEY,
  });

  const result = streamText({
    model: openrouter(process.env.OPENROUTER_MODEL || "openai/gpt-5-mini"),
    messages: convertToModelMessages(messages),
    tools: {
      weather: tool({
        description: "Get the weather in a location (fahrenheit)",
        inputSchema: z.object({
          location: z.string().describe("The location to get the weather for"),
        }),
        execute: async ({ location }) => {
          const temperature = Math.round(Math.random() * (90 - 32) + 32);
          return {
            location,
            temperature,
          };
        },
      }),
    },
  });

  return result.toUIMessageStreamResponse();
}
```
In this updated code:
1. You import the `tool` function from the `ai` package and `z` from `zod` for schema validation.
2. You define a `tools` object with a `weather` tool. This tool:
- Has a description that helps the model understand when to use it.
- Defines `inputSchema` using a Zod schema, specifying that it requires a `location` string to execute this tool. The model will attempt to extract this input from the context of the conversation. If it can't, it will ask the user for the missing information.
- Defines an `execute` function that simulates getting weather data (in this case, it returns a random temperature). This is an asynchronous function running on the server so you can fetch real data from an external API.
Now your chatbot can "fetch" weather information for any location the user asks about. When the model determines it needs to use the weather tool, it will generate a tool call with the necessary input. The `execute` function will then be automatically run, and the tool output will be added to the `messages` as a `tool` message.
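In a real application, `execute` can fetch live data instead of returning a random number. As a sketch, here is what that could look like; the wttr.in endpoint and its JSON shape are assumptions for illustration, not part of this guide:
```tsx
// Hypothetical real-data replacement for the simulated execute function.
// The wttr.in endpoint and response shape are assumed for illustration.
async function fetchWeather({ location }: { location: string }) {
  const res = await fetch(
    `https://wttr.in/${encodeURIComponent(location)}?format=j1`,
  );
  if (!res.ok) throw new Error(`Weather lookup failed: ${res.status}`);
  const data = await res.json();
  return {
    location,
    temperature: Number(data.current_condition?.[0]?.temp_F),
  };
}
```
You could then pass `execute: fetchWeather` in the tool definition.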
Try asking something like "What's the weather in New York?" and see how the model uses the new tool.
Notice the blank response in the UI? This is because instead of generating a text response, the model generated a tool call. You can access the tool call and subsequent tool result on the client via the `tool-weather` part of the `message.parts` array.
<Note>
Tool parts are always named `tool-{toolName}`, where `{toolName}` is the key
you used when defining the tool. In this case, since we defined the tool as
`weather`, the part type is `tool-weather`.
</Note>
### Update the UI
To display the tool invocation in your UI, update your `app/page.tsx` file:
```tsx filename="app/page.tsx" highlight="19-24"
"use client";

import { useChat } from "@ai-sdk/react";
import { useState } from "react";

export default function Chat() {
  const [input, setInput] = useState("");
  const { messages, sendMessage } = useChat();

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map((message) => (
        <div key={message.id} className="whitespace-pre-wrap">
          {message.role === "user" ? "User: " : "AI: "}
          {message.parts.map((part, i) => {
            switch (part.type) {
              case "text":
                return <div key={`${message.id}-${i}`}>{part.text}</div>;
              case "tool-weather":
                return (
                  <pre key={`${message.id}-${i}`}>
                    {JSON.stringify(part, null, 2)}
                  </pre>
                );
            }
          })}
        </div>
      ))}

      <form
        onSubmit={(e) => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput("");
        }}
      >
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={(e) => setInput(e.currentTarget.value)}
        />
      </form>
    </div>
  );
}
```
With this change, you're updating the UI to handle different message parts. For text parts, you display the text content as before. For weather tool invocations, you display a JSON representation of the tool call and its result.
Now, when you ask about the weather, you'll see the tool call and its result displayed in your chat interface.
## Enabling Multi-Step Tool Calls
You may have noticed that while the tool is now visible in the chat interface, the model isn't using this information to answer your original query. This is because once the model generates a tool call, it has technically completed its generation.
To solve this, you can enable multi-step tool calls using `stopWhen`. By default, `stopWhen` is set to `stepCountIs(1)`, which means generation stops after the first step when there are tool results. By changing this condition, you can allow the model to automatically send tool results back to itself to trigger additional generations until your specified stopping condition is met. In this case, you want the model to continue generating so it can use the weather tool results to answer your original question.
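As an aside, `stepCountIs` is not the only stop condition. If you are on AI SDK 5, the `ai` package also exports `hasToolCall`, and `stopWhen` accepts an array of conditions, stopping when any of them matches. A sketch:
```tsx
import { stepCountIs, hasToolCall } from "ai";

// Stop after 5 steps, or as soon as the model calls the weather tool,
// whichever happens first.
const stopWhen = [stepCountIs(5), hasToolCall("weather")];
```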
### Update Your Route Handler
Modify your `app/api/chat/route.ts` file to include the `stopWhen` condition:
```tsx filename="app/api/chat/route.ts"
import { createOpenRouter } from "@openrouter/ai-sdk-provider";
import {
  streamText,
  UIMessage,
  convertToModelMessages,
  tool,
  stepCountIs,
} from "ai";
import { z } from "zod";

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const openrouter = createOpenRouter({
    apiKey: process.env.OPENROUTER_API_KEY,
  });

  const result = streamText({
    model: openrouter(process.env.OPENROUTER_MODEL || "openai/gpt-5-mini"),
    messages: convertToModelMessages(messages),
    stopWhen: stepCountIs(5),
    tools: {
      weather: tool({
        description: "Get the weather in a location (fahrenheit)",
        inputSchema: z.object({
          location: z.string().describe("The location to get the weather for"),
        }),
        execute: async ({ location }) => {
          const temperature = Math.round(Math.random() * (90 - 32) + 32);
          return {
            location,
            temperature,
          };
        },
      }),
    },
  });

  return result.toUIMessageStreamResponse();
}
```
In this updated code, you set `stopWhen` to `stepCountIs(5)`, allowing the model to use up to 5 "steps" for any given generation. When a step produces tool results, they are automatically sent back to the model to trigger the next step, until the stop condition is met or the model responds with plain text.
Head back to the browser and ask about the weather in a location. You should now see the model using the weather tool results to answer your question.
This enables more complex interactions, letting the model gather and process information over several steps if needed. You can see this in action by adding another tool, this time to convert temperatures from Fahrenheit to Celsius.
### Add another tool
Update your `app/api/chat/route.ts` file to add a new tool to convert the temperature from Fahrenheit to Celsius:
```tsx filename="app/api/chat/route.ts" highlight="38-51"
import { createOpenRouter } from "@openrouter/ai-sdk-provider";
import {
  streamText,
  UIMessage,
  convertToModelMessages,
  tool,
  stepCountIs,
} from "ai";
import { z } from "zod";

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const openrouter = createOpenRouter({
    apiKey: process.env.OPENROUTER_API_KEY,
  });

  const result = streamText({
    model: openrouter(process.env.OPENROUTER_MODEL || "openai/gpt-5-mini"),
    messages: convertToModelMessages(messages),
    stopWhen: stepCountIs(5),
    tools: {
      weather: tool({
        description: "Get the weather in a location (fahrenheit)",
        inputSchema: z.object({
          location: z.string().describe("The location to get the weather for"),
        }),
        execute: async ({ location }) => {
          const temperature = Math.round(Math.random() * (90 - 32) + 32);
          return {
            location,
            temperature,
          };
        },
      }),
      convertFahrenheitToCelsius: tool({
        description: "Convert a temperature in fahrenheit to celsius",
        inputSchema: z.object({
          temperature: z
            .number()
            .describe("The temperature in fahrenheit to convert"),
        }),
        execute: async ({ temperature }) => {
          const celsius = Math.round((temperature - 32) * (5 / 9));
          return {
            celsius,
          };
        },
      }),
    },
  });

  return result.toUIMessageStreamResponse();
}
```
### Update Your Frontend
Update your `app/page.tsx` file to render the new temperature conversion tool:
```tsx filename="app/page.tsx" highlight="20"
"use client";

import { useChat } from "@ai-sdk/react";
import { useState } from "react";

export default function Chat() {
  const [input, setInput] = useState("");
  const { messages, sendMessage } = useChat();

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map((message) => (
        <div key={message.id} className="whitespace-pre-wrap">
          {message.role === "user" ? "User: " : "AI: "}
          {message.parts.map((part, i) => {
            switch (part.type) {
              case "text":
                return <div key={`${message.id}-${i}`}>{part.text}</div>;
              case "tool-weather":
              case "tool-convertFahrenheitToCelsius":
                return (
                  <pre key={`${message.id}-${i}`}>
                    {JSON.stringify(part, null, 2)}
                  </pre>
                );
            }
          })}
        </div>
      ))}

      <form
        onSubmit={(e) => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput("");
        }}
      >
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={(e) => setInput(e.currentTarget.value)}
        />
      </form>
    </div>
  );
}
```
This update handles the new `tool-convertFahrenheitToCelsius` part type, displaying the temperature conversion tool calls and results in the UI.
Now, when you ask "What's the weather in New York in celsius?", you should see a more complete interaction:
1. The model will call the weather tool for New York.
2. You'll see the tool output displayed.
3. It will then call the temperature conversion tool to convert the temperature from Fahrenheit to Celsius.
4. The model will then use that information to provide a natural language response about the weather in New York.
This multi-step approach allows the model to gather information and use it to provide more accurate and contextual responses, making your chatbot considerably more useful.
This simple example demonstrates how tools can expand your model's capabilities. You can create more complex tools to integrate with real APIs, databases, or any other external systems, allowing the model to access and process real-world data in real-time. Tools bridge the gap between the model's knowledge cutoff and current information.
## Where to Next?
You've built an AI chatbot using the AI SDK! From here, you have several paths to explore:
- To learn more about the AI SDK, read through the [documentation](/docs).
- If you're interested in diving deeper with guides, check out the [RAG (retrieval-augmented generation)](/docs/guides/rag-chatbot) and [multi-modal chatbot](/docs/guides/multi-modal-chatbot) guides.
- To jumpstart your first AI project, explore available [templates](https://vercel.com/templates?type=ai).