npx command

create-agentic-app/template/docs/business/starter-prompt.md (new file, 94 lines)
I'm working with an agentic coding boilerplate project that includes authentication, database integration, and AI capabilities. Here's what's already set up:

## Current Agentic Coding Boilerplate Structure

- **Authentication**: Better Auth with Google OAuth integration
- **Database**: Drizzle ORM with PostgreSQL setup
- **AI Integration**: Vercel AI SDK with OpenAI integration
- **UI**: shadcn/ui components with Tailwind CSS
- **Current Routes**:
  - `/` - Home page with setup instructions and feature overview
  - `/dashboard` - Protected dashboard page (requires authentication)
  - `/chat` - AI chat interface (requires OpenAI API key)

## Important Context

This is an **agentic coding boilerplate/starter template** - all existing pages and components are meant to be examples and should be **completely replaced** to build the actual AI-powered application.

### CRITICAL: You MUST Override All Boilerplate Content

**DO NOT keep any boilerplate components, text, or UI elements unless explicitly requested.** This includes:

- **Remove all placeholder/demo content** (setup checklists, welcome messages, boilerplate text)
- **Replace the entire navigation structure** - don't keep the existing site header or nav items
- **Override all page content completely** - don't append to existing pages, replace them entirely
- **Remove or replace all example components** (setup-checklist, starter-prompt-modal, etc.)
- **Replace placeholder routes and pages** with the actual application functionality

### Required Actions:

1. **Start Fresh**: Treat existing components as temporary scaffolding to be removed
2. **Complete Replacement**: Build the new application from scratch using the existing tech stack
3. **No Hybrid Approach**: Don't try to integrate new features alongside existing boilerplate content
4. **Clean Slate**: The final application should have NO trace of the original boilerplate UI or content

The only things to preserve are:

- **All installed libraries and dependencies** (DO NOT uninstall or remove any packages from package.json)
- **Authentication system** (but customize the UI/flow as needed)
- **Database setup and schema** (but modify schema as needed for your use case)
- **Core configuration files** (next.config.ts, tsconfig.json, tailwind.config.ts, etc.)
- **Build and development scripts** (keep all npm/pnpm scripts in package.json)

## Tech Stack

- Next.js 15 with App Router
- TypeScript
- Tailwind CSS
- Better Auth for authentication
- Drizzle ORM + PostgreSQL
- Vercel AI SDK
- shadcn/ui components
- Lucide React icons

## Component Development Guidelines

**Always prioritize shadcn/ui components** when building the application:

1. **First Choice**: Use existing shadcn/ui components from the project
2. **Second Choice**: Install additional shadcn/ui components using `pnpm dlx shadcn@latest add <component-name>`
3. **Last Resort**: Only create custom components or use other libraries if shadcn/ui doesn't provide a suitable option

The project already includes several shadcn/ui components (button, dialog, avatar, etc.) and follows their design system. Always check the [shadcn/ui documentation](https://ui.shadcn.com/docs/components) for available components before implementing alternatives.

## What I Want to Build

A basic todo list app that lets users add, remove, update, complete, and view todos.

## Request

Please help me transform this boilerplate into my actual application. **You MUST completely replace all existing boilerplate code** to match my project requirements. The current implementation is just temporary scaffolding that should be entirely removed and replaced.

## Final Reminder: COMPLETE REPLACEMENT REQUIRED

🚨 **IMPORTANT**: Do not preserve any of the existing boilerplate UI, components, or content. The user expects a completely fresh application that implements their requirements from scratch. Any remnants of the original boilerplate (like setup checklists, welcome screens, demo content, or placeholder navigation) indicate an incomplete implementation.

**Success Criteria**: The final application should look and function as if it had been built from scratch for the specific use case, with no evidence of the original boilerplate template.

## Post-Implementation Documentation

After completing the implementation, you MUST document any new features or significant changes in the `/docs/features/` directory:

1. **Create Feature Documentation**: For each major feature implemented, create a markdown file in `/docs/features/` that explains:

   - What the feature does
   - How it works
   - Key components and files involved
   - Usage examples
   - Any configuration or setup required

2. **Update Existing Documentation**: If you modify existing functionality, update the relevant documentation files to reflect the changes.

3. **Document Design Decisions**: Include any important architectural or design decisions made during implementation.

This documentation helps maintain the project and assists future developers working with the codebase.
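As a reference, a minimal feature doc might look like the sketch below (the file name, headings, and paths are hypothetical - adapt them to each feature):

```markdown
<!-- docs/features/todos.md (hypothetical example) -->
# Todos

## What it does
Lets authenticated users create, update, complete, and delete todo items.

## How it works
Todos are stored in a Drizzle-managed table and read/written through server
code; the dashboard page renders them with shadcn/ui components.

## Key files
- `src/db/schema.ts` - todos table definition (example path)
- `src/app/dashboard/page.tsx` - list and manage todos (example path)

## Configuration
No additional environment variables required.
```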
create-agentic-app/template/docs/technical/ai/streaming.md (new file, 503 lines)
# Next.js App Router Quickstart

The AI SDK is a powerful TypeScript library designed to help developers build AI-powered applications.

In this quickstart tutorial, you'll build a simple AI chatbot with a streaming user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects.

If you are unfamiliar with the concepts of [Prompt Engineering](/docs/advanced/prompt-engineering) and [HTTP Streaming](/docs/advanced/why-streaming), you can optionally read these documents first.

## Prerequisites

To follow this quickstart, you'll need:

- Node.js 18+ and pnpm installed on your local development machine.
- An OpenAI API key.

If you haven't obtained your OpenAI API key, you can do so by [signing up](https://platform.openai.com/signup/) on the OpenAI website.

## Create Your Application

Start by creating a new Next.js application. This command will create a new directory named `my-ai-app` and set up a basic Next.js application inside it.

<div className="mb-4">
  <Note>
    Be sure to select yes when prompted to use the App Router and Tailwind CSS.
    If you are looking for the Next.js Pages Router quickstart guide, you can
    find it [here](/docs/getting-started/nextjs-pages-router).
  </Note>
</div>

<Snippet text="pnpm create next-app@latest my-ai-app" />

Navigate to the newly created directory:

<Snippet text="cd my-ai-app" />

### Install dependencies

Install `ai`, `@ai-sdk/react`, `@ai-sdk/openai`, and `zod`: the AI SDK core package, the AI SDK's React hooks, the AI SDK's [OpenAI provider](/providers/ai-sdk-providers/openai), and a schema validation library used for tool and structured-output definitions.

<Note>
  The AI SDK is designed to be a unified interface to interact with any large
  language model. This means that you can change model and providers with just
  one line of code! Learn more about [available providers](/providers) and
  [building custom providers](/providers/community-providers/custom-providers)
  in the [providers](/providers) section.
</Note>

<div className="my-4">
  <Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
    <Tab>
      <Snippet text="pnpm add ai @ai-sdk/react @ai-sdk/openai zod" dark />
    </Tab>
    <Tab>
      <Snippet text="npm install ai @ai-sdk/react @ai-sdk/openai zod" dark />
    </Tab>
    <Tab>
      <Snippet text="yarn add ai @ai-sdk/react @ai-sdk/openai zod" dark />
    </Tab>
    <Tab>
      <Snippet text="bun add ai @ai-sdk/react @ai-sdk/openai zod" dark />
    </Tab>
  </Tabs>
</div>

### Configure OpenAI API key

Create a `.env.local` file in your project root and add your OpenAI API key. This key is used to authenticate your application with the OpenAI service.

<Snippet text="touch .env.local" />

Edit the `.env.local` file:

```env filename=".env.local"
OPENAI_API_KEY=xxxxxxxxx
```

Replace `xxxxxxxxx` with your actual OpenAI API key.

<Note className="mb-4">
  The AI SDK's OpenAI Provider will default to using the `OPENAI_API_KEY`
  environment variable.
</Note>
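The route handlers later in this guide also read an optional `OPENAI_MODEL` environment variable and fall back to `gpt-5-mini` when it is not set, so you can pin a model in the same file if you like:

```env filename=".env.local"
OPENAI_API_KEY=xxxxxxxxx
# Optional: model used by the chat route handlers below (they default to gpt-5-mini)
OPENAI_MODEL=gpt-5-mini
```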
## Create a Route Handler

Create a route handler at `app/api/chat/route.ts` and add the following code:

```tsx filename="app/api/chat/route.ts"
import { openai } from "@ai-sdk/openai";
import { streamText, UIMessage, convertToModelMessages } from "ai";

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai(process.env.OPENAI_MODEL || "gpt-5-mini"),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
```

Let's take a look at what is happening in this code:

1. Define an asynchronous `POST` request handler and extract `messages` from the body of the request. The `messages` variable contains a history of the conversation between you and the chatbot and provides the chatbot with the necessary context to make the next generation. The `messages` are of `UIMessage` type, which is designed for use in application UI - they contain the entire message history and associated metadata like timestamps.
2. Call [`streamText`](/docs/reference/ai-sdk-core/stream-text), which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider (imported from `@ai-sdk/openai`) and `messages` (defined in step 1). You can pass additional [settings](/docs/ai-sdk-core/settings) to further customise the model's behaviour. The `messages` key expects a `ModelMessage[]` array. This type differs from `UIMessage` in that it does not include metadata, such as timestamps or sender information. To convert between these types, we use the `convertToModelMessages` function, which strips the UI-specific metadata and transforms the `UIMessage[]` array into the `ModelMessage[]` format that the model expects.
3. The `streamText` function returns a [`StreamTextResult`](/docs/reference/ai-sdk-core/stream-text#result-object). This result object contains the [`toUIMessageStreamResponse`](/docs/reference/ai-sdk-core/stream-text#to-data-stream-response) function, which converts the result to a streamed response object.
4. Finally, return the result to the client to stream the response.

This Route Handler creates a POST request endpoint at `/api/chat`.
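To make the `UIMessage` vs. `ModelMessage` distinction from step 2 concrete, here is a rough, simplified sketch of the two shapes (illustrative only - the real types exported by the `ai` package carry more fields and variants):

```ts
// Simplified, illustrative shapes only - see the `ai` package for the real types.

// What the client sends and renders: an id plus ordered content parts.
type SketchUIMessage = {
  id: string;
  role: "user" | "assistant" | "system";
  parts: Array<{ type: "text"; text: string } | { type: string; [key: string]: unknown }>;
};

// What the model consumes after convertToModelMessages strips the UI metadata.
type SketchModelMessage = {
  role: "user" | "assistant" | "system";
  content: string; // simplified; real content can also be an array of typed parts
};
```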
## Wire up the UI

Now that you have a Route Handler that can query an LLM, it's time to set up your frontend. The AI SDK's [UI](/docs/ai-sdk-ui) package abstracts the complexity of a chat interface into one hook, [`useChat`](/docs/reference/ai-sdk-ui/use-chat).

Update your root page (`app/page.tsx`) with the following code to show a list of chat messages and provide a user message input:

```tsx filename="app/page.tsx"
"use client";

import { useChat } from "@ai-sdk/react";
import { useState } from "react";

export default function Chat() {
  const [input, setInput] = useState("");
  const { messages, sendMessage } = useChat();
  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map((message) => (
        <div key={message.id} className="whitespace-pre-wrap">
          {message.role === "user" ? "User: " : "AI: "}
          {message.parts.map((part, i) => {
            switch (part.type) {
              case "text":
                return <div key={`${message.id}-${i}`}>{part.text}</div>;
            }
          })}
        </div>
      ))}

      <form
        onSubmit={(e) => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput("");
        }}
      >
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={(e) => setInput(e.currentTarget.value)}
        />
      </form>
    </div>
  );
}
```

<Note>
  Make sure you add the `"use client"` directive to the top of your file. This
  allows you to add interactivity with JavaScript.
</Note>

This page utilizes the `useChat` hook, which will, by default, use the `POST` API route you created earlier (`/api/chat`). The hook provides functions and state for handling user input and form submission. The `useChat` hook provides multiple utility functions and state variables:

- `messages` - the current chat messages (an array of objects with `id`, `role`, and `parts` properties).
- `sendMessage` - a function to send a message to the chat API.

The component uses local state (`useState`) to manage the input field value, and handles form submission by calling `sendMessage` with the input text and then clearing the input field.

The LLM's response is accessed through the message `parts` array. Each message contains an ordered array of `parts` that represents everything the model generated in its response. These parts can include plain text, reasoning tokens, and more that you will see later. The `parts` array preserves the sequence of the model's outputs, allowing you to display or process each component in the order it was generated.

## Running Your Application

With that, you have built everything you need for your chatbot! To start your application, use the command:

<Snippet text="pnpm run dev" />

Head to your browser and open http://localhost:3000. You should see an input field. Test it out by entering a message and see the AI chatbot respond in real-time! The AI SDK makes it fast and easy to build AI chat interfaces with Next.js.

## Enhance Your Chatbot with Tools

While large language models (LLMs) have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather). This is where [tools](/docs/ai-sdk-core/tools-and-tool-calling) come in.

Tools are actions that an LLM can invoke. The results of these actions can be reported back to the LLM to be considered in the next response.

For example, if a user asks about the current weather, without tools, the model would only be able to provide general information based on its training data. But with a weather tool, it can fetch and provide up-to-date, location-specific weather information.

Let's enhance your chatbot by adding a simple weather tool.

### Update Your Route Handler

Modify your `app/api/chat/route.ts` file to include the new weather tool:

```tsx filename="app/api/chat/route.ts" highlight="2,13-27"
import { openai } from "@ai-sdk/openai";
import { streamText, UIMessage, convertToModelMessages, tool } from "ai";
import { z } from "zod";

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai(process.env.OPENAI_MODEL || "gpt-5-mini"),
    messages: convertToModelMessages(messages),
    tools: {
      weather: tool({
        description: "Get the weather in a location (fahrenheit)",
        inputSchema: z.object({
          location: z.string().describe("The location to get the weather for"),
        }),
        execute: async ({ location }) => {
          const temperature = Math.round(Math.random() * (90 - 32) + 32);
          return {
            location,
            temperature,
          };
        },
      }),
    },
  });

  return result.toUIMessageStreamResponse();
}
```

In this updated code:

1. You import the `tool` function from the `ai` package and `z` from `zod` for schema validation.
2. You define a `tools` object with a `weather` tool. This tool:

   - Has a description that helps the model understand when to use it.
   - Defines `inputSchema` using a Zod schema, specifying that it requires a `location` string to execute this tool. The model will attempt to extract this input from the context of the conversation. If it can't, it will ask the user for the missing information.
   - Defines an `execute` function that simulates getting weather data (in this case, it returns a random temperature). This is an asynchronous function running on the server, so you can fetch real data from an external API.

Now your chatbot can "fetch" weather information for any location the user asks about. When the model determines it needs to use the weather tool, it will generate a tool call with the necessary input. The `execute` function will then be automatically run, and the tool output will be added to the `messages` as a `tool` message.

Try asking something like "What's the weather in New York?" and see how the model uses the new tool.

Notice the blank response in the UI? This is because instead of generating a text response, the model generated a tool call. You can access the tool call and subsequent tool result on the client via the `tool-weather` part of the `message.parts` array.

<Note>
  Tool parts are always named `tool-{toolName}`, where `{toolName}` is the key
  you used when defining the tool. In this case, since we defined the tool as
  `weather`, the part type is `tool-weather`.
</Note>
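For orientation, a `tool-weather` part in `message.parts` looks roughly like the sketch below once the tool has finished running (field names other than `type` are illustrative - log the part object in the browser to see the exact shape for your SDK version):

```ts
// Illustrative only - inspect a real `part` object for the exact fields.
const exampleToolPart = {
  type: "tool-weather", // "tool-" + the key used in the `tools` object
  // ...plus call metadata such as a tool call id and state
  input: { location: "New York" }, // what the model passed to the tool
  output: { location: "New York", temperature: 72 }, // what `execute` returned
};
```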
### Update the UI

To display the tool invocation in your UI, update your `app/page.tsx` file:

```tsx filename="app/page.tsx" highlight="16-21"
"use client";

import { useChat } from "@ai-sdk/react";
import { useState } from "react";

export default function Chat() {
  const [input, setInput] = useState("");
  const { messages, sendMessage } = useChat();
  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map((message) => (
        <div key={message.id} className="whitespace-pre-wrap">
          {message.role === "user" ? "User: " : "AI: "}
          {message.parts.map((part, i) => {
            switch (part.type) {
              case "text":
                return <div key={`${message.id}-${i}`}>{part.text}</div>;
              case "tool-weather":
                return (
                  <pre key={`${message.id}-${i}`}>
                    {JSON.stringify(part, null, 2)}
                  </pre>
                );
            }
          })}
        </div>
      ))}

      <form
        onSubmit={(e) => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput("");
        }}
      >
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={(e) => setInput(e.currentTarget.value)}
        />
      </form>
    </div>
  );
}
```

With this change, you're updating the UI to handle different message parts. For text parts, you display the text content as before. For weather tool invocations, you display a JSON representation of the tool call and its result.

Now, when you ask about the weather, you'll see the tool call and its result displayed in your chat interface.

## Enabling Multi-Step Tool Calls

You may have noticed that while the tool is now visible in the chat interface, the model isn't using this information to answer your original query. This is because once the model generates a tool call, it has technically completed its generation.

To solve this, you can enable multi-step tool calls using `stopWhen`. By default, `stopWhen` is set to `stepCountIs(1)`, which means generation stops after the first step when there are tool results. By changing this condition, you can allow the model to automatically send tool results back to itself to trigger additional generations until your specified stopping condition is met. In this case, you want the model to continue generating so it can use the weather tool results to answer your original question.

### Update Your Route Handler

Modify your `app/api/chat/route.ts` file to include the `stopWhen` condition:

```tsx filename="app/api/chat/route.ts"
import { openai } from "@ai-sdk/openai";
import {
  streamText,
  UIMessage,
  convertToModelMessages,
  tool,
  stepCountIs,
} from "ai";
import { z } from "zod";

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai(process.env.OPENAI_MODEL || "gpt-5-mini"),
    messages: convertToModelMessages(messages),
    stopWhen: stepCountIs(5),
    tools: {
      weather: tool({
        description: "Get the weather in a location (fahrenheit)",
        inputSchema: z.object({
          location: z.string().describe("The location to get the weather for"),
        }),
        execute: async ({ location }) => {
          const temperature = Math.round(Math.random() * (90 - 32) + 32);
          return {
            location,
            temperature,
          };
        },
      }),
    },
  });

  return result.toUIMessageStreamResponse();
}
```

In this updated code, you set `stopWhen` to `stepCountIs(5)`, allowing the model to use up to 5 "steps" for any given generation. Whenever a step produces tool results, the SDK sends them back to the model to trigger another generation, until the stopping condition is met.

Head back to the browser and ask about the weather in a location. You should now see the model using the weather tool results to answer your question.

By setting `stopWhen: stepCountIs(5)`, you're allowing the model to use up to 5 "steps" for any given generation. This enables more complex interactions and allows the model to gather and process information over several steps if needed. You can see this in action by adding another tool, this time to convert the temperature from Fahrenheit to Celsius.

### Add another tool

Update your `app/api/chat/route.ts` file to add a new tool to convert the temperature from Fahrenheit to Celsius:

```tsx filename="app/api/chat/route.ts" highlight="34-47"
import { openai } from "@ai-sdk/openai";
import {
  streamText,
  UIMessage,
  convertToModelMessages,
  tool,
  stepCountIs,
} from "ai";
import { z } from "zod";

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai(process.env.OPENAI_MODEL || "gpt-5-mini"),
    messages: convertToModelMessages(messages),
    stopWhen: stepCountIs(5),
    tools: {
      weather: tool({
        description: "Get the weather in a location (fahrenheit)",
        inputSchema: z.object({
          location: z.string().describe("The location to get the weather for"),
        }),
        execute: async ({ location }) => {
          const temperature = Math.round(Math.random() * (90 - 32) + 32);
          return {
            location,
            temperature,
          };
        },
      }),
      convertFahrenheitToCelsius: tool({
        description: "Convert a temperature in fahrenheit to celsius",
        inputSchema: z.object({
          temperature: z
            .number()
            .describe("The temperature in fahrenheit to convert"),
        }),
        execute: async ({ temperature }) => {
          const celsius = Math.round((temperature - 32) * (5 / 9));
          return {
            celsius,
          };
        },
      }),
    },
  });

  return result.toUIMessageStreamResponse();
}
```

### Update Your Frontend

Update your `app/page.tsx` file to render the new temperature conversion tool:

```tsx filename="app/page.tsx" highlight="21"
"use client";

import { useChat } from "@ai-sdk/react";
import { useState } from "react";

export default function Chat() {
  const [input, setInput] = useState("");
  const { messages, sendMessage } = useChat();
  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map((message) => (
        <div key={message.id} className="whitespace-pre-wrap">
          {message.role === "user" ? "User: " : "AI: "}
          {message.parts.map((part, i) => {
            switch (part.type) {
              case "text":
                return <div key={`${message.id}-${i}`}>{part.text}</div>;
              case "tool-weather":
              case "tool-convertFahrenheitToCelsius":
                return (
                  <pre key={`${message.id}-${i}`}>
                    {JSON.stringify(part, null, 2)}
                  </pre>
                );
            }
          })}
        </div>
      ))}

      <form
        onSubmit={(e) => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput("");
        }}
      >
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={(e) => setInput(e.currentTarget.value)}
        />
      </form>
    </div>
  );
}
```

This update handles the new `tool-convertFahrenheitToCelsius` part type, displaying the temperature conversion tool calls and results in the UI.

Now, when you ask "What's the weather in New York in celsius?", you should see a more complete interaction:

1. The model will call the weather tool for New York.
2. You'll see the tool output displayed.
3. It will then call the temperature conversion tool to convert the temperature from Fahrenheit to Celsius.
4. The model will then use that information to provide a natural language response about the weather in New York.

This multi-step approach allows the model to gather information and use it to provide more accurate and contextual responses, making your chatbot considerably more useful.

This simple example demonstrates how tools can expand your model's capabilities. You can create more complex tools to integrate with real APIs, databases, or any other external systems, allowing the model to access and process real-world data in real time. Tools bridge the gap between the model's knowledge cutoff and current information.

## Where to Next?

You've built an AI chatbot using the AI SDK! From here, you have several paths to explore:

- To learn more about the AI SDK, read through the [documentation](/docs).
- If you're interested in diving deeper with guides, check out the [RAG (retrieval-augmented generation)](/docs/guides/rag-chatbot) and [multi-modal chatbot](/docs/guides/multi-modal-chatbot) guides.
- To jumpstart your first AI project, explore available [templates](https://vercel.com/templates?type=ai).
create-agentic-app/template/docs/technical/ai/structured-data.md (new file, 409 lines)
# Generating Structured Data

While text generation can be useful, your use case will likely call for generating structured data. For example, you might want to extract information from text, classify data, or generate synthetic data.

Many language models are capable of generating structured data, often exposed as "JSON modes" or "tools". However, you need to manually provide schemas and then validate the generated data, as LLMs can produce incorrect or incomplete structured data.

The AI SDK standardises structured object generation across model providers with the [`generateObject`](/docs/reference/ai-sdk-core/generate-object) and [`streamObject`](/docs/reference/ai-sdk-core/stream-object) functions. You can use both functions with different output strategies, e.g. `array`, `object`, `enum`, or `no-schema`, and with different generation modes, e.g. `auto`, `tool`, or `json`. You can use [Zod schemas](/docs/reference/ai-sdk-core/zod-schema), [Valibot](/docs/reference/ai-sdk-core/valibot-schema), or [JSON schemas](/docs/reference/ai-sdk-core/json-schema) to specify the shape of the data that you want, and the AI model will generate data that conforms to that structure.

<Note>
  You can pass Zod objects directly to the AI SDK functions or use the
  `zodSchema` helper function.
</Note>

## Generate Object

The `generateObject` function generates structured data from a prompt. The schema is also used to validate the generated data, ensuring type safety and correctness.

```ts
import { generateObject } from "ai";
import { z } from "zod";

const { object } = await generateObject({
  model: "openai/gpt-4.1",
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.object({ name: z.string(), amount: z.string() })),
      steps: z.array(z.string()),
    }),
  }),
  prompt: "Generate a lasagna recipe.",
});
```

<Note>
  See `generateObject` in action with [these examples](#more-examples)
</Note>

### Accessing response headers & body

Sometimes you need access to the full response from the model provider, e.g. to access some provider-specific headers or body content.

You can access the raw response headers and body using the `response` property:

```ts
import { generateObject } from "ai";

const result = await generateObject({
  // ...
});

console.log(JSON.stringify(result.response.headers, null, 2));
console.log(JSON.stringify(result.response.body, null, 2));
```

## Stream Object

Given the added complexity of returning structured data, model response time can be unacceptable for your interactive use case. With the [`streamObject`](/docs/reference/ai-sdk-core/stream-object) function, you can stream the model's response as it is generated.

```ts
import { streamObject } from "ai";

const { partialObjectStream } = streamObject({
  // ...
});

// use partialObjectStream as an async iterable
for await (const partialObject of partialObjectStream) {
  console.log(partialObject);
}
```

You can use `streamObject` to stream generated UIs in combination with React Server Components (see [Generative UI](../ai-sdk-rsc)) or the [`useObject`](/docs/reference/ai-sdk-ui/use-object) hook.
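As a rough sketch of the client side, the `useObject` hook can consume a route that calls `streamObject` and expose the partial object as it streams in. The hook name and exact options vary by AI SDK version (it may be exported as `experimental_useObject`), and the `/api/recipe` route is assumed here, so treat this as an illustration and check the linked reference:

```tsx
"use client";

// Depending on your AI SDK version this may be exported as `useObject` instead.
import { experimental_useObject as useObject } from "@ai-sdk/react";
import { z } from "zod";

// Assumption: an /api/recipe route handler that calls streamObject with this same schema.
const recipeSchema = z.object({
  recipe: z.object({
    name: z.string(),
    ingredients: z.array(z.object({ name: z.string(), amount: z.string() })),
    steps: z.array(z.string()),
  }),
});

export default function RecipePage() {
  const { object, submit } = useObject({ api: "/api/recipe", schema: recipeSchema });

  return (
    <div>
      <button onClick={() => submit("Generate a lasagna recipe.")}>Generate</button>
      {/* `object` is a partial object that fills in as the stream arrives */}
      <pre>{JSON.stringify(object, null, 2)}</pre>
    </div>
  );
}
```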
<Note>See `streamObject` in action with [these examples](#more-examples)</Note>

### `onError` callback

`streamObject` immediately starts streaming. Errors become part of the stream and are not thrown, to prevent e.g. servers from crashing.

To log errors, you can provide an `onError` callback that is triggered when an error occurs.

```tsx highlight="5-7"
import { streamObject } from "ai";

const result = streamObject({
  // ...
  onError({ error }) {
    console.error(error); // your error logging logic here
  },
});
```

## Output Strategy

You can use both functions with different output strategies, e.g. `array`, `object`, `enum`, or `no-schema`.

### Object

The default output strategy is `object`, which returns the generated data as an object. You don't need to specify the output strategy if you want to use the default.

### Array

If you want to generate an array of objects, you can set the output strategy to `array`. When you use the `array` output strategy, the schema specifies the shape of an array element. With `streamObject`, you can also stream the generated array elements using `elementStream`.

```ts highlight="7,18"
import { openai } from "@ai-sdk/openai";
import { streamObject } from "ai";
import { z } from "zod";

const { elementStream } = streamObject({
  model: openai("gpt-4.1"),
  output: "array",
  schema: z.object({
    name: z.string(),
    class: z
      .string()
      .describe("Character class, e.g. warrior, mage, or thief."),
    description: z.string(),
  }),
  prompt: "Generate 3 hero descriptions for a fantasy role playing game.",
});

for await (const hero of elementStream) {
  console.log(hero);
}
```

### Enum

If you want to generate a specific enum value, e.g. for classification tasks, you can set the output strategy to `enum` and provide a list of possible values in the `enum` parameter.

<Note>Enum output is only available with `generateObject`.</Note>

```ts highlight="5-6"
import { generateObject } from "ai";

const { object } = await generateObject({
  model: "openai/gpt-4.1",
  output: "enum",
  enum: ["action", "comedy", "drama", "horror", "sci-fi"],
  prompt:
    "Classify the genre of this movie plot: " +
    '"A group of astronauts travel through a wormhole in search of a ' +
    'new habitable planet for humanity."',
});
```

### No Schema

In some cases, you might not want to use a schema, for example when the data is a dynamic user request. You can use the `output` setting to set the output format to `no-schema` in those cases and omit the schema parameter.

```ts highlight="6"
import { openai } from "@ai-sdk/openai";
import { generateObject } from "ai";

const { object } = await generateObject({
  model: openai("gpt-4.1"),
  output: "no-schema",
  prompt: "Generate a lasagna recipe.",
});
```

## Schema Name and Description

You can optionally specify a name and description for the schema. These are used by some providers for additional LLM guidance, e.g. via tool or schema name.

```ts highlight="6-7"
import { generateObject } from "ai";
import { z } from "zod";

const { object } = await generateObject({
  model: "openai/gpt-4.1",
  schemaName: "Recipe",
  schemaDescription: "A recipe for a dish.",
  schema: z.object({
    name: z.string(),
    ingredients: z.array(z.object({ name: z.string(), amount: z.string() })),
    steps: z.array(z.string()),
  }),
  prompt: "Generate a lasagna recipe.",
});
```

## Accessing Reasoning

You can access the reasoning used by the language model to generate the object via the `reasoning` property on the result. This property contains a string with the model's thought process, if available.

```ts
import { openai, OpenAIResponsesProviderOptions } from "@ai-sdk/openai";
import { generateObject } from "ai";
import { z } from "zod/v4";

const result = await generateObject({
  model: openai("gpt-5"),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(
        z.object({
          name: z.string(),
          amount: z.string(),
        })
      ),
      steps: z.array(z.string()),
    }),
  }),
  prompt: "Generate a lasagna recipe.",
  providerOptions: {
    openai: {
      strictJsonSchema: true,
      reasoningSummary: "detailed",
    } satisfies OpenAIResponsesProviderOptions,
  },
});

console.log(result.reasoning);
```

## Error Handling

When `generateObject` cannot generate a valid object, it throws an [`AI_NoObjectGeneratedError`](/docs/reference/ai-sdk-errors/ai-no-object-generated-error).

This error occurs when the AI provider fails to generate a parsable object that conforms to the schema. It can arise due to the following reasons:

- The model failed to generate a response.
- The model generated a response that could not be parsed.
- The model generated a response that could not be validated against the schema.

The error preserves the following information to help you log the issue:

- `text`: The text that was generated by the model. This can be the raw text or the tool call text, depending on the object generation mode.
- `response`: Metadata about the language model response, including response id, timestamp, and model.
- `usage`: Request token usage.
- `cause`: The cause of the error (e.g. a JSON parsing error). You can use this for more detailed error handling.

```ts
import { generateObject, NoObjectGeneratedError } from "ai";

try {
  await generateObject({ model, schema, prompt });
} catch (error) {
  if (NoObjectGeneratedError.isInstance(error)) {
    console.log("NoObjectGeneratedError");
    console.log("Cause:", error.cause);
    console.log("Text:", error.text);
    console.log("Response:", error.response);
    console.log("Usage:", error.usage);
  }
}
```

## Repairing Invalid or Malformed JSON

<Note type="warning">
  The `repairText` function is experimental and may change in the future.
</Note>

Sometimes the model will generate invalid or malformed JSON. You can use the `repairText` function to attempt to repair the JSON.

It receives the error, either a `JSONParseError` or a `TypeValidationError`, and the text that was generated by the model. You can then attempt to repair the text and return the repaired text.

```ts highlight="7-10"
import { generateObject } from "ai";

const { object } = await generateObject({
  model,
  schema,
  prompt,
  experimental_repairText: async ({ text, error }) => {
    // example: add a closing brace to the text
    return text + "}";
  },
});
```

## Structured outputs with `generateText` and `streamText`

You can generate structured data with `generateText` and `streamText` by using the `experimental_output` setting.

<Note>
  Some models, e.g. those by OpenAI, support structured outputs and tool calling
  at the same time. This is only possible with `generateText` and `streamText`.
</Note>

<Note type="warning">
  Structured output generation with `generateText` and `streamText` is
  experimental and may change in the future.
</Note>

### `generateText`

```ts highlight="2,4-18"
// experimental_output is a structured object that matches the schema:
const { experimental_output } = await generateText({
  // ...
  experimental_output: Output.object({
    schema: z.object({
      name: z.string(),
      age: z.number().nullable().describe("Age of the person."),
      contact: z.object({
        type: z.literal("email"),
        value: z.string(),
      }),
      occupation: z.object({
        type: z.literal("employed"),
        company: z.string(),
        position: z.string(),
      }),
    }),
  }),
  prompt: "Generate an example person for testing.",
});
```

### `streamText`

```ts highlight="2,4-18"
// experimental_partialOutputStream contains generated partial objects:
const { experimental_partialOutputStream } = await streamText({
  // ...
  experimental_output: Output.object({
    schema: z.object({
      name: z.string(),
      age: z.number().nullable().describe("Age of the person."),
      contact: z.object({
        type: z.literal("email"),
        value: z.string(),
      }),
      occupation: z.object({
        type: z.literal("employed"),
        company: z.string(),
        position: z.string(),
      }),
    }),
  }),
  prompt: "Generate an example person for testing.",
});
```

## More Examples

You can see `generateObject` and `streamObject` in action using various frameworks in the following examples:

### `generateObject`

<ExampleLinks
  examples={[
    {
      title: 'Learn to generate objects in Node.js',
      link: '/examples/node/generating-structured-data/generate-object',
    },
    {
      title:
        'Learn to generate objects in Next.js with Route Handlers (AI SDK UI)',
      link: '/examples/next-pages/basics/generating-object',
    },
    {
      title:
        'Learn to generate objects in Next.js with Server Actions (AI SDK RSC)',
      link: '/examples/next-app/basics/generating-object',
    },
  ]}
/>

### `streamObject`

<ExampleLinks
  examples={[
    {
      title: 'Learn to stream objects in Node.js',
      link: '/examples/node/streaming-structured-data/stream-object',
    },
    {
      title:
        'Learn to stream objects in Next.js with Route Handlers (AI SDK UI)',
      link: '/examples/next-pages/basics/streaming-object-generation',
    },
    {
      title:
        'Learn to stream objects in Next.js with Server Actions (AI SDK RSC)',
      link: '/examples/next-app/basics/streaming-object-generation',
    },
  ]}
/>
create-agentic-app/template/docs/technical/betterauth/polar.md (new file, 476 lines)
# Polar

Better Auth Plugin for Payment and Checkouts using Polar

URL: /docs/plugins/polar
Source: https://raw.githubusercontent.com/better-auth/better-auth/refs/heads/main/docs/content/docs/plugins/polar.mdx

[Polar](https://polar.sh) is a developer-first payment infrastructure. Out of the box it provides a lot of developer-first integrations for payments, checkouts and more. This plugin helps you integrate Polar with Better Auth to make your auth + payments flow seamless.

<Callout>
  This plugin is maintained by the Polar team. For bugs, issues or feature requests,
  please visit the [Polar GitHub
  repo](https://github.com/polarsource/polar-adapters).
</Callout>

## Features

- Checkout Integration
- Customer Portal
- Automatic Customer creation on signup
- Event Ingestion & Customer Meters for flexible Usage Based Billing
- Handle Polar Webhooks securely with signature verification
- Reference System to associate purchases with organizations

## Installation

```bash
pnpm add better-auth @polar-sh/better-auth @polar-sh/sdk
```

## Preparation

Go to your Polar Organization Settings and create an Organization Access Token. Add it to your environment.

```bash
# .env
POLAR_ACCESS_TOKEN=...
```

### Configuring BetterAuth Server

The Polar plugin comes with a handful of additional plugins which add functionality to your stack.

- Checkout - Enables a seamless checkout integration
- Portal - Makes it possible for your customers to manage their orders, subscriptions & granted benefits
- Usage - Simple extension for listing customer meters & ingesting events for Usage Based Billing
- Webhooks - Listen for relevant Polar webhooks

```typescript
import { betterAuth } from "better-auth";
import { polar, checkout, portal, usage, webhooks } from "@polar-sh/better-auth";
import { Polar } from "@polar-sh/sdk";

const polarClient = new Polar({
  accessToken: process.env.POLAR_ACCESS_TOKEN,
  // Use 'sandbox' if you're using the Polar Sandbox environment
  // Remember that access tokens, products, etc. are completely separated between environments.
  // Access tokens obtained in Production are for instance not usable in the Sandbox environment.
  server: "sandbox",
});

const auth = betterAuth({
  // ... Better Auth config
  plugins: [
    polar({
      client: polarClient,
      createCustomerOnSignUp: true,
      use: [
        checkout({
          products: [
            {
              productId: "123-456-789", // ID of Product from Polar Dashboard
              slug: "pro", // Custom slug for easy reference in Checkout URL, e.g. /checkout/pro
            },
          ],
          successUrl: "/success?checkout_id={CHECKOUT_ID}",
          authenticatedUsersOnly: true,
        }),
        portal(),
        usage(),
        webhooks({
          secret: process.env.POLAR_WEBHOOK_SECRET,
          onCustomerStateChanged: (payload) => {}, // Triggered when anything regarding a customer changes
          onOrderPaid: (payload) => {}, // Triggered when an order was paid (purchase, subscription renewal, etc.)
          // ... Over 25 granular webhook handlers
          onPayload: (payload) => {}, // Catch-all for all events
        }),
      ],
    }),
  ],
});
```

### Configuring BetterAuth Client

You will be using the BetterAuth Client to interact with the Polar functionalities.

```typescript
import { createAuthClient } from "better-auth/react";
import { polarClient } from "@polar-sh/better-auth";

// This is all that is needed
// All Polar plugins, etc. should be attached to the server-side BetterAuth config
export const authClient = createAuthClient({
  plugins: [polarClient()],
});
```

## Configuration Options

```typescript
import { betterAuth } from "better-auth";
import {
  polar,
  checkout,
  portal,
  usage,
  webhooks,
} from "@polar-sh/better-auth";
import { Polar } from "@polar-sh/sdk";

const polarClient = new Polar({
  accessToken: process.env.POLAR_ACCESS_TOKEN,
  // Use 'sandbox' if you're using the Polar Sandbox environment
  // Remember that access tokens, products, etc. are completely separated between environments.
  // Access tokens obtained in Production are for instance not usable in the Sandbox environment.
  server: "sandbox",
});

const auth = betterAuth({
  // ... Better Auth config
  plugins: [
    polar({
      client: polarClient,
      createCustomerOnSignUp: true,
      getCustomerCreateParams: ({ user }, request) => ({
        metadata: {
          myCustomProperty: 123,
        },
      }),
      use: [
        // This is where you add Polar plugins
      ],
    }),
  ],
});
```

### Required Options

- `client`: Polar SDK client instance

### Optional Options

- `createCustomerOnSignUp`: Automatically create a Polar customer when a user signs up
- `getCustomerCreateParams`: Custom function to provide additional customer creation metadata

### Customers

When `createCustomerOnSignUp` is enabled, a new Polar Customer is automatically created when a new User is added in the Better-Auth Database.

All new customers are created with an associated `externalId`, which is the ID of your User in the Database. This allows us to skip any Polar-to-User mapping in your Database.

## Checkout Plugin

To support checkouts in your app, simply pass the Checkout plugin to the use-property.

```typescript
import { polar, checkout } from "@polar-sh/better-auth";

const auth = betterAuth({
  // ... Better Auth config
  plugins: [
    polar({
      // ...
      use: [
        checkout({
          // Optional field - will make it possible to pass a slug to checkout instead of Product ID
          products: [{ productId: "123-456-789", slug: "pro" }],
          // Relative URL to return to when checkout is successfully completed
          successUrl: "/success?checkout_id={CHECKOUT_ID}",
          // Whether you want to allow unauthenticated checkout sessions or not
          authenticatedUsersOnly: true,
        }),
      ],
    }),
  ],
});
```

When checkouts are enabled, you're able to initialize Checkout Sessions using the checkout method on the BetterAuth Client. This will redirect the user to the Product Checkout.

```typescript
await authClient.checkout({
  // Any Polar Product ID can be passed here
  products: ["e651f46d-ac20-4f26-b769-ad088b123df2"],
  // Or, if you set up "products" in the Checkout Config, you can pass the slug
  slug: "pro",
});
```

Checkouts will automatically carry the authenticated User as the customer to the checkout, and the email address will be locked in.

If `authenticatedUsersOnly` is `false`, it will be possible to trigger checkout sessions without any associated customer.

### Organization Support

This plugin supports the Organization plugin. If you pass the organization ID to the Checkout `referenceId`, you will be able to keep track of purchases made by organization members.

```typescript
const organizationId = (await authClient.organization.list())?.data?.[0]?.id;

await authClient.checkout({
  // Any Polar Product ID can be passed here
  products: ["e651f46d-ac20-4f26-b769-ad088b123df2"],
  // Or, if you set up "products" in the Checkout Config, you can pass the slug
  slug: "pro",
  // Reference ID will be saved as `referenceId` in the metadata of the checkout, order & subscription object
  referenceId: organizationId,
});
```

## Portal Plugin

A plugin which enables customers to manage their purchases, orders and subscriptions.

```typescript
import { polar, checkout, portal } from "@polar-sh/better-auth";

const auth = betterAuth({
  // ... Better Auth config
  plugins: [
    polar({
      // ...
      use: [
        checkout(/* ... */),
        portal(),
      ],
    }),
  ],
});
```

The portal plugin gives the BetterAuth Client a set of customer management methods, scoped under `authClient.customer`.

### Customer Portal Management

The following method will redirect the user to the Polar Customer Portal, where they can see orders, purchases, subscriptions, benefits, etc.

```typescript
await authClient.customer.portal();
```

### Customer State

The portal plugin also adds a convenient state method for retrieving the general Customer State.

```typescript
const { data: customerState } = await authClient.customer.state();
```

The customer state object contains:

- All the data about the customer.
- The list of their active subscriptions
  - Note: This does not include subscriptions made by a parent organization. See the subscription list method below for more information.
- The list of their granted benefits.
- The list of their active meters, with their current balance.

Thus, with that single object, you have all the required information to check if you should provision access to your service or not.

[You can learn more about the Polar Customer State in the Polar Docs](https://docs.polar.sh/integrate/customer-state).
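For example, an access check based on the customer state could be sketched roughly like this (the exact property names on the state object are an assumption here - verify them against the Polar customer state docs linked above):

```typescript
// Rough sketch only: confirm the real field names in the Polar customer state docs.
const { data: customerState } = await authClient.customer.state();

// Assumption: the state object exposes the customer's active subscriptions as a list.
const hasActiveSubscription = (customerState?.activeSubscriptions?.length ?? 0) > 0;

if (!hasActiveSubscription) {
  // e.g. redirect to the pricing page or hide premium features
}
```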
|
||||
|
||||
### Benefits, Orders & Subscriptions
|
||||
|
||||
The portal plugin adds 3 convenient methods for listing benefits, orders & subscriptions relevant to the authenticated user/customer.
|
||||
|
||||
[All of these methods use the Polar CustomerPortal APIs](https://docs.polar.sh/api-reference/customer-portal)
|
||||
|
||||
#### Benefits
|
||||
|
||||
This method only lists granted benefits for the authenticated user/customer.
|
||||
|
||||
```typescript
|
||||
const { data: benefits } = await authClient.customer.benefits.list({
|
||||
query: {
|
||||
page: 1,
|
||||
limit: 10,
|
||||
},
|
||||
});
|
||||
```
|
||||
|
||||
#### Orders
|
||||
|
||||
This method lists orders like purchases and subscription renewals for the authenticated user/customer.
|
||||
|
||||
```typescript
|
||||
const { data: orders } = await authClient.customer.orders.list({
|
||||
query: {
|
||||
page: 1,
|
||||
limit: 10,
|
||||
productBillingType: "one_time", // or 'recurring'
|
||||
},
|
||||
});
|
||||
```
|
||||
|
||||
#### Subscriptions
|
||||
|
||||
This method lists the subscriptions associated with authenticated user/customer.
|
||||
|
||||
```typescript
|
||||
const { data: subscriptions } = await authClient.customer.subscriptions.list({
|
||||
query: {
|
||||
page: 1,
|
||||
limit: 10,
|
||||
active: true,
|
||||
},
|
||||
});
|
||||
```

**Important** - Organization Support

This will **not** return subscriptions made by a parent organization to the authenticated user.

However, you can pass a `referenceId` to this method. It will then return all subscriptions associated with that `referenceId` instead of the subscriptions associated with the user.

So, to figure out whether a user should have access, pass the user's organization ID and check whether that organization has an active subscription.

```typescript
const organizationId = (await authClient.organization.list())?.data?.[0]?.id;

const { data: subscriptions } = await authClient.customer.subscriptions.list({
  query: {
    page: 1,
    limit: 10,
    active: true,
    referenceId: organizationId,
  },
});

const userShouldHaveAccess = subscriptions?.some(
  // Replace with your own check, e.g. matching the subscription's product.
  (sub) => sub.productId === "your-product-id", // hypothetical product ID
);
```

## Usage Plugin

A simple plugin for Usage Based Billing.

```typescript
import { polar, checkout, portal, usage } from "@polar-sh/better-auth";

const auth = betterAuth({
  // ... Better Auth config
  plugins: [
    polar({
      // ... Polar plugin config
      use: [
        checkout(...),
        portal(),
        usage()
      ],
    })
  ]
});
```

### Event Ingestion

Polar's Usage Based Billing builds entirely on event ingestion. Ingest events from your application, create Meters to represent that usage, and add metered prices to Products to charge for it.

[Learn more about Usage Based Billing in the Polar Docs.](https://docs.polar.sh/features/usage-based-billing/introduction)

```typescript
const { data: ingested } = await authClient.usage.ingest({
  event: "file-uploads",
  metadata: {
    uploadedFiles: 12,
  },
});
```

The authenticated user is automatically associated with the ingested event.
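
In practice you would typically wrap ingestion in a small helper so a tracking failure does not block the user-facing action. A minimal sketch; the event name, metadata keys, import path, and error handling are illustrative:

```typescript
import { authClient } from "@/lib/auth-client"; // assumed path to your Better Auth client

// Hedged sketch: fire-and-forget usage tracking for file uploads.
// Match the event name and metadata keys to the Meter you define in Polar.
export async function trackFileUploads(count: number) {
  const { data, error } = await authClient.usage.ingest({
    event: "file-uploads",
    metadata: { uploadedFiles: count },
  });

  if (error) {
    // Don't block the upload itself if usage tracking fails; just log it.
    console.error("Failed to ingest usage event", error);
  }

  return data;
}
```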

### Customer Meters

A simple method for listing the authenticated user's Usage Meters, or as we call them, Customer Meters.

A Customer Meter contains all the information about the customer's consumption on your defined meters:

- Customer Information
- Meter Information
- Customer Meter Information
  - Consumed Units
  - Credited Units
  - Balance

```typescript
const { data: customerMeters } = await authClient.usage.meters.list({
  query: {
    page: 1,
    limit: 10,
  },
});
```

## Webhooks Plugin

The Webhooks plugin can be used to capture incoming events from your Polar organization.

```typescript
import { polar, webhooks } from "@polar-sh/better-auth";

const auth = betterAuth({
  // ... Better Auth config
  plugins: [
    polar({
      // ... Polar plugin config
      use: [
        webhooks({
          secret: process.env.POLAR_WEBHOOK_SECRET,
          // Triggered when anything regarding a customer changes
          onCustomerStateChanged: (payload) => {},
          // Triggered when an order was paid (purchase, subscription renewal, etc.)
          onOrderPaid: (payload) => {},
          // ... over 25 granular webhook handlers
          // Catch-all for all events
          onPayload: (payload) => {},
        })
      ],
    })
  ]
});
```

Configure a webhook endpoint on your Polar Organization Settings page. The endpoint is served at `/polar/webhooks` relative to your Better Auth handler, so with the default `/api/auth` base path the full URL is typically `https://<your-domain>/api/auth/polar/webhooks`.

Add the webhook secret to your environment:

```bash
# .env
POLAR_WEBHOOK_SECRET=...
```

The plugin supports handlers for all Polar webhook events:

- `onPayload` - Catch-all handler for any incoming Webhook event
- `onCheckoutCreated` - Triggered when a checkout is created
- `onCheckoutUpdated` - Triggered when a checkout is updated
- `onOrderCreated` - Triggered when an order is created
- `onOrderPaid` - Triggered when an order is paid
- `onOrderRefunded` - Triggered when an order is refunded
- `onRefundCreated` - Triggered when a refund is created
- `onRefundUpdated` - Triggered when a refund is updated
- `onSubscriptionCreated` - Triggered when a subscription is created
- `onSubscriptionUpdated` - Triggered when a subscription is updated
- `onSubscriptionActive` - Triggered when a subscription becomes active
- `onSubscriptionCanceled` - Triggered when a subscription is canceled
- `onSubscriptionRevoked` - Triggered when a subscription is revoked
- `onSubscriptionUncanceled` - Triggered when a subscription cancellation is reversed
- `onProductCreated` - Triggered when a product is created
- `onProductUpdated` - Triggered when a product is updated
- `onOrganizationUpdated` - Triggered when an organization is updated
- `onBenefitCreated` - Triggered when a benefit is created
- `onBenefitUpdated` - Triggered when a benefit is updated
- `onBenefitGrantCreated` - Triggered when a benefit grant is created
- `onBenefitGrantUpdated` - Triggered when a benefit grant is updated
- `onBenefitGrantRevoked` - Triggered when a benefit grant is revoked
- `onCustomerCreated` - Triggered when a customer is created
- `onCustomerUpdated` - Triggered when a customer is updated
- `onCustomerDeleted` - Triggered when a customer is deleted
- `onCustomerStateChanged` - Triggered when anything about a customer's state changes (subscriptions, benefits, etc.)
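
For example, a hedged sketch of an `onOrderPaid` handler. The logging and the `provisionAccessForOrder` helper are illustrative placeholders, and the payload shape follows Polar's webhook schema:

```typescript
import { webhooks } from "@polar-sh/better-auth";

// Drop this into the `use: [...]` array of the polar() plugin shown above.
const polarWebhooks = webhooks({
  secret: process.env.POLAR_WEBHOOK_SECRET!,
  onOrderPaid: async (payload) => {
    // Illustrative only: inspect the event and run your own provisioning logic here.
    console.log("Order paid webhook received", payload);
    // e.g. await provisionAccessForOrder(payload); // hypothetical helper in your app
  },
});
```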

create-agentic-app/template/docs/technical/react-markdown.md

# react-markdown

React component to render markdown.

## Contents

- [Install](#install)
- [Use](#use)
- [API](#api)
- [Examples](#examples)
- [Plugins](#plugins)

## What is this?

This package is a React component that can be given a string of markdown that it'll safely render to React elements. You can pass plugins to change how markdown is transformed and pass components that will be used instead of normal HTML elements.

## Install

```sh
npm install react-markdown
```

## Use

Basic usage:

```js
import Markdown from "react-markdown";

const markdown = "# Hi, *Pluto*!";

<Markdown>{markdown}</Markdown>
```

With plugins:

```js
import Markdown from "react-markdown";
import remarkGfm from "remark-gfm";

const markdown = `Just a link: www.nasa.gov.`;

<Markdown remarkPlugins={[remarkGfm]}>{markdown}</Markdown>
```

## API

Key props:

- `children` — markdown string to render
- `remarkPlugins` — array of remark plugins
- `rehypePlugins` — array of rehype plugins
- `components` — object mapping HTML tags to React components
- `allowedElements` — array of allowed HTML tags
- `disallowedElements` — array of disallowed HTML tags
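
For instance, `allowedElements` (together with `unwrapDisallowed`) can restrict output to a small subset of elements. A minimal sketch; the component name and the element whitelist are just examples:

```tsx
import Markdown from "react-markdown";

// Only render a small whitelist of elements; `unwrapDisallowed` keeps the text
// content of anything else instead of dropping it entirely.
export function RestrictedMarkdown({ children }: { children: string }) {
  return (
    <Markdown
      allowedElements={["p", "a", "strong", "em", "ul", "ol", "li", "code"]}
      unwrapDisallowed
    >
      {children}
    </Markdown>
  );
}
```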

## Examples

### Using GitHub Flavored Markdown

```js
import Markdown from "react-markdown";
import remarkGfm from "remark-gfm";

const markdown = `
* [x] todo
* [ ] done

| Column 1 | Column 2 |
|----------|----------|
| Cell 1 | Cell 2 |
`;

<Markdown remarkPlugins={[remarkGfm]}>{markdown}</Markdown>
```

### Custom Components (Syntax Highlighting)

```js
import Markdown from "react-markdown";
import { Prism as SyntaxHighlighter } from "react-syntax-highlighter";
import { dark } from "react-syntax-highlighter/dist/esm/styles/prism";

const markdown = `
\`\`\`js
console.log('Hello, world!');
\`\`\`
`;

<Markdown
  components={{
    code(props) {
      const { children, className, ...rest } = props;
      const match = /language-(\w+)/.exec(className || "");
      return match ? (
        <SyntaxHighlighter
          {...rest}
          PreTag="div"
          children={String(children).replace(/\n$/, "")}
          language={match[1]}
          style={dark}
        />
      ) : (
        <code {...rest} className={className}>
          {children}
        </code>
      );
    },
  }}
>
  {markdown}
</Markdown>
```

## Plugins

Common plugins:

- `remark-gfm` — GitHub Flavored Markdown (tables, task lists, strikethrough)
- `remark-math` — Math notation support
- `rehype-katex` — Render math with KaTeX
- `rehype-highlight` — Syntax highlighting
- `rehype-raw` — Allow raw HTML (use carefully for security)
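
For example, `remark-math` and `rehype-katex` are typically combined to render math. A minimal sketch; install both packages and include the KaTeX stylesheet:

```tsx
import Markdown from "react-markdown";
import remarkMath from "remark-math";
import rehypeKatex from "rehype-katex";
import "katex/dist/katex.min.css"; // KaTeX styles for the rendered output

const markdown = "The lift coefficient is $C_L = \\frac{2L}{\\rho v^2 S}$.";

<Markdown remarkPlugins={[remarkMath]} rehypePlugins={[rehypeKatex]}>
  {markdown}
</Markdown>;
```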