feat: implement streaming conversation generation with real-time message display
- Replace generateObject with streamObject for real-time conversation streaming
- Add progressive message loading with scroll-to-load functionality
- Implement proper state reset when new URLs are submitted
- Add enhanced logging for debugging streaming issues
- Include structured data documentation for AI SDK streaming
- Update UI to show loading state and streaming progress
- Fix message visibility management with initial 10 message limit

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit is contained in:
410
docs/technical/structured-data.md
Normal file
@@ -0,0 +1,410 @@
# Generating Structured Data

While text generation can be useful, your use case will likely call for generating structured data.
For example, you might want to extract information from text, classify data, or generate synthetic data.

Many language models are capable of generating structured data, often exposed through "JSON modes" or "tools".
However, you need to manually provide schemas and then validate the generated data, as LLMs can produce incorrect or incomplete structured data.

The AI SDK standardises structured object generation across model providers
with the [`generateObject`](/docs/reference/ai-sdk-core/generate-object)
and [`streamObject`](/docs/reference/ai-sdk-core/stream-object) functions.
You can use both functions with different output strategies, e.g. `array`, `object`, `enum`, or `no-schema`,
and with different generation modes, e.g. `auto`, `tool`, or `json`.
You can use [Zod schemas](/docs/reference/ai-sdk-core/zod-schema), [Valibot](/docs/reference/ai-sdk-core/valibot-schema), or [JSON schemas](/docs/reference/ai-sdk-core/json-schema) to specify the shape of the data that you want,
and the AI model will generate data that conforms to that structure.

<Note>
  You can pass Zod objects directly to the AI SDK functions or use the
  `zodSchema` helper function.
</Note>
## Generate Object

The `generateObject` function generates structured data from a prompt.
The schema is also used to validate the generated data, ensuring type safety and correctness.

```ts
import { generateObject } from 'ai';
import { z } from 'zod';

const { object } = await generateObject({
  model: 'openai/gpt-4.1',
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.object({ name: z.string(), amount: z.string() })),
      steps: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a lasagna recipe.',
});
```

<Note>
  See `generateObject` in action with [these examples](#more-examples).
</Note>
### Accessing response headers & body

Sometimes you need access to the full response from the model provider,
e.g. to access some provider-specific headers or body content.

You can access the raw response headers and body using the `response` property:

```ts
import { generateObject } from 'ai';

const result = await generateObject({
  // ...
});

console.log(JSON.stringify(result.response.headers, null, 2));
console.log(JSON.stringify(result.response.body, null, 2));
```
## Stream Object

Given the added complexity of returning structured data, model response time can be unacceptable for your interactive use case.
With the [`streamObject`](/docs/reference/ai-sdk-core/stream-object) function, you can stream the model's response as it is generated.

```ts
import { streamObject } from 'ai';

const { partialObjectStream } = streamObject({
  // ...
});

// use partialObjectStream as an async iterable
for await (const partialObject of partialObjectStream) {
  console.log(partialObject);
}
```

You can use `streamObject` to stream generated UIs in combination with React Server Components (see [Generative UI](../ai-sdk-rsc)) or the [`useObject`](/docs/reference/ai-sdk-ui/use-object) hook.

<Note>See `streamObject` in action with [these examples](#more-examples).</Note>
### `onError` callback

`streamObject` immediately starts streaming.
Errors become part of the stream and are not thrown, to prevent e.g. servers from crashing.

To log errors, you can provide an `onError` callback that is triggered when an error occurs.

```tsx highlight="5-7"
import { streamObject } from 'ai';

const result = streamObject({
  // ...
  onError({ error }) {
    console.error(error); // your error logging logic here
  },
});
```
## Output Strategy

You can use both functions with different output strategies, e.g. `array`, `object`, `enum`, or `no-schema`.

### Object

The default output strategy is `object`, which returns the generated data as an object.
You don't need to specify the output strategy if you want to use the default.
### Array

If you want to generate an array of objects, you can set the output strategy to `array`.
When you use the `array` output strategy, the schema specifies the shape of an array element.
With `streamObject`, you can also stream the generated array elements using `elementStream`.

```ts highlight="7,18"
import { openai } from '@ai-sdk/openai';
import { streamObject } from 'ai';
import { z } from 'zod';

const { elementStream } = streamObject({
  model: openai('gpt-4.1'),
  output: 'array',
  schema: z.object({
    name: z.string(),
    class: z
      .string()
      .describe('Character class, e.g. warrior, mage, or thief.'),
    description: z.string(),
  }),
  prompt: 'Generate 3 hero descriptions for a fantasy role playing game.',
});

for await (const hero of elementStream) {
  console.log(hero);
}
```
### Enum

If you want to generate a specific enum value, e.g. for classification tasks,
you can set the output strategy to `enum`
and provide a list of possible values in the `enum` parameter.

<Note>Enum output is only available with `generateObject`.</Note>

```ts highlight="5-6"
import { generateObject } from 'ai';

const { object } = await generateObject({
  model: 'openai/gpt-4.1',
  output: 'enum',
  enum: ['action', 'comedy', 'drama', 'horror', 'sci-fi'],
  prompt:
    'Classify the genre of this movie plot: ' +
    '"A group of astronauts travel through a wormhole in search of a ' +
    'new habitable planet for humanity."',
});
```
### No Schema

In some cases, you might not want to use a schema,
for example when the data is a dynamic user request.
You can use the `output` setting to set the output format to `no-schema` in those cases
and omit the schema parameter.

```ts highlight="6"
import { openai } from '@ai-sdk/openai';
import { generateObject } from 'ai';

const { object } = await generateObject({
  model: openai('gpt-4.1'),
  output: 'no-schema',
  prompt: 'Generate a lasagna recipe.',
});
```
## Schema Name and Description

You can optionally specify a name and description for the schema. These are used by some providers for additional LLM guidance, e.g. via tool or schema name.

```ts highlight="6-7"
import { generateObject } from 'ai';
import { z } from 'zod';

const { object } = await generateObject({
  model: 'openai/gpt-4.1',
  schemaName: 'Recipe',
  schemaDescription: 'A recipe for a dish.',
  schema: z.object({
    name: z.string(),
    ingredients: z.array(z.object({ name: z.string(), amount: z.string() })),
    steps: z.array(z.string()),
  }),
  prompt: 'Generate a lasagna recipe.',
});
```
## Accessing Reasoning

You can access the reasoning used by the language model to generate the object via the `reasoning` property on the result. This property contains a string with the model's thought process, if available.

```ts
import { openai, OpenAIResponsesProviderOptions } from '@ai-sdk/openai';
import { generateObject } from 'ai';
import { z } from 'zod';

const result = await generateObject({
  model: openai('gpt-5'),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(
        z.object({
          name: z.string(),
          amount: z.string(),
        }),
      ),
      steps: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a lasagna recipe.',
  providerOptions: {
    openai: {
      strictJsonSchema: true,
      reasoningSummary: 'detailed',
    } satisfies OpenAIResponsesProviderOptions,
  },
});

console.log(result.reasoning);
```
## Error Handling

When `generateObject` cannot generate a valid object, it throws an [`AI_NoObjectGeneratedError`](/docs/reference/ai-sdk-errors/ai-no-object-generated-error).

This error occurs when the AI provider fails to generate a parsable object that conforms to the schema.
It can arise due to the following reasons:

- The model failed to generate a response.
- The model generated a response that could not be parsed.
- The model generated a response that could not be validated against the schema.

The error preserves the following information to help you log the issue:

- `text`: The text that was generated by the model. This can be the raw text or the tool call text, depending on the object generation mode.
- `response`: Metadata about the language model response, including response id, timestamp, and model.
- `usage`: Request token usage.
- `cause`: The cause of the error (e.g. a JSON parsing error). You can use this for more detailed error handling.

```ts
import { generateObject, NoObjectGeneratedError } from 'ai';

try {
  await generateObject({ model, schema, prompt });
} catch (error) {
  if (NoObjectGeneratedError.isInstance(error)) {
    console.log('NoObjectGeneratedError');
    console.log('Cause:', error.cause);
    console.log('Text:', error.text);
    console.log('Response:', error.response);
    console.log('Usage:', error.usage);
  }
}
```
## Repairing Invalid or Malformed JSON

<Note type="warning">
  The `repairText` function is experimental and may change in the future.
</Note>

Sometimes the model will generate invalid or malformed JSON.
You can use the `repairText` function to attempt to repair the JSON.

It receives the error, either a `JSONParseError` or a `TypeValidationError`,
and the text that was generated by the model.
You can then attempt to repair the text and return the repaired text.

```ts highlight="7-10"
import { generateObject } from 'ai';

const { object } = await generateObject({
  model,
  schema,
  prompt,
  experimental_repairText: async ({ text, error }) => {
    // example: add a closing brace to the text
    return text + '}';
  },
});
```
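The repair callback itself is plain code: it receives the model's text and returns a fixed string. As a self-contained illustration of the brace-appending idea above — a naive sketch, not the AI SDK's implementation — you could write:

```ts
// Naive JSON repair: if parsing fails, append closing braces until the
// text parses or we give up. Purely illustrative; real repair logic
// should inspect the parse error instead of guessing.
function naiveRepairJson(text: string, maxAttempts = 3): string | null {
  let candidate = text;
  for (let attempt = 0; attempt <= maxAttempts; attempt++) {
    try {
      JSON.parse(candidate);
      return candidate;
    } catch {
      candidate += '}';
    }
  }
  return null;
}
```

This only handles one failure mode (truncated objects); anything else returns `null` so the caller can surface the original error.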
## Structured outputs with `generateText` and `streamText`

You can generate structured data with `generateText` and `streamText` by using the `experimental_output` setting.

<Note>
  Some models, e.g. those by OpenAI, support structured outputs and tool calling
  at the same time. This is only possible with `generateText` and `streamText`.
</Note>

<Note type="warning">
  Structured output generation with `generateText` and `streamText` is
  experimental and may change in the future.
</Note>

### `generateText`

```ts highlight="2,4-18"
// experimental_output is a structured object that matches the schema:
const { experimental_output } = await generateText({
  // ...
  experimental_output: Output.object({
    schema: z.object({
      name: z.string(),
      age: z.number().nullable().describe('Age of the person.'),
      contact: z.object({
        type: z.literal('email'),
        value: z.string(),
      }),
      occupation: z.object({
        type: z.literal('employed'),
        company: z.string(),
        position: z.string(),
      }),
    }),
  }),
  prompt: 'Generate an example person for testing.',
});
```

### `streamText`

```ts highlight="2,4-18"
// experimental_partialOutputStream contains generated partial objects:
const { experimental_partialOutputStream } = await streamText({
  // ...
  experimental_output: Output.object({
    schema: z.object({
      name: z.string(),
      age: z.number().nullable().describe('Age of the person.'),
      contact: z.object({
        type: z.literal('email'),
        value: z.string(),
      }),
      occupation: z.object({
        type: z.literal('employed'),
        company: z.string(),
        position: z.string(),
      }),
    }),
  }),
  prompt: 'Generate an example person for testing.',
});
```
## More Examples

You can see `generateObject` and `streamObject` in action using various frameworks in the following examples:

### `generateObject`

<ExampleLinks
  examples={[
    {
      title: 'Learn to generate objects in Node.js',
      link: '/examples/node/generating-structured-data/generate-object',
    },
    {
      title:
        'Learn to generate objects in Next.js with Route Handlers (AI SDK UI)',
      link: '/examples/next-pages/basics/generating-object',
    },
    {
      title:
        'Learn to generate objects in Next.js with Server Actions (AI SDK RSC)',
      link: '/examples/next-app/basics/generating-object',
    },
  ]}
/>

### `streamObject`

<ExampleLinks
  examples={[
    {
      title: 'Learn to stream objects in Node.js',
      link: '/examples/node/streaming-structured-data/stream-object',
    },
    {
      title:
        'Learn to stream objects in Next.js with Route Handlers (AI SDK UI)',
      link: '/examples/next-pages/basics/streaming-object-generation',
    },
    {
      title:
        'Learn to stream objects in Next.js with Server Actions (AI SDK RSC)',
      link: '/examples/next-app/basics/streaming-object-generation',
    },
  ]}
/>
```diff
@@ -1,5 +1,5 @@
 import { NextRequest, NextResponse } from 'next/server';
-import { generateObject } from 'ai';
+import { streamObject } from 'ai';
 import { mistral } from '@ai-sdk/mistral';
 import { z } from 'zod';
 
@@ -10,11 +10,6 @@ const messageSchema = z.object({
   timestamp: z.string(),
 });
 
-const conversationSchema = z.object({
-  messages: z.array(messageSchema),
-  detectedLanguage: z.string(),
-});
-
 export async function POST(request: NextRequest) {
   try {
     const { content, title, url } = await request.json();
@@ -26,20 +21,21 @@ export async function POST(request: NextRequest) {
       );
     }
 
-    console.log('Generating conversation for:', { title, url, contentLength: content.length });
+    console.log('Generating streaming conversation for:', { title, url, contentLength: content.length, contentPreview: content.substring(0, 200) + '...' });
 
-    // Generate podcast conversation using Mistral
-    const { object } = await generateObject({
+    // Stream podcast conversation using Mistral
+    const result = streamObject({
       model: mistral('mistral-medium-latest'),
-      schema: conversationSchema,
-      schemaName: 'PodcastConversation',
-      schemaDescription: 'A podcast-style conversation between two hosts discussing scraped content',
+      output: 'array',
+      schema: messageSchema,
+      schemaName: 'PodcastMessage',
+      schemaDescription: 'A single message in a podcast-style conversation between two hosts',
       prompt: `You are generating a podcast conversation between two hosts discussing the following scraped content from "${title}" at ${url}.
 
 CONTENT:
 ${content}
 
-Generate a natural, engaging podcast conversation with exactly 10 turns (5 per host). The conversation should:
+Generate a natural, engaging podcast conversation with at least 20 turns (10 per host). The conversation should:
 
 1. HOST 1 PERSONALITY: Bubbly, excited, enthusiastic, and optimistic. Uses expressions like "Wow!", "Amazing!", "That's so cool!". Often laughs [giggles] and shows genuine excitement.
 
@@ -55,20 +51,55 @@ Generate a natural, engaging podcast conversation with exactly 10 turns (5 per h
 
 7. The conversation should flow naturally and cover the main points of the content
 
-Format the response as a structured object with messages array and detected language.`,
+8. Create a substantial conversation that thoroughly explores the content from multiple angles
+
+Generate the messages one by one as an array. Each message should have:
+- id: sequential number as string
+- speaker: either "host1" or "host2" alternating
+- text: the message content with emotional expressions in brackets
+- timestamp: in MM:SS format`,
       temperature: 0.7,
-      maxTokens: 2000,
+      maxTokens: 4000,
+      onError({ error }) {
+        console.error('Streaming error:', error);
+      },
     });
 
-    console.log('Conversation generated successfully');
+    // Create streaming response with Server-Sent Events
+    const stream = new ReadableStream({
+      async start(controller) {
+        try {
+          console.log('Setting up element stream...');
+          const { elementStream } = result;
+          console.log('Element stream created:', !!elementStream);
+
+          let messageCount = 0;
+          for await (const message of elementStream) {
+            messageCount++;
+            console.log('Streaming message:', messageCount, message);
+            const data = `data: ${JSON.stringify(message)}\n\n`;
+            controller.enqueue(new TextEncoder().encode(data));
+          }
+
+          console.log('Stream completed with', messageCount, 'messages');
+
+          // Send completion signal
+          controller.enqueue(new TextEncoder().encode('data: [DONE]\n\n'));
+          controller.close();
+        } catch (error) {
+          console.error('Streaming error:', error);
+          controller.error(error);
+        }
+      },
+    });
 
-    return NextResponse.json({
-      success: true,
-      data: {
-        messages: object.messages,
-        detectedLanguage: object.detectedLanguage,
-        generatedAt: new Date().toISOString()
-      }
-    });
+    return new Response(stream, {
+      headers: {
+        'Content-Type': 'text/event-stream',
+        'Cache-Control': 'no-cache',
+        'Connection': 'keep-alive',
+        'Access-Control-Allow-Origin': '*',
+      },
+    });
 
   } catch (error) {
```
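The route emits one Server-Sent Events frame per streamed array element: a `data:` line carrying the JSON payload, a blank line terminating the event, and finally a `[DONE]` sentinel. That framing can be factored into a small helper — a sketch with illustrative names, not code from this commit:

```ts
// Build one SSE frame per payload; the blank line terminates the event.
function toSseFrame(payload: unknown): string {
  return `data: ${JSON.stringify(payload)}\n\n`;
}

// Sentinel frame signalling the end of the stream to the client.
const DONE_FRAME = 'data: [DONE]\n\n';
```

Keeping the framing in one place makes it harder for the producer and the client-side parser to drift apart.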
```diff
@@ -20,16 +20,17 @@ export async function POST(request: NextRequest) {
     console.log('Attempting to scrape URL:', url);
 
     // Scrape the website
+    console.log('Attempting to scrape with Firecrawl...');
     const result = await firecrawl.scrape(url, {
       formats: ['markdown', 'html']
     });
 
-    console.log('Firecrawl result received');
+    console.log('Firecrawl result received:', JSON.stringify(result, null, 2));
 
     // Check if we have the expected data structure
     if (!result || !result.markdown) {
       console.error('Invalid Firecrawl response:', result);
-      throw new Error('Invalid response from Firecrawl API');
+      throw new Error(`Invalid response from Firecrawl API: ${JSON.stringify(result)}`);
     }
 
     // Create a truncated excerpt for display
```
163
src/app/page.tsx

```diff
@@ -1,6 +1,6 @@
 'use client';
 
-import { useState } from 'react';
+import { useState, useEffect, useRef, useCallback } from 'react';
 import { Button } from '@/components/ui/button';
 import { Input } from '@/components/ui/input';
 import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card';
```
```diff
@@ -18,6 +18,8 @@ export default function Home() {
   const [url, setUrl] = useState('');
   const [isLoading, setIsLoading] = useState(false);
   const [messages, setMessages] = useState<Message[]>([]);
+  const [visibleMessages, setVisibleMessages] = useState<Message[]>([]);
+  const scrollContainerRef = useRef<HTMLDivElement>(null);
   const [isPlaying, setIsPlaying] = useState(false);
   const [currentTime, setCurrentTime] = useState(0);
   const [duration, setDuration] = useState(0);
```
```diff
@@ -33,6 +35,11 @@ export default function Home() {
     e.preventDefault();
     setIsLoading(true);
 
+    // Clear existing conversation before starting new one
+    setMessages([]);
+    setVisibleMessages([]);
+    setDuration(0);
+
     try {
       // Call Firecrawl API to scrape the website
       const response = await fetch('/api/scrape', {
```
```diff
@@ -68,12 +75,19 @@ export default function Home() {
         text: 'Sorry, I encountered an error while trying to scrape the website. Please check the URL and try again.',
         timestamp: '0:15'
       }]);
+      setVisibleMessages([{
+        id: '1',
+        speaker: 'host1',
+        text: 'Sorry, I encountered an error while trying to scrape the website. Please check the URL and try again.',
+        timestamp: '0:15'
+      }]);
     } finally {
       setIsLoading(false);
     }
   };
 
   const generateConversation = async (content: string, title: string, url: string) => {
+    console.log('Starting conversation generation...');
     try {
       const response = await fetch('/api/generate-conversation', {
         method: 'POST',
```
```diff
@@ -83,18 +97,73 @@ export default function Home() {
         body: JSON.stringify({ content, title, url }),
       });
 
-      const result = await response.json();
+      console.log('Conversation API response status:', response.status);
 
-      if (result.success) {
-        setMessages(result.data.messages);
-        // Calculate duration based on conversation length
-        const avgTimePerMessage = 25; // seconds
-        const totalDuration = result.data.messages.length * avgTimePerMessage;
-        setDuration(totalDuration);
-      } else {
-        throw new Error(result.error || 'Failed to generate conversation');
+      if (!response.ok) {
+        throw new Error('Failed to generate conversation');
       }
 
+      const reader = response.body?.getReader();
+      const decoder = new TextDecoder();
+
+      if (!reader) {
+        throw new Error('No response stream available');
+      }
+
+      console.log('Starting to read stream...');
+      let messageCount = 0;
+
+      while (true) {
+        const { done, value } = await reader.read();
+
+        if (done) {
+          console.log('Stream reading completed. Total messages:', messageCount);
+          break;
+        }
+
+        const chunk = decoder.decode(value);
+        const lines = chunk.split('\n');
+
+        for (const line of lines) {
+          if (line.startsWith('data: ')) {
+            const dataStr = line.slice(6);
+
+            if (dataStr === '[DONE]') {
+              continue;
+            }
+
+            try {
+              const message = JSON.parse(dataStr);
+              messageCount++;
+              console.log('Received message:', messageCount, message);
+
+              // Add new message to the conversation
+              setMessages(prev => {
+                const updated = [...prev, message];
+
+                // Update duration
+                const avgTimePerMessage = 25; // seconds
+                const totalDuration = updated.length * avgTimePerMessage;
+                setDuration(totalDuration);
+
+                return updated;
+              });
+
+              // Keep only first 10 messages visible initially
+              setVisibleMessages(prev => {
+                const updated = [...prev, message];
+                return updated.slice(0, 10);
+              });
+
+            } catch (e) {
+              console.error('Error parsing streaming data:', e, dataStr);
+            }
+          }
+        }
+      }
+
+      console.log('Conversation generation completed successfully');
+
     } catch (error) {
       console.error('Conversation generation error:', error);
       setMessages([{
```
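One caveat in the reading loop above: `decoder.decode(value)` chunks are not guaranteed to end on line boundaries, so a `data:` line can be split across two reads and fail to parse. A buffered variant of the parsing step — a sketch, not part of this commit — carries the incomplete tail over to the next chunk:

```ts
// Split an incoming chunk into complete SSE data payloads, carrying any
// incomplete trailing line over in `buffer` for the next call.
function parseSseChunk(
  buffer: string,
  chunk: string,
): { buffer: string; events: string[] } {
  const combined = buffer + chunk;
  const lines = combined.split('\n');
  const rest = lines.pop() ?? ''; // last piece may be incomplete
  const events: string[] = [];
  for (const line of lines) {
    if (line.startsWith('data: ')) {
      const data = line.slice(6);
      if (data !== '[DONE]') events.push(data);
    }
  }
  return { buffer: rest, events };
}
```

The caller keeps the returned `buffer` between reads and only `JSON.parse`s the completed `events`.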
```diff
@@ -103,6 +172,12 @@ export default function Home() {
         text: 'Sorry, I encountered an error while generating the podcast conversation. Please try again.',
         timestamp: '0:15'
       }]);
+      setVisibleMessages([{
+        id: '1',
+        speaker: 'host1',
+        text: 'Sorry, I encountered an error while generating the podcast conversation. Please try again.',
+        timestamp: '0:15'
+      }]);
     }
   };
 
```
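The prompt asks the model for `MM:SS` timestamps, while the fallback messages above hard-code `0:15`. A helper for producing such strings client-side — hypothetical; the commit relies on the model to format them — could look like:

```ts
// Format a duration in seconds as M:SS, e.g. 75 -> "1:15".
function toTimestamp(totalSeconds: number): string {
  const minutes = Math.floor(totalSeconds / 60);
  const seconds = totalSeconds % 60;
  return `${minutes}:${String(seconds).padStart(2, '0')}`;
}
```

Generating timestamps locally (e.g. from the 25-seconds-per-message estimate) would also keep them consistent when the model miscounts.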
```diff
@@ -139,6 +214,32 @@ export default function Home() {
     }
   };
 
+  // Reset visible messages when new conversation is generated
+  useEffect(() => {
+    console.log('Messages updated:', { messagesLength: messages.length, messages: messages });
+    if (messages.length > 0) {
+      setVisibleMessages(messages.slice(0, 10));
+      console.log('Set visible messages to first 10');
+    }
+  }, [messages]);
+
+  const handleScroll = useCallback(() => {
+    const element = scrollContainerRef.current;
+    if (!element) return;
+
+    const { scrollTop, scrollHeight, clientHeight } = element;
+    // More aggressive scroll detection - trigger when user scrolls past 80% of content
+    const scrollPercentage = scrollTop / (scrollHeight - clientHeight);
+    const isNearBottom = scrollPercentage > 0.8 || scrollHeight - scrollTop - clientHeight < 150;
+
+    if (isNearBottom && visibleMessages.length < messages.length) {
+      // Load 5 more messages
+      const nextMessages = messages.slice(visibleMessages.length, visibleMessages.length + 5);
+      setVisibleMessages(prev => [...prev, ...nextMessages]);
+    }
+  }, [visibleMessages.length, messages]);
+
   return (
     <div className="min-h-screen bg-background">
       {/* Header */}
```
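The threshold logic in `handleScroll` is easy to verify when pulled out as a pure function (extracted here as a sketch for testing; the commit keeps it inline in the callback):

```ts
// True when the user has scrolled past 80% of the scrollable content or
// is within 150px of the bottom — the same two conditions as handleScroll.
function isNearBottom(
  scrollTop: number,
  scrollHeight: number,
  clientHeight: number,
): boolean {
  const scrollPercentage = scrollTop / (scrollHeight - clientHeight);
  return scrollPercentage > 0.8 || scrollHeight - scrollTop - clientHeight < 150;
}
```

Note the two conditions overlap for short lists: with little scrollable content, the 150px rule fires almost immediately, which is what makes the detection "aggressive".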
```diff
@@ -227,13 +328,28 @@ export default function Home() {
           <CardHeader>
             <CardTitle>Podcast Conversation</CardTitle>
           </CardHeader>
-          <CardContent className="flex-1 overflow-y-auto space-y-4 p-6">
+          <CardContent
+            ref={scrollContainerRef}
+            className="flex-1 overflow-y-auto space-y-4 p-6"
+            style={{
+              maxHeight: '100%',
+              minHeight: '400px' // Force minimum height to enable scrolling
+            }}
+            onScroll={handleScroll}
+          >
             {messages.length === 0 ? (
               <div className="text-center text-muted-foreground mt-8">
-                <p>Enter a URL and generate a podcast to see the conversation here.</p>
+                {isLoading ? (
+                  <div className="space-y-4">
+                    <div className="animate-spin rounded-full h-8 w-8 border-b-2 border-primary mx-auto"></div>
+                    <p>Generating new conversation...</p>
+                  </div>
+                ) : (
+                  <p>Enter a URL and generate a podcast to see the conversation here.</p>
+                )}
               </div>
             ) : (
-              messages.map((message) => (
+              visibleMessages.map((message) => (
                 <div
                   key={message.id}
                   className={`flex ${message.speaker === 'host1' ? 'justify-start' : 'justify-end'}`}
```
```diff
@@ -256,6 +372,25 @@ export default function Home() {
                 </div>
               ))
             )}
+
+            {/* Scroll indicator for more messages */}
+            {visibleMessages.length < messages.length && (
+              <div className="text-center py-2 space-y-2">
+                <p className="text-sm text-muted-foreground">
+                  Scroll down to load more messages ({visibleMessages.length}/{messages.length} shown)
+                </p>
+                <Button
+                  variant="outline"
+                  size="sm"
+                  onClick={() => {
+                    const nextMessages = messages.slice(visibleMessages.length, visibleMessages.length + 5);
+                    setVisibleMessages(prev => [...prev, ...nextMessages]);
+                  }}
+                >
+                  Load 5 More Messages
+                </Button>
+              </div>
+            )}
           </CardContent>
         </Card>
       </div>
```