Creating bedtime stories with generative AI

Over the past few months, I’ve been experimenting with building AI chat interfaces, looking for a project that felt less like a playground demo and more like a real application. That led to Stories as a Service, a side project that generates personalized bedtime stories on the fly using the Vercel AI SDK. You provide a child’s name, age, and a theme, and the app streams a structured story object in real time, complete with a title, summary, and content, all validated through a Zod schema.

Structured Output Streaming

The interesting technical challenge here wasn't just generating text. Anyone can call an LLM and get back a wall of text. What makes it engaging is watching the story appear word by word while also showing the title and summary as they're being generated.

This is where streamObject from the AI SDK comes in. Instead of streaming plain text, you stream a structured object that gradually fills in as the LLM generates it. The API route looks like this:

import { openai } from '@ai-sdk/openai'
import { streamObject } from 'ai'
import { NextRequest } from 'next/server'

export async function POST(req: NextRequest) {
  // storyId comes from the query string (see the client hook below); the
  // record holding the character, age, and theme was saved earlier, and
  // the Prisma-style lookup here is a sketch of that step
  const storyId = req.nextUrl.searchParams.get('storyId')!
  const story = await db.story.findUniqueOrThrow({
    where: { id: parseInt(storyId) },
  })

  const result = streamObject({
    model: openai('gpt-4o'),
    schema: generatedStorySchema,
    prompt: `Generate a bedtime story about ${story.character} who is ${story.age} years old. The story should be about ${STORY_THEMES[story.theme].title}. It should be fully readable in 35 seconds at a slow pace.`,
    // Fires once after the complete object has been generated
    onFinish: async ({ object }) => {
      await db.story.update({
        where: { id: parseInt(storyId) },
        data: {
          ...object,
          status: 'COMPLETED',
        },
      })
    },
  })

  // Streams the partial object to the client as it's generated
  return result.toTextStreamResponse()
}

The generatedStorySchema is a Zod schema that defines the structure I want back:

import { z } from 'zod'

export const generatedStorySchema = z.object({
  content: z.string(),
  title: z.string().min(1).max(128),
  summary: z
    .string()
    .min(1)
    .max(256)
    .describe('A short summary of the story no longer than 256 characters'),
})

What's neat about this is that the schema doesn't just validate the output; it also instructs the LLM on what structure to follow. The AI SDK handles the back-and-forth with the model to ensure the response matches your schema.
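If you're curious what the model is actually steered with, you can inspect the JSON Schema the SDK derives from the Zod schema. A quick sketch using the zodSchema helper from the ai package (the './schema' import path is assumed):

import { zodSchema } from 'ai'
import { generatedStorySchema } from './schema' // assumed path

// Prints the JSON Schema handed to the model, including the
// .describe() text on summary
console.log(JSON.stringify(zodSchema(generatedStorySchema).jsonSchema, null, 2))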

The Client Side

On the frontend, I use experimental_useObject to consume the stream. This hook deserializes the partial objects as they arrive and gives you a reactive object that updates in real time:

import { experimental_useObject } from '@ai-sdk/react'

// The same Zod schema is shared with the server, so the partial chunks
// can be parsed into a typed, deeply partial object
const { object: story, submit } = experimental_useObject({
  api: `/api/generate?storyId=${storyId}`,
  schema: generatedStorySchema,
})

As the story streams in, story.title, story.summary, and story.content fill in progressively. You can render them immediately and watch the UI update character by character, which gives you that satisfying typewriter effect without implementing any of the streaming logic yourself.
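Wired into a page component, that looks roughly like this (a minimal sketch: the markup, placeholder copy, and the './schema' import path are illustrative, not the app's actual code):

'use client'

import { experimental_useObject } from '@ai-sdk/react'
import { generatedStorySchema } from './schema' // assumed path

export function StoryView({ storyId }: { storyId: number }) {
  const { object: story, submit } = experimental_useObject({
    api: `/api/generate?storyId=${storyId}`,
    schema: generatedStorySchema,
  })

  return (
    <article>
      {/* submit() kicks off the POST request and starts the stream */}
      <button onClick={() => submit({})}>Generate story</button>
      {/* every field is optional until its chunk arrives */}
      <h1>{story?.title ?? 'Dreaming up a story…'}</h1>
      <p><em>{story?.summary}</em></p>
      <p>{story?.content}</p>
    </article>
  )
}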

Why Not Just Use Chat?

You might be thinking: why not just use streamText or the useChat hook? The problem with plain text streaming is that you get one continuous string. If you want structured data like a title, summary, and body, you'd have to either:

  1. Parse the text yourself (fragile and annoying)
  2. Make multiple LLM calls (slow and expensive)

With streamObject, you get typed, validated, structured output as it streams. The AI SDK does the heavy lifting of coordinating with the model to produce JSON-compliant chunks that you can deserialize incrementally.
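The incremental behavior is easy to see server-side too: the result of streamObject exposes a partialObjectStream you can iterate directly. A standalone sketch, using the same model and schema as above (the './schema' import path is assumed):

import { openai } from '@ai-sdk/openai'
import { streamObject } from 'ai'
import { generatedStorySchema } from './schema' // assumed path

const result = streamObject({
  model: openai('gpt-4o'),
  schema: generatedStorySchema,
  prompt: 'Generate a short bedtime story about a brave five-year-old.',
})

// Each iteration yields a deep-partial snapshot of the object; fields
// appear and grow as the model generates them
for await (const partial of result.partialObjectStream) {
  console.log(partial.title, partial.content?.length)
}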

Database Updates

One small detail I like is the onFinish callback in streamObject. Once the generation completes, it fires automatically and I update the database with the final story:

onFinish: async ({ object }) => {
  // object is the final, schema-validated story (it would be undefined
  // if validation had failed)
  await db.story.update({
    where: { id: parseInt(storyId) },
    data: {
      ...object,
      status: 'COMPLETED',
    },
  })
},

This keeps the UI and database in sync without requiring a separate API call. The story is persisted as soon as it's done generating.
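For reference, this is roughly the shape of the record that workflow implies, expressed as a TypeScript type (a sketch: the actual Prisma model isn't shown here, and the 'PENDING' initial status is an assumption):

// Sketch of the story record implied by the code above
type StoryRow = {
  id: number
  character: string
  age: number
  theme: string
  // Filled in by onFinish once generation completes
  title?: string
  summary?: string
  content?: string
  status: 'PENDING' | 'COMPLETED' // 'PENDING' is assumed
}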

What I Learned

Building this project reinforced a few things about working with generative AI:

First, structured output is underrated. Most AI demos show chat interfaces, but for real applications you often need data in a specific shape. The AI SDK's streamObject makes this surprisingly clean.

Second, streaming matters. When you're waiting for an LLM to generate a few hundred words, watching it appear progressively feels much faster than staring at a loading spinner for 10 seconds.

Finally, Zod schemas are doing double duty here. They validate your data and guide the LLM on what to produce. That's a nice overlap between runtime validation and AI prompting.

If you want to see the full code, check it out on GitHub. It's a simple project, but it demonstrates the practical side of working with generative AI beyond basic chatbots.