AI SDK Playground: What I Built While Learning AI
Why I'm Sharing This
While looking for resources on building AI-powered apps quickly, mainly with Vercel's AI SDK, I came across tons of video tutorials on YouTube, plus some excellent official documentation. That made me think: this is a perfect way to introduce people to what might otherwise feel like a daunting space.
My idea was simple: build as many small AI features as possible to see what's doable, how long it takes, and how much code is really required for functional demos. Turns out, you can achieve a lot with minimal code and in very little time.
This isn't a production app or anything serious; it's me experimenting with AI, testing ideas, and having fun with the UI. The result was a small demo project: check it out here.
Structured Output (Surprisingly Practical)
This is probably the most useful feature. Instead of just receiving random text from an AI, you can ask it to return structured data.
Want a recipe? You'll get back a title, an ingredient list, and clear instructions. Need a list of Pokémon? You'll get an array of objects with names, types, and stats.
It's like working with an AI that actually understands what you're asking for and formats the response so you can use it right away.
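For the Pokémon case, the schema might look something like this. This is a sketch I'm adding for illustration (the field names are my own guess, not the demo's exact code); the full call pattern appears later in the post:

import { z } from "zod";

// One possible shape for the Pokémon list: the model has to fill in every field
const pokemonSchema = z.object({
  pokemon: z.array(
    z.object({
      name: z.string(),
      type: z.string(),
      stats: z.object({ hp: z.number(), attack: z.number() }),
    })
  ),
});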
External Tools (Where the Magic Happens)
Here's where things get really fun. I hooked up a weather API so you can ask, "What's the weather like in Tokyo?" and the AI fetches real data.
With web search and other APIs, you can combine services to give your AI new abilities. It's essentially like giving your app superpowers.
How I Built It
The Tech Stack
Pretty standard these days:
- Next.js 15
- TypeScript
- Tailwind CSS
- Shadcn/ui
- Zod (key for validating structured responses)
- Vercel AI SDK v5 (the star of the show, handling all the AI work)
- Arcjet (for security: bot detection, rate limiting, etc., so I don't go broke from demo abuse)
The Architecture
The setup is simple. Each feature has its own API route in app/api/ (a sketch of one such route follows the list below):
- Chat in one
- Image generation in another
- Tools in another
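As a rough sketch of what one of those routes can look like, here is a chat route following the standard AI SDK v5 pattern. The file path and model name are illustrative, not copied from the demo:

// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText, convertToModelMessages, type UIMessage } from "ai";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  // Stream the model's reply back to the useChat hook on the client
  const result = streamText({
    model: openai("gpt-5"),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}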
The UI follows the same pattern: each feature gets its own playground page, where you can also see the minimal code powering it.
The AI SDK does most of the heavy lifting. For example:
- useChat() handles chat interactions
- useCompletion() covers text generation
It also manages streaming, state, and more, leaving me surprised at how little code was needed.
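To give a feel for how little wiring that is, here is a minimal sketch of a playground page built on useChat, assuming the v5 hook from @ai-sdk/react and a matching API route (component and variable names are my own):

"use client";
import { useState } from "react";
import { useChat } from "@ai-sdk/react";

export default function ChatPlayground() {
  const [input, setInput] = useState("");
  // useChat manages the message list, streaming updates, and request state
  const { messages, sendMessage } = useChat();

  return (
    <form
      onSubmit={(e) => {
        e.preventDefault();
        sendMessage({ text: input });
        setInput("");
      }}
    >
      {messages.map((m) => (
        <p key={m.id}>
          {m.role}: {m.parts.map((p) => (p.type === "text" ? p.text : "")).join("")}
        </p>
      ))}
      <input value={input} onChange={(e) => setInput(e.target.value)} />
    </form>
  );
}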
Why Structured Output Changes Everything
Plain text is fine for chat, but structured data is far more useful for real apps. For example:
import { openai } from "@ai-sdk/openai";
import { generateObject } from "ai";
import { z } from "zod";

// generateObject validates the model's reply against the Zod schema
const { object } = await generateObject({
  model: openai("gpt-5"),
  schema: z.object({
    title: z.string(),
    ingredients: z.array(z.string()),
    instructions: z.array(z.string()),
  }),
  prompt: "Give me a cookie recipe",
});
Instead of a text blob, you now get a clean object with fields you can immediately use.
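Since the object is typed by the schema, the UI can consume it directly. A trivial usage example, using the field names from the schema above:

// object.title, object.ingredients, and object.instructions are already typed
console.log(object.title);
object.ingredients.forEach((item) => console.log(`- ${item}`));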
Tools Make It Interesting
Connecting external services opens the door to much more powerful use cases. For example, the tools route can declare a getWeather tool along these lines:
import { openai } from "@ai-sdk/openai";
import { streamText, tool, convertToModelMessages } from "ai";
import { z } from "zod";

// In the tools API route: messages come from the request body,
// and the model calls getWeather whenever it needs live data
const result = streamText({
  model: openai("gpt-5"),
  messages: convertToModelMessages(messages),
  tools: {
    getWeather: tool({
      description: "Get weather for a location",
      inputSchema: z.object({
        location: z.string(),
      }),
      // Runs on the server; replace the stub with a real weather API call
      execute: async ({ location }) => ({ location, forecast: "sunny, 24°C" }),
    }),
  },
});
With this, the AI can fetch the weather, perform searches, and call other APIs; you decide what powers to grant it.
The Good
This approach is fantastic for prototyping. You can spin up features quickly, experiment with ideas, and learn a lot about how modern apps are built.
Even better: Vercel's AI SDK standardizes the way you work with LLMs, letting you switch providers seamlessly while keeping the same API.
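As a quick illustration of what that provider switch looks like, assuming the @ai-sdk/openai and @ai-sdk/anthropic packages (the model IDs are just examples):

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";

const prompt = "Summarize the AI SDK in one sentence";

// Same call, different provider: only the model factory changes
const fromOpenAI = await generateText({ model: openai("gpt-5"), prompt });
const fromAnthropic = await generateText({ model: anthropic("claude-3-5-sonnet-latest"), prompt });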
The Not-So-Good
The abstraction is powerful, but it can also shield you from understanding how LLMs work under the hood. It's worth spending time learning how to interact with models directly, even if you'll likely use frameworks like this in production. That deeper knowledge pays off.
Extra
I didn't use them in my demo, but Vercel also offers a set of prebuilt components for AI chatbot-style apps: Vercel AI Elements. Definitely worth exploring if you want to speed things up even further.