Build a Chatbot with Next.js and Kyma API in 5 Minutes
What You'll Build
A streaming chatbot with:
- Real-time token streaming
- Switching between 6 AI models
- Dark navy + gold theme
- One-click deploy to Vercel
Total code: under 100 lines. Total cost: free (Kyma gives you $0.50 on signup).
Prerequisites
- Node.js 18+
- A Kyma API key (get one free)
Step 1: Create the Project
```bash
npx create-next-app@latest my-chatbot
cd my-chatbot
npm install ai @ai-sdk/react @kyma-api/ai-sdk
```
Step 2: Add Your API Key
Create a `.env.local` file in the project root:

```bash
KYMA_API_KEY=ky-your-actual-key
```
Step 3: Run
```bash
npm run dev
```
Open http://localhost:3000. You're chatting with qwen-3.6-plus — the most popular model on Kyma.
How It Works
The API Route (15 lines)
```ts
// app/api/chat/route.ts
import { streamText, convertToModelMessages } from "ai";
import { kyma } from "@kyma-api/ai-sdk";

export async function POST(req: Request) {
  const { messages, model } = await req.json();

  const result = streamText({
    model: kyma(model || "qwen-3.6-plus"),
    // useChat sends UI messages; convert them to model messages for streamText
    messages: convertToModelMessages(messages),
    system: "You are a helpful assistant.",
  });

  return result.toUIMessageStreamResponse();
}
```
The kyma() function wraps the Vercel AI SDK's OpenAI-compatible provider. It reads KYMA_API_KEY from your environment automatically.
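Under the hood, an OpenAI-compatible provider simply points the standard chat-completions request at a different base URL and attaches your key as a bearer token. Here is a rough sketch of the request such a wrapper builds — the endpoint URL and field names are illustrative assumptions, not Kyma's documented API:

```typescript
// Hypothetical sketch of the HTTP request an OpenAI-compatible
// provider assembles. The base URL here is an assumption.
function kymaRequest(
  apiKey: string,
  model: string,
  messages: { role: string; content: string }[]
) {
  return {
    url: "https://api.kyma.ai/v1/chat/completions", // assumed endpoint
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // key read from the environment in practice
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model, messages, stream: true }),
  };
}
```

The important parts are the bearer-token header and `stream: true`, which asks the server to send tokens back incrementally instead of one final JSON blob.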
The Chat UI
The frontend uses useChat from @ai-sdk/react — it handles streaming, message state, and request status for you. The model selector passes the selected model through the transport's body, which gets merged into the POST payload sent to your API route:

```ts
const { messages, sendMessage, status } = useChat({
  // DefaultChatTransport (imported from "ai") carries extra fields to the route
  transport: new DefaultChatTransport({ body: { model } }),
});
```
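When rendering, each message's content arrives as an array of parts rather than a single string, so a small helper to concatenate the text parts is handy. The part shape below is a simplified assumption of the UI message format, kept minimal for illustration:

```typescript
// Simplified sketch of a UI message: an id, a role, and an array of parts.
type Part = { type: string; text?: string };
type UiMessage = { id: string; role: "user" | "assistant"; parts: Part[] };

// Join all text parts into one display string, ignoring non-text parts.
function messageText(m: UiMessage): string {
  return m.parts
    .filter((p) => p.type === "text" && p.text !== undefined)
    .map((p) => p.text)
    .join("");
}
```

In the page component you would map over `messages` and render `messageText(m)` for each one as tokens stream in.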
Switch Models
The dropdown lets you switch between:
- qwen-3.6-plus — best overall quality
- deepseek-v3 — GPT-5 class, best value
- deepseek-r1 — chain-of-thought reasoning
- llama-3.3-70b — fast, all-around
- gemini-2.5-flash — 1M context window
- qwen-3-32b — ultra-fast coding
Each model has different strengths. Try asking the same question to different models to see the difference.
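If you want the app to pick a default model per task instead of making the user choose, a small lookup table does the job. The task categories below are this guide's own grouping of the list above, not anything Kyma defines:

```typescript
// Hypothetical task-to-model mapping, based on the strengths listed above.
type Task = "chat" | "value" | "reasoning" | "coding" | "long-context";

const MODEL_FOR_TASK: Record<Task, string> = {
  chat: "qwen-3.6-plus",              // best overall quality
  value: "deepseek-v3",               // best value
  reasoning: "deepseek-r1",           // chain-of-thought reasoning
  coding: "qwen-3-32b",               // ultra-fast coding
  "long-context": "gemini-2.5-flash", // 1M context window
};

function pickModel(task: Task): string {
  return MODEL_FOR_TASK[task];
}
```

You could call `pickModel("reasoning")` when the user toggles a "think step by step" option, for example.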
Deploy to Vercel
Deploy your chatbot to Vercel: click the Deploy button, add KYMA_API_KEY as an environment variable, and you're live.
What's Next
- Add conversation history with Supabase
- Add tool calling for function execution
- Try the Vercel AI SDK guide for more patterns
- Check model recommendations to pick the right model
Cost
With $0.50 free credits, you get roughly 500-3,000 chat messages depending on the model. A typical conversation (500 input + 200 output tokens) costs about $0.0008 with qwen-3.6-plus.
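For budgeting, token cost is just a linear function of input and output token counts. The per-million-token prices below are assumptions chosen to reproduce the $0.0008 figure above — check Kyma's pricing page for real numbers:

```typescript
// Back-of-the-envelope cost estimator. Prices are assumed for illustration.
function estimateCostUSD(
  inputTokens: number,
  outputTokens: number,
  inputPricePerM: number,  // USD per 1M input tokens (assumed)
  outputPricePerM: number  // USD per 1M output tokens (assumed)
): number {
  return (inputTokens / 1e6) * inputPricePerM + (outputTokens / 1e6) * outputPricePerM;
}

// The conversation above: 500 input + 200 output tokens,
// at an assumed $0.80/M input and $2.00/M output
const cost = estimateCostUSD(500, 200, 0.8, 2.0); // ≈ $0.0008
```

At that rate, $0.50 of credits covers roughly 600 such conversations; cheaper models stretch the credits toward the higher end of the 500-3,000 range.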