Build a Production-Ready AI Chatbot: Complete Tutorial
In this comprehensive tutorial, we'll build a sophisticated AI chatbot from scratch using Next.js, OpenAI's GPT API, and modern React patterns. The chatbot will feature streaming responses, conversation memory, and user authentication, and it will be ready for production deployment.
What We'll Build
Features:
- Real-time streaming responses
- Conversation memory and context
- Beautiful, responsive UI
- User authentication
- Message history and persistence
- Production-ready deployment
- Optimized performance
Tech Stack:
- Frontend: Next.js 14, React, TypeScript, Tailwind CSS
- Backend: Next.js API routes, OpenAI API
- Database: Prisma with PostgreSQL
- Authentication: NextAuth.js
- Deployment: Vercel
Prerequisites
Before we start, make sure you have:
- Node.js 18+ installed
- Basic knowledge of React and TypeScript
- An OpenAI API key
- A PostgreSQL database (we'll use Supabase)
Project Setup
1. Initialize Next.js Project
npx create-next-app@latest ai-chatbot --typescript --tailwind --eslint --app
cd ai-chatbot
2. Install Dependencies
npm install openai prisma @prisma/client next-auth @next-auth/prisma-adapter
npm install @types/node --save-dev
npm install framer-motion react-markdown remark-gfm lucide-react
3. Environment Configuration
Create .env.local:
# OpenAI
OPENAI_API_KEY=your_openai_api_key
# Database
DATABASE_URL="postgresql://username:password@host:port/database"
# NextAuth
NEXTAUTH_URL="http://localhost:3000"
NEXTAUTH_SECRET="your_nextauth_secret"
# GitHub OAuth (optional)
GITHUB_ID="your_github_client_id"
GITHUB_SECRET="your_github_client_secret"
Database Schema Design
Prisma Schema Setup
Create prisma/schema.prisma:
generator client {
provider = "prisma-client-js"
}
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
model Account {
id String @id @default(cuid())
userId String
type String
provider String
providerAccountId String
refresh_token String? @db.Text
access_token String? @db.Text
expires_at Int?
token_type String?
scope String?
id_token String? @db.Text
session_state String?
user User @relation(fields: [userId], references: [id], onDelete: Cascade)
@@unique([provider, providerAccountId])
}
model Session {
id String @id @default(cuid())
sessionToken String @unique
userId String
expires DateTime
user User @relation(fields: [userId], references: [id], onDelete: Cascade)
}
model User {
id String @id @default(cuid())
name String?
email String? @unique
emailVerified DateTime?
image String?
accounts Account[]
sessions Session[]
conversations Conversation[]
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
}
model Conversation {
id String @id @default(cuid())
title String
userId String
user User @relation(fields: [userId], references: [id], onDelete: Cascade)
messages Message[]
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
@@index([userId])
}
model Message {
id String @id @default(cuid())
content String @db.Text
role MessageRole
conversationId String
conversation Conversation @relation(fields: [conversationId], references: [id], onDelete: Cascade)
createdAt DateTime @default(now())
@@index([conversationId])
}
enum MessageRole {
USER
ASSISTANT
SYSTEM
}
Initialize Database
npx prisma generate
npx prisma db push
Core Components Development
1. OpenAI Service
Create lib/openai.ts:
import OpenAI from 'openai';
if (!process.env.OPENAI_API_KEY) {
throw new Error('Missing OPENAI_API_KEY environment variable');
}
export const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
export interface ChatMessage {
role: 'user' | 'assistant' | 'system';
content: string;
}
export async function* generateChatStream(
messages: ChatMessage[],
options: {
model?: string;
temperature?: number;
maxTokens?: number;
} = {}
) {
const {
model = 'gpt-3.5-turbo',
temperature = 0.7,
maxTokens = 1000,
} = options;
try {
const stream = await openai.chat.completions.create({
model,
messages,
temperature,
max_tokens: maxTokens,
stream: true,
});
for await (const chunk of stream) {
const content = chunk.choices[0]?.delta?.content;
if (content) {
yield content;
}
}
} catch (error) {
console.error('OpenAI API error:', error);
throw new Error('Failed to generate AI response');
}
}
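As a quick sanity check of the streaming helper, you can consume the generator from a one-off script (run with npx tsx or similar; the file name and prompt here are just illustrative):

// scripts/test-stream.ts — hypothetical smoke test for generateChatStream
import { generateChatStream } from '../lib/openai';

async function main() {
  // Stream a short completion and print tokens as they arrive
  for await (const token of generateChatStream([
    { role: 'user', content: 'Say hello in five words.' },
  ])) {
    process.stdout.write(token);
  }
  process.stdout.write('\n');
}

main().catch(console.error);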
2. Database Service
Create lib/db.ts:
import { PrismaClient } from '@prisma/client';
const globalForPrisma = globalThis as unknown as {
prisma: PrismaClient | undefined;
};
export const prisma = globalForPrisma.prisma ?? new PrismaClient();
if (process.env.NODE_ENV !== 'production') globalForPrisma.prisma = prisma;
// Conversation management
export async function createConversation(userId: string, title: string) {
return prisma.conversation.create({
data: {
title,
userId,
},
});
}
export async function getConversations(userId: string) {
return prisma.conversation.findMany({
where: { userId },
orderBy: { updatedAt: 'desc' },
include: {
messages: {
take: 1,
orderBy: { createdAt: 'desc' },
},
_count: {
select: { messages: true },
},
},
});
}
export async function getConversationWithMessages(
conversationId: string,
userId: string
) {
return prisma.conversation.findFirst({ // findUnique only accepts unique fields, so use findFirst for the ownership filter
where: {
id: conversationId,
userId, // Ensure user owns conversation
},
include: {
messages: {
orderBy: { createdAt: 'asc' },
},
},
});
}
export async function addMessage(
conversationId: string,
role: 'USER' | 'ASSISTANT' | 'SYSTEM',
content: string
) {
return prisma.message.create({
data: {
conversationId,
role,
content,
},
});
}
export async function updateConversationTitle(
conversationId: string,
title: string
) {
return prisma.conversation.update({
where: { id: conversationId },
data: { title },
});
}
3. Authentication Setup
Create lib/auth.ts:
import { NextAuthOptions } from 'next-auth';
import { PrismaAdapter } from '@next-auth/prisma-adapter';
import GitHubProvider from 'next-auth/providers/github';
import { prisma } from './db';
export const authOptions: NextAuthOptions = {
adapter: PrismaAdapter(prisma),
providers: [
GitHubProvider({
clientId: process.env.GITHUB_ID!,
clientSecret: process.env.GITHUB_SECRET!,
}),
],
callbacks: {
session: async ({ session, token }) => {
if (session?.user && token?.sub) {
session.user.id = token.sub;
}
return session;
},
jwt: async ({ user, token }) => {
if (user) {
token.uid = user.id;
}
return token;
},
},
session: {
strategy: 'jwt',
},
};
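One TypeScript note: session.user.id isn't part of next-auth's default Session type, so the session callback above won't type-check out of the box. A small module augmentation (file name is your choice) is the usual fix:

// types/next-auth.d.ts — extend the default Session type with user.id
import 'next-auth';

declare module 'next-auth' {
  interface Session {
    user: {
      id: string;
      name?: string | null;
      email?: string | null;
      image?: string | null;
    };
  }
}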
Create app/api/auth/[...nextauth]/route.ts:
import NextAuth from 'next-auth';
import { authOptions } from '@/lib/auth';
const handler = NextAuth(authOptions);
export { handler as GET, handler as POST };
API Routes Development
1. Chat Streaming Endpoint
Create app/api/chat/route.ts:
import { NextRequest, NextResponse } from 'next/server';
import { getServerSession } from 'next-auth/next';
import { authOptions } from '@/lib/auth';
import { generateChatStream } from '@/lib/openai';
import { addMessage, getConversationWithMessages } from '@/lib/db';
export async function POST(req: NextRequest) {
try {
const session = await getServerSession(authOptions);
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
}
const { message, conversationId } = await req.json();
if (!message || !conversationId) {
return NextResponse.json(
{ error: 'Message and conversationId are required' },
{ status: 400 }
);
}
// Verify user owns conversation
const conversation = await getConversationWithMessages(
conversationId,
session.user.id
);
if (!conversation) {
return NextResponse.json(
{ error: 'Conversation not found' },
{ status: 404 }
);
}
// Save user message
await addMessage(conversationId, 'USER', message);
// Prepare messages for OpenAI
const messages = [
{
role: 'system' as const,
content: 'You are a helpful AI assistant. Provide clear, accurate, and helpful responses.',
},
...conversation.messages.map((msg) => ({
role: msg.role.toLowerCase() as 'user' | 'assistant' | 'system',
content: msg.content,
})),
{
role: 'user' as const,
content: message,
},
];
// Create readable stream for response
const encoder = new TextEncoder();
let assistantResponse = '';
const stream = new ReadableStream({
async start(controller) {
try {
for await (const chunk of generateChatStream(messages)) {
assistantResponse += chunk;
controller.enqueue(encoder.encode(chunk));
}
// Save assistant response
await addMessage(conversationId, 'ASSISTANT', assistantResponse);
controller.close();
} catch (error) {
console.error('Streaming error:', error);
controller.error(error);
}
},
});
return new Response(stream, {
headers: {
'Content-Type': 'text/plain; charset=utf-8',
'Transfer-Encoding': 'chunked',
},
});
} catch (error) {
console.error('Chat API error:', error);
return NextResponse.json(
{ error: 'Internal server error' },
{ status: 500 }
);
}
}
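If you want to poke at the endpoint outside the UI, you can replay your browser's session cookie with curl (the cookie name below is next-auth's default for JWT sessions in development; grab the value and a real conversation ID from your browser's dev tools):

curl -N http://localhost:3000/api/chat \
  -H "Content-Type: application/json" \
  -H "Cookie: next-auth.session-token=YOUR_SESSION_TOKEN" \
  -d '{"message": "Hello!", "conversationId": "YOUR_CONVERSATION_ID"}'

The -N flag disables curl's buffering so you can watch the tokens stream in.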
2. Conversations Management
Create app/api/conversations/route.ts:
import { NextRequest, NextResponse } from 'next/server';
import { getServerSession } from 'next-auth/next';
import { authOptions } from '@/lib/auth';
import { createConversation, getConversations } from '@/lib/db';
export async function GET() {
try {
const session = await getServerSession(authOptions);
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
}
const conversations = await getConversations(session.user.id);
return NextResponse.json(conversations);
} catch (error) {
console.error('Get conversations error:', error);
return NextResponse.json(
{ error: 'Internal server error' },
{ status: 500 }
);
}
}
export async function POST(req: NextRequest) {
try {
const session = await getServerSession(authOptions);
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
}
const { title } = await req.json();
if (!title) {
return NextResponse.json(
{ error: 'Title is required' },
{ status: 400 }
);
}
const conversation = await createConversation(session.user.id, title);
return NextResponse.json(conversation);
} catch (error) {
console.error('Create conversation error:', error);
return NextResponse.json(
{ error: 'Internal server error' },
{ status: 500 }
);
}
}
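The chat interface we build later also fetches a single conversation from /api/conversations/[id], a route this tutorial doesn't otherwise spell out. A minimal sketch, reusing the helpers above:

// app/api/conversations/[id]/route.ts
import { NextResponse } from 'next/server';
import { getServerSession } from 'next-auth/next';
import { authOptions } from '@/lib/auth';
import { getConversationWithMessages } from '@/lib/db';

export async function GET(
  req: Request,
  { params }: { params: { id: string } }
) {
  const session = await getServerSession(authOptions);
  if (!session?.user?.id) {
    return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
  }
  // The helper already scopes the lookup to the signed-in user
  const conversation = await getConversationWithMessages(params.id, session.user.id);
  if (!conversation) {
    return NextResponse.json({ error: 'Conversation not found' }, { status: 404 });
  }
  return NextResponse.json(conversation);
}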
Frontend Components
1. Chat Message Component
Create components/ChatMessage.tsx:
'use client';
import { motion } from 'framer-motion';
import ReactMarkdown from 'react-markdown';
import remarkGfm from 'remark-gfm';
import { User, Bot } from 'lucide-react';
interface ChatMessageProps {
message: {
id: string;
content: string;
role: 'USER' | 'ASSISTANT';
createdAt: Date;
};
isStreaming?: boolean;
}
export default function ChatMessage({ message, isStreaming }: ChatMessageProps) {
const isUser = message.role === 'USER';
return (
<motion.div
initial={{ opacity: 0, y: 20 }}
animate={{ opacity: 1, y: 0 }}
transition={{ duration: 0.3 }}
className={`flex gap-4 p-4 ${
isUser ? 'bg-blue-50 dark:bg-blue-900/20' : 'bg-gray-50 dark:bg-gray-800'
}`}
>
<div className="flex-shrink-0">
<div
className={`w-8 h-8 rounded-full flex items-center justify-center ${
isUser
? 'bg-blue-600 text-white'
: 'bg-gray-600 text-white'
}`}
>
{isUser ? <User size={16} /> : <Bot size={16} />}
</div>
</div>
<div className="flex-1 min-w-0">
<div className="text-sm font-medium text-gray-900 dark:text-gray-100 mb-1">
{isUser ? 'You' : 'AI Assistant'}
</div>
<div className="prose prose-sm dark:prose-invert max-w-none">
{isUser ? (
<p className="whitespace-pre-wrap">{message.content}</p>
) : (
<ReactMarkdown remarkPlugins={[remarkGfm]}>
{message.content}
</ReactMarkdown>
)}
{isStreaming && (
<motion.span
animate={{ opacity: [1, 0] }}
transition={{ duration: 0.8, repeat: Infinity }}
className="inline-block w-2 h-4 bg-gray-400 ml-1"
/>
)}
</div>
<div className="text-xs text-gray-500 mt-2">
{new Date(message.createdAt).toLocaleTimeString()}
</div>
</div>
</motion.div>
);
}
2. Chat Input Component
Create components/ChatInput.tsx:
'use client';
import { useState, useRef } from 'react';
import { Send, Loader2 } from 'lucide-react';
interface ChatInputProps {
onSendMessage: (message: string) => void;
disabled?: boolean;
placeholder?: string;
}
export default function ChatInput({
onSendMessage,
disabled = false,
placeholder = "Type your message..."
}: ChatInputProps) {
const [message, setMessage] = useState('');
const textareaRef = useRef<HTMLTextAreaElement>(null);
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault();
if (message.trim() && !disabled) {
onSendMessage(message.trim());
setMessage('');
// Reset textarea height
if (textareaRef.current) {
textareaRef.current.style.height = 'auto';
}
}
};
const handleKeyDown = (e: React.KeyboardEvent) => {
if (e.key === 'Enter' && !e.shiftKey) {
e.preventDefault();
handleSubmit(e);
}
};
const handleInput = (e: React.ChangeEvent<HTMLTextAreaElement>) => {
setMessage(e.target.value);
// Auto-resize textarea
const textarea = e.target;
textarea.style.height = 'auto';
textarea.style.height = `${textarea.scrollHeight}px`;
};
return (
<form onSubmit={handleSubmit} className="flex gap-2 p-4 border-t">
<div className="flex-1 relative">
<textarea
ref={textareaRef}
value={message}
onChange={handleInput}
onKeyDown={handleKeyDown}
placeholder={placeholder}
disabled={disabled}
rows={1}
className="w-full resize-none rounded-lg border border-gray-300 dark:border-gray-600
bg-white dark:bg-gray-800 px-3 py-2 text-sm
focus:border-blue-500 focus:outline-none focus:ring-1 focus:ring-blue-500
disabled:bg-gray-100 dark:disabled:bg-gray-700 disabled:cursor-not-allowed
min-h-[40px] max-h-[120px]"
style={{ overflow: 'hidden' }}
/>
</div>
<button
type="submit"
disabled={disabled || !message.trim()}
className="flex items-center justify-center w-10 h-10 rounded-lg
bg-blue-600 text-white hover:bg-blue-700
disabled:bg-gray-300 dark:disabled:bg-gray-600 disabled:cursor-not-allowed
transition-colors duration-200"
>
{disabled ? (
<Loader2 size={16} className="animate-spin" />
) : (
<Send size={16} />
)}
</button>
</form>
);
}
3. Conversation Sidebar
Create components/ConversationSidebar.tsx:
'use client';
import { useState, useEffect } from 'react';
import { Plus, MessageSquare, Trash2 } from 'lucide-react';
interface Conversation {
id: string;
title: string;
createdAt: Date;
updatedAt: Date;
_count: { messages: number };
}
interface ConversationSidebarProps {
currentConversationId?: string;
onSelectConversation: (id: string) => void;
onNewConversation: () => void;
}
export default function ConversationSidebar({
currentConversationId,
onSelectConversation,
onNewConversation
}: ConversationSidebarProps) {
const [conversations, setConversations] = useState<Conversation[]>([]);
const [loading, setLoading] = useState(true);
useEffect(() => {
fetchConversations();
}, []);
const fetchConversations = async () => {
try {
const response = await fetch('/api/conversations');
if (response.ok) {
const data = await response.json();
setConversations(data);
}
} catch (error) {
console.error('Failed to fetch conversations:', error);
} finally {
setLoading(false);
}
};
const handleNewConversation = async () => {
try {
const response = await fetch('/api/conversations', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ title: 'New Conversation' }),
});
if (response.ok) {
const newConversation = await response.json();
setConversations([newConversation, ...conversations]);
onNewConversation();
onSelectConversation(newConversation.id);
}
} catch (error) {
console.error('Failed to create conversation:', error);
}
};
if (loading) {
return (
<div className="w-80 bg-gray-50 dark:bg-gray-900 border-r border-gray-200 dark:border-gray-700 p-4">
<div className="animate-pulse space-y-4">
<div className="h-10 bg-gray-200 dark:bg-gray-700 rounded"></div>
{[...Array(5)].map((_, i) => (
<div key={i} className="h-12 bg-gray-200 dark:bg-gray-700 rounded"></div>
))}
</div>
</div>
);
}
return (
<div className="w-80 bg-gray-50 dark:bg-gray-900 border-r border-gray-200 dark:border-gray-700 flex flex-col">
<div className="p-4 border-b border-gray-200 dark:border-gray-700">
<button
onClick={handleNewConversation}
className="w-full flex items-center gap-2 px-4 py-2 bg-blue-600 text-white rounded-lg hover:bg-blue-700 transition-colors"
>
<Plus size={16} />
New Conversation
</button>
</div>
<div className="flex-1 overflow-y-auto p-4 space-y-2">
{conversations.map((conversation) => (
<button
key={conversation.id}
onClick={() => onSelectConversation(conversation.id)}
className={`w-full text-left p-3 rounded-lg transition-colors ${
currentConversationId === conversation.id
? 'bg-blue-100 dark:bg-blue-900 border border-blue-200 dark:border-blue-700'
: 'hover:bg-gray-100 dark:hover:bg-gray-800'
}`}
>
<div className="flex items-start gap-2">
<MessageSquare size={16} className="mt-1 flex-shrink-0" />
<div className="flex-1 min-w-0">
<div className="font-medium text-sm truncate">
{conversation.title}
</div>
<div className="text-xs text-gray-500 mt-1">
{conversation._count.messages} messages
</div>
<div className="text-xs text-gray-400 mt-1">
{new Date(conversation.updatedAt).toLocaleDateString()}
</div>
</div>
</div>
</button>
))}
{conversations.length === 0 && (
<div className="text-center text-gray-500 py-8">
<MessageSquare size={48} className="mx-auto mb-4 opacity-50" />
<p>No conversations yet</p>
<p className="text-sm">Start a new conversation to begin</p>
</div>
)}
</div>
</div>
);
}
Main Chat Interface
Create components/ChatInterface.tsx:
'use client';
import { useState, useEffect, useRef } from 'react';
import { useSession, signIn } from 'next-auth/react'; // signIn is used in the signed-out view below
import ChatMessage from './ChatMessage';
import ChatInput from './ChatInput';
import ConversationSidebar from './ConversationSidebar';
interface Message {
id: string;
content: string;
role: 'USER' | 'ASSISTANT';
createdAt: Date;
}
export default function ChatInterface() {
const { data: session } = useSession();
const [messages, setMessages] = useState<Message[]>([]);
const [currentConversationId, setCurrentConversationId] = useState<string>();
const [isStreaming, setIsStreaming] = useState(false);
const [streamingMessageId, setStreamingMessageId] = useState<string>();
const messagesEndRef = useRef<HTMLDivElement>(null);
const scrollToBottom = () => {
messagesEndRef.current?.scrollIntoView({ behavior: 'smooth' });
};
useEffect(() => {
scrollToBottom();
}, [messages]);
const loadConversation = async (conversationId: string) => {
try {
const response = await fetch(`/api/conversations/${conversationId}`);
if (response.ok) {
const conversation = await response.json();
setMessages(conversation.messages || []);
setCurrentConversationId(conversationId);
}
} catch (error) {
console.error('Failed to load conversation:', error);
}
};
const handleNewConversation = () => {
setMessages([]);
setCurrentConversationId(undefined);
};
const handleSendMessage = async (content: string) => {
if (!currentConversationId) {
// Create new conversation first
try {
const response = await fetch('/api/conversations', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ title: content.length > 50 ? content.slice(0, 50) + '...' : content }), // only truncate long titles
});
if (response.ok) {
const newConversation = await response.json();
setCurrentConversationId(newConversation.id);
// Continue with sending message
sendMessageToConversation(content, newConversation.id);
}
} catch (error) {
console.error('Failed to create conversation:', error);
return;
}
} else {
sendMessageToConversation(content, currentConversationId);
}
};
const sendMessageToConversation = async (content: string, conversationId: string) => {
// Add user message to UI immediately
const userMessage: Message = {
id: `temp-${Date.now()}`,
content,
role: 'USER',
createdAt: new Date(),
};
setMessages(prev => [...prev, userMessage]);
// Create placeholder for assistant response
const assistantMessageId = `assistant-${Date.now()}`;
const assistantMessage: Message = {
id: assistantMessageId,
content: '',
role: 'ASSISTANT',
createdAt: new Date(),
};
setMessages(prev => [...prev, assistantMessage]);
setStreamingMessageId(assistantMessageId);
setIsStreaming(true);
try {
const response = await fetch('/api/chat', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
message: content,
conversationId,
}),
});
if (!response.ok) {
throw new Error('Failed to send message');
}
const reader = response.body?.getReader();
if (!reader) {
throw new Error('No response body');
}
let assistantContent = '';
const decoder = new TextDecoder();
while (true) {
const { done, value } = await reader.read();
if (done) break;
// stream: true keeps multi-byte characters intact across chunk boundaries
const chunk = decoder.decode(value, { stream: true });
assistantContent += chunk;
// Update the assistant message in real-time
setMessages(prev =>
prev.map(msg =>
msg.id === assistantMessageId
? { ...msg, content: assistantContent }
: msg
)
);
}
} catch (error) {
console.error('Error sending message:', error);
// Show error message
setMessages(prev =>
prev.map(msg =>
msg.id === assistantMessageId
? { ...msg, content: 'Sorry, I encountered an error. Please try again.' }
: msg
)
);
} finally {
setIsStreaming(false);
setStreamingMessageId(undefined);
}
};
if (!session) {
return (
<div className="flex items-center justify-center h-screen">
<div className="text-center">
<h1 className="text-2xl font-bold mb-4">AI Chatbot</h1>
<p className="text-gray-600 mb-4">Please sign in to continue</p>
<button
onClick={() => signIn('github')}
className="px-4 py-2 bg-blue-600 text-white rounded-lg hover:bg-blue-700"
>
Sign In with GitHub
</button>
</div>
</div>
);
}
return (
<div className="flex h-screen bg-white dark:bg-gray-900">
<ConversationSidebar
currentConversationId={currentConversationId}
onSelectConversation={loadConversation}
onNewConversation={handleNewConversation}
/>
<div className="flex-1 flex flex-col">
<div className="flex-1 overflow-y-auto">
{messages.length === 0 ? (
<div className="flex items-center justify-center h-full">
<div className="text-center">
<h2 className="text-2xl font-bold mb-4">Welcome to AI Chatbot</h2>
<p className="text-gray-600 mb-4">Start a conversation by typing a message below</p>
</div>
</div>
) : (
<div className="divide-y divide-gray-200 dark:divide-gray-700">
{messages.map((message) => (
<ChatMessage
key={message.id}
message={message}
isStreaming={isStreaming && message.id === streamingMessageId}
/>
))}
</div>
)}
<div ref={messagesEndRef} />
</div>
<ChatInput
onSendMessage={handleSendMessage}
disabled={isStreaming}
placeholder={isStreaming ? "AI is thinking..." : "Type your message..."}
/>
</div>
</div>
);
}
Authentication Provider Setup
Create components/Providers.tsx:
'use client';
import { SessionProvider } from 'next-auth/react';
export default function Providers({
children,
}: {
children: React.ReactNode;
}) {
return <SessionProvider>{children}</SessionProvider>;
}
Update app/layout.tsx:
import './globals.css';
import type { Metadata } from 'next';
import { Inter } from 'next/font/google';
import Providers from '@/components/Providers';
const inter = Inter({ subsets: ['latin'] });
export const metadata: Metadata = {
title: 'AI Chatbot',
description: 'A production-ready AI chatbot built with Next.js and OpenAI',
};
export default function RootLayout({
children,
}: {
children: React.ReactNode;
}) {
return (
<html lang="en">
<body className={inter.className}>
<Providers>
{children}
</Providers>
</body>
</html>
);
}
Main Page
Update app/page.tsx:
import ChatInterface from '@/components/ChatInterface';
export default function Home() {
return <ChatInterface />;
}
Deployment Setup
1. Environment Variables for Production
In your production environment (Vercel, Railway, etc.), set:
OPENAI_API_KEY=your_production_openai_key
DATABASE_URL=your_production_database_url
NEXTAUTH_URL=https://yourdomain.com
NEXTAUTH_SECRET=your_production_secret
GITHUB_ID=your_github_client_id
GITHUB_SECRET=your_github_client_secret
2. Build Scripts
Update package.json:
{
"scripts": {
"dev": "next dev",
"build": "prisma generate && next build",
"start": "next start",
"lint": "next lint",
"db:push": "prisma db push",
"db:studio": "prisma studio"
}
}
3. Vercel Deployment
Create vercel.json:
{
"buildCommand": "npm run build",
"devCommand": "npm run dev",
"installCommand": "npm install",
"functions": {
"app/api/chat/route.ts": {
"maxDuration": 60
}
}
}
Deploy to Vercel:
npm install -g vercel
vercel
Performance Optimizations
1. Message Caching
Install lru-cache (npm install lru-cache), then create lib/cache.ts:
import { LRUCache } from 'lru-cache';
const conversationCache = new LRUCache<string, any>({
max: 100,
ttl: 1000 * 60 * 10, // 10 minutes
});
export function getCachedConversation(id: string) {
return conversationCache.get(id);
}
export function setCachedConversation(id: string, data: any) {
conversationCache.set(id, data);
}
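To put the cache to work, wrap the database lookup. A sketch (note the composite key, which keeps one user's conversations from being served to another; invalidating entries when new messages arrive is left out for brevity):

// Example: a cached wrapper around the conversation lookup
import { getCachedConversation, setCachedConversation } from '@/lib/cache';
import { getConversationWithMessages } from '@/lib/db';

export async function getConversationCached(conversationId: string, userId: string) {
  const key = `${userId}:${conversationId}`; // scope cache entries per user
  const cached = getCachedConversation(key);
  if (cached) return cached;

  const conversation = await getConversationWithMessages(conversationId, userId);
  if (conversation) setCachedConversation(key, conversation);
  return conversation;
}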
2. Streaming Optimizations
Update streaming to handle backpressure:
// In your chat API route
const stream = new ReadableStream({
async start(controller) {
const encoder = new TextEncoder();
let buffer = '';
try {
for await (const chunk of generateChatStream(messages)) {
buffer += chunk;
// Send chunks in reasonable sizes
if (buffer.length > 50) {
controller.enqueue(encoder.encode(buffer));
buffer = '';
}
}
// Send remaining buffer
if (buffer) {
controller.enqueue(encoder.encode(buffer));
}
controller.close();
} catch (error) {
controller.error(error);
}
},
});
Testing
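The examples below assume Jest is already wired up for Next.js. If it isn't, install the tooling (npm install --save-dev jest jest-environment-jsdom @testing-library/react @testing-library/jest-dom) and add a minimal config along these lines:

// jest.config.js — minimal sketch using next/jest
const nextJest = require('next/jest');

const createJestConfig = nextJest({ dir: './' });

module.exports = createJestConfig({
  testEnvironment: 'jest-environment-jsdom',
  setupFilesAfterEnv: ['<rootDir>/jest.setup.js'], // jest.setup.js: import '@testing-library/jest-dom'
  moduleNameMapper: {
    '^@/(.*)$': '<rootDir>/$1', // matches the @/* import alias used throughout
  },
});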
1. API Route Tests
Create __tests__/api/chat.test.ts:
import { NextRequest } from 'next/server';
import { POST } from '@/app/api/chat/route';
// Mock the dependencies (getServerSession lives in 'next-auth/next', not '@/lib/auth')
jest.mock('next-auth/next');
jest.mock('@/lib/auth');
jest.mock('@/lib/openai');
jest.mock('@/lib/db');
describe('/api/chat', () => {
beforeEach(() => {
jest.clearAllMocks();
});
it('should return 401 for unauthenticated requests', async () => {
// Mock an unauthenticated session
require('next-auth/next').getServerSession.mockResolvedValue(null);
const request = new NextRequest('http://localhost:3000/api/chat', {
method: 'POST',
body: JSON.stringify({ message: 'Hello', conversationId: '123' }),
});
const response = await POST(request);
expect(response.status).toBe(401);
});
it('should process valid chat requests', async () => {
// Mock an authenticated session
require('next-auth/next').getServerSession.mockResolvedValue({
user: { id: 'user123' }
});
// Mock conversation data
require('@/lib/db').getConversationWithMessages.mockResolvedValue({
id: 'conv123',
messages: []
});
// Mock OpenAI response
require('@/lib/openai').generateChatStream.mockImplementation(async function* () {
yield 'Hello ';
yield 'there!';
});
const request = new NextRequest('http://localhost:3000/api/chat', {
method: 'POST',
body: JSON.stringify({ message: 'Hello', conversationId: 'conv123' }),
});
const response = await POST(request);
expect(response.status).toBe(200);
expect(response.headers.get('Content-Type')).toContain('text/plain');
});
});
2. Component Tests
Create __tests__/components/ChatMessage.test.tsx:
import { render, screen } from '@testing-library/react';
import ChatMessage from '@/components/ChatMessage';
describe('ChatMessage', () => {
const mockMessage = {
id: '1',
content: 'Hello, world!',
role: 'USER' as const,
createdAt: new Date('2024-01-01T12:00:00Z'),
};
it('renders user message correctly', () => {
render(<ChatMessage message={mockMessage} />);
expect(screen.getByText('You')).toBeInTheDocument();
expect(screen.getByText('Hello, world!')).toBeInTheDocument();
});
it('renders assistant message correctly', () => {
const assistantMessage = {
...mockMessage,
role: 'ASSISTANT' as const,
};
render(<ChatMessage message={assistantMessage} />);
expect(screen.getByText('AI Assistant')).toBeInTheDocument();
expect(screen.getByText('Hello, world!')).toBeInTheDocument();
});
it('shows streaming indicator when streaming', () => {
const { container } = render(<ChatMessage message={mockMessage} isStreaming={true} />);
// The streaming indicator is the gray blinking cursor block appended to the content
expect(container.querySelector('.bg-gray-400')).toBeInTheDocument();
});
});
Security Considerations
1. Rate Limiting
Create lib/rateLimit.ts:
import { LRUCache } from 'lru-cache';
type Options = {
uniqueTokenPerInterval?: number;
interval?: number;
};
export default function rateLimit(options: Options = {}) {
const tokenCache = new LRUCache({
max: options.uniqueTokenPerInterval || 500,
ttl: options.interval || 60000,
});
return {
check: (limit: number, token: string) =>
new Promise<void>((resolve, reject) => {
const tokenCount = (tokenCache.get(token) as number[]) || [0];
if (tokenCount[0] === 0) {
tokenCache.set(token, tokenCount);
}
tokenCount[0] += 1;
const currentUsage = tokenCount[0];
const isRateLimited = currentUsage > limit; // allow exactly `limit` requests per interval
if (isRateLimited) {
reject(new Error('Rate limit exceeded'));
} else {
resolve();
}
})
};
}
export const limiter = rateLimit({
interval: 60 * 1000, // 60 seconds
uniqueTokenPerInterval: 500, // track up to 500 unique tokens (e.g., IPs) per interval
});
Apply rate limiting to API routes:
// In your API routes
import { limiter } from '@/lib/rateLimit';
export async function POST(req: NextRequest) {
try {
// Rate limiting by IP
const ip = req.ip ?? '127.0.0.1';
await limiter.check(10, ip); // 10 requests per minute per IP
// Your existing code...
} catch (error) {
return NextResponse.json(
{ error: 'Rate limit exceeded' },
{ status: 429 }
);
}
}
2. Input Validation
Install zod (npm install zod), then create lib/validation.ts:
import { z } from 'zod';
export const chatMessageSchema = z.object({
message: z
.string()
.min(1, 'Message cannot be empty')
.max(4000, 'Message too long')
.refine((val) => val.trim().length > 0, 'Message cannot be only whitespace'),
conversationId: z.string().cuid('Invalid conversation ID'), // Prisma IDs use cuid(), not uuid
});
export const createConversationSchema = z.object({
title: z
.string()
.min(1, 'Title cannot be empty')
.max(200, 'Title too long'),
});
Use validation in API routes:
import { z } from 'zod';
import { chatMessageSchema } from '@/lib/validation';
export async function POST(req: NextRequest) {
try {
const body = await req.json();
const { message, conversationId } = chatMessageSchema.parse(body);
// Your existing code...
} catch (error) {
if (error instanceof z.ZodError) {
return NextResponse.json(
{ error: 'Invalid input', details: error.errors },
{ status: 400 }
);
}
// Handle other errors...
}
}
Monitoring and Analytics
1. Error Tracking
Install Sentry:
npm install @sentry/nextjs
Create sentry.client.config.js:
import * as Sentry from '@sentry/nextjs';
Sentry.init({
dsn: process.env.NEXT_PUBLIC_SENTRY_DSN, // client-side env vars need the NEXT_PUBLIC_ prefix
tracesSampleRate: 1.0,
});
2. Usage Analytics
Track important metrics:
// lib/analytics.ts
export function trackChatMessage(userId: string, messageLength: number) {
// Send to your analytics service
if (typeof window !== 'undefined') {
// Client-side tracking
gtag('event', 'chat_message_sent', {
user_id: userId,
message_length: messageLength,
});
}
}
export function trackConversationCreated(userId: string) {
// Track conversation creation
if (typeof window !== 'undefined') {
gtag('event', 'conversation_created', {
user_id: userId,
});
}
}
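Both helpers assume the Google Analytics gtag.js snippet is already loaded on the page. Since gtag isn't declared anywhere in the project, TypeScript also needs an ambient declaration along these lines:

// types/gtag.d.ts — minimal ambient declaration for the gtag.js global
declare function gtag(
  command: 'event',
  eventName: string,
  params?: Record<string, unknown>
): void;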
Conclusion
Congratulations! You've built a production-ready AI chatbot with:
- Real-time streaming responses
- Persistent conversation history
- User authentication and security
- Beautiful, responsive UI
- Rate limiting and validation
- Error handling and monitoring
- Performance optimizations
- Comprehensive testing
Next Steps
Enhancements you can add:
- File upload and image analysis
- Voice input/output
- Conversation sharing
- Custom AI personalities
- Integration with external APIs
- Mobile app with React Native
Scaling considerations:
- Implement Redis for session storage
- Add a CDN for static assets
- Use a message queue for high-volume processing
- Implement database read replicas
- Add monitoring and alerting
This chatbot provides a solid foundation that you can extend and customize for your specific needs. The modular architecture makes it easy to add new features and integrations as your requirements evolve.
Ready to deploy your chatbot? Share your implementation and let the community know how you've customized it for your use case!