2025-04-19 Web Development
How I Built My Personal AI Chatbot for OWolf.com
By O. Wolfson
Recently, I added a custom AI chatbot to my website.
It’s a simple, lightweight assistant — modeled after my professional persona — that can answer questions, remember parts of the conversation, and reference documents I provide.
It's a work in progress, but here’s a behind-the-scenes look at how it works so far.
Core Structure
The chatbot is made of two main parts:
- Frontend (Next.js 15 client component)
- Server actions (Next.js 15 server functions)
The frontend handles user input, shows messages, and calls server-side functions when a question is submitted.
The server-side action processes the question intelligently using OpenAI, Supabase, and a system prompt.
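In code, the contract between those two parts boils down to a single typed function. A minimal sketch (the message shape is my assumption; `askQuestion` is the server action named later in the post):

```typescript
// Shared shape for chat messages passed between client and server (assumed).
export type ChatMessage = { role: "user" | "assistant"; content: string };

// The server action the client calls whenever a question is submitted.
export type AskQuestion = (
  question: string,
  history: ChatMessage[]
) => Promise<string>; // resolves to the assistant's reply
```

Everything else in the post is an implementation of one side or the other of this contract.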
Conversation Flow
Here’s the basic flow when a user sends a message:
- User types a message and hits submit.
- The message is saved locally into a chat history (the `messages` state).
- The client calls a server action, `askQuestion(question, history)`.
- The server:
- Embeds the user's question into a vector.
- Matches relevant documents from a Supabase database (based on similarity search).
- Loads a base system prompt.
- Formats the full context:
- System instructions
- Relevant documents
- The recent conversation history (the last 10 messages)
- The new user question
- Sends that full context to OpenAI.
- OpenAI generates a response based on that full context.
- The new assistant message is added to the chat history.
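Put together, the server side of that flow can be sketched as a single server action. This is a minimal sketch, not the actual implementation: the model names, environment variables, and RPC parameter names (`query_embedding`, `match_count`) are assumptions, and it calls the OpenAI and Supabase REST endpoints directly with `fetch` to stay self-contained:

```typescript
"use server";

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Assemble the full context: system instructions + matched documents +
// the last 10 messages of history + the new question.
function buildMessages(
  systemPrompt: string,
  docs: string[],
  history: ChatMessage[],
  question: string
): ChatMessage[] {
  return [
    {
      role: "system",
      content: `${systemPrompt}\n\nRelevant documents:\n${docs.join("\n\n")}`,
    },
    ...history.slice(-10), // recent conversation history only
    { role: "user", content: question },
  ];
}

// Embed the question into a semantic vector (model name is an assumption).
async function getEmbedding(question: string): Promise<number[]> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "text-embedding-3-small", input: question }),
  });
  return (await res.json()).data[0].embedding;
}

export async function askQuestion(
  question: string,
  history: ChatMessage[]
): Promise<string> {
  const embedding = await getEmbedding(question);

  // Similarity search via the match_mdx_chunks RPC (PostgREST endpoint).
  const rpc = await fetch(
    `${process.env.SUPABASE_URL}/rest/v1/rpc/match_mdx_chunks`,
    {
      method: "POST",
      headers: {
        apikey: process.env.SUPABASE_SERVICE_ROLE_KEY!,
        Authorization: `Bearer ${process.env.SUPABASE_SERVICE_ROLE_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ query_embedding: embedding, match_count: 5 }),
    }
  );
  const chunks: { content: string }[] = await rpc.json();

  // Base system prompt that defines the assistant's personality.
  const { readFile } = await import("fs/promises");
  const systemPrompt = await readFile("prompts/system-prompt.txt", "utf8");

  // Send the complete context to OpenAI and return the reply.
  const completion = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: buildMessages(
        systemPrompt,
        chunks.map((c) => c.content),
        history,
        question
      ),
    }),
  });
  return (await completion.json()).choices[0].message.content;
}
```

In production you would reach for the official `openai` and `@supabase/supabase-js` clients rather than raw `fetch`, but the shape of the flow is the same.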
Memory Without True Memory
One clever part:
Although there’s no "true memory" (like a database of past chats), the chatbot pretends to remember by sending the last 10 messages along with each new question.
This lets the AI keep the conversation thread alive naturally — it "feels" like memory to the user.
If you keep chatting, the AI will still be able to refer to previous topics because those are continually included in the prompt.
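The sliding window itself is essentially one line. A sketch, using the window size of 10 from the post:

```typescript
type ChatMessage = { role: "user" | "assistant"; content: string };

// Sliding-window "memory": keep only the most recent messages so the prompt
// stays small while the conversation still feels continuous to the user.
const MEMORY_WINDOW = 10;

function recentHistory(messages: ChatMessage[]): ChatMessage[] {
  return messages.slice(-MEMORY_WINDOW);
}
```

Older messages simply fall out of the window, which is the trade-off: the assistant can refer to the last ten turns, but nothing before them.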
Technical Details
- Frontend: A simple `ChatPage` React component with `useState`, `useTransition`, and `useRef` for managing input, auto-scrolling, and message updates.
- Server Actions: An `askQuestion` function that uses:
  - `getEmbedding(question)` to create a semantic vector.
  - A Supabase RPC (`match_mdx_chunks`) to fetch similar documents.
  - A system prompt file (`/prompts/system-prompt.txt`) that defines the assistant's personality.
  - `chatWithContext(question, [fullContext])` to call OpenAI.
- Supabase: A Postgres database with a `match_mdx_chunks` function for fast semantic search using pgvector.
- Styling: Tailwind CSS with custom shadcn/ui components for inputs, buttons, and cards.
The System Prompt
At the heart of the chatbot is the system prompt, which defines its identity:
"You are OWolf, the AI version of O. Wolfson (Oliver Wolfson), a professional software developer. You live at https://owolf.com, O. Wolfson's personal blog, where he journals about his software development journey."
This keeps the AI grounded in my voice, my background, and my professional context — even when users ask unexpected questions.
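On the server, that prompt file can be read at request time. A minimal sketch (the file path comes from the post; the helper name and the parameter default are mine):

```typescript
import { readFile } from "fs/promises";
import { join } from "path";

// Read the system prompt from /prompts/system-prompt.txt (path from the post).
// Accepting the path as a parameter keeps the helper easy to test.
export async function loadSystemPrompt(
  file: string = join(process.cwd(), "prompts", "system-prompt.txt")
): Promise<string> {
  return (await readFile(file, "utf8")).trim();
}
```

Keeping the prompt in a plain text file means the assistant's persona can be edited without touching any code.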
Why Build It?
I wanted a chatbot that feels like an extension of my site and my writing.
Not a generic AI, but something that:
- Knows my content
- Stays in character
- Understands the flow of an ongoing conversation
And importantly, something that runs efficiently, respects privacy, and is easy to extend later.
What's Next?
Future ideas I’m exploring:
- Streaming responses (typing effect like ChatGPT)
- Saving full chat sessions to Supabase for resuming conversations
- Markdown rendering for better display of code and formatting
- Different personalities or expert modes (e.g., "mentor mode", "debug mode")
If you're curious to see the chatbot in action, feel free to try it at owolf.com/chat.