VP by day.
Builder the rest of the time.
I lead pre-sales architects across North America for a leading database, cloud, enterprise applications, and AI company. On weekends I fine-tune small models, build RAG eval harnesses, and ship local LLM systems on an RTX 5090. This is the lab notebook.
Robot Car: Origin Story (How Our Little Hero Got His Wheels)
I have a working robot car in my office. I'd never built one before, and I didn't read the manual. Here's what the build looked like when the AI had the knowledge and I had the hands.
The Prompt Is the Program
Operating systems were built for humans. What happens when the user is an AI? I built a robot car driven entirely by an LLM to find out.
Claude Memory V2: What I Learned Running AI Memory in Production
Three weeks of running persistent memory for Claude Code taught me what actually matters—and led to a complete restructure.
Unifying 12 Years of Knowledge: From OneNote to Local Markdown to Oracle AI Database
How I migrated 5,178 OneNote documents spanning 12 years and built auto-sync for local markdown notes, turning a scattered archive into a unified, AI-searchable knowledge base powered by Oracle AI Database 26ai.
Building Persistent Memory for Claude Code
How I built a cross-machine memory system for Claude Code using MCP, PostgreSQL, and semantic search.
AIOS ThinkTank
What does an operating system look like when the user is an LLM? I'm building a robot car driven entirely by AI to find out — hardware, firmware, and the prompt layer that ties them together.
SLM Research
Field notes from testing the Specialized Language Model thesis: that small models fine-tuned on the right data beat large general-purpose ones on the work I actually do.
Building a Knowledge Management App with Oracle AI Database 26ai
Building a multimodal knowledge base on Oracle AI Database 26ai — CLIP-in-database, JSON Duality, Vector Search, and a vision-aware RAG pipeline that actually sees images rather than falling back on their captions.
Building an SLM Powerhouse
Designing, building, and stress-testing a local workstation for fine-tuning specialized language models. RTX 5090, 22-hour training runs, and the part list that actually ships.