Building My AI Workstation: Kicking Off the Project

Over the past few years, I’ve watched AI evolve from a research-lab pursuit into part of the everyday workflows of business, productivity, and creativity. Models that once required massive clusters are now accessible to individuals, and the frontier is shifting toward Specialized Language Models (SLMs): models tuned for specific domains, tasks, and contexts.

To push my own understanding forward, I’ve decided to build a personal AI workstation: a high-performance rig that will let me fine-tune, benchmark, and experiment with large models in a hands-on way. This is not just about performance numbers — it’s about exploring how to apply AI practically and learning what it really takes to train and deploy models tailored for business value.

Why This Project?

There are three main goals for this build:

  1. Hands-on learning with SLMs. I want to go deeper into how models can be specialized — what works, what doesn’t, and what resources are required.
  2. Practical experimentation. Fine-tuning models locally provides an environment where I can test ideas quickly, without waiting on shared infrastructure or cloud quotas.
  3. Personal brand and knowledge sharing. By documenting the process here, I aim to demonstrate that I’m not just talking about AI adoption in business — I’m actively experimenting with it, sharing the ups and downs of building and using a system like this.

The Build

After a lot of research and part comparisons, here’s the final component list for the workstation:

  • Motherboard: Gigabyte B650 Aorus Elite AX
  • CPU: AMD Ryzen 9 7900X
  • RAM: G.Skill Flare X5 64GB DDR5-6000 CL30
  • Primary SSD: Samsung 990 Pro 2TB NVMe
  • Secondary SSD: TeamGroup MP44Q 4TB NVMe
  • PSU: Corsair RM1000e (2025)
  • Case: Fractal Pop XL Silent
  • Case Fans: 2 × Arctic P14
  • CPU Cooler: be quiet! Dark Rock Pro 5
  • GPU: NVIDIA RTX 5090 Gaming OC

This combination balances raw compute power (with the 5090 GPU at the core), ample high-speed storage for model weights and datasets, and quiet cooling for a system that will often be under heavy sustained loads.
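Once the OS and drivers are installed, the first thing I’ll do is a quick sanity check that the card is actually visible to the training stack. Here’s a minimal sketch of that check, assuming a CUDA-enabled PyTorch build (the exact environment details are still to be decided):

```python
# Quick sanity check that the GPU is visible to PyTorch.
# Assumes a CUDA-enabled PyTorch install; driver/version details TBD.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}")
    print(f"VRAM: {props.total_memory / 1024**3:.1f} GB")
    print(f"CUDA capability: {props.major}.{props.minor}")
else:
    print("No CUDA-capable GPU detected - check the driver installation.")
```

If the reported VRAM matches the card’s spec, I can size model weights, optimizer state, and batch sizes against it before kicking off any real fine-tuning run.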

What’s Next

Over the coming days, I’ll be posting updates with photos and notes as I build out the system. Expect plenty of real-world detail: cable management victories (and mistakes), BIOS tweaks, thermals, and the first time I load a model that truly pushes the GPU.

From there, I’ll dive into the software side: Ubuntu 24.04 as the primary OS (with a Windows partition for occasional extra use of the GPU…), driver setup, and my initial workflow for fine-tuning language models.
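To give a sense of where that workflow is headed, here’s a rough sketch of the kind of small LoRA fine-tuning run I have in mind. It assumes the Hugging Face transformers, peft, and datasets libraries; the base model and dataset below are placeholders for illustration, not the models I’ll actually be specializing.

```python
# A minimal LoRA fine-tuning sketch (placeholder model and dataset).
# Assumes: pip install torch transformers peft datasets
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "gpt2"  # placeholder; swap in the model being specialized
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach low-rank adapters so only a small fraction of weights is trained.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM"))

# Tiny public dataset standing in for domain-specific data.
data = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
data = data.filter(lambda ex: ex["text"].strip() != "")  # drop blank rows
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=4,
                           num_train_epochs=1,
                           logging_steps=50),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out/adapter")  # saves the adapter weights only
```

The point isn’t this exact script; it’s that a run like this fits comfortably on a single workstation, which is exactly what this build is for.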

This blog is as much about the journey as the destination. My hope is that by the time this project is complete, I’ll have a workstation that not only supports my research and tinkering but also serves as a showcase of how AI exploration can be practical, applied, and — most importantly — shared.