Building my first AI Agent - Part 0: Basics
Why am I building an AI agent?
Let's face it, AI is everywhere now, and as skeptical as I was, the truth is that it has come a long way since 2022. My workplace has been pushing it non-stop, and after realizing that AI will always code faster than me, I decided to give it a go on most of my work tasks.
However, for my personal projects I still go old-school with artisanal, handmade coding. But as usual, my curiosity pushes me to learn how things like Claude, Claude Code, ChatGPT and other agentic software work, so I decided to build my own from scratch (without training my own model).
The project I want to build is an assistant that can do knowledge retrieval, tool calling and memorization. The first use case I want to tackle is "Do I need to bring a jacket with me today?". Living in Seattle, that's an everyday question, and I figured an AI could help me decide instead of me stepping outside to see how cold and rainy it is.
This use case encompasses pretty much all the aspects I want to learn: tool calling, the agent loop, and knowledge retrieval.
The plan
As usual, my all-or-nothing thinking got me to buy a couple of books on the subject. The ones I got are:
- Hands-On Large Language Models: Language Understanding and Generation by Alammar and Grootendorst
- AI Engineering: Building Applications with Foundation Models by Huyen
- Building Applications with AI Agents: Designing and Implementing Multiagent Systems by Albada

The things I want to learn, and figure out how they work, are:
- Tool calling: maybe use an MCP server
- Reasoning: can it create a plan for how to tackle the task at hand?
- Memory and Context: can it remember important pieces of information?
- Knowledge Retrieval: implement a way for the AI to get additional knowledge
A stretch goal would be to have subagents, but we'll see how this goes first!
The tools
Since I love using Rust and this is just for my own curiosity, I will keep it simple and go for a Rust implementation. Immersing myself is how I learn best.
For the LLM I will use Ollama, at least at the beginning. This means that while the models are not as "powerful" as Claude or ChatGPT, it will be cheap and anyone can replicate this. I don't want to pick a particular model right now; that's an implementation detail that can easily be changed by selecting a different one.
As for the agentic loop, I could use something like LangChain or LangGraph, but honestly I think I will learn more if I just go raw with Rust and see where I end up. At the end of the day, I want to understand how a full agentic assistant is implemented.
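To give a rough idea of what "going raw" means, here is a minimal sketch of the kind of loop I have in mind. `call_llm` and `run_tool` are hypothetical placeholders for pieces that come in later parts, not real APIs:

```rust
// A minimal agent-loop skeleton: send the conversation to the model,
// run any tool it asks for, feed the result back, and repeat until
// the model produces a final answer.

enum LlmReply {
    // The model wants a tool executed with some arguments.
    ToolCall { name: String, args: String },
    // The model produced a final answer for the user.
    Answer(String),
}

fn call_llm(_history: &[String]) -> LlmReply {
    // Placeholder: send the conversation to the model and parse its reply.
    unimplemented!()
}

fn run_tool(_name: &str, _args: &str) -> String {
    // Placeholder: execute the requested tool and return its output.
    unimplemented!()
}

fn agent_loop(user_message: &str) -> String {
    let mut history = vec![format!("user: {user_message}")];
    loop {
        match call_llm(&history) {
            LlmReply::ToolCall { name, args } => {
                // Append the tool result to the conversation and ask the model again.
                let result = run_tool(&name, &args);
                history.push(format!("tool {name}: {result}"));
            }
            LlmReply::Answer(text) => return text,
        }
    }
}
```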
The project
I like using codenames for my projects; as a kid I loved movies and shows that used them. For this project, I decided to go for Tonalli, an Aztec concept around the mind and the soul. I thought this would go nicely with an AI agent.
Next steps
For now, I think the next thing is to tackle the basic interface for sending messages to the LLM, so I have a connection between the Rust app and Ollama and can have a conversation.
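As a taste of what that first connection could look like, here is a minimal sketch that posts a chat message to Ollama's HTTP API. It assumes Ollama is running locally on its default port (11434), that the model name is just a placeholder for whatever has been pulled, and that the reqwest (with the "blocking" and "json" features) and serde_json crates are in Cargo.toml:

```rust
// Cargo.toml dependencies assumed:
//   reqwest = { version = "0.12", features = ["blocking", "json"] }
//   serde_json = "1"

use serde_json::{json, Value};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();

    // Ollama's chat endpoint takes a model name and a list of messages.
    let body = json!({
        "model": "llama3.2",   // placeholder model name
        "messages": [
            { "role": "user", "content": "Do I need to bring a jacket today?" }
        ],
        "stream": false        // ask for one complete response instead of a stream
    });

    let response: Value = client
        .post("http://localhost:11434/api/chat")
        .json(&body)
        .send()?
        .json()?;

    // The assistant's reply is under message.content in the response JSON.
    println!("{}", response["message"]["content"].as_str().unwrap_or(""));
    Ok(())
}
```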
What to expect in this series
My rough roadmap for this series is:
- Part 1: LLM integration
- Part 2: Tool calling
- Part 3: Agent loop
- Part 4: Reasoning and planning
- Part 5: Memory and context management
- Part 6: Knowledge retrieval
Stay tuned for Part 1!