AI System
March 20, 2026 · 10 min read

System Architecture

The Hub-and-Spoke model explained. How Memory, Skills, Integrations, Agents, and Voice connect into one coherent system.

Ja Shia

AI Consultant


An AI system without architecture is just a collection of subscriptions...

The Hub-and-Spoke Model

Most people interact with AI through disconnected tools. ChatGPT for writing, Cursor for code, a separate automation platform for workflows. Each tool is an island. None of them know about the others.

The Hub-and-Spoke model changes this. One central hub — your primary AI interface — connects to every other tool through structured spokes. The hub holds your context, your memory, and your preferences. The spokes extend its capabilities.

In my system, Claude Code is the hub. Everything else is a spoke.

The Five Layers

1. Memory

Memory is the foundation. Without it, every interaction starts from zero. Your AI system needs three tiers of memory:

  • Working memory — The current conversation context
  • Episodic memory — Records of past interactions and decisions
  • Semantic memory — Your knowledge base, preferences, and domain expertise

CLAUDE.md files, project documentation, and structured archives all serve as memory. They give the hub persistent context that survives session boundaries.
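The three tiers can be sketched as a small data model. This is an illustrative sketch, not the article's actual implementation; the `MemoryStore` class and its field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Hypothetical model of the three memory tiers."""
    working: list[str] = field(default_factory=list)     # current conversation turns
    episodic: list[dict] = field(default_factory=list)   # records of past sessions
    semantic: dict[str, str] = field(default_factory=dict)  # durable knowledge and preferences

    def end_session(self, summary: str) -> None:
        # Archive the working context as an episode, then clear it —
        # this is the "persistent context that survives session boundaries".
        self.episodic.append({"summary": summary, "turns": list(self.working)})
        self.working.clear()

store = MemoryStore(semantic={"commit_style": "imperative, <= 72 chars"})
store.working.append("user: draft the release notes")
store.end_session("drafted release notes")
print(len(store.episodic), len(store.working))  # 1 0
```

In practice, the episodic and semantic tiers live as files (CLAUDE.md, archives) rather than in-process objects; the point is that only the working tier resets between sessions.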

2. Skills

Skills are reusable capabilities. A skill might be "write a git commit message in this project's style" or "generate a blog post following our content framework." Skills encode your workflows into repeatable patterns the AI can execute consistently.

The key insight: skills are not prompts. They are structured instructions with defined inputs, outputs, and constraints. They live as files in your system, version-controlled and improvable.
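A skill file might look something like the following. The layout here is hypothetical — a sketch of "defined inputs, outputs, and constraints" as a version-controlled file, not an official format:

```markdown
# Skill: commit-message

## Inputs
- The staged diff
- The project's recent commit history (for style)

## Output
- A one-line commit subject, optionally followed by a body

## Constraints
- Imperative mood ("Add", not "Added")
- Subject line under 72 characters
- Reference the issue number when the branch name contains one
```

Because it is a file, it can be reviewed, diffed, and improved like any other artifact in the repository.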

3. Integrations

Integrations connect your AI system to external services. Email, calendars, project management tools, APIs, databases. Each integration is a spoke that extends what the hub can do.

MCP (Model Context Protocol) is the standard that makes this work. Instead of building custom integrations for each tool, MCP provides a universal protocol. One connection pattern, infinite tools.
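Concretely, adding a spoke is often just a config entry. The fragment below follows the `mcpServers` shape used by MCP client configuration files; the specific server package and token shown are illustrative placeholders.

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    }
  }
}
```

The hub reads this once, and every tool the server exposes becomes available through the same protocol — no per-integration glue code.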

4. Agents

Agents are autonomous workflows. Unlike skills (which you invoke), agents run on their own — monitoring, processing, and acting based on triggers. A webhook handler that creates content pages. A lead qualifier that processes inbound emails. A code reviewer that runs on every pull request.

Agents combine skills and integrations into self-running systems.
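The trigger-to-action pattern can be sketched as a small dispatcher. Everything here is hypothetical — the event names, the `on`/`dispatch` helpers, and the handler body stand in for whatever webhook framework you actually use.

```python
from typing import Callable

# Registry mapping trigger types to agent handlers (hypothetical names).
handlers: dict[str, Callable[[dict], str]] = {}

def on(event: str):
    """Register a handler for a trigger type."""
    def register(fn: Callable[[dict], str]):
        handlers[event] = fn
        return fn
    return register

@on("pull_request.opened")
def review_code(payload: dict) -> str:
    # A real agent would invoke a code-review skill through the hub;
    # here we just acknowledge the trigger.
    return f"review queued for PR #{payload['number']}"

def dispatch(event: str, payload: dict) -> str:
    """Route an incoming trigger to its agent, if one is registered."""
    if event not in handlers:
        return "ignored"
    return handlers[event](payload)

print(dispatch("pull_request.opened", {"number": 42}))  # review queued for PR #42
```

The agent is just a skill bound to a trigger: the registry is the "self-running" part, and each handler composes skills and integrations.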

5. Voice

Voice is how you interact with the system. Command line, chat interface, voice input, or API calls. The voice layer translates your intent into actions the hub can route to the right skill, integration, or agent.
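That routing step can be sketched as a keyword-to-target map. This is a deliberately naive illustration — the keywords and target names are made up, and a real voice layer would use the model itself to classify intent rather than string matching.

```python
# Hypothetical routing table: intent keyword -> (layer, target).
ROUTES = {
    "commit": ("skill", "commit-message"),
    "schedule": ("integration", "calendar"),
    "review": ("agent", "code-reviewer"),
}

def route(utterance: str) -> tuple[str, str]:
    """Translate intent into a (layer, target) pair the hub can dispatch."""
    for keyword, target in ROUTES.items():
        if keyword in utterance.lower():
            return target
    return ("hub", "chat")  # no match: fall back to plain conversation

print(route("Review the latest pull request"))  # ('agent', 'code-reviewer')
```

Whatever the input channel — CLI, chat, voice, API — it collapses to the same routing decision, which is why one hub can sit behind many interfaces.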

How They Connect

The power is in the connections, not the individual layers:

  • Memory feeds Skills — Your knowledge base makes skills context-aware
  • Skills power Agents — Agents compose multiple skills into workflows
  • Integrations extend everything — Every layer gets more capable with each new connection
  • Voice unifies access — One interface to control all layers

Building Incrementally

You do not need all five layers on day one. Start with Memory (a CLAUDE.md file). Add one Skill. Connect one Integration. The architecture grows with you.

The goal is not complexity. It is coherence — every piece working together instead of in isolation.

Tags: AI System · Architecture · MCP

The AI Alchemist