A full-stack, multi-provider AI chat platform where users compose a custom team of AI agents — each with its own persona, model, and expertise — and converse with all of them simultaneously. Built on Angular 19, NestJS, Python LangGraph, and a managed three-provider LLM catalog.
Modern AI tooling has converged on a single-chatbot paradigm: one model, one persona, one answer per message. For users tackling complex decisions — architectural, creative, technical, or strategic — this means getting one filtered perspective from one model's training data, shaped by one static set of instructions. For serious work, that single perspective is rarely sufficient.
The deeper problem is rigidity. Most off-the-shelf AI platforms don't allow per-agent control over personality, tone, language, model version, or provider. A team that wants to compare OpenAI versus Anthropic outputs must juggle separate tools and separate conversations, manually switching context between them. Specialized tasks — math, research, creative writing, domain-specific advice — all get routed to a single generalist regardless of fit.
Building intelligent routing that delegates queries to the most capable specialist — while keeping the experience transparent to the end user — requires orchestration infrastructure that is non-trivial to design, build, and maintain. That was the engineering challenge at the heart of Agents Army: making multi-agent orchestration feel effortless while hiding the complexity underneath.
"When we're tackling complex architectural or business problems, one AI perspective isn't enough — we need multiple expert voices to triangulate the best path forward. The single-chatbot model just wasn't cutting it for serious work."
Agents Army introduces the concept of an agent group: a curated team of AI agents, each configured independently with its own provider, model, personality, tone, language, and visual identity. When a user sends a message, the NestJS ChatService broadcasts it in parallel to all active agents in the selected group — collecting independent, simultaneous responses keyed by agent ID. The user sees a multi-voice conversation, not a single answer.
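The fan-out step can be sketched as follows — a minimal, illustrative version only, where the `Agent` shape and the `callAgent` helper are hypothetical stand-ins rather than the actual ChatService internals:

```typescript
// Illustrative sketch — the Agent shape and callAgent helper are
// hypothetical, not the real ChatService implementation.
interface Agent {
  id: string;
  provider: string; // e.g. "openai", "anthropic"
  model: string;
  persona: string;
}

// Stand-in for a real provider call (OpenAI, Anthropic, etc.).
async function callAgent(agent: Agent, message: string): Promise<string> {
  return `[${agent.persona}] response to: ${message}`;
}

// Broadcast one user message to every active agent in parallel and
// collect the independent responses keyed by agent ID.
async function broadcast(
  agents: Agent[],
  message: string,
): Promise<Record<string, string>> {
  const settled = await Promise.allSettled(
    agents.map(async (a) => [a.id, await callAgent(a, message)] as const),
  );
  const responses: Record<string, string> = {};
  for (const r of settled) {
    // One provider failing should not sink the whole group.
    if (r.status === "fulfilled") responses[r.value[0]] = r.value[1];
  }
  return responses;
}
```

Using `Promise.allSettled` rather than `Promise.all` means a single slow or failing provider simply omits its key, so the UI can render whichever voices did respond.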
For intelligent task routing, a Python LangGraph supervisor service implements a supervisor multi-agent graph pattern. A supervisor agent receives the task, decides which specialist — math agent, research agent — is best suited, and hands off using typed transfer tools. The supervisor never answers directly; it only routes. This runs as a separate Flask microservice behind the same REST interface, completely transparent to the frontend.
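The routing contract — supervisor classifies and transfers, specialists answer, supervisor never answers — can be illustrated with a language-agnostic sketch. The production service uses Python LangGraph with typed transfer tools; the TypeScript below, including the keyword heuristic and specialist names, is a hypothetical simplification of that pattern:

```typescript
// Illustrative sketch of the supervisor pattern — the classify heuristic
// and specialist registry are hypothetical, not the LangGraph service.
type Specialist = "math" | "research";

interface Handoff {
  routedTo: Specialist;
  answer: string;
}

const specialists: Record<Specialist, (task: string) => string> = {
  math: (task) => `math agent solves: ${task}`,
  research: (task) => `research agent investigates: ${task}`,
};

// The supervisor only decides and hands off — it never answers itself.
function supervise(task: string): Handoff {
  const routedTo: Specialist = /\d|[+\-*\/=]|solve|calculate/i.test(task)
    ? "math"
    : "research";
  return { routedTo, answer: specialists[routedTo](task) };
}
```

In the real graph the classification is an LLM decision and the handoff is a typed tool call, but the invariant is the same: every user-visible answer comes from a specialist, never from the router.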
The workspace also ships Archie — a standalone TypeScript CLI tool powered by LangGraph that performs AI-assisted software architecture analysis. Archie builds a persistent knowledge graph from project documents, enables multi-turn Q&A through a conversational interface, and enforces an explicit human-approval gate before writing any analysis output to disk.
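The approval gate follows a simple pattern: analysis output is held in memory until a human explicitly confirms it. A minimal sketch of that shape — all names here are hypothetical, not Archie's actual API:

```typescript
// Illustrative sketch of a human-approval gate — the types and function
// names are hypothetical, not Archie's real interface.
interface AnalysisResult {
  summary: string;
  files: Record<string, string>; // output path -> content to write
}

type Approver = (summary: string) => Promise<boolean>;

// Nothing reaches disk until the approver explicitly says yes.
async function writeWithApproval(
  result: AnalysisResult,
  approve: Approver,
  writeFile: (path: string, content: string) => void,
): Promise<boolean> {
  const approved = await approve(result.summary);
  if (!approved) return false; // rejected: no side effects at all
  for (const [path, content] of Object.entries(result.files)) {
    writeFile(path, content);
  }
  return true;
}
```

Injecting the approver and the writer keeps the gate testable: in the CLI the approver is an interactive prompt, while in tests it can be a stub that always accepts or rejects.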
"Agents Army fundamentally changed how our team interacts with AI. Instead of switching between tools to get different perspectives, we compose an agent group — and every specialist responds at once. It's like having an entire AI team on call, each with a different lens on the same problem."
Tell us what you're building. We'll tell you how fast we can get you there.