AI Platform · Multi-Agent · SaaS · Full-Stack · Developer Tools

Agents Army · Multi-Agent AI

A full-stack, multi-provider AI chat platform where users compose a custom team of AI agents — each with its own persona, model, and expertise — and converse with all of them simultaneously. Built on Angular 19, NestJS, Python LangGraph, and a managed three-provider LLM catalog.

3 Providers
OpenAI · Anthropic · Google Gemini
Parallel Responses
Every agent answers simultaneously
4 Services
Frontend · API · Orchestration · CLI
The Challenge

One chatbot, one answer —
never enough.

Modern AI tooling has converged on a single-chatbot paradigm: one model, one persona, one answer per message. For users tackling complex decisions — architectural, creative, technical, or strategic — this means getting one filtered perspective from one model's training data, shaped by one static set of instructions. For serious work, that single perspective is rarely sufficient.

The deeper problem is rigidity. Most off-the-shelf AI platforms don't allow per-agent control over personality, tone, language, model version, or provider. A team that wants to compare OpenAI versus Anthropic outputs must juggle separate tools and separate conversations, manually switching context between them. Specialized tasks — math, research, creative writing, domain-specific advice — all get routed to a single generalist regardless of fit.

Building intelligent routing that delegates queries to the most capable specialist — while keeping the experience transparent to the end user — requires orchestration infrastructure that is non-trivial to design, build, and maintain. That was the engineering challenge at the heart of Agents Army: making multi-agent orchestration feel effortless, not complicated.

"When we're tackling complex architectural or business problems, one AI perspective isn't enough — we need multiple expert voices to triangulate the best path forward. The single-chatbot model just wasn't cutting it for serious work."

One message. Every agent
responds at once.

Agents Army introduces the concept of an agent group: a curated team of AI agents, each configured independently with its own provider, model, personality, tone, language, and visual identity. When a user sends a message, the NestJS ChatService broadcasts it in parallel to all active agents in the selected group — collecting independent, simultaneous responses keyed by agent ID. The user sees a multi-voice conversation, not a single answer.
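The fan-out described above can be sketched as a parallel broadcast that collects results keyed by agent ID. This is an illustrative sketch, not the actual ChatService: the `Agent` shape, `callProvider`, and `broadcastToGroup` are hypothetical names, and the stand-in provider call replaces the real LLM SDK requests.

```typescript
// Hypothetical sketch of the parallel fan-out to an agent group.
interface Agent {
  id: string;
  name: string;
  provider: "openai" | "anthropic" | "gemini";
}

// Stand-in for a per-provider SDK call (the real service hits the LLM APIs).
async function callProvider(agent: Agent, message: string): Promise<string> {
  return `[${agent.name}] reply to: ${message}`;
}

// Broadcast one user message to every active agent and collect
// responses keyed by agent ID, so the UI can render a multi-voice thread.
async function broadcastToGroup(
  agents: Agent[],
  message: string
): Promise<Record<string, string>> {
  const settled = await Promise.allSettled(
    agents.map(async (a) => [a.id, await callProvider(a, message)] as const)
  );
  const byAgent: Record<string, string> = {};
  for (const r of settled) {
    if (r.status === "fulfilled") {
      const [id, reply] = r.value;
      byAgent[id] = reply;
    } // a failed agent is simply absent; the rest of the thread still renders
  }
  return byAgent;
}
```

Using `Promise.allSettled` rather than `Promise.all` means one slow or failing provider never blocks the other agents' answers.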

For intelligent task routing, a Python service built on LangGraph implements the supervisor multi-agent pattern. A supervisor agent receives each task, decides which specialist — the math agent or the research agent — is best suited, and hands off using typed transfer tools. The supervisor never answers directly; it only routes. The service runs as a separate Flask microservice behind the same REST interface, completely transparent to the frontend.
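The routing contract can be reduced to a minimal sketch: the supervisor classifies the task, hands off, and returns the specialist's answer verbatim. The real service uses Python LangGraph with LLM-driven transfer tools; the keyword heuristic and all names below (`route`, `handle`, `Specialist`) are illustrative stand-ins.

```typescript
// Simplified sketch of supervisor routing: the supervisor only routes,
// it never produces an answer of its own.
type Specialist = "math_agent" | "research_agent";

// Stand-in for the supervisor's decision (an LLM call in the real graph).
function route(task: string): Specialist {
  const mathHints = /\d|calculate|solve|equation/i;
  return mathHints.test(task) ? "math_agent" : "research_agent";
}

// Stand-in specialists; each would be its own agent node in LangGraph.
const specialists: Record<Specialist, (task: string) => string> = {
  math_agent: (t) => `math result for "${t}"`,
  research_agent: (t) => `research findings for "${t}"`,
};

// Handoff: transfer the task and return the specialist's answer unchanged.
function handle(task: string): string {
  return specialists[route(task)](task);
}
```

Because the supervisor's output is exactly the specialist's output, the frontend never needs to know routing happened at all.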

The workspace also ships Archie — a standalone TypeScript CLI tool powered by LangGraph that performs AI-assisted software architecture analysis. Archie builds a persistent knowledge graph from project documents, enables multi-turn Q&A through a conversational interface, and enforces an explicit human-approval gate before writing any analysis output to disk.

  • Angular 19 frontend — agent group management, model catalog selection, and real-time multi-voice chat interface
  • NestJS REST API with TypeORM and MySQL — ChatServiceFactory routes each request to the correct LLM SDK based on the agent's configured provider
  • Python Flask + LangGraph supervisor orchestration service — typed agent handoffs with math and research specialists running on Gemini 2.0 Flash
  • Managed AI model catalog — Free, Premium, and Enterprise tier assignments, extensible without redeployment
  • Per-agent configuration across six dimensions: provider, model, persona, tone, language, and visual identity
  • Archie CLI — LangGraph-powered architecture analysis tool with persistent knowledge graph (context.json) and human-in-the-loop approval gate
  • Full conversation persistence — messages, agent responses, and metadata stored in a relational database with complete audit trail
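The provider dispatch in the second bullet can be sketched as a simple factory: the agent's configured provider selects the matching client. This is a hypothetical sketch — `LlmClient`, `clientFor`, and the inline stubs stand in for wrappers around the official OpenAI, Anthropic, and Gemini SDKs.

```typescript
// Hypothetical sketch of provider dispatch in the style of a ChatServiceFactory.
interface LlmClient {
  complete(prompt: string): Promise<string>;
}

type Provider = "openai" | "anthropic" | "gemini";

// Stand-in clients; the real factory wraps the official SDKs.
const clients: Record<Provider, LlmClient> = {
  openai: { complete: async (p) => `openai: ${p}` },
  anthropic: { complete: async (p) => `anthropic: ${p}` },
  gemini: { complete: async (p) => `gemini: ${p}` },
};

// Look up the client for an agent's configured provider,
// failing loudly on anything outside the catalog.
function clientFor(provider: Provider): LlmClient {
  const client = clients[provider];
  if (!client) throw new Error(`Unsupported provider: ${provider}`);
  return client;
}
```

Centralizing the lookup in one factory is what lets the model catalog grow without touching call sites: adding a provider means adding one entry to the map.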
Technology Stack

Four services. Three providers. One platform.

Frontend
Angular 19 · TypeScript · RxJS
Backend API
NestJS · TypeORM · MySQL · REST API
AI Orchestration
Python · LangGraph · LangChain · Flask
LLM Providers
OpenAI SDK · Anthropic SDK · Google Gemini
CLI Tool (Archie)
Node.js · TypeScript · LangGraph · OpenAI
The Outcome

Every question gets
a team of answers.

3 LLMs
Providers unified in one interface
All Agents
Respond in parallel to every message
4 Repos
Frontend · API · Orchestration · CLI

Agents Army fundamentally changed how our team interacts with AI. Instead of switching between tools to get different perspectives, we compose an agent group — and every specialist responds at once. It's like having an entire AI team on call, each with a different lens on the same problem.

Yaniv Amrami · Founder, Agents Army
Start Your Project
Ready to build
yours?

Tell us what you're building. We'll tell you how fast we can get you there.

Start a Project · See All Work