AI Package

@cdoing/ai — The intelligence layer containing the agentic loop, LLM providers, system prompt builder, and context management.

Overview

The AI package is the brain of the system. It manages the continuous cycle of LLM inference and tool execution that makes the agent work autonomously.

```
packages/ai/src/
├── agent-runner.ts      # Agentic loop + streaming (~19k lines)
├── provider.ts          # Multi-provider LLM factory
├── system-prompt.ts     # System prompt builder
└── context-manager.ts   # Token tracking + cost calculation
```

Agent Runner

The agent runner implements the core agentic loop. It's the most critical component in the system.

How the Loop Works

1. **User sends a message.** The message is added to the conversation history along with any context from @ mention providers.
2. **LLM responds (streamed).** The response is streamed token-by-token. The LLM can respond with either plain text or tool calls (or both).
3. **Tool calls are executed.** Each tool call passes through: pre-hooks → permission check → execution → post-hooks. Results are fed back to the LLM.
4. **Loop continues.** The LLM receives tool results and either makes more tool calls or returns a final text response to the user.


LLM Providers

The provider factory creates LLM instances for any supported provider:

| Provider  | Package                                  | Default Model       |
|-----------|------------------------------------------|---------------------|
| Anthropic | `@langchain/anthropic`                   | `claude-sonnet-4-6` |
| OpenAI    | `@langchain/openai`                      | `gpt-4o`            |
| Google    | `@langchain/google-genai`                | `gemini-2.0-flash`  |
| Ollama    | `@langchain/openai` (compatible)         | `llama3.1`          |
| Custom    | `@langchain/openai` (OpenAI-compatible)  | User-defined        |

Provider Enum

```typescript
export enum ModelProvider {
  ANTHROPIC = "anthropic",
  OPENAI = "openai",
  GOOGLE = "google",
  OLLAMA = "ollama",
  CUSTOM = "custom",
}
```
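The factory's dispatch over this enum can be sketched as follows. This is an illustration of the shape, not the real `provider.ts`: the `PROVIDERS` table mirrors the defaults listed above, and `resolveModel` is a hypothetical helper (the actual factory would construct LangChain chat-model instances from these packages).

```typescript
// Hypothetical sketch of provider dispatch; mirrors the table above.
enum ModelProvider {
  ANTHROPIC = "anthropic",
  OPENAI = "openai",
  GOOGLE = "google",
  OLLAMA = "ollama",
  CUSTOM = "custom",
}

interface ProviderConfig {
  package: string;
  defaultModel: string | null; // null = user must supply a model
}

const PROVIDERS: Record<ModelProvider, ProviderConfig> = {
  [ModelProvider.ANTHROPIC]: { package: "@langchain/anthropic", defaultModel: "claude-sonnet-4-6" },
  [ModelProvider.OPENAI]: { package: "@langchain/openai", defaultModel: "gpt-4o" },
  [ModelProvider.GOOGLE]: { package: "@langchain/google-genai", defaultModel: "gemini-2.0-flash" },
  [ModelProvider.OLLAMA]: { package: "@langchain/openai", defaultModel: "llama3.1" },
  [ModelProvider.CUSTOM]: { package: "@langchain/openai", defaultModel: null },
};

// Resolve the model name: an explicit choice wins, otherwise fall
// back to the provider default; custom providers have no default.
function resolveModel(provider: ModelProvider, model?: string): string {
  const resolved = model ?? PROVIDERS[provider].defaultModel;
  if (resolved == null) {
    throw new Error(`Provider "${provider}" requires an explicit model name`);
  }
  return resolved;
}
```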

OAuth Support

Anthropic OAuth is supported for Claude Pro/Max users with Bearer authentication and beta headers. The CLI provides --login and --logout commands for managing OAuth sessions.
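The request side of this can be sketched as a small header builder. The Bearer `Authorization` header is standard OAuth; the name and value of the beta header are assumptions for illustration, not confirmed from the package source.

```typescript
// Sketch of attaching OAuth credentials to Anthropic requests.
// The beta header value below is an assumed placeholder.
function oauthHeaders(accessToken: string): Record<string, string> {
  return {
    Authorization: `Bearer ${accessToken}`,       // standard OAuth bearer auth
    "anthropic-beta": "oauth-beta-flag-here",     // assumed beta header value
  };
}
```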

System Prompt Builder

The system prompt is dynamically constructed based on the current state rather than stored as a static string.
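A dynamic builder of this shape can be sketched as below. The `PromptState` fields (working directory, tool names, custom instructions) are plausible inputs chosen for illustration, not the actual fields used by `system-prompt.ts`.

```typescript
// Hypothetical sketch of a dynamic system-prompt builder: sections
// are assembled from the current state at call time.
interface PromptState {
  workingDirectory: string;
  toolNames: string[];
  customInstructions?: string; // e.g. from a project config file
}

function buildSystemPrompt(state: PromptState): string {
  const sections = [
    "You are an autonomous coding agent.",
    `Working directory: ${state.workingDirectory}`,
    `Available tools: ${state.toolNames.join(", ")}`,
  ];
  // Optional sections are appended only when the state provides them.
  if (state.customInstructions) sections.push(state.customInstructions);
  return sections.join("\n\n");
}
```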

Context Manager

The context manager handles the conversation's token budget.

Token Tracking
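Token tracking amounts to keeping a running tally of usage across turns. A minimal sketch, with field names that are assumptions rather than the package's actual types:

```typescript
// Sketch of per-conversation token tracking; names are assumptions.
interface Usage {
  inputTokens: number;
  outputTokens: number;
}

class TokenTracker {
  private totals: Usage = { inputTokens: 0, outputTokens: 0 };

  // Called once per LLM turn with that turn's reported usage.
  record(turn: Usage): void {
    this.totals.inputTokens += turn.inputTokens;
    this.totals.outputTokens += turn.outputTokens;
  }

  // Total tokens consumed so far, used to decide when to compress.
  total(): number {
    return this.totals.inputTokens + this.totals.outputTokens;
  }
}
```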

Context Compression
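One common compression strategy, when the history outgrows the budget, keeps the most recent messages and collapses everything older into a single summary entry. The sketch below assumes that strategy; the real policy in `context-manager.ts` may differ.

```typescript
// Sketch of a compression pass: keep the newest messages that fit
// the budget, fold older ones into one summary placeholder.
// The strategy itself is an assumption for illustration.
type Msg = { role: string; content: string };

function compress(
  history: Msg[],
  budget: number,
  countTokens: (m: Msg) => number,
): Msg[] {
  const kept: Msg[] = [];
  let used = 0;
  // Walk backwards from the newest message, keeping what fits.
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = countTokens(history[i]);
    if (used + cost > budget) break;
    kept.unshift(history[i]);
    used += cost;
  }
  if (kept.length === history.length) return history; // under budget
  // Everything older collapses into a single summary entry (a real
  // implementation would summarize with the LLM, not a placeholder).
  const dropped = history.length - kept.length;
  return [{ role: "system", content: `[Summary of ${dropped} earlier messages]` }, ...kept];
}
```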

Cost Tracking

The context manager calculates approximate cost per provider based on token usage and provider-specific pricing.
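The calculation reduces to a per-million-token rate table. In the sketch below the prices are illustrative placeholders, not the package's actual rate table:

```typescript
// Sketch of per-provider cost estimation.
// Prices are illustrative placeholders, not real provider pricing.
interface Pricing {
  inputPerMTok: number;  // USD per million input tokens
  outputPerMTok: number; // USD per million output tokens
}

const PRICING: Record<string, Pricing> = {
  anthropic: { inputPerMTok: 3.0, outputPerMTok: 15.0 }, // placeholder
  openai:    { inputPerMTok: 2.5, outputPerMTok: 10.0 }, // placeholder
  ollama:    { inputPerMTok: 0,   outputPerMTok: 0 },    // local models are free
};

function estimateCost(provider: string, inputTokens: number, outputTokens: number): number {
  const p = PRICING[provider];
  if (!p) return 0; // unknown provider: no estimate
  return (inputTokens * p.inputPerMTok + outputTokens * p.outputPerMTok) / 1_000_000;
}
```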

Dependencies
The AI package depends on: @langchain/core, @langchain/anthropic, @langchain/openai, @langchain/google-genai, and @cdoing/core.