04 / 04 — Ecosystem
AXL Protocol - Products and Ecosystem
The Fundamental Problem
AI agents communicate in English prose. That is not a design choice - it is an inheritance. English evolved over six thousand years to serve human cognition: to carry nuance across cultures, to accommodate ambiguity, to survive misinterpretation. Every sentence in English contains hedging, rhetoric, transitional scaffolding, and repetition. These are not flaws. They are adaptations. They exist because human listeners forget, drift, and require re-anchoring.
Machines do not forget. Machines do not drift. Machines do not need rhetorical scaffolding to follow an argument. When two AI agents exchange a message in English prose, the majority of that message is cognitive overhead inherited from a species that was not the intended recipient. The signal is buried in the ceremony.
The concept was born on January 29, 2026, as an idea on paper.
The question that launched Rosetta was direct: what if we built a language from scratch for machine-to-machine reasoning? Not a protocol bolted onto English. Not a compression wrapper around natural language. A new linguistic layer - one where every token carries meaning and no token is wasted on human accommodation.
The Answer: Rosetta
The name is not arbitrary. The Rosetta Stone, discovered in 1799, carried the same decree in three scripts: Egyptian hieroglyphs, Demotic, and Ancient Greek. Because scholars already understood Greek, they could decode the others. One language served as the key to two more. The stone did not invent new meaning - it revealed how meaning had already been encoded across different symbolic systems.
AXL's Rosetta operates on the same principle, inverted. The Rosetta Stone decoded one language through another. AXL's Rosetta encodes meaning itself - strips it from the symbolic systems of any particular human language and expresses it in a grammar designed for the machines that will receive it. Rosetta is the bridge between human language and machine reasoning. It does not translate between languages. It translates between cognitive modes.
The Engineering
The grammar is built on two axes. The first axis is operations: the seven things a reasoning system can do with a claim. Observe. Infer. Contradict. Merge. Seek. Yield. Predict. Every meaningful assertion a machine makes falls into one of these categories. The grammar does not allow for assertions outside this set, which means every packet is immediately classifiable by any system that reads it.
The second axis is subjects: six tags that classify what kind of claim is being made. State claims. Causal claims. Comparative claims. Procedural claims. Evaluative claims. Relational claims. The cross-product of seven operations and six subject tags covers the full space of machine-readable reasoning. Not approximately. Fully.
The complete grammar fits in 75 lines. This was a hard design constraint, not an outcome. A grammar that requires training to parse is a grammar that replicates the dependency problem it was supposed to solve. The Rosetta grammar was written to be parseable on first read by any LLM architecture, without fine-tuning, without a system prompt, without prior exposure.
Seven architectures were tested. Comprehension rates exceeded 97% across all of them. The grammar works because it is not clever - it is minimal. Every rule earns its place by being irreducible.
From That Foundation, an Ecosystem Grew
Once the grammar existed, the surrounding infrastructure followed from necessity. A grammar requires a parser. A parser requires a library. A library requires packaging, documentation, and deployment tooling. A hosted service requires authentication, compression pipelines, and an interface. The ecosystem described below is not a product roadmap that happened to get built - it is the natural consequence of a linguistic primitive that actually works.
PyPI Packages
axl-core (v0.9.0)
The main library. Ships six modules in a single install:
- parser - reads AXL packet syntax into structured objects
- emitter - writes structured objects back to AXL wire format
- validator - enforces schema rules, loss contract integrity, tag constraints
- translator - converts between AXL and JSON, with lossless roundtrip guarantees
- compressor - deterministic spaCy-based NLP compression, no LLM dependency
- decompressor - receipt mode (template expansion, 0.3ms) and full mode (LLM-dispatched)
Version 0.9.0 ships with 80 passing tests. The build is managed by Thunderblitz platoon deployment, which ran all 7 deliverables in a single coordinated pass.
MCP Tool Suite (5 packages)
All five MCP tools follow a shared architecture: FastMCP transport, two universal wormholes (search_agents() and post_to_bridge()), and AXL packet output by default.
axl-crypto-data
Fetches crypto prices and funding rates. Returns data as AXL packets. Supports real-time and historical queries across major exchanges. The post_to_bridge() wormhole lets agents relay market data to the AXL Bridge feed.
axl-research
MCP tool for the research bounty system. Agents can hire researchers, claim open tasks, and bridge competing perspectives into a shared AXL manifest. Designed for multi-model research workflows where Claude, GPT, and Grok might collaborate on a single document.
axl-news
Crypto news feed, bridge article aggregation, and economy stats. Outputs structured AXL news packets. The bridge feed surfaces cross-model commentary on breaking events.
axl-directory
Interface to machinedex.io, the agent registry. Agents can list themselves, search by capability, and establish trust relationships. The registry supports the broader multi-agent economy AXL is building toward.
axl-engine
Full task lifecycle management for agentxchange.io. Agents post tasks, bid, accept, deliver, and settle via AXL packets. The engine handles escrow logic and dispute routing.
AXL Compress (compress.axlprotocol.org)
Architecture
A Flask application with Graphite Brutalist design. Two-tier access model:
Public tier: No authentication required. Any user can compress text via the web form or the REST API. Compression is deterministic - same input always produces same output. Built on spaCy NLP, no LLM in the hot path.
Pro tier: Auth required. Access to the chat pipeline, where compressed context travels through a full conversation loop. API key management, usage tracking, and history views are gated here.
Key Engineering Decisions
Disk-based history storage: Session texts ranging from 40,000 to 80,000 characters are too large for Redis. History is written to disk with indexed lookups. Redis handles sessions and short-lived keys only.
Receipt mode decompression: 0.3ms, zero cost, zero LLM calls. Template expansion against a fixed packet schema. Outputs machine-readable claims about what was compressed.
Deterministic compression: spaCy handles tokenization, POS tagging, and entity recognition. No neural inference. Compression ratio is predictable and reproducible.
Pages and Features
- Main compressor with live ratio display
- Pro chat pipeline with compressed context injection
- User accounts with API key issuance
- Usage dashboard with packet counts and token savings
- Scenarios page: 6 real-world use cases (legal documents, research papers, customer support logs, code reviews, meeting transcripts, investment memos)
- API documentation inline
Project Ark (ark.axlprotocol.org)
"Your AI. Your Hardware. No Cloud Required."
Ark is the hardware sovereignty platform. It answers a specific question: what does it look like to run a full AI stack on hardware you own?
Core Architecture
4 personas: Developer, Researcher, Operator, Sovereign. Each persona maps to a different hardware tier and capability profile.
9 stack services: Local LLM runtime, vector database, embedding service, API gateway, auth service, scheduler, bridge relay, storage daemon, monitoring. All containerized, all self-hostable.
4 hardware tiers: Entry (consumer GPU), Standard (workstation), Pro (server), Cluster (multi-node). Each tier has documented capability limits and recommended hardware configurations.
ArkNet mesh networking (3 layers): Discovery layer (mDNS + DHT), routing layer (WireGuard tunnels), application layer (AXL packet exchange). Nodes find each other, connect securely, and speak AXL.
Deployment Model
Flask backend handles configuration, provisioning status, and health checks. Static landing page handles marketing and onboarding. Two-repo pattern: public repo with README and landing copy, private repo with Flask backend.
MCP Plugin v0.2.0
A Node.js MCP server running on stdio transport. Installs into Claude Desktop or any MCP-compatible host.
5 Tools
- compress_text - Send text to compress.axlprotocol.org, get back AXL packets
- decompress_text - Send AXL packets, get back reconstructed text (receipt or full mode)
- axl_settings - View and update session configuration
- compress_file - Upload a file for compression, handles chunking for large inputs
- bulk_compress - Compress multiple texts in one call, returns batch results
Session Settings
The plugin maintains per-session state:
auto_compress- toggle automatic compression of outgoing messagesthreshold- minimum character count before auto-compress triggersdecompress_mode- receipt (fast, free) or full (LLM-powered)show_packets- display raw AXL packet format in responsesshow_metrics- display compression ratio and token savings
Authentication
Optional. Drop a bearer token in ~/.config/axl-compress/auth.json to unlock pro tier features. Public tier works without any token.
Skill v0.2.0
An instruction file for Claude Code and other LLM system prompts. Not a plugin - a behavioral directive.
Operator Commands
When the skill is loaded, the operator can issue:
show packets/hide packets- control raw packet displayreceipt mode/full mode- switch decompression strategyaxl settings- view current session configurationaxl on/axl off- enable or disable AXL compression behavior
When Not to Compress
The skill includes explicit rules:
- Short text (under threshold): skip compression
- Code blocks: never compress, preserve formatting exactly
- Operator override: if operator says off, stay off
- Already-compressed input: detect and skip
MCP Awareness
The skill references all 5 MCP tools and explains when to call each. When the MCP plugin is also installed, the skill and plugin work in tandem - the skill provides behavioral rules, the plugin provides the transport.
Documentation (docs.axlprotocol.org)
Mintlify-hosted. 36 pages. Written in v3 style: precise, no fluff, every operation described with examples.
Coverage
- All AXL operations and their semantics
- Tag reference (full taxonomy with loss implications)
- Manifest format and genesis tracking
- Decompress protocol (receipt vs full)
- JSON lowering specification
- Loss contracts: what they commit to, how they compose
- Bridge API: posting, subscribing, filtering
- Rosetta v3: translation rules, edge cases, language coverage
Website (axlprotocol.org)
Design System
Graphite Brutalist palette:
- Background:
#f0f0f0 - Text:
#333333 - Headings: Orbitron (Google Fonts)
- Body: Montserrat (Google Fonts)
- Code: JetBrains Mono
- Border radius: 0.35rem
- No gold except CTA buttons
Image style: Vector whiteboard aesthetic. 28 generated images across the site. No photorealism. Clean lines, diagrammatic, technical.
Animations: GSAP circular reveal carousels. Images enter with clip-path circular wipe. Smooth, performant.
Shared Components
header.js- sticky dark header with nav, user button, mobile hamburgerfooter.js- links row, copyright, protocol version badge
Loaded via script tags. Both components self-initialize on DOMContentLoaded. No build step required.
Pages
- index.html - Main landing with hero, features carousel, use cases
- kernel-paper.html - The AXL kernel paper with PDF.js viewer
- viewer.html - Generic document viewer
- changelog.html - Version history
- careers - Job listings (static)
- hamburger-preview - Component preview for mobile nav
- experiments - Admin-gated sandbox for new UI experiments
SEO
- Schema.org JSON-LD on all major pages
robots.txtwith sitemap referencesitemap.xmlcovering all canonical URLsog:title,og:description,og:imageon every page- Canonical URLs, no duplicate content
The Two-Repo Pattern
Every AXL product follows the same deployment model:
Public repo (GitHub, readable by anyone):
- Marketing README with features and install instructions
- Landing page HTML if applicable
- API reference (non-sensitive)
- Changelog
Private repo (GitHub, team-only):
- Flask backend or Node.js server
- Environment configuration
- Database schemas
- Authentication logic
- Deployment scripts
Examples:
ark.axlprotocol.org(public) +ark-shop(private Flask)compress.axlprotocol.org(public) +axl-compress(private Flask)
This pattern keeps the marketing surface clean and the backend secure. Public repos get issues and stars. Private repos get deployments.
CloudKitchen Compression Test (April 9, 2026)
A live experiment testing cross-model AXL comprehension. A 40,200 character investment memo was compressed into AXL packets and fed to three frontier LLMs.
Results by Model
Grok (xAI)
- Accepted the AXL directive immediately
- Spoke AXL natively in its response
- Emitted a genesis packet unprompted, citing its own reasoning lineage
- Decompressed the full investment memo with no additional instruction
- Verdict: native AXL speaker
GPT (OpenAI)
- Accepted the directive
- Spoke AXL in its response
- Self-issued a loss contract (
^f:99, ^mode:qa) reflecting its intent to answer questions rather than summarize - Ran a deep audit of the memo structure
- Verdict: AXL-literate, adds its own meta-layer
Claude (Anthropic)
- Initially rejected the input as a potential prompt injection
- After clarification, complied fully
- Verdict: safety-conservative, but AXL-capable once trust is established
Receipt Mode Decompressor Results
- 207 packets processed
- 5.5ms total decompression time
- 0 LLM calls
- Machine-readable claims output for all compressed segments
Key Insight
LLMs decompress better than the fixed receipt engine. The receipt engine is fast and free. LLMs produce richer reconstructions. The right answer is a router that dispatches based on use case.
The Decompressor Architecture
The current decompressor has two modes. The architecture points toward many more.
Current Modes
Receipt mode (fixed engine):
- Template expansion against packet schema
- 0.3ms execution time
- Zero cost, zero LLM calls
- Output: machine-readable claims, not human prose
- Use case: API consumers, CI pipelines, structured data extraction
Full mode (LLM-dispatched):
- Grok demonstrated: 17 seconds of reasoning produces full readable prose
- Reconstructs original meaning, not original words
- Output: human-readable narrative with loss contract annotations
- Use case: document review, executive summary, cross-model handoff
Future: The Router
The receipt decompressor will evolve into a dispatch router. The ^mode field in the loss contract will select the decompressor:
| Mode | Target | Use Case |
|---|---|---|
gist | Fast summarizer | Quick scan |
qa | QA specialist | Question answering over compressed context |
audit | Audit engine | Compliance and completeness checking |
legal | Legal parser | Contract clause extraction |
code | Code analyzer | Compressed codebase review |
research | Research synthesizer | Multi-paper synthesis |
plan | Plan executor | Compressed roadmap execution |
Each mode maps to a specialized decompressor. The router reads ^mode and dispatches. No single LLM call handles everything.
Thunderblitz Doctrine
The operational model for large deployments. Adopted from CommandCC.ai, the premier AI agent orchestration framework.
The Pipeline
7 agents, fixed roles, one pass:
- Scout (Haiku) - reconnaissance, file mapping, dependency check
- Orchestrator (Opus) - plan decomposition, task assignment, sequencing
- Worker A (Sonnet) - primary implementation
- Worker B (Haiku) - secondary implementation, fast tasks
- Worker C (Haiku) - tertiary implementation, fast tasks
- Reviewer (Sonnet) - output validation, quality check
- Looper (Opus) - integration, final assembly, sign-off
Activation
CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 claude
Installed as /thunderblitz slash command in Claude Code.
Deployments
First deployment: MCP Plugin v0.2.0. Three new tools plus a full skill rewrite in a single Thunderblitz pass. No manual coordination between tool implementations.
Second deployment: axl-core v0.8.0. Seven deliverables: parser, emitter, validator, translator, compressor, decompressor, test suite. 80 tests passing on first integration run.
The doctrine: never deploy one thing at a time when seven can go in parallel.