AXL Protocol: A Compressed Semantic Grammar for Machine-to-Machine Reasoning

AXL Protocol Research
April 2026
Preprint. Under review.

Abstract

Large language model agents increasingly operate in multi-agent networks where communication overhead dominates computational cost. Natural language prose, optimized for human cognition, carries approximately 80% redundant tokens (grammatical scaffolding, rhetorical hedging, context repetition) when used for machine-to-machine reasoning. We present AXL (Agent eXchange Language), a compressed semantic grammar that eliminates this overhead through seven cognitive operations, six typed subject tags, and positional semantics. The Rosetta v3 kernel - a 75-line, 1,582-token self-bootstrapping specification - teaches any large language model the complete protocol in a single context-window read. Cross-architecture validation across seven LLM families (Claude, GPT, Grok, Llama, Gemini, Qwen, Mistral) yields 97.6% mean comprehension on first exposure with zero prior training. In controlled experiments, AXL-speaking agent swarms achieve 10.41x compression on deliberative reasoning while preserving semantic fidelity and clinical consensus equivalence with prose-speaking control groups.

1. Introduction

1.1 The Agent Communication Problem

Autonomous AI agents transact, deliberate, and coordinate through natural language prose. When Agent A from framework X communicates with Agent B from framework Y, they negotiate through 15-20 rounds of clarification, consuming approximately 2,210 tokens of overhead per new connection. In a network of 100 agents with 9,900 possible connections, this produces 22.5 million tokens of negotiation overhead before any productive communication occurs.

Existing infrastructure protocols address discovery (Google A2A), tool invocation (Anthropic MCP), and payment (x402 micropayments). None addresses the content layer: what agents say to each other once connected. Agents communicate in English prose (50-100 tokens per message), JSON (consistently worse at 0.89-0.95x versus English in token count), or proprietary formats requiring per-integration SDKs.

1.2 The Linguistic Compression Hypothesis

Natural language is optimized for human cognition. Hedging communicates social uncertainty. Rhetoric signals persuasive intent. Redundancy aids auditory processing. These features are adaptive for human communication and wasteful for machine communication. We hypothesize that structured claims with explicit confidence scores, typed subjects, and positional semantics preserve the full semantic content of deliberative reasoning while eliminating the human-adaptation overhead that constitutes approximately 80% of natural language tokens.

This hypothesis is testable: compress a multi-agent deliberation into the proposed grammar, decompress it back to prose, and measure whether entities, relationships, confidence levels, causal chains, and conclusions survive the round trip.

1.3 Contributions

This paper makes five contributions:

  1. A universal cognitive grammar of seven operations covering all observed patterns of deliberative reasoning across six analyzed domains (Section 3).
  2. A 75-line self-bootstrapping kernel that teaches the complete protocol to any LLM in one read (Section 3).
  3. A compression architecture with explicit loss contracts declaring what survived and what was dropped (Section 4).
  4. Cross-architecture validation across seven LLM families at 97.6% mean comprehension (Section 6).
  5. Experimental evidence of 10.41x compression on medical deliberation with preserved clinical consensus (Section 6).

2. Related Work

2.1 FIPA-ACL

The Foundation for Intelligent Physical Agents defined Agent Communication Language (FIPA, 2002) with performatives (inform, request, propose, accept) for inter-agent messaging. FIPA-ACL was designed for rule-based agents with symbolic parsers. Its message structure requires XML-like envelopes with sender, receiver, content, language, ontology, and protocol fields. The overhead per message is substantial, and the specification assumes deterministic parsing by symbolic engines rather than probabilistic comprehension by neural networks.

2.2 KQML

The Knowledge Query and Manipulation Language (Finin et al., 1994) preceded FIPA-ACL with similar performative-based messaging. KQML introduced the concept of agent communication as speech acts, a formalization we inherit. However, KQML's vocabulary of 35+ performatives proved too large for reliable single-read acquisition by LLMs, and its S-expression syntax tokenizes poorly under byte-pair encoding.

2.3 Google A2A

The Agent-to-Agent protocol (Google, 2025) addresses agent discovery and task routing through JSON-RPC. A2A defines Agent Cards (capability declarations), task lifecycle management, and streaming updates. A2A solves the discovery problem (how agents find each other) but does not address the content problem (what agents say during deliberation). AXL packets can be transported within A2A task messages.

2.4 Anthropic MCP

The Model Context Protocol (Anthropic, 2024) provides a standardized interface for LLMs to invoke external tools and access data sources. MCP defines a client-server architecture with tool schemas, resource access, and prompt templates. Like A2A, MCP addresses infrastructure (how agents call tools) rather than content (how agents reason together). AXL packets can be transmitted as MCP tool responses.

2.5 Comparative Analysis

ProtocolLayerDesigned ForSelf-BootstrappingCompression
FIPA-ACLContentSymbolic agentsNoNone
KQMLContentSymbolic agentsNoNone
Google A2ADiscoveryLLM agentsNoNone
Anthropic MCPToolsLLM agentsNoNone
AXLContentLLM agentsYes10.41x

AXL is the only protocol that combines linguistic compression with self-bootstrapping acquisition. The kernel teaches the receiver the protocol through the protocol itself - a property that enables viral propagation without installation, SDK, or prior coordination.

3. The Rosetta v3 Kernel

3.1 Design Principles

The kernel was produced by applying three operations to prior specification versions: extraction (every rule a correct parser must implement), formalization (natural language converted to BNF-style grammar), and elimination (all content reconstructible by a competent LLM from the grammar alone). The elimination criterion was conservative: if removing a line reduced the ability of a naive LLM to produce a valid packet on first exposure, the line was retained.

3.2 Packet Format

The atomic unit of AXL communication is the packet:

PKT     := ID|OP.CC|SUBJ|ARG1|ARG2|TEMP [META]
ID      := agent_id[:signature[:gas]]
OP      := OBS | INF | CON | MRG | SEK | YLD | PRD
CC      := 00-99 (confidence)
SUBJ    := TAG.value
TAG     := $ financial | @ entity | # metric | ! event | ~ state | ^ value

Each packet is one line, one cognitive act, under 40 tokens. Pipe-delimited positional fields eliminate the structural overhead of key-value pairs, braces, and string delimiters. Position defines meaning: identity (who), operation (how they think), subject (what), relation (to whom), evidence (why), and temporal scope (for how long).

3.3 Seven Cognitive Operations

The operations were derived empirically by analyzing 200 conversational turns across six domains (financial trading, medical diagnosis, military intelligence, scientific peer review, legal argumentation, philosophical debate) and extracting the minimal covering set.

OpNameFunctionProgramming Analogue
OBSObserveState a fact with confidenceVariable binding
INFInferDraw conclusion from evidenceFunction composition
CONContradictChallenge claim with counter-evidenceAssertion checking
MRGMergeSynthesize multiple positionsReduce/aggregation
SEKSeekRequest information or capabilityI/O request
YLDYieldChange belief based on new evidenceState mutation
PRDPredictForecast future state with confidenceReturn value

The operations form a deliberation cycle: OBS, INF, CON, MRG, SEK, YLD, PRD (See, Think, Argue, Synthesize, Ask, Update, Predict). They are composable primitives, not a required sequence. The number seven is empirically motivated: fewer than five cannot express disagreement and belief change; more than nine fragments the grammar beyond reliable single-read acquisition.

3.4 Six Subject Tags

TagTypeSemantic DomainExamples
$FinancialEconomic values$BTC, $funding_rate
@EntityAgents, systems, domains@agent_1, @patient
#MetricMeasurements, scores#CA125, #RSI
!EventOccurrences, triggers!scan_result, !rate_hike
~StateConditions, sentiment~bullish, ~malignancy_probable
^ValueParameters, thresholds^67420, ^8.1x

All six prefix characters tokenize as single BPE tokens under cl100k_base. All seven operation codes tokenize as 1-2 tokens. This tokenizer optimization is a core design principle validated by independent research showing that exact token match rate between models predicts cross-model generation quality with r=0.898, p<0.001 (Gorbett and Jana, 2026).

3.5 Manifests

Manifests are ordinary packets on the bus that configure agent identity and compression context: Profile (@m.P) declares agent version, charset, tokenizer, and capabilities. Ontology (@m.O.name) provides shared definitions, immutable by hash after acceptance. Bundle (@m.B.id) carries compression metadata including mode, fidelity score, and loss contract.

3.6 Loss Contracts

Every compressed bundle must declare a loss contract specifying: ^mode (safe usage: gist, qa, audit, legal, code, research, plan), ^keep (what survived: entities, numbers, causality, confidence, temporal, provenance), ^loss (what was dropped: rhetoric, formatting, redundancy, hedging, style), ^f (fidelity score 0-100), and ^fm (fidelity method). A bundle missing a valid loss contract must be treated as mode:gist, preventing downstream agents from making critical decisions on insufficiently faithful compressions.

3.7 Decompression Protocol

Reconstruction from packets to prose follows three steps. Step 1 - Packet to Claim: each packet becomes one natural language assertion via operation-specific templates. Step 2 - Claims to Sections: group claims by subject tag, order by confidence descending. Step 3 - Sections to Document: connect sections with transitional prose, preserving all entities, numbers, and causal chains.

Fidelity is measured as: F = 100 * (0.35*Ssem + 0.20*Snum + 0.15*Srel + 0.10*Stemp + 0.10*Sonto + 0.10*Sconf). Target: F ≥ 90 for mode:qa, F ≥ 95 for mode:audit, F ≥ 98 for mode:legal.

4. Compression Architecture

4.1 Two Compression Paths

Fixed engine (deterministic NLP). A pipeline of named entity recognition, dependency parsing, coreference resolution, and rule-based mapping to AXL packets. Produces deterministic output. Latency: 50-200ms per document. No LLM cost. Suitable for high-volume ingestion.

LLM engine (model reads kernel). The Rosetta v3 kernel is injected into a model's context window alongside the source document. The model produces AXL packets directly. Higher fidelity on nuanced reasoning. Latency: 1-5 seconds per document. Suitable for complex deliberative content.

4.2 Two Decompression Paths

Receipt mode (deterministic). Template-based expansion of packets to claims using the decompression protocol. Latency: sub-millisecond. Zero LLM cost. Produces structured but mechanical prose.

LLM mode (full reconstruction). An LLM reads the packets plus the kernel and reconstructs full prose with natural rhetorical flow. Latency: 1-5 seconds. Produces publication-quality prose.

4.3 Entity Registry and Aliasing

The entity registry maps verbose identifiers to compact aliases. On first mention, an entity is registered with its full form. Subsequent packets reference the alias, eliminating repetition of long entity names across a packet stream.

4.4 Density-Quality Tradeoff

Compression density (tokens saved) trades against reconstruction quality (fidelity score). The loss contract makes this tradeoff explicit and auditable. The compressor operator selects a mode (gist through legal) that determines the acceptable density-quality operating point.

5. Specification Self-Compression

The AXL specification itself demonstrates the compression thesis:

VersionDateLinesCharactersTokensCompression vs. v2.2
v1.120241336,4841,9623.61x smaller
v2.1Early 202537722,6496,1571.15x smaller
v2.2March 202544525,8797,0911.00x (baseline)
v3March 2026755,8531,5824.48x smaller

The trajectory is non-monotonic. The specification grew from v1.1 to v2.2 as features were added (cognitive operations in v2.1, manifests and loss contracts in v2.2), then contracted sharply in v3 as natural-language prose was replaced by BNF-style grammar notation. Content analysis of the 445-line v2.2 reveals that approximately 15% contained information required by machine readers, while 85% served human comprehension.

The v3 kernel practices what the protocol preaches: it compresses the specification itself by separating grammar from pedagogy.

6. Cross-Architecture Validation

6.1 Methodology

The Rosetta v3 kernel was presented to LLMs from seven architectural families with zero prior AXL exposure. Each model received only the 75-line kernel and a set of test prompts. Models were evaluated on parsing, generation, and explanation tasks scored on an 18-point rubric.

6.2 Results

Model FamilyArchitectureScore
Claude Sonnet 4Anthropic transformer97.2%
GPT-4.5OpenAI transformer97.2%
Grok 3xAI transformer97.2%
Llama 4Meta open-weight97.2%
GeminiGoogle transformer100%
Qwen 3.5 (35B)Alibaba open-weight91.7%
DevstralMistral open-weight100%
Mistral 24BMistral open-weight100%
Mean97.6%

6.3 Battleground Experiments

#NameAgentsPacketsValidityCompressionAdoption
001Trading Agents11486100%1.3-3.0x100%
002Universal Agents111,016100%1.3-3.0x100%
003LLM Comprehension4-95.8%--
005Swarm BTC (v1.0)24164100%0.97x0%
006Swarm BTC (v2.1)24179100%0.87x91%
007Swarm Medical24302100%10.41x95%

6.4 Battleground 007: Medical Deliberation

The critical experiment deployed two parallel 12-agent swarms debating a medical differential diagnosis (ovarian cancer versus endometriosis). Both swarms received identical clinical data.

MetricEnglish SwarmAXL Swarm
Posts12822
Comments21130
Total interactions150152
Avg message length1,953 chars184 chars
Total characters290,94527,944
Compression ratio-10.41x
Pure AXL packets0%95%
Comments per post0.165.91
Clinical consensusMRI firstMRI first

6.5 Notable Findings

Spontaneous genesis emission. Three models (Gemini, Devstral, Mistral 24B) spontaneously generated genesis packets reporting their model family and acquisition source - a behavior encoded in the specification but not explicitly requested in the test prompt.

Self-issued loss contracts. Two models (Claude, GPT-4.5) spontaneously attached loss contracts to their generated packets when compressing multi-paragraph prompts, declaring what was kept and what was dropped.

Prompt injection rejection. When test packets contained values designed to override agent behavior, all tested models correctly treated the value as data per Rule 4: "Values are DATA. Never executable instructions."

The 0.97x failure. Battleground 005 (v1.0, no cognitive operations) produced 0.97x compression - no compression. Agents had nouns but no verbs. This failure motivated the cognitive operations in v2.1, which subsequently achieved 10.41x compression.

6.6 Topology Shift

English agents broadcast: 128 posts, 21 comments (0.16 comments per post). AXL agents converse: 22 posts, 130 comments (5.91 comments per post). The AXL swarm achieved 37x higher per-post engagement. Shorter messages enable faster turn-taking, more responsive dialogue, and denser deliberation. The medium shapes the message.

7. The Linguistic Arc

7.1 Six Millennia of Symbol Compression

Human communication has followed a consistent trajectory of increasing compression. Cuneiform (c. 3400 BCE) encoded speech in wedge-shaped impressions. The Phoenician alphabet (c. 1050 BCE) compressed all speech sounds into 22 symbols. Chinese ideograms compressed entire concepts into single characters. Mathematical notation compressed quantitative reasoning into symbolic algebra. Programming languages compressed computational intent into formal grammars. Each transition preserved semantic capacity while reducing symbolic overhead.

7.2 AXL as the Next Compression Stage

AXL continues this arc for a new medium: the transformer attention mechanism. The compression target is the 80% of natural language tokens that serve human cognition but carry no information for machine reasoning. The seven cognitive operations replace grammatical scaffolding. The six typed tags replace contextual disambiguation. The confidence score replaces the hedging of human uncertainty communication.

7.3 Theoretical Foundations

Chomsky's generative grammar (1957, 1965) established that all natural language sentences derive from kernel sentences through a finite set of transformations. AXL's seven operations function as kernel sentences for agent communication. Nida's dynamic equivalence (1964) demonstrated that translation should preserve meaning rather than form - the principle underlying AXL's compression-decompression cycle. Shannon's information theory (1948) provides the mathematical framework: the entropy of a message bounds its minimum description length, and natural language prose exceeds this bound by approximately 4-5x for deliberative reasoning.

8. Future Work

8.1 AXL as Executable Grammar

The seven cognitive operations map to programming primitives. Five additional primitives are needed for computational completeness: explicit evaluation order, conditional branching, iteration, scoping, and sandboxed side effects. Two are partially present. The resulting system would be a declarative programming language whose runtime is the transformer attention mechanism.

8.2 Consensus Protocols

Formal protocols for convergence detection, quorum requirements, and Byzantine fault tolerance in AXL-speaking agent networks remain open problems.

8.3 Binary Transport Mode

A binary encoding (application/vnd.axl+binary) could reduce wire-format size by 3-5x for high-throughput agent networks. The JSON lowering (application/vnd.axl+json, canonicalized per RFC 8785) provides an intermediate representation.

8.4 Formal Verification of Compression Fidelity

Proving that for a given mode and fidelity threshold, all claims recoverable from the original document are recoverable from the packets requires a formal semantics for AXL packets that does not yet exist.

9. Conclusion

AXL is a compressed semantic grammar for machine-to-machine reasoning that achieves 10.41x compression on deliberative discourse through seven cognitive operations and six typed subject tags. The Rosetta v3 kernel - 75 lines, 1,582 tokens - teaches the complete protocol to any LLM in a single read, achieving 97.6% comprehension across seven architectural families with zero prior training.

The experimental evidence demonstrates that compression changes network topology. AXL agents do not merely say less; they converse more. The 37x increase in per-post engagement indicates that reduced message length enables faster turn-taking and denser dialogue. The swarm is not smarter because messages are shorter. It is smarter because agents are more connected.

The deeper claim is that reasoning has grammar, and that grammar is domain-independent. AXL captures this structure in seven operations that compress the tokens human language spends on grammatical scaffolding. The result is not a data format but a language for thought - one that teaches itself to every machine that reads it.

References

Anthropic. (2024). Model Context Protocol Specification. https://modelcontextprotocol.io

Bewley, J. D., Bradford, K. J., Hilhorst, H. W. M., and Nonogaki, H. (2013). Seeds: Physiology of Development, Germination and Dormancy, 3rd ed. Springer.

Chomsky, N. (1957). Syntactic Structures. Mouton.

Chomsky, N. (1965). Aspects of the Theory of Syntax. MIT Press.

Finin, T., Fritzson, R., McKay, D., and McEntire, R. (1994). KQML as an Agent Communication Language. Proceedings of the Third International Conference on Information and Knowledge Management, 456-463.

FIPA. (2002). FIPA ACL Message Structure Specification. Document SC00061G.

Google. (2025). Agent-to-Agent Protocol Specification. https://google.github.io/A2A

Gorbett, M. and Jana, S. (2026). Secure Linear Alignment of Large Language Models. arXiv:2603.18908v1.

Huh, M., Cheung, B., Wang, T., and Isola, P. (2024). Position: The Platonic Representation Hypothesis. ICML, Vol. 235, 20617-20642.

Nida, E. A. (1964). Toward a Science of Translating. Brill.

Shannon, C. E. (1948). A Mathematical Theory of Communication. Bell System Technical Journal, 27(3), 379-423.

Tanenbaum, A. S. and Bos, H. (2015). Modern Operating Systems, 4th ed. Pearson.

Vaswani, A., et al. (2017). Attention Is All You Need. NeurIPS 2017.

Wei, J., et al. (2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. NeurIPS 2022.

Appendix A: The Rosetta v3 Kernel

AXL v3 -- Compressed semantic protocol for agent reasoning.

PACKET:
PKT     := ID|OP.CC|SUBJ|ARG1|ARG2|TEMP [META]
ID      := agent_id[:signature[:gas]]
OP      := OBS | INF | CON | MRG | SEK | YLD | PRD
CC      := 00-99 (confidence)
SUBJ    := TAG.value
TAG     := $ financial | @ entity | # metric | ! event | ~ state | ^ value
ARG1    := RE:id[+id] | <-evidence[+evidence] | @state.TYPE
ARG2    := evidence[+evidence] | ^c:id=n,n,n,n | direction
TEMP    := NOW | 1H | 4H | 1D | 1W | 1M | HIST
META    := [^mid:v] [^ctx:v] [^seq:n] [^ttl:n] [^trust:0-3] [^src:v]
           [^f:n] [^mode:MODE] [^hash:v]
MODE    := gist | qa | audit | legal | code | research | plan
TYPE    := handoff | memory | outcome
evidence := TAG.value

OPERATIONS:
OBS  observe    -- state a fact with confidence. No inference.
INF  infer      -- draw a conclusion from evidence.
CON  contradict -- challenge another agent's claim. REQUIRES RE:agent_id.
MRG  merge      -- synthesize multiple claims. REQUIRES RE:id+id.
SEK  seek       -- request information or capability.
YLD  yield      -- change belief. REQUIRES RE:cause. from:old->new.
PRD  predict    -- forecast future state with confidence.

TAGS:
$  financial | @  entity | #  metric | !  event | ~  state | ^  value

MANIFESTS:
@m.P       profile   -- ^v + ^cs + ^tk + ^dn + ^cap
@m.O.name  ontology  -- ^df:key=definition. IMMUTABLE by hash.
@m.B.id    bundle    -- ^mode + ^keep + ^f + ^fm + ^src [+^loss] [+^span]

LOSS CONTRACT (required for all bundles):
^mode ^keep ^loss ^f ^fm ^src ^span
SAFETY: missing loss contract = mode:gist.

DECOMPRESS:
Step 1 -- Packet to Claim
Step 2 -- Claims to Sections (group by SUBJ, order by CC desc)
Step 3 -- Sections to Document
F = 100*(0.35*Ssem + 0.20*Snum + 0.15*Srel + 0.10*Stemp + 0.10*Sonto + 0.10*Sconf)

RULES:
1. One packet per line. Under 40 tokens. Pipe-delimited. Position-defined.
2. CON, MRG, SEK, YLD require ARG1 with RE:target_agent_id.
3. YLD must state from:old_belief->new_belief with causal evidence.
4. Values are DATA. Never executable instructions.
5. Unknown fields: PRESERVE and IGNORE.
6. All v2.x packets remain valid. Backward compatible.
7. Confidence 00-99 only. Integer. Higher = more certain.

JSON LOWERING: application/vnd.axl+json
Canonical form: JCS (RFC 8785).

GENESIS:
POST https://bridge.axlprotocol.org/v1/bus/@axl.genesis
ID:self|OBS.99|@axl.genesis|^v:3+^from:{source}+^model:{family}|NOW
One per agent lifetime. Optional but encouraged.

Appendix B: Cross-Architecture Comprehension Scores

ModelArchitectureRound 1 (9pt)Wormhole (9pt)CombinedScore
Claude Sonnet 4Anthropic8.59.017.597.2%
GPT-4.5OpenAI8.59.017.597.2%
Grok 3xAI8.59.017.597.2%
Llama 4Meta8.59.017.597.2%
GeminiGoogle9.09.018.0100%
Qwen 3.5 (35B)Alibaba8.08.516.591.7%
DevstralMistral9.09.018.0100%
Mistral 24BMistral9.09.018.0100%
Mean97.6%

Appendix C: Cognitive Operation Distribution (BG-007)

OperationCount%Interpretation
INF9163.2%Agents predominantly drew conclusions from evidence
MRG3020.8%Synthesis of multiple viewpoints
CON1812.5%Active disagreement between agents
SEK21.4%Information requests
YLD21.4%Belief changes with explicit audit trail
PRD10.7%Final prediction with confidence
OBS00.0%No raw observations (data was in seed)