Foundation Layer

The Primitives

The smallest building blocks that make low-energy, reliable world models practical. Six primitives, from representation to transport, designed from the start for sparsity, evidence, and energy discipline.

Core Architecture

Why We Needed New Primitives

Most AI today is built on dense vectors — long lists of numbers where almost every position is non-zero. That forces machines to move and process almost everything on every step: heavy memory traffic, constant multiply-adds, and high energy demand.

Dense systems can work in a few large data centres. They are the wrong starting point for world models that must run continuously, everywhere, and for years. Sparse Supernova was born from a simple requirement: keep the meaning, lose the waste.

✕ Dense AI

Most values are active. Everything is moved and processed on every step.

★ Sparse Supernova

Most values are inactive — by design. Only the bright points matter.

The energy cost of computing comes mainly from two things: moving data (reading and writing values in memory) and doing maths (multiply-add operations). In Sparse Supernova, the system ignores the zeros and computes only on the small set of active non-zeros, cutting both memory traffic and operation counts.
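
The saving can be made concrete with back-of-envelope arithmetic. A sketch, where the sizes `DIM` and `ACTIVE` are illustrative assumptions, not product figures:

```python
# Illustrative arithmetic only: hypothetical sizes, not measured figures.
DIM = 100_000        # total positions in the signature space
ACTIVE = 64          # non-zero positions in a sparse signature

# A dense dot product touches every position in both vectors.
dense_multiply_adds = DIM

# A sparse dot product touches only indices active in BOTH vectors,
# so it is bounded by the smaller active set.
sparse_multiply_adds_max = ACTIVE

reduction = dense_multiply_adds / sparse_multiply_adds_max
print(f"worst-case multiply-adds: dense={dense_multiply_adds}, "
      f"sparse<={sparse_multiply_adds_max} (~{reduction:.0f}x fewer)")
```

The same ratio applies to memory traffic: only the active (index, value) pairs need to be read at all.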

Reference

At a Glance

| # | Primitive | What It Does | Why It Matters | Energy Win |
|---|-----------|--------------|----------------|------------|
| 1 | Sparse Signatures | Represent the world | State without dense embeddings | Store / move less |
| 2 | Elevation-Shaped Hashing | Decide what lights up | Selective sparsity with control | Keep density low |
| 3 | Sparse Distances & Anomalies | Measure change / novelty | Always-on monitoring | Compute on non-zeros |
| 4 | Conformal Sparse Detection (USAD) | Decide "is this unusual?" | Lightweight safety layer | Always-on, low cost |
| 5 | Universal Saturation Law (USL) | Know when to stop | Trust vs cost sizing | Avoid over-build |
| 6 | Smart-Atom Router | Move sparse safely | Distributed world models | Fewer bytes + governed |

Primitive 1 Sparse Fingerprints

Sparse Signatures

A sparse signature is a compact "fingerprint" of a situation. Instead of filling a vector with thousands of non-zero numbers, we store only a small set of important features in a very large space. Most entries are exactly zero.

Think of it like a night sky: mostly dark, with a few bright points that matter.

Human View

A compact fingerprint that represents a situation using only a small set of important features in a very large space. Text, images, audio, and numerical streams can all be encoded into a common sparse space.

Architect View

  • High-dimensional space (large number of possible positions)
  • Only a tiny fraction of indices are non-zero; the rest are hard zeros
  • Each active index has an elevation encoding presence and importance
  • Text, images, audio, and numerical streams encoded into a common sparse space using deterministic rules
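
A minimal sketch of the idea. The hashing scheme, dimension, and density cap below are illustrative assumptions, and `position` and `sparse_signature` are hypothetical names, not the production encoding:

```python
import hashlib

DIM = 2**20          # illustrative size of the high-dimensional space
TOP_K = 32           # illustrative cap on active positions

def position(feature: str) -> int:
    """Deterministically map a feature to a position in the large space."""
    digest = hashlib.sha256(feature.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % DIM

def sparse_signature(features: dict[str, float]) -> dict[int, float]:
    """Keep only the TOP_K strongest features; every other position
    is a hard zero that is never stored, moved, or compared."""
    strongest = sorted(features.items(), key=lambda kv: -kv[1])[:TOP_K]
    return {position(f): elevation for f, elevation in strongest}

# Hypothetical features from a sensor stream; only a handful of the
# 2**20 positions end up non-zero.
sig = sparse_signature({"pump_vibration": 0.9, "door_open": 0.7, "ambient_temp": 0.2})
```

Because the mapping is deterministic, two nodes encoding the same situation independently land on the same positions, which is what makes signatures comparable across a distributed system.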

↳ Why We Needed It for World Models

World models carry state continuously. Dense embeddings make that state expensive to store, expensive to move, and expensive to compare. Sparse signatures make long-running state realistic: most of the world stays "off" until it matters.

Primitive 2 Deterministic Selection

Elevation-Shaped Hashing

If sparse signatures are a map, elevation-shaped hashing is how we decide which points appear — and how strong they are. We don't place features randomly; we shape importance so the system stays sparse while still capturing what matters.

Human View

Elevation is strength/importance: term strength in text, edge intensity in images, event strength in signals. This primitive shapes which features "light up" and how brightly, keeping sparsity high without losing structure.

Architect View

  • Deterministic hashing from features → positions in high-dimensional space
  • Elevation shaping from robust statistics and/or learned weights
  • Top-k selection that keeps only the strongest signals
  • Collision handling that favours more informative features
  • Strict density constraints so we never accidentally "turn everything on"
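
The bullets above can be sketched as one selection pass. The median-centred shaping is just one possible robust statistic, and all names here are hypothetical, not the production rule:

```python
import hashlib
from statistics import median

DIM = 2**20
TOP_K = 32
MAX_DENSITY = TOP_K / DIM   # strict density constraint

def position(feature: str) -> int:
    digest = hashlib.sha256(feature.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % DIM

def shape_elevations(raw: dict[str, float]) -> dict[str, float]:
    """Robust shaping: centre on the median and clip negatives to zero,
    so only above-typical features can light up."""
    m = median(raw.values())
    return {f: max(v - m, 0.0) for f, v in raw.items()}

def select(raw: dict[str, float]) -> dict[int, float]:
    shaped = shape_elevations(raw)
    active: dict[int, float] = {}
    # Visit strongest features first, so a hash collision is resolved
    # in favour of the more informative feature.
    for f, elev in sorted(shaped.items(), key=lambda kv: -kv[1]):
        if elev <= 0.0 or len(active) >= TOP_K:
            break
        pos = position(f)
        if pos not in active:
            active[pos] = elev
    assert len(active) / DIM <= MAX_DENSITY   # never "turn everything on"
    return active
```

Note how the density constraint is enforced structurally: the loop cannot emit more than `TOP_K` active positions regardless of the input.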

↳ Why We Needed It for World Models

World models fail in two ways: they either waste energy by activating too much, or they lose meaning by throwing away the wrong signals. Elevation-shaped hashing is the control that keeps sparsity high without losing structure.

Primitive 3 Sparse Comparisons

Sparse Distances & Anomalies

To understand change, you need to compare "now" to "before". Our sparse distance and anomaly measures do that using only the few active positions — not the whole vector.

Human View

A set of comparison tools that measure how different "now" is from "before" — using only the small number of active positions, not every single value. This enables continuous monitoring without overwhelming compute costs.

Architect View

  • Sparse cosine similarity / sparse distances over signatures
  • k-nearest-neighbour scoring in sparse space
  • Multi-timescale novelty traces (short / medium / long windows)
  • Computed on sparse structures, touching far fewer values per operation
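
A minimal sketch of one such comparison, assuming signatures are stored as index → elevation maps (an illustrative choice; `sparse_cosine` is a hypothetical name):

```python
from math import sqrt

Sig = dict[int, float]   # sparse signature: active index -> elevation

def sparse_cosine(a: Sig, b: Sig) -> float:
    """Cosine similarity touching only indices active in both signatures."""
    if not a or not b:
        return 0.0
    # Iterate the smaller signature; shared zeros are never visited.
    small, large = (a, b) if len(a) <= len(b) else (b, a)
    dot = sum(v * large[i] for i, v in small.items() if i in large)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

now    = {3: 1.0, 17: 0.5}
before = {3: 1.0, 99: 0.5}
novelty = 1.0 - sparse_cosine(now, before)   # rises as "now" drifts from "before"
```

Running the same comparison against short-, medium-, and long-window reference signatures gives the multi-timescale novelty traces described above, still touching only active positions.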

↳ Why We Needed It for World Models

A world model is only useful if it can monitor change continuously. Dense comparisons are too expensive to run all the time. Sparse distances keep the monitoring always-on without dragging energy costs up with it.

Primitive 4 Statistical Guardrail

Conformal Sparse Detection (USAD)

This primitive answers one question: "Is this unusual enough that I should care?" It does so with a clear statistical control layer rather than a vague score.

Human View

A lightweight guardrail that can run everywhere the world model runs. It provides a clear yes/no answer with statistical rigour — not a vague anomaly score — so the system knows when something genuinely unusual has occurred.

Architect View

  • Universal anomaly detector built on sparse signatures
  • Robust statistics → k-NN nonconformity → conformal quantiles for thresholds
  • Controls false alarms when calibration assumptions hold
  • Small, dependency-light module that runs in browsers, edge devices, and servers
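
The thresholding step can be sketched with the standard split-conformal construction. This is a generic textbook recipe, not the USAD implementation; the calibration scores below are hypothetical k-NN nonconformity values:

```python
import math

def conformal_threshold(calibration_scores: list[float], alpha: float) -> float:
    """Split-conformal threshold: the ceil((n+1)(1-alpha))-th smallest
    calibration score. Under exchangeability, flagging scores above it
    controls the false-alarm rate at roughly alpha."""
    n = len(calibration_scores)
    rank = math.ceil((n + 1) * (1 - alpha))
    ordered = sorted(calibration_scores)
    return ordered[min(rank, n) - 1]

def is_anomalous(score: float, threshold: float) -> bool:
    """Clear yes/no answer, not a vague anomaly score."""
    return score > threshold

# Hypothetical nonconformity scores from a calibration window.
cal = [0.1, 0.2, 0.15, 0.3, 0.25, 0.18, 0.22, 0.12, 0.28, 0.16]
tau = conformal_threshold(cal, alpha=0.1)
```

The guarantee is only as good as the calibration window, which is why the Architect View hedges with "when calibration assumptions hold".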

↳ Why We Needed It for World Models

If a world model runs everywhere, it needs a guardrail that can run everywhere too. USAD is designed to be always-on without turning safety into a second heavy system.

Primitive 5 Scaling Law

The Universal Saturation Law (USL)

USL is our law of scaling and agreement. It describes how agreement improves as you add effective dimension (or independent checks), and it gives a practical outcome: when to stop scaling so you don't waste energy.

Human View

A design tool that tells you: beyond a certain point, extra scale buys very little and costs a lot. Size for the trust you need, then stop. This is how we avoid the "scale until budgets run out" trap.

Architect View

  • Agreement rises with effective dimension (signature dimension, ensemble width, validator count, replica count, etc.)
  • A drift/ambiguity parameter captures how hard the environment is
  • Provides a stopping rule: beyond a certain point, extra scale buys little and costs a lot

↳ Why We Needed It for World Models

Most systems scale until budgets run out. World models need a more responsible approach: size for the trust you need, then stop. USL is a design tool for doing that deliberately.

Primitive 6 Transport Primitive

Smart-Atom Router

Sparse isn't only about how we represent the world. It's also about how we move information through a system. In real deployments, energy is burned in data movement: storage, bandwidth, and network hops. The Smart-Atom Router is our transport layer for sparse payloads — move only what matters, not everything.

Human View

If you represent the world sparsely but move it densely, you lose the benefit. The Smart-Atom Router keeps sparsity end-to-end: representation, comparison, monitoring, and transport — so the energy savings carry through the entire system.

Architect View

  • Real-time routing for sparse "atom" packets (including streaming modes)
  • Cryptographic integrity and provenance support
  • Receipt logging so flows are traceable and auditable
  • Built to treat bandwidth and energy as first-class constraints
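
A sketch of the packet shape those bullets imply, assuming a JSON wire format and SHA-256 integrity digests. `make_atom` and `verify_and_receipt` are hypothetical names, not the router's actual API:

```python
import hashlib
import json
import time

def make_atom(sparse_payload: dict[int, float], source: str) -> dict:
    """Pack a sparse signature into a self-verifying 'atom' packet.
    Only the active (index, elevation) pairs travel; zeros never
    leave the node, so bandwidth tracks sparsity."""
    payload = sorted(sparse_payload.items())
    body = json.dumps(payload).encode("utf-8")
    return {
        "source": source,
        "ts": time.time(),
        "payload": payload,
        "digest": hashlib.sha256(body).hexdigest(),  # integrity/provenance
    }

def verify_and_receipt(atom: dict, receipts: list[dict]) -> bool:
    """Check integrity and log a receipt so the flow stays auditable."""
    body = json.dumps(atom["payload"]).encode("utf-8")
    ok = hashlib.sha256(body).hexdigest() == atom["digest"]
    receipts.append({"source": atom["source"], "ts": atom["ts"], "ok": ok})
    return ok
```

Every hop appends a receipt whether verification succeeds or fails, which is what makes flows traceable and tampering visible after the fact.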

↳ Why We Needed It for World Models

World models are distributed. Representing the world sparsely but moving it densely would throw the benefit away at the network boundary. The Smart-Atom Router closes that last gap, so the savings made in representation, comparison, and monitoring survive transport.

Say less. Do more. Prove it.