Open Research Initiative

Open research into machine cognition

Published protocols for structured memory, topological deliberation, autonomous attention, and agent governance. Because the questions about machine understanding matter as much as the products.

Read the Protocols · GitHub · [&] Protocol Spec
Founding Thesis

Intelligence is not generation. It is structured accumulation.

Models generate answers. Systems accumulate intelligence. Durable intelligence requires memory, evidence, time, and interaction with the world. The ampersand represents composition: agent & memory & reasoning & time & space. Intelligence does not come from a model. It emerges from the system those components create. — The [&] thesis

OpenSentience is the research arm of Ampersand Box Design. While the [&] portfolio builds production infrastructure for agent cognition, OpenSentience publishes the theoretical foundations, empirical protocols, and open questions that guide that work. Every protocol below is grounded in implemented or in-progress code, not speculation.

Our position: the limiting factor in AI agent capability is not raw model intelligence. It is memory architecture, deliberation structure, temporal grounding, and governance. These are infrastructure problems, not model problems. They require engineering protocols, not larger parameters.

Published Protocols

Six protocols. One cognitive loop.

Each protocol addresses a specific gap in the agent cognition stack. Together they form a complete autonomous loop: an agent that knows what it knows, knows what it doesn't know, and decides what to do about it — without being asked.

OS-001 · Continual Learning Protocol

Graphonomous: Knowledge Graphs for Continual Agent Learning

A graph-backed memory engine where agents store episodic, semantic, and procedural knowledge as typed nodes with confidence scores and provenance chains. Multi-timescale consolidation inspired by hippocampal replay — fast memory promotes to slow memory, weak connections decay, strong patterns crystallize. Outcome-driven learning updates confidence across causal chains, not just individual nodes.

v0.1.9 · shipped · &memory.graph · SQLite + embeddings · MCP server
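To make the node shape concrete, here is a minimal Python sketch of a Graphonomous-style memory node with confidence, provenance, decay, and outcome-driven reinforcement. All names, field shapes, and rates here are illustrative assumptions, not the actual &memory.graph schema.

```python
import time
from dataclasses import dataclass, field

@dataclass
class MemoryNode:
    # Illustrative node shape: kind is "episodic", "semantic", or "procedural".
    kind: str
    content: str
    confidence: float = 0.5
    provenance: list = field(default_factory=list)   # chain of source node ids
    created_at: float = field(default_factory=time.time)

def decay(node: MemoryNode, half_life_days: float = 30.0) -> float:
    """Exponential confidence decay: weak, unreinforced memories fade."""
    age_days = (time.time() - node.created_at) / 86400
    return node.confidence * 0.5 ** (age_days / half_life_days)

def reinforce(node: MemoryNode, outcome_success: bool, rate: float = 0.1) -> None:
    """Outcome-driven update: nudge confidence toward 1.0 or 0.0."""
    target = 1.0 if outcome_success else 0.0
    node.confidence += rate * (target - node.confidence)
```

Consolidation would then promote high-confidence, frequently reinforced nodes from fast to slow storage while pruning nodes whose decayed confidence falls below a threshold.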
OS-002 · Topological Routing Protocol

κ-Routing: When to Retrieve, When to Deliberate

The cyclicity invariant κ (kappa) detects irreducible feedback loops in a knowledge graph. When κ = 0, the subgraph is a DAG — retrieve context in one pass. When κ > 0, circular dependencies exist — iterate and deliberate before answering. κ determines not just whether to think harder, but how entangled the reasoning is. Verified on 1,926,351 finite systems with zero counterexamples. Fault-line edges (minimum cuts within SCCs) become the mechanical decomposition boundaries for deliberation.

spec complete · &reason.deliberate · Tarjan SCC · bipartition enumeration
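As a rough illustration of the topology check, the sketch below computes strongly connected components with Kosaraju's algorithm and derives a simplified κ as the count of nontrivial SCCs (size > 1, or a self-loop). The spec's actual invariant and its Tarjan-based implementation may differ; the point is only that the routing signal is cheap, deterministic graph analysis.

```python
from collections import defaultdict

def sccs(nodes, edges):
    """Strongly connected components via Kosaraju's two-pass algorithm."""
    graph, rev = defaultdict(list), defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
        rev[v].append(u)
    seen = set()

    def dfs(start, g, out):
        # Iterative DFS that appends nodes in post-order.
        seen.add(start)
        stack = [(start, iter(g[start]))]
        while stack:
            node, it = stack[-1]
            advanced = False
            for v in it:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(g[v])))
                    advanced = True
                    break
            if not advanced:
                stack.pop()
                out.append(node)

    order = []
    for n in nodes:
        if n not in seen:
            dfs(n, graph, order)
    seen.clear()
    comps = []
    for n in reversed(order):   # reverse finish order on the reversed graph
        if n not in seen:
            comp = []
            dfs(n, rev, comp)
            comps.append(comp)
    return comps

def kappa(nodes, edges):
    """Simplified κ: count of nontrivial SCCs (size > 1, or a self-loop)."""
    self_loops = {u for u, v in edges if u == v}
    return sum(1 for c in sccs(nodes, edges) if len(c) > 1 or c[0] in self_loops)
```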
OS-003 · Deliberation Orchestrator Protocol

Topology-Driven Deliberation: Fault Lines as Prompt Boundaries

When κ > 0, fault-line edges become prompt boundaries. The Deliberator decomposes circular knowledge along those boundaries, runs focused reasoning passes on each partition, reconciles them, and writes conclusions back into the graph — reducing κ over time as uncertainty crystallizes into settled knowledge. Single-agent fast path; escalates to multi-agent formal argumentation (Deliberatic) only when convergence fails.

spec complete · &reason.deliberate · graph crystallization · escalation path
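The control flow of that loop can be sketched as follows. The `reason` and `reconcile` callables stand in for LLM passes over fault-line partitions, and the equality-based convergence test is a toy placeholder; none of this is the spec's API.

```python
def deliberate(partitions, reason, reconcile, max_iterations=4):
    """Run one focused pass per partition, reconcile across partitions,
    and repeat until the reconciled conclusion is stable; escalate if not."""
    previous = None
    for _ in range(max_iterations):
        conclusions = [reason(p) for p in partitions]   # one focused pass each
        merged = reconcile(conclusions)                 # cross-partition synthesis
        if merged == previous:                          # toy convergence test
            return merged   # would be written back as a new semantic node
        previous = merged
    raise RuntimeError("no convergence: escalate to multi-agent argumentation")
```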
OS-004 · Attention Engine Protocol

Proactive Attention: Self-Directed Cognition Without Queries

The missing ignition in a reactive system. The Attention Engine is a periodic loop that examines the knowledge graph's topology, coverage gaps, and active goals to decide what the system should reason about, learn about, or act on next — without waiting for a query. Three modes: Explore (what don't I know?), Plan (what should I do?), and Focus (where should I spend compute?). Not a 5th cognitive primitive — attention is meta-reasoning over the existing four.

spec complete · survey → triage → dispatch · heartbeat + event triggers · autonomous goal generation
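A single heartbeat of the survey → triage → dispatch cycle might look like the sketch below. The field names, the 0.3 confidence threshold for Explore, and the scoring rule are all assumptions for illustration.

```python
def attention_tick(graph, goals, dispatch):
    """One heartbeat: survey the graph for gaps, triage candidates by
    score, and dispatch the winner to the reasoning layer."""
    # Survey: collect candidate items across the modes.
    candidates = []
    for node in graph:                       # Explore: what don't I know?
        if node["confidence"] < 0.3:
            candidates.append(("explore", 1 - node["confidence"], node["id"]))
    for goal in goals:                       # Plan: what should I do?
        candidates.append(("plan", goal["priority"], goal["id"]))
    if not candidates:
        return None
    # Triage: the highest-scoring candidate wins the compute budget (Focus).
    mode, score, target = max(candidates, key=lambda c: c[1])
    # Dispatch: hand the chosen item to the reasoning layer.
    return dispatch(mode, target)
```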
OS-005 · Model Tier Adaptation Protocol

Hardware-Adaptive Cognition: Same Topology, Different Depth

κ-routing becomes more valuable on constrained hardware — it tells the system when to skip expensive inference entirely. Three tiers (local 8B, local 70B+, cloud frontier) with qualitatively different strategies: single-pass enrichment vs. multi-pass deliberation, demand-triggered vs. heartbeat attention, aggressive crystallization vs. fresh inference. The κ paradox: the ROI of topological routing is highest when inference is most expensive.

spec complete · local_small · local_large · cloud_frontier · cost tracking
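One way to picture the tier-dependent strategies is a small dispatch table keyed on tier and κ. The tier names come from the protocol; the thresholds and strategy labels are illustrative guesses, not the spec's values.

```python
def strategy_for(tier, kappa):
    """Pick a reasoning strategy from hardware tier and κ.
    Tier names follow OS-005; thresholds and labels are illustrative."""
    if kappa == 0:
        return "single_pass"              # DAG region: retrieve and answer
    if tier == "local_small":
        return "single_pass_enriched"     # 8B: skip deliberation, enrich context
    if tier == "local_large":
        # 70B+: bounded multi-pass; lean on crystallized knowledge when deep
        return "multi_pass" if kappa <= 2 else "crystallize_first"
    return "multi_pass"                   # cloud_frontier: full deliberation
```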
OS-006 · Agent Governance Shim Protocol

Thin Governance: Permissions, Audit, and Lifecycle for Any Runtime

A lightweight governance layer — not a full runtime — that wraps around any OTP-based agent system (Jido, Alloy, or raw GenServer). Provides the permission taxonomy (filesystem, network, tool invocation, graph access), audit trail, agent lifecycle states (installed → enabled → running), and three autonomy levels (observe, advise, act). Designed as a hex package dependency, not a daemon.

in development · permissions model · audit trail · autonomy levels
OS-002 · Deep Dive

The κ invariant: topology as cognition signal

A simple graph-theoretic insight with far-reaching consequences for agent reasoning. The strongly connected component structure of a knowledge graph mechanically determines when retrieval is sufficient and when deliberation is required.

DAG Region

κ = 0
No circular dependencies. Context retrieval is a single-pass traversal. Route: fast. No LLM deliberation needed.

SCC Region

κ > 0
Irreducible feedback loops present. κ measures entanglement depth. Route: deliberate. Fault lines become prompt decomposition boundaries.
# The routing decision is trivial once κ is computed:
def route(topology_result):
    if topology_result.max_kappa == 0:
        return "fast"        # retrieve in one pass
    return "deliberate"      # decompose along fault lines

# Deliberation budget scales with κ:
def deliberation_budget(kappa):
    return {
        "max_iterations":       min(kappa + 1, 4),
        "agent_count":          min(kappa, 3),
        "confidence_threshold": min(0.7 + 0.05 * kappa, 0.95),
    }

The key insight: the graph's structure mechanically determines the prompt structure. No human prompt engineering. The topology is the reasoning template. For κ = 1, one fault line generates two conditional assumption passes plus reconciliation. For κ = 2, two independent fault lines generate a 2×2 assumption matrix. The Deliberator writes conclusions back as new semantic nodes — the graph learns from its own reasoning. Over time, κ decreases as uncertainty crystallizes into settled knowledge.
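The fault-line-to-prompt mapping described above can be sketched mechanically: each fault line contributes one binary assumption, so k independent fault lines yield a 2^k matrix of conditional passes plus reconciliation. The prompt wording below is invented for illustration.

```python
from itertools import product

def assumption_passes(fault_lines):
    """Enumerate the conditional reasoning passes for a set of fault lines.
    Each fault-line edge (u, v) becomes a binary assumption, so k
    independent fault lines yield a 2^k assumption matrix."""
    passes = []
    for bits in product([True, False], repeat=len(fault_lines)):
        assumptions = [f"assume {u}→{v} {'holds' if b else 'fails'}"
                       for (u, v), b in zip(fault_lines, bits)]
        passes.append("; ".join(assumptions))
    return passes
```

For κ = 1 this yields the two conditional passes from the text; for κ = 2 it yields the 2×2 matrix.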

The Cognitive Loop

How the protocols compose

Each protocol occupies a layer. Together they form the autonomous cognition loop described in the Attention Engine: sense → understand → think → act → learn → govern → decide what to do next.

Attention
Attention Engine · Survey → Triage → Dispatch. Proactive. Decides what to reason about next.
Governance
Governance Shim · Delegatic. Permissions, autonomy levels, audit trail. Policy boundaries.
Deliberation
κ-Router · Deliberator · Deliberatic (escalation). Topology → routing → focused passes → crystallization.
Memory
Graphonomous · Retriever · Consolidator · Learner. Knowledge graph, embeddings, outcome learning, consolidation.
Intelligence
TickTickClock · GeoFleetic. Temporal patterns + spatial awareness. The when and where axes.
Adaptation
Model Tier · Cost Tracker. Hardware-adaptive budgets. Same topology, different depth.
The four cognitive primitives map to the fundamental axes of cognition: &memory → What. &reason → How (including meta-reasoning / attention). &time → When. &space → Where. There is no "why" primitive because "why" is answered by the composition of memory (what happened), reasoning (how it connects), and time (when it happened). Similarly, "what next" is answered by reasoning over memory and time — which is exactly what the Attention Engine does. — OS-004 §9, Architectural Notes
OS-006 · Governance Shim

Three autonomy levels, not a ladder

The governance shim is not a runtime. It's a thin permission and audit layer that wraps around any OTP-based agent system. The autonomy level is a governance decision, not a maturity metric. Some agents should stay at :observe forever.

:observe

Show me what you'd do, but don't do it. Safe default. New deployments, debugging, audit mode.

read-only · no mutations · no dispatches

:advise

Propose actions, I'll approve. Typical production mode. Human-in-the-loop.

proposals · approval gate · audit logged

:act

Do it, but stay within limits. High-trust, strict budget. Fully autonomous within governance bounds.

budget-capped · time-bounded · policy-enforced
// Permission taxonomy (thin governance, not full runtime)
permissions: [
  "filesystem:read:/data/knowledge",
  "tool:invoke:graphonomous/*",
  "tool:invoke:ticktickclock/detect",
  "graph:read:*",
  "graph:write:agent-owned",
  "event:subscribe:outcome.*",
  "event:publish:attention.proposal"
]

// Autonomy budget (from ampersand.json governance block)
autonomy: {
  level: "advise",
  model_tier: "local_small",
  budget: { max_actions_per_hour: 5, require_approval_for: ["act", "propose"] }
}
Theoretical Foundations

Grounded in cognitive science, not analogy

The four primitives are not arbitrary marketing categories. They map directly to established cognitive architecture models and neuroanatomical systems.

&

memory → Hippocampus + Neocortex + Basal Ganglia

Tulving's episodic/semantic distinction (1972). Atkinson-Shiffrin multi-store model. SOAR's semantic + episodic memory systems. ACT-R's declarative memory. Graphonomous implements multi-timescale consolidation inspired by hippocampal-neocortical replay.

&

reason → Prefrontal Cortex + Anterior Cingulate

BDI (Belief-Desire-Intention) model. SOAR's impasse resolution mechanism. ACT-R's production system. Kahneman's dual-process theory (fast/slow thinking). κ-routing implements the fast/slow distinction mechanically via graph topology.

&

time → Cerebellum + Hippocampus + Basal Ganglia

Temporal difference learning. ACT-R's temporal constraints on memory retrieval. Baddeley's phonological loop for temporal sequencing. TickTickClock implements temporal anomaly detection via Mamba SSM.

&

space → Hippocampus + Parietal Cortex + Entorhinal Grid Cells

O'Keefe & Nadel's Cognitive Map Theory (1978). Place cells (O'Keefe, Nobel 2014) and grid cells (Moser & Moser). SOAR's Spatial Visual System. GeoFleetic implements spatial awareness via delta-CRDTs for distributed state sync.


Open Questions

What we don't know yet

These are genuine open research questions driving our work:

1. Does κ-driven deliberation actually improve answer quality? — The routing is verified; the product effect is not. We need controlled evaluations comparing raw retrieval, enriched retrieval, and deliberated retrieval across circular vs. acyclic knowledge regions.

2. What is the minimum model tier for effective single-pass deliberation? — Our hypothesis is that 8B models can benefit from topology-enriched context even without multi-pass reasoning. This needs empirical validation.

3. How should confidence decay interact with crystallized conclusions? — If a deliberated conclusion decays to low confidence, should it be re-deliberated automatically? What triggers re-examination?

4. Can attention-driven goal generation be safe on small models? — Autonomous PROPOSE mode risks hallucinated goals on weaker models. What guardrails prevent goal drift without eliminating autonomy?

5. What does "understanding" mean for a graph-based system? — If an agent's knowledge graph contains all the right relationships with high confidence, and it can navigate them to answer questions, does it "understand" the domain? This is the question OpenSentience exists to explore.