Published protocols for structured memory, topological deliberation, autonomous attention, and agent governance. Because the questions about machine understanding matter as much as the products.
agent & memory & reasoning & time & space.
Intelligence does not come from a model. It emerges from the system those components create.
— The [&] thesis
OpenSentience is the research arm of Ampersand Box Design. While the [&] portfolio builds production infrastructure for agent cognition, OpenSentience publishes the theoretical foundations, empirical protocols, and open questions that guide that work. Every protocol below is grounded in implemented or in-progress code, not speculation.
Our position: the limiting factor in AI agent capability is not raw model intelligence. It is memory architecture, deliberation structure, temporal grounding, and governance. These are infrastructure problems, not model problems. They require engineering protocols, not larger parameters.
Each protocol addresses a specific gap in the agent cognition stack. Together they form a complete autonomous loop: an agent that knows what it knows, knows what it doesn't know, and decides what to do about it — without being asked.
A graph-backed memory engine where agents store episodic, semantic, and procedural knowledge as typed nodes with confidence scores and provenance chains. Multi-timescale consolidation inspired by hippocampal replay — fast memory promotes to slow memory, weak connections decay, strong patterns crystallize. Outcome-driven learning updates confidence across causal chains, not just individual nodes.
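A minimal sketch of what such a typed node and its outcome-driven confidence update might look like. The schema and the `consolidate` rule are illustrative assumptions, not Graphonomous's actual API:

```python
from dataclasses import dataclass, field
from typing import Literal

# Hypothetical node shape: typed, confidence-scored, with a provenance chain.
@dataclass
class MemoryNode:
    id: str
    kind: Literal["episodic", "semantic", "procedural"]
    content: str
    confidence: float                                    # 0.0 - 1.0
    provenance: list[str] = field(default_factory=list)  # source node ids

def consolidate(node: MemoryNode, outcome_success: bool, rate: float = 0.1) -> MemoryNode:
    """Outcome-driven update (assumed form): nudge confidence toward 1.0
    on a successful outcome, toward 0.0 on a failed one."""
    target = 1.0 if outcome_success else 0.0
    node.confidence += rate * (target - node.confidence)
    return node
```

In a real consolidation pass the same update would propagate along the provenance chain, so causal ancestors gain or lose confidence together, matching the "causal chains, not just individual nodes" behavior described above.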
The cyclicity invariant κ (kappa) detects irreducible feedback loops in a knowledge graph. When κ = 0, the subgraph is a DAG — retrieve context in one pass. When κ > 0, circular dependencies exist — iterate and deliberate before answering. κ determines not just whether to think harder, but how entangled the reasoning is. Proved on 1,926,351 finite systems with zero counterexamples. Fault-line edges (minimum cuts within SCCs) become the mechanical decomposition boundaries for deliberation.
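The text does not spell out κ's formula, but the κ = 0 ⇔ DAG property pins down the shape of the computation: find strongly connected components, then score the nontrivial ones. The sketch below uses the cycle rank (edges − nodes + 1) of each nontrivial SCC as a stand-in for the published invariant; treat `max_kappa` as an assumption, not the real definition:

```python
from itertools import count

def sccs(adj):
    """Strongly connected components of a directed graph (Tarjan's algorithm).
    adj maps each node to a list of successor nodes."""
    index, low = {}, {}
    stack, on_stack, out = [], set(), []
    counter = count()

    def strongconnect(v):
        index[v] = low[v] = next(counter)
        stack.append(v); on_stack.add(v)
        for w in adj.get(v, []):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:
            comp = []
            while True:
                w = stack.pop(); on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            out.append(comp)

    for v in adj:
        if v not in index:
            strongconnect(v)
    return out

def max_kappa(adj):
    """kappa proxy: max cycle rank over nontrivial SCCs; 0 iff the graph is a DAG."""
    best = 0
    for comp in sccs(adj):
        nodes = set(comp)
        if len(nodes) == 1 and comp[0] not in adj.get(comp[0], []):
            continue  # singleton without self-loop: acyclic, contributes nothing
        edges = sum(1 for v in nodes for w in adj.get(v, []) if w in nodes)
        best = max(best, edges - len(nodes) + 1)
    return best
```

A DAG yields 0 (one-pass retrieval); a single cycle yields 1; overlapping cycles inside one SCC push the value higher, matching the "how entangled" reading of κ.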
When κ > 0, fault-line edges become prompt boundaries. The Deliberator decomposes circular knowledge along those boundaries, runs focused reasoning passes on each partition, reconciles them, and writes conclusions back into the graph — reducing κ over time as uncertainty crystallizes into settled knowledge. Single-agent fast path; escalates to multi-agent formal argumentation (Deliberatic) only when convergence fails.
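The decompose → reason → reconcile → write-back loop can be sketched as a skeleton with injected callables; the function shapes here are assumptions for illustration, not the Deliberator's real interface:

```python
def deliberate(partitions, reason_over, reconcile, write_back, max_iterations=4):
    """Run a focused reasoning pass per partition, reconcile the partial
    results, and crystallize the conclusion back into the graph.
    Returns None when convergence fails within budget, at which point the
    caller would escalate to multi-agent argumentation."""
    for _ in range(max_iterations):
        partials = [reason_over(p) for p in partitions]   # focused passes
        conclusion, converged = reconcile(partials)
        write_back(conclusion)   # the graph learns from its own reasoning
        if converged:
            return conclusion
    return None  # escalate (Deliberatic)
```

Because each conclusion is written back as settled knowledge, repeated runs shrink the circular region, which is how κ decreases over time.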
The missing ignition in a reactive system. The Attention Engine is a periodic loop that examines the knowledge graph's topology, coverage gaps, and active goals to decide what the system should reason about, learn about, or act on next, without waiting for a query. Three modes: Explore (what don't I know?), Plan (what should I do?), and Focus (where should I spend compute?). Attention is not a fifth cognitive primitive; it is meta-reasoning over the existing four.
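One plausible shape for a single tick of that loop, with the mode-selection heuristics and the `graph_stats` keys as assumptions rather than the engine's real policy:

```python
def attention_tick(graph_stats, goals):
    """Pick the system's next autonomous focus without an external query.
    Priority order (assumed): Explore coverage gaps, then Plan against
    active goals, then Focus compute on the highest-kappa region."""
    if graph_stats["coverage_gaps"]:
        return ("explore", graph_stats["coverage_gaps"][0])  # what don't I know?
    if goals:
        return ("plan", goals[0])                            # what should I do?
    by_region = graph_stats["kappa_by_region"]
    hot = max(by_region, key=by_region.get)
    return ("focus", hot)                                    # where to spend compute?
```

A real engine would weight these modes against an autonomy budget instead of a fixed priority, but the point is the same: the graph's own state, not a user prompt, seeds the next cycle.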
κ routing becomes more valuable on constrained hardware — it tells the system when to skip expensive inference entirely. Three tiers (local 8B, local 70B+, cloud frontier) with qualitatively different strategies: single-pass enrichment vs. multi-pass deliberation, demand-triggered vs. heartbeat attention, aggressive crystallization vs. fresh inference. The κ paradox: ROI of topological routing is highest when inference is most expensive.
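A sketch of the tier split and the skip-inference decision it enables; the tier labels and strategy strings paraphrase the prose above and are not a published configuration:

```python
# Assumed tier -> strategy mapping, paraphrasing the three-tier description.
TIER_STRATEGY = {
    "local_8b":       {"deliberation": "single-pass enrichment",
                       "attention": "demand-triggered",
                       "memory": "aggressive crystallization"},
    "local_70b":      {"deliberation": "multi-pass",
                       "attention": "heartbeat",
                       "memory": "aggressive crystallization"},
    "cloud_frontier": {"deliberation": "multi-pass",
                       "attention": "heartbeat",
                       "memory": "fresh inference"},
}

def should_infer(kappa: int, tier: str) -> bool:
    """The kappa paradox in one branch: on the smallest tier, kappa == 0
    means the answer is a one-pass graph retrieval, so skip the expensive
    model call entirely."""
    if tier == "local_8b" and kappa == 0:
        return False
    return True
```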
A lightweight governance layer — not a full runtime — that wraps around any OTP-based agent system (Jido, Alloy, or raw GenServer). Provides the permission taxonomy (filesystem, network, tool invocation, graph access), audit trail, agent lifecycle states (installed → enabled → running), and three autonomy levels (observe, advise, act). Designed as a hex package dependency, not a daemon.
A simple graph-theoretic insight with far-reaching consequences for agent reasoning. The strongly connected component structure of a knowledge graph mechanically determines when retrieval is sufficient and when deliberation is required.
// The routing decision is trivial once κ is computed:
function route(topology_result):
    if topology_result.max_kappa == 0:
        return "fast"        // retrieve in one pass
    else:
        return "deliberate"  // decompose along fault lines

// Deliberation budget scales with κ:
budget.max_iterations = min(κ + 1, 4)
budget.agent_count = min(κ, 3)
budget.confidence_threshold = min(0.7 + 0.05 × κ, 0.95)
The key insight: the graph's structure mechanically determines the prompt structure. No human prompt engineering. The topology is the reasoning template. For κ = 1, one fault line generates two conditional assumption passes plus reconciliation. For κ = 2, two independent fault lines generate a 2×2 assumption matrix. The Deliberator writes conclusions back as new semantic nodes — the graph learns from its own reasoning. Over time, κ decreases as uncertainty crystallizes into settled knowledge.
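The assumption-matrix construction generalizes mechanically: κ independent fault lines yield 2^κ conditional passes (two for κ = 1, a 2×2 matrix for κ = 2, as above). A minimal enumeration, with the edge labels as illustrative placeholders:

```python
from itertools import product

def assumption_matrix(fault_lines):
    """One conditional reasoning pass per combination of assumptions:
    each fault-line edge is held either true or false for that pass."""
    return [dict(zip(fault_lines, combo))
            for combo in product([True, False], repeat=len(fault_lines))]
```

Each dict is then rendered into a prompt prefix ("assume edge X holds / does not hold"), and the reconciliation pass compares the 2^κ conclusions.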
Each protocol occupies a layer. Together they form the autonomous cognition loop described in the Attention Engine: sense → understand → think → act → learn → govern → decide what to do next.
The governance shim is not a runtime. It's a thin permission and audit layer that wraps around any OTP-based agent system. The autonomy level is a governance decision, not a maturity metric. Some agents should stay at :observe forever.
Observe: Show me what you'd do, but don't do it. Safe default for new deployments, debugging, and audit mode.
Advise: Propose actions, I'll approve. Typical production mode; human-in-the-loop.
Act: Do it, but stay within limits. High-trust with a strict budget; fully autonomous within governance bounds.
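The three levels reduce to a small gate over every proposed action. This is a toy sketch of that routing, not the shim's real API; the action and audit shapes are assumptions:

```python
LEVELS = ("observe", "advise", "act")

def gate(level, action, approve, audit):
    """Route a proposed action through the agent's autonomy level.
    approve is the human-in-the-loop callback used at the advise level."""
    audit.append((level, action))  # every path lands in the audit trail
    if level == "observe":
        return "logged"            # show what would happen, never do it
    if level == "advise":
        return "executed" if approve(action) else "rejected"
    return "executed"              # act: proceed within budget limits
```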
// Permission taxonomy (thin governance, not full runtime)
permissions: [
  "filesystem:read:/data/knowledge",
  "tool:invoke:graphonomous/*",
  "tool:invoke:ticktickclock/detect",
  "graph:read:*",
  "graph:write:agent-owned",
  "event:subscribe:outcome.*",
  "event:publish:attention.proposal"
]

// Autonomy budget (from ampersand.json governance block)
autonomy: {
  level: "advise",
  model_tier: "local_small",
  budget: {
    max_actions_per_hour: 5,
    require_approval_for: ["act", "propose"]
  }
}
The four primitives are not arbitrary marketing categories. They map directly to established cognitive architecture models and neuroanatomical systems.
Tulving's episodic/semantic distinction (1972). Atkinson-Shiffrin multi-store model. SOAR's semantic + episodic memory systems. ACT-R's declarative memory. Graphonomous implements multi-timescale consolidation inspired by hippocampal-neocortical replay.
BDI (Belief-Desire-Intention) model. SOAR's impasse resolution mechanism. ACT-R's production system. Kahneman's dual-process theory (fast/slow thinking). κ-routing implements the fast/slow distinction mechanically via graph topology.
Temporal difference learning. ACT-R's temporal constraints on memory retrieval. Baddeley's phonological loop for temporal sequencing. TickTickClock implements temporal anomaly detection via Mamba SSM.
O'Keefe & Nadel's Cognitive Map Theory (1978). Place cells (O'Keefe, Nobel 2014) and grid cells (Moser & Moser). SOAR's Spatial Visual System. GeoFleetic implements spatial awareness via delta-CRDTs for distributed state sync.
These are genuine open research questions driving our work:
1. Does κ-driven deliberation actually improve answer quality? — The routing is proved; the product effect is not. We need controlled evaluations comparing raw retrieval, enriched retrieval, and deliberated retrieval across circular vs. acyclic knowledge regions.
2. What is the minimum model tier for effective single-pass deliberation? — Our hypothesis is that 8B models can benefit from topology-enriched context even without multi-pass reasoning. This needs empirical validation.
3. How should confidence decay interact with crystallized conclusions? — If a deliberated conclusion decays to low confidence, should it be re-deliberated automatically? What triggers re-examination?
4. Can attention-driven goal generation be safe on small models? — Autonomous PROPOSE mode risks hallucinated goals on weaker models. What guardrails prevent goal drift without eliminating autonomy?
5. What does "understanding" mean for a graph-based system? — If an agent's knowledge graph contains all the right relationships with high confidence, and it can navigate them to answer questions, does it "understand" the domain? This is the question OpenSentience exists to explore.