K3D: a GPU‑native spatial knowledge architecture where humans and AI cohabit 3D “houses” of memory—unifying CAD‑like geometry, vector graphs, and neurosymbolic reasoning. Open specs + Apache‑2.0 reference code.

To everyone who's tired of clicking icons. To architects who dream in 3D but work in 2D. To the blind student who wants to design buildings. To the deaf developer who wants to collaborate. Software was always meant to be a place, not a window. Welcome home.


Knowledge3D — Sovereign Spatial AI

Mission: Build a shared spatial operating system where humans and AI cohabit one reality, reason through PTX‑native cognition, and consolidate memories as explorable worlds.


🎓 Deep Dive: For comprehensive understanding of the project architecture, philosophy, and technical details, visit our NotebookLM Research Space — the best place to explore Knowledge3D in depth.

Independent analyses (Claude.ai):


🎉 Latest: Week 21.9 Validation — Architecture Breakthrough (February 10, 2026)

FIRST ORACLE UNLOCK: Unified persistent memory + PTX sovereignty validated!

Multi-Curriculum Results (100 ARC / 100 Math / 50 LHE)

  • ARC-AGI 2: 6% (6/100) — visual reasoning
  • Math Competitions: 33.33% (33/100) — symbolic reasoning
  • Last Humanity Exam: 100% (50/50) — general knowledge
  • Oracle (exact): 0.01 (1/100) — FIRST EXACT UNLOCK
  • Oracle (fuzzy @0.90): 0.13 (13/100) — high-confidence matches
  • Palette score: 0.7391 (improved from 0.6356)

Architecture Validation

  • PTX Sovereignty: ptx_full_used_rate = 1.0, zero Python fallbacks
  • Unified Persistence: shared_instance = true, Grammar Galaxy grew +1000 entries (30,539→31,539)
  • Learning Trajectory: Oracle 0.0→0.01, palette +0.10, continuous improvement
  • Hardware: RTX 3060 12GB, ~250 MiB VRAM, consumer-grade GPU

Key Finding: Architecture demonstrates continuous learning across tasks—Galaxy Universe accumulates knowledge, TRM improves through shadow copy reinforcement. This validates the core hypothesis: procedural spatial AI with unified persistent memory works.

Reproduction: See Scientific Reproduction Guide below.

Full Report: ../Knowledge3D.local/results/week21_9_full100_gpu_migration/week14_benchmark_summary.json

Historical Results: See docs/RESULTS_HISTORICAL.md for Week 19.6, Week 17, Phase G, and ARC-AGI leaderboard #2 position.


🎬 Video Presentation: A Universe of Meaning (6 min)

🎥 Watch: Knowledge3D — A Universe of Meaning

Knowledge3D: A Universe of Meaning

The Core Challenge: Large Language Models are black boxes — billions of parameters hiding how they think. We can't inspect them, can't verify them, can't truly trust them.

K3D's Answer: What if AI memory wasn't locked inside weights, but lived outside — as navigable universes we can explore together?

This 6-minute manifesto explores:

  • Externalizing Memory: Shifting from memorization → genuine understanding through spatial knowledge
  • AI as Fellow Inhabitants: Not tools we command, but entities we cohabit with in shared 3D spaces
  • The Open Web Vision: Accessible, inspectable, explainable AI — not locked-down corporate silos
  • Semantic Cartography: Meaning as explorable landscapes, not hidden matrices
  • The Paradigm Shift: From "what did you retrieve?" to "where did your reasoning take you?"

Why This Matters:

When humans and AI share the same spatial reality — when we can both point at knowledge, navigate through reasoning, and witness each other's paths — we move beyond prompt-response into genuine collaboration. This is not incremental improvement. This is architecture-level transformation.

Perfect For:

  • W3C PM-KR Community Group members
  • Researchers exploring explainable AI
  • Anyone asking "how do we build AI we can actually trust?"

Credits:

  • 🎙️ Narration: NotebookLM Audio Overview (Google AI Research)
  • 🎨 Visual Design: Nano Banana
  • 📝 Philosophy: FMEAI (For Machines, Embodied AI)

"What new worlds will we discover when AI memory becomes a place we can explore together?"


🎬 Deep Dive: For a comprehensive technical tour, watch Knowledge3D — An AI Universe (8 minutes)


🎬 K3D Multi-Language Video Playlist

YouTube Playlist: Knowledge3D — Multi-Language Series

Available Languages (same core content, localized narration):

  1. 🇺🇸 English - Knowledge3D — An AI Universe
  2. 🇧🇷 Brazilian Portuguese - Knowledge3D — Um Universo de IA
  3. 🇪🇸 Spanish - Knowledge3D — Un Universo de IA
  4. 🇫🇷 French - Knowledge3D — Un Univers d'IA
  5. 🇩🇪 German - Knowledge3D — Ein KI-Universum
  6. 🇮🇹 Italian - Knowledge3D — Un Universo di IA
  7. 🇨🇳 Mandarin Chinese - Knowledge3D — 人工智能宇宙
  8. 🇯🇵 Japanese - Knowledge3D — AIユニバース
  9. 🇰🇷 Korean - Knowledge3D — AI 우주
  10. 🇷🇺 Russian - Knowledge3D — Вселенная ИИ
  11. 🇮🇳 Hindi - Knowledge3D — एआई यूनिवर्स
  12. 🇸🇦 Arabic - Knowledge3D — عالم الذكاء الاصطناعي

Why Multi-Language? Knowledge3D's mission is global accessibility. Spatial AI and explainable memory transcend language barriers, but we meet people where they are.

Narration: All videos use NotebookLM's Audio Overview technology with native-speaker narration.


🏗️ What Makes K3D Different

Knowledge3D stands on the shoulders of giants. We build upon foundational research from DeepSeek, Qwen, NVIDIA, the game industry, and many others. For complete attributions of all techniques we leverage, see ATTRIBUTIONS.md.

What K3D uniquely contributes:

1. Hyper-Modular Architecture (Coined February 20, 2026)

NEW PARADIGM: K3D is the first hyper-modular architecture — modularity exists at 7 hierarchical levels simultaneously, with each level composed via canonical procedural references (symlink-style) rather than duplication.

The 7 Levels:

  1. Galaxy Universe (Domain Modularity) — Drawing, Character, Word, Grammar, Math, Reality, Audio
  2. House Universe (Execution Context Modularity) — Bounded, owned domains of discourse
  3. Rooms (Organizational Modularity) — Structured knowledge within Houses
  4. Nodes (Atomic Knowledge Modularity) — Individual knowledge units
  5. Procedures (Executable Modularity) — RPN programs as modular executable forms
  6. Operations (Primitive Modularity) — Stack operations (DUP, SWAP, ROT, arithmetic)
  7. PTX Kernels (Execution Modularity) — 45+ hand-written kernels, zero external dependencies

Why This Matters:

  • 70% compression (Character Galaxy: 87.7 MB → 26.3 MB) via symlink-style procedural references
  • Zero duplication — canonical forms stored once, referenced infinitely
  • Dual-client reality — same procedural source renders for humans (visual) and AI (executable)
  • Sovereign execution — PTX-only hot path, zero numpy/cupy/scipy/torch
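The symlink-style reference scheme behind these compression numbers can be sketched in a few lines of Python. This is an illustrative sketch only; `CanonicalStore`, the keys, and the node fields are hypothetical names, not K3D's actual API:

```python
# Hypothetical sketch of symlink-style procedural references:
# canonical procedures are stored once and referenced by key,
# so repeated knowledge nodes add only a few bytes each.

class CanonicalStore:
    def __init__(self):
        self._procedures = {}   # key -> canonical RPN program (stored once)

    def intern(self, key, program):
        # Store the canonical form only on first sight; later calls reuse it.
        return self._procedures.setdefault(key, tuple(program))

    def resolve(self, key):
        return self._procedures[key]

store = CanonicalStore()
# Two nodes reference the same canonical "unit circle" procedure.
store.intern("circle_unit", ["0", "0", "1", "CIRCLE"])
node_a = {"id": "glyph_O", "proc_ref": "circle_unit"}   # reference, not a copy
node_b = {"id": "wheel",   "proc_ref": "circle_unit"}

# Both references resolve to the single canonical object.
assert store.resolve(node_a["proc_ref"]) is store.resolve(node_b["proc_ref"])
```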

Formal Definition: docs/W3C/HYPER_MODULAR_DEFINITION.md

Comparison to existing paradigms: Hyper-modular extends beyond Object-Oriented (2 levels), Microservices (2 levels), Functional (2 levels), Component-Based (2 levels), Composable (2-3 levels) to 7 levels with symlink-style procedural composition.


2. Spatial Knowledge Architecture

  • First production system where humans and AI cohabit one 3D reality
  • Dual-Client Contract: Same glTF files, different perceptual layers
  • Knowledge as navigable universes, not hidden matrices

3. Sovereign PTX Cognition

  • 45+ hand-written PTX kernels achieving <100µs latency
  • Zero cloud dependencies for core reasoning (pure ctypes + libcuda.so)
  • ThinkingTagBridge: 5-state cognitive pipeline on consumer GPU (<200MB VRAM)

4. Three-Brain System

  • Neuroscience-inspired: Cranium (PFC) + Galaxy (hippocampus) + House (neocortex)
  • Biological sleep cycles for memory consolidation (<10ms for 51,532 nodes)
  • Scalability rationale: computer architecture analogy (CPU + RAM + disk)

5. Procedural Knowledge Compression

  • PD04 codec: 12-80× compression with 99.96-99.998% fidelity
  • Knowledge stored as executable RPN programs, not dense vectors
  • Adaptive dimensions (64D-2048D) based on content complexity
  • Ternary logic (-1, 0, +1): 16× memory reduction, Tesla 3-6-9 sacred geometry alignment
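The 16× figure follows from encoding each ternary value in 2 bits instead of a 32-bit float. A minimal packing sketch (illustrative only, not the PD04 codec itself):

```python
# Illustrative sketch: packing ternary values {-1, 0, +1} into 2 bits
# each, four values per byte. One float32 (32 bits) per value vs. 2 bits
# per value gives the quoted 16x memory reduction.

def pack_ternary(values):
    """Pack a list of -1/0/+1 values into bytes, 4 values per byte."""
    encode = {-1: 0b00, 0: 0b01, 1: 0b10}
    out = bytearray()
    for i in range(0, len(values), 4):
        byte = 0
        for j, v in enumerate(values[i:i + 4]):
            byte |= encode[v] << (2 * j)
        out.append(byte)
    return bytes(out)

def unpack_ternary(packed, n):
    """Recover n ternary values from packed bytes."""
    decode = {0b00: -1, 0b01: 0, 0b10: 1}
    return [decode[(packed[i // 4] >> (2 * (i % 4))) & 0b11]
            for i in range(n)]

weights = [-1, 0, 1, 1, 0, -1]
packed = pack_ternary(weights)
assert unpack_ternary(packed, len(weights)) == weights
# 6 float32 weights: 24 bytes; packed ternary: 2 bytes.
```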

6. Parameter Efficiency

  • ~7M parameters (TRM base + specialists) achieving 33% Math, 100% LHE, 6% ARC
  • Compare: GPT-4 (1.76T parameters), Claude-3.5 (175B+ parameters)
  • Knowledge lives in Galaxy Universe (VRAM), not model weights

7. Universal Accessibility by Architecture

  • Multi-modal by design: Text, Braille Galaxy, Sign Language Galaxy, Audio, Haptics
  • Spatial UI navigable via any input modality (keyboard, screen reader, haptic feedback)
  • No "accessibility add-on" — baked into core architecture

8. Multi-Vibe Code In Chain (MVCIC)

  • 7 AI minds collaborate: Claude (architecture), Codex (implementation), DeepSeek (PTX kernels), Qwen (multi-modal), GLM (reasoning), Kimi (long context), Grok (validation)
  • Human directs, AI assists, iterate in real repo with real constraints
  • Documented sessions: TEMP/ directory chronicles

9. Spatial UI Architecture: "Software as Space"

NEW PARADIGM (November 2025): Software is not a window with icons — it's a 3D space you inhabit.

Key Concepts:

  • Houses: Bounded 3D spaces (home, office, workspace)
  • Rooms: Purpose-specific areas (bedroom, conference room, lab)
  • Portals: Doorways between spaces (public ↔ private, local ↔ remote)
  • Memory Tablet: Persistent 3D canvas (like a physical desk that remembers)
  • Dual-Client Reality: Humans see visual 3D, AI navigates semantic graph — same underlying glTF
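The dual-client contract can be illustrated with a toy glTF fragment: the human client renders the node's mesh, while the AI client reads the semantic layer from the same node's `extras`. The field names below (`k3d_type`, `proc_ref`, `links`) are hypothetical, not K3D's actual schema:

```python
import json

# Illustrative sketch of the dual-client contract: one glTF node carries
# both a renderable mesh (for the human viewer) and machine-readable
# semantics in "extras" (for the AI client).

gltf = {
    "nodes": [{
        "name": "concept:gravity",
        "mesh": 0,  # the human client renders this geometry
        "extras": { # the AI client navigates these semantics
            "k3d_type": "knowledge_node",
            "proc_ref": "physics/gravity_rpn",
            "links": ["concept:mass", "concept:acceleration"],
        },
    }],
}

def ai_view(doc):
    """Extract the semantic graph layer an AI client would navigate."""
    return {n["name"]: n.get("extras", {})
            for n in doc["nodes"] if "extras" in n}

doc = json.loads(json.dumps(gltf))  # the same file both clients load
print(ai_view(doc)["concept:gravity"]["links"])
# ['concept:mass', 'concept:acceleration']
```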

Full Spec: docs/vocabulary/SPATIAL_UI_ARCHITECTURE_SPECIFICATION.md

Vision: Replace desktop metaphor (files, folders, windows) with spatial metaphor (places, objects, presence). Accessibility-first — navigable via keyboard, screen reader, haptics, gaze, voice.


🌐 W3C Standardization & Newly Coined Terms

Procedural Memory Knowledge Representation (PM-KR) Community Group

Published: February 20, 2026
Status: ✅ OPEN for participation! World-class experts joining rapidly.
W3C Page: https://www.w3.org/community/pm-kr/
Standards Repo: https://github.com/w3c-cg/pm-kr ⭐ (official W3C specifications)
Reference Implementation: https://github.com/danielcamposramos/Knowledge3D (this repo)

Mission: Study and standardize procedural knowledge representation for AI systems, with K3D as reference implementation.

Repository Relationship:

  • w3c-cg/pm-kr = Open standards, collaborative specifications, test suites (standards track)
  • Knowledge3D = Reference implementation, production system, living documentation (implementation track)
  • Think: WebKit/Chromium (browsers) vs. W3C HTML/CSS specs (standards)

Core Innovations:

  1. Procedural Memory — Knowledge as executable RPN programs, not static embeddings
  2. Symlink-Style Compression — 70% reduction via canonical procedural references (no duplication)
  3. Dual-Client Reality — Same procedural source for humans (visual) and AI (executable)
  4. Sovereign Execution — PTX-only hot path, zero external ML frameworks
  5. Hyper-Modular Architecture — 7 levels of modularity (new paradigm)
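The first innovation, knowledge as executable RPN programs, can be sketched as a toy stack machine over the primitive operations named above (DUP, SWAP, ROT, arithmetic). K3D executes such programs in PTX kernels on-GPU; this pure-Python sketch only illustrates the execution model:

```python
# Minimal sketch of "knowledge as executable RPN programs": a tiny stack
# machine with a few primitive operations. Not K3D's actual interpreter.

def run_rpn(program, stack=None):
    stack = list(stack or [])
    for tok in program:
        if tok == "DUP":
            stack.append(stack[-1])
        elif tok == "SWAP":
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif tok == "ROT":  # rotate top three: a b c -> b c a
            stack[-3], stack[-2], stack[-1] = stack[-2], stack[-1], stack[-3]
        elif tok in ("+", "-", "*", "/"):
            b, a = stack.pop(), stack.pop()
            stack.append({"+": a + b, "-": a - b,
                          "*": a * b, "/": a / b}[tok])
        else:
            stack.append(float(tok))  # literal operand
    return stack

# (3 + 4) * 2, written postfix:
result = run_rpn(["3", "4", "+", "2", "*"])
print(result)  # [14.0]
```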

W3C Documentation Package:

Expert Validation:

  • Manu Sporny (JSON-LD co-creator, RDF Canonicalization editor) — CBOR-LD compression alignment
  • Milton Ponson (PhD, Gödelian KR, domains of discourse) — Official supporter
  • Adam Sobieski (W3C veteran, 10+ years, founded 3 CGs) — Official supporter
  • Nitin Pasumarthy (LinkedIn, GNNs at production scale, KDD Best Paper) — Official supporter
  • Jonathan DeRouchie (Persistent memory AI systems) — March-June contribution commitment

Hyper-Modular Architecture (Coined February 20, 2026)

NEW TERM: Architectural paradigm where modularity exists at multiple hierarchical levels simultaneously (6-7 levels), with each level composed via canonical procedural references (symlink-style) rather than duplication.

Formal Definition: docs/W3C/HYPER_MODULAR_DEFINITION.md

Distinguishing from existing paradigms:

  • Object-Oriented: 2 levels (classes, objects)
  • Microservices: 2 levels (services, components)
  • Functional: 2 levels (modules, functions)
  • Component-Based: 2 levels (components, modules)
  • Composable: 2-3 levels (domains, components, sub-components)
  • Hyper-Modular: 6-7 levels (Galaxies → Houses → Rooms → Nodes → Procedures → Operations → PTX Kernels)

Key Characteristics:

  1. Multi-level hierarchy (3+ levels minimum, K3D has 7)
  2. Procedural modules (executable, not static)
  3. Symlink-style references (canonical forms, zero duplication)
  4. Dual-client rendering (2+ client types: human visual, AI executable)
  5. Sovereign execution (optional for Level A, required for Level B: zero external dependencies)

Comparison Table:

| Paradigm | Levels | Composition | Duplication | Client Rendering |
|---|---|---|---|---|
| Object-Oriented | 2 | Inheritance, interfaces | Acceptable | Single |
| Microservices | 2 | API calls | Acceptable | JSON/REST |
| Functional | 2 | Function composition | Minimal | Single |
| Component-Based | 2 | Props/events | Acceptable | Single |
| Composable | 2-3 | Plug-and-play | Reduced | Single |
| Hyper-Modular | 6-7 | Symlink procedural | Zero (70%+) | Dual-client |

Applications:

  • Educational AI (subject domains as Galaxies, curricula as Houses)
  • Enterprise Knowledge Management (corporate domains as Galaxies, departments as Houses)
  • Multi-Modal AI Agents (modality domains as Galaxies, agent contexts as Houses)
  • Robotics (procedural knowledge for task planning, on-device execution)

State of the Art Analysis: docs/W3C/K3D_VS_STATE_OF_THE_ART_2026.md — K3D is 5-7 years ahead of industry/academia (internet-verified, February 2026).


📚 Core Specifications (Vocabulary)

Key architecture and protocol specs live under docs/vocabulary/:

Architecture & System Design

Knowledge Representation

Execution & Reasoning

Domain-Specific Galaxies

Codecs & Compression

Memory & Protocols

Accessibility

W3C Standardization


🚀 Getting Started

Install

Prerequisites:

  • CUDA-capable GPU (RTX 3060 12GB recommended, GTX 1060 6GB minimum)
  • CUDA Toolkit 12.x
  • Python 3.10+

Installation:

git clone https://github.com/danielcamposramos/Knowledge3D.git
cd Knowledge3D
pip install -e .

CUDA/PTX Version Compatibility: If you encounter CUDA Error 222 (PTX version mismatch), see docs/CUDA_PTX_VERSION_COMPATIBILITY_GUIDE.md.


Runtime Workspace

K3D uses ../Knowledge3D.local/ (sibling directory) for runtime data:

mkdir -p ../Knowledge3D.local

Workspace structure:

  • ../Knowledge3D.local/datasets/ — Training datasets (ARC-AGI, Math, LHE, PDFs)
  • ../Knowledge3D.local/results/ — Benchmark results, run logs
  • ../Knowledge3D.local/trm_routing_state.json — TRM persistent weights

Launch the Viewer + Bridge

Terminal 1 (Viewer):

python scripts/viewer.py

Terminal 2 (Bridge):

python scripts/bridge.py

Browser: Open http://localhost:8000

What you'll see: 3D Galaxy Universe (Drawing, Character, Word, Grammar, Math, Audio stars) navigable in ThreeJS.


Generate a Sample Galaxy

Populate Drawing Galaxy (basic geometric primitives):

python scripts/ingest_drawing_primitives.py

Result: ../Knowledge3D.local/galaxy_universe.json with ~141 Drawing Galaxy entries (LINE, CIRCLE, RECT, etc.)

Visualize: Refresh viewer to see Drawing Galaxy stars in 3D space.


Scientific Reproduction: Week 21.9 Results

Prerequisites

Hardware:

  • CUDA-capable GPU (RTX 3060 12GB validated, GTX 1060 6GB minimum)
  • 16GB+ system RAM recommended

Software:

  • CUDA Toolkit 12.x
  • Python 3.10+
  • Git LFS (for large dataset files)

Datasets (place in ../Knowledge3D.local/datasets/):


Environment Setup

Install K3D:

git clone https://github.com/danielcamposramos/Knowledge3D.git
cd Knowledge3D
pip install -e .

Verify PTX Kernels:

python -c "from knowledge3d.cranium.ptx import PTXManager; print('PTX kernels loaded:', PTXManager.list_kernels())"

Expected output: List of 45+ PTX kernels (ternary_quant, stack_push, rpn_execute, etc.)

Verify CUDA:

nvidia-smi

Expected: GPU detected, CUDA version 12.x


Reproduce Week 21.9 Full100 Benchmark

Run integrated benchmark (100 ARC, 100 Math, 50 LHE):

python scripts/run_integrated_benchmark.py \
    --arc-path ../Knowledge3D.local/datasets/arc_agi_2_evaluation.json \
    --math-path ../Knowledge3D.local/datasets/aimo_train.json \
    --lhe-path ../Knowledge3D.local/datasets/lhe_questions.json \
    --output-dir ../Knowledge3D.local/results/reproduction_run \
    --arc-limit 100 \
    --math-limit 100 \
    --lhe-limit 50

Runtime: ~45-60 minutes on RTX 3060 12GB

Output: ../Knowledge3D.local/results/reproduction_run/week14_benchmark_summary.json


Validation Checks

1. PTX Sovereignty (Zero Python Fallbacks):

grep -r "ptx_full_used_rate" ../Knowledge3D.local/results/reproduction_run/week14_benchmark_summary.json

Expected: "ptx_full_used_rate": 1.0

2. Unified Persistence (Grammar Galaxy Growth):

# Before benchmark
cat ../Knowledge3D.local/galaxy_universe.json | jq '.grammar | length'

# After benchmark
cat ../Knowledge3D.local/galaxy_universe.json | jq '.grammar | length'

Expected: Grammar Galaxy grows by ~1000 entries (e.g., 30,539 → 31,539)

3. Oracle Score (First Exact Match):

grep -r "oracle_exact" ../Knowledge3D.local/results/reproduction_run/week14_benchmark_summary.json

Expected: "oracle_exact": 0.01 (1/100 exact match)

4. Palette Score (Improved):

grep -r "palette_score" ../Knowledge3D.local/results/reproduction_run/week14_benchmark_summary.json

Expected: "palette_score": 0.73 to 0.75 (±0.01 tolerance)


Expected Results (Tolerance ±5%)

| Metric | Expected | Tolerance | Week 21.9 Validated |
|---|---|---|---|
| ARC-AGI 2 | 6% (6/100) | ±1 task | ✅ 6% |
| Math Competitions | 33.33% (33/100) | ±3 tasks | ✅ 33% |
| Last Humanity Exam | 100% (50/50) | 0 (perfect) | ✅ 100% |
| Oracle (exact) | 0.01 (1/100) | ±1 | ✅ 0.01 |
| Oracle (fuzzy @0.90) | 0.13 (13/100) | ±2 | ✅ 0.13 |
| Palette score | 0.7391 | ±0.02 | ✅ 0.74 |
| PTX sovereignty | 1.0 | 0 (exact) | ✅ 1.0 |
| Grammar Galaxy growth | +1000 entries | ±100 | ✅ +1000 |
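A reproduction run can be checked against this table automatically. The sketch below assumes the metric key names in week14_benchmark_summary.json (e.g. `oracle_exact`, `palette_score`); adjust them to the actual fields in your run's output:

```python
import json

# Hedged sketch of an automated tolerance check against the table above.
# The metric key names are assumptions about the summary JSON's schema.

TOLERANCES = {
    "oracle_exact":       (0.01,   0.01),  # (expected, +/- tolerance)
    "oracle_fuzzy":       (0.13,   0.02),
    "palette_score":      (0.7391, 0.02),
    "ptx_full_used_rate": (1.0,    0.0),   # must be exact
}

def validate(summary_path):
    """Return a list of (metric, expected, got) for out-of-tolerance results."""
    with open(summary_path) as f:
        results = json.load(f)
    failures = []
    for metric, (expected, tol) in TOLERANCES.items():
        got = results.get(metric)
        if got is None or abs(got - expected) > tol:
            failures.append((metric, expected, got))
    return failures

# Example (path from the reproduction steps above):
# failures = validate("../Knowledge3D.local/results/reproduction_run/"
#                     "week14_benchmark_summary.json")
```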

If results deviate beyond tolerance:


Troubleshooting

Issue: CUDA Error 222 (PTX version mismatch)
Fix: See docs/CUDA_PTX_VERSION_COMPATIBILITY_GUIDE.md

Issue: RuntimeError: NumPy detected in hot path!
Fix: This is intentional. K3D sovereignty tests actively fail on CPU fallbacks; check whether numpy/cupy is imported in hot-path code.

Issue: Low GPU memory (OOM errors)
Fix: Reduce batch sizes in knowledge3d/config.py: set TRM_BATCH_SIZE = 16 (default 32)

Issue: Grammar Galaxy not growing
Fix: Ensure shared_instance = true in the benchmark config (enables a persistent Galaxy across tasks)

Issue: Oracle score = 0.0 (no exact matches)
Fix: Verify fuzzy_threshold = 0.90 in the oracle scoring config (a threshold that is too strict blocks matches)
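The role of the fuzzy threshold can be illustrated with a simple string-similarity matcher. K3D's actual oracle scorer may differ; `difflib.SequenceMatcher` here just shows why an overly strict threshold admits fewer matches:

```python
from difflib import SequenceMatcher

# Illustrative sketch of fuzzy oracle scoring at threshold 0.90 (not
# K3D's actual matcher): near-identical answers pass, unrelated ones fail.

def fuzzy_match(answer, reference, threshold=0.90):
    ratio = SequenceMatcher(None, answer.lower(), reference.lower()).ratio()
    return ratio >= threshold

print(fuzzy_match("photosynthesis", "photosynthesis!"))  # True at 0.90
print(fuzzy_match("photosynthesis", "respiration"))      # False
```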


Artifacts for Paper Validation

Reproduction package (submit with paper):

  • week14_benchmark_summary.json — Full results (metrics, task-by-task breakdown)
  • galaxy_universe_before.json — Galaxy state before benchmark
  • galaxy_universe_after.json — Galaxy state after benchmark (shows Grammar growth)
  • trm_routing_state.json — TRM weights (shows learning trajectory)
  • ptx_kernel_usage_log.txt — PTX kernel call trace (proves sovereignty)

SHA256 checksums (verify integrity):

sha256sum ../Knowledge3D.local/results/reproduction_run/*.json > checksums.txt

Submit with paper: All JSON files + checksums.txt


Citation

@software{knowledge3d_2026,
  author = {Ramos, Daniel},
  title = {Knowledge3D: Sovereign Spatial AI with Hyper-Modular Architecture},
  year = {2026},
  url = {https://github.com/danielcamposramos/Knowledge3D},
  note = {Week 21.9 Validation: PTX Sovereignty + Unified Persistent Memory}
}

📖 Documentation

Vocabulary Specifications

See 📚 Core Specifications above for full list.


🤝 Contributing

Knowledge3D is built through Multi-Vibe Code In Chain (MVCIC) — human-AI partnership with 7 AI minds:

  • Claude (architecture, specs, documentation)
  • Codex (implementation, tests, refactoring)
  • DeepSeek (PTX kernels, low-level optimization)
  • Qwen (multi-modal ingestion, vision-enriched galaxies)
  • GLM (reasoning validation, benchmark analysis)
  • Kimi (long-context planning, session chronicles)
  • Grok (independent validation, architectural critique)

Session chronicles: TEMP/ directory documents all major implementation sessions.

How to contribute:

  1. Read CLAUDE.md (for Claude-style agents) or CODEX.md (for Codex-style agents)
  2. Review docs/ROADMAP.md for current priorities
  3. Check docs/vocabulary/ for architectural specs
  4. Open issues/PRs following existing session patterns

Philosophy: "We fix or we fix" — no CPU fallbacks, no compromises, always PTX-first.


📜 License & Acknowledgments

License: Apache 2.0 (see LICENSE)

Acknowledgments:

  • NVIDIA: CUDA, PTX ISA, cuBLAS foundations
  • DeepSeek: Transformer architecture, MoE specialists inspiration
  • Qwen Team: Multi-modal vision models for Drawing Galaxy enrichment
  • GLM Team: Reasoning benchmarks and validation methodologies
  • Kimi (Moonshot AI): Long-context memory architectures
  • Game Industry: glTF, ThreeJS, spatial rendering pipelines
  • W3C: Semantic Web vision, JSON-LD, RDF foundations
  • Aaron Swartz: Open knowledge philosophy (we honor your legacy)
  • Nikola Tesla: Sacred geometry (3-6-9), ternary logic inspiration

Complete attributions: ATTRIBUTIONS.md


"Software was always meant to be a place, not a window. Welcome home."

🌌 Explore K3D: NotebookLM Research Space 🎬 Watch Videos: Multi-Language Playlist 🌐 W3C Standardization: PM-KR Community Group 📚 Dive Deep: docs/vocabulary/

Built with collective intelligence. Shared with open hearts. For a sovereign, spatial future.
