
Codette Adapter Training Lab

Codette is an experimental AI research system for recursive reasoning, multi-perspective cognition, and ethical AI alignment, created by Jonathan Harrison.

This repository contains the complete training pipeline, inference server, and 8 trained LoRA adapters for the Codette cognitive architecture running on Llama 3.1 8B.

🚀 Latest Status (Session 2026-03-19) — LIVE & TESTED

✅ Agent LLM Integration Complete

All 6 reasoning agents now use real LLM inference via trained LoRA adapters:

  • Newton (physics reasoning) → newton adapter
  • Quantum (probabilistic thinking) → quantum adapter
  • DaVinci (creative invention) → davinci adapter
  • Philosophy (conceptual reasoning) → philosophy adapter
  • Empathy (emotional intelligence) → empathy adapter
  • Ethics (moral reasoning) → philosophy adapter

Result: Agents generate domain-specific, LLM-backed reasoning instead of templates.
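The agent-to-adapter routing above can be sketched as a simple lookup table. Note that `AGENT_ADAPTERS` and `resolve_adapter` are hypothetical names for illustration, not the repository's actual API:

```python
# Illustrative sketch of the agent-to-adapter routing described above.
# AGENT_ADAPTERS and resolve_adapter are hypothetical names, not the repo's API.
AGENT_ADAPTERS = {
    "newton": "newton",         # physics reasoning
    "quantum": "quantum",       # probabilistic thinking
    "davinci": "davinci",       # creative invention
    "philosophy": "philosophy", # conceptual reasoning
    "empathy": "empathy",       # emotional intelligence
    "ethics": "philosophy",     # moral reasoning shares the philosophy adapter
}

def resolve_adapter(agent: str) -> str:
    """Return the LoRA adapter name that backs a reasoning agent."""
    try:
        return AGENT_ADAPTERS[agent.lower()]
    except KeyError:
        raise ValueError(f"unknown agent: {agent}")
```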

✅ GPU Acceleration Active

  • Model load: ~8-10 seconds (GPU vs 40s CPU)
  • Inference: 2-4 sec/query (GPU vs 15-20s CPU)
  • Full eval: ~2-3 minutes (GPU vs 7-10 minutes CPU)
  • 35/35 layers offloaded to GPU via llama.cpp

✅ Phase 6 Stability Verified

All control mechanism patches tested and working:

  • Patch 2: Conflict capping (23 → 10 conflicts/round)
  • Patch 4: Gamma authority (threshold 0.3, prevents collapse)
  • Patch 5: Domain-aware gating (2-3 agents/domain, not all 6)
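Two of the patches above (conflict capping and domain-aware gating) can be sketched in a few lines; `cap_conflicts` and `gate_agents` are hypothetical helpers for illustration, not the repository's actual functions, and the gamma-authority comparison is left out because its exact rule is not specified here:

```python
# Hypothetical sketches of Phase 6 control patches; names are illustrative.
MAX_CONFLICTS = 10  # Patch 2: cap conflicts per round (23 -> 10 in the eval log)

def cap_conflicts(conflicts: list) -> list:
    """Patch 2: keep at most MAX_CONFLICTS conflicts per round."""
    return conflicts[:MAX_CONFLICTS]

def gate_agents(domain: str, registry: dict) -> list:
    """Patch 5: activate only the 2-3 agents registered for a domain,
    rather than all 6 agents on every query."""
    return registry.get(domain, [])[:3]
```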

✅ First Eval Results

Q1: "What is the speed of light in vacuum?"
  Agent modes: ✓ LLM ✓ LLM ✓ LLM ✓ LLM ✓ LLM ✓ LLM (all agents using GPU)
  Domain detection: physics → 2 agents active (Newton, Quantum)
  Conflicts: 23 detected → 10 capped (Patch 2)
  Gamma: 0.38 → intervention triggered (Patch 4)
  GPU: ✓ ENABLED (35 layers offloaded)

Model Weights

All 8 adapters are included in two formats:

| Format | Directory | Size | Use Case |
|---|---|---|---|
| GGUF (f16) | adapters/*.gguf | ~924 MB | llama.cpp inference with hot-swap |
| PEFT SafeTensors | adapters_peft/*/ | ~79 MB | HuggingFace / transformers fine-tuning |

Base model required: meta-llama/Llama-3.1-8B-Instruct (or any Llama-3.1-8B variant with hidden_size=4096)

Key Metrics

| Metric | Value | Context |
|---|---|---|
| Phase Coherence (Gamma) | 0.9835 | 11-agent convergence |
| AEGIS Ethical Alignment (Eta) | 0.961 | 6-framework ethical governance |
| Cocoon Coherence | 0.994 | Memory state stability |
| Memory Phase Stability | 0.969 | Cross-session persistence |
| Tension Decay | 91.2% | 200-agent embodied simulation |

Cognitive Subsystems (10 active)

| Subsystem | Module | Purpose |
|---|---|---|
| Reasoning Forge | reasoning_forge/forge_engine.py | 6-agent multi-perspective debate + synthesis |
| Epistemic Metrics | reasoning_forge/epistemic_metrics.py | RC+xi tension/coherence tracking |
| Quantum Spiderweb | reasoning_forge/quantum_spiderweb.py | 5D belief propagation + attractor detection |
| Cocoon Sync | reasoning_forge/cocoon_sync.py | Fernet-encrypted federated state sync |
| AEGIS | reasoning_forge/aegis.py | 6-framework ethical governance (utilitarian, deontological, virtue, care, ubuntu, indigenous) |
| Nexus Signal Engine | reasoning_forge/nexus.py | Pre-corruption detection via entropy + FFT + intent vectors |
| Living Memory | reasoning_forge/living_memory.py | Emotionally-tagged memory cocoons with SHA-256 anchors |
| Guardian | reasoning_forge/guardian.py | 3-layer protection (sanitizer + ethical anchor + trust calibrator) |
| Resonant Continuity | reasoning_forge/resonant_continuity.py | Psi_r wavefunction: emotion x energy x frequency x intent |
| Perspective Registry | reasoning_forge/perspective_registry.py | 12 perspectives (8 LoRA-backed + 4 prompt-only with fallback) |

Architecture

codette-training-lab/
├── dataset_engine/          # Dataset generation pipeline
│   ├── template_registry.py # Rich template pools per adapter
│   ├── answer_generator.py  # Structured educational answer generation
│   ├── dataset_generator.py # Main generator with dedup + validation
│   └── templates/           # JSON template definitions
│
├── reasoning_forge/         # Multi-agent reasoning dataset refinement
│   ├── agents/              # Newton, Quantum, Ethics, Philosophy, DaVinci, Empathy
│   ├── critic_agent.py      # Quality evaluation agent
│   ├── synthesis_engine.py  # Multi-perspective synthesis
│   ├── problem_generator.py # Reasoning problem generation
│   └── forge_engine.py      # Orchestrator
│
├── training/                # LoRA training scripts
│   ├── train_adapter.py     # Single adapter training (4-bit LoRA)
│   ├── train_all_adapters.py # Sequential multi-adapter training
│   ├── merge_adapters.py    # Merge LoRA into base model
│   └── configs/             # Training hyperparameters
│
├── evaluation/              # Benchmarks and quality assurance
│   ├── reasoning_metrics.py # Multi-dimensional scoring
│   ├── benchmark_runner.py  # Automated evaluation
│   ├── dataset_validator.py # Dataset quality checks
│   ├── failure_analyzer.py  # Weakness detection
│   └── prompts/             # Benchmark test sets
│
├── observatory/             # Experiment tracking and monitoring
│   ├── metrics_logger.py    # Training run logging
│   ├── performance_tracker.py # Improvement trends
│   ├── dataset_quality_monitor.py
│   └── dashboard.py         # ASCII status dashboard
│
├── research/                # Source research documents
│   ├── papers/              # Published manuscripts
│   ├── frameworks/          # RC+xi, quantum equations, perspectives
│   └── experiments/         # Cocoon simulations, logs
│
├── datasets/                # Generated training datasets (JSONL)
├── adapters/                # Trained LoRA adapters
├── scripts/                 # Pipeline orchestration
│   ├── run_full_pipeline.py # End-to-end pipeline
│   └── hf_job.yaml          # HuggingFace job config
└── configs/                 # System configuration
    ├── adapter_registry.yaml
    └── pipeline_config.yaml
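As one illustration, configs/adapter_registry.yaml might map each adapter to its dataset and weight paths. The keys shown here are hypothetical, not the file's actual schema:

```yaml
# Hypothetical illustration of a registry entry; field names are not
# the repository's actual schema.
adapters:
  newton:
    dataset: datasets/newton_reasoning.jsonl
    gguf: adapters/newton.gguf
    peft: adapters_peft/newton/
```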

Adapters

| Adapter | Domain | Target Examples | System Prompt |
|---|---|---|---|
| Newton | Analytical physics reasoning | 3000 | Newtonian analytical precision |
| DaVinci | Creative invention thinking | 2500 | Creative inventiveness |
| Empathy | Emotional understanding | 2500 | Deep empathy and EQ |
| Philosophy | Conceptual reasoning | 2000 | Philosophical depth |
| Quantum | Probabilistic thinking | 2000 | Quantum probabilistic thinking |
| RC+xi | Recursive cognition | 3000 | RC+xi framework reasoning |
| Multi-Perspective | Synthesis across lenses | 2500 | Multi-perspective synthesis |
| Systems | AI architecture | 2000 | System architecture design |

Training Pipeline

research documents
      ↓
dataset extraction (template-based generation)
      ↓
synthetic reasoning expansion (counterexamples, variations)
      ↓
dataset validation (dedup, quality filter)
      ↓
reasoning forge (multi-agent critique + refinement)
      ↓
adapter training (4-bit LoRA on Llama 3.1 8B)
      ↓
benchmark evaluation (multi-dimensional reasoning metrics)
      ↓
observatory logging (track improvement over time)
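The stages above can be sketched as an ordered list of callables, each feeding its output into the next. The stage functions here are toy stand-ins, not the repository's actual entry points:

```python
# Placeholder pipeline driver; stages are illustrative stubs, not the
# repository's actual entry points.
def run_pipeline(stages, state):
    """Feed the output of each stage into the next, in order."""
    for stage in stages:
        state = stage(state)
    return state

# Toy stages standing in for generation -> validation -> training -> eval.
stages = [
    lambda docs: [d.upper() for d in docs],   # dataset extraction
    lambda ds: [d for d in ds if d],          # validation / dedup
    lambda ds: {"examples": len(ds)},         # adapter training
    lambda model: {**model, "score": 1.0},    # benchmark evaluation
]
result = run_pipeline(stages, ["a", "b"])
```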

Quick Start

Install dependencies

pip install -r requirements.txt

Generate all datasets

python -m dataset_engine.generate_all

Run full pipeline

python scripts/run_full_pipeline.py --all

Generate + validate only

python scripts/run_full_pipeline.py --generate --validate

Train a single adapter

python -m training.train_adapter \
  --dataset datasets/newton_reasoning.jsonl \
  --adapter-name newton \
  --output-dir adapters/newton

Run benchmarks

python -m evaluation.benchmark_runner --prompts evaluation/prompts/reasoning_tests.json

View dashboard

python -m observatory.dashboard

Dataset Format

All datasets use chat-format JSONL:

{
  "messages": [
    {"role": "system", "content": "You are Codette, a recursive multi-perspective reasoning AI."},
    {"role": "user", "content": "Explain the conservation of momentum using a real-world example."},
    {"role": "assistant", "content": "Conservation of momentum states that in a closed system..."}
  ]
}
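A small schema check for one JSONL line might look like the following; `validate_example` is an illustrative helper, not part of the repository:

```python
import json

def validate_example(line: str) -> bool:
    """Check one JSONL line for the system/user/assistant chat structure."""
    record = json.loads(line)
    messages = record.get("messages", [])
    roles = [m.get("role") for m in messages]
    return roles == ["system", "user", "assistant"] and all(
        isinstance(m.get("content"), str) and m["content"] for m in messages
    )

line = json.dumps({"messages": [
    {"role": "system", "content": "You are Codette."},
    {"role": "user", "content": "Explain conservation of momentum."},
    {"role": "assistant", "content": "Conservation of momentum states..."},
]})
```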

Reasoning Forge

The Reasoning Forge refines training data through multi-agent debate:

concept → problem generator → agent analysis → critic evaluation → synthesis → training example

Agents: Newton (physics), Quantum (probability), Ethics (alignment), Philosophy (meaning), DaVinci (creativity), Empathy (emotion)

Each agent analyzes from its perspective, the critic scores quality, and the synthesis engine produces a unified multi-perspective response.
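One refinement round under that flow might be sketched as follows; every name here is an illustrative stub, not the Forge's actual interface:

```python
# Illustrative stub of one Reasoning Forge round; names are hypothetical.
def forge_round(concept, agents, critic, synthesize):
    problem = f"Analyze: {concept}"                                   # problem generator
    analyses = {name: fn(problem) for name, fn in agents.items()}     # agent analysis
    scores = {name: critic(text) for name, text in analyses.items()}  # critic evaluation
    return synthesize(analyses, scores)                               # unified response

agents = {
    "newton": lambda p: f"[physics] {p}",
    "empathy": lambda p: f"[emotion] {p}",
}
critic = lambda text: len(text)  # toy quality score
synthesize = lambda analyses, scores: max(analyses, key=lambda n: scores[n])
```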

Base Model

  • Model: meta-llama/Llama-3.1-8B-Instruct
  • Method: QLoRA (4-bit quantization)
  • LoRA config: rank=16, alpha=32, target=q/k/v/o projections

Research Background

Codette implements the RC+xi (Recursive Convergence + Epistemic Tension) framework for structured multi-perspective reasoning. The system coordinates 11 reasoning perspectives in parallel before synthesizing a final response.

Key research documents in research/:

  • RC+xi Framework specification
  • Quantum Cosmic Multicore experiment
  • Codette Research Equations (8 core quantum mathematics)
  • Multi-perspective reasoning architecture

Inference & Evaluation

Interactive Web UI

Launch the real-time multi-perspective reasoning UI:

# Launch web interface (default port 5000)
python inference/codette_server.py

# Or use the batch file (Windows)
codette_web.bat

Features:

  • Real-time adapter hot-swap (0ms switching via llama.cpp LoRA)
  • Real LLM-backed agents (not templates) generating domain-specific reasoning
  • GPU acceleration (35 layers offloaded)
  • Quantum spiderweb visualization
  • Live AEGIS ethical alignment tracking
  • Memory cocoon emotional profiling

Evaluation & Testing

Standard Evaluation (4 conditions × 25 questions):

python evaluation/run_evaluation_sprint.py --questions 5

Real-Time Agent Thinking (watch agents reason in real time):

python evaluation/run_evaluation_verbose.py --questions 1

Shows:

  • Agent mode: ✓ LLM (real inference) or ✗ TEMPLATE (fallback)
  • System prompts used
  • Token generation
  • Domain detection and agent gating
  • Conflict detection and capping
  • Gamma coherence monitoring
  • Final synthesis

Verbose Logs with CODETTE_VERBOSE=1:

CODETTE_VERBOSE=1 python evaluation/run_evaluation_verbose.py

Shows each agent's thinking step-by-step.

LoRA Configuration

method: QLoRA (4-bit NF4 quantization)
rank: 16
alpha: 32
dropout: 0.05
target_modules: [q_proj, k_proj, v_proj, o_proj]
total_training_examples: 20,500
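With these settings, the adapter parameter count can be estimated from the LoRA formula params = r × (d_in + d_out) per targeted matrix. The sketch below assumes standard Llama-3.1-8B dimensions (hidden size 4096, 32 layers, grouped-query attention with KV projection width 1024); these model dimensions are assumptions, not stated in this repository:

```python
# Estimate trainable LoRA parameters for rank-16 adapters on Llama-3.1-8B.
# Model dimensions below are assumed (hidden=4096, 32 layers, GQA kv width 1024).
rank = 16
hidden = 4096
kv_width = 1024  # 8 KV heads x 128 head dim under grouped-query attention
layers = 32

# q_proj and o_proj are hidden x hidden; k_proj and v_proj are hidden x kv_width.
per_layer = 2 * rank * (hidden + hidden) + 2 * rank * (hidden + kv_width)
total = per_layer * layers
print(f"{total:,} trainable params (~{total / 1e6:.1f}M)")
# -> 13,631,488 trainable params (~13.6M)
```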

RC+xi Framework

The core theoretical framework — Recursive Convergence + Epistemic Tension — coordinates 11 reasoning perspectives:

  1. Newton (analytical physics) → newton adapter
  2. DaVinci (creative invention) → davinci adapter
  3. Empathy (emotional intelligence) → empathy adapter
  4. Philosophy (conceptual reasoning) → philosophy adapter
  5. Quantum (probabilistic thinking) → quantum adapter
  6. RC+xi Consciousness → consciousness adapter
  7. Multi-Perspective Synthesis → multi_perspective adapter
  8. Systems Architecture → systems_architecture adapter
  9. Human Intuition → prompt-only (fallback: empathy)
  10. Resilient Kindness → prompt-only (fallback: empathy)
  11. AEGIS Ethics → prompt-only (fallback: consciousness)
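The LoRA-backed vs prompt-only fallback resolution above can be sketched as a small registry; the structure and names here are illustrative, not the actual perspective_registry.py implementation:

```python
# Hypothetical perspective-registry sketch; structure and names are illustrative.
PERSPECTIVES = {
    "newton": {"adapter": "newton"},                            # LoRA-backed
    "human_intuition": {"adapter": None, "fallback": "empathy"},  # prompt-only
    "aegis_ethics": {"adapter": None, "fallback": "consciousness"},
}

def adapter_for(perspective: str) -> str:
    """Resolve a perspective to its LoRA adapter, using the fallback
    adapter for prompt-only perspectives."""
    entry = PERSPECTIVES[perspective]
    return entry["adapter"] or entry.get("fallback")
```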

Requirements

  • Python 3.10+
  • PyTorch 2.1+ (CUDA, ROCm, or XPU backend)
  • 16GB+ RAM (CPU training) or GPU with 8GB+ VRAM
  • llama.cpp with GGUF support (for inference server)
  • ~1-3 hours per adapter (CPU) or 20-40 min (A10/A100 GPU)

Hardware Tested

  • Intel Arc 140V (8GB) — PyTorch 2.10.0+xpu, native XPU backend
  • NVIDIA GPUs via CUDA (A10, A100, RTX series)
  • CPU-only mode supported

License

MIT — Research project by Jonathan Harrison. Experimental AI development.
