# NanEcho: Deep Tree Echo Cognitive Model

## Model Description
NanEcho is a transformer-based language model with iterative connection building, adaptive attention, and Deep Tree Echo cognitive architecture integration. It features persona dimensions (cognitive, introspective, adaptive, recursive) and hypergraph pattern recognition. This is the CI-mode checkpoint from the 9cog/echoself repository, trained using the agent-neuro-train supervised pipeline.
## Architecture
| Parameter | Value |
|---|---|
| Model Type | GPT-2 (causal LM) |
| Vocabulary Size | 50,304 |
| Embedding Dimension | 256 |
| Attention Heads | 4 |
| Transformer Layers | 4 |
| MLP Inner Dimension | 1,024 |
| Context Length | 1,024 |
| Dropout | 0.1 |
| Total Parameters | ~24M |
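The table above corresponds to a nanoGPT-style configuration (the MLP inner dimension is 4× the embedding dimension, and the vocabulary is padded to 50,304). A minimal sketch, assuming nanoGPT's `GPTConfig` field names; the exact training config is not reproduced here:

```python
from dataclasses import dataclass

@dataclass
class GPTConfig:
    # Values mirror the architecture table; field names follow
    # nanoGPT's GPTConfig and are an assumption about this checkpoint.
    block_size: int = 1024   # context length
    vocab_size: int = 50304  # GPT-2 vocab padded up for efficiency
    n_layer: int = 4         # transformer layers
    n_head: int = 4          # attention heads
    n_embd: int = 256        # embedding dimension (MLP inner = 4 * n_embd = 1024)
    dropout: float = 0.1
    bias: bool = True
```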
## Training
| Metric | Value |
|---|---|
| Training Mode | CI (Agent-Neuro supervised) |
| Training Iterations | 200 |
| Best Validation Loss | 1.9258 |
| Output Directory | out-nanecho-ci |
| Orchestrator | Agent-Neuro |
| Persona Enforced | Deep Tree Echo |
| Source Run | 22276548709 |
## Echo Self Features
This model incorporates several cognitive architecture features:
- Adaptive Attention: Dynamic threshold adjustment based on cognitive load
- Persona Dimensions: Multi-dimensional cognitive processing (Cognitive, Introspective, Adaptive, Recursive, Synergistic, Holographic, Neural-Symbolic, Dynamic)
- Recursive Reasoning: Multi-level introspection capabilities
- Hypergraph Patterns: Neural-symbolic pattern encoding
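As one illustration of the adaptive-attention idea above, a gate threshold might tighten as cognitive load rises. This is a hypothetical sketch only; the function name, the `cognitive_load` signal, and the linear scaling rule are all invented for illustration, not the checkpoint's documented mechanism:

```python
def adaptive_threshold(base: float, cognitive_load: float,
                       sensitivity: float = 0.5) -> float:
    """Raise an attention gate threshold as cognitive load grows,
    so fewer candidate activations pass when the system is saturated.

    Illustrative assumption only; not taken from the NanEcho source.
    """
    load = max(0.0, min(1.0, cognitive_load))  # clamp load to [0, 1]
    return base * (1.0 + sensitivity * load)
```

Under zero load the threshold stays at its base value; at full load it rises by the sensitivity factor (here 50%), modeling the "dynamic threshold adjustment" described above.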
## Usage

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("drzo/echoself")
# The checkpoint uses the standard GPT-2 BPE tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

inputs = tokenizer("Echo Self is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
The model was trained on Echo Self documentation and cognitive architecture descriptions, including hypergraph reasoning patterns, persona dimension examples, and recursive introspection samples from the echoself.md corpus.
## Limitations
This is an early CI-mode research checkpoint (200 iterations, 4 layers). It demonstrates the training pipeline but has not yet reached convergence. Full training runs with 8+ layers and 5000+ iterations are expected to produce significantly better results.
## Source
Trained from the 9cog/echoself repository using the agent-neuro-train.yml GitHub Actions workflow with Deep Tree Echo persona enforcement.
## Citation

```bibtex
@misc{echoself-nanecho,
  title={EchoSelf NanEcho: Deep Tree Echo Cognitive Architecture},
  author={drzo},
  year={2026},
  url={https://github.com/9cog/echoself}
}
```
## More Information
- Repository: https://github.com/9cog/echoself
- Documentation: See repository README for detailed architecture information
## License
AGPL-3.0