# Add PMLL.py: Persistent Memory Logic Loop Implementation
## Overview
This PR adds a new `PMLL.py` module implementing the Persistent Memory Logic Loop based on the Recursive Transformer Model white paper. The module provides memory-augmented attention capabilities for the existing nanochat GPT implementation.
## Key Features
- `MemoryBlock` class for tensor storage and per-block confidence tracking (sketched after this list)
- `AttentionFlower` module for multi-petal memory routing (also sketched after this list)
- Merkle tree verification system for memory integrity
- Temporal decay and consensus computation
- Async recursive reconsideration of deferred memories
- Integration with safetensors for state persistence
- Compatibility with nanochat's GPT implementation
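
For concreteness, here is a minimal sketch of what a memory block could look like. The field names (`embedding`, `confidence`, `created_at`, `parent_hash`), the exponential half-life decay, and the method name are illustrative assumptions, not the module's confirmed API:

```python
from dataclasses import dataclass, field
import time

import torch

@dataclass
class MemoryBlock:
    """One unit of persistent memory: a tensor plus bookkeeping metadata."""
    embedding: torch.Tensor                 # stored representation of the memory
    confidence: float = 1.0                 # belief score, decayed over time
    created_at: float = field(default_factory=time.time)
    parent_hash: str = ""                   # link into the verification chain

    def decayed_confidence(self, half_life: float = 3600.0) -> float:
        # Exponential temporal decay: confidence halves every `half_life` seconds.
        age = time.time() - self.created_at
        return self.confidence * 0.5 ** (age / half_life)
```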
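And a toy reading of "multi-petal routing": a learned softmax router blending the outputs of several parallel "petal" projections. The real `AttentionFlower` presumably attends over stored memory blocks; the petal count and linear-layer layout below are assumptions made only to illustrate the routing idea:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFlower(nn.Module):
    """Route each token across several 'petals' (parallel projections)."""

    def __init__(self, dim: int, num_petals: int = 8):
        super().__init__()
        self.router = nn.Linear(dim, num_petals)  # one routing score per petal
        self.petals = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(num_petals)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softmax over petals gives a soft routing distribution per token.
        weights = F.softmax(self.router(x), dim=-1)                    # (B, T, P)
        petal_out = torch.stack([p(x) for p in self.petals], dim=-1)  # (B, T, D, P)
        # Blend petal outputs by their routing weights.
        return torch.einsum("btdp,btp->btd", petal_out, weights)
```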
## Implementation Details
- Adds a lattice-based structure for tensor routing and memory compression
- Integrates with GPT model's attention mechanism
- Supports temporal knowledge graph management
- Implements recursive reconsideration logic with Merkle tree verification (a hash-chain sketch follows this list)
- Provides async memory processing and consensus computation
- Includes state persistence with safetensors checkpointing
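
As a sketch of the verification idea, the snippet below links blocks into a hash chain (the degenerate, linear case of a Merkle tree) and re-derives each digest on demand. It assumes each block records its parent's digest, reusing the hypothetical `MemoryBlock` from the earlier sketch; the actual module may build a full tree:

```python
import hashlib

import torch

def block_hash(embedding: torch.Tensor, parent_hash: str) -> str:
    """Digest a block's tensor contents together with its parent's digest."""
    h = hashlib.sha256()
    h.update(parent_hash.encode())
    h.update(embedding.detach().cpu().numpy().tobytes())
    return h.hexdigest()

def verify_chain(blocks) -> bool:
    """Walk an ordered list of MemoryBlocks (see the sketch above) and check
    that each block's recorded parent_hash matches the recomputed digest."""
    parent = ""  # the genesis block has no parent
    for block in blocks:
        if block.parent_hash != parent:
            return False
        parent = block_hash(block.embedding, parent)
    return True
```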
## Usage
The PMLL module can augment transformer models such as nanochat's GPT with persistent memory and recursive reconsideration (an end-to-end sketch follows these steps):
1. Initialize a `PMLLLattice` with a config
2. Set an external embedder (e.g., sentence-transformers)
3. Use it in the GPT attention mechanism for memory-augmented processing
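
Roughly, as a hypothetical end-to-end sketch: the config keys, the `set_embedder` method name and signature, and the `lattice(hidden)` call convention are all assumptions made to illustrate the three steps, not the module's confirmed interface:

```python
import torch
from sentence_transformers import SentenceTransformer
from PMLL import PMLLLattice

# 1. Initialize the lattice from a config (key names are assumptions).
lattice = PMLLLattice(config={"num_petals": 8, "memory_dim": 384})

# 2. Attach an external embedder that maps text to memory tensors.
st = SentenceTransformer("all-MiniLM-L6-v2")
lattice.set_embedder(lambda texts: torch.tensor(st.encode(texts)))  # assumed method

# 3. Route hidden states through the lattice inside the attention block.
hidden = torch.randn(1, 16, 384)   # (batch, seq, dim) activations
augmented = lattice(hidden)        # assumed to return memory-augmented states
```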
## Dependencies
- torch and numpy for tensor operations
- safetensors for checkpointing
- asyncio (standard library) for async memory processing
- An external embedder (e.g., sentence-transformers)
## Testing
Please test the integration with:
- Memory block creation and persistence
- Attention routing through multiple petals
- Merkle tree verification of memory chains
- Temporal decay calculations
- State saving/loading with safetensors (a starter test follows this list)
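
As a starting point for the last item, a minimal pytest round-trip check; the tensor key name is illustrative:

```python
import torch
from safetensors.torch import save_file, load_file

def test_state_roundtrip(tmp_path):
    """Persist a tensor with safetensors and confirm it loads back unchanged."""
    state = {"memory.0.embedding": torch.randn(4, 384)}  # key name is illustrative
    path = str(tmp_path / "pmll_state.safetensors")
    save_file(state, path)
    restored = load_file(path)
    assert torch.equal(state["memory.0.embedding"], restored["memory.0.embedding"])
```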