Manifesto
Open Source Library·v1.0 — Stable

DREAM NEURAL
A new family of bio-inspired neural networks.
Continuous-time. Surprise-driven. Adaptive.
Not a model. A library.

DREAM-NN is an open-source PyTorch library implementing a new family of networks inspired by how biological brains learn — through adaptation, prediction error, and dynamic time constants.

Unlike traditional RNNs (LSTM, GRU) with fixed dynamics, DREAM cells continuously adapt during both training and inference — reacting faster to surprising inputs and stabilizing around familiar patterns.

pip install dreamnn

Core Mechanisms

01

Surprise-Driven Plasticity

Synaptic updates are modulated by prediction error. High surprise triggers rapid learning; expected patterns stabilize into memory. Mimics biological neuromodulation.
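The gating idea can be sketched in a few lines of NumPy. This is an illustration of the mechanism, not the library's API; the function names, the sigmoid gate, and the threshold/gamma values are all assumptions chosen for the example.

```python
import numpy as np

def surprise_gate(error, threshold=1.0, gamma=0.5):
    """Sigmoid gate: near 0 for expected inputs, near 1 for surprising ones."""
    return 1.0 / (1.0 + np.exp(-(np.linalg.norm(error) - threshold) / gamma))

def plastic_update(W, pre, post, error, base_lr=0.1):
    """Hebbian-style outer-product update whose step size is scaled by surprise."""
    s = surprise_gate(error)
    return W + base_lr * s * np.outer(post, pre)

rng = np.random.default_rng(0)
W = np.zeros((3, 4))
pre, post = rng.normal(size=4), rng.normal(size=3)

W_small = plastic_update(W, pre, post, error=np.full(3, 0.01))  # expected input
W_large = plastic_update(W, pre, post, error=np.full(3, 5.0))   # surprising input
```

The same presynaptic/postsynaptic activity produces a much larger weight change when the prediction error is large, which is the neuromodulation-like behavior described above.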

02

Liquid Time-Constants

Each neuron's integration speed adapts dynamically. The network accelerates to capture novelty and slows to preserve long-term dependencies — all without retraining.
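A minimal sketch of the idea, assuming a simple reciprocal form for the adaptive time constant (the exact schedule inside the library may differ):

```python
import numpy as np

def adaptive_tau(surprise, tau_base=10.0, scale=4.0):
    """Shrink the time constant when surprise is high (illustrative form)."""
    return tau_base / (1.0 + surprise * scale)

def euler_step(h, h_target, tau, dt=1.0):
    """One Euler step of tau * dh/dt = -h + h_target."""
    a = dt / tau
    return (1.0 - a) * h + a * h_target

h, h_target = 0.0, 1.0
h_calm = euler_step(h, h_target, adaptive_tau(surprise=0.0))   # slow integration
h_alert = euler_step(h, h_target, adaptive_tau(surprise=1.0))  # fast integration
```

With high surprise the state moves much further toward `h_target` in a single step, while a calm network integrates slowly and preserves its history.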

03

Hebbian Fast Weights

Low-rank weight decomposition (U × Vᵀ) enables high-speed, in-context adaptation during inference. Only U updates — V stays fixed. Efficient meta-learning in practice.
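The low-rank scheme can be demonstrated directly. Everything here (shapes, decay rate, learning rate, the zero `U_target`) is an assumption for illustration; only the update form mirrors the equation given in the Mathematical Core section below.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden, inp, rank = 8, 6, 2

U = rng.normal(size=(hidden, rank)) * 0.01   # fast, plastic factor
V = rng.normal(size=(inp, rank))             # slow, frozen factor

def fast_weight_update(U, h, error, V, surprise, lam=0.1, eta=0.5):
    """Decay U toward a target and add a surprise-scaled Hebbian term."""
    U_target = np.zeros_like(U)
    hebb = np.outer(h, error) @ V            # (hidden, inp) @ (inp, rank) -> (hidden, rank)
    return U + (-lam * (U - U_target) + eta * surprise * hebb)

h = rng.normal(size=hidden)
error = rng.normal(size=inp)
U_new = fast_weight_update(U, h, error, V, surprise=0.8)
W_eff = U_new @ V.T                          # effective (hidden, inp) weight
```

Because only the rank-2 factor `U` changes, each adaptation step touches `hidden × rank` parameters instead of `hidden × inp`, and the effective weight matrix stays low-rank by construction.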

04

Sleep Consolidation

During low-surprise periods, memory is stabilized. The model consolidates frequently-used patterns — analogous to memory consolidation during biological sleep.
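One plausible reading of this mechanism, sketched with hypothetical names and a made-up consolidation rate: fold the fast weights into a slow store during calm periods and relax the fast weights back toward zero.

```python
import numpy as np

def consolidate(W_fast, W_slow, surprise, rate=0.2):
    """During calm periods, fold fast weights into slow memory and decay them."""
    if surprise < 0.1:                      # low-surprise "sleep" phase
        W_slow = W_slow + rate * W_fast     # consolidate recent adaptation
        W_fast = (1.0 - rate) * W_fast      # relax fast weights
    return W_fast, W_slow

W_fast = np.ones((2, 2))
W_slow = np.zeros((2, 2))
W_fast, W_slow = consolidate(W_fast, W_slow, surprise=0.05)
```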

Mathematical Core

Continuous-Time Dynamics.

State evolution follows a differential equation, discretized via Euler integration. The time constant τ is not fixed — it adapts based on surprise, controlling how fast the network integrates new information.

State Update Rule
τ · dh/dt = −h + h_target(x, h, error)
↓ Euler discretization
hₜ₊₁ = (1 − Δt/τ) · hₜ + (Δt/τ) · h_target
Surprise Gate & Adaptive τ
surprise = σ( (‖error‖ − τ_eff) / γ )
τ = τ_system / (1 + surprise × scale)
Hebbian Fast Weight Update
dU = −λ(U − U_target) + η · surprise · (h ⊗ error) · V
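Putting the state update rule and the surprise gate together gives a small numeric walk-through. The constants (`tau_system`, threshold, `gamma`, `scale`, `dt`) are placeholders, not the library's defaults; note that `dt/τ` must stay below 1 for the Euler step to be stable.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dream_step(h, h_target, error, tau_system=10.0, thresh=1.0, gamma=0.5,
               scale=4.0, dt=1.0):
    """One step: surprise gate -> adaptive tau -> Euler update of the state."""
    surprise = sigmoid((np.linalg.norm(error) - thresh) / gamma)
    tau = tau_system / (1.0 + surprise * scale)
    a = dt / tau
    return (1.0 - a) * h + a * h_target, surprise

h = np.zeros(4)
h_target = np.ones(4)
h_next_calm, s_calm = dream_step(h, h_target, error=np.full(4, 0.01))
h_next_alert, s_alert = dream_step(h, h_target, error=np.full(4, 3.0))
```

A large prediction error drives the surprise gate toward 1, which shrinks τ and makes the state jump further toward its target in a single step, exactly the fast-when-novel, slow-when-familiar behavior described above.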

Quick Start

Drop-in PyTorch Module.

DREAM is designed to be used just like nn.LSTM or nn.GRU. Swap it in to gain continuously adaptive dynamics with no extra training overhead.

from dream import DREAM
import torch

# Works like nn.LSTM
model = DREAM(
    input_dim=64,
    hidden_dim=128,
    rank=8,
)

x = torch.randn(32, 50, 64)   # (batch, seq_len, input_dim)
output, state = model(x)

Explore the Architecture.

Full API reference, benchmarks, and configuration guides are available on the documentation portal.