vitali-sialedchyk/stability-first-ai

Solving catastrophic forgetting with Recursive Time architecture, Active Sleep (generative replay), and Temporal LoRA. Proving the "Lazarus Effect" in neural networks.

Score: 31 / 100 (Emerging)

This project offers techniques that let AI models learn new things without forgetting old knowledge. It takes a model struggling with catastrophic forgetting (where learning something new erases old skills) and helps it retain, or even recover, lost information. The output is a more stable, efficient, and adaptable model, especially useful for work on large language models, image recognition, or other complex AI systems.
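The "Active Sleep (generative replay)" idea named in the description maps onto a generic continual-learning recipe: a frozen snapshot of the old model labels synthetic "replay" inputs, and a distillation term keeps the updated model's answers on those inputs stable while it learns the new task. Below is a minimal PyTorch sketch of that generic recipe, not this project's actual code; the two-layer network, the random-noise stand-in for a learned generator, and the equal loss weighting are all illustrative assumptions.

# Minimal sketch of generative replay for continual learning.
# Illustrates the general technique the description calls "Active Sleep";
# it is NOT this project's implementation. Shapes and names are placeholders.
import copy
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
frozen_teacher = copy.deepcopy(model).eval()  # snapshot before the new task
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
kl = nn.KLDivLoss(reduction="batchmean")

for step in range(100):
    # Real data from the NEW task (random stand-ins here).
    x_new = torch.randn(8, 16)
    y_new = torch.randint(0, 4, (8,))

    # "Replay" inputs: in a full system these come from a learned
    # generative model of OLD-task data; noise stands in here.
    x_replay = torch.randn(8, 16)
    with torch.no_grad():
        old_logits = frozen_teacher(x_replay)  # soft targets from the snapshot

    # New-task loss plus a distillation loss that preserves old behavior.
    loss_new = ce(model(x_new), y_new)
    loss_old = kl(model(x_replay).log_softmax(dim=-1),
                  old_logits.softmax(dim=-1))
    loss = loss_new + loss_old
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()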

Use this if your AI models struggle to retain previously learned information when trained on new data, or if you need to recover lost capabilities from a damaged or pruned model without retraining from scratch.

Not ideal if you are looking for a plug-and-play solution for simple, one-off training tasks where catastrophic forgetting is not a significant concern.

Tags: AI-model-development, machine-learning-engineering, large-language-models, computer-vision, model-optimization
No published package · No dependents
Maintenance 6 / 25
Adoption 5 / 25
Maturity 13 / 25
Community 7 / 25

The overall score is the sum of the four 25-point components above: 6 + 5 + 13 + 7 = 31.

Stars: 10
Forks: 1
Language: Python
License: not specified
Last pushed: Jan 11, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/vitali-sialedchyk/stability-first-ai"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
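The same data is reachable from Python using only the standard library. This is a sketch that assumes the endpoint returns a JSON body; the response schema is not documented on this page, so it simply pretty-prints whatever comes back.

# Hypothetical minimal client: fetch the quality card and pretty-print it.
import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/vitali-sialedchyk/stability-first-ai")

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # assumes a JSON response body

print(json.dumps(data, indent=2))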