tum-pbs/ConFIG
[ICLR2025 Spotlight] Official implementation of Conflict-Free Inverse Gradients Method
Helps researchers and engineers who train neural networks with multiple competing objectives, such as physics-informed neural networks (PINNs) or multi-task learning. Given the individual loss functions and their gradients, it computes a single, conflict-free update direction, leading to more stable and effective training.
Use this if you are training a neural network where different loss terms, like physics equations and boundary conditions, conflict and prevent your model from converging effectively.
Not ideal if your neural network has a single loss function or if its multiple loss terms are already well-aligned and do not exhibit conflict.
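The core idea of conflict-free gradients can be sketched in a few lines of NumPy. This is a minimal illustration of the method described in the paper, not the library's actual API; the function name `config_update` and the epsilon guards are assumptions for this sketch:

```python
import numpy as np

def config_update(grads):
    """Combine per-loss gradients into one conflict-free update direction.

    grads: list of 1-D gradient vectors, one per loss term.
    """
    # Row-stack the unit-normalized gradients.
    G = np.stack([g / (np.linalg.norm(g) + 1e-12) for g in grads])
    # Find a direction with equal cosine similarity to every gradient
    # by solving G x = 1 in the least-squares sense via the pseudoinverse.
    g_u = np.linalg.pinv(G) @ np.ones(len(grads))
    g_u /= np.linalg.norm(g_u) + 1e-12
    # Scale by the summed projections of the raw gradients onto that direction.
    magnitude = sum(float(g @ g_u) for g in grads)
    return magnitude * g_u

# Two conflicting gradients (negative dot product):
g1 = np.array([1.0, 0.0])
g2 = np.array([-0.5, 1.0])
g = config_update([g1, g2])
# The combined update decreases both losses: g @ g1 > 0 and g @ g2 > 0.
```

The result points "between" the conflicting gradients with equal cosine similarity to each, so no single loss term dominates or is ignored in the update step.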
Stars
104
Forks
10
Language
Python
License
MIT
Category
Last pushed
Nov 17, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tum-pbs/ConFIG"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
aimagelab/mammoth
An Extendible (General) Continual Learning Framework based on Pytorch - official codebase of...
LAMDA-CL/PyCIL
PyCIL: A Python Toolbox for Class-Incremental Learning
GMvandeVen/continual-learning
PyTorch implementation of various methods for continual learning (XdG, EWC, SI, LwF, FROMP, DGR,...
LAMDA-CL/LAMDA-PILOT
🎉 PILOT: A Pre-trained Model-Based Continual Learning Toolbox
mmasana/FACIL
Framework for Analysis of Class-Incremental Learning with 12 state-of-the-art methods and 3 baselines.