archinetai/difformer-pytorch

Diffusion-based transformer, in PyTorch (experimental).

Score: 38/100 (Emerging)

This is a tool for machine learning practitioners and researchers experimenting with novel deep learning architectures. It helps build models that generate sequences or embeddings by de-masking corrupted inputs: a model is trained on tokenized data or raw embeddings to predict the information that was masked out.
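The de-masking idea above can be sketched in plain Python. This is an illustrative corruption step only, not the library's actual API; the `corrupt` helper and the `<mask>` token are assumptions for the sake of the example. A de-masking model is trained to recover the original tokens from the corrupted sequence, with a higher masking probability corresponding to a noisier diffusion step.

```python
import random

MASK = "<mask>"  # hypothetical mask token, not from the library

def corrupt(tokens, mask_prob, rng):
    """Randomly replace a fraction of tokens with a mask token.

    The de-masking model's training target is the original sequence;
    its input is this corrupted version.
    """
    return [MASK if rng.random() < mask_prob else t for t in tokens]

rng = random.Random(0)
corrupted = corrupt(["the", "cat", "sat"], mask_prob=0.5, rng=rng)
```

At training time one would sample `mask_prob` per example (the diffusion "time step") and minimize a reconstruction loss over the masked positions.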

No commits in the last 6 months. Available on PyPI.

Use this if you are a machine learning researcher or developer exploring advanced generative models for sequential data or embeddings, particularly if you are interested in diffusion-based transformers.

Not ideal if you are looking for a ready-to-use solution for a specific application like natural language processing or image generation without custom model development.

deep-learning-research generative-models sequence-modeling neural-networks machine-learning-engineering
Stale 6m
Maintenance 0 / 25
Adoption 6 / 25
Maturity 25 / 25
Community 7 / 25

Stars: 24
Forks: 2
Language: Python
License: MIT
Last pushed: Sep 13, 2022
Commits (30d): 0
Dependencies: 3

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/archinetai/difformer-pytorch"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
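The same endpoint can be called from Python using only the standard library. This is a minimal sketch: the `quality_url` helper is ours, and the response schema is not documented here, so the fetch itself is left commented out.

```python
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(topic, owner, repo):
    """Build the quality-API URL for a repository (helper name assumed)."""
    return f"{BASE}/{quote(topic)}/{quote(owner)}/{quote(repo)}"

url = quality_url("diffusion", "archinetai", "difformer-pytorch")

# To actually fetch the data (subject to the rate limits above):
# import json, urllib.request
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```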