yinboc/trans-inr

Transformers as Meta-Learners for Implicit Neural Representations, in ECCV 2022

Quality score: 36 / 100 (Emerging)

This project helps machine learning researchers reconstruct images and synthesize novel views of objects from limited data. It takes in collections of images, such as face datasets (CelebA), object categories (Imagenette), or multi-view captures of 3D objects, and uses a Transformer meta-learner to produce implicit neural representations of them. It's aimed at computer vision researchers working on image and 3D reconstruction tasks.

160 stars. No commits in the last 6 months.

Use this if you need to generate high-fidelity images or novel object views with minimal training data, leveraging meta-learning with Transformers.

Not ideal if you are looking for a general-purpose image editing tool or a solution for common image classification tasks without a focus on implicit neural representations.

computer-vision-research image-reconstruction 3d-view-synthesis meta-learning implicit-neural-representations
Flags: Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 10 / 25


Stars: 160
Forks: 11
Language: Python
License: BSD-3-Clause
Last pushed: Aug 13, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/yinboc/trans-inr"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
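For scripted use, the same endpoint can be called from Python. This is a minimal sketch built around the URL shown in the curl example above; the response's JSON field names are not documented here, so none are assumed.

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, owner: str, repo: str) -> str:
    """Build the quality-card API URL for a repository."""
    return f"{BASE}/{registry}/{owner}/{repo}"

url = quality_url("transformers", "yinboc", "trans-inr")
print(url)

# Uncomment to fetch against the live API (subject to the 100 requests/day
# anonymous limit noted above):
# with urllib.request.urlopen(url) as resp:
#     card = json.load(resp)
```

The fetch itself is left commented out so the snippet runs offline; swap in your own registry/owner/repo to query a different project.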