yinboc/trans-inr
Transformers as Meta-Learners for Implicit Neural Representations, in ECCV 2022
This project uses a Transformer as a meta-learner (a hypernetwork) that maps a set of image observations directly to the weights of an implicit neural representation (INR). It takes in collections of images, such as face datasets (CelebA), object categories (Imagenette), or multi-view captures of 3D objects, and outputs INRs that reconstruct the input images or render novel views. It's aimed at computer vision researchers working on image reconstruction and novel view synthesis with limited data.
160 stars. No commits in the last 6 months.
Use this if you need to generate high-fidelity images or novel object views with minimal training data, leveraging meta-learning with Transformers.
Not ideal if you are looking for a general-purpose image editing tool or a solution for common image classification tasks without a focus on implicit neural representations.
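To make the core idea concrete, here is a minimal conceptual sketch of "Transformer as meta-learner for an INR": image patch tokens and learnable weight tokens pass through self-attention, and the output slots of the weight tokens are reshaped into the parameters of a small coordinate MLP. All dimensions, names, and the single-layer attention are illustrative assumptions, not the repo's actual architecture or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative only, not from the repo)
n_patches, d_model = 16, 32   # number of image patch tokens, embedding width
n_wtokens = 4                 # learnable "weight tokens"
inr_hidden = 8                # hidden width of the INR MLP

def attention(q, k, v):
    """Plain scaled dot-product self-attention with a softmax."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

# Encoder input: image patch embeddings concatenated with weight tokens
patch_tokens = rng.normal(size=(n_patches, d_model))
weight_tokens = rng.normal(size=(n_wtokens, d_model))
x = np.concatenate([patch_tokens, weight_tokens])

# One self-attention layer stands in for the full Transformer encoder
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(3))
x = attention(x @ Wq, x @ Wk, x @ Wv) + x  # residual connection

# The weight-token outputs are mapped to the parameters of a
# 2-layer MLP INR: f(x, y) -> gray value
out = x[n_patches:]                                   # (n_wtokens, d_model)
W1 = out[:2].reshape(-1)[: 2 * inr_hidden].reshape(2, inr_hidden)
b1 = out[2, :inr_hidden]
W2 = out[3, :inr_hidden].reshape(inr_hidden, 1)

def inr(coords):
    """Implicit neural representation: (x, y) coordinates -> pixel value."""
    h = np.tanh(coords @ W1 + b1)
    return h @ W2

# Query the INR on a coordinate grid to reconstruct an image
grid = np.stack(
    np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8)), axis=-1
).reshape(-1, 2)
img = inr(grid).reshape(8, 8)
print(img.shape)  # (8, 8)
```

In the actual method the Transformer is trained so that the predicted INR weights minimize reconstruction error across many images; this sketch only shows the weight-prediction data flow.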
Stars
160
Forks
11
Language
Python
License
BSD-3-Clause
Category
Last pushed
Aug 13, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/yinboc/trans-inr"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
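The curl command above can be reproduced from Python. A minimal sketch using only the standard library, assuming the URL pattern `/api/v1/quality/{category}/{owner}/{repo}` shown above; the response schema is not documented here, so the fetch is left as an optional step:

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repo's quality record (pattern assumed from the example)."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("transformers", "yinboc", "trans-inr")
print(url)

# Uncomment to actually fetch (100 requests/day without a key):
# data = json.load(urlopen(url))
# print(data)
```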
Higher-rated alternatives
kyegomez/LIMoE
Implementation of "the first large-scale multimodal mixture of experts models" from the...
dohlee/chromoformer
The official code implementation for Chromoformer in PyTorch. (Lee et al., Nature Communications. 2022)
ahans30/goldfish-loss
[NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs
bloomberg/MixCE-acl2023
Implementation of MixCE method described in ACL 2023 paper by Zhang et al.
ibnaleem/mixtral.py
A Python module for running the Mixtral-8x7B language model with customisable precision and...