ji-code25/Point-Transformer-Diffusion
Point Transformer Diffusion is a generative model for 3D point cloud generation that integrates a classical denoising diffusion model with a local self-attention network.
This project helps researchers and developers in 3D computer vision create realistic 3D shapes from scratch. Given a category such as "car" or "chair," it generates a detailed 3D point cloud representing a novel object of that type. It suits work on synthetic data generation, virtual environment creation, or exploring new design possibilities for 3D objects.
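The diffusion side of such a model follows the standard DDPM formulation: training noises a clean point cloud toward a Gaussian, and generation reverses that process. As an illustrative sketch only (this is not code from this repository; the schedule values, point count, and function names are assumptions), the closed-form forward noising of a point cloud looks like:

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) for a point cloud x0 of shape (N, 3).

    Standard DDPM closed form:
        x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise
    where alpha_bar_t is the cumulative product of (1 - beta) up to step t.
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    return xt, noise

# Toy usage: a random 2048-point cloud with a linear beta schedule
# over 1000 steps (hypothetical values for illustration).
rng = np.random.default_rng(0)
x0 = rng.standard_normal((2048, 3))
betas = np.linspace(1e-4, 0.02, 1000)
xt, eps = forward_diffuse(x0, 500, betas, rng)
print(xt.shape)  # (2048, 3)
```

In a full model, a network (here, one built on local self-attention over point neighborhoods) would be trained to predict `eps` from `xt` and `t`, and sampling would iterate the learned reverse step from pure noise down to a clean point cloud.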
No commits in the last 6 months.
Use this if you need to generate high-quality, novel 3D point cloud data for objects like cars, chairs, or airplanes, without starting from an existing 3D model.
Not ideal if you need to reconstruct 3D objects from images or want to manipulate existing 3D models rather than creating new ones.
Stars: 24
Forks: 2
Language: Python
License: MIT
Category:
Last pushed: Aug 05, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/ji-code25/Point-Transformer-Diffusion"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Higher-rated alternatives
NVlabs/Sana
SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer
FoundationVision/VAR
[NeurIPS 2024 Best Paper Award][GPT beats diffusion🔥] [scaling laws in visual generation📈]...
nerdyrodent/VQGAN-CLIP
Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
huggingface/finetrainers
Scalable and memory-optimized training of diffusion models
AssemblyAI-Community/MinImagen
MinImagen: A minimal implementation of the Imagen text-to-image model