chen742/PiPa

Official Implementation of PiPa: Pixel- and Patch-wise Self-supervised Learning for Domain Adaptative Semantic Segmentation

Quality score: 33 / 100 (Emerging)

This project helps computer vision practitioners adapt semantic segmentation models trained on synthetic data (e.g., video game screenshots) to real-world images. It takes labeled images from a source domain (e.g., simulated environments) and unlabeled images from a real-world target domain, and produces a refined model that accurately segments objects in the target domain. It suits researchers and engineers who need to deploy models trained in virtual environments to real ones without extensive manual labeling of real-world data.

100 stars. No commits in the last 6 months.

Use this if you need to adapt a semantic segmentation model trained on synthetic, labeled data to perform accurately on unlabeled real-world images.

Not ideal if you already have abundant labeled real-world data for training your semantic segmentation model from scratch.

computer-vision image-segmentation domain-adaptation model-deployment synthetic-data
Badges: No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 8 / 25
Community 16 / 25


Stars: 100
Forks: 15
Language: Python
License: None
Last pushed: Jul 23, 2024
Commits (last 30 days): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/chen742/PiPa"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
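The curl command above can also be reproduced in code. A minimal sketch in Python, assuming only the URL pattern shown in the example (the `quality_url` helper and the `collection` parameter name are hypothetical; the response schema is not documented here, so the fetch is left to the caller):

```python
# Hypothetical helper: build the quality-API URL for any repository,
# following the pattern from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection: str, owner: str, repo: str) -> str:
    """Return the quality endpoint URL for owner/repo in a collection."""
    return f"{BASE}/{collection}/{owner}/{repo}"

print(quality_url("ml-frameworks", "chen742", "PiPa"))
# -> https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/chen742/PiPa
```

The URL can then be fetched with any HTTP client (e.g., `urllib.request.urlopen` from the standard library), subject to the 100 requests/day keyless limit noted above.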