james-oldfield/PoS-subspaces
[NeurIPS'23] Parts of Speech–Grounded Subspaces in Vision-Language Models
This project helps researchers and developers working with Vision-Language Models like CLIP to better understand and control how these models interpret images. It takes image representations and associated text descriptions, then separates the visual information into distinct components based on parts of speech (e.g., nouns for objects, adjectives for appearance). This allows users to extract or manipulate specific visual attributes more precisely.
No commits in the last 6 months.
Use this if you are a researcher or developer who needs to disentangle different visual aspects (like object vs. style) within your vision-language model embeddings for tasks like controlled image generation or improved classification.
Not ideal if you are looking for an off-the-shelf application to directly edit images or perform general-purpose image classification without delving into model embeddings.
Stars: 29
Forks: 2
Language: Jupyter Notebook
License: —
Category:
Last pushed: Feb 25, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/james-oldfield/PoS-subspaces"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
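The endpoint in the curl example follows a category/owner/repo pattern. A minimal Python sketch of constructing that URL for an arbitrary repository; the helper name `quality_url` and the fixed base path are assumptions inferred solely from the single example above, and the `diffusion` category segment may differ for other repositories:

```python
# Minimal sketch: build the keyless request URL for this API.
# Base URL and path shape are taken from the curl example above (an
# assumption, not documented API behavior).

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Return the quality-data endpoint for a given repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

print(quality_url("diffusion", "james-oldfield", "PoS-subspaces"))
# → https://pt-edge.onrender.com/api/v1/quality/diffusion/james-oldfield/PoS-subspaces
```

The resulting URL can be passed to `curl` or any HTTP client; how an API key is supplied for the higher rate limit is not documented here.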
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...