wkentaro/osam
Get up and running with SAM1-3, EfficientSAM, YOLO-World, and other promptable vision models locally.
This tool helps developers and machine learning engineers integrate advanced image segmentation and object detection into their applications. You provide an image and a prompt (a point, bounding box, or text), and it outputs a segmentation mask or detected objects. It is ideal for quickly experimenting with or deploying powerful vision models locally.
Available on PyPI.
Use this if you need to run and experiment with state-of-the-art promptable vision models like SAM or YOLO-World directly on your local machine or server.
Not ideal if you are a non-technical end-user looking for a graphical user interface to perform image editing or analysis.
Stars: 81
Forks: 14
Language: Python
License: MIT
Category: ML frameworks
Last pushed: Jan 28, 2026
Commits (30d): 0
Dependencies: 7
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/wkentaro/osam"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
gradio-app/gradio
Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work!
opengeos/segment-geospatial
A Python package for segmenting geospatial data with the Segment Anything Model (SAM)
juglab/EmbedSeg
Code Implementation for EmbedSeg, an Instance Segmentation Method for Microscopy Images
lartpang/awesome-segmentation-saliency-dataset
A collection of some datasets for segmentation / saliency detection. Welcome to PR...:smile:
coolzhao/Geo-SAM
A QGIS plugin tool using Segment Anything Model (SAM) to accelerate segmenting or delineating...