sebasmos/QuantumVE

Vision Transformer embeddings enable scalable quantum SVMs with real-world accuracy gains.

Quality score: 43 / 100 (Emerging)

This project helps quantum machine learning researchers overcome the scalability and accuracy challenges of quantum support vector machines (QSVMs). It takes image data, extracts Vision Transformer (ViT) embeddings, and uses those embeddings to train QSVMs. The result is significantly improved classification accuracy over traditional feature pipelines, even at practically realizable qubit counts, making quantum machine learning more viable for real-world image analysis tasks.
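The pipeline described above (ViT embeddings feeding a quantum kernel SVM) can be sketched classically. In this sketch the fidelity-style quantum kernel K[i, j] = |⟨φ(x_i)|φ(x_j)⟩|² is simulated with amplitude-encoded (L2-normalised) vectors; the embedding dimension, random projection, and sample data are illustrative placeholders, not the repository's actual method or values.

```python
# Classical simulation of a fidelity quantum kernel over ViT-style embeddings.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for ViT [CLS] embeddings of 8 images (ViT-B/16 emits 768-dim vectors).
embeddings = rng.normal(size=(8, 768))

# Reduce to 2**n_qubits amplitudes so each sample fits an n-qubit register.
n_qubits = 4
proj = rng.normal(size=(768, 2 ** n_qubits))  # random-projection placeholder
reduced = embeddings @ proj

# Amplitude encoding: normalise each vector to unit L2 norm.
states = reduced / np.linalg.norm(reduced, axis=1, keepdims=True)

# Fidelity kernel between amplitude-encoded states: |<phi_i|phi_j>|**2.
kernel = np.abs(states @ states.T) ** 2

print(kernel.shape)                       # (8, 8)
print(np.allclose(np.diag(kernel), 1.0))  # self-fidelity is 1
```

The resulting Gram matrix could then be passed to any kernel SVM (e.g. scikit-learn's `SVC(kernel="precomputed")`); on hardware, each entry would instead be estimated with a swap- or inversion-test circuit.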

Use this if you are a quantum machine learning researcher or practitioner looking to enhance the performance and scalability of quantum SVMs for image classification by leveraging advanced classical embeddings.

Not ideal if you are not working with quantum computing or if your primary focus is on classical machine learning without an interest in quantum advantage.

quantum-machine-learning image-classification quantum-algorithms deep-learning computational-physics
No Package · No Dependents
Maintenance: 6 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 9
Forks: 6
Language: Jupyter Notebook
License: (not listed)
Last pushed: Nov 10, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/sebasmos/QuantumVE"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
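The curl call above can also be issued from Python with only the standard library. This is a minimal sketch: the endpoint path comes from the command shown, but the shape of the JSON response (field names, nesting) is not documented here and is therefore not assumed.

```python
# Fetch the quality-score record for a repository from the API shown above.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the endpoint URL for a category and an owner/repo pair."""
    return f"{BASE}/{category}/{repo}"


def fetch_quality(category: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body (requires network access)."""
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    url = quality_url("ml-frameworks", "sebasmos/QuantumVE")
    print(url)
```

Because the free tier allows 100 requests/day without a key, no authentication header is added; with a key, the request would need whatever header the service specifies.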