squaresLab/VarCLR
VarCLR: Variable Semantic Representation Pre-training via Contrastive Learning
VarCLR helps software developers understand code by measuring the semantic similarity between variable names. Given two variable names, it outputs a numerical score indicating how semantically similar they are. This is useful for refactoring code, understanding legacy systems, or keeping naming conventions consistent.
No commits in the last 6 months.
Use this if you need to programmatically measure how semantically similar variable names are within your codebase.
Not ideal if your work does not involve analyzing variable-name semantics in source code.
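The similarity score is the kind produced by comparing learned embedding vectors, typically via cosine similarity. VarCLR's actual Python API is not reproduced here; the sketch below only illustrates the general idea with hypothetical toy embeddings (all names and vector values are made up, not VarCLR output):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for learned variable-name embeddings.
# Values are invented for illustration; a real model produces
# much higher-dimensional vectors.
embeddings = {
    "count": [0.9, 0.1, 0.0],
    "num_items": [0.8, 0.2, 0.1],
    "filename": [0.0, 0.1, 0.95],
}

# Semantically related names score high; unrelated names score low.
related = cosine_similarity(embeddings["count"], embeddings["num_items"])
unrelated = cosine_similarity(embeddings["count"], embeddings["filename"])
```

With these toy vectors, `count` vs. `num_items` scores close to 1.0, while `count` vs. `filename` scores near 0.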
Stars: 40
Forks: 5
Language: Python
License: MIT
Category:
Last pushed: Jan 04, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/squaresLab/VarCLR"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
mims-harvard/ClinVec
ClinVec: Unified Embeddings of Clinical Codes Enable Knowledge-Grounded AI in Medicine
NYUMedML/DeepEHR
Chronic Disease Prediction Using Medical Notes
mims-harvard/SHEPHERD
SHEPHERD: Few shot learning for phenotype-driven diagnosis of patients with rare genetic diseases
biocentral/biocentral_server
Compute functionality for biocentral.
nomic-ai/contrastors
Train Models Contrastively in Pytorch