twitter-research/lmsoc

Code for reproducing our paper: LMSOC: An Approach for Socially Sensitive Pretraining

Score: 27 / 100 (Experimental)

This project helps natural language processing (NLP) researchers and engineers improve how language models understand social context. It takes social context data, like geographical location or time, and integrates it into language model pretraining. The output is a language model that produces more contextually accurate predictions. NLP practitioners developing advanced language models would use this.
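As a minimal sketch of the idea described above (all names and sizes here are hypothetical; LMSOC's actual method learns social-context embeddings and conditions masked-language-model pretraining on them), social context such as a time period or location can be represented as a vector and concatenated onto each token embedding before the model sees it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the real model's dimensions come from its config.
vocab_size, d_token, d_context, seq_len = 100, 16, 8, 5

token_emb = rng.normal(size=(vocab_size, d_token))  # token embedding table
context_emb = rng.normal(size=(d_context,))         # e.g. an embedding for "year=2020"

token_ids = rng.integers(0, vocab_size, size=seq_len)

# Condition each token on the social context by concatenation,
# then feed the result into the (masked) language model.
tokens = token_emb[token_ids]                            # (seq_len, d_token)
context = np.tile(context_emb, (seq_len, 1))             # (seq_len, d_context)
model_input = np.concatenate([tokens, context], axis=1)  # (seq_len, d_token + d_context)

print(model_input.shape)  # (5, 24)
```

The point of the sketch is only the conditioning step: the same sentence produces different model inputs under different social contexts, which is what lets the pretrained model make context-sensitive predictions.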

No commits in the last 6 months.

Use this if you are an NLP researcher or engineer looking to enhance your language models' ability to understand and generate language that is sensitive to social, geographical, or temporal contexts.

Not ideal if you are looking for a pre-trained, production-ready language model or a tool for general text analysis without a focus on social context integration.

Tags: natural-language-processing, language-model-development, social-context-analysis, AI-ethics-in-NLP, computational-linguistics

Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 6 / 25


Stars: 13
Forks: 1
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Oct 22, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/twitter-research/lmsoc"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
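The same endpoint can be called from Python; a minimal sketch using only the standard library (the response schema is not documented above, so the raw JSON is printed as-is, and the network call is left commented out):

```python
import json
import urllib.request
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    # Build the endpoint URL for a given GitHub repository.
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

url = quality_url("twitter-research", "lmsoc")
print(url)

# Uncomment to fetch (100 requests/day without a key):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```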