MalteHB/-l-ctra

Ælæctra was created as part of a Cognitive Science bachelor thesis, in an attempt to provide the Danish NLP community with a more efficient Transformer-based language model.

Score: 32 / 100 (Emerging)

This project offers a specialized language model designed to process Danish text more efficiently than existing options. It takes raw Danish text as input and helps identify and extract specific entities within that text, such as names, locations, or organizations. Developers and researchers working with Danish natural language processing applications would find this model particularly useful.
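As a sketch of how such a model might be applied to Danish named entity recognition via the Hugging Face `transformers` pipeline. The model ID, example sentence, and entity labels below are illustrative assumptions, not confirmed by this page; check the repository's README for the published checkpoint name:

```python
# Hypothetical usage sketch: Danish NER with an Ælæctra checkpoint.
# The helper below just regroups pipeline predictions for readability.

def group_entities(predictions):
    """Collect NER pipeline predictions into {label: [words]}."""
    grouped = {}
    for p in predictions:
        grouped.setdefault(p["entity_group"], []).append(p["word"])
    return grouped

if __name__ == "__main__":
    from transformers import pipeline  # requires `pip install transformers`

    ner = pipeline(
        "ner",
        model="Maltehb/-l-ctra-danish-electra-small-cased",  # assumed model ID
        aggregation_strategy="simple",
    )
    text = "Mette Frederiksen besøgte København i oktober."
    print(group_entities(ner(text)))
```

Running the script downloads the checkpoint on first use; the helper itself is model-agnostic.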

No commits in the last 6 months.

Use this if you need a language model specifically optimized for Danish text analysis that is computationally less demanding than other large models.

Not ideal if you need state-of-the-art accuracy across all NLP tasks (it may score slightly below larger models), or if you are not working with Danish text.

Tags: Danish NLP · text analysis · named entity recognition · computational linguistics · language models
Badges: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 9 / 25


Stars: 28
Forks: 3
Language: Jupyter Notebook
License: MIT
Last pushed: Oct 31, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/MalteHB/-l-ctra"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.