
Simplemma: a simple multilingual lemmatizer for Python


Purpose

Lemmatization is the process of grouping together the inflected forms of a word so they can be analysed as a single item, identified by the word's lemma, or dictionary form. Unlike stemming, lemmatization outputs word units that are still valid linguistic forms.

In modern natural language processing (NLP), this task is often indirectly tackled by more complex systems encompassing a whole processing pipeline. However, it appears that there is no straightforward way to address lemmatization in Python although this task can be crucial in fields such as information retrieval and NLP.

Simplemma provides a simple and multilingual approach to look for base forms or lemmata. It may not be as powerful as full-fledged solutions but it is generic, easy to install and straightforward to use. In particular, it does not need morphosyntactic information and can process a raw series of tokens or even a text with its built-in tokenizer. By design it should be reasonably fast and work in a large majority of cases, without being perfect.

With its comparatively small footprint it is especially useful when speed and simplicity matter, in low-resource contexts, for educational purposes, or as a baseline system for lemmatization and morphological analysis.

Currently, 49 languages are partly or fully supported (see table below).

Installation

The current library is written in pure Python with no dependencies:

pip install simplemma

  • pip3 where applicable
  • pip install -U simplemma for updates

Usage

Word-by-word

Simplemma is used by selecting a language of interest and then applying its data to individual word forms or lists of tokens.

>>> import simplemma
# get a word
>>> myword = 'masks'
# decide which language to use and apply it to a word form
>>> simplemma.lemmatize(myword, lang='en')
'mask'
# grab a list of tokens
>>> mytokens = ['Hier', 'sind', 'Vaccines']
>>> for token in mytokens:
...     simplemma.lemmatize(token, lang='de')
...
'hier'
'sein'
'Vaccines'
# list comprehensions can be faster
>>> [simplemma.lemmatize(t, lang='de') for t in mytokens]
['hier', 'sein', 'Vaccines']

Chaining languages

Chaining several languages can improve coverage; they are tried in the order given:

>>> from simplemma import lemmatize
>>> lemmatize('Vaccines', lang=('de', 'en'))
'vaccine'
>>> lemmatize('spaghettis', lang='it')
'spaghettis'
>>> lemmatize('spaghettis', lang=('it', 'fr'))
'spaghetti'
>>> lemmatize('spaghetti', lang=('it', 'fr'))
'spaghetto'

Greedier decomposition

For certain languages a greedier decomposition is activated by default, as it proves beneficial there, mostly because it has a certain capacity to address affixes in an unsupervised way. For other languages it can be enabled manually by setting the greedy parameter to True.

This option also triggers a stronger reduction through a further iteration of the search algorithm, e.g. "angekündigten" → "angekündigt" (standard) → "ankündigen" (greedy). In some cases it may be closer to stemming than to lemmatization.

# same example as before, comes to this result in one step
>>> simplemma.lemmatize('spaghettis', lang=('it', 'fr'), greedy=True)
'spaghetto'
# German case described above
>>> simplemma.lemmatize('angekündigten', lang='de', greedy=True)
'ankündigen' # 2 steps: reduction to infinitive verb
>>> simplemma.lemmatize('angekündigten', lang='de', greedy=False)
'angekündigt' # 1 step: reduction to past participle

is_known()

The additional function is_known() checks if a given word is present in the language data:

>>> from simplemma import is_known
>>> is_known('spaghetti', lang='it')
True

Tokenization

A simple tokenization function is included for convenience:

>>> from simplemma import simple_tokenizer
>>> simple_tokenizer('Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.')
['Lorem', 'ipsum', 'dolor', 'sit', 'amet', ',', 'consectetur', 'adipiscing', 'elit', ',', 'sed', 'do', 'eiusmod', 'tempor', 'incididunt', 'ut', 'labore', 'et', 'dolore', 'magna', 'aliqua', '.']
# use iterator instead
>>> simple_tokenizer('Lorem ipsum dolor sit amet', iterate=True)

The functions text_lemmatizer() and lemma_iterator() chain tokenization and lemmatization. They can take greedy (affecting lemmatization) and silent (affecting errors and logging) as arguments:

>>> from simplemma import text_lemmatizer
>>> sentence = 'Sou o intervalo entre o que desejo ser e os outros me fizeram.'
>>> text_lemmatizer(sentence, lang='pt')
# caveat: desejo is also a noun, should be desejar here
['ser', 'o', 'intervalo', 'entre', 'o', 'que', 'desejo', 'ser', 'e', 'o', 'outro', 'me', 'fazer', '.']
# same principle, returns a generator and not a list
>>> from simplemma import lemma_iterator
>>> lemma_iterator(sentence, lang='pt')
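
The optional arguments mentioned above are passed as keyword arguments. The following lines are only a sketch of the call forms (outputs omitted, as the exact lemmata depend on the language data):

# sketch: greedy affects lemmatization, silent affects errors and logging
>>> text_lemmatizer(sentence, lang='pt', greedy=True, silent=True)
# the generator variant can be consumed lazily, token by token
>>> for lemma in lemma_iterator(sentence, lang='pt'):
...     print(lemma)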

Caveats

# don't expect too much though
# this diminutive form isn't in the model data
>>> simplemma.lemmatize('spaghettini', lang='it')
'spaghettini' # should read 'spaghettino'
# the algorithm cannot choose between valid alternatives yet
>>> simplemma.lemmatize('son', lang='es')
'son' # valid common noun, but what about the verb form?

As the focus lies on overall coverage, some short frequent words (typically pronouns and conjunctions) may need post-processing; this generally concerns a few dozen tokens per language.
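
Such post-processing can be as simple as a small hand-made override table applied on top of simplemma's output. The sketch below is purely illustrative and not part of the library; the override entry is a made-up example:

# hypothetical post-processing: hand-made overrides applied after lemmatization
>>> import simplemma
>>> overrides = {'desejo': 'desejar'}  # made-up example entry
>>> def lemmatize_with_overrides(token, lang):
...     if token in overrides:
...         return overrides[token]
...     return simplemma.lemmatize(token, lang=lang)
...
>>> lemmatize_with_overrides('desejo', lang='pt')
'desejar'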

The current absence of morphosyntactic information is both an advantage in terms of simplicity and an impassable frontier regarding lemmatization accuracy, e.g. for disambiguation between past participles and adjectives derived from verbs in Germanic and Romance languages. In such cases, simplemma often does not change the input word.

The greedy algorithm seldom produces invalid forms. It is designed to work best in the low-frequency range, notably for compound words and neologisms. Aggressive decomposition is only useful as a general approach in the case of morphologically-rich languages, where it can also act as a linguistically motivated stemmer.

Bug reports over the issues page are welcome.

Language detection

Language detection works by providing a text and a tuple lang consisting of the languages of interest. Scores between 0 and 1 are returned.

The lang_detector() function returns a list of language codes along with scores and appends "unk" for the share of unknown or out-of-vocabulary words. The proportion of words in the target language(s) can also be computed with the in_target_language() function, which returns a single ratio.

# import necessary functions
>>> from simplemma import in_target_language, lang_detector
# language detection
>>> lang_detector('"Exoplaneta, též extrasolární planeta, je planeta obíhající kolem jiné hvězdy než kolem Slunce."', lang=("cs", "sk"))
[("cs", 0.75), ("sk", 0.125), ("unk", 0.25)]
# proportion of known words
>>> in_target_language("opera post physica posita (τὰ μετὰ τὰ φυσικά)", lang="la")
0.5

The greedy argument (called extensive in past software versions) triggers the greedier decomposition algorithm described above, thus extending word coverage and detection recall at the potential cost of lower accuracy.
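
As an illustration, it can be passed the same way as in lemmatization; the output is omitted here since the exact scores depend on the language data:

# sketch: greedier decomposition during language detection
>>> lang_detector('"Exoplaneta, též extrasolární planeta, je planeta obíhající kolem jiné hvězdy než kolem Slunce."', lang=("cs", "sk"), greedy=True)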

Supported languages

The following languages are available using their BCP 47 language tag, which is usually the ISO 639-1 code; if no such code exists, an ISO 639-3 code is used instead:

Available languages (2022-01-20)
Code Language Forms (10³) Acc. Comments
ast Asturian 124    
bg Bulgarian 204    
ca Catalan 579    
cs Czech 187 0.89 on UD CS-PDT
cy Welsh 360    
da Danish 554 0.92 on UD DA-DDT, alternative: lemmy
de German 675 0.95 on UD DE-GSD, see also German-NLP list
el Greek 181 0.88 on UD EL-GDT
en English 131 0.94 on UD EN-GUM, alternative: LemmInflect
enm Middle English 38    
es Spanish 665 0.95 on UD ES-GSD
et Estonian 119   low coverage
fa Persian 12   experimental
fi Finnish 3,199   see this benchmark
fr French 217 0.94 on UD FR-GSD
ga Irish 372    
gd Gaelic 48    
gl Galician 384    
gv Manx 62    
hbs Serbo-Croatian 656   Croatian and Serbian lists to be added later
hi Hindi 58   experimental
hu Hungarian 458    
hy Armenian 246    
id Indonesian 17 0.91 on UD ID-CSUI
is Icelandic 174    
it Italian 333 0.93 on UD IT-ISDT
ka Georgian 65    
la Latin 843    
lb Luxembourgish 305    
lt Lithuanian 247    
lv Latvian 164    
mk Macedonian 56    
ms Malay 14    
nb Norwegian (Bokmål) 617    
nl Dutch 250 0.92 on UD-NL-Alpino
nn Norwegian (Nynorsk) 56    
pl Polish 3,211 0.91 on UD-PL-PDB
pt Portuguese 924 0.92 on UD-PT-GSD
ro Romanian 311    
ru Russian 595   alternative: pymorphy2
se Northern Sámi 113    
sk Slovak 818 0.92 on UD SK-SNK
sl Slovene 136    
sq Albanian 35    
sv Swedish 658   alternative: lemmy
sw Swahili 10   experimental
tl Tagalog 32   experimental
tr Turkish 1,232 0.89 on UD-TR-Boun
uk Ukrainian 370   alternative: pymorphy2

A low coverage mention means one would probably be better off with a language-specific library, but simplemma will still work to a limited extent. Open-source alternatives for Python are referenced where possible.

An experimental mention indicates that the language remains untested or that there could be issues with the underlying data or the lemmatization process.

The scores are calculated on Universal Dependencies treebanks on single word tokens (including some contractions but not merged prepositions); they describe to what extent simplemma can accurately map tokens to their lemma form. See the eval/ folder of the code repository for more information.
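
As a rough illustration of what such a score measures, token-level accuracy can be computed over gold (token, lemma) pairs as follows; the pairs here are made up, and the real evaluation in eval/ is more involved:

# hedged sketch: token-level lemmatization accuracy over hypothetical gold pairs
>>> import simplemma
>>> gold = [('masks', 'mask'), ('is', 'be'), ('improved', 'improve')]  # made-up pairs
>>> hits = sum(simplemma.lemmatize(token, lang='en') == lemma for token, lemma in gold)
>>> hits / len(gold)  # ratio of correctly lemmatized tokens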

This library is particularly relevant for the lemmatization of less frequent words, and its performance in this case is only incidentally captured by the benchmark above. In some languages, a fixed number of words such as pronouns can be further mapped by hand to enhance performance.

Speed

Orders of magnitude given for reference only, measured on an old laptop to give a lower bound:

  • Tokenization: > 1 million tokens/sec
  • Lemmatization: > 250,000 words/sec

Using the most recent Python version (e.g. installed with pyenv) can make the package run faster.
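
As a rough check on a given machine, a minimal, non-authoritative timing sketch could look as follows; the sample tokens and repetition count are made up, and the measured rate varies with hardware and language data:

# rough throughput measurement for lemmatization (numbers vary by machine)
import time
import simplemma

tokens = ['masks', 'tests', 'running', 'falls', 'words'] * 100_000  # made-up sample
start = time.perf_counter()
lemmata = [simplemma.lemmatize(t, lang='en') for t in tokens]
elapsed = time.perf_counter() - start
print(f'{len(tokens) / elapsed:,.0f} words/sec')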

Roadmap

  • [-] Add further lemmatization lists
  • [ ] Grammatical categories as option
  • [ ] Function as a meta-package?
  • [ ] Integrate optional, more complex models?

Credits and licenses

The software is under MIT license; for the linguistic information databases, see the licenses folder.

The surface lookups (non-greedy mode) use lemmatization lists derived from various sources, ordered by relative importance.

Contributions

See this list of contributors to the repository.

Feel free to contribute, notably by filing issues for feedback, bug reports, or links to further lemmatization lists, rules and tests.

Contributions by pull request ought to follow these conventions: code style with black, type hinting with mypy, and tests included with pytest.

Other solutions

See lists: German-NLP and other awesome-NLP lists.

For a more complex and universal approach in Python see universal-lemmatizer.

References

To cite this software:


Barbaresi A. (year). Simplemma: a simple multilingual lemmatizer for Python [Computer software] (Version version number). Berlin, Germany: Berlin-Brandenburg Academy of Sciences. Available from https://github.com/adbar/simplemma DOI: 10.5281/zenodo.4673264

This work draws from lexical analysis algorithms used in earlier work.
