

GiNZA NLP Library


An Open Source Japanese NLP Library based on Universal Dependencies

Please read the Important changes before you upgrade GiNZA.

The Japanese page is available here.

License

GiNZA NLP Library and GiNZA Japanese Universal Dependencies Models are distributed under the MIT License. You must agree to and follow the MIT License to use the GiNZA NLP Library and the GiNZA Japanese Universal Dependencies Models.

Explosion / spaCy

spaCy is the key framework of GiNZA.

spaCy LICENSE PAGE

Works Applications Enterprise / Sudachi/SudachiPy - SudachiDict - chiVe

SudachiPy provides high accuracy for tokenization and POS tagging.

Sudachi LICENSE PAGE, SudachiPy LICENSE PAGE, SudachiDict LEGAL PAGE, chiVe LICENSE PAGE

Hugging Face / transformers

The GiNZA v5 Transformers model (ja_ginza_electra) is trained by using Hugging Face Transformers as a framework for pretrained models.

transformers LICENSE PAGE

Training Datasets

UD Japanese BCCWJ r2.8

The parsing model of GiNZA v5 is trained on a part of UD Japanese BCCWJ r2.8 (Omura and Asahara: 2018). This model is developed by the National Institute for Japanese Language and Linguistics and Megagon Labs.

GSK2014-A (2019) BCCWJ edition

The named entity recognition model of GiNZA v5 is trained on a part of GSK2014-A (2019) BCCWJ edition (Hashimoto, Inui, and Murakami: 2008). We use two named entity label systems: Sekine's Extended Named Entity Hierarchy and extended OntoNotes5. This model is developed by the National Institute for Japanese Language and Linguistics and Megagon Labs.

mC4

The GiNZA v5 Transformers model (ja_ginza_electra) is trained by using transformers-ud-japanese-electra-base-discriminator which is pretrained on more than 200 million Japanese sentences extracted from mC4.

Contains information from mC4 which is made available under the ODC Attribution License.

@article{2019t5,
    author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
    title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
    journal = {arXiv e-prints},
    year = {2019},
    archivePrefix = {arXiv},
    eprint = {1910.10683},
}

Runtime Environment

This project is developed with Python >= 3.6 and pip. We do not recommend using an Anaconda environment because the pip install step may not work properly.

Please also see the Development Environment section below.

Runtime set up

1. Install GiNZA NLP Library with Transformer-based Model

Uninstall the previous versions of the ginza and ja_ginza_electra packages:

$ pip uninstall ginza ja_ginza_electra

Then, install the latest version of ginza and ja_ginza_electra:

$ pip install -U ginza ja_ginza_electra

The ja_ginza_electra package does not include pytorch_model.bin due to PyPI's archive size restrictions. This large model file is automatically downloaded on the first run, and the locally cached file is used for subsequent runs.

If you need to install ja_ginza_electra together with pytorch_model.bin at install time, you can specify a direct link to the GitHub release archive as follows:

$ pip install -U ginza https://github.com/megagonlabs/ginza/releases/download/latest/ja_ginza_electra-latest-with-model.tar.gz

If you want to accelerate the Transformers-based models by using GPUs with CUDA support, you can install spacy with the CUDA version specified as follows:

$ pip install -U "spacy[cuda110]"

You also need to install a version of PyTorch that is consistent with the CUDA version.
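For example, with CUDA 11.0 the PyTorch step might look like the following (a sketch only; the exact version pins depend on your environment, so check the official PyTorch installation instructions for the right command):

$ pip install torch==1.7.1+cu110 -f https://download.pytorch.org/whl/torch_stable.html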

2. Install GiNZA NLP Library with Standard Model

Uninstall previous version:

$ pip uninstall ginza ja_ginza

Then, install the latest version of ginza and ja_ginza:

$ pip install -U ginza ja_ginza

Execute ginza command

Run the ginza command from the console and input some Japanese text. After pressing the Enter key, you will get the parsed results in CoNLL-U syntactic annotation format.

$ ginza
銀座でランチをご一緒しましょう。
# text = 銀座でランチをご一緒しましょう。
1	銀座	銀座	PROPN	名詞-固有名詞-地名-一般	_	6	obl	_	SpaceAfter=No|BunsetuBILabel=B|BunsetuPositionType=SEM_HEAD|NP_B|Reading=ギンザ|NE=B-GPE|ENE=B-City
2	で	で	ADP	助詞-格助詞	_	1	case	_	SpaceAfter=No|BunsetuBILabel=I|BunsetuPositionType=SYN_HEAD|Reading=デ
3	ランチ	ランチ	NOUN	名詞-普通名詞-一般	_	6	obj	_	SpaceAfter=No|BunsetuBILabel=B|BunsetuPositionType=SEM_HEAD|NP_B|Reading=ランチ
4	を	を	ADP	助詞-格助詞	_	3	case	_	SpaceAfter=No|BunsetuBILabel=I|BunsetuPositionType=SYN_HEAD|Reading=ヲ
5	ご	ご	NOUN	接頭辞	_	6	compound	_	SpaceAfter=No|BunsetuBILabel=B|BunsetuPositionType=CONT|Reading=ゴ
6	一緒	一緒	VERB	名詞-普通名詞-サ変可能	_	0	root	_	SpaceAfter=No|BunsetuBILabel=I|BunsetuPositionType=ROOT|Reading=イッショ
7	し	する	AUX	動詞-非自立可能	_	6	aux	_	SpaceAfter=No|BunsetuBILabel=I|BunsetuPositionType=SYN_HEAD|Inf=サ行変格,連用形-一般|Reading=シ
8	ましょう	ます	AUX	助動詞	_	6	aux	_	SpaceAfter=No|BunsetuBILabel=I|BunsetuPositionType=SYN_HEAD|Inf=助動詞-マス,意志推量形|Reading=マショウ
9	。	。	PUNCT	補助記号-句点	_	6	punct	_	SpaceAfter=No|BunsetuBILabel=I|BunsetuPositionType=CONT|Reading=。

The ginzame command provides a tokenization function like MeCab. The output format of ginzame is almost the same as that of mecab, but the last pronunciation field is always '*'.

$ ginzame
銀座でランチをご一緒しましょう。
銀座	名詞,固有名詞,地名,一般,*,*,銀座,ギンザ,*
で	助詞,格助詞,*,*,*,*,で,デ,*
ランチ	名詞,普通名詞,一般,*,*,*,ランチ,ランチ,*
を	助詞,格助詞,*,*,*,*,を,ヲ,*
ご	接頭辞,*,*,*,*,*,御,ゴ,*
一緒	名詞,普通名詞,サ変可能,*,*,*,一緒,イッショ,*
し	動詞,非自立可能,*,*,サ行変格,連用形-一般,為る,シ,*
ましょう	助動詞,*,*,*,助動詞-マス,意志推量形,ます,マショウ,*
。	補助記号,句点,*,*,*,*,。,。,*
EOS

spaCy's JSON format is available by specifying -f 3 or -f json with the ginza command.

$ ginza -f json
銀座でランチをご一緒しましょう。
[
 {
  "paragraphs": [
   {
    "raw": "銀座でランチをご一緒しましょう。",
    "sentences": [
     {
      "tokens": [
       {"id": 1, "orth": "銀座", "tag": "名詞-固有名詞-地名-一般", "pos": "PROPN", "lemma": "銀座", "head": 5, "dep": "obl", "ner": "B-City"},
       {"id": 2, "orth": "で", "tag": "助詞-格助詞", "pos": "ADP", "lemma": "で", "head": -1, "dep": "case", "ner": "O"},
       {"id": 3, "orth": "ランチ", "tag": "名詞-普通名詞-一般", "pos": "NOUN", "lemma": "ランチ", "head": 3, "dep": "obj", "ner": "O"},
       {"id": 4, "orth": "を", "tag": "助詞-格助詞", "pos": "ADP", "lemma": "を", "head": -1, "dep": "case", "ner": "O"},
       {"id": 5, "orth": "ご", "tag": "接頭辞", "pos": "NOUN", "lemma": "ご", "head": 1, "dep": "compound", "ner": "O"},
       {"id": 6, "orth": "一緒", "tag": "名詞-普通名詞-サ変可能", "pos": "VERB", "lemma": "一緒", "head": 0, "dep": "ROOT", "ner": "O"},
       {"id": 7, "orth": "し", "tag": "動詞-非自立可能", "pos": "AUX", "lemma": "する", "head": -1, "dep": "advcl", "ner": "O"},
       {"id": 8, "orth": "ましょう", "tag": "助動詞", "pos": "AUX", "lemma": "ます", "head": -2, "dep": "aux", "ner": "O"},
       {"id": 9, "orth": "。", "tag": "補助記号-句点", "pos": "PUNCT", "lemma": "。", "head": -3, "dep": "punct", "ner": "O"}
      ]
     }
    ]
   }
  ]
 }
]

If you want cabocha -f1 (lattice style) like output, add the -f 1 or -f cabocha option to the ginza command. This option's format is almost the same as that of cabocha -f1, but the func_index field (after the slash) is slightly different. Our func_index field indicates the boundary where the 自立語 (independent words) end in each 文節 (bunsetsu), and the 機能語 (functional words) might start from there. The functional token filter also differs slightly between cabocha -f1 and ginza -f cabocha.

$ ginza -f cabocha
銀座でランチをご一緒しましょう。
* 0 2D 0/1 0.000000
銀座	名詞,固有名詞,地名,一般,,銀座,ギンザ,*	B-City
で	助詞,格助詞,*,*,,で,デ,*	O
* 1 2D 0/1 0.000000
ランチ	名詞,普通名詞,一般,*,,ランチ,ランチ,*	O
を	助詞,格助詞,*,*,,を,ヲ,*	O
* 2 -1D 0/2 0.000000
ご	接頭辞,*,*,*,,ご,ゴ,*	O
一緒	名詞,普通名詞,サ変可能,*,,一緒,イッショ,*	O
し	動詞,非自立可能,*,*,サ行変格,連用形-一般,する,シ,*	O
ましょう	助動詞,*,*,*,助動詞-マス,意志推量形,ます,マショウ,*	O
。	補助記号,句点,*,*,,。,。,*	O
EOS

Multi-processing (Experimental)

We added the -p NUM_PROCESS option in GiNZA v3.0. Specify the number of analysis processes as NUM_PROCESS. If you want to use all the CPU cores for GiNZA, execute ginza -p 0. The memory requirement is about 130MB per process (to be improved).
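For example, a parallel batch run over a text file might look like this (a sketch; assumes one sentence or paragraph per line on standard input, with the CoNLL-U results written to standard output):

$ ginza -p 0 < input.txt > output.conllu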

Coding example

The following code shows dependency parsing results with the sentence boundary marker 'EOS'.

import spacy
nlp = spacy.load('ja_ginza_electra')
doc = nlp('銀座でランチをご一緒しましょう。')
for sent in doc.sents:
    for token in sent:
        # index, surface, lemma, normalized form, reading, UPOS, inflection, XPOS, dependency relation, head index
        print(
            token.i,
            token.orth_,
            token.lemma_,
            token.norm_,
            token.morph.get("Reading"),
            token.pos_,
            token.morph.get("Inflection"),
            token.tag_,
            token.dep_,
            token.head.i,
        )
    print('EOS')
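As a further sketch, the bunsetu-unit APIs listed under the ginza-4.0.0 release notes below can be used to iterate over 文節 (bunsetsu) spans, assuming the same model package is installed:

import spacy
from ginza import bunsetu_spans

nlp = spacy.load('ja_ginza_electra')
doc = nlp('銀座でランチをご一緒しましょう。')
for sent in doc.sents:
    # bunsetu_spans() yields one Span per bunsetsu (Japanese phrase unit)
    for span in bunsetu_spans(sent):
        print(span.text)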

User Dictionary

The user dictionary files should be set in the userDict field of sudachi.json in the installed package directory of the ja_ginza_dict package.

Please read the official documents to compile user dictionaries with the sudachipy command: SudachiPy - User defined Dictionary, and Sudachi User Dictionary Construction (Japanese only).
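A minimal sketch of the relevant part of sudachi.json (the dictionary file name is an assumption; point the entry at your compiled dictionary):

{
    "userDict": ["user.dic"]
}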

Releases

version 5.x

ginza-5.1.2

  • 2022-03-12
  • Migrate to spaCy v3.4

ginza-5.1.1

  • 2022-03-12
  • Improvements
    • auto deploy for pypi by @nimiusrd in #184
    • modify github actions: trigger by tagging, stop uploading test pypi by @r-terada in #233

ginza-5.1.0

  • 2021-12-10, Euclase
  • Important changes
    • Upgrade: spaCy v3.2 and Sudachi.rs (SudachiPy v0.6.2)
    • Change token information fields #208 #209 (see the migration sketch after this release entry)
      • doc.user_data["reading_forms"][token.i] -> token.morph.get("Reading")
      • doc.user_data["inflections"][token.i] -> token.morph.get("Inflection")
      • force_using_normalized_form_as_lemma(True) -> token.norm_
    • All spaCy models, including non-Japanese, are now available with the ginza command #217
      • Download and analyze the model at once by specifying the model name in the following form #219
      • ginza -m en_core_web_md
    • Change ginza --require_gpu and ginza -g to take a gpu_id argument
      • The default gpu_id value is -1 which uses only CPUs
    • ginza -f json option always analyzes lines which start with #, regardless of the value of the -c option. #215
  • Improvements
    • Batch analysis processing sped up by 50-60% in GPU environments and by 10-40% in CPU environments
    • Improved processing efficiency of parallel execution options (ginza -p {n_process} and ginzame) of ginza command #204
    • add tests #198 #210 #214
    • add benchmark #207 #220
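A minimal migration sketch for the token information field changes above (assuming a GiNZA v5.1+ pipeline):

import spacy

nlp = spacy.load('ja_ginza')  # or 'ja_ginza_electra'
doc = nlp('銀座でランチをご一緒しましょう。')
for token in doc:
    reading = token.morph.get("Reading")        # was doc.user_data["reading_forms"][token.i]
    inflection = token.morph.get("Inflection")  # was doc.user_data["inflections"][token.i]
    normalized = token.norm_                    # was force_using_normalized_form_as_lemma(True)
    print(token.orth_, reading, inflection, normalized)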

ginza-5.0.3

  • 2021-10-15
  • Bug fix
    • Bunsetu span should not cross the sentence boundary #195

ginza-5.0.2

  • 2021-09-06
  • Bug fix
    • Command Line -s option and set_split_mode() not working in v5.0.x #185

ginza-5.0.1

  • 2021-08-26
  • Bug fix
    • ginzame not working in ginza ver. 5 #179
    • Command Line -d option not working in v5.0.0 #178
  • Improvement
    • accept ja-ginza and ja-ginza-electra for -m option of ginza command

ginza-5.0.0

  • 2021-08-26, Demantoid
  • Important changes
    • Upgrade spaCy to v3
      • Release transformer-based ja-ginza-electra model
      • Improve UPOS accuracy of the standard ja-ginza model by adding a morphologizer to the tail of the spaCy pipeline
    • Need to install an analysis model along with the ginza package
      • High accuracy model (>=16GB memory needed)
        • pip install -U ginza ja-ginza-electra
      • Speed oriented model
        • pip install -U ginza ja-ginza
    • Change component names of CompoundSplitter and BunsetuRecognizer to compound_splitter and bunsetu_recognizer respectively
    • Also see spaCy v3 Backwards Incompatibilities
  • Improvements
    • Add command line options
      • -n
        • Force using SudachiPy's normalized_form as Token.lemma_
      • -m (ja_ginza|ja_ginza_electra)
        • Select model package
    • Revise ENE category name
      • Degital_Game to Digital_Game

version 4.x

ginza-4.0.6

  • 2021-06-01
  • Bug fix
    • Issue #160: IndexError: list assignment index out of range for empty string

ginza-4.0.5

  • 2020-10-01
  • Improvements
    • Add -d option, which disables spaCy's sentence separator, to ginza command line tool

ginza-4.0.4

  • 2020-09-11
  • Improvements
    • ginza command line tool works correctly without BunsetuRecognizer in the pipeline

ginza-4.0.3

  • 2020-09-10
  • Improve bunsetu head identification accuracy over inconsistent deps in ent spans

ginza-4.0.2

  • 2020-09-04
  • Improvements
    • Serialization of CompoundSplitter for nlp.to_disk()
    • Bunsetu span detection accuracy

ginza-4.0.1

  • 2020-08-30
  • Debug
    • Add type arguments for singledispatch register annotations (for Python 3.6)

ginza-4.0.0

  • 2020-08-16, Chrysoberyl
  • Important changes
    • Replace Japanese model with spacy.lang.ja of spaCy v2.3
      • Replace values of Token.lemma_ with the output of SudachiPy's Morpheme.dictionary_form()
    • Replace ja_ginza_dict with official SudachiDict-core package
      • You can delete the ja_ginza_dict package safely
    • Change options and misc field contents of output of command line tool
      • delete use_sentence_separator(-s)
      • NE(OntoNotes) BI labels as B-GPE
      • Add subfields: Reading, Inf(inflection) and ENE(Extended NE)
    • Obsolete Token._.* and add some entries for Doc.user_data[] and accessors
      • inflections (ginza.inflection(Token))
      • reading_forms (ginza.reading_form(Token))
      • bunsetu_bi_labels (ginza.bunsetu_bi_label(Token))
      • bunsetu_position_types (ginza.bunsetu_position_type(Token))
      • bunsetu_heads (ginza.is_bunsetu_head(Token))
    • Change pipeline architecture
      • JapaneseCorrector was obsoleted
      • Add CompoundSplitter and BunsetuRecognizer
    • Upgrade UD_JAPANESE-BCCWJ to v2.6
    • Change word2vec to chiVe mc90
  • API Changes
    • Add bunsetu-unit APIs (from ginza import *)
      • bunsetu(Token)
      • phrase(Token)
      • sub_phrases(Token)
      • phrases(Span)
      • bunsetu_spans(Span)
      • bunsetu_phrase_spans(Span)
      • bunsetu_head_list(Span)
      • bunsetu_head_tokens(Span)
      • bunsetu_bi_labels(Span)
      • bunsetu_position_types(Span)

version 3.x

ginza-3.1.2

  • 2020-02-12
  • Debug
    • Fix: regression in cabocha mode

ginza-3.1.1

  • 2020-01-19
  • API Changes
    • Extension fields
      • The values of the Token._.sudachi field are set only after calling SudachipyTokenizer.set_enable_ex_sudachi(True), to avoid serialization errors
import spacy
import pickle

nlp = spacy.load('ja_ginza')
# By default, Token._.sudachi is not filled, so the Doc serializes correctly
doc1 = nlp('This example will be serialized correctly.')
doc1.to_bytes()
with open('sample1.pickle', 'wb') as f:
    pickle.dump(doc1, f)

# After enabling ex_sudachi, Token._.sudachi holds SudachiPy morpheme instances,
# which cannot be serialized
nlp.tokenizer.set_enable_ex_sudachi(True)
doc2 = nlp('This example will cause a serialization error.')
doc2.to_bytes()
with open('sample2.pickle', 'wb') as f:
    pickle.dump(doc2, f)

ginza-3.1.0

  • 2020-01-16
  • Important changes
    • Distribute ja_ginza_dict from PyPI
  • API Changes
    • commands
      • ginza and ginzame
        • add -i option to initialize the files of ja_ginza_dict

ginza-3.0.0

  • 2020-01-15, Benitoite
  • Important changes
    • Distribute ginza and ja_ginza from PyPI
      • Simple installation: pip install ginza, then run ginza
      • The model package, ja_ginza, is also available from PyPI.
    • Model improvements
      • Change NER training data-set to GSK2014-A (2019) BCCWJ edition
        • Improved accuracy of NER
        • token.ent_type_ value is changed to Sekine's Extended Named Entity Hierarchy
          • Add ENE7 attribute to the last field of the output of ginza
        • Move OntoNotes5-based labels to token._.ne
          • We extended the OntoNotes5 named entity labels with PHONE, EMAIL, URL, and PET_NAME
      • Overall accuracy is improved by executing spacy pretrain over 100 epochs
        • Multi-task learning of spacy train effectively working on UD Japanese BCCWJ
      • The newest SudachiDict_core-20191224
    • ginzame
      • Execute sudachipy via multiprocessing.Pool and output results in a mecab-like format
      • Now sudachipy command requires additional SudachiDict package installation
  • Breaking API Changes
    • commands
      • ginza (ginza.command_line.main_ginza)
        • change option mode to sudachipy_mode
        • drop options: disable_pipes and recreate_corrector
        • add options: hash_comment, parallel, files
        • add mecab to the choices for the argument of -f option
        • add parallel NUM_PROCESS option (EXPERIMENTAL)
        • add ENE7 attribute to conllu miscellaneous field
          • ginza.ent_type_mapping.ENE_NE_MAPPING is used to convert ENE7 label to NE
      • add ginzame (ginza.command_line.main_ginzame)
        • a multi-process tokenizer providing a mecab-like output format
    • spaCy field extensions
      • add token._.ne for ner label
    • ginza/sudachipy_tokenizer.py
      • change SudachiTokenizer to SudachipyTokenizer
      • use SUDACHI_DEFAULT_SPLIT_MODE instead of SUDACHI_DEFAULT_SPLITMODE or SUDACHI_DEFAULT_MODE
  • Dependencies
    • upgrade spacy to v2.2.3
    • upgrade sudachipy to v0.4.2

version 2.x

ginza-2.2.1

  • 2019-10-28
  • Improvements
    • JapaneseCorrector can merge the as_* type dependencies completely
  • Bug fixes
    • The command line tool failed in specific situations

ginza-2.2.0

  • 2019-10-04, Ametrine
  • Important changes
    • split_mode had been set incorrectly to sudachipy.tokenizer since v2.0.0 (#43)
      • This bug caused split_mode incompatibility between the training phase and the ginza command.
      • split_mode was set to 'B' for training phase and python APIs, but 'C' for ginza command.
      • We fixed this bug by setting the default split_mode to 'C' entirely.
      • This fix may cause word segmentation incompatibilities when upgrading GiNZA from v2.0.0 to v2.2.0.
  • New features
    • Add -f and --output-format option to ginza command:
    • Add custom token fields:
      • bunsetu_index : bunsetu index starting from 0
      • reading: reading of token (not a pronunciation)
      • sudachi: SudachiPy's morpheme instance (or a list of them when the tokens are merged by JapaneseCorrector)
  • Performance improvements
    • Tokenizer
      • Use latest SudachiDict (SudachiDict_core-20190927.tar.gz)
      • Use Cythonized SudachiPy (v0.4.0)
    • Dependency parser
      • Apply spacy pretrain command to capture the language model from UD-Japanese BCCWJ, UD_Japanese-PUD and KWDLC.
      • Apply multitask objectives by using -pt 'tag,dep' option of spacy train
    • New model file
      • ja_ginza-2.2.0.tar.gz

ginza-2.0.0

  • 2019-07-08
  • Add ginza command
    • run ginza from the console
  • Change package structure
    • module package as ginza
    • language model package as ja_ginza
    • spacy.lang.ja is overridden by ginza
  • Remove sudachipy related directories
    • SudachiPy and its dictionary are installed via pip during ginza installation
  • User dictionary available
  • Token extension fields
    • Added
      • token._.bunsetu_bi_label, token._.bunsetu_position_type
    • Remained
      • token._.inf
    • Removed
      • pos_detail (same value is set to token.tag_)

version 1.x

ja_ginza_nopn-1.0.2

  • 2019-04-07
  • Set the head index of root tokens to 0 to meet the CoNLL-U format definition

ja_ginza_nopn-1.0.1

  • 2019-04-02
  • Add new Japanese era 'reiwa' to system_core.dic.

ja_ginza_nopn-1.0.0

  • 2019-04-01
  • First release version

Development Environment

Development set up

1. Clone from github

$ git clone 'https://github.com/megagonlabs/ginza.git'

2. Run python setup.py

For normal environment:

$ python setup.py develop

3. Set up system.dic

Copy system.dic from the installed package directory of ja_ginza_dict to ./ja_ginza_dict/sudachidict/.
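A sketch of this step (the sudachidict subdirectory layout inside the installed package is an assumption; adjust the source path to your installation):

$ python -c "import ja_ginza_dict, os; print(os.path.dirname(ja_ginza_dict.__file__))"
$ cp <printed_directory>/sudachidict/system.dic ./ja_ginza_dict/sudachidict/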

Training models

The analysis model of GiNZA is trained with the spacy train command.

$ python -m spacy train ja ja_ginza-4.0.0 corpus/ja_ginza-ud-train.json corpus/ja_ginza-ud-dev.json -b ja_vectors_chive_mc90_35k/ -ovl 0.3 -n 100 -m meta.json.ginza -V 4.0.0

Run tests

GiNZA uses the pytest framework for testing, and you can run the tests via setup.py without installing the test requirements explicitly. Some tests depend on the default GiNZA models (ja-ginza, ja-ginza-electra), so installing them before running the tests is needed.

$ pip install ja-ginza ja-ginza-electra
$ pip install -e .
# full test
$ python setup.py test
# test single file
$ python setup.py test --addopts ginza/tests/test_analyzer.py
