  • Stars: 300
  • Rank: 138,870 (Top 3%)
  • Language: Python
  • License: GNU General Publi...
  • Created: almost 7 years ago
  • Updated: over 4 years ago


Repository Details

Simple Solution for Multi-Criteria Chinese Word Segmentation

multi-criteria-cws

Code and corpora for the paper "Effective Neural Solution for Multi-Criteria Word Segmentation" (accepted and forthcoming at SCI-2018).

Dependencies

  • Python3
  • dynet

Quick Start

Run the following command to prepare the corpora, split them into train/dev/test sets, etc.:

python3 convert_corpus.py 
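Under the hood, convert_corpus.py also writes character-level BMES tag files (see its make_bmes step). The BMES scheme itself can be sketched as follows; this helper is a hypothetical illustration, not the repo's actual code:

```python
def to_bmes(words):
    """Tag each character of a segmented sentence with B/M/E/S.

    B = begin of a multi-character word, M = middle, E = end,
    S = a single-character word.
    """
    tags = []
    for word in words:
        if len(word) == 1:
            tags.append('S')
        else:
            tags.extend(['B'] + ['M'] * (len(word) - 2) + ['E'])
    return tags
```

For example, `to_bmes(['自然', '语言', '好'])` yields `['B', 'E', 'B', 'E', 'S']`.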

Then convert a corpus $dataset into a pickle file:

./script/make.sh $dataset
  • $dataset can be one of the following corpora: pku, msr, as, cityu, sxu, ctb, zx, cnc, udc and wtb.
  • $dataset can also be a joint corpus like joint-sighan2005 or joint-10in1.
  • If you have access to sighan2008 corpora, you can also make joint-sighan2008 as your $dataset.
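A joint corpus concatenates several datasets; as described in the paper, each sentence is wrapped with artificial tokens identifying its source criterion. A hypothetical sketch of that idea (not the repo's actual make_joint_corpus):

```python
def make_joint(lines_by_dataset):
    """Concatenate datasets into one joint corpus, wrapping every
    sentence with corpus-indicator tokens such as <pku> ... </pku>
    so the model knows which segmentation criterion applies."""
    joint = []
    for name, lines in sorted(lines_by_dataset.items()):
        for line in lines:
            joint.append('<{0}> {1} </{0}>'.format(name, line.strip()))
    return joint
```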

Finally, one command performs both training and testing on the fly:

./script/train.sh $dataset

Performance

sighan2005

(table image: results on the sighan2005 datasets)

sighan2008

(table image: results on the sighan2008 datasets)

10-in-1

Since the SIGHAN bakeoff 2008 datasets are proprietary and difficult to obtain, we decided to conduct additional experiments on more freely available datasets, so that the public can test and verify the efficiency of our method. We applied our solution to 6 additional freely available datasets together with the 4 sighan2005 datasets.

(table image: results on the 10-in-1 joint corpus)

Corpora

In this section, we briefly introduce the corpora used in this paper.

10 corpora in this repo

These 10 corpora come either from the official sighan2005 website, from open-source projects, or from researchers' homepages. Licenses are listed in the following table.

(table image: corpus licenses)

sighan2008

As the sighan2008 corpora are proprietary, we are unable to distribute them. If you have a legal copy, you can replicate our scores by following these instructions.

First, link the sighan2008 corpora into this project's data folder:

ln -s /path/to/your/sighan2008/data data/sighan2008

Then, use HanLP to convert Traditional Chinese to Simplified Chinese, as shown in the following Java snippet:

    import com.hankcs.hanlp.HanLP;
    import com.hankcs.hanlp.corpus.io.IOUtil;

    import java.io.*;

    // Read the UTF-16 gold file, convert each word to Simplified Chinese,
    // and write the result out as UTF-8 with one space between words.
    BufferedReader br = new BufferedReader(new InputStreamReader(new FileInputStream(
        "data/sighan2008/ckip_seg_truth&resource/ckip_truth_utf16.seg"
    ), "UTF-16"));
    BufferedWriter bw = IOUtil.newBufferedWriter(
        "data/sighan2008/ckip_seg_truth&resource/ckip_truth_utf8.seg");
    String line;
    while ((line = br.readLine()) != null)
    {
        for (String word : line.split("\\s"))
        {
            if (word.length() == 0) continue;
            bw.write(HanLP.convertToSimplifiedChinese(word));
            bw.write(" ");
        }
        bw.newLine();
    }
    br.close();
    bw.close();

You need to repeat this for the following 4 files:

  1. ckip_train_utf16.seg
  2. ckip_truth_utf16.seg
  3. cityu_train_utf16.seg
  4. cityu_truth_utf16.seg

Then, uncomment the following code in convert_corpus.py:

    # For researchers who have access to sighan2008 corpus, use official corpora please.
    print('Converting sighan2008 Simplified Chinese corpus')
    datasets = 'ctb', 'ckip', 'cityu', 'ncc', 'sxu'
    convert_all_sighan2008(datasets)
    print('Combining those 8 sighan corpora to one joint corpus')
    datasets = 'pku', 'msr', 'as', 'ctb', 'ckip', 'cityu', 'ncc', 'sxu'
    make_joint_corpus(datasets, 'joint-sighan2008')
    make_bmes('joint-sighan2008')

Finally, you are ready to go:

python3 convert_corpus.py
./script/make.sh joint-sighan2008
./script/train.sh joint-sighan2008

Acknowledgments

  • Thanks to the friends who helped us with the experiments.
  • Credit also goes to the generous researchers who shared their corpora with the public, as listed in the license table. Your datasets truly helped small groups (like ours) without any funding.
  • The model implementation is modified from a DyNet 1.x version by rguthrie3.
