  • Stars: 173
  • Rank: 212,294 (top 5%)
  • Language: Python
  • License: Apache License 2.0
  • Created: over 7 years ago
  • Updated: over 6 years ago


Repository Details

Can neural networks transliterate Romaji into Japanese correctly?

Neural Japanese Transliteration—can you do better than SwiftKey™ Keyboard?

Abstract

In this project, I examine how well neural networks can convert Roman letters into Japanese script, i.e., Hiragana, Katakana, or Kanji. On 896 Japanese test sentences, the model outperforms the SwiftKey™ keyboard, a well-known multilingual smartphone keyboard, by a small margin in accuracy. It seems that neural networks can learn this task easily and quickly.

Requirements

  • NumPy >= 1.11.1
  • TensorFlow == 1.2
  • regex (supports convenient POSIX-style regular expressions)
  • janome (for morph analysis)
  • romkan (for converting kana to romaji)

Background

  • The modern Japanese writing system employs three scripts: Hiragana, Katakana, and Chinese characters (kanji in Japanese).
  • Hiragana and Katakana are phonetic, while Chinese characters are not.
  • In the digital environment, people mostly type the Roman alphabet (a.k.a. Romaji) to write Japanese, relying on the suggestions the transliteration engine returns. Therefore, how accurately an engine predicts the word(s) the user has in mind is crucial for a Japanese keyboard.
  • For example, when you type "nihongo", the engine shows 日本語 on the suggestion bar.

Problem Formulation

I frame the problem as a seq2seq task.

Inputs: nihongo。
Outputs: 日本語。
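Concretely, both sides can be handled at the character level. Below is a minimal sketch of how such input/output pairs might be turned into integer sequences; the vocabulary construction, special-token indices, and function names here are illustrative, not the repository's actual prepro.py code.

```python
# Build character vocabularies for a toy Romaji -> Japanese seq2seq pair.
# Index 0 is reserved for padding and 1 for end-of-sequence (illustrative choices).
pairs = [("nihongo。", "日本語。")]

def build_vocab(texts):
    chars = sorted({c for t in texts for c in t})
    return {c: i + 2 for i, c in enumerate(chars)}

src_vocab = build_vocab(s for s, _ in pairs)
tgt_vocab = build_vocab(t for _, t in pairs)

def encode(text, vocab, eos=1):
    # Map each character to its id and append the end-of-sequence marker.
    return [vocab[c] for c in text] + [eos]

x = encode("nihongo。", src_vocab)
y = encode("日本語。", tgt_vocab)
print(x, y)
```

The model then learns to map the source id sequence to the target id sequence token by token.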

Data

  • For training, the Leipzig Japanese Corpus was used.
  • For evaluation, 896 Japanese sentences were collected separately. See data/test.csv.

Model Architecture

I adopted the encoder and the first decoder architecture of Tacotron, a speech synthesis model.
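Roughly, the Tacotron encoder embeds input characters, passes them through a bottleneck "prenet", and then through a recurrent stack (the CBHG module in the paper). The following is a heavily simplified NumPy sketch of that data flow, with a single GRU standing in for the CBHG; all dimensions and weights are arbitrary placeholders, not the repository's actual networks.py.

```python
import numpy as np

rng = np.random.default_rng(0)
T, vocab, emb, hid = 8, 50, 16, 32            # toy sizes, not the real hyperparameters

ids = rng.integers(0, vocab, size=T)          # one input character-id sequence
E = rng.standard_normal((vocab, emb)) * 0.1   # character embedding table
x = E[ids]                                    # (T, emb)

# Prenet: two dense + ReLU layers (dropout omitted in this sketch)
W1 = rng.standard_normal((emb, hid)) * 0.1
W2 = rng.standard_normal((hid, hid)) * 0.1
h = np.maximum(x @ W1, 0)
h = np.maximum(h @ W2, 0)                     # (T, hid)

# One GRU layer standing in for the CBHG recurrent stack
Wz, Wr, Wn = (rng.standard_normal((hid, hid)) * 0.1 for _ in range(3))
Uz, Ur, Un = (rng.standard_normal((hid, hid)) * 0.1 for _ in range(3))
sigmoid = lambda a: 1 / (1 + np.exp(-a))

s = np.zeros(hid)
outs = []
for t in range(T):
    z = sigmoid(h[t] @ Wz + s @ Uz)           # update gate
    r = sigmoid(h[t] @ Wr + s @ Ur)           # reset gate
    n = np.tanh(h[t] @ Wn + (r * s) @ Un)     # candidate state
    s = (1 - z) * s + z * n
    outs.append(s)
memory = np.stack(outs)                       # (T, hid) encoder memory
print(memory.shape)
```

The decoder then attends over this `memory` to emit target characters one step at a time.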

Contents

  • hyperparams.py contains hyperparameters. You can change the value if necessary.
  • annotate.py makes Romaji-Japanese parallel sentences.
  • prepro.py defines and makes vocabulary and training data.
  • modules.py has building blocks for networks.
  • networks.py has encoding and decoding networks.
  • data_load.py covers some functions regarding data loading.
  • utils.py has utility functions.
  • train.py is about training.
  • eval.py is about evaluation.

Training

  • STEP 1. Download Leipzig Japanese Corpus and extract jpn_news_2005-2008_1M-sentences.txt to data/ folder.
  • STEP 2. Adjust hyperparameters in hyperparams.py if necessary.
  • STEP 3. Run python annotate.py.
  • STEP 4. Run python prepro.py. Or download the preprocessed files.
  • STEP 5. Run python train.py. Or download the pretrained files.

Testing

  • STEP 1. Run python eval.py.
  • STEP 2. Install the latest SwiftKey keyboard app and manually test it on the same sentences. (Don't worry, you don't have to; I've already done it.)

Results

The training curve looks like this.

The evaluation metric is CER (character error rate):

  • CER = edit distance / number of characters in the reference.
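A minimal reference implementation of character-level edit distance and CER (illustrative only; the repository computes this in eval.py, possibly with different details):

```python
def edit_distance(ref, hyp):
    # Levenshtein distance between two strings via dynamic programming,
    # keeping only the previous row: O(len(ref) * len(hyp)) time.
    prev = list(range(len(hyp) + 1))
    for i, rc in enumerate(ref, 1):
        curr = [i]
        for j, hc in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (rc != hc)))  # substitution
        prev = curr
    return prev[-1]

def cer(ref, hyp):
    return edit_distance(ref, hyp) / len(ref)

print(cer("日本語。", "日本誤。"))  # one substitution over four characters -> 0.25
```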

The following are the results after 13 epochs, or 79,898 global steps. Details are available in results/*.csv.

  Model                        CER
  Proposed (greedy decoding)   1595/12057 = 0.132
  Proposed (beam decoding)     1517/12057 = 0.125
  SwiftKey 6.4.8.57            1640/12057 = 0.136
