
Python Word Segmentation

WordSegment is an Apache2 licensed module for English word segmentation, written in pure-Python, and based on a trillion-word corpus.

Based on code from the chapter "Natural Language Corpus Data" by Peter Norvig from the book "Beautiful Data" (Segaran and Hammerbacher, 2009).

Data files are derived from the Google Web Trillion Word Corpus, as described by Thorsten Brants and Alex Franz, and distributed by the Linguistic Data Consortium. This module contains only a subset of that data. The unigram data includes only the most common 333,000 words. Similarly, bigram data includes only the most common 250,000 phrases. Every word and phrase is lowercased with punctuation removed.
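The scoring idea from Norvig's chapter can be sketched in a few lines: try every split point, score each candidate by the sum of per-word log-probabilities from a unigram table, and memoize over suffixes. The sketch below is illustrative only, with made-up counts and a hypothetical `toy_segment` name; the real module also uses bigram counts and bounds candidate word length.

```python
from functools import lru_cache
from math import log10

# Toy unigram counts standing in for the real 333,000-word table.
COUNTS = {'this': 5, 'is': 8, 'a': 9, 'test': 4}
TOTAL = sum(COUNTS.values())

def score(word):
    # Log-probability of a word; unseen words are penalized by length,
    # following the heuristic from Norvig's chapter.
    if word in COUNTS:
        return log10(COUNTS[word] / TOTAL)
    return log10(10.0 / (TOTAL * 10 ** len(word)))

@lru_cache(maxsize=None)
def toy_segment(text):
    # Try every split point, recurse on the remainder, and keep the
    # candidate whose words have the highest total log-probability.
    if not text:
        return ()
    return max(
        ((text[:i],) + toy_segment(text[i:]) for i in range(1, len(text) + 1)),
        key=lambda words: sum(score(w) for w in words),
    )

print(list(toy_segment('thisisatest')))  # ['this', 'is', 'a', 'test']
```

The memoization matters: without it the recursion revisits each suffix exponentially many times, while with it the work is quadratic in the text length.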

Features

  • Pure-Python
  • Fully documented
  • 100% Test Coverage
  • Includes unigram and bigram data
  • Command line interface for batch processing
  • Easy to hack (e.g. different scoring, new data, different language)
  • Developed on Python 2.7
  • Tested on CPython 2.6, 2.7, 3.2, 3.3, 3.4, 3.5, 3.6 and PyPy, PyPy3
  • Tested on Windows, Mac OS X, and Linux
  • Tested using Travis CI and AppVeyor CI

Quickstart

Installing WordSegment is simple with pip:

$ pip install wordsegment

You can access documentation in the interpreter with Python's built-in help function:

>>> import wordsegment
>>> help(wordsegment)

Tutorial

In your own Python programs, you'll mostly want to use segment to divide a phrase into a list of its parts:

>>> from wordsegment import load, segment
>>> load()
>>> segment('thisisatest')
['this', 'is', 'a', 'test']

The load function reads and parses the unigrams and bigrams data from disk. Loading the data only needs to be done once.

WordSegment also provides a command-line interface for batch processing. This interface accepts two arguments: in-file and out-file. Lines from in-file are iteratively segmented, joined by a space, and written to out-file. Input and output default to stdin and stdout respectively.

$ echo thisisatest | python -m wordsegment
this is a test

If you want to run WordSegment as a kind of server process, use Python's -u option for unbuffered output. You can also set PYTHONUNBUFFERED=1 in the environment.

>>> import subprocess as sp
>>> wordsegment = sp.Popen(
...     ['python', '-um', 'wordsegment'],
...     stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.STDOUT, text=True)
>>> wordsegment.stdin.write('thisisatest\n')
12
>>> wordsegment.stdin.flush()
>>> wordsegment.stdout.readline()
'this is a test\n'
>>> wordsegment.stdin.write('workswithotherlanguages\n')
24
>>> wordsegment.stdin.flush()
>>> wordsegment.stdout.readline()
'works with other languages\n'
>>> wordsegment.stdin.close()
>>> wordsegment.wait()  # Process exit code.
0

The maximum segmented word length is 24 characters. Neither the unigram nor the bigram data contain words exceeding that length. The corpus also excludes punctuation, and all letters are lowercased. Before segmenting text, clean is called to transform the input to a canonical form:

>>> from wordsegment import clean
>>> clean('She said, "Python rocks!"')
'shesaidpythonrocks'
>>> segment('She said, "Python rocks!"')
['she', 'said', 'python', 'rocks']

Sometimes it's interesting to explore the unigram and bigram counts themselves. These are stored in Python dictionaries mapping words to counts.

>>> import wordsegment as ws
>>> ws.load()
>>> ws.UNIGRAMS['the']
23135851162.0
>>> ws.UNIGRAMS['gray']
21424658.0
>>> ws.UNIGRAMS['grey']
18276942.0

Above we see that the spelling gray is more common than the spelling grey.

Bigram keys are two words joined by a space:

>>> import heapq
>>> from pprint import pprint
>>> from operator import itemgetter
>>> pprint(heapq.nlargest(10, ws.BIGRAMS.items(), itemgetter(1)))
[('of the', 2766332391.0),
 ('in the', 1628795324.0),
 ('to the', 1139248999.0),
 ('on the', 800328815.0),
 ('for the', 692874802.0),
 ('and the', 629726893.0),
 ('to be', 505148997.0),
 ('is a', 476718990.0),
 ('with the', 461331348.0),
 ('from the', 428303219.0)]

Some bigrams begin with <s>. This token marks the start of a sentence:

>>> ws.BIGRAMS['<s> where']
15419048.0
>>> ws.BIGRAMS['<s> what']
11779290.0
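As a sketch of why the sentence-start token is useful: it lets a bigram scorer condition the first word of a sentence on the sentence boundary rather than falling back to its bare unigram probability. The counts below are made up for illustration; the conditional-probability formula is the standard bigram estimate, not code from the module.

```python
# Made-up counts illustrating conditioning on the sentence-start token.
UNIGRAMS = {'<s>': 100.0, 'where': 40.0, 'what': 30.0}
BIGRAMS = {'<s> where': 15.0, '<s> what': 11.0}

def conditional(word, prev):
    # P(word | prev) = count('prev word') / count(prev); falls back
    # to zero when the bigram was never seen.
    return BIGRAMS.get(prev + ' ' + word, 0.0) / UNIGRAMS[prev]

print(conditional('where', '<s>'))  # 0.15
```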

The unigram and bigram data are stored in the wordsegment directory in the unigrams.txt and bigrams.txt files respectively.
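Assuming those files use a simple tab-separated "word<TAB>count" layout (an assumption worth checking against the files in your installation), loading a slice of them into a dictionary is straightforward:

```python
import io

# Inline stand-in for a few lines of unigrams.txt; the real file is
# assumed (not verified here) to be tab-separated "word<TAB>count".
sample = io.StringIO('the\t23135851162\nof\t13151942776\n')

unigrams = {}
for line in sample:
    word, count = line.split('\t')
    unigrams[word] = float(count)

print(unigrams['the'])  # 23135851162.0
```

The same pattern works for bigrams.txt, where the key before the tab is the two space-joined words.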

WordSegment License

Copyright 2018 Grant Jenks

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
