  • Stars: 111
  • Rank: 304,328 (top 7%)
  • Language: Python
  • License: Other
  • Created: about 4 years ago
  • Updated: 8 months ago

Repository Details

[Deprecated] This library has been renamed to "Stanza". Latest development at: https://github.com/stanfordnlp/stanza

StanfordNLP: A Python NLP Library for Many Human Languages

⚠️ Note ⚠️

All development, issues, ongoing maintenance, and support have been moved to our new GitHub repository, as the toolkit has been renamed Stanza as of version 1.0.0. Please visit our new website for more information. You can still download stanfordnlp via pip, but newer versions of this package will be made available as stanza. This repository is kept for archival purposes.

The Stanford NLP Group's official Python NLP library. It contains packages for running our latest fully neural pipeline from the CoNLL 2018 Shared Task and for accessing the Java Stanford CoreNLP server. For detailed information please visit our official website.

References

If you use our neural pipeline including the tokenizer, the multi-word token expansion model, the lemmatizer, the POS/morphological features tagger, or the dependency parser in your research, please kindly cite our CoNLL 2018 Shared Task system description paper:

@inproceedings{qi2018universal,
 address = {Brussels, Belgium},
 author = {Qi, Peng  and  Dozat, Timothy  and  Zhang, Yuhao  and  Manning, Christopher D.},
 booktitle = {Proceedings of the {CoNLL} 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies},
 month = {October},
 pages = {160--170},
 publisher = {Association for Computational Linguistics},
 title = {Universal Dependency Parsing from Scratch},
 url = {https://nlp.stanford.edu/pubs/qi2018universal.pdf},
 year = {2018}
}

The PyTorch implementation of the neural pipeline in this repository is due to Peng Qi and Yuhao Zhang, with help from Tim Dozat and Jason Bolton.

This release is not the same as Stanford's CoNLL 2018 Shared Task system. The tokenizer, lemmatizer, morphological features tagger, and multi-word token expansion systems are a cleaned-up version of the shared task code, but in the competition we used a TensorFlow version of the tagger and parser by Tim Dozat, which has been approximately reproduced in PyTorch (though with a few deviations from the original) for this release.

If you use the CoreNLP server, please cite the CoreNLP software package and the respective modules as described here ("Citing Stanford CoreNLP in papers"). The CoreNLP client is mostly written by Arun Chaganty, and Jason Bolton spearheaded merging the two projects together.

Issues and Usage Q&A

To ask questions, report issues or request features, please use the GitHub Issue Tracker.

Setup

StanfordNLP supports Python 3.6 or later. We strongly recommend that you install StanfordNLP from PyPI. If you already have pip installed, simply run:

pip install stanfordnlp

This should also resolve all of the dependencies of StanfordNLP, for instance PyTorch 1.0.0 or above.

If you currently have a previous version of stanfordnlp installed, use:

pip install stanfordnlp -U

Alternatively, you can also install from the source of this git repository, which will give you more flexibility when developing on top of StanfordNLP and training your own models. For this option, run

git clone https://github.com/stanfordnlp/stanfordnlp.git
cd stanfordnlp
pip install -e .

Running StanfordNLP

Getting Started with the neural pipeline

To run your first StanfordNLP pipeline, simply follow these steps in your Python interactive interpreter:

>>> import stanfordnlp
>>> stanfordnlp.download('en')   # This downloads the English models for the neural pipeline
# IMPORTANT: The above line prompts you before downloading, which doesn't work well in a Jupyter notebook.
# To avoid a prompt when using notebooks, instead use: >>> stanfordnlp.download('en', force=True)
>>> nlp = stanfordnlp.Pipeline() # This sets up a default neural pipeline in English
>>> doc = nlp("Barack Obama was born in Hawaii.  He was elected president in 2008.")
>>> doc.sentences[0].print_dependencies()

The last command will print out the words in the first sentence of the input string (or Document, as it is represented in StanfordNLP), together with, for each word, the index of the word that governs it in the Universal Dependencies parse of that sentence (its "head") and the dependency relation between the two words. The output should look like:

('Barack', '4', 'nsubj:pass')
('Obama', '1', 'flat')
('was', '4', 'aux:pass')
('born', '0', 'root')
('in', '6', 'case')
('Hawaii', '4', 'obl')
('.', '4', 'punct')

Note: If you are running into issues like OSError: [Errno 22] Invalid argument, it's very likely that you are affected by a known Python issue, and we recommend upgrading to Python 3.6.8+ or 3.7.2+.
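The same information is also available programmatically on the parsed document. Below is a minimal sketch, assuming the Word attributes text, governor, and dependency_relation exposed by StanfordNLP's document objects:

>>> for word in doc.sentences[0].words:
...     # governor is the 1-based index of the head word in the sentence (0 denotes the root),
...     # and dependency_relation is the Universal Dependencies relation label
...     print(word.text, word.governor, word.dependency_relation)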

We also provide a multilingual demo script that demonstrates how to use StanfordNLP in languages other than English, for example traditional Chinese:

python demo/pipeline_demo.py -l zh

See our getting started guide for more details.
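A non-English pipeline can also be constructed directly in Python. Here is a minimal sketch for traditional Chinese, assuming the 'zh' shorthand resolves to the default Chinese models:

>>> import stanfordnlp
>>> stanfordnlp.download('zh')               # download the default Chinese models
>>> nlp = stanfordnlp.Pipeline(lang='zh')    # build a Chinese pipeline
>>> doc = nlp("達沃斯世界經濟論壇是每年全球政商界領袖聚在一起的年度盛事。")
>>> doc.sentences[0].print_dependencies()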

Access to Java Stanford CoreNLP Server

Aside from the neural pipeline, this project also includes an official wrapper for accessing the Java Stanford CoreNLP Server with Python code.

There are a few initial setup steps.

  • Download Stanford CoreNLP and models for the language you wish to use
  • Put the model jars in the distribution folder
  • Tell the python code where Stanford CoreNLP is located: export CORENLP_HOME=/path/to/stanford-corenlp-full-2018-10-05

We provide another demo script that shows how one can use the CoreNLP client and extract various annotations from it.
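Once CORENLP_HOME is set, the client can be used as a context manager that starts and stops the Java server for you. A minimal sketch (the annotator list and memory setting here are illustrative choices, not requirements):

>>> from stanfordnlp.server import CoreNLPClient
>>> with CoreNLPClient(annotators=['tokenize', 'ssplit', 'pos', 'lemma', 'ner'],
...                    timeout=30000, memory='4G') as client:
...     # annotate() sends the text to the Java server and returns a protobuf Document
...     ann = client.annotate("Chris Manning is a nice person.")
...     sentence = ann.sentence[0]
...     print([(token.word, token.pos) for token in sentence.token])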

Online Colab Notebooks

To get you started, we also provide interactive Jupyter notebooks in the demo folder. You can also open these notebooks and run them interactively on Google Colab. To view all available notebooks, follow these steps:

  • Go to the Google Colab website
  • Navigate to File -> Open notebook, and choose GitHub in the pop-up menu
  • Note that you do not need to give Colab access permission to your GitHub account
  • Type stanfordnlp/stanfordnlp in the search bar, and press Enter

Trained Models for the Neural Pipeline

We currently provide models for all of the treebanks in the CoNLL 2018 Shared Task. You can find instructions for downloading and using these models here.
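Models can be selected per treebank as well as per language. A minimal sketch, assuming the CoNLL 2018 treebank shorthand 'fr_gsd' for the French GSD models:

>>> import stanfordnlp
>>> stanfordnlp.download('fr_gsd')           # French models trained on the GSD treebank
>>> nlp = stanfordnlp.Pipeline(lang='fr', treebank='fr_gsd')
>>> doc = nlp("Emmanuel Macron est né à Amiens.")
>>> doc.sentences[0].print_dependencies()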

Batching To Maximize Pipeline Speed

To maximize speed, it is essential to run the pipeline on batches of documents; running it in a for loop, one sentence at a time, will be very slow. The best approach at this time is to concatenate documents together, with each document separated by a blank line (i.e., two line breaks \n\n). The tokenizer will recognize blank lines as sentence breaks. We are actively working on improving multi-document processing.
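A minimal sketch of this batching approach, joining several documents with blank lines and running the pipeline once over the combined text:

>>> import stanfordnlp
>>> nlp = stanfordnlp.Pipeline()
>>> documents = ["Barack Obama was born in Hawaii.", "He was elected president in 2008."]
# Separate documents with blank lines so the tokenizer treats them as distinct sections,
# then make a single pipeline call instead of one call per document.
>>> doc = nlp("\n\n".join(documents))
>>> print(len(doc.sentences))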

Training your own neural pipelines

All neural modules in this library, including the tokenizer, the multi-word token (MWT) expander, the POS/morphological features tagger, the lemmatizer and the dependency parser, can be trained with your own CoNLL-U format data. Currently, we do not support model training via the Pipeline interface. Therefore, to train your own models, you need to clone this git repository and set up from source.

For detailed step-by-step guidance on how to train and evaluate your own models, please visit our training documentation.

LICENSE

StanfordNLP is released under the Apache License, Version 2.0. See the LICENSE file for more details.

More Repositories

1. dspy - DSPy: The framework for programming—not prompting—foundation models (Python, 11,014 stars)
2. CoreNLP - CoreNLP: A Java suite of core NLP tools for tokenization, sentence segmentation, NER, parsing, coreference, sentiment analysis, etc. (Java, 9,470 stars)
3. stanza - Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages (Python, 7,059 stars)
4. GloVe - Software in C and data files for the popular GloVe model for distributed word representations, a.k.a. word vectors or embeddings (C, 6,705 stars)
5. cs224n-winter17-notes - Course notes for CS224N Winter17 (TeX, 1,579 stars)
6. treelstm - Tree-structured Long Short-Term Memory networks (http://arxiv.org/abs/1503.00075) (Lua, 878 stars)
7. pyreft - ReFT: Representation Finetuning for Language Models (Python, 687 stars)
8. python-stanford-corenlp - Python interface to CoreNLP using a bidirectional server-client interface. (Python, 513 stars)
9. string2string - String-to-String Algorithms for Natural Language Processing (Jupyter Notebook, 494 stars)
10. mac-network - Implementation for the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018) (Python, 487 stars)
11. pyvene - Stanford NLP Python Library for Understanding and Improving PyTorch Models via Interventions (Python, 479 stars)
12. phrasal - A large-scale statistical machine translation system written in Java. (Java, 207 stars)
13. spinn - SPINN (Stack-augmented Parser-Interpreter Neural Network): fast, batchable, context-aware TreeRNNs (Python, 205 stars)
14. coqa-baselines - The baselines used in the CoQA paper (Python, 174 stars)
15. cocoa - Framework for learning dialogue agents in a two-player game setting. (Python, 155 stars)
16. stanza-old - Stanford NLP group's shared Python tools. (Python, 141 stars)
17. chirpycardinal - Stanford's Alexa Prize socialbot (Python, 129 stars)
18. wge - Workflow-Guided Exploration: sample-efficient RL agent for web tasks (Python, 104 stars)
19. pdf-struct - Logical structure analysis for visually structured documents (Python, 63 stars)
20. cs224n-web - http://cs224n.stanford.edu (HTML, 62 stars)
21. edu-convokit - Edu-ConvoKit: An Open-Source Framework for Education Conversation Data (Jupyter Notebook, 43 stars)
22. ColBERT-QA - Code for Relevance-guided Supervision for OpenQA with ColBERT (TACL'21) (41 stars)
23. stanza-train - Model training tutorials for the Stanza Python NLP Library (Python, 37 stars)
24. phrasenode - Mapping natural language commands to web elements (Python, 37 stars)
25. color-describer - Code for Learning to Generate Compositional Color Descriptions (OpenEdge ABL, 27 stars)
26. contract-nli-bert - A baseline system for ContractNLI (https://stanfordnlp.github.io/contract-nli/) (Python, 25 stars)
27. python-corenlp-protobuf - Python bindings for Stanford CoreNLP's protobufs. (Python, 21 stars)
28. stanza-resources (21 stars)
29. miniwob-plusplus-demos - Demos for the MiniWoB++ benchmark (17 stars)
30. multi-distribution-retrieval - Code for our paper Resources and Evaluations for Multi-Distribution Dense Information Retrieval (Python, 13 stars)
31. huggingface-models - Scripts for pushing models to huggingface repos (Python, 11 stars)
32. sentiment-treebank - Updated version of SST (Python, 9 stars)
33. nlp-meetup-demo (Java, 8 stars)
34. plot-data - datasets for plotting (Jupyter Notebook, 7 stars)
35. en-worldwide-newswire - NER dataset built from foreign newswire (6 stars)
36. plot-interface - Web interface for the plotting project (JavaScript, 4 stars)
37. contract-nli - ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts (HTML, 4 stars)
38. pdf-struct-models - A repository for hosting models for https://github.com/stanfordnlp/pdf-struct (HTML, 2 stars)
39. wob-data - Data for QAWoB and FlightWoB web interaction benchmarks from the World of Bits paper (Shi et al., 2017). (Python, 2 stars)
40. pdf-struct-dataset - Dataset for pdf-struct (https://github.com/stanfordnlp/pdf-struct) (HTML, 1 star)
41. handparsed-treebank - Extra hand parsed data for training models (Perl, 1 star)
42. coqa - CoQA -- A Conversational Question Answering Challenge (Shell, 1 star)
43. chirpy-parlai-blenderbot-fork - A fork of ParlAI supporting Chirpy Cardinal's custom neural generator (Python, 1 star)
44. nn-depparser - A re-implementation of nndep using PyTorch. (Python, 1 star)