• Stars: 408
• Rank: 105,946 (Top 3%)
• Language: Jupyter Notebook
• Created: almost 6 years ago
• Updated: almost 3 years ago

Repository Details

Course materials for Applied Natural Language Processing (Spring 2019). Syllabus: http://people.ischool.berkeley.edu/~dbamman/info256.html

Notebooks
1.words/EvaluateTokenizationForSentiment.ipynb The impact of tokenization choices on sentiment classification.
1.words/ExploreTokenization.ipynb Different methods for tokenizing texts: whitespace, NLTK, spaCy, regex (sketch below)
1.words/TokenizePrintedBooks.ipynb Design a better tokenizer for printed books
2.distinctive_terms/ChiSquare.ipynb Find distinctive terms using the chi-square test (sketch below)
2.distinctive_terms/CompareCorpora.ipynb Find distinctive terms using the Mann-Whitney rank sums test
3.dictionaries/DictionaryTimeSeries.ipynb Plot sentiment over time using human-defined dictionaries
4.classification/CheckData_TODO.ipynb Gather data for classification
4.classification/FeatureExploration_TODO.ipynb Feature engineering for text classification
4.classification/FeatureWeights_TODO.ipynb Analyze feature weights for text classification
4.classification/Hyperparameters_TODO.ipynb Explore the effect of hyperparameter choices on classification accuracy
5.text_regression/Regularization.ipynb Linear regression with L1/L2 regularization for box office prediction
6.tests/BootstrapConfidenceIntervals.ipynb Estimate confidence intervals with the bootstrap (sketch below)
6.tests/ParametricTest.ipynb Hypothesis testing with parametric (normal) tests
6.tests/PermutationTest.ipynb Hypothesis testing with non-parametric (permutation) tests
7.embeddings/DistributionalSimilarity.ipynb Explore the distributional hypothesis to build high-dimensional, sparse representations for words
7.embeddings/TFIDF.ipynb Explore the distributional hypothesis to build high-dimensional, sparse representations for words, with TF-IDF scaling (sketch below)
7.embeddings/TurneyLittman2003.ipynb Use word embeddings to implement the method of Turney and Littman (2003) for calculating the semantic orientation of a term, defined by its proximity to terms in two polar dictionaries (sketch below)
7.embeddings/WordEmbeddings.ipynb Explore word embeddings using Gensim
8.neural/MLP.ipynb MLP for text classification (Keras; sketch below)
8.neural/ExploreMLP.ipynb Explore MLPs for your data (Keras)
8.neural/CNN.ipynb CNN for text classification (Keras)
8.neural/LSTM.ipynb LSTM for text classification (Keras)
8.neural/Attention.ipynb Attention over word embeddings for document classification (Keras)
8.neural/AttentionLSTM.ipynb Attention over LSTM output for text classification (Keras)
9.annotation/IAAMetrics.ipynb Calculate inter-annotator agreement with Cohen's kappa and Krippendorff's alpha (sketch below)
10.wordnet/ExploreWordNet.ipynb Explore WordNet synsets with a simple method for finding all mentions in a text of all hyponyms of a given node in the WordNet hierarchy, e.g., finding all buildings in a text (sketch below)
10.wordnet/Lesk.ipynb Implement the Lesk algorithm for WSD using word embeddings
10.wordnet/Retrofitting.ipynb Explore retrofitted word vectors
11.pos/KeyphraseExtraction.ipynb Keyphrase extraction with tf-idf and POS filtering
11.pos/POS_tagging.ipynb Understand the Penn Treebank POS tags through tagged texts
12.ner/ExtractingSocialNetworks.ipynb Extract social networks from literary texts
12.ner/SequenceLabelingBiLSTM.ipynb BiLSTM + sequence labeling for Twitter NER
12.ner/ToponymResolution.ipynb Extract place names from text, geolocate them, and visualize them on a map
13.mwe/JustesonKatz95.ipynb Implement Justeson and Katz (1995) for identifying MWEs using POS tag patterns
14.syntax/SyntacticRelations.ipynb Explore dependency parsing by identifying the actions and objects that are characteristically associated with male and female characters.
15.coref/CorefSetup.ipynb Install neuralcoref for coreference resolution
15.coref/ExtractTimeline.ipynb Use coreference resolution for the task of timeline generation: for a given biography on Wikipedia, can you extract all of the events associated with the people mentioned and create one timeline for each person?
16.ie/DependencyPatterns.ipynb Measure common dependency paths between two entities that hold a given relation to each other
16.ie/EntityLinking.ipynb Explore named entity disambiguation and entity linking to Wikipedia pages
17.clustering/TopicModeling_TODO.ipynb Explore topic modeling to discover broad themes in a collection of movie summaries.
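
The sketches below illustrate a few of the techniques these notebooks cover; the sample data, model names, and library choices are illustrative assumptions, not code taken from the course materials. First, the tokenizer comparison from 1.words/ExploreTokenization.ipynb, assuming NLTK's punkt models and spaCy's en_core_web_sm model are available:

```python
# Compare four ways of tokenizing the same sentence (sample text is invented).
import re

import nltk
import spacy

text = "Mr. O'Neill doesn't like the film, but he can't say why."

# 1. Whitespace tokenization: split on runs of whitespace only.
whitespace_tokens = text.split()

# 2. Regex tokenization: keep word characters together, split punctuation off.
regex_tokens = re.findall(r"\w+|[^\w\s]", text)

# 3. NLTK's word tokenizer (requires the punkt models).
nltk.download("punkt", quiet=True)
nltk_tokens = nltk.word_tokenize(text)

# 4. spaCy tokenization (requires: python -m spacy download en_core_web_sm).
nlp = spacy.load("en_core_web_sm")
spacy_tokens = [token.text for token in nlp(text)]

for name, tokens in [("whitespace", whitespace_tokens),
                     ("regex", regex_tokens),
                     ("NLTK", nltk_tokens),
                     ("spaCy", spacy_tokens)]:
    print(f"{name:>10}: {tokens}")
```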
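A minimal sketch of chi-square term scoring in the spirit of 2.distinctive_terms/ChiSquare.ipynb, using scipy on two invented toy corpora:

```python
from collections import Counter

from scipy.stats import chi2_contingency

# Two toy "corpora" standing in for, e.g., positive vs. negative reviews.
corpus_a = "the movie was great great fun and the acting was great".split()
corpus_b = "the movie was dull and the plot was a total mess".split()

counts_a, counts_b = Counter(corpus_a), Counter(corpus_b)
total_a, total_b = sum(counts_a.values()), sum(counts_b.values())

def chi_square(term):
    """Chi-square statistic for the 2x2 table: term vs. other terms, corpus A vs. B."""
    table = [[counts_a[term], total_a - counts_a[term]],
             [counts_b[term], total_b - counts_b[term]]]
    stat, p_value, _, _ = chi2_contingency(table)
    return stat, p_value

# Rank the vocabulary by how strongly each term separates the two corpora.
vocab = sorted(set(counts_a) | set(counts_b))
ranked = sorted(vocab, key=lambda t: chi_square(t)[0], reverse=True)
for term in ranked[:5]:
    stat, p_value = chi_square(term)
    print(f"{term:>10}  chi2={stat:.2f}  p={p_value:.3f}")
```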
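A minimal sketch of a bootstrap confidence interval for classifier accuracy, as in 6.tests/BootstrapConfidenceIntervals.ipynb; the gold labels and predictions are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend we have gold labels and model predictions for 500 test documents.
gold = rng.integers(0, 2, size=500)
pred = np.where(rng.random(500) < 0.8, gold, 1 - gold)   # roughly 80% accurate

B = 10_000
n = len(gold)
accuracies = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)          # resample documents with replacement
    accuracies[b] = np.mean(gold[idx] == pred[idx])

lower, upper = np.percentile(accuracies, [2.5, 97.5])
print(f"accuracy = {np.mean(gold == pred):.3f}, 95% CI = [{lower:.3f}, {upper:.3f}]")
```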
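A minimal sketch of sparse TF-IDF document vectors compared with cosine similarity, in the spirit of 7.embeddings/TFIDF.ipynb; scikit-learn stands in for any from-scratch construction the notebook may use:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy documents: the first two should come out more similar to each other.
docs = [
    "the hobbit is a fantasy novel about a quest",
    "a fantasy quest across middle earth",
    "a statistical analysis of box office revenue",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)   # sparse matrix: documents x vocabulary

print(cosine_similarity(X))          # pairwise document similarities
```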
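A minimal sketch of the Turney and Littman (2003) semantic-orientation idea behind 7.embeddings/TurneyLittman2003.ipynb: score a term by its embedding similarity to positive seed words minus its similarity to negative seed words. The pretrained GloVe model named here and the seed lists are assumptions; the notebook may use different vectors and dictionaries:

```python
import gensim.downloader as api

# Small pretrained GloVe vectors from gensim-data (downloads on first use).
vectors = api.load("glove-wiki-gigaword-100")

# Seed ("paradigm") words in the style of Turney and Littman (2003).
positive_seeds = ["good", "nice", "excellent", "positive", "fortunate", "correct", "superior"]
negative_seeds = ["bad", "nasty", "poor", "negative", "unfortunate", "wrong", "inferior"]

def semantic_orientation(term):
    """Total similarity to positive seeds minus total similarity to negative seeds."""
    pos = sum(vectors.similarity(term, s) for s in positive_seeds)
    neg = sum(vectors.similarity(term, s) for s in negative_seeds)
    return pos - neg

for term in ["wonderful", "mediocre", "terrible"]:
    print(f"{term:>10}: {semantic_orientation(term):+.3f}")
```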
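A minimal sketch of an MLP text classifier in Keras, in the spirit of 8.neural/MLP.ipynb; the tiny dataset, bag-of-words features, and hyperparameters are placeholders:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from tensorflow import keras

texts = ["great movie", "wonderful acting", "terrible plot", "boring and dull"]
labels = np.array([1, 1, 0, 0])

# Binary bag-of-words features (dense, since the toy vocabulary is tiny).
X = CountVectorizer(binary=True).fit_transform(texts).toarray().astype("float32")

# One hidden layer followed by a sigmoid output for binary classification.
model = keras.Sequential([
    keras.layers.Input(shape=(X.shape[1],)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=30, verbose=0)
print(model.predict(X, verbose=0).round(2))
```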
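A minimal sketch of Cohen's kappa for two annotators, as in 9.annotation/IAAMetrics.ipynb (Krippendorff's alpha would need a separate library); the label sequences are invented:

```python
from sklearn.metrics import cohen_kappa_score

# Labels assigned by two annotators to the same eight items.
annotator_1 = ["pos", "pos", "neg", "neu", "pos", "neg", "neg", "pos"]
annotator_2 = ["pos", "neg", "neg", "neu", "pos", "neg", "pos", "pos"]

# Kappa corrects raw agreement for the agreement expected by chance.
print(f"Cohen's kappa = {cohen_kappa_score(annotator_1, annotator_2):.3f}")
```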
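A minimal sketch of the WordNet hyponym lookup described for 10.wordnet/ExploreWordNet.ipynb: gather every lemma under a target synset (here building.n.01) and scan a text for those words. The sample sentence is illustrative:

```python
import re

import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

def lemmas_under(synset):
    """Lemma names of a synset and of all of its (transitive) hyponyms."""
    words = {l.name().replace("_", " ").lower() for l in synset.lemmas()}
    for hyponym in synset.closure(lambda s: s.hyponyms()):
        words.update(l.name().replace("_", " ").lower() for l in hyponym.lemmas())
    return words

building_words = lemmas_under(wn.synset("building.n.01"))

text = "They walked past the library and the old church toward the farmhouse."
tokens = re.findall(r"\w+", text.lower())
print([t for t in tokens if t in building_words])
```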

More Repositories

1. litbank (Python, 341 stars)
   Annotated dataset of 100 works of fiction to support tasks in natural language processing and the computational humanities.
2. book-nlp (Java, 309 stars)
   Natural language processing pipeline for book-length documents (archival Java version; for current Python version, see: https://github.com/booknlp/booknlp)
3. latin-bert (Shell, 56 stars)
   Latin BERT
4. anlp21 (Jupyter Notebook, 55 stars)
   Data and code to support "Applied Natural Language Processing" (INFO 256, Fall 2021, UC Berkeley)
5. characterRelations (32 stars)
6. geoSGLM (Java, 22 stars)
   Code for learning geographically-informed word embeddings
7. anlp23 (Jupyter Notebook, 16 stars)
   Data and code to support "Applied Natural Language Processing" (INFO 256, Fall 2023, UC Berkeley)
8. akkadian-morph-analyzer (Python, 14 stars)
   Morphological Analyzer for Akkadian
9. ACL2013_Personas (Java, 14 stars)
10. book-segmentation (Python, 13 stars)
   Labeled segmentation for the document structure of printed books
11. ACL2019-literary-events (Python, 13 stars)
12. lrec2020-coref (Python, 10 stars)
   Code and data to support Bamman et al. (2020), "A Dataset of Literary Coreference" (LREC)
13. nlp23 (Jupyter Notebook, 9 stars)
14. nlp22 (Jupyter Notebook, 8 stars)
   Data and code to support INFO 159/259 (Natural Language Processing), Spring 2022
15. comphumF20 (Jupyter Notebook, 8 stars)
16. NAACL2019-literary-entities (Python, 8 stars)
   Code to support Bamman et al. (2019), "An Annotated Dataset of Literary Entities" (NAACL)
17. anlp24 (Jupyter Notebook, 6 stars)
   Data and code to support "Applied Natural Language Processing" (INFO 256, Fall 2024, UC Berkeley)
18. jcdl2017 (5 stars)
   Manually annotated dates of first publication for 2,706 books in the HathiTrust Digital Library
19. dds (Jupyter Notebook, 5 stars)
   Material for "Deconstructing Data Science"
20. nlp21 (Jupyter Notebook, 3 stars)
   Code and data to support INFO 159/259 (Natural Language Processing)
21. latex (3 stars)
   LaTeX examples
22. cl-coref-annotator (Python, 3 stars)
   Command-line tool for coreference annotation
23. latin-texts (2 stars)