chirpycardinal

Codebase for Chirpy Cardinal, Stanford's Alexa Prize socialbot.

Getting Started

  • If you'd like to run the bot locally, start here
  • To chat with chirpy on our web server, start here
  • For a general overview of the codebase, start here

How the code is organized

agent: When you run chirpycardinal, you will create an agent. Agents manage data storage, logging, user message input, bot message output, connections to remote modules, and calls to the handler. Three agents are provided:

  • local_agent.py: an agent that stores data and runs remote modules locally.
  • remote_non_persistent_agent.py: an agent that runs modules remotely, but stores data in memory.
  • remote_psql_persistent_agent.py: an agent that runs modules remotely and stores data in postgres. To use this agent, you will need to set up your own postgres instance.

servers: Contains the code needed to run chirpycardinal servers

  • servers/local/shell_chat.py: script to build docker modules locally and run chat in a loop.
  • servers/local/local_callable_manager.py: defines the LocalCallableManager class, which is used to run docker containers locally.
  • servers/local/local_callable_config.json: defines the ports, dockerfiles, and URLs associated with each container.
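
For illustration, here is a minimal sketch of how such a config might be loaded and consumed; the key names ("port", "dockerfile", "url") are assumptions based on the description above, not the actual schema of local_callable_config.json.

# Hypothetical sketch: load the callable config and inspect each module's
# settings. The key names are illustrative assumptions.
import json

with open("servers/local/local_callable_config.json") as f:
    callable_config = json.load(f)

for module_name, cfg in callable_config.items():
    # Each entry is assumed to describe one docker container
    print(module_name, cfg.get("port"), cfg.get("dockerfile"), cfg.get("url"))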

chirpy: This directory contains the bot’s response generators, remote modules, and dialog management. The core logic of the bot lives here, and the code in this directory is independent of which agent is used.

chirpy/annotators: When a user utterance comes in, all annotators are run on it and their results are stored in the state, so that they can be used by the response generators. Annotations include dialog act and user emotion, among others.

chirpy/core: The bot’s core logic components. Highlighted files are:

  • dialog_manager.py: contains the function get_response_and_prompt, which runs all response generators, ranks their responses, and returns the highest-ranking response and prompt; and the function execute_turn, which loads the RG states from the previous turn, updates the state based on the response and prompt chosen by get_response_and_prompt, and then returns the bot’s next utterance (see the sketch after this list)
  • handler.py: deserializes the state, runs the NLP pipeline, updates the state based on it, calls the dialog manager’s execute_turn, and then serializes the state
  • response_priority.py: defines which RGs have the highest priority for tiebreaking if multiple RGs return responses with the same confidence level
  • priority_ranking_strategy.py: logic for ranking responses and prompts
  • state.py: the State class defines what should be stored in each state and contains functions for serializing/deserializing the state
  • user_attributes.py: the UserAttributes class defines which user attributes should be recorded and contains functions for serializing/deserializing user attributes
  • regex: contains code for creating and testing regular expressions that can be used by the bot. New regexes should be added to templates.py
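
A simplified sketch of the flow described above, with condensed signatures; the real dialog_manager.py does more bookkeeping, and ranking_strategy here stands in for the logic in priority_ranking_strategy.py.

# Simplified sketch of get_response_and_prompt: run every RG, rank the
# candidate responses, then gather and rank prompts the same way.
def get_response_and_prompt(response_generators, state, ranking_strategy):
    # Collect each RG's candidate response for this turn
    responses = {rg.name: rg.get_response(state) for rg in response_generators}
    # Rank by priority, breaking ties with the order in response_priority.py
    best_response = ranking_strategy.rank_responses(responses)
    # Gather and rank candidate prompts
    prompts = {rg.name: rg.get_prompt(state) for rg in response_generators}
    best_prompt = ranking_strategy.rank_prompts(prompts)
    return best_response, best_prompt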

chirpy/response_generators: Contains all response generators used by the bot. More detail can be found in the Creating a new Response Generator section below.

docker: This is where the dockerfiles, configs, and lambda functions of each remote module are defined.

scrapers: Scrape data from Twitter and Reddit so that it can be stored in Elasticsearch.

test: Integration tests for chirpy. These can be run with the command sh local_test_integ.sh

wiki-es-dump: Processes and stores raw wiki files for use by the response generators. wiki-setup.md contains detailed instructions for this step.

Creating an Agent

Agents manage the bot’s data storage, logging, message input/output, and connections to remote modules. The provided agent class, local_agent.py, stores data locally and inputs/outputs messages as text. By defining your own agent, you can alter any of these components, for example storing data in a Redis instance, or inputting messages as audio.

Highlighted features of the LocalAgent are:

init function, which initializes:

  • last_state and current_state dicts. These are serialized/deserialized by the functions in chirpy/core/state.py. If you change their attributes in your agent, then you should also update state.py.
  • user_attributes dict, which contains:
    • user_id: unique identifier for the user
    • session_id: unique identifier for the current session
    • user_timezone: the user’s timezone (if available), which is used by response generators to create time-specific responses, e.g. “good morning!”
    • turn_num: the number of the current turn

persist function:

  • Manages storage of the state and user_attributes. If you want to store things non-locally, this is where you would make that change.

should_launch function:

  • Determines whether to launch the bot, for example based on specific commands.

should_end_session function:

  • Determines whether to end the conversation, which may also be based on specific commands or heuristics.

process_utterance function:

  • Retrieves the current state, previous state, and user attributes from your storage
  • Calls handler.execute() on the current state, previous state, and user attributes, which returns updated states and a response
  • Persists the updated states in your storage
  • Returns the response and current state
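
To make this concrete, here is a minimal skeleton of a custom agent that mirrors the functions described above, using Redis as an example non-local store. The class shape and signatures are assumptions; consult local_agent.py for the real interface.

# Illustrative agent skeleton; method names follow the description above,
# but the exact base class and signatures in the repo may differ.
class MyRedisAgent:
    def __init__(self, user_id, session_id):
        # Serialized/deserialized by the functions in chirpy/core/state.py
        self.last_state, self.current_state = {}, {}
        self.user_attributes = {
            "user_id": user_id,          # unique identifier for the user
            "session_id": session_id,    # unique identifier for this session
            "user_timezone": None,       # used for time-specific responses
            "turn_num": 0,               # number of the current turn
        }

    def persist(self, state, user_attributes):
        """Store state and user_attributes, e.g. in Redis instead of locally."""
        ...

    def should_launch(self, utterance):
        # Launch the bot on a specific command (example heuristic)
        return utterance.strip().lower() == "launch"

    def should_end_session(self, utterance):
        # End the conversation on a specific command (example heuristic)
        return utterance.strip().lower() == "stop"

    def process_utterance(self, utterance):
        # 1. Retrieve current state, previous state, and user attributes
        # 2. Call handler.execute() on them to get updated states + response
        # 3. Persist the updated states
        # 4. Return the response and current state
        ...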

Creating a new Response Generator

To create a new response generator, you will need to

  1. Define a new class for your response generator
  2. Add your response generator to the handler
  3. (optional) Structure dialogue using treelets

Defining a Response Generator class

You will need to create a new class for your response generator. To do this,

  1. Create a file my_new_response_generator.py in chirpy/response_generators which defines a MyNewResponseGenerator class
  2. Set the class’s name attribute to 'NEW_NAME'
  3. Define the following functions of your class (a skeleton is sketched after this list):
  • init_state (returns a State object): contains the state for your response generator, which stores information about the response generator, e.g. topics discussed
  • get_entity (returns an UpdateEntity object): used to override the entity linker in cases where the response generator has a better contextual understanding of what the new entity should be
  • get_response (returns a ResponseGeneratorResult): based on the user’s utterance, annotations, and the response generator’s state. If the response generator doesn’t have any suitable responses, this returns an emptyResult object
  • get_prompt (returns a PromptResult): based on the user’s utterance, annotations, and the response generator’s state. If the response generator doesn’t have any suitable prompts, this returns an emptyPrompt object
  • update_state_if_chosen: updates the response generator’s conditional state if the response generator is chosen, for example by adding its response to a list of questions asked
  • update_state_if_not_chosen: updates the response generator’s conditional state if the response generator was not chosen, for example by setting the current topic to None
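
Putting these steps together, a skeleton class might look like the following. The import path and exact signatures are assumptions based on the descriptions above; check an existing response generator in chirpy/response_generators for the actual interface.

# Skeleton response generator; signatures and import path are assumptions.
from chirpy.core.response_generator_datatypes import (
    ResponseGeneratorResult, PromptResult, UpdateEntity, emptyResult, emptyPrompt,
)

class MyNewResponseGenerator:
    name = 'NEW_NAME'

    def init_state(self):
        # State for this RG, e.g. topics discussed so far
        ...

    def get_entity(self, state) -> UpdateEntity:
        # Override the entity linker when this RG has better context
        ...

    def get_response(self, state) -> ResponseGeneratorResult:
        # Return emptyResult(...) when there is no suitable response
        ...

    def get_prompt(self, state) -> PromptResult:
        # Return emptyPrompt(...) when there is no suitable prompt
        ...

    def update_state_if_chosen(self, state, conditional_state):
        # e.g. add the chosen response to a list of questions asked
        ...

    def update_state_if_not_chosen(self, state, conditional_state):
        # e.g. reset the current topic to None
        ...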

Adding a Response Generator to the Handler

In order for your response generator to be called, it needs to be added to a) your handler and b) the response priority list. To do this,

  1. Add MyNewResponseGenerator to your handler’s list response_generator_classes in your agent. If you’re using the local agent, you would add this to local_agent.py
  2. Using the name you declared in your response generator class, set the following in response_priority.py:
  • TiebreakPriority: how your response generator should rank if other response generators return equally high-priority responses
  • FORCE_START_PROMPT_DIST, CURRENT_TOPIC_PROMPT_DIST, CONTEXTUAL_PROMPT_DIST, and GENERIC_PROMPT_DIST: these determine the likelihood of a response generator’s prompt being chosen for the given prompt types. For details about what the different response and prompt types mean, see response_priority.py. A sketch of both registration points appears after this list.
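
As a rough sketch, the two registration points might look like this; the existing entries are abbreviated and the numeric values shown are made up for illustration.

# (a) In your agent (e.g. local_agent.py): register the new RG class.
from enum import Enum
from chirpy.response_generators.my_new_response_generator import MyNewResponseGenerator

response_generator_classes = [
    # ... existing response generators ...
    MyNewResponseGenerator,
]

# (b) In chirpy/core/response_priority.py: give 'NEW_NAME' a tiebreak rank
# and a weight in each *_PROMPT_DIST distribution (illustrative values).
class TiebreakPriority(Enum):
    # ... existing entries ...
    NEW_NAME = 50

GENERIC_PROMPT_DIST = {
    # ... existing entries ...
    'NEW_NAME': 0.1,
}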

Using Treelets

If your response generator has scripted components, then you may want to use treelets, which handle the branching options of a scripted response generator. Based on a user’s response, one treelet can determine which treelet should handle the next turn; this choice is stored in the response generator’s conditional_state. To see an example of how this works in code, look at categories_response_generator.py, categories/treelets/introductory_treelet.py, and categories/treelets/handle_answer_treelet.py.
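
Here is a minimal sketch of the treelet pattern with hypothetical names and condensed signatures; the real treelets build ResponseGeneratorResult objects and record the next treelet in the RG’s conditional state.

# Two-treelet sketch: the first treelet asks a scripted question and names
# the treelet that should handle the user's answer on the next turn.
class IntroductoryTreelet:
    name = 'introductory_treelet'

    def get_response(self, state, utterance):
        response = "What's your favorite food?"
        next_treelet = 'handle_answer_treelet'  # saved in conditional_state
        return response, next_treelet

class HandleAnswerTreelet:
    name = 'handle_answer_treelet'

    def get_response(self, state, utterance):
        # Branch on the user's answer to pick the next scripted line
        response = "Oh, I love {} too!".format(utterance)
        return response, None  # None ends this scripted sequence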

Running Chirpy Locally

Clone Repository

git clone https://github.com/stanfordnlp/chirpycardinal.git

Set CHIRPY_HOME environment variable

  1. cd into the chirpycardinal directory
  2. Run pwd to get the absolute path to this directory, e.g. /Users/username/Documents/chirpycardinal
  3. Add the following 2 lines to ~/.bash_profile:
  • export CHIRPY_HOME=/Users/username/Documents/chirpycardinal
  • export PATH=$CHIRPY_HOME/bin:$PATH
  4. Run source ~/.bash_profile

Set up ElasticSearch Indices and Postgres database

  1. cd into wiki-es-dump/ where the below scripts are located
  2. Follow the instructions in wiki-setup.md to:
  • Install dependencies
  • Run scripts and set up the indices
  3. Set up the twitter opinions database (skip this step if you don't need the opinions response generator)

Configure credential environment variables

Configure the credentials for your ES index as environment variables.

Step 1: copy the following into your ~/.bash_profile:

export ES_PASSWORD=your_password
export ES_USER=your_username
export ES_REGION=your_region
export ES_HOST=your_host
export ES_SCHEME=https
export ES_PORT=your_port

Step 2: run source ~/.bash_profile

Replace the credential in chirpy/core/es_config.json

"url": your_es_url
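
To sanity-check the variables, here is a sketch that builds a client with the standard elasticsearch-py (7.x) package; chirpy’s own Elasticsearch wrapper may construct its client differently.

# Assumes `pip install elasticsearch` (7.x) and the ES_* variables above.
import os
from elasticsearch import Elasticsearch

es = Elasticsearch(
    [os.environ["ES_HOST"]],
    http_auth=(os.environ["ES_USER"], os.environ["ES_PASSWORD"]),
    scheme=os.environ["ES_SCHEME"],
    port=int(os.environ["ES_PORT"]),
)
print(es.ping())  # True if the credentials and host are correct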

Set up the chirpy environment

  1. Make a new conda env: conda create --name chirpy python=3.7
  2. Install pip3 v19.0 or higher
  3. cd into the chirpycardinal directory
  4. Run conda activate chirpy
  5. Run pip3 install -r requirements.txt

Install docker, pull images

Install Docker, then pull the images from our Docker Hub repositories:

docker pull openchirpy/questionclassifier
docker pull openchirpy/dialogact
docker pull openchirpy/g2p
docker pull openchirpy/stanfordnlp
docker pull openchirpy/corenlp
docker pull openchirpy/gpt2ed
docker pull openchirpy/convpara

These images contain the model files as well. The images are large and can take a while to download. We recommend allocating 24 GB of disk space to Docker (otherwise it will complain about the disk being full).

Run the text agent

Run python3 -m servers.local.shell_chat. To end your conversation, say “stop”. If the docker images don't exist (i.e. you didn't download them in the above step), the script will attempt to build them, which might take a while.

Building your own docker images

Depending on which docker module you want to rebuild, download one of the following models, then run the respective Dockerfile to build the image. Note that there are known issues with Python package versioning: Hugging Face transformers has introduced breaking changes since this code was written, so the code needs to be updated. That update will likely not happen immediately, but may come with the next release.

Download and store models

  1. Add a model/ directory to docker/dialogact, docker/emotionclassifier, docker/gpt2ed, and docker/questionclassifier
  2. Download the models, unzip them, and move them into the chirpycardinal repo as follows:
  • dialog-act.zip should go to docker/dialogact/model
  • emotion-classifier.zip should go to docker/emotionclassifier/model
  • gpt2ed.zip should go to docker/gpt2ed/model. Once unzipped, rename the folder to gpt2ed
  • question-classifier.zip should go to docker/questionclassifier/model

License

The code is licensed under GNU AGPLv3. There is an exception for currently participating Alexa Prize Teams to whom it is licensed under GNU GPLv3.
