• Stars: 156
• Rank: 239,589 (top 5%)
• Language: Python
• License: MIT License
• Created: almost 9 years ago
• Updated: almost 8 years ago



Learning to Auto-Complete using RNN Language Models

Learning Python Code Suggestion with a Sparse Pointer Network

This repository contains the code used in the paper "Learning Python Code Suggestion with a Sparse Pointer Network".

Prerequisites

Generating the Corpus

Step 1: Cloning the Repos

To recreate the corpus used in the paper, run:

python3 github-scraper/scraper.py --mode=recreate --outdir=<PATH-TO-OUTPUT-DIR> --dbfile=/FULL/PATH/TO/pycodesuggest/data/cloned_repos.dat --githubuser=<GITHUB USERNAME>

Here, outdir is the path on your local machine where the repos will be cloned. Note that dbfile must be the full (absolute) path on your machine. You may be prompted for your GitHub password.


To obtain a fresh corpus based on a new search of GitHub, using the same criteria as the paper, run:

python3 github-scraper/scraper.py --mode=new --outdir=<PATH-TO-OUTPUT-DIR> --dbfile=cloned_repos.dat --githubuser=<GITHUB USERNAME>

Note that you may interrupt the process and continue where it left off later by providing the same dbfile.


A number of other parameters let you create your own custom corpus, for example by specifying the programming language or the search term used to query GitHub. Run python3 github-scraper/scraper.py -h for more information.
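The resume-on-interrupt behaviour described above can be sketched as follows. This is a minimal illustration, not the scraper's actual implementation; the helper names (pending_repos, clone_all) and the one-URL-per-line dbfile layout are assumptions.

```python
import pathlib
import subprocess

def pending_repos(repo_urls, dbfile):
    """Return the repos not yet recorded in dbfile (assumed one URL per line)."""
    db = pathlib.Path(dbfile)
    done = set(db.read_text().splitlines()) if db.exists() else set()
    return [url for url in repo_urls if url not in done]

def clone_all(repo_urls, outdir, dbfile):
    """Clone each outstanding repo, appending to dbfile after each success
    so an interrupted run can continue where it left off."""
    for url in pending_repos(repo_urls, dbfile):
        name = url.rstrip("/").split("/")[-1]
        target = pathlib.Path(outdir) / name
        subprocess.run(["git", "clone", url, str(target)], check=True)
        with open(dbfile, "a") as f:  # record success only after the clone finishes
            f.write(url + "\n")
```

Because the dbfile is only appended to after a successful clone, re-running with the same dbfile simply skips the repos that are already present.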

Step 2 (optional): Remove unnecessary files

Linux/macOS: run the following command in your output directory to remove non-Python files:

find . -type f ! -name "*.py" -delete
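For a portable alternative to the find command above (e.g. on Windows), the same cleanup can be done in Python; this is a hypothetical helper, not part of the repository:

```python
import os

def delete_non_python(root):
    """Delete every regular file under root that does not end in .py,
    mirroring: find . -type f ! -name "*.py" -delete"""
    removed = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".py"):
                os.remove(os.path.join(dirpath, name))
                removed += 1
    return removed  # number of files deleted
```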

Step 3: Normalisation

Run the following command to normalise all files with a .py extension by providing the output directory of step 1 as the path. The normalised files will be written to a new directory with "normalised" appended to the path.

python3 github-scraper/normalisation.py --path=<PATH TO DOWNLOADED CORPUS>

Files which can't be parsed as valid Python 3 will be ignored. The list of successfully processed files is written to PATH/processed.txt, which also allows the normalisation to continue if interrupted.
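The two behaviours described above (skipping unparseable files, and resuming via processed.txt) can be sketched like this. The function names are illustrative assumptions, not the script's actual API:

```python
import ast
import pathlib

def parseable_py3(path):
    """True if the file parses as valid Python 3 source."""
    try:
        ast.parse(pathlib.Path(path).read_text(encoding="utf-8", errors="replace"))
        return True
    except SyntaxError:
        return False

def files_to_normalise(corpus):
    """Valid .py files under corpus not yet listed in processed.txt,
    so an interrupted normalisation run can be resumed."""
    log = pathlib.Path(corpus) / "processed.txt"
    done = set(log.read_text().splitlines()) if log.exists() else set()
    return [p for p in sorted(pathlib.Path(corpus).rglob("*.py"))
            if str(p) not in done and parseable_py3(p)]
```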

Step 4: Split into train/dev/test

To use the same train/dev/test split as used in the paper, copy the files train_files.txt, valid_files.txt and test_files.txt from the data directory into the downloaded corpus and normalised corpus directories.


To generate a new split, run the following command, which writes the lists of training files (train_files.txt), validation files (valid_files.txt) and test files (test_files.txt) in the ratio 0.5/0.2/0.3. Use the normalised path from the previous step; this ensures the file lists apply to both the normalised and un-normalised data sets.

python3 github-scraper/processFiles.py --path=<PATH TO NORMALISED CORPUS>

Then copy the three generated lists to the original, un-normalised path.
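A 0.5/0.2/0.3 split of a file list can be sketched in a few lines; this is an illustration of the ratio above, not the processFiles.py implementation (the seed parameter is an assumption for reproducibility):

```python
import random

def split_files(file_list, seed=42):
    """Shuffle and split file paths into train/valid/test lists
    in the ratio 0.5/0.2/0.3."""
    files = list(file_list)
    random.Random(seed).shuffle(files)  # fixed seed => reproducible split
    n = len(files)
    n_train = int(0.5 * n)
    n_valid = int(0.2 * n)
    return (files[:n_train],
            files[n_train:n_train + n_valid],
            files[n_train + n_valid:])
```

Each returned list would then be written to train_files.txt, valid_files.txt and test_files.txt respectively.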

Citing

If you make use of this code or the Python corpus, please cite:

@article{pycodesuggest,
  author    = {Avishkar Bhoopchand and
               Tim Rockt{\"{a}}schel and
               Earl Barr and
               Sebastian Riedel},
  title     = {Learning Python Code Suggestion with a Sparse Pointer Network},
  year      = {2016},
  url       = {http://arxiv.org/abs/1611.08307}
}

More Repositories

1. stat-nlp-book: Interactive Lecture Notes, Slides and Exercises for Statistical NLP (Jupyter Notebook, 269 stars)
2. egal: easy drawing in jupyter (JavaScript, 257 stars)
3. jack: Jack the Reader (Python, 257 stars)
4. torch-imle: Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions (Python, 257 stars)
5. emoji2vec: emoji2vec: Learning Emoji Representations from their Description (Jupyter Notebook, 257 stars)
6. fakenewschallenge: UCL Machine Reading - FNC-1 Submission (Python, 166 stars)
7. cqd: Continuous Query Decomposition for Complex Query Answering in Incomplete Knowledge Graphs (Python, 95 stars)
8. ntp: End-to-End Differentiable Proving (NewLisp, 88 stars)
9. d4: Differentiable Forth Interpreter (Python, 66 stars)
10. low-rank-logic: Code for Injecting Logical Background Knowledge into Embeddings for Relation Extraction (Scala, 65 stars)
11. inferbeddings: Injecting Background Knowledge in Neural Models via Adversarial Set Regularisation (Python, 59 stars)
12. gntp (Python, 57 stars)
13. ctp: Conditional Theorem Proving (Python, 51 stars)
14. EMAT: Efficient Memory-Augmented Transformers (Python, 34 stars)
15. stat-nlp-book-scala: Interactive book on Statistical NLP (Scala, 32 stars)
16. simpleNumericalFactChecker: Fact checker for simple claims about statistical properties (Python, 26 stars)
17. adversarial-nli: Code and data for the CoNLL 2018 paper "Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge" (Python, 25 stars)
18. acl2015tutorial: Moro files for the ACL 2015 Tutorial on Matrix and Tensor Factorization Methods for Natural Language Processing (Scala, 20 stars)
19. numerate-language-models (Python, 19 stars)
20. fever: FEVER Workshop Shared-Task (Python, 16 stars)
21. APE: Adaptive Passage Encoder for Open-domain Question Answering (Python, 15 stars)
22. stat-nlp-course: Code for the UCL Statistical NLP course (Scala, 11 stars)
23. newshack: BBC Newshack code (Scala, 1 star)
24. eqa-tools: Tools for Exam Question Answering (Python, 1 star)
25. softconf-start-sync: Softconf START sync, tool for Google Sheets (JavaScript, 1 star)
26. bibtex: BibTeX files (TeX, 1 star)