• Stars: 2
• Language: Python
• Created over 6 years ago
• Updated over 4 years ago

Reviews

There are no reviews yet.

Repository Details

A Python-based chatbot built on a deep seq2seq model and trained to talk and interact like a friend. The system uses an encoder-decoder architecture in which each block is an LSTM. The models were trained on the Movie Dialog dataset, and the end product is an interactive Python app that can hold a good conversation with a human.
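
As an illustration only, here is a minimal Keras sketch of such an encoder-decoder; the vocabulary size, embedding width and hidden size are assumed values rather than the repository's actual configuration.

from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

vocab_size, embed_dim, hidden_dim = 10000, 128, 256  # assumed sizes, not the repo's

# Encoder: embed the input utterance and keep only the final LSTM states.
enc_in = Input(shape=(None,))
enc_emb = Embedding(vocab_size, embed_dim)(enc_in)
_, state_h, state_c = LSTM(hidden_dim, return_state=True)(enc_emb)

# Decoder: generate the reply token by token, conditioned on the encoder states.
dec_in = Input(shape=(None,))
dec_emb = Embedding(vocab_size, embed_dim)(dec_in)
dec_seq, _, _ = LSTM(hidden_dim, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
word_probs = Dense(vocab_size, activation="softmax")(dec_seq)

chatbot = Model([enc_in, dec_in], word_probs)
chatbot.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# chatbot.fit([questions, replies_shifted_in], replies_out, ...) with teacher forcing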

More Repositories

1

RL-VRP-PtrNtwrk

Reinforcement Learning for Solving the Vehicle Routing Problem
Python · 62 stars
2

Neural-Architecture-Search-using-Reinforcement-Learning

An implementation of neural architecture search using the REINFORCE algorithm. A recurrent network generates the model descriptions of neural networks, and this RNN is trained with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. The algorithm is tested on the CIFAR-10 dataset. The project is inspired by the paper "Neural Architecture Search with Reinforcement Learning" by Barret Zoph et al. from Google Brain. A simplified sketch of the controller update follows this entry.
Python · 6 stars
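
As a rough illustration, the following simplified REINFORCE update trains a per-layer softmax controller instead of the repository's RNN controller; train_and_evaluate is a hypothetical stand-in for training a sampled child network on CIFAR-10 and returning its validation accuracy.

import numpy as np

rng = np.random.default_rng(0)
choices = [16, 32, 64]        # hypothetical filter-size options per layer
num_layers = 3
logits = np.zeros((num_layers, len(choices)))  # controller parameters
baseline, lr = 0.0, 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def train_and_evaluate(architecture):
    # Placeholder: a real run would train the sampled child network on
    # CIFAR-10 and return its validation accuracy.
    return rng.uniform(0.5, 0.9)

for step in range(100):
    probs = np.array([softmax(row) for row in logits])
    actions = [rng.choice(len(choices), p=p) for p in probs]
    reward = train_and_evaluate([choices[a] for a in actions])
    baseline = 0.9 * baseline + 0.1 * reward        # moving-average baseline
    for layer, a in enumerate(actions):
        grad = -probs[layer]
        grad[a] += 1.0                              # d log pi(a) / d logits
        logits[layer] += lr * (reward - baseline) * grad
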
3

3D-Shape-Generation-using-3D-GANS

An implementation of the paper "Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling" by Wu et al., presented at NIPS 2016. The paper introduces 3D GANs, which leverage volumetric convolutional networks and vanilla GANs to produce 3D objects from a probabilistic latent space. This implementation uses Python and the Keras framework to build the 3D GAN architecture; a sketch of such a generator follows this entry.
Python · 5 stars
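
For illustration, a 3D-GAN generator of this kind could be sketched in Keras as below; the latent size and layer widths are assumptions rather than the repository's exact configuration, and the discriminator and training loop are omitted.

from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 200  # assumed size of the probabilistic latent space

# Generator: latent vector -> 64x64x64 voxel occupancy grid in [0, 1].
generator = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(4 * 4 * 4 * 256),
    layers.Reshape((4, 4, 4, 256)),
    layers.Conv3DTranspose(128, 4, strides=2, padding="same", activation="relu"),
    layers.Conv3DTranspose(64, 4, strides=2, padding="same", activation="relu"),
    layers.Conv3DTranspose(32, 4, strides=2, padding="same", activation="relu"),
    layers.Conv3DTranspose(1, 4, strides=2, padding="same", activation="sigmoid"),
])
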
4

Self-Critical-Sequential-training-with-RL-for-chatbots

A chatbot implemented as a seq2seq model and trained with the cross-entropy method, with its performance improved by sequence-level training using the REINFORCE algorithm. In order to apply the REINFORCE algorithm (Williams, 1992; Zaremba & Sutskever, 2015) to the problem of sequence generation, we cast the problem in the reinforcement learning (RL) framework (Sutton & Barto, 1988). Our generative model (the RNN) can be viewed as an agent that interacts with the external environment (the words and the context vector it sees as input at every time step). The parameters of this agent define a policy whose execution results in the agent picking an action; in the sequence generation setting, an action is predicting the next word in the sequence at each time step. After taking an action, the agent updates its internal state (the hidden units of the RNN). Once the agent has reached the end of a sequence, it observes a reward. Any reward function can be chosen; here we use BLEU (Papineni et al., 2002) and ROUGE-2 (Lin & Hovy, 2003), since these are the metrics used at test time. A sketch of the resulting loss follows this entry.
2 stars
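
As a sketch only, the REINFORCE surrogate loss with a self-critical (greedy-decode) baseline and a BLEU reward could look like the following; the function name and inputs are hypothetical and do not come from the repository.

import numpy as np
from nltk.translate.bleu_score import sentence_bleu

def reinforce_loss(sampled_tokens, sampled_log_probs, greedy_tokens, reference_tokens):
    """Self-critical REINFORCE surrogate loss for one generated reply.

    sampled_tokens / greedy_tokens: word lists decoded by sampling / greedy search.
    sampled_log_probs: log-probability the model assigned to each sampled word.
    reference_tokens: the ground-truth reply from the dialogue corpus.
    """
    reward = sentence_bleu([reference_tokens], sampled_tokens)    # reward of the sample
    baseline = sentence_bleu([reference_tokens], greedy_tokens)   # greedy baseline
    advantage = reward - baseline
    # Minimising this pushes up the log-probability of the sampled words
    # whenever the sampled reply beats the greedy one.
    return -advantage * float(np.sum(sampled_log_probs))
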
5

Predicting-Future-Stock-Prices-using-Actor-Critic-Method

An implementation of a stock-trading bot using an actor-critic algorithm. The trading environment is modelled as an MDP whose state is the recent stock values and whose actions are HOLD, SELL or BUY. The agent is trained to maximise overall revenue in the simulated trading environment; a one-step actor-critic update is sketched after this entry.
Python · 1 star
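
A minimal one-step advantage actor-critic update of this kind is sketched below, written in PyTorch purely for brevity; the network sizes, state encoding and reward definition are assumptions, not the repository's code.

import torch
import torch.nn as nn

ACTIONS = ["HOLD", "SELL", "BUY"]
state_dim = 10  # e.g. a window of recent prices (assumed)

actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, len(ACTIONS)))
critic = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 1))
optim = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

def update(state, action, reward, next_state, gamma=0.99):
    """One-step advantage actor-critic update after executing `action`."""
    s = torch.as_tensor(state, dtype=torch.float32)
    s_next = torch.as_tensor(next_state, dtype=torch.float32)
    value = critic(s)
    target = reward + gamma * critic(s_next).detach()
    advantage = target - value
    log_prob = torch.log_softmax(actor(s), dim=-1)[action]
    loss = -log_prob * advantage.detach() + advantage.pow(2)  # policy loss + value loss
    optim.zero_grad()
    loss.backward()
    optim.step()
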
6

Face_Recognition

Python program to recognise faces and smiles
Python · 1 star
7

Oauth2.0Spring

Java · 1 star
8

Visual-Question-Answering-System

An end-to-end VQA system implemented with the Keras framework. Visual Question Answering (VQA) is a task that requires high-level scene interpretation from images combined with language modelling of the relevant questions and answers: given an image and a natural-language question about the image, the system must provide an accurate natural-language answer. This is an implementation of the model proposed in the original VQA paper by Agrawal et al.; a sketch of the CNN+LSTM baseline follows this entry.
Python · 1 star
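
For illustration, the classic CNN+LSTM VQA baseline can be sketched in Keras as follows; the use of pre-extracted image features and the layer sizes are assumptions rather than the repository's exact setup.

from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, Multiply
from tensorflow.keras.models import Model

vocab_size, num_answers = 10000, 1000  # assumed sizes
img_feat_dim = 4096                    # e.g. pre-extracted VGG fc7 features

# Image branch: project pre-extracted CNN features into a shared space.
image_in = Input(shape=(img_feat_dim,))
image_vec = Dense(512, activation="tanh")(image_in)

# Question branch: embed the question words and encode them with an LSTM.
question_in = Input(shape=(None,))
q_emb = Embedding(vocab_size, 300)(question_in)
question_vec = LSTM(512)(q_emb)

# Pointwise fusion of the two modalities, then classify over candidate answers.
fused = Multiply()([image_vec, question_vec])
answer = Dense(num_answers, activation="softmax")(fused)

vqa_model = Model([image_in, question_in], answer)
vqa_model.compile(optimizer="adam", loss="categorical_crossentropy")
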
9

Sequence-to-sequence-Video-Captioning-System

An implementation of a sequence-to-sequence video-captioning system inspired by the paper "Sequence to Sequence – Video to Text" by Venugopalan et al. An end-to-end sequence-to-sequence model is used to generate captions for videos. For this, we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. The LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames with a sequence of words in order to generate a description of the event in the video clip. A sketch of such an encoder-decoder follows this entry.
1 star
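
As an illustration, an encoder-decoder video captioner of this kind could be sketched in Keras as below; the frame-feature dimension, vocabulary size and layer widths are assumed values, not the repository's actual code.

from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, TimeDistributed
from tensorflow.keras.models import Model

frame_feat_dim, vocab_size, hidden = 4096, 8000, 512  # assumed sizes

# Encoder LSTM reads the sequence of per-frame CNN features.
frames_in = Input(shape=(None, frame_feat_dim))
_, h, c = LSTM(hidden, return_state=True)(frames_in)

# Decoder LSTM generates the caption, initialised with the video representation.
caption_in = Input(shape=(None,))
cap_emb = Embedding(vocab_size, 300)(caption_in)
dec_out, _, _ = LSTM(hidden, return_sequences=True, return_state=True)(
    cap_emb, initial_state=[h, c])
word_probs = TimeDistributed(Dense(vocab_size, activation="softmax"))(dec_out)

captioner = Model([frames_in, caption_in], word_probs)
captioner.compile(optimizer="adam", loss="sparse_categorical_crossentropy")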