Numenta research papers code and data
This repository contains reproducible code for selected Numenta papers. It is currently under construction and will eventually include the source code for all the scripts used in Numenta's papers.
Avoiding Catastrophe: Active Dendrites Enable Multi-task Learning in Dynamic Environments
In this paper we show that the biophysical properties of dendrites, synapses, and local inhibitory systems enable networks to dynamically restrict and route information in a context-specific manner. First, we propose a novel artificial neural network architecture that incorporates active dendrites and sparse representations into the standard deep learning framework. Next, we study the performance of this architecture in two separate benchmarks requiring task-based adaptation: Meta-World, a multi-task reinforcement learning environment where a robotic agent must learn to solve a variety of manipulation tasks simultaneously; and a continual learning benchmark in which the model’s prediction task changes throughout training.
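As a rough illustration of the core mechanism (not the paper's exact architecture; the dimensions, hyperparameters, and gating function below are assumptions), this PyTorch sketch gates each unit by its best-matching dendritic segment for a given context vector and then applies a k-winners-take-all step to produce a sparse output:

```python
import torch
import torch.nn as nn

class ActiveDendriteLayer(nn.Module):
    """Sketch of a layer whose units are modulated by dendritic segments.

    Each unit has several dendritic segments; the segment that best matches
    the context vector gates the unit's feedforward activation. A k-winner
    step then keeps only the most active units (sparse representation).
    """

    def __init__(self, dim_in, dim_out, dim_context, num_segments=10, k=20):
        super().__init__()
        self.ff = nn.Linear(dim_in, dim_out)
        # One weight vector per (unit, segment), matched against the context.
        self.segments = nn.Parameter(
            torch.randn(dim_out, num_segments, dim_context) * 0.01)
        self.k = k

    def forward(self, x, context):
        y = self.ff(x)                                   # (batch, dim_out)
        # Dendritic activations: (batch, dim_out, num_segments)
        d = torch.einsum("bc,osc->bos", context, self.segments)
        gate = torch.sigmoid(d.max(dim=2).values)        # strongest segment wins
        y = y * gate                                     # context-dependent gating
        # k-winners-take-all: zero out everything but the top-k units.
        topk = torch.topk(y, self.k, dim=1)
        mask = torch.zeros_like(y).scatter_(1, topk.indices, 1.0)
        return y * mask

layer = ActiveDendriteLayer(dim_in=784, dim_out=128, dim_context=10)
context = torch.eye(10)[[0, 1, 2, 3]]             # e.g. one-hot task vectors
out = layer(torch.randn(4, 784), context)         # (4, 128), at most k nonzeros per row
```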
Grid Cell Path Integration For Movement-Based Visual Object Recognition
This paper presents an implementation of a sensorimotor network that uses grid-cell computations to process a sequence of visual inputs, specifically a sequence of image patches from the MNIST dataset. The network is able to classify novel digits (as well as perform other tasks) in a way that is robust to the specific sequence over which the visual space is sampled, a challenging setting for typical machine learning approaches. The work builds on our previous paper, "Locations in the Neocortex: A Theory of Sensorimotor Object Recognition Using Cortical Grid Cells" (listed below).
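A toy numpy sketch of the path-integration step, assuming square tiles and made-up module scales (the paper's network uses learned grid-cell representations): each module tracks the sensor's location as a 2-D phase, and every movement shifts that phase modulo the module's period.

```python
import numpy as np

# Each grid-cell module represents the sensor's location as a 2-D phase on a
# periodic tile; different modules use different scales (and, in general,
# orientations). Moving the sensor shifts every module's phase by the
# movement vector, wrapped around the tile -- path integration.

class GridModule:
    def __init__(self, scale):
        self.scale = scale
        self.phase = np.zeros(2)          # current location within the tile

    def reset(self, phase=None):
        self.phase = np.random.rand(2) if phase is None else np.asarray(phase, float)

    def move(self, delta_xy):
        # Path integration: shift the phase by the movement, modulo the tile.
        self.phase = (self.phase + np.asarray(delta_xy, float) / self.scale) % 1.0

modules = [GridModule(scale) for scale in (1.0, 1.6, 2.6)]   # hypothetical scales
for m in modules:
    m.reset(phase=(0.0, 0.0))

# Sampling image patches along some sensing sequence: each saccade updates the
# location code, so a (feature, location) pair can be formed per fixation.
for delta in [(0.3, 0.0), (0.0, 0.5), (-0.2, 0.1)]:
    for m in modules:
        m.move(delta)

print([m.phase for m in modules])
```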
Going Beyond the Point Neuron: Active Dendrites and Sparse Representations for Continual Learning
In this paper we investigate how dendritic properties can add value to ANNs in the context of continual learning, an area where ANNs suffer from catastrophic forgetting.
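For readers unfamiliar with the setting, the toy sketch below (synthetic tasks and a hypothetical plain network, not the paper's experiments) shows what catastrophic forgetting looks like: training on each new task in sequence erodes accuracy on tasks learned earlier.

```python
import torch
import torch.nn as nn

def evaluate(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Two synthetic "tasks": same input space, different label rules.
tasks = []
for seed in (0, 1):
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(512, 20, generator=g)
    w = torch.randn(20, 4, generator=g)
    tasks.append((x, (x @ w).argmax(dim=1)))

for t, (x, y) in enumerate(tasks):
    for _ in range(200):                       # train on the current task only
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    # Re-measure accuracy on every task seen so far; a large drop on earlier
    # tasks is catastrophic forgetting.
    accs = [evaluate(model, xi, yi) for xi, yi in tasks[: t + 1]]
    print(f"after task {t}: accuracies on tasks seen so far = {accs}")
```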
How Can We Be So Dense? The Benefits of Using Highly Sparse Representations
In this paper we discuss the inherent benefits of high-dimensional sparse representations, focusing on robustness and sensitivity to interference. These are central issues for today's neural network systems, where even small perturbations can cause dramatic changes to a network's output.
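A small numpy illustration of why this helps (the sizes here are arbitrary, not the paper's): random high-dimensional sparse binary vectors almost never overlap by much, so a stored pattern can be recognized with a loose overlap threshold while remaining distinguishable from noise and from unrelated patterns.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 2048, 40                     # 2048 bits, 40 active (~2% sparsity)

def random_sdr():
    """A random sparse binary vector with exactly k active bits."""
    v = np.zeros(n, dtype=bool)
    v[rng.choice(n, size=k, replace=False)] = True
    return v

stored = random_sdr()
noisy = stored.copy()
flip = rng.choice(np.flatnonzero(stored), size=k // 4, replace=False)
noisy[flip] = False                 # drop 25% of the active bits

random_others = [random_sdr() for _ in range(10000)]
print("overlap with noisy copy:", np.sum(stored & noisy))             # ~30 of 40
print("max overlap with 10k random vectors:",
      max(np.sum(stored & v) for v in random_others))                 # typically < 10
```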
Locations in the Neocortex: A Theory of Sensorimotor Object Recognition Using Cortical Grid Cells
This paper provides an implementation of a location layer with grid-like modules that encode object-specific locations. This layer is incorporated into a network with an input layer, and simulations show how the model can learn many complex objects and later infer which learned object is being sensed.
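A toy sketch of the inference idea, with locations reduced to literal grid coordinates and features to strings (the model uses grid-cell location codes and sparse sensory representations): each object is learned as a set of (location, feature) pairs, and each new sensation eliminates candidate objects that are inconsistent with it.

```python
# Hypothetical objects described as {object-centric location: feature} maps.
objects = {
    "mug":  {(0, 0): "handle", (1, 0): "rim", (1, 1): "rim"},
    "bowl": {(0, 0): "rim",    (1, 0): "rim", (1, 1): "rim"},
}

def infer(sensations):
    """sensations: iterable of (location, feature) pairs from moving a sensor.

    Keeps only the learned objects consistent with everything sensed so far.
    """
    candidates = set(objects)
    for loc, feat in sensations:
        candidates = {name for name in candidates if objects[name].get(loc) == feat}
    return candidates

print(infer([((1, 0), "rim")]))                       # ambiguous: {'mug', 'bowl'}
print(infer([((1, 0), "rim"), ((0, 0), "handle")]))   # resolved:  {'mug'}
```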
A Theory of How Columns in the Neocortex Enable Learning the Structure of the World
This paper proposes a network model composed of columns and layers that performs robust object learning and recognition. The model introduces a new feature to cortical columns, location information, which is represented relative to the object being sensed. Pairing sensory features with locations is a requirement for modeling objects and therefore must occur somewhere in the neocortex. We propose it occurs in every column in every region.
The HTM Spatial Pooler – a neocortical algorithm for online sparse distributed coding
This paper describes an important component of HTM, the HTM spatial pooler, which is a neurally inspired algorithm that learns sparse distributed representations online. Written from a neuroscience perspective, the paper demonstrates key computational properties of the HTM spatial pooler.
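A stripped-down sketch of the spatial-pooling loop, with hypothetical sizes and without boosting or topology (see the paper for the full algorithm): columns compete on how many of their connected synapses overlap the input, and only the winning columns adjust their synapse permanences.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_cols, n_active = 256, 512, 20
connected_thresh, inc, dec = 0.5, 0.05, 0.02

# One permanence per (column, input bit); a synapse is "connected" once its
# permanence crosses the threshold.
permanence = rng.uniform(0.4, 0.6, size=(n_cols, n_in))

def spatial_pool(input_bits, learn=True):
    connected = permanence >= connected_thresh
    overlap = (connected & input_bits).sum(axis=1)      # per-column overlap score
    winners = np.argsort(overlap)[-n_active:]           # k-winners-take-all inhibition
    if learn:
        # Hebbian-style update, winners only: strengthen synapses on active
        # input bits, weaken synapses on inactive ones.
        permanence[winners] += np.where(input_bits, inc, -dec)
        np.clip(permanence, 0.0, 1.0, out=permanence)
    sdr = np.zeros(n_cols, dtype=bool)
    sdr[winners] = True
    return sdr

x = rng.random(n_in) < 0.1                              # a random binary input
print(spatial_pool(x).sum(), "active columns out of", n_cols)
```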
Evaluating Real-time Anomaly Detection Algorithms - the Numenta Anomaly Benchmark
14th IEEE ICMLA 2015 - This paper discusses how we should think about anomaly detection for streaming applications. It introduces a new open-source benchmark for detecting anomalies in real-time streaming data.
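As a much-simplified picture of how such a benchmark can score streaming detectors (an illustration only; this is not NAB's actual scoring function or weights): each labeled anomaly defines a window, a detection inside the window counts as a true positive with earlier detections scoring higher, detections outside any window are penalized as false positives, and unmatched windows are penalized as misses.

```python
def score_run(detections, anomaly_windows, tp=1.0, fp=-0.11, fn=-1.0):
    """detections: sorted timestamps flagged by the detector.
    anomaly_windows: list of (start, end) timestamps around labeled anomalies.
    tp/fp/fn: hypothetical weights for an application profile."""
    score, used = 0.0, set()
    for t in detections:
        window = next(((s, e) for s, e in anomaly_windows if s <= t <= e), None)
        if window is None:
            score += fp                                   # spurious detection
        elif window not in used:
            used.add(window)
            earliness = (window[1] - t) / (window[1] - window[0])
            score += tp * earliness                       # earlier detections score higher
    score += fn * (len(anomaly_windows) - len(used))      # missed anomalies
    return score

print(score_run(detections=[12, 40], anomaly_windows=[(10, 20), (50, 60)]))
```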
Unsupervised Real-Time Anomaly Detection for Streaming Data
This paper discusses the requirements necessary for real-time anomaly detection in streaming data, and demonstrates how Numenta's online sequence memory algorithm, HTM, meets those requirements. It presents detailed results using the Numenta Anomaly Benchmark (NAB), the first open-source benchmark designed for testing real-time anomaly detection algorithms.
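The paper's detector also converts raw anomaly scores into an anomaly likelihood by tracking their recent distribution, which makes thresholding robust on noisy streams. The sketch below captures that idea in simplified form (the window sizes and the Gaussian tail computation here are illustrative, not the paper's exact procedure):

```python
import math
from collections import deque

class AnomalyLikelihood:
    """Report how improbable the latest short-term average of raw anomaly
    scores is under the stream's own recent history of scores."""

    def __init__(self, history=500, short_window=10):
        self.history = deque(maxlen=history)
        self.short = deque(maxlen=short_window)

    def update(self, raw_score):
        self.history.append(raw_score)
        self.short.append(raw_score)
        if len(self.history) < 2:
            return 0.5
        mean = sum(self.history) / len(self.history)
        var = sum((s - mean) ** 2 for s in self.history) / (len(self.history) - 1)
        std = math.sqrt(var) or 1e-6
        recent = sum(self.short) / len(self.short)
        # Gaussian tail probability of the recent average; a likelihood near 1
        # means recent scores are unusually high for this stream.
        tail = 0.5 * math.erfc((recent - mean) / (std * math.sqrt(2)))
        return 1.0 - tail

al = AnomalyLikelihood()
for raw in [0.1, 0.12, 0.09, 0.11, 0.1, 0.95]:
    print(round(al.update(raw), 3))
```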
Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex
Foundational paper describing core HTM theory for sequence memory and its relationship to the neocortex. Written from a neuroscience perspective, the paper explains why neurons need so many synapses and how networks of neurons can form a powerful sequence learning mechanism.
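To make the "thousands of synapses" point concrete, here is a heavily simplified sketch (sizes are arbitrary and learning is omitted; it is not the full sequence-memory algorithm): a neuron holds many independent dendritic segments, each storing a sparse pattern of other cells' activity, and it becomes "predictive" whenever any single segment sees enough of its synapses active.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_segments, synapses_per_segment, threshold = 2048, 20, 30, 15

# Each segment is a set of presynaptic cell indices: one stored context.
# Thousands of synapses spread over many segments let one neuron recognize
# many distinct contexts independently.
segments = [rng.choice(n_cells, size=synapses_per_segment, replace=False)
            for _ in range(n_segments)]

def is_predictive(active_cells):
    """The neuron predicts its own activity if any one dendritic segment has
    at least `threshold` of its synapses on currently active cells."""
    active = np.zeros(n_cells, dtype=bool)
    active[list(active_cells)] = True
    return any(active[seg].sum() >= threshold for seg in segments)

# A context that matches a stored segment is recognized; random activity is not.
print(is_predictive(segments[0][:20]))                              # True
print(is_predictive(rng.choice(n_cells, size=40, replace=False)))   # False (almost surely)
```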