  • Stars: 158
  • Rank: 237,131 (top 5%)
  • License: MIT
  • Created: about 6 years ago
  • Updated: over 1 year ago


Repository Details

Meta-learning research


Related repositories: maml_rl, Meta-RL, learning-to-learn, supervised-reptile, pytorch-maml-rl, metacar, pytorch-meta-optimizer, TCML-tensorflow, awesome-architecture-search, awesome-meta-learning, Meta-Learning-Papers, awesome-NAS, google-research/nasbench, AlphaX-NASBench101, paperswithcode: meta-learning, paperswithcode: architecture-search, Awesome-Meta-Learning, Hands-On Meta Learning With Python


Review papers

  • Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions. [arxiv] 2017.
  • Learning to learn by gradient descent by gradient descent. [arxiv] 2016. [code]
  • Using fast weights to attend to the recent past. [arxiv] 2016.
  • Hypernetworks. [arxiv] ICLR 2017.
  • Siamese neural networks for one-shot image recognition. [arxiv]
  • One-shot learning by inverting a compositional causal process. [arxiv] 2013.
  • Meta-learning with memory-augmented neural networks. [arxiv] 2016.
  • Matching networks for one shot learning. [arxiv] 2016.
  • Learning to remember rare events. [arxiv] ICLR 2017.
  • Learning to navigate in complex environments. [arxiv] DeepMind, 2016.
  • Neural architecture search with reinforcement learning. [arxiv] ICLR 2017.
  • RL²: Fast reinforcement learning via slow reinforcement learning. [arxiv] UC Berkeley and OpenAI, 2016.
  • Learning to optimize. [arxiv] ICLR 2017.
  • Towards a neural statistician. [arxiv] ICLR 2017.
  • Actor-Mimic: Deep multitask and transfer reinforcement learning. [arxiv] ICLR 2016.
  • Optimization as a model for few-shot learning. [arxiv] ICLR 2017.
  • Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks (MAML). [arxiv] [code] [pytorch-maml-rl] [code]
  • Learning to Learn for Global Optimization of Black Box Functions. [arxiv]
  • Meta Networks. [arxiv] 2017.
  • One-Shot Imitation Learning. [arxiv] 2017.
  • Active One-shot Learning. [arxiv] 2017.
  • Learned Optimizers that Scale and Generalize. [arxiv] 2017.
  • Low-shot visual object recognition. [arxiv] 2016.
  • Learning to reinforcement learn. [arxiv] 2016. [code]
  • Learning to Learn: Meta-Critic Networks for Sample Efficient Learning. [arxiv] 2017.
  • Meta-SGD: Learning to Learn Quickly for Few Shot Learning. [arxiv] 2017.
  • Meta-Learning with Temporal Convolutions. [arxiv] 2017. [code]
  • Meta Learning Shared Hierarchies. [arxiv] 2017.
  • One-shot visual imitation learning via meta-learning. [arxiv] 2017. [code]
  • Learning to Compare: Relation Network for Few Shot Learning. [arxiv] 2017.
  • Human-level concept learning through probabilistic program induction. [arxiv] 2015.
  • Neural task programming: Learning to generalize across hierarchical tasks. [arxiv] 2017.
  • Learning feed-forward one-shot learners. [arxiv]
  • Learning to learn: Model regression networks for easy small sample learning. [arxiv] 2016.

  • Meta-learning in reinforcement learning. [paper] 2003.
  • Learning to learn using gradient descent. [paper] 2001.
  • A meta-learning method based on temporal difference error. [paper] 2009.
  • Learning to learn: Introduction and overview. [paper] 1998.
  • Meta-learning with backpropagation. [paper] 2001.
  • A perspective view and survey of meta-learning. [paper] 2002.
  • Zero-data learning of new tasks. [paper] 2008.
  • One shot learning of simple visual concepts. [paper] 2011.
  • One-shot learning of object categories. [paper] 2006.
  • A neural network that embeds its own meta-levels. [paper] 1993.
  • Lifelong learning algorithms. [paper] 1998.
  • Learning a synaptic learning rule. [paper] 1990.
  • On the search for new learning rules for ANNs. [paper] 1995.
  • Learning many related tasks at the same time with backpropagation. [paper] 1995.
  • Introduction to the special issue on meta-learning. [paper] 2004.
  • Meta-learning in computational intelligence. [paper] 2011.
  • Fixed-weight networks can learn. [paper] 1990.
  • Evolutionary principles in self-referential learning; On learning how to learn: The meta-meta-... hook. [paper] 1987.
  • Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, [paper] 1992.
  • Simple principles of metalearning. [paper] 1996.
  • Learning to learn. [paper] 1998.
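Several of the papers above (MAML, Meta-SGD, Reptile-style follow-ups) share one idea: meta-learn an initialisation such that a few gradient steps adapt it to a new task. As a minimal sketch of that idea, here is a first-order MAML (FOMAML) loop on a toy 1-D regression task family. The task distribution, learning rates, and all function names are assumptions made for this illustration, not taken from any listed paper's code.

```python
import numpy as np

def mse(w, X, y):
    """Mean-squared-error loss 0.5 * mean((X @ w - y)^2)."""
    return 0.5 * np.mean((X @ w - y) ** 2)

def mse_grad(w, X, y):
    """Gradient of the loss above with respect to w."""
    return X.T @ (X @ w - y) / len(y)

def sample_task(rng, n=10):
    """Hypothetical task family: 1-D linear regression y = a * x,
    with the slope a drawn afresh for each task."""
    a = rng.uniform(0.5, 1.5)
    def data():
        X = rng.uniform(-1.0, 1.0, size=(n, 1))
        return X, a * X[:, 0]
    return data

def fomaml(meta_steps=3000, inner_lr=0.5, meta_lr=0.05, seed=0):
    """First-order MAML: ignore second derivatives and apply the
    post-adaptation gradient directly to the meta-initialisation."""
    rng = np.random.default_rng(seed)
    w = np.zeros(1)                     # meta-initialisation to be learned
    for _ in range(meta_steps):
        task = sample_task(rng)
        X_s, y_s = task()               # support set: used for adaptation
        X_q, y_q = task()               # query set: used for the meta-update
        w_adapted = w - inner_lr * mse_grad(w, X_s, y_s)   # inner step
        w = w - meta_lr * mse_grad(w_adapted, X_q, y_q)    # outer step
    return w
```

Under this task family the meta-initialisation should drift toward the mean slope, so that a single inner gradient step from the learned `w` fits a new task better than the same step from an arbitrary starting point; full MAML would additionally backpropagate through the inner step.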

Maintainer

Gopala KR (@gopala-kr)