Awesome Implicit Neural Representations

A curated list of resources on implicit neural representations, inspired by awesome-computer-vision.

Hiring graduate students!

I am looking for graduate students to join my new lab at MIT CSAIL in July 2022. If you are excited about neural implicit representations, neural rendering, neural scene representations, and their applications in vision, graphics, and robotics, apply here! In the webform, you can choose me as "Potential Adviser", and in your SoP, please describe how our research interests are well-aligned. The deadline is Dec 15th!

Disclaimer

This list does not aim to be exhaustive, as implicit neural representations are a rapidly growing research field with hundreds of papers to date. Instead, it lists the papers that I give my students to read, which introduce key concepts & foundations of implicit neural representations across applications. I will therefore generally not merge pull requests. This is not an evaluation of the quality or impact of a paper, but rather the result of my and my students' research interests.

However, if you see potential for another list that is broader or narrower in scope, get in touch, and I'm happy to link to it right here and contribute to it as well as I can!

Disclosure: I am an author on the following papers.

Table of contents

What are implicit neural representations?

Implicit Neural Representations (sometimes also referred to as coordinate-based representations) are a novel way to parameterize signals of all kinds. Conventional signal representations are usually discrete - for instance, images are discrete grids of pixels, audio signals are discrete samples of amplitudes, and 3D shapes are usually parameterized as grids of voxels, point clouds, or meshes. In contrast, Implicit Neural Representations parameterize a signal as a continuous function that maps the domain of the signal (i.e., a coordinate, such as a pixel coordinate for an image) to whatever is at that coordinate (for an image, an R,G,B color). Of course, these functions are usually not analytically tractable - it is impossible to "write down" the function that parameterizes a natural image as a mathematical formula. Implicit Neural Representations thus approximate that function via a neural network.
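
To make this concrete, here is a minimal, purely illustrative sketch of an implicit representation of an image: a small MLP that maps a normalized pixel coordinate to an RGB color. The class name, layer widths, and the toy fitting step are arbitrary choices for this sketch, not code from any paper on this list.

```python
import torch
import torch.nn as nn

class ImplicitImage(nn.Module):
    """A coordinate-based MLP: (x, y) in [-1, 1]^2 -> (r, g, b).

    Layer widths and depth are arbitrary; real models typically add
    positional encodings or periodic activations (see the section on
    high-frequency detail below) to capture fine structure.
    """
    def __init__(self, hidden=256, layers=4):
        super().__init__()
        dims = [2] + [hidden] * layers
        blocks = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            blocks += [nn.Linear(d_in, d_out), nn.ReLU()]
        blocks += [nn.Linear(hidden, 3)]
        self.net = nn.Sequential(*blocks)

    def forward(self, coords):          # coords: (N, 2)
        return self.net(coords)         # colors: (N, 3)

# Fitting the network to a single image amounts to regressing colors
# at the image's pixel coordinates:
model = ImplicitImage()
coords = torch.rand(1024, 2) * 2 - 1    # random (x, y) samples in [-1, 1]
target = torch.rand(1024, 3)            # stand-in for ground-truth colors
loss = ((model(coords) - target) ** 2).mean()
loss.backward()
```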

Why are they interesting?

Implicit Neural Representations have several benefits: First, they are no longer coupled to spatial resolution the way, for instance, an image is coupled to its number of pixels. This is because they are continuous functions! Thus, the memory required to parameterize the signal is independent of spatial resolution, and only scales with the complexity of the underlying signal. Another corollary of this is that implicit representations have "infinite resolution" - they can be sampled at arbitrary spatial resolutions.
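
Because the representation is a continuous function, the same network can be queried on a pixel grid of any size. The sketch below illustrates this; the stand-in model and the function name are assumptions for illustration, not part of any listed method.

```python
import torch
import torch.nn as nn

def sample_on_grid(model, height, width):
    """Query a coordinate->RGB network on a regular grid of any resolution."""
    ys = torch.linspace(-1, 1, height)
    xs = torch.linspace(-1, 1, width)
    grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)  # (H, W, 2)
    with torch.no_grad():
        colors = model(grid.reshape(-1, 2))                            # (H*W, 3)
    return colors.reshape(height, width, 3)

# Stand-in coordinate network; any (N, 2) -> (N, 3) module works here.
model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))

low_res  = sample_on_grid(model, 64, 64)      # (64, 64, 3)
high_res = sample_on_grid(model, 1024, 1024)  # same network, no extra parameters
```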

This is immediately useful for a number of applications, such as super-resolution, or in parameterizing signals in 3D and higher dimensions, where memory requirements grow intractably fast with spatial resolution. Further, generalizing across neural implicit representations amounts to learning a prior over a space of functions, implemented via learning a prior over the weights of neural networks - this is commonly referred to as meta-learning and is an extremely exciting intersection of two very active research areas! Another exciting overlap is between neural implicit representations and the study of symmetries in neural network architectures - for instance, creating a neural network architecture that is 3D rotation-equivariant immediately yields a viable path to rotation-equivariant generative models via neural implicit representations.
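
One way to make "a prior over the weights of neural networks" concrete is a hypernetwork: a network that maps a per-signal latent code to the weights of a small coordinate MLP. The sketch below is schematic only; the class name, latent size, and layer widths are invented for illustration.

```python
import torch
import torch.nn as nn

class HyperCoordinateNet(nn.Module):
    """A hypernetwork maps a per-signal latent code to the weights of a tiny
    coordinate MLP, so one set of hypernetwork parameters encodes a prior
    over a whole family of implicit representations."""
    def __init__(self, latent_dim=128, hidden=64):
        super().__init__()
        self.hidden = hidden
        # Predict weights/biases of a 2-layer coordinate net: 2 -> hidden -> 3.
        n_params = (2 * hidden + hidden) + (hidden * 3 + 3)
        self.hyper = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, n_params)
        )

    def forward(self, z, coords):            # z: (latent_dim,), coords: (N, 2)
        h = self.hidden
        p = self.hyper(z)
        w1, b1, w2, b2 = torch.split(p, [2 * h, h, h * 3, 3])
        x = torch.relu(coords @ w1.reshape(2, h) + b1)
        return x @ w2.reshape(h, 3) + b2     # (N, 3)

net = HyperCoordinateNet()
z = torch.randn(128)                          # latent code describing one signal
colors = net(z, torch.rand(16, 2) * 2 - 1)    # (16, 3)
```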

Another key promise of implicit neural representations lies in algorithms that directly operate in the space of these representations. In other words: What's the "convolutional neural network" equivalent of a neural network operating on images represented by implicit representations?

Colabs

This is a list of Google Colabs that immediately allow you to jump in and toy around with implicit neural representations!

Papers

Implicit Neural Representations of Geometry

The following three papers first (and concurrently) demonstrated that implicit neural representations outperform grid-, point-, and mesh-based representations in parameterizing geometry and seamlessly allow for learning priors over shapes.

Since then, implicit neural representations have achieved state-of-the-art results in 3D computer vision:
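
The shared idea in this section can be sketched as a coordinate network that, conditioned on a per-shape latent code, maps a 3D point to a signed distance (or occupancy) value. The code below is a schematic under that assumption, not a reimplementation of any specific paper.

```python
import torch
import torch.nn as nn

class LatentSDF(nn.Module):
    """Schematic shape representation: (latent code, 3D point) -> signed distance.

    Widths and depth are arbitrary; real models differ in architecture,
    conditioning mechanism, and whether they predict distance or occupancy.
    """
    def __init__(self, latent_dim=256, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, latent, points):                   # (B, D), (B, N, 3)
        latent = latent[:, None].expand(-1, points.shape[1], -1)
        return self.net(torch.cat([latent, points], dim=-1)).squeeze(-1)

model = LatentSDF()
codes = torch.randn(4, 256)                 # one latent code per shape
points = torch.rand(4, 1024, 3) * 2 - 1     # query points in [-1, 1]^3
sdf = model(codes, points)                  # (4, 1024) signed distances
```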

Implicit representations of Geometry and Appearance

From 2D supervision only ("inverse graphics")

3D scenes can be represented as 3D-structured neural scene representations, i.e., neural implicit representations that map a 3D coordinate to a representation of whatever is at that 3D coordinate. This then requires the formulation of a neural renderer, in particular, a ray-marcher, which performs rendering by repeatedly sampling the neural implicit representation along a ray.
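
As a hedged illustration of the ray-marching idea, the sketch below uses NeRF-style quadrature: sample points along each ray, query a stand-in field for color and density, and alpha-composite the samples. Real methods differ in how samples are drawn, what the field predicts, and how it is composited.

```python
import torch

def render_rays(field, origins, directions, near=0.5, far=5.0, n_samples=64):
    """March along each ray, query the field at sample points, and
    alpha-composite the results (NeRF-style quadrature).

    `field` maps (N, 3) points to (N, 4): RGB plus a volume density sigma.
    """
    t = torch.linspace(near, far, n_samples)                             # (S,)
    points = origins[:, None] + t[None, :, None] * directions[:, None]   # (R, S, 3)

    rgb_sigma = field(points.reshape(-1, 3)).reshape(*points.shape[:2], 4)
    rgb = rgb_sigma[..., :3].sigmoid()                                   # (R, S, 3)
    sigma = rgb_sigma[..., 3].relu()                                     # (R, S)

    delta = (far - near) / n_samples                                     # uniform step size
    alpha = 1 - torch.exp(-sigma * delta)                                # (R, S)
    # Transmittance: probability that the ray reaches each sample unoccluded.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1 - alpha + 1e-10], dim=-1), dim=-1
    )[:, :-1]
    weights = alpha * trans                                              # (R, S)
    return (weights[..., None] * rgb).sum(dim=-2)                        # (R, 3)

# Stand-in field: any (N, 3) -> (N, 4) callable, e.g. a coordinate MLP.
field = lambda p: torch.cat([p, p.norm(dim=-1, keepdim=True)], dim=-1)
origins = torch.zeros(8, 3)
directions = torch.nn.functional.normalize(torch.randn(8, 3), dim=-1)
colors = render_rays(field, origins, directions)                         # (8, 3) pixel colors
```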

One may also encode geometry and appearance of a 3D scene via its 360-degree, 4D light field. This obviates the need for ray-marching and enables real-time rendering and fast training with minimal memory footprint, but requires additional machinery to ensure multi-view consistency.
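
A light field representation maps a ray directly to a color, so rendering is a single forward pass per pixel. The sketch below assumes a Plücker ray parameterization, which is one common choice; the architecture is an arbitrary stand-in, not the design of any listed paper.

```python
import torch
import torch.nn as nn

def plucker(origins, directions):
    """Parameterize rays as 6-D Plücker coordinates (direction, origin x direction),
    which does not depend on the particular point chosen along the ray."""
    return torch.cat([directions, torch.cross(origins, directions, dim=-1)], dim=-1)

# A light-field network maps a ray directly to a color: one query per pixel,
# no marching along the ray.
light_field = nn.Sequential(
    nn.Linear(6, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3),
)

origins = torch.zeros(8, 3)
directions = nn.functional.normalize(torch.randn(8, 3), dim=-1)
colors = light_field(plucker(origins, directions))   # (8, 3), a single forward pass
```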

From 3D supervision

For dynamic scenes

The following papers concurrently proposed to leverage a similar approach for the reconstruction of dynamic scenes from 2D observations only via Neural Radiance Fields.

Symmetries in Implicit Neural Representations

Hybrid implicit / explicit (condition implicit on local features)

The following four papers concurrently proposed to condition an implicit neural representation on local features stored in a voxel grid (a sketch of this local-conditioning idea follows below):

This has since been leveraged for inverse graphics as well:

  • Neural Sparse Voxel Fields applies a similar concept to neural radiance fields.
  • pixelNeRF (Yu et al. 2020) proposes to condition a NeRF on local features lying on camera rays, extracted from context images, as proposed in PIFu (see "from 3D supervision").
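
A hedged sketch of the local-conditioning idea shared by these papers: features are trilinearly interpolated from a voxel grid at each query point and fed to the coordinate MLP together with the point itself. The grid resolution, feature width, and the choice of an occupancy logit as output are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocallyConditionedField(nn.Module):
    """Condition a coordinate MLP on local features trilinearly interpolated
    from a voxel grid of features (the shared idea of the papers above;
    resolutions and widths are arbitrary for this sketch)."""
    def __init__(self, feat_dim=32, grid_res=16, hidden=128):
        super().__init__()
        self.grid = nn.Parameter(torch.randn(1, feat_dim, grid_res, grid_res, grid_res))
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points):                      # points: (N, 3) in [-1, 1]^3
        # grid_sample expects (B, 1, 1, N, 3) sample locations for 5-D input;
        # its x/y/z ordering conventions are glossed over in this sketch.
        loc = points[None, None, None]              # (1, 1, 1, N, 3)
        feats = F.grid_sample(self.grid, loc, align_corners=True)   # (1, C, 1, 1, N)
        feats = feats[0, :, 0, 0].t()               # (N, C)
        return self.mlp(torch.cat([feats, points], dim=-1)).squeeze(-1)

field = LocallyConditionedField()
occupancy_logits = field(torch.rand(2048, 3) * 2 - 1)   # (2048,)
```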

The following papers condition a deep signed distance function on local patches:

Learning correspondence with Neural Implicit Representations

Robotics Applications

Generalization & Meta-Learning with Neural Implicit Representations

Fitting high-frequency detail with positional encoding & periodic nonlinearities

Implicit Neural Representations of Images

Composing implicit neural representations

The following papers propose to assemble scenes from per-object 3D implicit neural representations.

Implicit Representations for Partial Differential Equations & Boundary Value Problems

Generative Adversarial Networks with Implicit Representations

For 3D

For 2D

For 2D image synthesis, neural implicit representations enable the generation of high-resolution images, while also allowing the principled treatment of symmetries such as rotation and translation equivariance.

Image-to-image translation

Articulated representations

Talks

Links

  • awesome-NeRF - List of implicit representations specifically on neural radiance fields (NeRF)

License

License: MIT