Class notes for the course "Long Term Memory in AI - Vector Search and Databases" COS 597A @ Princeton Fall 2023

Long Term Memory in AI - Vector Search and Databases

NOTE: COS 597A class times changed for Fall semester 2023. Classes will be held 9am-12noon.

Instructors

Overview

Long Term Memory is a foundational capability in the modern AI Stack. At their core, these systems use vector search. Vector search is also a basic tool for systems that manipulate large collections of media like search engines, knowledge bases, content moderation tools, recommendation systems, etc. As such, the discipline lies at the intersection of Artificial Intelligence and Database Management Systems. This course will cover the theoretical foundations and practical implementation of vector search applications, algorithms, and systems. The course will be evaluated with a project and an in-class presentation.

Contribute

All class materials are intended to be used freely by academics anywhere, students and professors alike. Please contribute in the form of pull requests or by opening issues.

https://github.com/edoliberty/vector-search-class-notes

On Unix-like systems (e.g. macOS) with bibtex and pdflatex available, you should be able to run this:

git clone [email protected]:edoliberty/vector-search-class-notes.git
cd vector-search-class-notes
./build

Syllabus

  • 9/8 - Class 1 - Introduction to Vector Search [Matthijs + Edo + Nataly]

    • Intro to the course: Topic, Schedule, Project, Grading, ...
    • Embeddings as an information bottleneck. Instead of learning end-to-end, use embeddings as an intermediate representation
    • Advantages: scalability, instant updates, and explainability
    • Typical volumes of data and scalability. Embeddings are the only way to manage / access large databases
    • The embedding contract: the embedding extractor and embedding indexer agree on the meaning of the distance. Separation of concerns.
    • The vector space model in information retrieval
    • Vector embeddings in machine learning
    • Define vector, vector search, ranking, retrieval, recall
  • 9/15 - Class 2 - Text embeddings [Matthijs]

    • 2-layer word embeddings. Word2vec and fastText, obtained via a factorization of a co-occurrence matrix. Embedding arithmetic: king + woman - man = queen, (already based on similarity search)
    • Sentence embeddings: How to train, masked LM. Properties of sentence embeddings.
    • Large Language Models: reasoning as an emerging property of a LM. What happens when the training set = the whole web
  • 9/22 - Class 3 - Image embeddings [Matthijs]

    • Pixel structures of images. Early works on direct pixel indexing
    • Traditional CV models. Global descriptors (GIST). Local descriptors (SIFT and friends). Direct indexing of local descriptors for image matching, local descriptor pooling (Fisher, VLAD)
    • Convolutional Neural Nets. Off-the-shelf models. Trained specifically (contrastive learning, self-supervised learning)
    • Modern Computer Vision models
  • 9/29 - Class 4 - Low Dimensional Vector Search [Edo]

    • Vector search problem definition
    • k-d tree, space partitioning data structures
    • Worst case proof for kd-trees
    • Probabilistic inequalities. Recap of basic inequalities: Markov, Chernoff, Hoeffding
    • Concentration Of Measure phenomena. Orthogonality of random vectors in high dimensions
    • Curse of dimensionality and the failure of space partitioning
  • 10/6 - Class 5 - Dimensionality Reduction [Edo]

    • Singular Value Decomposition (SVD)
    • Applications of the SVD
    • Rank-k approximation in the spectral norm
    • Rank-k approximation in the Frobenius norm
    • Linear regression under least-squares loss
    • PCA, Optimal squared loss dimension reduction
    • Closest orthogonal matrix
    • Computing the SVD: The power method
    • Random-projection
    • Matrices with normally distributed independent entries
    • Fast Random Projections
  • 10/13 - No Class - Midterm Examination Week

  • 10/20 - No Class - Fall Recess

  • 10/27 - Class 6 - Approximate Nearest Neighbor Search [Edo]

    • Definition of Approximate Nearest Neighbor Search (ANNS)
    • Criteria: Speed / accuracy / memory usage / updateability / index construction time
    • Definition of Locality Sensitive Hashing and examples
    • The LSH Algorithm
    • LSH Analysis, proof of correctness, and asymptotics
  • 11/3 - Class 7 - Clustering [Edo]

    • K-means clustering - mean squared error criterion.
    • Lloyd’s Algorithm
    • k-means and PCA
    • ε-net argument for fixed dimensions
    • Sampling based seeding for k-means
    • k-means++
    • The Inverted File Model (IVF)
  • 11/10 - Class 8 - Quantization for lossy vector compression [Matthijs]

    • Vector quantization as the topline (it directly optimizes the objective)
    • Binary quantization and Hamming comparison
    • Product quantization. Chunked vector quantization. Optimized vector quantization
    • Additive quantization. Extension of product quantization. Difficulty in training approximations (Residual quantization, CQ, TQ, LSQ, etc.)
    • Cost of coarse quantization vs. inverted list scanning
  • 11/17 - Class 9 - Graph based indexes [Guest lecturer Harsha Vardhan Simhadri + Edo]

    • Early works: hierarchical k-means
    • Neighborhood graphs. How to construct them. Nearest Neighbor Descent
    • Greedy search in neighborhood graphs, and why it fails without long-range jumps
    • HNSW. A practical hierarchical graph-based index
    • NSG. Evolving a k-NN graph
  • 11/24 - No Class - Thanksgiving Recess

  • 12/1 - Class 10 - Student project and paper presentations [Edo + Nataly]
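To make the recurring terms from the syllabus concrete (vector search, ranking, recall), here is a minimal NumPy sketch of exact nearest-neighbor search and the recall@k measure that approximate indexes are scored against. This is an illustrative sketch, not class material; all sizes and names are made up.

```python
# Illustrative sketch (not from the class materials): exact k-NN search and
# recall@k, the baseline and metric that approximate indexes are judged against.
import numpy as np

rng = np.random.default_rng(0)
xb = rng.standard_normal((1000, 64)).astype(np.float32)  # database vectors
xq = rng.standard_normal((10, 64)).astype(np.float32)    # query vectors

def knn(queries, base, k):
    """Exact k-NN under squared Euclidean distance via one matrix product."""
    # ||q - b||^2 = ||q||^2 - 2 q.b + ||b||^2; the ||q||^2 term is constant
    # per query row, so dropping it does not change the ranking.
    d2 = (base ** 2).sum(axis=1)[None, :] - 2.0 * queries @ base.T
    return np.argsort(d2, axis=1)[:, :k]

def recall_at_k(found, truth):
    """Fraction of true neighbors an (approximate) result set recovers."""
    hits = sum(len(set(f) & set(t)) for f, t in zip(found, truth))
    return hits / truth.size

ground_truth = knn(xq, xb, 10)
print(recall_at_k(ground_truth, ground_truth))  # exact search scores 1.0
```

Approximate methods later in the syllabus trade a bit of this recall for large speedups.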
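The random projections of Class 5 can be sketched in a few lines: a suitably scaled Gaussian matrix approximately preserves pairwise distances with high probability (Johnson-Lindenstrauss). The dimensions below are illustrative assumptions, not values from the course.

```python
# Illustrative sketch: dimensionality reduction by Gaussian random projection.
import numpy as np

rng = np.random.default_rng(3)
n, d, k = 200, 1024, 256  # project from 1024 dims down to 256 (illustrative)
x = rng.standard_normal((n, d)).astype(np.float32)

# Entries scaled by 1/sqrt(k) so squared norms are preserved in expectation.
proj = rng.standard_normal((d, k)).astype(np.float32) / np.sqrt(k)
y = x @ proj

# Compare one pairwise distance before and after projection.
before = np.linalg.norm(x[0] - x[1])
after = np.linalg.norm(y[0] - y[1])
print(before, after, after / before)
```

The ratio concentrates around 1 as k grows, which is why random projection is a cheap preprocessing step before indexing.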
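For Class 6, one common concrete instance of a locality-sensitive hash family for cosine similarity is random-hyperplane signatures (SimHash). The sketch below, with made-up parameters, shows the defining property: near vectors agree on most sign bits.

```python
# Illustrative sketch: random-hyperplane LSH (SimHash) for cosine similarity.
# Two vectors at angle theta agree on each sign bit with probability
# 1 - theta/pi, so near vectors share most of their signature.
import numpy as np

rng = np.random.default_rng(4)
d, nbits = 64, 64
planes = rng.standard_normal((nbits, d))  # one random hyperplane per bit

def signature(v):
    """Sign pattern of v against the random hyperplanes: an nbits-bit hash."""
    return (planes @ v > 0).astype(np.int8)

def agreement(a, b):
    """Number of signature bits on which a and b agree."""
    return int((signature(a) == signature(b)).sum())

x = rng.standard_normal(d)
x_close = x + 0.05 * rng.standard_normal(d)  # tiny perturbation of x
x_far = rng.standard_normal(d)               # unrelated random vector

print(agreement(x, x_close), agreement(x, x_far))
```

Bucketing vectors by (prefixes of) such signatures is what turns this into the LSH algorithm analyzed in class.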
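The Inverted File model (IVF) from Class 7 can be sketched as Lloyd's algorithm plus per-cluster posting lists, scanning only the nprobe nearest clusters at query time. Everything below (sizes, nlist, nprobe) is an illustrative assumption.

```python
# Illustrative sketch: the Inverted File (IVF) model -- cluster the database
# with Lloyd's algorithm, then search only the nprobe closest clusters.
import numpy as np

rng = np.random.default_rng(1)
xb = rng.standard_normal((2000, 32)).astype(np.float32)
nlist = 16  # number of clusters (illustrative)

def lloyd(x, k, iters=10):
    """Plain Lloyd iterations seeded from a random sample of the data."""
    centroids = x[rng.choice(len(x), k, replace=False)].copy()
    for _ in range(iters):
        assign = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            members = x[assign == j]
            if len(members):
                centroids[j] = members.mean(0)
    return centroids, assign

centroids, assign = lloyd(xb, nlist)
inverted_lists = {j: np.flatnonzero(assign == j) for j in range(nlist)}

def ivf_search(q, nprobe=4, k=5):
    """Scan only the inverted lists of the nprobe nearest centroids."""
    probe = ((centroids - q) ** 2).sum(1).argsort()[:nprobe]
    cand = np.concatenate([inverted_lists[j] for j in probe])
    d2 = ((xb[cand] - q) ** 2).sum(1)
    return cand[d2.argsort()[:k]]

q = xb[0] + 0.01 * rng.standard_normal(32).astype(np.float32)
print(ivf_search(q))  # should contain index 0, the perturbed vector's source
```

Raising nprobe trades speed for recall, which is exactly the criteria list from Class 6.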
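Class 8's product quantization can likewise be sketched: split each vector into chunks, train a small codebook per chunk, store each vector as a few small integer codes, and rank against an uncompressed query with precomputed per-chunk distance tables (asymmetric distance computation). All parameters here are illustrative.

```python
# Illustrative sketch: product quantization (PQ). Each vector is stored as m
# small codes; distances are summed from m precomputed lookup tables.
import numpy as np

rng = np.random.default_rng(2)
d, m, ksub = 64, 8, 16  # illustrative: 8 chunks of 8 dims, 16 codes per chunk
dsub = d // m
xb = rng.standard_normal((500, d)).astype(np.float32)

def train_codebook(chunk, k, iters=5):
    """A few Lloyd iterations on one chunk of the training data."""
    c = chunk[rng.choice(len(chunk), k, replace=False)].copy()
    for _ in range(iters):
        a = ((chunk[:, None] - c[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (a == j).any():
                c[j] = chunk[a == j].mean(0)
    return c

codebooks = [train_codebook(xb[:, i * dsub:(i + 1) * dsub], ksub)
             for i in range(m)]

# Encode: each database vector becomes m small integer codes.
codes = np.stack(
    [((xb[:, i * dsub:(i + 1) * dsub][:, None] - codebooks[i][None]) ** 2)
     .sum(-1).argmin(1) for i in range(m)],
    axis=1,
)

def adc_search(q, k=5):
    """Asymmetric distance computation: the query stays uncompressed."""
    tables = [((q[i * dsub:(i + 1) * dsub] - codebooks[i]) ** 2).sum(-1)
              for i in range(m)]
    dist = sum(tables[i][codes[:, i]] for i in range(m))
    return dist.argsort()[:k]

print(adc_search(xb[0]))  # vector 0 should rank among its own top results
```

In practice PQ is combined with the IVF structure above: coarse quantization selects lists, and PQ codes are scanned within them.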

Project

Class work includes a final project. It will be graded based on

  1. 50% - Project submission
  2. 50% - In-class presentation

Projects can come in three different flavors:

  • Theory/Research: propose a new algorithm for a problem we explored in class (or modify an existing one), explain what it achieves, give experimental evidence or a proof for its behavior. If you choose this kind of project you are expected to submit a write up.
  • Data Science/AI: create an interesting use case for vector search using Pinecone, explain what data you used, what value your application brings, and what insights you gained. If you choose this kind of project you are expected to submit code (e.g. Jupyter Notebooks) and a writeup of your results and insights.
  • Engineering/HPC: adapt or add to FAISS, explain your improvements, show experimental results. If you choose this kind of project you are expected to submit a branch of FAISS for review along with a short writeup of your suggested improvement and experiments.

Project schedule  

  • 11/24 - One-page project proposal approved by the instructors
  • 12/1 - Final project submission
  • 12/1 - In-class presentation

Some more details

  • Project Instructor: Nataly [email protected]
  • Projects can be worked on individually, or in teams of two or at most three students.
  • Expect to spend a few hours over the semester on the project proposal. Try to get it approved well ahead of the deadline.
  • Expect to spend 3-5 full days on the project itself (on par with preparing for a final exam)
  • In-class project presentations are 5 minutes per student (teams of two present for 10 minutes; teams of three, 15 minutes).

Selected Literature