Vespa sample applications

Repository of sample applications for https://vespa.ai, the open big data serving engine.

For operational sample applications, see examples/operations.

Getting started - Basic Sample Applications

Basic album-recommendation

The album-recommendation application is the introduction to Vespa. Learn how to configure the schema for simple recommendation and search use cases.
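As a taste, here is a minimal sketch of querying a deployed instance over Vespa's HTTP query API, assuming a local deployment on port 8080; the document type, field, and rank profile names are placeholders, not the app's actual schema:

```python
import requests

# Query a locally deployed Vespa instance over the standard HTTP query API.
# Document type ("music"), field ("album") and rank profile ("rank_albums")
# are illustrative placeholders.
response = requests.post(
    "http://localhost:8080/search/",
    json={
        "yql": "select * from music where album contains @album",
        "album": "head",              # substituted for @album in the YQL
        "ranking": "rank_albums",     # hypothetical rank profile
        "hits": 10,
    },
    timeout=10,
)
for hit in response.json().get("root", {}).get("children", []):
    print(hit["relevance"], hit["fields"].get("album"))
```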

Simple semantic search

The simple semantic search application demonstrates indexed vector search using HNSW, creating embedding vectors from a transformer language model inside Vespa, and hybrid text and semantic ranking. It also demonstrates native embedders.
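For illustration, a minimal sketch of such a hybrid query, assuming the schema defines an embedding tensor field, a configured embedder, and a rank profile combining text and vector scores (all names are placeholders):

```python
import requests

# Hybrid retrieval: approximate nearest-neighbor search over an embedding
# field OR-ed with ordinary text matching. Field name (embedding), query
# tensor name (q) and rank profile (hybrid) are placeholders.
response = requests.post(
    "http://localhost:8080/search/",
    json={
        "yql": "select * from doc where "
               "({targetHits: 100}nearestNeighbor(embedding, q)) or userQuery()",
        "query": "tracks about love",
        "input.query(q)": "embed(@query)",  # embed the query text inside Vespa
        "ranking": "hybrid",
        "hits": 10,
    },
    timeout=10,
)
print(response.json())
```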

Indexing multiple vectors per field

The multi-vector indexing with HNSW application demonstrates how to index multiple vectors per document field for better semantic search.
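As a sketch of what feeding such a document can look like, assuming a field declared as a mixed tensor, e.g. tensor&lt;float&gt;(paragraph{}, x[4]), fed through Vespa's /document/v1 API (names and the tiny vectors are illustrative):

```python
import requests

# Feed a document whose "embeddings" field holds one vector per paragraph,
# using the "blocks" short form for a mixed tensor (one mapped + one indexed
# dimension). Namespace, document type, field name and values are made up.
doc = {
    "fields": {
        "title": "A document with several paragraphs",
        "embeddings": {
            "blocks": {
                "0": [0.1, 0.2, 0.3, 0.4],   # vector for paragraph 0
                "1": [0.5, 0.6, 0.7, 0.8],   # vector for paragraph 1
            }
        },
    }
}
response = requests.post(
    "http://localhost:8080/document/v1/mynamespace/doc/docid/1",
    json=doc,
    timeout=10,
)
print(response.status_code, response.json())
```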

Customizing embeddings

The custom-embeddings application demonstrates how to customize frozen document embeddings for downstream tasks. It also includes a deep neural network example.

More advanced sample applications

News search and recommendation tutorial

The news sample application is used in the Vespa tutorial. It demonstrates basic search functionality.

It also demonstrates how to build a recommendation system where approximate nearest neighbor search in a shared user/item embedding space is used to retrieve recommended content for a user. This sample app also demonstrates the use of parent-child relationships.
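A minimal sketch of that retrieval step, assuming news items carry an embedding field and the user's embedding is passed as a query tensor (field, tensor, and profile names are hypothetical):

```python
import requests

# Retrieve recommendation candidates for a user with approximate
# nearest-neighbor search in the shared user/item embedding space.
# The user embedding would come from a profile store; this 3-dim
# vector and all names are made up for illustration.
user_embedding = [0.12, -0.48, 0.33]
response = requests.post(
    "http://localhost:8080/search/",
    json={
        "yql": "select * from news where "
               "{targetHits: 10}nearestNeighbor(embedding, user_embedding)",
        "input.query(user_embedding)": str(user_embedding),
        "ranking": "recommendation",
        "hits": 10,
    },
    timeout=10,
)
print(response.json())
```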

Billion-scale Image Search

This billion-scale-image-search app demonstrates billion-scale image search using CLIP retrieval. It features separation of compute from storage, query-time vector similarity de-duping, PCA dimensionality reduction, and more.

State-of-the-art Text Ranking

This msmarco-ranking application demonstrates how to represent state-of-the-art text ranking using Transformer (BERT) models. It uses the MS Marco passage and document ranking datasets and features bi-encoders, cross-encoders, and late-interaction models (ColBERT).

The passage ranking part uses multiple state-of-the-art pretrained language models in a multi-phase retrieval and ranking pipeline. See also the Pretrained Transformer Models for Search blog post series. There is also a simpler ranking app using the MS Marco relevancy dataset: see text-search, which uses traditional IR text matching with BM25/Vespa nativeRank.
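For the simpler text-search flavor, a minimal sketch of a BM25-ranked query, assuming a passage document type and a rank profile named bm25 (both placeholders):

```python
import requests

# Plain text retrieval ranked with BM25. Document type ("passage") and
# rank profile ("bm25") are placeholders for the app's actual names.
response = requests.post(
    "http://localhost:8080/search/",
    json={
        "yql": "select * from passage where userQuery()",
        "query": "what is dense passage retrieval",
        "ranking": "bm25",
        "hits": 10,
    },
    timeout=10,
)
for hit in response.json().get("root", {}).get("children", []):
    print(f'{hit["relevance"]:.3f}', hit["fields"].get("text", "")[:80])
```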

Next generation E-Commerce Search

The use-case-shopping app creates an end-to-end e-commerce shopping engine, and also bundles a frontend application. It uses the Amazon product dataset and demonstrates building next-generation e-commerce search with Vespa. See also the commerce-product-ranking sample application for using learning-to-rank techniques (including XGBoost and LightGBM) to improve product search ranking.
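As a rough sketch of that learning-to-rank workflow: train an XGBoost model offline and export it as JSON, which a Vespa rank profile can evaluate through the xgboost ranking feature. The training data, feature names, and file path below are illustrative:

```python
import numpy as np
import xgboost as xgb

# Train a tiny model on synthetic relevance data. In the real sample app,
# features would be Vespa rank features collected for labeled query-product
# pairs; these names and labels are made up.
rng = np.random.default_rng(42)
X = rng.random((100, 3))
y = (X @ np.array([0.6, 0.3, 0.1]) > 0.5).astype(int)
dtrain = xgb.DMatrix(
    X, label=y,
    feature_names=["bm25(title)", "bm25(description)", "attribute(popularity)"],
)
model = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=20)

# Exported as JSON and placed under models/ in the application package, the
# model can be referenced in a rank profile as xgboost("product-ranker.json").
model.dump_model("product-ranker.json", dump_format="json")
```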

Extractive Question Answering

The dense-passage-retrieval-with-ann application demonstrates end-to-end question answering using Facebook's Dense Passage Retriever (DPR) for extractive question answering. Extractive question answering extracts the answer from the evidence passage(s).

This sample app uses Vespa's approximate nearest neighbor search to efficiently retrieve text passages from a Wikipedia-based collection of 21M passages. A BERT-based reader component reads the top-ranking passages and produces the textual answer to the question.
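To illustrate the reader stage in isolation (in the sample app it runs inside Vespa), a standalone sketch using the Hugging Face pipeline API with an assumed SQuAD-tuned BERT model:

```python
from transformers import pipeline

# Extractive reader: given a question and a retrieved passage, extract the
# answer span with a confidence score. The model choice is an assumption here.
reader = pipeline("question-answering", model="deepset/bert-base-cased-squad2")
result = reader(
    question="Who composed the opera Carmen?",
    context="Carmen is an opera in four acts by the French composer Georges Bizet.",
)
print(result["answer"], result["score"])  # e.g. "Georges Bizet" with a score
```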

See also Efficient Open Domain Question Answering with Vespa and Scaling Question Answering with Vespa.

Search as you type and search suggestions

The incremental-search application demonstrates search-as-you-type, where matching documents are retrieved for each keystroke. It also demonstrates search suggestions (query auto-completion).
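A minimal sketch of the per-keystroke query, assuming a field configured with prefix matching (the field name and document type are placeholders):

```python
import requests

# One query per keystroke, matching the typed prefix against a field set up
# for prefix matching. Field ("title") and doc type ("doc") are placeholders.
def search_as_you_type(prefix: str):
    yql = f'select * from doc where title contains ({{prefix: true}}"{prefix}")'
    response = requests.post(
        "http://localhost:8080/search/",
        json={"yql": yql, "hits": 5},
        timeout=10,
    )
    return [h["fields"].get("title")
            for h in response.json().get("root", {}).get("children", [])]

for typed in ["v", "ve", "ves"]:
    print(typed, "->", search_as_you_type(typed))
```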

Vespa as ML inference server (model-inference)

The model-inference application demonstrates using Vespa as a stateless ML model inference server, where Vespa takes care of distributing ML models to multiple serving containers, offering horizontal scaling and safe deployment, along with model versioning and a feature processing pipeline. Stateless ML model serving can also be used in state-of-the-art retrieval and ranking pipelines, e.g. for query classification and for encoding text queries into dense vector representations for efficient retrieval using Vespa's approximate nearest neighbor search.
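A minimal sketch of calling the stateless model-evaluation REST API, assuming a deployed model whose input tensor is literally named input (the model name, input name, and values are all placeholders):

```python
import requests

# The /model-evaluation/v1/ API lists and evaluates models deployed in the
# application package. Model name ("transformer"), input name ("input") and
# the token ids below are assumptions for illustration.
base = "http://localhost:8080/model-evaluation/v1"
print(requests.get(base, timeout=10).json())  # enumerate available models

response = requests.get(
    f"{base}/transformer/eval",
    params={"input": "[[101, 2054, 2003, 102]]"},  # hypothetical token ids
    timeout=10,
)
print(response.json())
```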


Note: Applications with a pom.xml are Java/Maven projects and must be built before being deployed. Refer to the Developer Guide for more information.

Contribute to the Vespa sample applications.
