Hybrid Search - Quickstart

This tutorial creates a hybrid text search application that combines traditional keyword matching with semantic vector search (dense retrieval). It also demonstrates Vespa's native embedder functionality.

Refer to the troubleshooting section if you run into problems while following this guide.

Install pyvespa, start the Docker daemon, and validate that a minimum of 6 GB of memory is available:

!pip3 install pyvespa
!docker info | grep "Total Memory"

Create an application package

The application package has all the Vespa configuration files - create one from scratch:

from vespa.package import (ApplicationPackage, Field, Schema, Document, HNSW,
    RankProfile, Component, Parameter, FieldSet, GlobalPhaseRanking, Function)

package = ApplicationPackage(
    name="hybridsearch",
    schema=[Schema(
        name="doc",
        document=Document(fields=[
            Field(name="id", type="string", indexing=["summary"]),
            Field(name="title", type="string", indexing=["index", "summary"], index="enable-bm25"),
            Field(name="body", type="string", indexing=["index", "summary"], index="enable-bm25", bolding=True),
            Field(name="embedding", type="tensor<float>(x[384])",
                indexing=["input title . \" \" . input body", "embed", "index", "attribute"],
                ann=HNSW(distance_metric="angular"),
                is_document_field=False),
        ]),
        fieldsets=[FieldSet(name="default", fields=["title", "body"])],
        rank_profiles=[
            RankProfile(name="bm25",
                inputs=[("query(q)", "tensor<float>(x[384])")],
                functions=[Function(name="bm25sum", expression="bm25(title) + bm25(body)")],
                first_phase="bm25sum"),
            RankProfile(name="semantic",
                inputs=[("query(q)", "tensor<float>(x[384])")],
                first_phase="closeness(field, embedding)"),
            RankProfile(name="fusion",
                inherits="bm25",
                inputs=[("query(q)", "tensor<float>(x[384])")],
                first_phase="closeness(field, embedding)",
                global_phase=GlobalPhaseRanking(
                    expression="reciprocal_rank_fusion(bm25sum, closeness(field, embedding))",
                    rerank_count=1000)),
        ],
    )],
    components=[Component(id="e5", type="hugging-face-embedder",
        parameters=[
            Parameter("transformer-model", {"url": "https://github.com/vespa-engine/sample-apps/raw/master/simple-semantic-search/model/e5-small-v2-int8.onnx"}),
            Parameter("tokenizer-model", {"url": "https://raw.githubusercontent.com/vespa-engine/sample-apps/master/simple-semantic-search/model/tokenizer.json"}),
        ])],
)

Note that the application name cannot contain "-" or "_".
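The fusion rank profile's global phase combines the BM25 ranking and the vector-similarity ranking with reciprocal rank fusion (RRF). As a minimal sketch of the idea in plain Python (the k=60 constant is the commonly cited default from the RRF literature, and the toy document IDs are illustrative, not Vespa internals):

```python
def rrf(rankings, k=60):
    # Each document earns 1 / (k + rank) from every ranking it
    # appears in; contributions are summed across rankings.
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Sort by fused score, best first
    return sorted(scores, key=scores.get, reverse=True)

bm25_ranking = ["d1", "d2", "d3"]      # order from bm25sum
semantic_ranking = ["d3", "d1", "d4"]  # order from closeness(field, embedding)
print(rrf([bm25_ranking, semantic_ranking]))
# ['d1', 'd3', 'd2', 'd4']
```

Documents ranked highly by both strategies (here d1 and d3) float to the top even though neither ranking alone agrees on the winner.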

Deploy the Vespa application

Deploy the package on the local machine using Docker, without leaving the notebook, by creating an instance of VespaDocker. VespaDocker connects to the local Docker daemon socket and starts the Vespa docker image.

If this step fails, check that the Docker daemon is running and that the Docker daemon socket can be used by clients (configurable under advanced settings in Docker Desktop).

from vespa.deployment import VespaDocker

vespa_docker = VespaDocker()
app = vespa_docker.deploy(application_package=package)

app now holds a reference to a Vespa instance.

Feeding documents to Vespa

In this example we use the HF Datasets library to stream the BeIR/nfcorpus dataset and index it in our newly deployed Vespa instance. Read more about NFCorpus:

NFCorpus is a full-text English retrieval data set for Medical Information Retrieval.

The following uses the streaming option of datasets to stream the data without downloading all the contents locally. The map functionality allows us to convert the dataset fields into the feed format expected by pyvespa: a dict with the keys id and fields:

{ "id": "vespa-document-id", "fields": {"vespa_field": "vespa-field-value"}}

from datasets import load_dataset

dataset = load_dataset("BeIR/nfcorpus", "corpus", split="corpus", streaming=True)
vespa_feed = dataset.map(lambda x: {"id": x["_id"], "fields": { "title": x["title"], "body": x["text"], "id": x["_id"]}})
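To see the shape this mapping produces, we can apply the same transform to a made-up sample record (the field values below are illustrative, not taken from NFCorpus):

```python
# The same transform as the dataset.map lambda above, applied to a
# made-up sample record for illustration.
def to_vespa_format(x):
    return {
        "id": x["_id"],
        "fields": {"title": x["title"], "body": x["text"], "id": x["_id"]},
    }

sample = {"_id": "MED-10", "title": "A sample title", "text": "A sample body."}
print(to_vespa_format(sample))
# {'id': 'MED-10', 'fields': {'title': 'A sample title', 'body': 'A sample body.', 'id': 'MED-10'}}
```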

Now we can feed to Vespa using feed_iterable, which accepts any Iterable and an optional callback function where we can check the outcome of each operation. The application is configured to use embedding functionality, which produces a vector embedding from the concatenation of the title and body input fields. This step is computationally expensive. Read more about embedding inference in Vespa in Accelerating Transformer-based Embedding Retrieval with Vespa.

from vespa.io import VespaResponse, VespaQueryResponse

def callback(response:VespaResponse, id:str):
    if not response.is_successful():
        print(f"Error when feeding document {id}: {response.get_json()}")

app.feed_iterable(vespa_feed, schema="doc", namespace="tutorial", callback=callback)

Querying Vespa

Using the Vespa Query Language we can query the indexed data.

  • Use a context manager, with app.syncio() as session, to handle connection pooling (best practice)

  • The query method accepts any valid Vespa query API parameter in **kwargs

  • Vespa API parameter names that contain a . must be sent as dict parameters in the body method argument

The following searches for "How Fruits and Vegetables Can Treat Asthma?" using different retrieval and ranking strategies.

import pandas as pd

def display_hits_as_df(response: VespaQueryResponse, fields) -> pd.DataFrame:
    records = []
    for hit in response.hits:
        record = {}
        for field in fields:
            record[field] = hit["fields"][field]
        records.append(record)
    return pd.DataFrame(records)
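As a sketch of what those queries can look like, the bodies below target the three rank profiles defined in the application package above (bm25, semantic, and fusion). The YQL strings and the embed(...) input syntax follow Vespa query API conventions; sending them requires the deployed app:

```python
# Query bodies for the three ranking strategies. Rank-profile names
# match the application package defined earlier in this guide.
query = "How Fruits and Vegetables Can Treat Asthma?"

keyword_body = {
    "yql": "select * from sources * where userQuery() limit 5",
    "query": query,
    "ranking": "bm25",
}

semantic_body = {
    "yql": "select * from sources * where ({targetHits:1000}nearestNeighbor(embedding, q)) limit 5",
    "query": query,
    "ranking": "semantic",
    "input.query(q)": f"embed({query})",
}

hybrid_body = {
    "yql": "select * from sources * where userQuery() or ({targetHits:1000}nearestNeighbor(embedding, q)) limit 5",
    "query": query,
    "ranking": "fusion",
    "input.query(q)": f"embed({query})",
}

# With the deployed app, each body can be sent via a session, e.g.:
#   with app.syncio() as session:
#       response = session.query(body=hybrid_body)
#       print(display_hits_as_df(response, ["id", "title"]))
```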



Next steps

This is just an introduction to the capabilities of Vespa and pyvespa. Browse the site to learn more about schemas, feeding and queries - find more complex applications in examples.