Vector Databases
A vector database is a specialized type of database that plays an essential role in AI applications.
In vector databases, queries differ from those in traditional relational databases. Instead of exact matches, they perform similarity searches: when given a vector as a query, a vector database returns vectors that are "similar" to the query vector. Further details on how this similarity is calculated at a high level are provided in the Vector Similarity section.
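The notion of "similar" is typically a distance metric computed over embedding vectors; cosine similarity is a common choice. The following is a minimal plain-Java sketch of the idea (illustrative only, not part of the Spring AI API; it assumes non-zero vectors):

```java
public class CosineSimilarity {

    // Cosine similarity: dot(a, b) / (|a| * |b|).
    // Returns 1.0 for vectors pointing in the same direction, 0.0 for orthogonal ones.
    public static double cosine(float[] a, float[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }
}
```

A vector database applies a metric like this (often via an approximate nearest-neighbor index rather than a brute-force loop) to find the stored vectors closest to the query vector.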
Vector databases are used to integrate your data with AI models. The first step in their usage is to load your data into a vector database. Then, when a user query is to be sent to the AI model, a set of similar documents is first retrieved. These documents then serve as the context for the user’s question and are sent to the AI model, along with the user’s query. This technique is known as Retrieval Augmented Generation (RAG).
The following sections describe the Spring AI interface for using multiple vector database implementations and some high-level sample usage.
The last section is intended to demystify the underlying approach of similarity searching in vector databases.
API Overview
This section serves as a guide to the VectorStore interface and its associated classes within the Spring AI framework.
Spring AI offers an abstracted API for interacting with vector databases through the VectorStore interface.
Here is the VectorStore interface definition:
public interface VectorStore {

    void add(List<Document> documents);

    Optional<Boolean> delete(List<String> idList);

    List<Document> similaritySearch(String query);

    List<Document> similaritySearch(SearchRequest request);
}
and the related SearchRequest builder:
public class SearchRequest {

    public final String query;
    private int topK = 4;
    private double similarityThreshold = SIMILARITY_THRESHOLD_ALL;
    private Filter.Expression filterExpression;

    public static SearchRequest query(String query) { return new SearchRequest(query); }

    private SearchRequest(String query) { this.query = query; }

    public SearchRequest withTopK(int topK) {...}
    public SearchRequest withSimilarityThreshold(double threshold) {...}
    public SearchRequest withSimilarityThresholdAll() {...}
    public SearchRequest withFilterExpression(Filter.Expression expression) {...}
    public SearchRequest withFilterExpression(String textExpression) {...}

    public String getQuery() {...}
    public int getTopK() {...}
    public double getSimilarityThreshold() {...}
    public Filter.Expression getFilterExpression() {...}
}
To insert data into the vector database, encapsulate it within a Document object. The Document class encapsulates content from a data source, such as a PDF or Word document, and includes text represented as a string. It also contains metadata in the form of key-value pairs, including details such as the filename.
Upon insertion into the vector database, the text content is transformed into a numerical array, or a float[], known as a vector embedding, using an embedding model. Embedding models, such as Word2Vec, GloVe, and BERT, or OpenAI's text-embedding-ada-002, are used to convert words, sentences, or paragraphs into these vector embeddings.
The vector database's role is to store and facilitate similarity searches for these embeddings; it does not generate the embeddings itself. For creating vector embeddings, the EmbeddingModel should be utilized.
The similaritySearch methods in the interface allow for retrieving documents similar to a given query string. These methods can be fine-tuned by using the following parameters:

- k: An integer that specifies the maximum number of similar documents to return. This is often referred to as a 'top K' search, or 'K nearest neighbors' (KNN).
- threshold: A double value ranging from 0 to 1, where values closer to 1 indicate higher similarity. For example, if you set a threshold of 0.75, only documents with a similarity above this value are returned.
- Filter.Expression: A class used for passing a fluent DSL (Domain-Specific Language) expression that functions similarly to a 'where' clause in SQL, but it applies exclusively to the metadata key-value pairs of a Document.
- filterExpression: An external DSL based on ANTLR4 that accepts filter expressions as strings. For example, with metadata keys like country, year, and isActive, you could use an expression such as: country == 'UK' && year >= 2020 && isActive == true.

Find more information on the Filter.Expression in the Metadata Filters section.
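To make the interaction between topK and threshold concrete, here is a small self-contained plain-Java sketch (independent of Spring AI) that ranks a set of pre-scored results: results below the threshold are dropped, the remainder are sorted by descending similarity, and at most topK are returned:

```java
import java.util.*;
import java.util.stream.*;

public class TopKFilter {

    // Keep results whose similarity meets the threshold,
    // sorted by descending score, capped at topK entries.
    public static List<String> topK(Map<String, Double> scored, int topK, double threshold) {
        return scored.entrySet().stream()
                .filter(e -> e.getValue() >= threshold)
                .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                .limit(topK)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
```

A real vector store performs the scoring and ranking internally (usually against an index), but the topK and threshold parameters shape the result set in exactly this way.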
Schema Initialization
Some vector stores require their backend schema to be initialized before usage. It will not be initialized for you by default. You must opt in by passing a boolean for the appropriate constructor argument or, if using Spring Boot, setting the appropriate initialize-schema property to true in application.properties or application.yml. Check the documentation for the vector store you are using for the specific property name.
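For example, when using the PgVector store with Spring Boot, the opt-in property looks like the following (the pgvector segment of the property name is specific to that store; other stores use their own prefix, so verify the exact name in that store's documentation):

```properties
spring.ai.vectorstore.pgvector.initialize-schema=true
```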
Batching Strategy
When working with vector stores, it’s often necessary to embed large numbers of documents. While it might seem straightforward to make a single call to embed all documents at once, this approach can lead to issues. Embedding models process text as tokens and have a maximum token limit, often referred to as the context window size. This limit restricts the amount of text that can be processed in a single embedding request. Attempting to embed too many tokens in one call can result in errors or truncated embeddings.
To address this token limit, Spring AI implements a batching strategy. This approach breaks down large sets of documents into smaller batches that fit within the embedding model’s maximum context window. Batching not only solves the token limit issue but can also lead to improved performance and more efficient use of API rate limits.
Spring AI provides this functionality through the BatchingStrategy interface, which allows for processing documents in sub-batches based on their token counts.
The core BatchingStrategy interface is defined as follows:

public interface BatchingStrategy {
    List<List<Document>> batch(List<Document> documents);
}

This interface defines a single method, batch, which takes a list of documents and returns a list of document batches.
Default Implementation
Spring AI provides a default implementation called TokenCountBatchingStrategy. This strategy batches documents based on their token counts, ensuring that each batch does not exceed a calculated maximum input token count.
Key features of TokenCountBatchingStrategy:

- Uses OpenAI's max input token count (8191) as the default upper limit.
- Incorporates a reserve percentage (default 10%) to provide a buffer for potential overhead.
- Calculates the actual max input token count as: actualMaxInputTokenCount = originalMaxInputTokenCount * (1 - RESERVE_PERCENTAGE)

The strategy estimates the token count for each document, groups them into batches without exceeding the max input token count, and throws an exception if a single document exceeds this limit.
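With the defaults, the usable budget works out to roughly 8191 * (1 - 0.10) ≈ 7371 tokens per batch. The grouping step can be sketched in plain Java as a greedy packer; this is an illustrative stand-in with a naive whitespace token estimator, not the library's actual implementation (which uses a real tokenizer):

```java
import java.util.*;

public class SimpleBatcher {

    // Naive token estimate: whitespace-separated words.
    // Real estimators (e.g. tokenizer-based ones) count subword tokens instead.
    static int estimateTokens(String text) {
        return text.isBlank() ? 0 : text.trim().split("\\s+").length;
    }

    // Greedily pack documents into batches whose total estimate stays within maxTokens.
    // A single document larger than the budget cannot be split, so it is rejected.
    public static List<List<String>> batch(List<String> docs, int maxTokens) {
        List<List<String>> batches = new ArrayList<>();
        List<String> current = new ArrayList<>();
        int used = 0;
        for (String doc : docs) {
            int tokens = estimateTokens(doc);
            if (tokens > maxTokens) {
                throw new IllegalArgumentException("Document exceeds max token count");
            }
            if (used + tokens > maxTokens && !current.isEmpty()) {
                batches.add(current);
                current = new ArrayList<>();
                used = 0;
            }
            current.add(doc);
            used += tokens;
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }
}
```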
You can also customize the TokenCountBatchingStrategy to better suit your specific requirements. This can be done by creating a new instance with custom parameters in a Spring Boot @Configuration class.
Here's an example of how to create a custom TokenCountBatchingStrategy bean:
@Configuration
public class EmbeddingConfig {

    @Bean
    public BatchingStrategy customTokenCountBatchingStrategy() {
        return new TokenCountBatchingStrategy(
            EncodingType.CL100K_BASE,  // Specify the encoding type
            8000,                      // Set the maximum input token count
            0.1                        // Set the reserve percentage
        );
    }
}
In this configuration:

- EncodingType.CL100K_BASE: Specifies the encoding type used for tokenization. This encoding type is used by the JTokkitTokenCountEstimator to accurately estimate token counts.
- 8000: Sets the maximum input token count. This value should be less than or equal to the maximum context window size of your embedding model.
- 0.1: Sets the reserve percentage, that is, the percentage of tokens to reserve from the max input token count. This creates a buffer for potential token count increases during processing.

By default, this constructor uses Document.DEFAULT_CONTENT_FORMATTER for content formatting and MetadataMode.NONE for metadata handling. If you need to customize these parameters, you can use the full constructor with additional parameters.
Once defined, this custom TokenCountBatchingStrategy bean will be automatically used by the EmbeddingModel implementations in your application, replacing the default strategy.
The TokenCountBatchingStrategy internally uses a TokenCountEstimator (specifically, JTokkitTokenCountEstimator) to calculate token counts for efficient batching. This ensures accurate token estimation based on the specified encoding type.
Additionally, TokenCountBatchingStrategy provides flexibility by allowing you to pass in your own implementation of the TokenCountEstimator interface. This feature enables you to use custom token counting strategies tailored to your specific needs. For example:
TokenCountEstimator customEstimator = new YourCustomTokenCountEstimator();
TokenCountBatchingStrategy strategy = new TokenCountBatchingStrategy(
    customEstimator,
    8000,   // maxInputTokenCount
    0.1,    // reservePercentage
    Document.DEFAULT_CONTENT_FORMATTER,
    MetadataMode.NONE
);
Custom Implementation
While TokenCountBatchingStrategy provides a robust default implementation, you can customize the batching strategy to fit your specific needs. This can be done through Spring Boot's auto-configuration.
To customize the batching strategy, define a BatchingStrategy bean in your Spring Boot application:
@Configuration
public class EmbeddingConfig {

    @Bean
    public BatchingStrategy customBatchingStrategy() {
        return new CustomBatchingStrategy();
    }
}
This custom BatchingStrategy will then be automatically used by the EmbeddingModel implementations in your application.
Vector stores supported by Spring AI are configured to use the default TokenCountBatchingStrategy. The SAP Hana vector store is not currently configured for batching.
VectorStore Implementations
These are the available implementations of the VectorStore interface:
- Azure Vector Search - The Azure vector store.
- Apache Cassandra - The Apache Cassandra vector store.
- Chroma Vector Store - The Chroma vector store.
- Elasticsearch Vector Store - The Elasticsearch vector store.
- GemFire Vector Store - The GemFire vector store.
- Milvus Vector Store - The Milvus vector store.
- MongoDB Atlas Vector Store - The MongoDB Atlas vector store.
- Neo4j Vector Store - The Neo4j vector store.
- OpenSearch Vector Store - The OpenSearch vector store.
- Oracle Vector Store - The Oracle Database vector store.
- PgVector Store - The PostgreSQL/PGVector vector store.
- Pinecone Vector Store - The PineCone vector store.
- Qdrant Vector Store - The Qdrant vector store.
- Redis Vector Store - The Redis vector store.
- SAP Hana Vector Store - The SAP HANA vector store.
- Typesense Vector Store - The Typesense vector store.
- Weaviate Vector Store - The Weaviate vector store.
- SimpleVectorStore - A simple implementation of persistent vector storage, good for educational purposes.
More implementations may be supported in future releases.
If you have a vector database that needs to be supported by Spring AI, open an issue on GitHub or, even better, submit a pull request with an implementation.
Information on each of the VectorStore implementations can be found in the subsections of this chapter.
Example Usage
To compute the embeddings for a vector database, you need to pick an embedding model that matches the higher-level AI model being used. For example, with OpenAI's ChatGPT, we use the OpenAiEmbeddingModel and a model named text-embedding-ada-002.
The Spring Boot starter's auto-configuration for OpenAI makes an implementation of EmbeddingModel available in the Spring application context for dependency injection.
The general usage of loading data into a vector store is something you would do in a batch-like job, by first loading data into Spring AI's Document class and then calling the add method.
Given a String reference to a source file that represents a JSON file with data we want to load into the vector database, we use Spring AI's JsonReader to load specific fields in the JSON, which splits them up into small pieces and then passes those small pieces to the vector store implementation. The VectorStore implementation computes the embeddings and stores the JSON and the embedding in the vector database:
@Autowired
VectorStore vectorStore;

void load(String sourceFile) {
    JsonReader jsonReader = new JsonReader(new FileSystemResource(sourceFile),
            "price", "name", "shortDescription", "description", "tags");
    List<Document> documents = jsonReader.get();
    this.vectorStore.add(documents);
}
Later, when a user question is passed into the AI model, a similarity search is done to retrieve similar documents, which are then 'stuffed' into the prompt as context for the user's question.
String question = <question from user>;
List<Document> similarDocuments = this.vectorStore.similaritySearch(question);
Additional options can be passed into the similaritySearch method to define how many documents to retrieve and a threshold for the similarity search.
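For example, using the SearchRequest builder shown earlier (the method names are those listed in the interface above; the topK and threshold values here are illustrative):

```java
List<Document> similarDocuments = vectorStore.similaritySearch(
        SearchRequest.query(question)
                .withTopK(5)
                .withSimilarityThreshold(0.75));
```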
Metadata Filters
This section describes various filters that you can use against the results of a query.
Filter String
You can pass an SQL-like filter expression as a String to one of the similaritySearch overloads.
Consider the following examples:

- "country == 'BG'"
- "genre == 'drama' && year >= 2020"
- "genre in ['comedy', 'documentary', 'drama']"
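Such a filter string is applied through the SearchRequest builder shown earlier; for example (with illustrative query and metadata values):

```java
List<Document> results = vectorStore.similaritySearch(
        SearchRequest.query("recent dramas")
                .withFilterExpression("genre == 'drama' && year >= 2020"));
```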
Filter.Expression
You can create an instance of Filter.Expression with a FilterExpressionBuilder that exposes a fluent API. A simple example is as follows:

FilterExpressionBuilder b = new FilterExpressionBuilder();
Expression expression = b.eq("country", "BG").build();
You can build up sophisticated expressions by using the following operators:
EQUALS: '=='
MINUS: '-'
PLUS: '+'
GT: '>'
GE: '>='
LT: '<'
LE: '<='
NE: '!='
You can combine expressions by using the following operators:
AND: 'AND' | 'and' | '&&';
OR: 'OR' | 'or' | '||';
Consider the following example:
Expression exp = b.and(b.eq("genre", "drama"), b.gte("year", 2020)).build();
You can also use the following operators:
IN: 'IN' | 'in';
NIN: 'NIN' | 'nin';
NOT: 'NOT' | 'not';
Consider the following example:
Expression exp = b.and(b.in("genre", "comedy", "documentary"), b.not(b.eq("country", "BG"))).build();