Abstract, extensible framework for benchmarking vector databases and models across different datasets for Image Search and caption generation.
Install just the core framework (no adapters):

```bash
pip install imsearch_eval
```

Triton adapters (for Triton Inference Server):

```bash
pip install imsearch_eval[triton]
```

HuggingFace adapters (for loading datasets from Hugging Face Hub):

```bash
pip install imsearch_eval[huggingface]
```

Weaviate adapters (includes Triton, as WeaviateAdapter uses TritonModelUtils):

```bash
pip install imsearch_eval[weaviate]
```

Milvus adapters (includes Triton, as MilvusAdapter uses TritonModelUtils):

```bash
pip install imsearch_eval[milvus]
```

NRP adapters (via the NRP Envoy AI Gateway; see the NRP section below for available models):

```bash
pip install imsearch_eval[nrp]
```

All adapters:

```bash
pip install imsearch_eval[all]
```

Development dependencies:

```bash
pip install imsearch_eval[dev]
```

To install from source:

```bash
git clone https://github.com/waggle-sensor/imsearch_eval
cd imsearch_eval
pip install -e .  # Core only
# Or with extras:
pip install -e ".[triton]"
pip install -e ".[huggingface]"
pip install -e ".[weaviate]"
pip install -e ".[milvus]"
pip install -e ".[nrp]"
pip install -e ".[all]"
```

Quick start:

```python
from imsearch_eval import BenchmarkEvaluator
from imsearch_eval.adapters import WeaviateAdapter, TritonModelProvider
import tritonclient.grpc as TritonClient

# Initialize clients
weaviate_client = WeaviateAdapter.init_client(host="127.0.0.1", port="8080")
triton_client = TritonClient.InferenceServerClient(url="triton:8001")

# Create adapters
vector_db = WeaviateAdapter(
    weaviate_client=weaviate_client,
    triton_client=triton_client
)
model_provider = TritonModelProvider(triton_client=triton_client)

# Use in evaluator (requires a BenchmarkDataset implementation)
evaluator = BenchmarkEvaluator(
    vector_db=vector_db,
    model_provider=model_provider,
    dataset=dataset,  # Your BenchmarkDataset implementation
    collection_name="my_collection",
    query_method="clip_hybrid_query"
)
```

The framework is organized into two main components:
- Framework (`imsearch_eval/framework/`): Abstract interfaces and evaluation logic (dataset-agnostic, model-agnostic, vector database-agnostic)
- Adapters (`imsearch_eval/adapters/`): Shared concrete implementations for vector databases and models
```
imsearch_eval/
├── framework/           # Abstract interfaces and evaluation logic
│   ├── interfaces.py    # VectorDBAdapter, ModelProvider, Query, BenchmarkDataset, etc.
│   ├── model_utils.py   # ModelUtils abstract interface
│   └── evaluator.py     # BenchmarkEvaluator class
│
└── adapters/            # Shared concrete implementations
    ├── __init__.py      # Exports all adapters
    ├── triton.py        # TritonModelProvider, TritonModelUtils
    ├── nrp.py           # NRPModelProvider, NRPModelUtils
    ├── weaviate.py      # WeaviateAdapter, WeaviateQuery
    └── milvus.py        # MilvusAdapter, MilvusQuery
```
- `VectorDBAdapter`: Abstract interface for vector databases
  - Methods: `init_client()`, `search()`, `create_collection()`, `delete_collection()`, `insert_data()`, `close()`
- `ModelProvider`: Abstract interface for model providers
  - Methods: `get_embedding()`, `generate_caption()`
- `Query`: Abstract interface for query classes (used by vector DB adapters)
  - Method: `query(near_text, collection_name, limit, query_method, **kwargs)`, a generic query method
  - Each vector DB implementation can define its own query types via the `query_method` parameter
- `ModelUtils`: Abstract interface for model utilities (in `imsearch_eval.framework.model_utils`)
  - Methods: `calculate_embedding()`, `generate_caption()`
- `BenchmarkDataset`: Abstract interface for benchmark datasets
- `DataLoader`: Abstract interface for loading data into vector DBs
- `Config`: Abstract interface for configuration/hyperparameters
- `QueryResult`: Container for query results
- `fuse_embeddings()`: Utility function to combine image and text embeddings
  - Parameters: `img_emb` (numpy array), `txt_emb` (numpy array), `alpha` (float, default: 0.5)
  - Returns: Normalized fused embedding vector
  - Useful for combining multimodal embeddings with a weighted average
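The fusion described above amounts to a weighted average followed by L2 normalization. A minimal sketch of that math (`fuse_embeddings_sketch` is an illustrative stand-in, not the library's actual implementation):

```python
import numpy as np

def fuse_embeddings_sketch(img_emb, txt_emb, alpha=0.5):
    # Weighted average of the two embeddings, then L2-normalize.
    # Illustrative stand-in for imsearch_eval's fuse_embeddings;
    # the library's exact implementation may differ in details.
    fused = alpha * np.asarray(img_emb) + (1 - alpha) * np.asarray(txt_emb)
    norm = np.linalg.norm(fused)
    return fused / norm if norm > 0 else fused

img_emb = np.array([1.0, 0.0])
txt_emb = np.array([0.0, 1.0])
fused = fuse_embeddings_sketch(img_emb, txt_emb, alpha=0.5)
# With alpha=0.5 the result is the normalized midpoint, a unit vector.
```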
- `TritonModelUtils`: Triton-based implementation of the `ModelUtils` interface
- `TritonModelProvider`: Triton inference server model provider
- Dependencies: `tritonclient[grpc]`
- `WeaviateQuery`: Implements the `Query` interface for Weaviate
  - Generic `query()` method routes to specific methods based on the `query_method` parameter
  - Also provides Weaviate-specific methods: `hybrid_query()`, `colbert_query()`, `clip_hybrid_query()`
- `WeaviateAdapter`: Implements the `VectorDBAdapter` interface for Weaviate
  - Uses `WeaviateQuery` internally for search operations
- Dependencies: `weaviate-client`, `tritonclient[grpc]` (for embedding generation)
- `MilvusQuery`: Implements the `Query` interface for Milvus
  - Generic `query()` method routes to specific methods based on the `query_method` parameter
  - Supports hybrid search combining dense and sparse vectors (BM25)
  - Provides methods: `clip_hybrid_query()`, `vector_query()`
- `MilvusAdapter`: Implements the `VectorDBAdapter` interface for Milvus
  - Uses `MilvusQuery` internally for search operations
  - Supports multi-vector search with native hybrid search capabilities
- Dependencies: `pymilvus>=2.6.6`, `tritonclient[grpc]` (for embedding generation)
- `HuggingFaceDataset`: Implements the `BenchmarkDataset` interface for loading datasets from Hugging Face Hub
  - Loads datasets directly from Hugging Face using the `datasets` library
  - Supports dataset splits, sampling, and random seeding
  - Provides both `load()` (returns a pandas DataFrame) and `load_as_dataset()` (returns a Hugging Face Dataset) methods
- Dependencies: `datasets`, `pandas`
- `NRPModelUtils`: NRP-based implementation of `ModelUtils`
- `NRPModelProvider`: Model provider for the NRP Envoy AI Gateway
- Caption models: `gemma3`, `qwen3`, `gpt-oss`, `kimi`, `glm-4.7`, `minimax-m2`, `glm-v`
- Dependencies: `openai`
- `BenchmarkEvaluator`: Main evaluation class that works with any combination of adapters and benchmark datasets
  - Computes metrics: NDCG (for each available score column), precision, recall, accuracy, MRR (mean reciprocal rank), and Success@k (hit rate)
  - Supports parallel query processing with configurable workers
  - Automatically computes NDCG for all available score columns (e.g., `rerank_score`, `clip_score`, `score`, `distance`)
  - Supports numeric relevance scores (not just binary 0/1)
  - Only counts relevance for correctly retrieved results (results that belong to the query)
Your `BenchmarkDataset.load()` must return a pandas DataFrame. Column names can differ, but the meaning of the fields below must stay constant because they're used to compute metrics.

`BenchmarkEvaluator` gets the required column names from your `BenchmarkDataset`:

- `get_query_column()` → query text
- `get_query_id_column()` → query/group id (a unique id for each unique query)
- `get_relevance_column()` → relevance label (1/0)
- `get_metadata_columns()` → optional metadata copied into the per-query stats output
- Query text: The text sent to `VectorDBAdapter.search(...)`.
- Query id: A stable identifier used to group rows belonging to the same query.
- Relevance label: Relevance score for each row/item. Can be binary (1 = relevant, 0 = not relevant) or numeric (e.g., 0.0-1.0 for graded relevance). Used for precision/recall/NDCG. The evaluator sums relevance values, so numeric scores are supported.
- Image: A file path/URL/bytes you use when building embeddings or generating captions (consumed by your `BenchmarkDataset` / adapter, not the core evaluator).
- Ranking score(s): If your search results include score columns like `rerank_score`, `clip_score`, `score`, or `distance`, the evaluator will compute NDCG for each available score column. The default order of preference is `["rerank_score", "clip_score", "score", "distance"]`. You can customize this via the `score_columns` parameter.
- License / rights_holder: Useful when combining datasets, otherwise optional.
- Additional metadata: Any extra fields you want to use for result breakdowns (e.g., animal species category). These do not change the metrics; they're just copied into the results table.
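For concreteness, a loaded benchmark DataFrame satisfying this contract could look like the following (column names and values are illustrative, matching the accessor methods described above):

```python
import pandas as pd

# Each row is one (query, candidate image) pair; rows sharing a
# query_id belong to the same query. Paths and labels are made up.
df = pd.DataFrame({
    "query":    ["a red fox in snow", "a red fox in snow", "city skyline at night"],
    "query_id": [0, 0, 1],
    "relevant": [1, 0, 1],                      # binary, or graded (e.g., 0.7)
    "image":    ["imgs/fox1.jpg", "imgs/dog3.jpg", "imgs/nyc2.jpg"],
    "category": ["animal", "animal", "urban"],  # optional metadata column
})
```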
1. Import adapters:

```python
from imsearch_eval.adapters import WeaviateAdapter, MilvusAdapter, TritonModelProvider
```

2. Initialize clients:

```python
import tritonclient.grpc as TritonClient

# For Weaviate
weaviate_client = WeaviateAdapter.init_client(host="127.0.0.1", port="8080")

# For Milvus
milvus_client = MilvusAdapter.init_client(uri="http://localhost:19530")

triton_client = TritonClient.InferenceServerClient(url="triton:8001")
```

3. Create adapters:

```python
# For Weaviate
vector_db = WeaviateAdapter(
    weaviate_client=weaviate_client,
    triton_client=triton_client
)

# For Milvus
vector_db = MilvusAdapter(
    milvus_client=milvus_client,
    triton_client=triton_client
)

model_provider = TritonModelProvider(triton_client=triton_client)
```

4. Create a benchmark dataset (you need to implement this):

```python
from imsearch_eval import BenchmarkDataset
import pandas as pd

class MyBenchmarkDataset(BenchmarkDataset):
    def load(self, split="test", **kwargs) -> pd.DataFrame:
        # Load your dataset
        return dataset_df

    def get_query_column(self) -> str:
        return "query"

    def get_query_id_column(self) -> str:
        return "query_id"

    def get_relevance_column(self) -> str:
        return "relevant"

    def get_metadata_columns(self) -> list:
        return ["category", "type"]
```

5. Create the evaluator and run:

```python
from imsearch_eval import BenchmarkEvaluator

dataset = MyBenchmarkDataset()
evaluator = BenchmarkEvaluator(
    vector_db=vector_db,
    model_provider=model_provider,
    dataset=dataset,
    collection_name="my_collection",
    query_method="clip_hybrid_query",  # Query type (e.g., "clip_hybrid_query" for Weaviate/Milvus)
    limit=25,                 # Maximum number of results per query (default: 25)
    target_vector="default",  # Vector space to search in (default: "default")
    score_columns=["rerank_score", "clip_score", "score", "distance"],  # Columns to compute NDCG for
    query_parameters={}       # Additional parameters passed to the query method
)

# Evaluate queries with parallel processing
results, stats = evaluator.evaluate_queries(
    split="test",
    query_batch_size=100,  # Number of queries per batch (default: 100)
    workers=0,             # Number of parallel workers (0 = use all CPUs, default: 0)
    sample_size=None,      # Limit number of samples (None = all, default: None)
    seed=None              # Random seed for sampling (default: None)
)
```
`BenchmarkEvaluator` parameters:

- `vector_db`: Vector database adapter instance (required)
- `model_provider`: Model provider instance (required)
- `dataset`: Benchmark dataset instance (required)
- `collection_name`: Name of the collection to search (default: `"default"`)
- `query_method`: Method/type of query to perform (default: `None`)
  - For Weaviate: can be `"clip_hybrid_query"`, `"hybrid_query"`, `"colbert_query"`, or a custom callable function
  - For Milvus: can be `"clip_hybrid_query"`, `"vector_query"`, or a custom callable function
  - For other vector DBs: implement your own query types in your `Query` implementation
  - The `Query.query()` method routes to the appropriate implementation based on `query_method`
  - `query_method` can also be a callable function for custom query logic
- `limit`: Maximum number of results to return per query (default: `25`)
- `target_vector`: Name of the vector space to search in (default: `"default"`). Useful for multi-vector search scenarios.
- `score_columns`: List of column names to try for NDCG computation, in order of preference (default: `["rerank_score", "clip_score", "score", "distance"]`). The evaluator computes NDCG for each column that exists in the results.
- `query_parameters`: Additional parameters passed to the specific query method (default: `{}`). These are passed as `**kwargs` to the query method.

`evaluate_queries()` parameters:

- `query_batch_size`: Number of queries to submit in one batch (default: `100`)
- `dataset`: Optional pre-loaded dataset DataFrame. If `None`, will load using `dataset.load()` (default: `None`)
- `split`: Dataset split to use if loading the dataset (default: `"test"`)
- `sample_size`: Number of samples to load from the dataset. If `None`, loads all samples (default: `None`)
- `seed`: Seed for the random number generator when sampling. If `None`, uses a random seed (default: `None`)
- `workers`: Number of workers for parallel processing. If `0`, uses all available CPUs (default: `0`)
The evaluator computes the following metrics for each query:

- `correctly_returned`: Number of results that belong to the query (i.e., `queried_on_query_id == query_id`)
- `incorrectly_returned`: Number of results that don't belong to the query
- `relevant_images`: Sum of relevance scores for correctly retrieved results (supports numeric relevance, not just binary)
- `non_relevant_images`: Total results minus relevant results
- `accuracy`: `correctly_returned / total_results`, the proportion of results that belong to the query
- `precision`: `relevant_images / total_results`, the proportion of retrieved results that are relevant
- `recall`: `relevant_images / relevant_in_dataset`, the proportion of relevant items in the dataset that were retrieved
- `reciprocal_rank`: 1 / (1-based rank of the first relevant result in the returned list); 0 if no relevant result in top-k. MRR = `query_stats_df["reciprocal_rank"].mean()` across queries.
- `hit`: 1 if at least one relevant result is in the results, else 0. Success@k (hit rate) = `query_stats_df["hit"].mean()` across queries. k is the evaluator's `limit` or the number of returned results.
- `{score_column}_NDCG`: Normalized Discounted Cumulative Gain computed for each score column found in results (e.g., `rerank_score_NDCG`, `clip_score_NDCG`)
Important: Relevance is only counted for correctly retrieved results (results that belong to the query). This ensures that precision and recall metrics are accurate.
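To make these definitions concrete, here is a standalone toy computation of the per-query statistics for one query. It is illustrative only, not the evaluator's code, and it assumes binary relevance and the common log2-discounted form of NDCG:

```python
import math

# Hypothetical top-5 results for query_id 7, in ranked order.
# Each tuple is (queried_on_query_id, relevance_label).
results = [(7, 0), (7, 1), (9, 1), (7, 1), (7, 0)]
query_id = 7
relevant_in_dataset = 4  # relevant items for this query in the dataset

total = len(results)
correctly_returned = sum(1 for qid, _ in results if qid == query_id)  # 4
accuracy = correctly_returned / total                                 # 4/5 = 0.8

# Relevance only counts for correctly retrieved results, so the
# relevant-looking row from query 9 is zeroed out here.
ranked_rel = [rel if qid == query_id else 0 for qid, rel in results]  # [0, 1, 0, 1, 0]
relevant_images = sum(ranked_rel)                                     # 2
precision = relevant_images / total                                   # 2/5 = 0.4
recall = relevant_images / relevant_in_dataset                        # 2/4 = 0.5

# First relevant result is at 1-based rank 2
reciprocal_rank = next((1 / (i + 1) for i, r in enumerate(ranked_rel) if r), 0.0)  # 0.5
hit = int(any(ranked_rel))                                            # 1

# NDCG with a log2 rank discount over the ranked relevance list
dcg = sum(r / math.log2(i + 2) for i, r in enumerate(ranked_rel))
idcg = sum(r / math.log2(i + 2) for i, r in enumerate(sorted(ranked_rel, reverse=True)))
ndcg = dcg / idcg if idcg else 0.0
```

MRR and Success@k are then just the means of `reciprocal_rank` and `hit` across all queries.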
The `ModelProvider` and `ModelUtils` interfaces accept `model_name` parameters:

- Embedding models: `"clip"`, `"colbert"`, `"align"` (for `TritonModelProvider`)
- Caption models: `"gemma3"`, `"qwen2_5"` (for `TritonModelProvider`); for NRP: `"gemma3"`, `"qwen3"`, `"gpt-oss"`, `"kimi"`, `"glm-4.7"`, `"minimax-m2"`, `"glm-v"` (NRP supports only `generate_caption`, not embeddings)
- Other implementations can define their own model names
Combine imsearch_benchmaker and imsearch_eval to create a complete pipeline for image search evaluation: imsearch_benchmaker creates the benchmarks, and imsearch_eval uses them to evaluate the performance of the image search system. imsearch_benchmarks is the central place to store benchmark pipeline code, so after you create a benchmark you can store it there.
1. Create a Query class implementing the `Query` interface:

```python
from imsearch_eval import Query
import pandas as pd

class MyVectorDBQuery(Query):
    def query(self, near_text, collection_name, limit=25, query_method="vector", **kwargs):
        # Implement your query logic here.
        # query_method can be "vector", "keyword", "hybrid", etc.
        return pd.DataFrame(results)
```

2. Create an adapter implementing `VectorDBAdapter`:

```python
from imsearch_eval import VectorDBAdapter, QueryResult

class MyVectorDBAdapter(VectorDBAdapter):
    @classmethod
    def init_client(cls, **kwargs):
        # Initialize your vector DB client
        return client

    def __init__(self, client=None, **kwargs):
        if client is None:
            client = self.init_client(**kwargs)
        self.client = client
        self.query_instance = MyVectorDBQuery(client)

    def search(self, query, collection_name, limit=25, query_method="vector", **kwargs):
        df = self.query_instance.query(query, collection_name, limit, query_method, **kwargs)
        return QueryResult(df.to_dict('records'))

    # Implement other required methods...
```

3. Create a ModelUtils implementation (optional but recommended):

```python
from imsearch_eval.framework.model_utils import ModelUtils

class MyModelUtils(ModelUtils):
    def calculate_embedding(self, text, image=None, model_name="default"):
        # Your embedding implementation
        return embedding

    def generate_caption(self, image, model_name="default"):
        # Your caption generation
        return caption
```

4. Create a ModelProvider:

```python
from imsearch_eval import ModelProvider

class MyModelProvider(ModelProvider):
    def __init__(self, **kwargs):
        self.model_utils = MyModelUtils(**kwargs)

    def get_embedding(self, text, image=None, model_name="default"):
        return self.model_utils.calculate_embedding(text, image, model_name)

    def generate_caption(self, image, model_name="default"):
        return self.model_utils.generate_caption(image, model_name)
```
Create a benchmark dataset implementing `BenchmarkDataset`:

```python
from imsearch_eval import BenchmarkDataset
import pandas as pd

class MyBenchmarkDataset(BenchmarkDataset):
    def load(self, split="test", **kwargs) -> pd.DataFrame:
        # Load your dataset
        return dataset_df

    def get_query_column(self) -> str:
        return "query"

    def get_query_id_column(self) -> str:
        return "query_id"

    def get_relevance_column(self) -> str:
        return "relevant"

    def get_metadata_columns(self) -> list:
        return []
```

The framework uses abstract interfaces to ensure consistency and extensibility:
1. Framework defines interfaces (`imsearch_eval.framework.interfaces`, `imsearch_eval.framework.model_utils`): `VectorDBAdapter`, `ModelProvider`, `Query`, `ModelUtils`, `BenchmarkDataset`, etc. These define the contract that all implementations must follow.
2. Adapters implement interfaces (`imsearch_eval.adapters`):
   - `TritonModelUtils` implements `ModelUtils`
   - `TritonModelProvider` implements `ModelProvider` and uses `TritonModelUtils`
   - `WeaviateQuery` implements `Query`
   - `WeaviateAdapter` implements `VectorDBAdapter` and uses `WeaviateQuery`
3. Users use adapters:
   - Import from `imsearch_eval.adapters` (e.g., `from imsearch_eval.adapters import WeaviateAdapter, TritonModelProvider`)
   - Code against the abstract interfaces, not concrete implementations
   - Easy to swap implementations without changing benchmark code
```python
from imsearch_eval.adapters import WeaviateQuery
import tritonclient.grpc as TritonClient

# WeaviateQuery implements the Query interface
triton_client = TritonClient.InferenceServerClient(url="triton:8001")
query_instance = WeaviateQuery(weaviate_client, triton_client)

# Use the generic query() method
results = query_instance.query(
    near_text="search query",
    collection_name="my_collection",
    limit=25,
    query_method="clip_hybrid_query"  # Weaviate-specific query type
)

# Or use Weaviate-specific methods directly
results = query_instance.clip_hybrid_query("search query", "my_collection", limit=25)
```

```python
from imsearch_eval.adapters import TritonModelProvider, TritonModelUtils
import tritonclient.grpc as TritonClient

# TritonModelUtils implements the ModelUtils interface
triton_client = TritonClient.InferenceServerClient(url="triton:8001")
model_utils = TritonModelUtils(triton_client)

# Use the abstract methods
embedding = model_utils.calculate_embedding("text", image=None, model_name="clip")
caption = model_utils.generate_caption(image, model_name="gemma3")

# Or use via ModelProvider
model_provider = TritonModelProvider(triton_client)
embedding = model_provider.get_embedding("text", image=None, model_name="clip")
caption = model_provider.generate_caption(image, model_name="gemma3")
```

NRP uses the Envoy AI Gateway. Pass your API key (defaults to the environment variable `NRP_API_KEY`) and optionally `base_url`; the provider creates the OpenAI-compatible client internally:
```python
import os
from imsearch_eval.adapters import NRPModelProvider

model_provider = NRPModelProvider()
caption = model_provider.generate_caption(
    image,
    prompt="Describe this image in detail.",
    model_name="gemma3"
)
```

- Dataset-Agnostic: Works with any dataset by implementing `BenchmarkDataset`
- Extensible: Easy to add new vector databases, models, and datasets
- Abstract Interfaces: Clean separation between evaluation logic and implementations
- Reusable: Framework code can be shared across all benchmarks
- Consistent: Same evaluation metrics and methodology
- Type Safe: Abstract interfaces ensure all implementations provide required functionality
- Flexible: Each implementation can define its own query types and model names
- Robust Error Handling: Gracefully handles query errors and empty results, logging errors and returning default statistics
- Parallel Processing: Configurable parallel query evaluation with progress bars
- Multiple NDCG Scores: Automatically computes NDCG for all available score columns in results
- MRR and Success@k: Per-query reciprocal rank and hit indicator; aggregate as mean for MRR and Success@k (hit rate)
- Numeric Relevance Support: Supports both binary (0/1) and numeric (0.0-1.0) relevance scores
- Accurate Metrics: Only counts relevance for correctly retrieved results to ensure metric accuracy