diff --git a/demos/continuous_batching/accuracy/README.md b/demos/continuous_batching/accuracy/README.md
index db76509fc6..c7acb1b678 100644
--- a/demos/continuous_batching/accuracy/README.md
+++ b/demos/continuous_batching/accuracy/README.md
@@ -9,7 +9,7 @@ It reports end to end quality of served model from the client application point
 Install the framework via pip:
 
 ```console
-pip3 install --extra-index-url "https://download.pytorch.org/whl/cpu" lm_eval[api] langdetect immutabledict dotenv openai
+pip3 install --extra-index-url "https://download.pytorch.org/whl/cpu" lm_eval[api] langdetect immutabledict dotenv openai hf_transfer
 ```
 
 ## Exporting the models
diff --git a/demos/embeddings/README.md b/demos/embeddings/README.md
index b74042f754..cc7c689389 100644
--- a/demos/embeddings/README.md
+++ b/demos/embeddings/README.md
@@ -489,7 +489,7 @@ Difference score with HF AutoModel: 0.020293646680283224
 
 It is easy also to run model evaluation using [MTEB](https://github.com/embeddings-benchmark/mteb) framework using a custom class based on openai model:
 ```bash
-pip install "mteb<2" einops openai --extra-index-url "https://download.pytorch.org/whl/cpu"
+pip install "mteb==1.39.7" "einops==0.8.1" "openai==2.7.2" "pillow==12.0.0" "transformers==4.57.6" --extra-index-url "https://download.pytorch.org/whl/cpu"
 curl https://raw.githubusercontent.com/openvinotoolkit/model_server/refs/heads/main/demos/embeddings/ovms_mteb.py -o ovms_mteb.py
 python ovms_mteb.py --model BAAI/bge-large-en-v1.5 --service_url http://localhost:8000/v3/embeddings
 ```