Improve embeddings load test with batch support, new API params, and proper metrics #64
Open
Conversation
…proper metrics

- Add --embeddings-batch-size to send arrays of texts per request
- Add --embeddings-dimensions to request specific output vector size
- Add --embeddings-prompt-template for Jinja2 structured input preprocessing
- Always parse embeddings response to capture prompt_tokens from API usage
- Emit latency_per_embedding metric (total_latency / batch_size)
- Fix quitting listener to use embeddings-specific summary metrics instead of crashing on missing LLM-only metrics (time_to_first_token, latency_per_token)
- Fix FireworksProvider to skip perf_metrics_in_response for embeddings payloads
- Update logging_params to show embeddings_batch_size/dimensions instead of completion_tokens when in embeddings mode
- parse_output_json for embeddings now returns shape "NxD" and prompt_tokens

Made-with: Cursor
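A minimal sketch of what the new response handling could look like, assuming the OpenAI-compatible embeddings response shape ({"data": [{"embedding": [...]}], "usage": {"prompt_tokens": N}}); the function name parse_embeddings_response and the exact return keys are illustrative assumptions, not the PR's actual code:

```python
import json


def parse_embeddings_response(body: str, total_latency: float, batch_size: int) -> dict:
    """Parse an OpenAI-compatible embeddings response and derive per-request metrics."""
    resp = json.loads(body)
    vectors = [item["embedding"] for item in resp.get("data", [])]
    n = len(vectors)                       # number of embeddings returned
    d = len(vectors[0]) if vectors else 0  # output vector dimensionality
    return {
        "shape": f"{n}x{d}",                                          # reported as "NxD"
        "prompt_tokens": resp.get("usage", {}).get("prompt_tokens"),  # from API usage field
        "latency_per_embedding": total_latency / max(batch_size, 1),  # total_latency / batch_size
    }
```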
Summary
- --embeddings-batch-size N flag sends N texts as a single array request, enabling throughput testing across different batch sizes
- --embeddings-dimensions (output vector size) and --embeddings-prompt-template (Jinja2 template for structured inputs) are now passed through to the Fireworks embeddings API
- Embeddings responses are always parsed to capture prompt_tokens from the API usage field; a new latency_per_embedding metric (total latency ÷ batch size) is emitted and reported in the summary with percentiles
- Fixed a KeyError when running embeddings tests (the code was trying to access LLM-only metrics like time_to_first_token); FireworksProvider no longer injects perf_metrics_in_response into embeddings payloads; logging params omit completion_tokens for embeddings and show embeddings_batch_size/embeddings_dimensions instead
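For illustration, a hedged sketch of how a batched embeddings payload could be assembled from these flags, assuming the OpenAI-compatible /v1/embeddings request schema (model, input, dimensions); the helper name build_embeddings_payload and the template variable text are assumptions, not necessarily how this PR implements it:

```python
from jinja2 import Template


def build_embeddings_payload(model, texts, dimensions=None, prompt_template=None):
    """Assemble one batched embeddings request body from the new CLI options."""
    if prompt_template:
        # --embeddings-prompt-template: Jinja2 preprocessing applied to each input text
        tmpl = Template(prompt_template)
        texts = [tmpl.render(text=t) for t in texts]
    payload = {"model": model, "input": texts}  # array input -> one batched request
    if dimensions:
        payload["dimensions"] = dimensions      # --embeddings-dimensions: output vector size
    return payload
```

With --embeddings-batch-size N, each request carries N texts in the input array, which is what makes latency_per_embedding (request latency divided by N) comparable across batch sizes.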
Test plan
- --embeddings --max-requests 3 against accounts/pyroworks/deployments/i907pjzb: 0 failures, correct NxD shape reported, prompt_tokens captured from API
- --chat --stream against accounts/fireworks/models/gpt-oss-20b: 0 failures, all LLM metrics intact
- --chat --no-stream: 0 failures, TTFT/latency_per_token correctly blanked in summary
- --no-chat --no-stream: 0 failures
- --chat --stream --prompt-images-with-resolutions 1920x1080 against accounts/fireworks/models/kimi-k2p5: 0 failures

Made with Cursor