From 2c394e617cb69bb6553e35c1782e80348cdb1ff5 Mon Sep 17 00:00:00 2001
From: AMATH <116212274+amathxbt@users.noreply.github.com>
Date: Sat, 18 Apr 2026 10:09:57 +0100
Subject: [PATCH] Improve error handling for invalid model identifiers

Add a shared parser to validate model identifiers and raise a
descriptive ValueError for invalid formats.

Signed-off-by: AMATH <116212274+amathxbt@users.noreply.github.com>
---
 ...ise clear error for invalid model identifiers | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)
 create mode 100644 fix(llm): raise clear error for invalid model identifiers

diff --git a/fix(llm): raise clear error for invalid model identifiers b/fix(llm): raise clear error for invalid model identifiers
new file mode 100644
index 0000000..a205e9a
--- /dev/null
+++ b/fix(llm): raise clear error for invalid model identifiers
@@ -0,0 +1,16 @@
+## Summary
+
+Fixes invalid model string handling in the LLM client.
+
+`LLM.completion()` and `LLM.chat()` currently assume the model identifier always contains a `/` separator and directly access `model.split("/")[1]`. When a caller passes a plain string such as `"gpt-5"`, the SDK raises an unhelpful `IndexError` instead of a clear validation error.
+
+This PR adds a shared parser that validates model identifiers and raises a descriptive `ValueError` when the input is not in `provider/model-name` format.
+
+Fixes #249
+
+## Problem
+
+Both `LLM.completion()` and `LLM.chat()` strip the provider prefix with direct string splitting:
+
+```python
+model.split("/")[1]
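
The shared parser the patch describes can be sketched as follows. This is a minimal illustration, not the SDK's actual code: the function name `parse_model_identifier` and the error-message wording are assumptions; only the `provider/model-name` contract and the `ValueError` behavior come from the PR description above.

```python
def parse_model_identifier(model: str) -> tuple[str, str]:
    """Split a 'provider/model-name' identifier and validate both parts.

    Raises a descriptive ValueError instead of the IndexError that
    bare model.split("/")[1] produces on inputs like "gpt-5".
    """
    provider, sep, name = model.partition("/")
    if not sep or not provider or not name:
        raise ValueError(
            f"Invalid model identifier {model!r}: expected "
            "'provider/model-name' format, e.g. 'openai/gpt-5'."
        )
    return provider, name
```

Using `str.partition` rather than `split("/")[1]` makes the missing-separator case explicit (an empty `sep`) instead of an out-of-range index, so the validation and the happy path share one code path.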