
Add temperature parameter and max_completion_tokens support#3

Open
andmalt wants to merge 1 commit into anastasiosyal:main from andmalt:main

Conversation


@andmalt andmalt commented Jul 17, 2025

This PR adds support for configurable temperature and max_completion_tokens.

Changes:

  • Added optional temperature parameter to control response randomness
  • Made max_completion_tokens configurable instead of being hardcoded to 1000
  • Added proper token counting for prompt and completion tokens
  • Improved response format to match OpenAI API standards with usage statistics
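The usage-statistics change described above can be sketched as follows. This is a minimal illustration, not the PR's actual code; the `build_response` helper and its field values are hypothetical, but the dict layout follows the OpenAI chat completion response schema (`choices`, `usage` with `prompt_tokens`, `completion_tokens`, `total_tokens`).

```python
import time
import uuid


def build_response(content: str, prompt_tokens: int, completion_tokens: int) -> dict:
    """Assemble a response dict shaped like an OpenAI chat completion,
    including the usage statistics this PR adds. (Illustrative helper.)"""
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex}",
        "object": "chat.completion",
        "created": int(time.time()),
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": content},
                "finish_reason": "stop",
            }
        ],
        "usage": {
            "prompt_tokens": prompt_tokens,
            "completion_tokens": completion_tokens,
            "total_tokens": prompt_tokens + completion_tokens,
        },
    }


resp = build_response("Hello!", prompt_tokens=12, completion_tokens=3)
print(resp["usage"]["total_tokens"])  # → 15
```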

The temperature parameter now supports:

  • Values > 0: Enables sampling with specified temperature
  • Value = 0: Disables sampling for deterministic responses
  • None/omitted: Uses model default behavior
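The three temperature cases above amount to a small mapping from the request parameter to generation settings. A possible sketch, assuming Hugging Face-style `generate()` keyword names (`do_sample`, `temperature`), which are an assumption here, not taken from the PR:

```python
from typing import Optional


def sampling_kwargs(temperature: Optional[float]) -> dict:
    """Translate the request's temperature into generation kwargs.
    Kwarg names assume a Hugging Face-style generate() interface."""
    if temperature is None:
        # None/omitted: fall through to the model's default behavior
        return {}
    if temperature == 0:
        # 0: disable sampling for deterministic (greedy) decoding
        return {"do_sample": False}
    # > 0: sample with the requested temperature
    return {"do_sample": True, "temperature": temperature}
```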

This makes the API more flexible and compatible with standard chat completion interfaces.

