Merged
2,879 changes: 1,425 additions & 1,454 deletions .speakeasy/gen.lock

Large diffs are not rendered by default.

20 changes: 14 additions & 6 deletions .speakeasy/gen.yaml
@@ -13,42 +13,49 @@ generation:
requestResponseComponentNamesFeb2024: true
securityFeb2025: true
sharedErrorComponentsApr2025: true
methodSignaturesApr2024: true
sharedNestedComponentsJan2026: true
nameOverrideFeb2026: true
methodSignaturesApr2024: true
auth:
oAuth2ClientCredentialsEnabled: true
oAuth2PasswordEnabled: false
hoistGlobalSecurity: true
schemas:
allOfMergeStrategy: shallowMerge
requestBodyFieldName: ""
versioningStrategy: automatic
persistentEdits:
enabled: "true"
tests:
generateTests: true
generateNewTests: false
skipResponseBodyAssertions: false
python:
version: 2.0.0a3
version: 2.0.0-a3.1
additionalDependencies:
dev:
pytest: ^8.2.2
pytest-asyncio: ^0.23.7
main: {}
allowedRedefinedBuiltins:
- id
- object
- input
- dir
asyncMode: both
authors:
- Mistral
baseErrorName: MistralError
clientServerStatusCodesAsErrors: true
constFieldCasing: upper
constFieldCasing: normal
defaultErrorName: SDKError
description: Python Client SDK for the Mistral AI API.
enableCustomCodeRegions: true
enumFormat: union
envVarPrefix: MISTRAL
fixFlags:
asyncPaginationSep2025: true
conflictResistantModelImportsFeb2026: true
responseRequiredSep2024: true
flatAdditionalProperties: true
flattenGlobalSecurity: true
@@ -60,17 +67,17 @@ python:
option: openapi
paths:
callbacks: ""
errors: ""
errors: errors
operations: ""
shared: ""
webhooks: ""
inferUnionDiscriminators: true
inputModelSuffix: input
license: ""
maxMethodParams: 15
maxMethodParams: 999
methodArguments: infer-optional-args
moduleName: mistralai.client
multipartArrayFormat: legacy
multipartArrayFormat: standard
outputModelSuffix: output
packageManager: uv
packageName: mistralai
@@ -80,3 +87,4 @@ python:
responseFormat: flat
sseFlatResponse: false
templateVersion: v2
useAsyncHooks: false
18 changes: 9 additions & 9 deletions .speakeasy/workflow.lock
@@ -1,4 +1,4 @@
speakeasyVersion: 1.685.0
speakeasyVersion: 1.729.0
sources:
mistral-azure-source:
sourceNamespace: mistral-openapi-azure
@@ -14,8 +14,8 @@ sources:
- latest
mistral-openapi:
sourceNamespace: mistral-openapi
sourceRevisionDigest: sha256:74d0de7750f6a1878b68c9da683eba7a447d7c367131d0cb8f5c3b1e05829624
sourceBlobDigest: sha256:41e8354c48993fc29be68959d835ea4f8e0cc1d4b4fbd527afcd970bc02c62a2
sourceRevisionDigest: sha256:4f8e25101b35a66b9c93089fe3d491990268bdbefb70a349740e01ba9c8e28f8
sourceBlobDigest: sha256:8566b35549178910c6fd4d005474d612bb9c476ef58785bb51c46251de145f71
tags:
- latest
targets:
@@ -25,24 +25,24 @@ targets:
sourceRevisionDigest: sha256:e32d21a6317d1bca6ab29f05603b96038e841752c2698aab47f434ea0d6530b7
sourceBlobDigest: sha256:2dad2b1b7a79de6917c363ce7e870d11efe31ac08e3bfe0258f72823fe1ad13e
codeSamplesNamespace: mistral-openapi-azure-code-samples
codeSamplesRevisionDigest: sha256:a34c3049c604d0bb67101d042e959f14098964fe784f98975a9201c84dbf44d0
codeSamplesRevisionDigest: sha256:248e5daaa44589805664ab1479502885758fde0f1da3b384b97b1a09d74c8256
mistralai-gcp-sdk:
source: mistral-google-cloud-source
sourceNamespace: mistral-openapi-google-cloud
sourceRevisionDigest: sha256:4d9938ab74c4d41d62cd24234c8b8109e286c4aeec093e21d369259a43173113
sourceBlobDigest: sha256:5a558d5ea7a936723c7a5540db5a1fba63d85d25b453372e1cf16395b30c98d3
codeSamplesNamespace: mistral-openapi-google-cloud-code-samples
codeSamplesRevisionDigest: sha256:fa36e5999e79c32e8b2c1317cc0d6ed179912ced15194f02b5f80da22e45ae5f
codeSamplesRevisionDigest: sha256:f6c4dc988e9b7be6f8d8087d14b2269be601bb9bff2227b07e1018efe88e1556
mistralai-sdk:
source: mistral-openapi
sourceNamespace: mistral-openapi
sourceRevisionDigest: sha256:74d0de7750f6a1878b68c9da683eba7a447d7c367131d0cb8f5c3b1e05829624
sourceBlobDigest: sha256:41e8354c48993fc29be68959d835ea4f8e0cc1d4b4fbd527afcd970bc02c62a2
sourceRevisionDigest: sha256:4f8e25101b35a66b9c93089fe3d491990268bdbefb70a349740e01ba9c8e28f8
sourceBlobDigest: sha256:8566b35549178910c6fd4d005474d612bb9c476ef58785bb51c46251de145f71
codeSamplesNamespace: mistral-openapi-code-samples
codeSamplesRevisionDigest: sha256:99fcae1bc81801e3825648a44f5ffa62a8f124e3186e5570be40414de164e7f2
codeSamplesRevisionDigest: sha256:f3cf9d6d99a27d6e753bd6e1a2f2c2fb290f412a455576de4bab610ab4825939
workflow:
workflowVersion: 1.0.0
speakeasyVersion: 1.685.0
speakeasyVersion: 1.729.0
sources:
mistral-azure-source:
inputs:
2 changes: 1 addition & 1 deletion .speakeasy/workflow.yaml
@@ -1,5 +1,5 @@
workflowVersion: 1.0.0
speakeasyVersion: 1.685.0
speakeasyVersion: 1.729.0
sources:
mistral-azure-source:
inputs:
12 changes: 11 additions & 1 deletion Makefile
@@ -1,19 +1,29 @@
.PHONY: help test-generate update-speakeasy-version
.PHONY: help generate test-generate update-speakeasy-version check-config

help:
@echo "Available targets:"
@echo " make generate Generate all SDKs (main, Azure, GCP)"
@echo " make test-generate Test SDK generation locally"
@echo " make update-speakeasy-version VERSION=x.y.z Update Speakeasy CLI version"
@echo " make check-config Check gen.yaml against recommended defaults"
@echo ""
@echo "Note: Production SDK generation is done via GitHub Actions:"
@echo " .github/workflows/sdk_generation_mistralai_sdk.yaml"

# Generate all SDKs (main, Azure, GCP)
generate:
speakeasy run -t all

# Test SDK generation locally.
# For production, use GitHub Actions: .github/workflows/sdk_generation_mistralai_sdk.yaml
# This uses the Speakeasy CLI version defined in .speakeasy/workflow.yaml
test-generate:
speakeasy run --skip-versioning

# Check gen.yaml configuration against Speakeasy recommended defaults
check-config:
speakeasy configure generation check

# Update the Speakeasy CLI version (the code generator tool).
# This modifies speakeasyVersion in .speakeasy/workflow.yaml and regenerates the SDK.
# Usage: make update-speakeasy-version VERSION=1.685.0
46 changes: 25 additions & 21 deletions README.md
@@ -27,9 +27,7 @@ $ source ~/.zshenv
<!-- Start Summary [summary] -->
## Summary

Mistral AI API: Dora OpenAPI schema

Our Chat Completion and Embeddings APIs specification. Create your account on [La Plateforme](https://console.mistral.ai) to get access and read the [docs](https://docs.mistral.ai) to learn how to use it.
Mistral AI API: Our Chat Completion and Embeddings APIs specification. Create your account on [La Plateforme](https://console.mistral.ai) to get access and read the [docs](https://docs.mistral.ai) to learn how to use it.
<!-- End Summary [summary] -->

<!-- Start Table of Contents [toc] -->
@@ -161,8 +159,8 @@ with Mistral(

res = mistral.chat.complete(model="mistral-large-latest", messages=[
{
"content": "Who is the best French painter? Answer in one short sentence.",
"role": "user",
"content": "Who is the best French painter? Answer in one short sentence.",
},
], stream=False, response_format={
"type": "text",
@@ -190,8 +188,8 @@ async def main():

res = await mistral.chat.complete_async(model="mistral-large-latest", messages=[
{
"content": "Who is the best French painter? Answer in one short sentence.",
"role": "user",
"content": "Who is the best French painter? Answer in one short sentence.",
},
], stream=False, response_format={
"type": "text",
@@ -269,8 +267,8 @@ with Mistral(

res = mistral.agents.complete(messages=[
{
"content": "Who is the best French painter? Answer in one short sentence.",
"role": "user",
"content": "Who is the best French painter? Answer in one short sentence.",
},
], agent_id="<id>", stream=False, response_format={
"type": "text",
@@ -298,8 +296,8 @@ async def main():

res = await mistral.agents.complete_async(messages=[
{
"content": "Who is the best French painter? Answer in one short sentence.",
"role": "user",
"content": "Who is the best French painter? Answer in one short sentence.",
},
], agent_id="<id>", stream=False, response_format={
"type": "text",
@@ -616,7 +614,7 @@ with Mistral(
api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:

res = mistral.beta.conversations.start_stream(inputs="<value>", stream=True, completion_args={
res = mistral.beta.conversations.start_stream(inputs=[
{
"object": "entry",
"type": "function.result",
"tool_call_id": "<id>",
"result": "<value>",
},
], stream=True, completion_args={
"response_format": {
"type": "text",
},
@@ -653,7 +658,7 @@ with Mistral(
api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:

res = mistral.beta.libraries.documents.upload(library_id="f973c54e-979a-4464-9d36-8cc31beb21fe", file={
res = mistral.beta.libraries.documents.upload(library_id="a02150d9-5ee0-4877-b62c-28b1fcdf3b76", file={
"file_name": "example.file",
"content": open("example.file", "rb"),
})
@@ -680,8 +685,8 @@ with Mistral(
api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:

res = mistral.models.list(
retries=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False))
res = mistral.models.list(
    RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False))

# Handle response
print(res)
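For context on the retry example above: the `BackoffStrategy(1, 50, 1.1, 100)` arguments correspond to an initial interval, a maximum interval, a growth exponent, and a maximum elapsed budget (milliseconds). As a standalone sketch of how such a schedule plays out — an illustration, not the SDK's internal implementation — the delays can be modeled like this:

```python
def backoff_schedule(initial_ms, max_interval_ms, exponent, max_elapsed_ms):
    """Yield successive retry delays (ms) until the elapsed budget is spent."""
    delay = float(initial_ms)
    elapsed = 0.0
    while elapsed + delay <= max_elapsed_ms:
        yield delay
        elapsed += delay
        # Grow exponentially, but never exceed the per-retry cap.
        delay = min(delay * exponent, max_interval_ms)

# Delays produced by BackoffStrategy(1, 50, 1.1, 100)-style settings
delays = list(backoff_schedule(1, 50, 1.1, 100))
print(delays[:3])
```

With a gentle exponent of 1.1 the delays grow slowly from 1 ms and the total sleep time stays within the 100 ms budget; the final `False` argument in the README example toggles whether connection errors are retried.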
@@ -711,7 +716,7 @@ with Mistral(
<!-- Start Error Handling [errors] -->
## Error Handling

[`MistralError`](./src/mistralai/client/models/mistralerror.py) is the base class for all HTTP error responses. It has the following properties:
[`MistralError`](./src/mistralai/client/errors/mistralerror.py) is the base class for all HTTP error responses. It has the following properties:

| Property | Type | Description |
| ------------------ | ---------------- | --------------------------------------------------------------------------------------- |
@@ -724,8 +729,7 @@ with Mistral(

### Example
```python
import mistralai.client
from mistralai.client import Mistral, models
from mistralai.client import Mistral, errors
import os


@@ -741,7 +745,7 @@ with Mistral(
print(res)


except models.MistralError as e:
except errors.MistralError as e:
# The base class for HTTP error responses
print(e.message)
print(e.status_code)
@@ -750,13 +754,13 @@ with Mistral(
print(e.raw_response)

# Depending on the method different errors may be thrown
if isinstance(e, models.HTTPValidationError):
print(e.data.detail) # Optional[List[mistralai.client.ValidationError]]
if isinstance(e, errors.HTTPValidationError):
print(e.data.detail) # Optional[List[models.ValidationError]]
```
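To make the documented hierarchy concrete, here is a minimal plain-Python sketch of the inheritance relationship the example relies on — illustrative class bodies, not the SDK source:

```python
class MistralError(Exception):
    """Base class for HTTP error responses (sketch of the documented shape)."""

    def __init__(self, message, status_code, body):
        super().__init__(message)
        self.message = message
        self.status_code = status_code
        self.body = body


class HTTPValidationError(MistralError):
    """Raised on 422 responses (sketch)."""


try:
    raise HTTPValidationError("Validation Error", 422, '{"detail": []}')
except MistralError as e:  # the base class catches every subclass
    caught = (type(e).__name__, e.status_code)

print(caught)  # → ('HTTPValidationError', 422)
```

This is why a single `except errors.MistralError` handler is sufficient: subclass instances such as `HTTPValidationError` are caught there, and `isinstance` checks then narrow to the specific error.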

### Error Classes
**Primary error:**
* [`MistralError`](./src/mistralai/client/models/mistralerror.py): The base class for HTTP error responses.
* [`MistralError`](./src/mistralai/client/errors/mistralerror.py): The base class for HTTP error responses.

<details><summary>Less common errors (6)</summary>

@@ -768,9 +772,9 @@ with Mistral(
* [`httpx.TimeoutException`](https://www.python-httpx.org/exceptions/#httpx.TimeoutException): HTTP request timed out.


**Inherit from [`MistralError`](./src/mistralai/client/models/mistralerror.py)**:
* [`HTTPValidationError`](./src/mistralai/client/models/httpvalidationerror.py): Validation Error. Status code `422`. Applicable to 53 of 75 methods.*
* [`ResponseValidationError`](./src/mistralai/client/models/responsevalidationerror.py): Type mismatch between the response data and the expected Pydantic model. Provides access to the Pydantic validation error via the `cause` attribute.
**Inherit from [`MistralError`](./src/mistralai/client/errors/mistralerror.py)**:
* [`HTTPValidationError`](./src/mistralai/client/errors/httpvalidationerror.py): Validation Error. Status code `422`. Applicable to 53 of 75 methods.*
* [`ResponseValidationError`](./src/mistralai/client/errors/responsevalidationerror.py): Type mismatch between the response data and the expected Pydantic model. Provides access to the Pydantic validation error via the `cause` attribute.

</details>

8 changes: 4 additions & 4 deletions USAGE.md
@@ -15,8 +15,8 @@ with Mistral(

res = mistral.chat.complete(model="mistral-large-latest", messages=[
{
"content": "Who is the best French painter? Answer in one short sentence.",
"role": "user",
"content": "Who is the best French painter? Answer in one short sentence.",
},
], stream=False, response_format={
"type": "text",
@@ -44,8 +44,8 @@ async def main():

res = await mistral.chat.complete_async(model="mistral-large-latest", messages=[
{
"content": "Who is the best French painter? Answer in one short sentence.",
"role": "user",
"content": "Who is the best French painter? Answer in one short sentence.",
},
], stream=False, response_format={
"type": "text",
@@ -123,8 +123,8 @@ with Mistral(

res = mistral.agents.complete(messages=[
{
"content": "Who is the best French painter? Answer in one short sentence.",
"role": "user",
"content": "Who is the best French painter? Answer in one short sentence.",
},
], agent_id="<id>", stream=False, response_format={
"type": "text",
@@ -152,8 +152,8 @@ async def main():

res = await mistral.agents.complete_async(messages=[
{
"content": "Who is the best French painter? Answer in one short sentence.",
"role": "user",
"content": "Who is the best French painter? Answer in one short sentence.",
},
], agent_id="<id>", stream=False, response_format={
"type": "text",
File renamed without changes.
2 changes: 1 addition & 1 deletion docs/models/agent.md
@@ -13,7 +13,7 @@
| `description` | *OptionalNullable[str]* | :heavy_minus_sign: | N/A |
| `handoffs` | List[*str*] | :heavy_minus_sign: | N/A |
| `metadata` | Dict[str, *Any*] | :heavy_minus_sign: | N/A |
| `object` | [Optional[models.AgentObject]](../models/agentobject.md) | :heavy_minus_sign: | N/A |
| `object` | *Optional[Literal["agent"]]* | :heavy_minus_sign: | N/A |
| `id` | *str* | :heavy_check_mark: | N/A |
| `version` | *int* | :heavy_check_mark: | N/A |
| `versions` | List[*int*] | :heavy_check_mark: | N/A |
2 changes: 1 addition & 1 deletion docs/models/agentconversation.md
@@ -8,7 +8,7 @@
| `name` | *OptionalNullable[str]* | :heavy_minus_sign: | Name given to the conversation. |
| `description` | *OptionalNullable[str]* | :heavy_minus_sign: | Description of what the conversation is about. |
| `metadata` | Dict[str, *Any*] | :heavy_minus_sign: | Custom metadata for the conversation. |
| `object` | [Optional[models.AgentConversationObject]](../models/agentconversationobject.md) | :heavy_minus_sign: | N/A |
| `object` | *Optional[Literal["conversation"]]* | :heavy_minus_sign: | N/A |
| `id` | *str* | :heavy_check_mark: | N/A |
| `created_at` | [date](https://docs.python.org/3/library/datetime.html#date-objects) | :heavy_check_mark: | N/A |
| `updated_at` | [date](https://docs.python.org/3/library/datetime.html#date-objects) | :heavy_check_mark: | N/A |