Commit e2b2d06

techpro-aimlapigitbook-bot authored and committed
GITBOOK-764: docs: add 2 qwen embeddings, add manual examples for all embedding models (but problem ones)
1 parent da2a0dd commit e2b2d06

19 files changed: 1007 additions & 74 deletions

docs/README.md

Lines changed: 1 addition & 1 deletion
@@ -80,7 +80,7 @@ If you've already made your choice and know the model ID, use the [Search panel]
 {% endtab %}
 
 {% tab title="Models by DEVELOPER" %}
-**Alibaba Cloud**: [Text/Chat](api-references/text-models-llm/Alibaba-Cloud/) [Image](api-references/video-models/alibaba-cloud/) [Video](api-references/image-models/alibaba-cloud/) [Text-to-Speech](api-references/speech-models/text-to-speech/alibaba-cloud/)
+**Alibaba Cloud**: [Text/Chat](api-references/text-models-llm/Alibaba-Cloud/) [Image](api-references/video-models/alibaba-cloud/) [Video](api-references/image-models/alibaba-cloud/) [Text-to-Speech](api-references/speech-models/text-to-speech/alibaba-cloud/) [Embedding](api-references/embedding-models/alibaba-cloud/)
 
 **Anthracite**: [Text/Chat](api-references/text-models-llm/Anthracite/)

docs/SUMMARY.md

Lines changed: 4 additions & 2 deletions
@@ -412,6 +412,9 @@
 * [mistral-ocr-latest](api-references/vision-models/ocr-optical-character-recognition/mistral-ai/mistral-ocr-latest.md)
 * [OFR: Optical Feature Recognition](api-references/vision-models/ofr-optical-feature-recognition.md)
 * [Embedding Models](api-references/embedding-models/README.md)
+  * [Alibaba Cloud](api-references/embedding-models/alibaba-cloud/README.md)
+    * [qwen-text-embedding-v3](api-references/embedding-models/alibaba-cloud/qwen-text-embedding-v3.md)
+    * [qwen-text-embedding-v4](api-references/embedding-models/alibaba-cloud/qwen-text-embedding-v4.md)
 * [Anthropic](api-references/embedding-models/Anthropic/README.md)
 * [voyage-2](api-references/embedding-models/Anthropic/voyage-2.md)
 * [voyage-code-2](api-references/embedding-models/Anthropic/voyage-code-2.md)
@@ -424,11 +427,10 @@
 * [bge-base-en](api-references/embedding-models/BAAI/bge-base-en.md)
 * [bge-large-en](api-references/embedding-models/BAAI/bge-large-en.md)
 * [Google](api-references/embedding-models/Google/README.md)
-* [textembedding-gecko](api-references/embedding-models/Google/textembedding-gecko.md)
 * [text-multilingual-embedding-002](api-references/embedding-models/Google/text-multilingual-embedding-002.md)
 * [OpenAI](api-references/embedding-models/OpenAI/README.md)
-* [text-embedding-3-large](api-references/embedding-models/OpenAI/text-embedding-3-large.md)
 * [text-embedding-3-small](api-references/embedding-models/OpenAI/text-embedding-3-small.md)
+* [text-embedding-3-large](api-references/embedding-models/OpenAI/text-embedding-3-large.md)
 * [text-embedding-ada-002](api-references/embedding-models/OpenAI/text-embedding-ada-002.md)
 * [Together AI](api-references/embedding-models/Together-AI/README.md)
 * [m2-bert-80M-retrieval](api-references/embedding-models/Together-AI/m2-bert-80M-retrieval.md)

docs/api-references/embedding-models/Anthropic/voyage-2.md

Lines changed: 6 additions & 2 deletions
@@ -28,8 +28,10 @@ If you don’t have an API key for the AI/ML API yet, feel free to use our [Quic
 [voyage-2.json](../../../.gitbook/assets/voyage-2.json)
 {% endopenapi %}
 
-## Example in Python
+## Code Example
 
+{% tabs %}
+{% tab title="Python" %}
 ```python
 import openai
 
@@ -51,8 +53,10 @@ response = client.embeddings.create(
 # Print the embedding
 print(response)
 ```
+{% endtab %}
+{% endtabs %}
 
-This Python example shows how to set up an API client, send text to the embedding API, and print the response with the embedding vector. See how large a vector response the model generates from just a single short input phrase.
+This example shows how to set up an API client, send text to the embedding API, and print the response with the embedding vector. See how large a vector response the model generates from just a single short input phrase.
 
 <details>
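The hunks above show only fragments of the docs' Python example (`import openai`, `client.embeddings.create(...)`, `print(response)`). As a companion illustration of what is typically done with the returned vector, here is a minimal, self-contained cosine-similarity sketch; the vectors below are toy stand-ins for `response.data[i].embedding`, not real model output.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|), ranges over [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for two embedding vectors from response.data[i].embedding.
v1 = [0.1, 0.3, -0.2]
v2 = [-0.3, 0.1, 0.0]

print(cosine_similarity(v1, v1))  # identical vectors score ~1.0
print(cosine_similarity(v1, v2))
```

Comparing two such vectors is the usual downstream step for embedding models like voyage-2 (semantic search, clustering, deduplication); the similarity function itself is model-agnostic.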
docs/api-references/embedding-models/Anthropic/voyage-finance-2.md

Lines changed: 85 additions & 3 deletions
Large diffs are not rendered by default.

docs/api-references/embedding-models/Anthropic/voyage-large-2-instruct.md

Lines changed: 85 additions & 3 deletions
Large diffs are not rendered by default.

docs/api-references/embedding-models/Anthropic/voyage-large-2.md

Lines changed: 85 additions & 3 deletions
Large diffs are not rendered by default.

docs/api-references/embedding-models/Anthropic/voyage-law-2.md

Lines changed: 85 additions & 3 deletions
Large diffs are not rendered by default.

docs/api-references/embedding-models/Anthropic/voyage-multilingual-2.md

Lines changed: 85 additions & 3 deletions
Large diffs are not rendered by default.

docs/api-references/embedding-models/BAAI/bge-base-en.md

Lines changed: 1 addition & 3 deletions
@@ -22,9 +22,7 @@ An embedding model that excels in creating high-precision linguistic representat
 
 If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
 
-## Submit a request
-
-### API Schema
+## API Schema
 
 {% openapi src="../../../.gitbook/assets/bge-base-en.json" path="/v1/embeddings" method="post" %}
 [bge-base-en.json](../../../.gitbook/assets/bge-base-en.json)
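The `{% openapi %}` block in this hunk points the page at `POST /v1/embeddings`. As a hedged sketch of the request body that endpoint expects — assuming the common OpenAI-style embeddings schema with `model` and `input` fields, which the bundled `bge-base-en.json` schema should be checked against — the JSON payload can be built like this:

```python
import json

# Sketch of a POST /v1/embeddings request body. Field names assume the
# OpenAI-style embeddings schema; bge-base-en.json is the authoritative shape.
payload = {
    "model": "bge-base-en",
    "input": "An example sentence to embed.",
}

body = json.dumps(payload)
print(body)
```

The same two-field shape is what the Python example elsewhere in this commit sends via `client.embeddings.create(...)`; building the body by hand is only useful when calling the endpoint with a raw HTTP client.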

docs/api-references/embedding-models/Google/text-multilingual-embedding-002.md

Lines changed: 85 additions & 3 deletions
Large diffs are not rendered by default.
