@@ -334,6 +334,11 @@ if __name__ == "__main__":
- Use `get_new_thread()` for multi-turn conversations
- Prefer `HostedMCPTool` for service-managed MCP, `MCPStreamableHTTPTool` for client-managed

## Best Practices

1. **This SDK is async-first** — use `async def` handlers and `async with` throughout.
2. **Always use context managers for clients and async credentials.** Wrap every client in `with Client(...) as client:` (sync) or `async with Client(...) as client:` (async). For async `DefaultAzureCredential` from `azure.identity.aio`, also use `async with credential:` so tokens and transports are cleaned up.
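
The cleanup ordering that nested context managers guarantee can be sketched with the stdlib alone; `credential` and `client` below are toy stand-ins, not the real SDK types. Exiting the nested `async with` blocks closes the client before the credential that backs it:

```python
import asyncio
from contextlib import asynccontextmanager

events = []

@asynccontextmanager
async def credential():
    events.append("credential open")
    try:
        yield "cred"
    finally:
        events.append("credential closed")   # transports torn down last

@asynccontextmanager
async def client(cred):
    events.append("client open")
    try:
        yield f"client({cred})"
    finally:
        events.append("client closed")       # client closes before its credential

async def main():
    async with credential() as cred:
        async with client(cred) as c:
            events.append(f"using {c}")

asyncio.run(main())
print(events)
# → ['credential open', 'client open', 'using client(cred)',
#    'client closed', 'credential closed']
```

The same ordering is why `async with credential:` belongs outside `async with client:` in real code.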

## Reference Files

- [references/tools.md](references/tools.md): Detailed hosted tool patterns
96 changes: 48 additions & 48 deletions .github/plugins/azure-sdk-python/skills/agents-v2-py/SKILL.md
@@ -78,27 +78,26 @@ from azure.ai.projects.models import (
### 2. Create Hosted Agent

```python
with AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential()
) as client:
    agent = client.agents.create_version(
        agent_name="my-hosted-agent",
        definition=ImageBasedHostedAgentDefinition(
            container_protocol_versions=[
                ProtocolVersionRecord(protocol=AgentProtocol.RESPONSES, version="v1")
            ],
            cpu="1",
            memory="2Gi",
            image="myregistry.azurecr.io/my-agent:latest",
            tools=[{"type": "code_interpreter"}],
            environment_variables={
                "AZURE_AI_PROJECT_ENDPOINT": os.environ["AZURE_AI_PROJECT_ENDPOINT"],
                "MODEL_NAME": "gpt-4o-mini"
            }
        )
    )

print(f"Created agent: {agent.name} (version: {agent.version})")
```
@@ -235,34 +234,33 @@ from azure.ai.projects.models import (
def create_hosted_agent():
    """Create a hosted agent with custom container image."""

    with AIProjectClient(
        endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
        credential=DefaultAzureCredential()
    ) as client:
        agent = client.agents.create_version(
            agent_name="data-processor-agent",
            definition=ImageBasedHostedAgentDefinition(
                container_protocol_versions=[
                    ProtocolVersionRecord(
                        protocol=AgentProtocol.RESPONSES,
                        version="v1"
                    )
                ],
                image="myregistry.azurecr.io/data-processor:v1.0",
                cpu="2",
                memory="4Gi",
                tools=[
                    {"type": "code_interpreter"},
                    {"type": "file_search"}
                ],
                environment_variables={
                    "AZURE_AI_PROJECT_ENDPOINT": os.environ["AZURE_AI_PROJECT_ENDPOINT"],
                    "MODEL_NAME": "gpt-4o-mini",
                    "MAX_RETRIES": "3"
                }
            )
        )

    print(f"Created hosted agent: {agent.name}")
    print(f"Version: {agent.version}")
@@ -322,11 +320,13 @@ async def create_hosted_agent_async():

## Best Practices

1. **Pick sync OR async and stay consistent.** Do not mix `azure.ai.projects` sync clients with `azure.ai.projects.aio` async clients in the same call path. Choose one mode per module.
2. **Always use context managers for clients and async credentials.** Wrap every client in `with Client(...) as client:` (sync) or `async with Client(...) as client:` (async). For async `DefaultAzureCredential` from `azure.identity.aio`, also use `async with credential:` so tokens and transports are cleaned up.
3. **Version Your Images** - Use specific tags, not `latest` in production
4. **Minimal Resources** - Start with minimum CPU/memory, scale up as needed
5. **Environment Variables** - Use for all configuration, never hardcode
6. **Error Handling** - Wrap agent creation in try/except blocks
7. **Cleanup** - Delete unused agent versions to free resources
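
The environment-variables rule above can be sketched with the stdlib alone (`load_config` is a hypothetical helper; the variable names mirror the examples in this skill): required settings raise immediately when absent, optional ones get typed defaults.

```python
import os

def load_config(env=os.environ) -> dict:
    """Read every setting from the environment; never hardcode them at call sites."""
    return {
        "endpoint": env["AZURE_AI_PROJECT_ENDPOINT"],        # required: KeyError if absent
        "model_name": env.get("MODEL_NAME", "gpt-4o-mini"),  # optional, with default
        "max_retries": int(env.get("MAX_RETRIES", "3")),     # optional, parsed to int
    }

# Demo with an explicit dict instead of the process environment
demo = load_config({"AZURE_AI_PROJECT_ENDPOINT": "https://example.invalid"})
print(demo["max_retries"])  # → 3
```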

## Reference Links

@@ -67,18 +67,17 @@ from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
from azure.core.credentials import AzureKeyCredential

with ContentSafetyClient(endpoint, AzureKeyCredential(key)) as client:
    request = AnalyzeTextOptions(text="Your text content to analyze")
    response = client.analyze_text(request)

    # Check each category
    for category in [TextCategory.HATE, TextCategory.SELF_HARM,
                     TextCategory.SEXUAL, TextCategory.VIOLENCE]:
        result = next((r for r in response.categories_analysis
                       if r.category == category), None)
        if result:
            print(f"{category}: severity {result.severity}")
```
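
The loop above reports a severity per category; acting on it usually means comparing against per-category thresholds. A stdlib sketch, assuming the default four-level severity scale (0, 2, 4, 6); the threshold values are illustrative, and the dict-shaped results stand in for the model objects the SDK returns.

```python
# Per-category thresholds: block when severity meets or exceeds the limit.
THRESHOLDS = {"Hate": 2, "SelfHarm": 2, "Sexual": 4, "Violence": 4}

def moderate(categories_analysis: list) -> tuple:
    """Return (allowed, reasons) for a list of {category, severity} results."""
    reasons = [
        f"{r['category']}: severity {r['severity']}"
        for r in categories_analysis
        if r["severity"] >= THRESHOLDS.get(r["category"], 4)
    ]
    return (not reasons, reasons)

# Simulated analyzer output
allowed, reasons = moderate([
    {"category": "Hate", "severity": 0},
    {"category": "Violence", "severity": 4},
])
print(allowed, reasons)  # → False ['Violence: severity 4']
```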

## Analyze Image
@@ -89,20 +88,19 @@ from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData
from azure.core.credentials import AzureKeyCredential
import base64

with ContentSafetyClient(endpoint, AzureKeyCredential(key)) as client:
    # From file
    with open("image.jpg", "rb") as f:
        image_data = base64.b64encode(f.read()).decode("utf-8")

    request = AnalyzeImageOptions(
        image=ImageData(content=image_data)
    )

    response = client.analyze_image(request)

    for result in response.categories_analysis:
        print(f"{result.category}: severity {result.severity}")
```

### Image from URL
@@ -126,17 +124,16 @@ from azure.ai.contentsafety import BlocklistClient
from azure.ai.contentsafety.models import TextBlocklist
from azure.core.credentials import AzureKeyCredential

with BlocklistClient(endpoint, AzureKeyCredential(key)) as blocklist_client:
    blocklist = TextBlocklist(
        blocklist_name="my-blocklist",
        description="Custom terms to block"
    )

    result = blocklist_client.create_or_update_text_blocklist(
        blocklist_name="my-blocklist",
        options=blocklist
    )
```

### Add Block Items
@@ -215,10 +212,12 @@ request = AnalyzeTextOptions(

## Best Practices

1. **Pick sync OR async and stay consistent.** Do not mix `azure.ai.contentsafety` sync clients with `azure.ai.contentsafety.aio` async clients in the same call path. Choose one mode per module.
2. **Always use context managers for clients and async credentials.** Wrap every client in `with ContentSafetyClient(...) as client:` (sync) or `async with ContentSafetyClient(...) as client:` (async). For async `DefaultAzureCredential` from `azure.identity.aio`, also use `async with credential:` so tokens and transports are cleaned up.
3. **Use blocklists** for domain-specific terms
4. **Set severity thresholds** appropriate for your use case
5. **Handle multiple categories** — content can be harmful in multiple ways
6. **Use halt_on_blocklist_hit** for immediate rejection
7. **Log analysis results** for audit and improvement
8. **Consider 8-severity mode** for finer-grained control
9. **Pre-moderate AI outputs** before showing to users
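
A toy illustration of the `halt_on_blocklist_hit` semantics mentioned above: with halting enabled the scan stops at the first matched term, which is what makes immediate rejection cheap. This is naive substring matching for illustration only; the service performs its own matching.

```python
def blocklist_scan(text: str, terms: list, halt_on_hit: bool = True) -> list:
    """Toy scan: collect matched terms, or stop at the first when halting."""
    hits = []
    lowered = text.lower()
    for term in terms:
        if term.lower() in lowered:
            hits.append(term)
            if halt_on_hit:
                break  # immediate rejection: skip remaining terms
    return hits

print(blocklist_scan("free crypto and spam offer", ["crypto", "spam"]))
# → ['crypto']
print(blocklist_scan("free crypto and spam offer", ["crypto", "spam"], halt_on_hit=False))
# → ['crypto', 'spam']
```
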
@@ -70,22 +70,21 @@ from azure.ai.contentunderstanding.models import AnalyzeInput
from azure.identity import DefaultAzureCredential

endpoint = os.environ["CONTENTUNDERSTANDING_ENDPOINT"]
with ContentUnderstandingClient(
    endpoint=endpoint,
    credential=DefaultAzureCredential()
) as client:
    # Analyze document from URL
    poller = client.begin_analyze(
        analyzer_id="prebuilt-documentSearch",
        inputs=[AnalyzeInput(url="https://example.com/document.pdf")]
    )

    result = poller.result()

    # Access markdown content (contents is a list)
    content = result.contents[0]
    print(content.markdown)
```
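
`poller.result()` above blocks for as long as the service needs, and video/audio analysis can take minutes. A stdlib sketch of bounding the wait instead; `FakePoller` is a stand-in for the SDK's poller, and `wait_with_timeout` is a hypothetical helper, not part of the SDK.

```python
import time

class FakePoller:
    """Stand-in for an SDK long-running-operation poller."""
    def __init__(self, ready_after: int):
        self._remaining = ready_after
    def done(self) -> bool:
        self._remaining -= 1
        return self._remaining <= 0
    def result(self):
        return {"contents": [{"markdown": "# extracted"}]}

def wait_with_timeout(poller, timeout_s: float, interval_s: float = 0.01):
    """Poll until done, or raise TimeoutError instead of blocking forever."""
    deadline = time.monotonic() + timeout_s
    while not poller.done():
        if time.monotonic() > deadline:
            raise TimeoutError("analysis did not finish in time")
        time.sleep(interval_s)
    return poller.result()

result = wait_with_timeout(FakePoller(ready_after=3), timeout_s=5)
print(result["contents"][0]["markdown"])  # → # extracted
```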

## Access Document Content Details
@@ -226,19 +225,18 @@ from azure.identity.aio import DefaultAzureCredential

async def analyze_document():
    endpoint = os.environ["CONTENTUNDERSTANDING_ENDPOINT"]
    async with DefaultAzureCredential() as credential:
        async with ContentUnderstandingClient(
            endpoint=endpoint,
            credential=credential
        ) as client:
            poller = await client.begin_analyze(
                analyzer_id="prebuilt-documentSearch",
                inputs=[AnalyzeInput(url="https://example.com/doc.pdf")]
            )
            result = await poller.result()
            content = result.contents[0]
            return content.markdown

asyncio.run(analyze_document())
```
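
For the high-throughput case, the async client lets many analyses share one event loop. A stdlib sketch of the fan-out, with `analyze_one` standing in for the `begin_analyze`/`result` round trip:

```python
import asyncio

async def analyze_one(url: str) -> str:
    """Stand-in for an async begin_analyze/result round trip."""
    await asyncio.sleep(0.01)   # simulated service latency
    return f"markdown for {url}"

async def analyze_many(urls: list) -> list:
    # gather schedules all coroutines concurrently on one event loop,
    # so total wall time is roughly one round trip, not len(urls) of them.
    return list(await asyncio.gather(*(analyze_one(u) for u in urls)))

results = asyncio.run(analyze_many(["a.pdf", "b.pdf"]))
print(results)  # → ['markdown for a.pdf', 'markdown for b.pdf']
```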
@@ -273,10 +271,12 @@ from azure.ai.contentunderstanding.models import (

## Best Practices

1. **Pick sync OR async and stay consistent.** Do not mix `azure.ai.contentunderstanding` sync clients with `azure.ai.contentunderstanding.aio` async clients in the same call path. Choose one mode per module.
2. **Always use context managers for clients and async credentials.** Wrap every client in `with ContentUnderstandingClient(...) as client:` (sync) or `async with ContentUnderstandingClient(...) as client:` (async). For async `DefaultAzureCredential` from `azure.identity.aio`, also use `async with credential:` so tokens and transports are cleaned up.
3. **Use `begin_analyze` with `AnalyzeInput`** — this is the correct method signature
4. **Access results via `result.contents[0]`** — results are returned as a list
5. **Use prebuilt analyzers** for common scenarios (document/image/audio/video search)
6. **Create custom analyzers** only for domain-specific field extraction
7. **Use async client** for high-throughput scenarios with `azure.identity.aio` credentials
8. **Handle long-running operations** — video/audio analysis can take minutes
9. **Use URL sources** when possible to avoid upload overhead
@@ -20,8 +20,9 @@ When responding to requests about Azure AI Language Conversations:
4. Handle exceptions properly.

## Best Practices
- **Pick sync OR async and stay consistent.** Do not mix `azure.ai.language.conversations` sync clients with `azure.ai.language.conversations.aio` async clients in the same call path. Choose one mode per module.
- **Always use context managers for clients and async credentials.** Wrap every client in `with ConversationAnalysisClient(...) as client:` (sync) or `async with ConversationAnalysisClient(...) as client:` (async). For async `DefaultAzureCredential` from `azure.identity.aio`, also use `async with credential:` so tokens and transports are cleaned up.
- Use environment variables for the endpoint, API key, project name, and deployment name.
- Clearly map the `participantId` and `id` in the `conversationItem` payload.
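
The `participantId`/`id` mapping in the last bullet is easy to get wrong. A hedged sketch of assembling the payload; the field layout reflects the conversations task schema as commonly documented, so verify it against the service version you target, and `build_task` is a hypothetical helper:

```python
def build_task(text: str, project: str, deployment: str,
               item_id: str = "1", participant_id: str = "user1") -> dict:
    """Assemble a conversation analysis task; id and participantId are both required."""
    return {
        "kind": "Conversation",
        "analysisInput": {
            "conversationItem": {
                "id": item_id,                    # unique per utterance
                "participantId": participant_id,  # stable per speaker
                "text": text,
            }
        },
        "parameters": {
            "projectName": project,
            "deploymentName": deployment,
        },
    }

task = build_task("Book a flight to Seattle", "my-project", "production")
print(task["analysisInput"]["conversationItem"]["participantId"])  # → user1
```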

## Examples
@@ -273,10 +273,12 @@ print(f"Default: {default_ds.name}")

## Best Practices

1. **Pick sync OR async and stay consistent.** Do not mix sync clients with async (`.aio`) clients or credentials in the same call path. Choose one mode per module.
2. **Always use context managers for clients and async credentials.** Wrap every client in `with MLClient(...) as client:` (sync) or `async with MLClient(...) as client:` (async). For async `DefaultAzureCredential` from `azure.identity.aio`, also use `async with credential:` so tokens and transports are cleaned up.
3. **Use versioning** for data, models, and environments
4. **Configure idle scale-down** to reduce compute costs
5. **Use environments** for reproducible training
6. **Stream job logs** to monitor progress
7. **Register models** after successful training jobs
8. **Use pipelines** for multi-step workflows
9. **Tag resources** for organization and cost tracking