A sophisticated Python-based conversational AI agent for recipe search and ingredient extraction, powered by Azure AI Foundry with a clean, extensible architecture.
- Start with Managed (Azure-native orchestration)
- Graduate to BYO (Semantic Kernel/LangChain) if complexity demands it
- Search by ingredients, cuisine, dietary restrictions
- Filter by cooking time and difficulty level
- Get personalized recommendations based on preferences
- NER-style parsing with regex patterns
- LLM fallback for complex cases
- Automatic dietary constraint detection
- Maintains context across multiple turns
- Remembers dietary restrictions and preferences
- Auto-detects and stores user constraints
- Orchestrator: Strategy pattern for swappable orchestration (Managed, Semantic Kernel, LangChain)
- Tools: Modular, extensible tool system
- Memory: Pluggable memory backends (In-Memory, Redis, Cosmos DB)
- Observability: Comprehensive logging for monitoring
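The NER-style extraction and dietary-constraint detection listed above can be sketched as a single regex pass over the user's text. Everything here is illustrative (the vocabulary, patterns, and function name are not the project's actual implementation, which also has an LLM fallback for complex phrasing):

```python
import re

# Illustrative dietary-keyword patterns; the real tool may use a larger table.
DIETARY_TERMS = {
    "gluten-free": r"\bgluten[- ]free\b",
    "vegan": r"\bvegan\b",
    "dairy-free": r"\bdairy[- ]free\b",
    "vegetarian": r"\bvegetarian\b",
}

# Tiny ingredient vocabulary for the fast regex pass; unknown phrasing
# would fall through to the LLM fallback in the real tool.
KNOWN_INGREDIENTS = ["salmon", "lemon", "asparagus", "chicken", "rice", "pasta"]

def extract(text: str) -> dict:
    """Return ingredients and dietary constraints found in free-form text."""
    lowered = text.lower()
    ingredients = [i for i in KNOWN_INGREDIENTS if re.search(rf"\b{i}\b", lowered)]
    constraints = [name for name, pat in DIETARY_TERMS.items() if re.search(pat, lowered)]
    return {"ingredients": ingredients, "dietary_constraints": constraints}
```

For example, `extract("I have salmon and lemon, gluten-free please")` picks out both ingredients and the gluten-free constraint without any model call.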
Azure AI Foundry already provides a native agent orchestrator. You may optionally swap in external orchestrators depending on your needs.
**Option 1: Azure AI Foundry Agent (Managed)**

What it is:
- Azure's built-in agent runtime
- Handles planning, memory, tool calling, context management, and multi-step reasoning
- No external framework required
Use when:
- ✅ You want the simplest, most Azure-aligned approach
- ✅ You want built-in multi-step tool calling
- ✅ You prefer serverless agent behavior
- ✅ You want minimal code and no orchestration maintenance

Pros:
- ✅ Automatic planning
- ✅ Built-in memory + tools
- ✅ Simplest architecture
- ✅ No extra dependencies

Cons:
- ❌ Less low-level control
**Option 2: Semantic Kernel**

What it is:
- Microsoft's agent framework with planners, memory connectors, skills, and deep Azure integrations
Use when:
- ✅ You want multi-step planning with more control than AI Foundry
- ✅ You need connectors (AI Search, Cosmos DB, Storage, SQL, etc.)
- ✅ Your org is already using Microsoft tooling
- ✅ You want a structured, enterprise-ready agent stack

Pros:
- ✅ Planners
- ✅ Built-in memory + safety
- ✅ Azure-native ecosystem

Cons:
- ❌ More dependencies
- ❌ More opinionated patterns
**Option 3: LangChain**

What it is:
- Large ecosystem for LLM apps, ideal for RAG-heavy agents and integrations
Use when:
- ✅ You need many integrations (Pinecone, Weaviate, loaders, crawlers)
- ✅ Your team already uses LangChain
- ✅ You're building complex RAG or retrieval workflows
- ✅ You want rapid prototyping

Pros:
- ✅ Huge ecosystem
- ✅ Excellent for RAG pipelines
- ✅ Many pre-built chains

Cons:
- ❌ Heavy dependencies
- ❌ Frequent version churn
- ❌ More complexity than needed for basic agents
**Option 4: M365 Agent Toolkit**

What it is:
- Toolkit for building agents that operate inside Teams, Outlook, and other M365 apps
Use when:
- ✅ Your agent must run inside Teams/M365
- ✅ You need built-in connectors (Graph API, Calendar, Mail, SharePoint)
- ✅ You want M365 auth handled for you

Pros:
- ✅ Native Teams/Outlook integration
- ✅ Pre-built M365 connectors
- ✅ Handles authentication and card formats

Cons:
- ❌ Not suitable for general-purpose agents
- ❌ Tightly bound to M365 ecosystem
**Option 5: Custom Manual Orchestration**

What it is:
- You manually build the orchestration loop using the Azure Models API (ChatCompletions)
- You handle everything yourself: planning, tool routing, memory, retries, context, and validation
Use when:
- ✅ You need extremely specialized control
- ✅ You're integrating with legacy systems
- ✅ You're experimenting or prototyping low-level LLM behavior

Pros:
- ✅ Full transparency and customization

Cons:
- ❌ Not recommended for production
- ❌ SK or LangChain do this far better
- ❌ High maintenance and complexity
- ❌ No built-in planning or memory
| Option | Orchestration Provided By | Best For | Multi-Step Planning | Dependencies |
|---|---|---|---|---|
| 1. Azure AI Foundry Agent | Azure | General agents, simplest path | Yes (built-in) | None |
| 2. Semantic Kernel | SK Planner | Enterprise + Azure-native apps | Yes | Semantic Kernel |
| 3. LangChain | LC AgentExecutor | RAG-heavy + integrations | Yes | LangChain |
| 4. M365 Agent Toolkit | Toolkit | Teams/M365 apps | Yes | M365 SDKs |
| 5. Custom Manual | You | Highly custom logic | No (you build it) | None |
- Educational: Shows core agent patterns without framework magic
- Transparent: You see exactly what happens at each step
- Flexible: Easy to customize without fighting framework constraints
- Lightweight: Minimal dependencies
When to Switch to BYO Framework:
- You need automatic multi-step planning
- You want framework-provided memory/telemetry instead of building it
- You need extensive integrations (LangChain's 300+ connectors)
- Your team prefers configuration over custom code
How to Switch: This project uses the strategy pattern. To swap orchestrators, implement a new class in `orchestrator.py` and change `ORCHESTRATOR_TYPE` in `.env`.
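A minimal sketch of that swap, assuming hypothetical class and registry names (the project's actual `orchestrator.py` may differ):

```python
import os
from abc import ABC, abstractmethod

class Orchestrator(ABC):
    """Strategy interface: every orchestrator answers a user message the same way."""

    @abstractmethod
    def run(self, user_input: str) -> str: ...

class ManagedOrchestrator(Orchestrator):
    def run(self, user_input: str) -> str:
        # The real class calls Azure OpenAI with function calling here.
        return f"[managed] {user_input}"

class SemanticKernelOrchestrator(Orchestrator):
    def run(self, user_input: str) -> str:
        return f"[semantic_kernel] {user_input}"

# Map the .env setting to a concrete strategy class.
_REGISTRY = {
    "managed": ManagedOrchestrator,
    "semantic_kernel": SemanticKernelOrchestrator,
}

def make_orchestrator() -> Orchestrator:
    """Instantiate the strategy named by ORCHESTRATOR_TYPE (as set in .env)."""
    return _REGISTRY[os.getenv("ORCHESTRATOR_TYPE", "managed")]()
```

Because callers only depend on the `Orchestrator` interface, switching backends is a one-line configuration change rather than a code change.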
- Simplicity: Direct Azure AI Foundry integration without framework overhead
- Transparency: See exactly what's happening, with no "magic" abstractions
- Flexibility: Easy to customize without fighting framework constraints
- Educational Value: Understand core AI agent patterns before adopting frameworks
- Production-Ready: Proven approach used by many Azure AI applications
- True "Managed": Let Azure handle orchestration natively
The Philosophy:
- Start with Managed (Azure-native orchestration)
- Graduate to BYO (Semantic Kernel/LangChain) when complexity demands it
- Choose M365 Toolkit only if building M365-specific agents
Vision: Extend this Azure AI Foundry agent to work natively within Microsoft 365 environments (Teams, Outlook, SharePoint) while maintaining the current architecture.
Why This Matters:
- Keep the clean orchestrator pattern we've built
- Add M365 as a deployment target, not a replacement
- Enable the agent to work in chat interfaces where users already are
- Access M365 data (emails, calendars, documents) as additional context
Planned Approach:
- Maintain current Azure AI Foundry core
- Add M365 authentication layer (Azure AD/Entra ID)
- Implement M365 message adapters (Teams cards, Outlook actionable messages)
- Create M365-specific tools (calendar lookup, email search, document retrieval)
- Deploy as Teams app or Outlook add-in
What This Means:
- Same agent logic (orchestrator, tools, memory)
- Multiple interfaces (CLI, Teams, Outlook, web)
- M365 as a channel, not a rewrite
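For the planned Teams channel, the adapter layer would translate an agent reply into an Adaptive Card payload. A hedged sketch, where the function name, card layout, and recipe fields are all illustrative assumptions:

```python
def to_teams_card(reply_text: str, recipes: list) -> dict:
    """Wrap an agent reply plus recipe summaries in an Adaptive Card payload."""
    body = [{"type": "TextBlock", "text": reply_text, "wrap": True}]
    # One TextBlock per recipe; a real adapter might use richer card elements.
    body += [
        {"type": "TextBlock", "text": f"- {r['title']} ({r['time_minutes']} min)"}
        for r in recipes
    ]
    return {
        "type": "AdaptiveCard",
        "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
        "version": "1.4",
        "body": body,
    }
```

The key design point is that the orchestrator's output stays channel-neutral text plus structured results; only this thin adapter knows about Teams.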
Detailed implementation guide: see `docs/m365-integration-guide.md` for step-by-step instructions.
```
A365Agent/
│
├── app.py                 # Main application with chat loop
├── config.py              # Configuration and settings
├── orchestrator.py        # Orchestrator interface + implementations
├── memory.py              # Memory storage interface + implementations
├── requirements.txt       # Python dependencies
├── .env.example           # Environment variable template
├── .env                   # Your credentials (not in git)
│
├── tools/                 # Tool implementations
│   ├── __init__.py
│   ├── ingredient_extractor.py   # Ingredient parsing tool
│   └── recipe_search.py          # Recipe search tool
│
├── data/                  # Data files
│   └── recipes.json       # Sample recipe dataset (18 recipes)
│
└── README.md              # This file
```
- Python 3.8 or higher
- Azure AI Foundry project with deployed model
- Azure OpenAI API credentials
1. Clone or navigate to the project:

   ```
   cd c:\Usha\UKRepos\A365Agent
   ```

2. Install dependencies:

   ```
   pip install -r requirements.txt
   ```

3. Configure environment variables. Copy the example file:

   ```
   Copy-Item .env.example .env
   ```

   Edit `.env` and add your Azure AI Foundry credentials:

   ```
   AZURE_OPENAI_ENDPOINT=https://your-foundry-project.openai.azure.com/
   AZURE_OPENAI_API_KEY=your_api_key_here
   MODEL_DEPLOYMENT_NAME=gpt-4
   ```
How to get your credentials:
- Go to Azure AI Foundry Portal
- Select your project
- Navigate to Settings > Endpoints
- Copy the endpoint URL and API key
- Note your model deployment name
```
python app.py
```

You: Find gluten-free dinner recipes under 30 minutes
ChefAI: I found several great gluten-free options that can be made quickly...
You: I have salmon, lemon, and asparagus
ChefAI: Based on those ingredients, I recommend...
You: Show me vegan Italian recipes
ChefAI: Here are some delicious vegan Italian dishes...
You: Make it under 25 minutes
ChefAI: Filtering for quick options under 25 minutes...
You: Find me pasta recipes
ChefAI: Here are some pasta options...
You: Make it dairy-free
ChefAI: Here are dairy-free pasta recipes...
- `exit`, `quit`, `bye`: end the session
- `clear`: reset conversation memory
- `preferences`: view saved preferences
- `help`: show help information
The orchestrator uses a strategy pattern for swappable implementations:
```python
# Default: Managed orchestrator with Azure OpenAI function calling
orchestrator = ManagedOrchestrator()

# Future: Swap to Semantic Kernel
orchestrator = SemanticKernelOrchestrator()

# Future: Swap to LangChain
orchestrator = LangChainOrchestrator()
```

Current Implementation:
- ManagedOrchestrator uses native Azure OpenAI function calling
- Handles tool selection, execution, and response synthesis
- No external orchestration framework required
Extensibility: See orchestrator.py for implementation guides for Semantic Kernel and LangChain.
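The core of a managed-style loop is: send the messages plus tool schemas, execute any tool calls the model returns, append the results, and repeat until the model answers in plain text. A sketch with the client injected so it can be exercised without network access; the function name and `registry` argument are illustrative, not the project's actual code:

```python
import json

def run_tool_loop(client, model, messages, tools, registry, max_rounds=5):
    """Call the chat API until the model stops requesting tools.

    `registry` maps tool names to plain Python callables; `client` is any
    object exposing the OpenAI-style chat.completions.create method.
    """
    for _ in range(max_rounds):
        response = client.chat.completions.create(
            model=model, messages=messages, tools=tools
        )
        msg = response.choices[0].message
        if not msg.tool_calls:
            return msg.content  # Plain-text answer: the loop is done.
        messages.append(msg)  # Keep the assistant's tool-call turn in context.
        for call in msg.tool_calls:
            # Execute the requested tool and feed its result back to the model.
            result = registry[call.function.name](**json.loads(call.function.arguments))
            messages.append(
                {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)}
            )
    raise RuntimeError("Tool loop did not converge")
```

Injecting the client also makes the loop easy to unit-test with a stub that replays canned responses.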
Tools are self-describing functions with OpenAI function schemas:
```python
# Each tool has a schema for the LLM
ingredient_extractor.schema = {
    "type": "function",
    "function": {
        "name": "ingredient_extractor",
        "description": "Extract ingredients and dietary constraints from text",
        "parameters": {...}
    }
}
```

Available Tools:
1. ingredient_extractor
   - Regex-based parsing for speed
   - LLM fallback for complex cases
   - Detects dietary constraints automatically

2. recipe_search
   - Searches local JSON dataset
   - Filters: ingredients, diet, cuisine, time, difficulty
   - Returns up to 5 matching recipes

Add New Tools:
1. Create the tool function in `tools/`
2. Add an OpenAI function schema
3. Register it in `app.py`
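Following those three steps, a hypothetical `nutrition_lookup` tool might look like this (the tool name, data table, and registration pattern are illustrative examples, not part of the project):

```python
# tools/nutrition_lookup.py (hypothetical example)
def nutrition_lookup(ingredient: str) -> dict:
    """Return rough nutrition facts for an ingredient from a local table."""
    table = {"salmon": {"calories_per_100g": 208, "protein_g": 20}}
    return table.get(ingredient.lower(), {"error": f"no data for {ingredient}"})

# Step 2: attach the OpenAI function-calling schema the orchestrator expects.
nutrition_lookup.schema = {
    "type": "function",
    "function": {
        "name": "nutrition_lookup",
        "description": "Look up nutrition facts for a single ingredient",
        "parameters": {
            "type": "object",
            "properties": {"ingredient": {"type": "string"}},
            "required": ["ingredient"],
        },
    },
}
```

Step 3 would then be a one-line registration in `app.py`, e.g. adding the function to whatever tool registry the app passes to the orchestrator.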
Pluggable memory backends:
```python
# Default: In-memory (development)
memory = InMemoryStore()

# Production: Redis (persistent, distributed)
memory = RedisMemory()

# Production: Cosmos DB (Azure, globally distributed)
memory = CosmosDBMemory()
```

Memory Features:
- Conversation history with timestamp
- User preferences (dietary, cuisine, time constraints)
- Auto-extraction of preferences from conversation
- Session metadata and statistics
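The pluggable design can be sketched as an abstract base class that `RedisMemory` and `CosmosDBMemory` would also implement. Method names here are illustrative; see `memory.py` for the project's actual interface:

```python
from abc import ABC, abstractmethod

class Memory(ABC):
    """Backend-agnostic contract: conversation history plus a preference store."""

    @abstractmethod
    def add_turn(self, role: str, content: str) -> None: ...

    @abstractmethod
    def history(self, limit: int = 10) -> list: ...

    @abstractmethod
    def set_preference(self, key: str, value: str) -> None: ...

class InMemoryStore(Memory):
    """Default development backend: everything lives in the process."""

    def __init__(self):
        self._turns = []
        self.preferences = {}

    def add_turn(self, role: str, content: str) -> None:
        self._turns.append({"role": role, "content": content})

    def history(self, limit: int = 10) -> list:
        return self._turns[-limit:]  # Most recent turns only.

    def set_preference(self, key: str, value: str) -> None:
        self.preferences[key] = value
```

Because the orchestrator only talks to the `Memory` interface, swapping backends changes persistence characteristics without touching agent logic.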
Switch Memory Backend:
Set in `.env`:

```
MEMORY_BACKEND=in_memory  # or redis, cosmos_db
```

All interactions are logged with:
- User input and timestamp
- Selected tools and arguments
- Tool execution results
- Model response and rationale
- Decision reasoning
Log Levels:

```
LOG_LEVEL=INFO  # INFO, DEBUG, WARNING, ERROR
ENABLE_DETAILED_LOGGING=true
```

Logs are written to:
- Console (stdout)
- File: `chef_ai_agent.log`
Future: Integrate with Azure Monitor for production observability.
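One lightweight way to capture the per-tool details listed above is a decorator wrapped around each tool. This is a sketch under assumptions (the decorator, logger name, and stand-in tool are illustrative; the project's logging code may be structured differently):

```python
import functools
import json
import logging
import time

logger = logging.getLogger("chef_ai_agent")

def logged_tool(func):
    """Log tool name, keyword arguments, duration, and result for every call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        logger.info(
            "tool=%s args=%s duration_ms=%.1f result=%s",
            func.__name__,
            json.dumps(kwargs),            # Keyword arguments only, for simplicity.
            (time.perf_counter() - start) * 1000,
            json.dumps(result)[:200],      # Truncate large payloads.
        )
        return result
    return wrapper

@logged_tool
def recipe_search(query=None):
    """Stand-in tool so the decorator can be demonstrated."""
    return {"matches": 3}
```

Because the log line is structured (`key=value` pairs), it stays greppable in `chef_ai_agent.log` and maps cleanly onto Azure Monitor custom dimensions later.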
| Variable | Required | Description | Example |
|---|---|---|---|
| `AZURE_OPENAI_ENDPOINT` | Yes | Azure AI Foundry endpoint URL | `https://your-project.openai.azure.com/` |
| `AZURE_OPENAI_API_KEY` | Yes | Azure OpenAI API key | Your API key |
| `MODEL_DEPLOYMENT_NAME` | Yes | Model deployment name | `gpt-4`, `gpt-35-turbo` |
| `API_VERSION` | No | API version | `2024-08-01-preview` |
| `ORCHESTRATOR_TYPE` | No | Orchestrator backend | `managed`, `semantic_kernel`, `langchain` |
| `MEMORY_BACKEND` | No | Memory storage | `in_memory`, `redis`, `cosmos_db` |
| `LOG_LEVEL` | No | Logging level | `INFO`, `DEBUG` |
Adjust in `config.py`:

```python
MAX_CONVERSATION_HISTORY = 10  # Number of turns to keep
TEMPERATURE = 0.7              # Model creativity (0.0-1.0)
MAX_TOKENS = 1500              # Max response length
MAX_RECIPE_RESULTS = 5         # Recipes returned per search
```

To switch to Semantic Kernel:

1. Install Semantic Kernel:

   ```
   pip install semantic-kernel
   ```

2. Uncomment the implementation in `orchestrator.py`:

   ```python
   class SemanticKernelOrchestrator(Orchestrator):
       ...  # Implementation provided in comments
   ```

3. Update config:

   ```
   ORCHESTRATOR_TYPE=semantic_kernel
   ```
To switch to LangChain:

1. Install LangChain:

   ```
   pip install langchain langchain-openai
   ```

2. Uncomment the implementation in `orchestrator.py`:

   ```python
   class LangChainOrchestrator(Orchestrator):
       ...  # Implementation provided in comments
   ```

3. Update config:

   ```
   ORCHESTRATOR_TYPE=langchain
   ```
Replace local search with Spoonacular API:
1. Get an API key: sign up at Spoonacular

2. Add to `.env`:

   ```
   SPOONACULAR_API_KEY=your_api_key
   ```

3. Update `tools/recipe_search.py`:

   ```python
   import requests

   def recipe_search(...):
       api_key = Config.SPOONACULAR_API_KEY
       url = "https://api.spoonacular.com/recipes/complexSearch"
       params = {
           "apiKey": api_key,
           "includeIngredients": ",".join(ingredients) if ingredients else None,
           "diet": dietary_restrictions[0] if dietary_restrictions else None,
           # ... other params
       }
       response = requests.get(url, params=params)
       return format_results(response.json())
   ```
See detailed guide in tools/recipe_search.py.
To use Redis:

1. Install the Redis client:

   ```
   pip install redis
   ```

2. Start Redis:

   ```
   docker run -d -p 6379:6379 redis
   ```

3. Uncomment the implementation in `memory.py`:

   ```python
   class RedisMemory(Memory):
       ...  # Implementation provided in comments
   ```

4. Configure:

   ```
   MEMORY_BACKEND=redis
   REDIS_URL=redis://localhost:6379
   ```
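A sketch of how a `RedisMemory` could store turns, with the client injected so any redis-py-compatible object works; the key scheme and method names are assumptions, and the commented-out implementation in `memory.py` may differ:

```python
import json

class RedisMemory:
    """Store each session's turns as a Redis list of JSON strings.

    The client is injected, e.g. redis.Redis.from_url(REDIS_URL) in
    production, or a fake with rpush/lrange in tests.
    """

    def __init__(self, client, session_id: str):
        self.client = client
        self.key = f"chef:session:{session_id}:turns"  # Illustrative key scheme.

    def add_turn(self, role: str, content: str) -> None:
        # RPUSH appends to the list, preserving conversation order.
        self.client.rpush(self.key, json.dumps({"role": role, "content": content}))

    def history(self, limit: int = 10) -> list:
        # LRANGE with negative indices returns the last `limit` entries.
        return [json.loads(x) for x in self.client.lrange(self.key, -limit, -1)]
```

Unlike `InMemoryStore`, this survives restarts and can be shared across multiple app instances pointing at the same Redis.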
To use Cosmos DB:

1. Install the Cosmos SDK:

   ```
   pip install azure-cosmos
   ```

2. Create a Cosmos DB account in the Azure Portal

3. Uncomment the implementation in `memory.py`:

   ```python
   class CosmosDBMemory(Memory):
       ...  # Implementation provided in comments
   ```

4. Configure:

   ```
   MEMORY_BACKEND=cosmos_db
   COSMOS_DB_ENDPOINT=https://your-cosmos.documents.azure.com:443/
   COSMOS_DB_KEY=your_cosmos_key
   ```
Test the ingredient extractor:

```
python -m tools.ingredient_extractor
```

Test recipe search:

```
python -m tools.recipe_search
```

Example queries to try:

- Find gluten-free dinner in under 30 minutes
- I have salmon, lemon, and asparagus
- Make it dairy-free and Mediterranean
- Show me easy vegan recipes
- What can I cook with chicken and rice?
View detailed logs:

```
Get-Content chef_ai_agent.log -Tail 50
```

Filter by level:

```
Select-String -Path chef_ai_agent.log -Pattern "ERROR"
```

Metrics worth tracking:

- Interaction count
- Tools used per session
- Response times
- Tool call frequency
- Error rates
Uncomment in `config.py`:

```python
APPLICATIONINSIGHTS_CONNECTION_STRING = os.getenv("APPLICATIONINSIGHTS_CONNECTION_STRING")
```

Install the SDK:

```
pip install azure-monitor-opentelemetry
```

Error: "AZURE_OPENAI_ENDPOINT not found"
- Ensure the `.env` file exists (copy from `.env.example`)
- Check that environment variable names match exactly
- Verify there are no extra spaces or quotes
Error: "Model deployment not found"
- Check your model deployment name in Azure AI Foundry
- Ensure the model is deployed and running
- Verify the name matches exactly in `.env`
Error: "Tool schema missing"
- Ensure tool functions have a `.schema` attribute
- Check that the schema format matches the OpenAI function-calling spec
Slow responses:
- Reduce `MAX_TOKENS` in config
- Use a faster model (e.g., `gpt-35-turbo` instead of `gpt-4`)
- Check network latency to Azure
Conversation context lost:
- Check the `MAX_CONVERSATION_HISTORY` setting
- Verify the memory backend is initialized
- Use the `preferences` command to check stored data
- Azure AI Foundry Documentation
- Azure OpenAI Service
- OpenAI Function Calling Guide
- Semantic Kernel
- LangChain
This project is provided as-is for educational and development purposes.
Contributions welcome! Areas for enhancement:
- Additional tools (nutrition lookup, meal planning)
- More orchestrator implementations
- Enhanced observability and monitoring
- UI/Web interface
- Voice input/output
- Multi-user support
Happy Cooking with ChefAI!