Azure AI Foundry provides a unified platform for enterprise AI operations, model building, and application development. With Portkey, you can seamlessly integrate with various models available on Azure AI Foundry and take advantage of features like observability, prompt management, fallbacks, and more.
Quick Start
# 1. Install: pip install portkey-ai
# 2. Add @azure-foundry provider in Model Catalog
# 3. Use it:
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    provider="@azure-foundry"
)

response = portkey.chat.completions.create(
    model="DeepSeek-V3-0324",  # Your deployed model name
    messages=[{"role": "user", "content": "Tell me about cloud computing"}]
)

print(response.choices[0].message.content)
Add Provider in Model Catalog
To integrate Azure AI Foundry with Portkey, you’ll create a provider in the Model Catalog. This securely stores your Azure AI Foundry credentials, allowing you to use a simple identifier in your code instead of handling sensitive authentication details directly.
OpenAI models on Azure
If you’re specifically looking to use OpenAI models on Azure, use the Azure OpenAI integration instead; it is optimized for those models.
Understanding Azure AI Foundry Deployments
Azure AI Foundry offers three different ways to deploy models, each with unique endpoints and configurations:
- AI Services: Azure-managed models accessed through Azure AI Services endpoints
- Managed: User-managed deployments running on dedicated Azure compute resources
- Serverless: Seamless, scalable deployment without managing infrastructure
You can learn more about Azure AI Foundry deployment options here.
Creating Your Azure AI Foundry Provider
Integrate Azure AI Foundry with Portkey to centrally manage your AI models and deployments. This guide walks you through setting up the provider using API key authentication.
Prerequisites
Before creating your provider, you’ll need:
- An active Azure AI Foundry account
- Access to your Azure AI Foundry portal
- A deployed model on Azure AI Foundry
Step 1: Navigate to Model Catalog
Go to Model Catalog → Add Provider and select Azure AI Foundry as your provider.
Step 2: Configure Provider Details
Fill in the basic information for your provider:
- Name: A descriptive name for this provider (e.g., “Azure AI Production”)
- Short Description: Optional context about this provider’s purpose
- Slug: A unique identifier used in API calls (e.g., “@azure-ai-prod”)
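The slug is the identifier you pass as the provider in your code. For example, with the slug above:
from portkey_ai import Portkey

# Reference the provider by the slug configured in Model Catalog
portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    provider="@azure-ai-prod"
)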
Step 3: Set Up Authentication
Portkey supports three authentication methods for Azure AI Foundry: Default (API Key), Azure Managed Entity, and Azure Entra ID. For most use cases, we recommend the Default (API Key) method.
Default (API Key)
Gather Your Azure Credentials
From your Azure AI Foundry portal, collect the following:
- Navigate to your model deployment in Azure AI Foundry
- Click on the deployment to view its details
- Copy the API Key from the authentication section
- Copy the Target URI; this is your endpoint URL
- Note the API Version from your deployment URL (see the sketch below)
- Azure Deployment Name (Optional): Only required for Managed Services deployments
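If you’re unsure which API version your deployment uses, you can read it off the Target URI’s query string. A minimal sketch, using the example URL from this guide:
from urllib.parse import urlparse, parse_qs

# Target URI copied from the Azure AI Foundry portal (example value)
target_uri = "https://mycompany-ai.westus2.services.ai.azure.com/models?api-version=2024-05-01-preview"

# Extract the api-version query parameter, if present
api_version = parse_qs(urlparse(target_uri).query).get("api-version", [""])[0]
print(api_version)  # 2024-05-01-preview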
Enter Credentials in Portkey
Enter the API Key, Target URI, API version, and (if applicable) deployment name you collected above into the provider form.
Azure Managed Entity
For managed Azure deployments, the following parameters are required:
- Azure Managed ClientID: Your managed identity client ID
- Azure Foundry URL: The base endpoint URL for your deployment, formatted according to your deployment type:
  - For AI Services: https://your-resource-name.services.ai.azure.com/models
  - For Managed: https://your-model-name.region.inference.ml.azure.com/score
  - For Serverless: https://your-model-name.region.models.ai.azure.com
- Azure API Version: The API version to use (e.g., “2024-05-01-preview”). This is required if your deployment URL includes an api-version parameter. For example, if your URL is https://mycompany-ai.westus2.services.ai.azure.com/models?api-version=2024-05-01-preview, the API version is 2024-05-01-preview.
- Azure Deployment Name: (Optional) Required only when a single resource contains multiple deployments.
To use this authentication method, your Azure application needs the Cognitive Services User role.
Azure Entra ID
For enterprise-level authentication with Azure Entra ID, the following parameters are required:
- Azure Entra ClientID: Your Azure Entra client ID
- Azure Entra Secret: Your client secret
- Azure Entra Tenant ID: Your tenant ID
- Azure Foundry URL: The base endpoint URL for your deployment, formatted according to your deployment type:
  - For AI Services: https://your-resource-name.services.ai.azure.com/models
  - For Managed: https://your-model-name.region.inference.ml.azure.com/score
  - For Serverless: https://your-model-name.region.models.ai.azure.com
- Azure API Version: The API version to use (e.g., “2024-05-01-preview”). This is required if your deployment URL includes an api-version parameter. For example, if your URL is https://mycompany-ai.westus2.services.ai.azure.com/models?api-version=2024-05-01-preview, the API version is 2024-05-01-preview.
- Azure Deployment Name: (Optional) Required only when a single resource contains multiple deployments. Common in Managed deployments.
You can learn more about these Azure Entra resources here.
Adding Multiple Models to Your Azure AI Foundry Provider
You can deploy multiple models through a single Azure AI Foundry provider by using Portkey’s custom models feature.
Steps to Add Additional Models
- Navigate to your Azure AI Foundry provider in Model Catalog
- Select the Model Provisioning step
- Click Add Model in the top-right corner
Enter the following details for your Azure deployment:
- Model Slug: Use your Azure model deployment name exactly as it appears in Azure AI Foundry
- Short Description: Optional description for team reference
- Model Type: Select “Custom model”
- Base Model: Choose the model that matches your deployment’s API structure (e.g., select gpt-4 for GPT-4 deployments). This is only for reference; if you can’t find the exact model, choose a similar one.
- Custom Pricing: Enable this to track costs at your negotiated rates
Once configured, this model will be available alongside others in your provider, allowing you to manage multiple Azure deployments through a single set of credentials.
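For example, once a custom model is added, you call it through the same provider. A minimal sketch, where "my-gpt4-deployment" is a hypothetical model slug:
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    provider="@azure-ai-prod"  # your provider slug
)

# "my-gpt4-deployment" is a hypothetical custom model slug for illustration
response = portkey.chat.completions.create(
    model="my-gpt4-deployment",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)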
Azure AI Foundry Capabilities
Function Calling
Azure AI Foundry supports function calling (tool calling) for compatible models. Here’s how to implement it with Portkey:
tools = [{
    "type": "function",
    "function": {
        "name": "getWeather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City and state"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location"]
        }
    }
}]

response = portkey.chat.completions.create(
    model="DeepSeek-V3-0324",  # Use a model that supports function calling
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the weather like in Delhi?"}
    ],
    tools=tools,
    tool_choice="auto"
)

print(response.choices[0])
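When the model decides to call the function, the response carries tool calls rather than plain text. A minimal sketch of reading them, following the OpenAI-compatible response shape:
import json

message = response.choices[0].message
if message.tool_calls:
    for tool_call in message.tool_calls:
        # Each tool call names the function and carries JSON-encoded arguments
        args = json.loads(tool_call.function.arguments)
        print(tool_call.function.name, args)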
Vision Capabilities
Process images alongside text using Azure AI Foundry’s vision capabilities:
response = portkey.chat.completions.create(
    model="Llama-4-Scout-17B-16E",  # Use a model that supports vision
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
                    }
                }
            ]
        }
    ],
    max_tokens=500
)

print(response.choices[0].message.content)
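To send a local image instead of a public URL, you can encode it as a base64 data URL. A minimal sketch; the file path is a hypothetical example:
import base64

# Hypothetical local file for illustration
with open("photo.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode("utf-8")

# Drop this into the "content" list in place of the image_url entry above
image_content = {
    "type": "image_url",
    "image_url": {"url": f"data:image/jpeg;base64,{b64}"}
}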
Structured Outputs
Get consistent, parseable responses in specific formats:
import json

response = portkey.chat.completions.create(
    model="cohere-command-a",  # Use a model that supports response formats
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Respond in JSON."},
        {"role": "user", "content": "List the top 3 cloud providers with their main services"}
    ],
    response_format={"type": "json_object"},
    temperature=0
)

print(json.loads(response.choices[0].message.content))
Portkey Gateway Features
Portkey provides advanced gateway features for Azure AI Foundry deployments:
Fallbacks
Create fallback configurations to ensure reliability when working with Azure AI Foundry models:
{
  "strategy": {
    "mode": "fallback"
  },
  "targets": [
    {
      "provider": "@azure-foundry-virtual-key",
      "override_params": {
        "model": "DeepSeek-V3-0324"
      }
    },
    {
      "provider": "@openai-virtual-key",
      "override_params": {
        "model": "gpt-4o"
      }
    }
  ]
}
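To apply a config like this, save it in the Portkey dashboard and attach its ID via the client’s config parameter. A minimal sketch, where "pc-fallback-xxx" is a hypothetical config ID:
from portkey_ai import Portkey

# "pc-fallback-xxx" is a hypothetical config ID from the Portkey dashboard
portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    config="pc-fallback-xxx"
)
# Requests made with this client now follow the fallback strategy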
Load Balancing
Distribute requests across multiple models for optimal performance:
{
  "strategy": {
    "mode": "loadbalance"
  },
  "targets": [
    {
      "provider": "@azure-foundry-virtual-key-1",
      "override_params": {
        "model": "DeepSeek-V3-0324"
      },
      "weight": 0.7
    },
    {
      "provider": "@azure-foundry-virtual-key-2",
      "override_params": {
        "model": "cohere-command-a"
      },
      "weight": 0.3
    }
  ]
}
Conditional Routing
Route requests based on specific conditions like user type or content requirements:
{
  "strategy": {
    "mode": "conditional",
    "conditions": [
      {
        "query": { "metadata.user_type": { "$eq": "premium" } },
        "then": "high-performance-model"
      },
      {
        "query": { "metadata.content_type": { "$eq": "code" } },
        "then": "code-specialized-model"
      }
    ],
    "default": "standard-model"
  },
  "targets": [
    {
      "name": "high-performance-model",
      "provider": "@azure-foundry-virtual-key-1",
      "override_params": {
        "model": "Llama-4-Scout-17B-16E"
      }
    },
    {
      "name": "code-specialized-model",
      "provider": "@azure-foundry-virtual-key-2",
      "override_params": {
        "model": "DeepSeek-V3-0324"
      }
    },
    {
      "name": "standard-model",
      "provider": "@azure-foundry-virtual-key-3",
      "override_params": {
        "model": "cohere-command-a"
      }
    }
  ]
}
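For conditional routing to take effect, each request must carry the metadata your conditions test against. A minimal sketch, assuming the config above is attached to the client and using the SDK’s with_options helper:
response = portkey.with_options(
    metadata={"user_type": "premium"}  # matches the first condition above
).chat.completions.create(
    model="cohere-command-a",  # overridden by the routed target's override_params
    messages=[{"role": "user", "content": "Hello!"}]
)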
Managing Prompts
You can manage all prompts to Azure AI Foundry in the Prompt Library. Once you’ve created and tested a prompt in the library, use the portkey.prompts.completions.create interface to call it from your application.
prompt_completion = portkey.prompts.completions.create(
    prompt_id="Your Prompt ID",
    variables={
        # The variables specified in the prompt
    }
)
Next Steps