Quick Start Guide - Azure AI Foundry¶
This notebook provides a hands-on introduction to Azure AI Foundry. You'll learn how to:
- Initialize the AI Project client
- List available models
- Create a simple completion request
- Handle basic error scenarios
Prerequisites¶
- Completed environment setup from previous notebook
- Azure credentials configured
Import Required Libraries and Setup¶
In the next cell, we'll:
- Import the necessary Azure SDK libraries for authentication and AI Projects
- Import standard Python libraries for environment variables and JSON handling
- Initialize Azure credentials using DefaultAzureCredential
  - This will automatically use your logged-in Azure CLI credentials
  - Alternatively, it can use other authentication methods like environment variables or managed identity
# Import required libraries
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
import os
import json
# Initialize credentials
credential = DefaultAzureCredential()
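If DefaultAzureCredential doesn't pick up the sign-in method you want, you can build an explicit credential chain instead. The snippet below is a minimal, optional sketch (not required for this notebook); it assumes the Azure CLI is installed and/or that the code runs somewhere a managed identity is available.
# Optional sketch: an explicit credential chain instead of DefaultAzureCredential.
# Assumption: the Azure CLI is installed and/or a managed identity is available.
from azure.identity import ChainedTokenCredential, AzureCliCredential, ManagedIdentityCredential

explicit_credential = ChainedTokenCredential(
    AzureCliCredential(),         # tries your `az login` session first
    ManagedIdentityCredential(),  # falls back to a managed identity if present
)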
Initialize AI Project Client¶
Note: Before proceeding, ensure you:
- Copy your .env.local file to .env
- Update the project connection string in your .env file
- Have a Hub and Project already provisioned in Azure AI Foundry
You can find your project connection string in Azure AI Foundry under your project's settings.
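For reference, the .env file should contain entries like the ones below. The values are placeholders; copy your real connection string from the project's settings page and the deployment name from your model deployment (both variables are read by the code later in this notebook).
# Example .env contents (placeholder values - replace with your own)
PROJECT_CONNECTION_STRING="<your-project-connection-string>"
MODEL_DEPLOYMENT_NAME="<your-model-deployment-name>"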
Creating the AI Project Client¶
In the next cell, we'll create an AI Project client using the connection string from our .env file.
Note: This example uses the synchronous client. For higher-performance scenarios, you can also create an asynchronous client by importing asyncio and using the async methods from AIProjectClient.
The client will be used to:
- Connect to your Azure AI Project using the connection string
- Authenticate using Azure credentials
- Enable making inference requests to your deployed models
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

# Create AI Project client
try:
    client = AIProjectClient.from_connection_string(
        conn_str=os.getenv("PROJECT_CONNECTION_STRING"),
        credential=credential
    )
    print("✓ Successfully initialized AIProjectClient")
except Exception as e:
    print(f"× Error initializing client: {str(e)}")
Create a Simple Completion¶
Now that we have an authenticated client, let's use it to make a basic chat completion request. The code below demonstrates how to:
- Get a ChatCompletionsClient from the azure-ai-inference package
- Use it to make a simple completion request
We'll use the MODEL_DEPLOYMENT_NAME from our .env file, making it easy to switch between deployed models without changing code. This could be an Azure OpenAI model, a Microsoft model, or a model from another provider that supports chat completions.
Note: Make sure you have the azure-ai-inference package installed (from requirements.txt)
from azure.ai.inference.models import UserMessage

model_deployment_name = os.getenv("MODEL_DEPLOYMENT_NAME")

try:
    chat_client = client.inference.get_chat_completions_client()
    response = chat_client.complete(
        model=model_deployment_name,
        messages=[UserMessage(content="How to be healthy in one sentence?")]
    )
    print(response.choices[0].message.content)
except Exception as e:
    print(f"An error occurred: {str(e)}")
Create a Simple Agent¶
Let's explore Azure AI Agent Service by creating a simple agent that answers health-related questions.
Azure AI Agent Service is a fully managed service that helps developers build, deploy, and scale AI agents without managing infrastructure. It combines large language models with tools that allow agents to:
- Answer questions using RAG (Retrieval Augmented Generation)
- Perform actions through tool calling
- Automate complex workflows
The code below demonstrates how to:
- Create an agent with a code interpreter tool
- Create a conversation thread
- Send a message requesting BMI analysis
- Process the request and get results
- Save any generated visualizations
The agent will use the model specified in our .env file (MODEL_DEPLOYMENT_NAME) and will have access to a code interpreter tool for creating visualizations. This showcases how agents can combine natural language understanding with computational capabilities.
The visualization will be saved as a PNG file in the same folder as this notebook.
from azure.ai.projects.models import CodeInterpreterTool
try:
    # Create an agent with code interpreter
    code_interpreter = CodeInterpreterTool()
    agent = client.agents.create_agent(
        model=model_deployment_name,
        name="bmi-calculator",
        instructions="You are a health analyst who calculates BMI using US metrics (pounds, feet/inches). Use average US male measurements: 5'9\" (69 inches) and 198 pounds. Create a visualization showing where this BMI falls on the scale.",
        tools=code_interpreter.definitions,
        tool_resources=code_interpreter.resources,
    )

    thread = client.agents.create_thread()

    # Request BMI analysis
    message = client.agents.create_message(
        thread_id=thread.id,
        role="user",
        content="Calculate BMI for an average US male (5'9\", 198 lbs). Create a visualization showing where this BMI falls on the standard BMI scale from 15 to 35. Include the standard BMI categories (Underweight, Normal, Overweight, Obese) in the visualization."
    )

    # Process the request
    run = client.agents.create_and_process_run(thread_id=thread.id, assistant_id=agent.id)

    # Get and save any visualizations
    messages = client.agents.list_messages(thread_id=thread.id)
    for image_content in messages.image_contents:
        file_name = f"bmi_analysis_{image_content.image_file.file_id}.png"
        client.agents.save_file(file_id=image_content.image_file.file_id, file_name=file_name)
        print(f"Analysis saved as: {file_name}")

    # Print the analysis
    if last_msg := messages.get_last_text_message_by_sender("assistant"):
        print(f"Analysis: {last_msg.text.value}")

    # Cleanup
    client.agents.delete_agent(agent.id)
except Exception as e:
    print(f"An error occurred: {str(e)}")