🍏 Observability & Tracing Demo with azure-ai-projects and azure-ai-inference 🍎¶
Welcome to this Health & Fitness-themed notebook, where we'll explore:
- Getting Model Info with an AIProjectClient
- Listing Connections to show how we can manage and check all our resources
- Observability and tracing examples, showing how to set up:
  - Console tracing (OpenTelemetry logs printed to stdout)
  - Azure Monitor tracing (sending your logs to an Application Insights resource)
- Viewing your traces in Azure AI Foundry 🎉
Disclaimer: This is a fun demonstration of AI and observability! Any references to workouts, diets, or health routines in the code or prompts are purely for educational purposes. Always consult a professional for health advice. 🙌
1. Setup & Imports 🛠️¶
In this step, we'll load environment variables (like PROJECT_CONNECTION_STRING), initialize the AIProjectClient, and confirm the client works by making a simple chat completion. These environment variables are typically stored in a .env file or in your shell environment.
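For reference, a hypothetical .env might look like the following. The values are placeholders and the connection string format is purely illustrative; copy the real one from your AI Foundry project's overview page:
# .env (placeholder values, not real)
PROJECT_CONNECTION_STRING="<region>.api.azureml.ms;<subscription-id>;<resource-group>;<project-name>"
MODEL_DEPLOYMENT_NAME="gpt-4o"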
import os
from pathlib import Path  # For cross-platform path handling

from dotenv import load_dotenv
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.inference.models import UserMessage

# The .env file lives in the parent directory of this notebook
notebook_path = Path().absolute()  # Absolute path of the current notebook
parent_dir = notebook_path.parent  # Parent directory
load_dotenv(parent_dir / '.env')   # Load environment variables from the .env file

connection_string = os.environ.get("PROJECT_CONNECTION_STRING")
if not connection_string:
    raise ValueError("🚨 PROJECT_CONNECTION_STRING not found in environment. Please set it in your .env.")

try:
    # Create the AIProjectClient
    project_client = AIProjectClient.from_connection_string(
        credential=DefaultAzureCredential(),
        conn_str=connection_string
    )
    print("✅ Successfully created AIProjectClient")

    # Get a chat completions client and make a request
    with project_client.inference.get_chat_completions_client() as inference_client:
        response = inference_client.complete(
            model=os.environ.get("MODEL_DEPLOYMENT_NAME", "gpt-4o"),  # Model name from env, or a default
            messages=[UserMessage(content="How many feet are in a mile?")]
        )
        print("💡 Response:")
        print(response.choices[0].message.content)
except Exception as e:
    print("❌ Failed to initialize client or get response:", e)
2. List & Inspect Connections 🔌¶
We'll now demonstrate how to list connections in your AI Foundry project. This can help you see all the connected resources, or just a subset (like AZURE_OPEN_AI connections).
Note: We'll just print them out so you can see the details.
from azure.ai.projects.models import ConnectionType

# Note: we deliberately avoid `with project_client:` here, since exiting that
# context manager would close the client, and later cells still need it.

# List all connections
all_conns = project_client.connections.list()
print(f"🔎 Found {len(all_conns)} total connections.")
for idx, c in enumerate(all_conns):
    print(f"{idx+1}) Name: {c.name}, Type: {c.type}, IsDefault: {c.is_default}")

# Filter for the Azure OpenAI type, as an example
aoai_conns = project_client.connections.list(connection_type=ConnectionType.AZURE_OPEN_AI)
print(f"\n🌀 Found {len(aoai_conns)} Azure OpenAI connections:")
for c in aoai_conns:
    print(f"  -> {c.name}")

# Get the default Azure AI Services connection
default_conn = project_client.connections.get_default(
    connection_type=ConnectionType.AZURE_AI_SERVICES,
    include_credentials=False
)
if default_conn:
    print("\n⭐ Default Azure AI Services connection:")
    print(default_conn)
else:
    print("No default connection found for Azure AI Services.")
3. Observability & Tracing 🔍¶
3.1 Console Tracing Example¶
First, we'll enable local console tracing so that OpenTelemetry spans are printed straight to stdout. This is handy for quick debugging before wiring up any external services.
import sys
import os
from azure.ai.inference.models import UserMessage
from opentelemetry import trace  # Also used for custom spans below

# Enable local console tracing so we can see the telemetry in our terminal
project_client.telemetry.enable(destination=sys.stdout)
# A small LLM call example:
try:
    with project_client.inference.get_chat_completions_client() as client:
        prompt_msg = "I'd like to start a simple home workout routine. Any tips?"
        response = client.complete(
            model=os.environ.get("MODEL_DEPLOYMENT_NAME", "some-deployment-name"),
            messages=[UserMessage(content=prompt_msg)]
        )
        print("\n🤖 Response:", response.choices[0].message.content)
except Exception as exc:
    print(f"❌ Chat Completions example failed: {exc}")
3.2 Azure Monitor Tracing Example¶
Now, instead of just console logs, we can push these logs to Application Insights (Azure Monitor) for deeper APM (application performance monitoring) and persistent logs.
In order to do this, ensure you have an Application Insights connection string associated with your AI Foundry project. Then configure your local environment to pull that connection string and set up opentelemetry for remote ingestion.
We'll do a quick demonstration of how to do that (similar to the official sample).
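One more knob worth knowing about: by default, the azure-ai-inference instrumentation redacts message contents in spans, since prompts and completions can contain sensitive data. At the time of writing you can opt in via an environment variable; set it before the telemetry.enable() call below (treat the variable name as an assumption and verify it against the current SDK docs):
import os

# Opt in to recording prompt/completion text in traces (redacted by default)
os.environ["AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED"] = "true"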
import os
from azure.monitor.opentelemetry import configure_azure_monitor
from azure.ai.inference.models import UserMessage

# Enable Azure Monitor tracing if an App Insights connection string is configured
connection_str = project_client.telemetry.get_connection_string()
if connection_str:
    print("🔧 Found App Insights connection string. Configuring...")
    configure_azure_monitor(connection_string=connection_str)
    project_client.telemetry.enable()  # Add optional additional instrumentation

    # A test chat call, which should get logged to Azure Monitor
    try:
        with project_client.inference.get_chat_completions_client() as client:
            prompt_msg = "Any low-impact exercises recommended for knee issues?"
            response = client.complete(
                model=os.environ.get("MODEL_DEPLOYMENT_NAME", "some-deployment-name"),
                messages=[UserMessage(content=prompt_msg)]
            )
            print("\n🤖 Response (logged to App Insights):", response.choices[0].message.content)
    except Exception as exc:
        print(f"❌ Chat Completions with Azure Monitor example failed: {exc}")
else:
    print("No Application Insights connection string is configured in this project.")
4. Wrap-Up & Next Steps 🎉¶
Congrats on exploring:
- Basic usage of AIProjectClient (model info, listing connections)
- Observability with console tracing
- Application Insights-based tracing for deeper logs & APM
Where to go next?
- AI Foundry Portal: Under the Tracing tab, you can see your traces in an easy UI.
- Azure Monitor: Head into the Application Insights resource for advanced metrics, logs, and dashboards.
- azure-ai-evaluation: Evaluate the quality of your LLM outputs, get scoring metrics, or embed it in your CI/CD pipeline (see the sketch below).
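For a taste of that last item, here's a minimal, hypothetical sketch of azure-ai-evaluation usage. The model configuration values are placeholders, and the exact API surface may differ across package versions, so check the package docs before relying on it:
from azure.ai.evaluation import RelevanceEvaluator

# Placeholder model configuration -- point this at a real Azure OpenAI deployment
model_config = {
    "azure_endpoint": "https://<your-resource>.openai.azure.com",
    "api_key": "<your-key>",
    "azure_deployment": "gpt-4o",
}

relevance = RelevanceEvaluator(model_config)
result = relevance(
    query="Any low-impact exercises recommended for knee issues?",
    response="Swimming and cycling are gentle on the knees...",
)
print(result)  # e.g. a relevance score for the response, depending on version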
🍀 Health Reminder: All suggestions from the LLM are for demonstration only. Always consult professionals for health and fitness guidance.
Enjoy building robust, observable GenAI apps! 🏋️‍♂️