🍏 Fitness Fun: Azure Functions + AI Agent Tutorial 🍎¶
In this notebook, we'll explore how to use Azure Functions with the Azure AI Foundry SDKs (azure-ai-projects, azure-ai-inference, azure-ai-evaluation, opentelemetry-sdk, and azure-core-tracing-opentelemetry). We'll demonstrate how to:
- Set up an Azure Function that listens on a storage queue.
- Create an AI Agent that can invoke this function.
- Send a prompt to the agent, which then calls the Azure Function.
- Retrieve the processed result from the output queue.
All with a fun, health-and-fitness-themed example! We'll keep it whimsical, but remember:
⚠️ Important Disclaimer¶
This example is for demonstration purposes only and does not provide genuine medical or health advice. Always consult a professional for real medical or fitness advice.
Prerequisites¶
- Azure Subscription.
- Azure AI Foundry project. (You'll need your PROJECT_CONNECTION_STRING and MODEL_DEPLOYMENT_NAME.)
- Azure Functions environment or local emulator (Azurite), plus storage queue knowledge.
- Python 3.8+ with azure-ai-projects, azure-identity, opentelemetry-sdk, and azure-core-tracing-opentelemetry installed.
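Before going further, it can save debugging time to confirm the settings this notebook relies on are actually present. A minimal preflight sketch (the variable names match the ones used later in this notebook):

```python
import os

# Preflight check: verify the environment variables this notebook relies on.
required = ["PROJECT_CONNECTION_STRING", "MODEL_DEPLOYMENT_NAME", "STORAGE_SERVICE_ENDPOINT"]
missing = [name for name in required if not os.environ.get(name)]

if missing:
    print("❌ Missing environment variables:", ", ".join(missing))
else:
    print("✅ All required environment variables are set.")
```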
Overview¶
We'll do a high-level sequence of events:
- Azure Function is set up to read messages from an input queue and write responses to an output queue.
- An AI Agent is created with an AzureFunctionTool that references these queues.
- The user provides a question or command; the agent decides whether or not to call the function.
- The agent sends a message to the input queue, which triggers the function.
- The Azure Function processes the message and sends a response to the output queue.
- The agent picks up the response from the output queue.
- The user sees the final answer from the agent.
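To make the round trip concrete, here's a minimal in-memory sketch of the same flow, using plain Python deques to stand in for the storage queues (the payload shape mirrors the sample function in the next section; names like foo_function are illustrative):

```python
import json
from collections import deque

# In-memory stand-ins for the input and output storage queues.
input_queue, output_queue = deque(), deque()

def foo_function(raw: bytes) -> None:
    """Stand-in for the queue-triggered Azure Function."""
    payload = json.loads(raw.decode("utf-8"))
    output_queue.append(json.dumps({
        "FooReply": f"This is Foo, responding to: {payload['query']}!",
        "CorrelationId": payload.get("CorrelationId", ""),
    }).encode("utf-8"))

# The agent drops a message on the input queue...
input_queue.append(json.dumps({"query": "Best snack?", "CorrelationId": "abc"}).encode("utf-8"))
# ...the queue trigger fires...
foo_function(input_queue.popleft())
# ...and the agent reads the reply off the output queue.
reply = json.loads(output_queue.popleft().decode("utf-8"))
print(reply)
```

The CorrelationId field is how a reply is matched back to the request that produced it, which matters once multiple calls are in flight.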
1. Azure Function Setup (Example)¶
Below is a snippet of how you'd implement the Azure Function that receives a message from the input queue and posts a result to the output queue.
You can adapt this code to a local or cloud Azure Functions environment. The function's real logic can be anything – let's pretend it returns a comedic "foo-based" answer or some silly "fitness advice" snippet for demonstration.
# This code might live in your Azure Functions project,
# e.g. in a file named __init__.py or similar.
import os
import json
import logging

import azure.functions as func
from azure.storage.queue import QueueClient
from azure.core.pipeline.policies import BinaryBase64EncodePolicy, BinaryBase64DecodePolicy
from azure.identity import DefaultAzureCredential

app = func.FunctionApp()

@app.function_name(name="FooReply")
@app.queue_trigger(
    arg_name="inmsg",
    queue_name="azure-function-foo-input",
    connection="STORAGE_SERVICE_ENDPOINT"  # or connection string setting name
)
def run_foo(inmsg: func.QueueMessage) -> None:
    logging.info("Azure Function triggered with a queue item.")

    # This is the client for the output queue
    out_queue = QueueClient(
        os.environ["STORAGE_SERVICE_ENDPOINT"],  # or read from config
        queue_name="azure-function-tool-output",
        credential=DefaultAzureCredential(),
        message_encode_policy=BinaryBase64EncodePolicy(),
        message_decode_policy=BinaryBase64DecodePolicy()
    )

    # Parse the function-call payload, e.g. {"query": "Hello?", "outputqueueuri": "..."}
    payload = json.loads(inmsg.get_body().decode('utf-8'))
    user_query = payload.get("query", "")

    # Example: We'll return a comedic 'Foo says: <some witty line>'
    result_message = {
        "FooReply": f"This is Foo, responding to: {user_query}! Stay strong 💪!",
        "CorrelationId": payload.get("CorrelationId", "")
    }

    # Put the result on the output queue
    out_queue.send_message(json.dumps(result_message).encode('utf-8'))
    logging.info(f"Sent message: {result_message}")
Notes¶
- The input queue name is azure-function-foo-input.
- The output queue name is azure-function-tool-output.
- We used an environment variable, STORAGE_SERVICE_ENDPOINT, for the queue storage endpoint.
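Because the sample function attaches BinaryBase64EncodePolicy and BinaryBase64DecodePolicy to its QueueClient, message bodies travel as base64 text on the wire. Here's a standard-library-only sketch of roughly what that round trip looks like (the payload values are made up for illustration):

```python
import base64
import json

# An example function-call payload, as the agent would send it.
payload = {"query": "What would foo say about rest days?", "CorrelationId": "demo-123"}

raw = json.dumps(payload).encode("utf-8")              # what send_message() is handed
on_the_wire = base64.b64encode(raw).decode("ascii")    # roughly what the encode policy emits
decoded = json.loads(base64.b64decode(on_the_wire).decode("utf-8"))  # the receiving side
print(decoded)
```

If the sender and receiver disagree about these policies, the function sees base64 gibberish instead of JSON, which is a common first stumbling block with queue bindings.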
2. Notebook Setup¶
Now let's switch back to this notebook environment. We'll:
- Import libraries.
- Initialize AIProjectClient.
- Create the Azure Function tool definition and the Agent.
# We'll do our standard imports
import os
import time
from pathlib import Path
from dotenv import load_dotenv
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import AzureFunctionTool, AzureFunctionStorageQueue, MessageRole
# Load env variables from .env in parent dir
notebook_path = Path().absolute()
parent_dir = notebook_path.parent
load_dotenv(parent_dir / '.env')
# Create AI Project Client
try:
    project_client = AIProjectClient.from_connection_string(
        credential=DefaultAzureCredential(exclude_managed_identity_credential=True, exclude_environment_credential=True),
        conn_str=os.environ["PROJECT_CONNECTION_STRING"],
    )
    print("✅ Successfully initialized AIProjectClient")
except Exception as e:
    print(f"❌ Error initializing AIProjectClient: {e}")
Create Agent with Azure Function Tool¶
We'll define a tool that references our function (registered as FooReply in the sample, exposed to the agent as foo) and the input and output queues. In this example, we'll store the queue endpoint in an environment variable called STORAGE_SERVICE_ENDPOINT.
You can adapt it to your own naming scheme. The agent instructions tell it to use the function whenever it sees certain keywords, or you could just let it call the function on its own.
try:
    storage_endpoint = os.environ["STORAGE_SERVICE_ENDPOINT"]
except KeyError:
    print("❌ Please ensure STORAGE_SERVICE_ENDPOINT is set in your environment.")
    storage_endpoint = None
agent = None
if storage_endpoint:
    # Create the AzureFunctionTool object
    azure_function_tool = AzureFunctionTool(
        name="foo",
        description="Get comedic or silly advice from 'Foo'.",
        parameters={
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "The question to ask Foo."},
                "outputqueueuri": {"type": "string", "description": "The output queue URI."}
            },
        },
        input_queue=AzureFunctionStorageQueue(
            queue_name="azure-function-foo-input",
            storage_service_endpoint=storage_endpoint,
        ),
        output_queue=AzureFunctionStorageQueue(
            queue_name="azure-function-tool-output",
            storage_service_endpoint=storage_endpoint,
        ),
    )

    # Construct the agent with the function tool attached.
    # Note: we don't wrap this in `with project_client:` — that would close
    # the client on exit and break the run and cleanup cells below.
    agent = project_client.agents.create_agent(
        model=os.environ["MODEL_DEPLOYMENT_NAME"],
        name="azure-function-agent-foo",
        instructions=(
            "You are a helpful health and fitness support agent.\n"
            "If the user says 'What would foo say?' then call the foo function.\n"
            "Always specify the outputqueueuri as '" + storage_endpoint + "/azure-function-tool-output'.\n"
            "Respond with 'Foo says: <response>' after the tool call."
        ),
        tools=azure_function_tool.definitions,
    )
    print(f"🎉 Created agent, agent ID: {agent.id}")
else:
    print("Skipping agent creation, no storage_endpoint.")
3. Test the Agent¶
Now let's simulate a user message that triggers the function call. We'll create a conversation thread, post a user question that includes "What would foo say?", then run the agent.
The Agent Service will place a message on the azure-function-foo-input queue. The function will handle it and place a response in azure-function-tool-output. The agent will pick that up automatically and produce a final answer.
def run_foo_question(user_question: str, agent_id: str):
    # 1) Create a new thread
    thread = project_client.agents.create_thread()
    print(f"📝 Created thread, thread ID: {thread.id}")

    # 2) Create a user message
    message = project_client.agents.create_message(
        thread_id=thread.id,
        role="user",
        content=user_question
    )
    print(f"💬 Created user message, ID: {message.id}")

    # 3) Create and process an agent run
    run = project_client.agents.create_and_process_run(
        thread_id=thread.id,
        assistant_id=agent_id
    )
    print(f"🤖 Run finished with status: {run.status}")
    if run.status == "failed":
        print(f"Run failed: {run.last_error}")

    # 4) Retrieve messages
    messages = project_client.agents.list_messages(thread_id=thread.id)
    print("\n🗣️ Conversation:")
    for m in reversed(messages.data):  # oldest first
        msg_str = m.content[-1].text.value if m.content else ""
        print(f"{m.role.upper()}: {msg_str}\n")

    return thread, run

# If the agent was created, let's test it!
if agent:
    my_thread, my_run = run_foo_question(
        user_question="What is the best post-workout snack? What would foo say?",
        agent_id=agent.id
    )
4. Cleanup¶
We'll remove the agent when done. In real scenarios, you might keep your agent for repeated usage.
if agent:
    try:
        project_client.agents.delete_agent(agent.id)
        print(f"🗑️ Deleted agent: {agent.name}")
    except Exception as e:
        print(f"❌ Error deleting agent: {e}")
🎉 Congratulations!¶
You just saw how to combine Azure Functions with AI Agent Service to create a dynamic, queue-driven workflow. In this whimsical example, your function returned comedic "Foo says..." lines, but in real applications, you can harness the power of Azure Functions to run anything from database lookups to complex calculations, returning the result seamlessly to your AI agent.
Next Steps¶
- Add OpenTelemetry to gather end-to-end tracing across your function and agent.
- Incorporate an evaluation pipeline with azure-ai-evaluation to measure how well your agent + function workflow addresses user queries.
- Explore parallel function calls or more advanced logic in your Azure Functions.
Happy coding and stay fit! 🤸