# 🍎 Chat Completions with AIProjectClient 🍏
In this notebook, we'll demonstrate how to perform Chat Completions using the Azure AI Foundry SDK. We'll combine the `azure-ai-projects` and `azure-ai-inference` packages to:

- Initialize an `AIProjectClient`.
- Obtain a Chat Completions client to do direct LLM calls.
- Use a prompt template to add system context.
- Send user prompts in a health & fitness theme.
## 🏋️ Health & Fitness Disclaimer

This example is for demonstration only and does not provide real medical advice. Always consult a professional for health or medical questions.
## Prerequisites

- Python 3.8+
- `azure-ai-projects`, `azure-ai-inference`, `azure-identity`
- A `.env` file containing `PROJECT_CONNECTION_STRING=<your-project-connection-string>` and `MODEL_DEPLOYMENT_NAME=<your-model-deployment-name>`
Let's get started! 🎉
## 1. Initial Setup

Load environment variables, create an `AIProjectClient`, and fetch a `ChatCompletionsClient`. We'll also define a prompt template to show how you might structure a system message.
import os
from pathlib import Path

from dotenv import load_dotenv
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.inference.models import UserMessage, SystemMessage  # for chat messages

# Load environment variables from the parent directory's .env file
notebook_path = Path().absolute()
parent_dir = notebook_path.parent
load_dotenv(parent_dir / '.env')

# Retrieve settings from the environment
connection_string = os.environ.get("PROJECT_CONNECTION_STRING")
model_deployment = os.environ.get("MODEL_DEPLOYMENT_NAME")

try:
    # Create the project client
    project_client = AIProjectClient.from_connection_string(
        credential=DefaultAzureCredential(),
        conn_str=connection_string,
    )
    print("✅ Successfully created AIProjectClient")
except Exception as e:
    print("❌ Error initializing client:", e)
## Prompt Template

We'll define a quick system message that sets the context as a friendly, disclaimer-providing fitness assistant.
    SYSTEM PROMPT (template):
    You are FitChat GPT, a helpful fitness assistant.
    Always remind users: I'm not a medical professional.
    Be friendly, provide general advice.
    ...

We'll then pass the user's content as a user message.
# We'll define a function that runs chat completions with a system prompt & user prompt
def chat_with_fitness_assistant(user_input: str):
    """Use chat completions to get a response from our LLM, with system instructions."""
    # Our system message template
    system_text = (
        "You are FitChat GPT, a friendly fitness assistant.\n"
        "Always remind users: I'm not a medical professional.\n"
        "Answer with empathy and disclaimers."
    )

    # We'll open the chat completions client
    with project_client.inference.get_chat_completions_client() as chat_client:
        # Construct messages: system + user
        system_message = SystemMessage(content=system_text)
        user_message = UserMessage(content=user_input)

        # Send the request
        response = chat_client.complete(
            model=model_deployment,
            messages=[system_message, user_message]
        )

        return response.choices[0].message.content  # simplest approach: get top choice's content

print("Defined a helper function to do chat completions.")
## 2. Try Chat Completions 🎉

We'll call the function with a user question about health or fitness and see the result. Feel free to modify the question or run it multiple times!
user_question = "How can I start a beginner workout routine at home?"
reply = chat_with_fitness_assistant(user_question)
print("🗣️ User:", user_question)
print("🤖 Assistant:", reply)
## 3. Another Example: Prompt Template with Fill-Ins 📝

We can go a bit further and add placeholders to the system message. For instance, imagine we have a `user_name` or a `goal`. We'll show a minimal example.
def chat_with_template(user_input: str, user_name: str, goal: str):
    # Construct a system template with placeholders
    system_template = (
        "You are FitChat GPT, an AI personal trainer for {name}.\n"
        "Your user wants to achieve: {goal}.\n"
        "Remind them you're not a medical professional. Offer friendly advice."
    )

    # Fill in the placeholders
    system_prompt = system_template.format(name=user_name, goal=goal)

    with project_client.inference.get_chat_completions_client() as chat_client:
        system_msg = SystemMessage(content=system_prompt)
        user_msg = UserMessage(content=user_input)
        response = chat_client.complete(
            model=model_deployment,
            messages=[system_msg, user_msg]
        )
        return response.choices[0].message.content

# Let's try it out
templated_user_input = "What kind of home exercise do you recommend for a busy schedule?"
assistant_reply = chat_with_template(
    templated_user_input,
    user_name="Jordan",
    goal="increase muscle tone and endurance"
)

print("🗣️ User:", templated_user_input)
print("🤖 Assistant:", assistant_reply)
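One caveat with `str.format` templates: a placeholder with no matching argument raises `KeyError` at call time. As a small defensive sketch (`template_fields` is a hypothetical helper, built only on the standard library's `string.Formatter`), you can list a template's placeholders up front and validate your inputs before formatting:

```python
import string

def template_fields(template: str):
    """Return the set of placeholder names used in a str.format template."""
    return {field for _, field, _, _ in string.Formatter().parse(template) if field}

system_template = (
    "You are FitChat GPT, an AI personal trainer for {name}.\n"
    "Your user wants to achieve: {goal}.\n"
)
print(sorted(template_fields(system_template)))  # ['goal', 'name']
```

Checking `template_fields(...)` against the keys you plan to pass lets you fail fast with a clear message instead of a mid-request `KeyError`.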
## 4. Tips & Cleanup

Tips:

- Add OpenTelemetry for end-to-end tracing of your calls.
- Evaluate your chat completions with `azure-ai-evaluation` to see how well your LLM is performing.
- Combine system + user + possibly assistant messages if you want to pass prior conversation context.
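The last tip, passing prior conversation context, can be sketched with a small helper. This is a minimal sketch and not part of the SDK: `build_conversation` is a hypothetical function, and it builds plain role/content dicts, which the chat client's `complete` method accepts alongside the typed `SystemMessage` / `UserMessage` / `AssistantMessage` classes:

```python
def build_conversation(system_text, history, new_user_input):
    """Build a chat-completions message list that includes prior turns.

    `history` is a list of (user_text, assistant_text) pairs from earlier turns.
    """
    messages = [{"role": "system", "content": system_text}]
    for user_text, assistant_text in history:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": new_user_input})
    return messages

# Example: the second turn of a conversation
history = [
    ("How can I start a beginner workout routine at home?",
     "I'm not a medical professional, but here's a gentle starting plan..."),
]
messages = build_conversation(
    "You are FitChat GPT, a friendly fitness assistant.",
    history,
    "How many rest days should I take per week?",
)
print([m["role"] for m in messages])  # ['system', 'user', 'assistant', 'user']
```

The resulting list can be passed as `messages=...` to `chat_client.complete`, so the model sees the earlier exchange when answering the new question.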
## No Cleanup Required

Because we're using a direct chat completions client, we haven't created any persistent resources that need to be torn down. In more advanced scenarios, you'd manage resources like Agents, Threads, or Tools.
## 🎉 Congratulations!

You've successfully performed chat completions with Azure AI Foundry's `AIProjectClient` and `azure-ai-inference`. You've also seen how to incorporate prompt templates to tailor your system instructions.
Keep exploring, and stay healthy & fit! 🏋️