🏋️ Fun with Text and Image Embeddings 🍎¶
Welcome to our Health & Fitness embeddings notebook! In this tutorial, we'll show you how to:
- Initialize an `AIProjectClient` to access your Azure AI Foundry project.
- Embed text using `azure-ai-inference` (text embeddings).
- Embed images using `azure-ai-inference` (image embeddings).
- Generate a health-themed image (simple example) and display it.
- Use a prompt template for extra fun.
We'll do it all with a playful 🍏 health theme. Let's jump in!
Disclaimer: We're not offering medical advice. This is purely educational & fun.
1. Setup & Environment¶
We'll import our libraries and load environment variables for:
- `PROJECT_CONNECTION_STRING`: your project connection string.
- `MODEL_DEPLOYMENT_NAME`: the name of your model deployment.
Install these packages if you haven't already:
pip install azure-ai-projects azure-ai-inference azure-identity
Let's begin! 🚀
import os
from dotenv import load_dotenv
from pathlib import Path
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.inference.models import ImageEmbeddingInput
# Load environment variables (optional, if you keep them in a .env file)
load_dotenv()
# Retrieve from environment or fallback
PROJECT_CONNECTION_STRING = os.environ.get("PROJECT_CONNECTION_STRING", "<your-connection-string>")
MODEL_DEPLOYMENT_NAME = os.environ.get("MODEL_DEPLOYMENT_NAME", "<your-deployment-name>")
# Initialize project client
# Initialize project client
try:
    project_client = AIProjectClient.from_connection_string(
        credential=DefaultAzureCredential(),
        conn_str=PROJECT_CONNECTION_STRING,
    )
    print("🎉 Successfully created AIProjectClient")
except Exception as e:
    print("❌ Error creating AIProjectClient:", e)
2. Text Embeddings¶
We'll call `get_embeddings_client()` to retrieve the embeddings client from our `AIProjectClient`. Then we embed some fun health-themed phrases:
- "🍎 An apple a day keeps the doctor away"
- "🏋️ 15-minute HIIT workout routine"
- "🧘 Mindful breathing exercises"
This returns numeric vectors representing each phrase in semantic space.
text_phrases = [
"An apple a day keeps the doctor away 🍎",
"Quick 15-minute HIIT workout routine 🏋️",
"Mindful breathing exercises 🧘"
]
try:
    with project_client.inference.get_embeddings_client() as embed_client:
        response = embed_client.embed(
            model=MODEL_DEPLOYMENT_NAME,
            input=text_phrases
        )
        for item in response.data:
            vec = item.embedding
            sample_str = f"[{vec[0]:.4f}, {vec[1]:.4f}, ..., {vec[-2]:.4f}, {vec[-1]:.4f}]"
            print(f"Sentence {item.index}: '{text_phrases[item.index]}':\n"
                  f"  Embedding length={len(vec)}\n"
                  f"  Sample: {sample_str}\n")
except Exception as e:
    print("❌ Error embedding text:", e)
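Once you have the vectors, a common next step is comparing them with cosine similarity. Here's a minimal, self-contained sketch; the toy 4-dimensional vectors stand in for real `response.data[i].embedding` values (which typically have hundreds or thousands of dimensions), but the math is identical.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for real embeddings of our three phrases
apple_vec   = [0.9, 0.1, 0.0, 0.2]
workout_vec = [0.1, 0.8, 0.5, 0.0]
hiit_vec    = [0.2, 0.9, 0.4, 0.1]

# Related phrases (two workout lines) should score higher together
print(cosine_similarity(workout_vec, hiit_vec))
print(cosine_similarity(apple_vec, workout_vec))
```

With real embeddings you'd expect the two workout phrases to land closer together in semantic space than either does to the apple proverb.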
3. Prompt Template Example 📝¶
Even though our focus is embeddings, let's quickly show how you'd integrate a prompt template. Imagine we want to embed user text but prepend a little system context like “You are a helpful fitness coach.” We'll do that here in a minimal way.
# A basic prompt template (system-style) we'll prepend to user text.
TEMPLATE_SYSTEM = (
    "You are HealthFitGPT, a fitness guidance model.\n"
    "Please focus on healthy advice and disclaim you're not a doctor.\n\n"
    "User message:"  # We'll append the user message after this.
)

def embed_with_template(user_text):
    """Embed user text with a system template in front."""
    content = TEMPLATE_SYSTEM + user_text
    with project_client.inference.get_embeddings_client() as embed_client:
        rsp = embed_client.embed(
            model=MODEL_DEPLOYMENT_NAME,
            input=[content]
        )
        return rsp.data[0].embedding

sample_user_text = "Can you suggest a quick home workout for busy moms?"
embedding_result = embed_with_template(sample_user_text)
print("Embedding length:", len(embedding_result))
print("First few dims:", embedding_result[:8])
4. Image Embeddings¶
Now let's embed an image! We'll treat the image as input to a model that returns a vector. The snippet below references a file named `gbb.jpeg` (replace with any local path). In a real scenario, this could be your own photo or a user-uploaded image.
image_file_path = "gbb.jpeg" # Replace with your local image path!
try:
    with project_client.inference.get_image_embeddings_client() as img_embed_client:
        # Construct input for the image embeddings call
        img_input = ImageEmbeddingInput.load(image_file=image_file_path, image_format="jpg")
        resp = img_embed_client.embed(
            model=MODEL_DEPLOYMENT_NAME,
            input=[img_input]
        )
        for item in resp.data:
            vec = item.embedding
            snippet = f"[{vec[0]:.4f}, {vec[1]:.4f}, ..., {vec[-2]:.4f}, {vec[-1]:.4f}]"
            print(f"Image index={item.index}, length={len(vec)}, sample={snippet}")
except Exception as e:
    print("❌ Error embedding image:", e)
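Image vectors are compared the same way as text vectors. One practical tip: L2-normalize each embedding once when you store it, so later comparisons reduce to a plain dot product. A small sketch with a toy vector (the `l2_normalize` helper is ours, not part of the SDK):

```python
import math

def l2_normalize(vec):
    """Scale a vector to unit length so cosine similarity becomes a dot product."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

# Toy stand-in for resp.data[0].embedding
img_vec = [3.0, 4.0]          # length 5 before normalization
unit = l2_normalize(img_vec)  # unit length afterwards

# A unit vector dotted with itself gives 1.0
print(unit, sum(x * y for x, y in zip(unit, unit)))
```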
5. Generate a Health-Related Image 🏃¶
Though distinct from embeddings, let's have some fun and generate a small health-themed image using the same `project_client`. The actual method name may vary depending on your installed version of `azure-ai-inference`; here we'll pretend there's a `get_image_generation_client()` method and pass a simple prompt describing a healthy scenario.
We'll then display the returned image inline. (In real usage, you'd handle the raw bytes, save them, or pass them back to your application.)
import base64
from IPython.display import Image, display

# This block is pseudo-code / example, depending on your actual method names:
def generate_health_image(prompt="A simple cartoon of a happy person jogging outdoors"):
    try:
        with project_client.inference.get_image_generation_client() as gen_client:
            gen_response = gen_client.generate(
                model=MODEL_DEPLOYMENT_NAME,
                prompt=prompt,
                # possibly other params like size, etc.
            )
            # We'll assume gen_response has a base64 image or raw bytes in .data[0].
            # This is just an example structure:
            if gen_response.data:
                image_bytes = gen_response.data[0].binary  # or .content, .image_bytes, etc.
                return image_bytes
    except Exception as e:
        print("❌ Error generating image:", e)
    return None

# Let's do a quick run
generated_img_data = generate_health_image()
if generated_img_data:
    display(Image(generated_img_data))
else:
    print("(No generated image data. This may be stub code for demonstration.)")
6. Wrap-Up & Next Steps¶
🎉 We've shown how to:
- Set up `AIProjectClient`.
- Get text embeddings.
- Get image embeddings.
- Generate a health-themed image.
- Use a prompt template for some system context.
Where to go next?
- Explore `azure-ai-evaluation` for evaluating your embeddings.
- Use `azure-core-tracing-opentelemetry` for end-to-end telemetry.
- Build out a retrieval pipeline to compare similarity of embeddings.
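That last idea, a retrieval pipeline, can be sketched in a few lines: store (text, vector) pairs, then rank them against a query vector by cosine similarity. The toy 3-dimensional vectors below stand in for real `embed_client.embed(...)` results.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# In a real pipeline, these vectors would come from the embeddings client
corpus = [
    ("An apple a day keeps the doctor away", [0.9, 0.1, 0.1]),
    ("Quick 15-minute HIIT workout routine", [0.1, 0.9, 0.3]),
    ("Mindful breathing exercises",          [0.2, 0.3, 0.9]),
]

def search(query_vec, docs, top_k=2):
    """Return the top_k doc texts ranked by cosine similarity to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

query = [0.15, 0.85, 0.35]  # pretend this embeds "home cardio session"
print(search(query, corpus))  # the HIIT phrase ranks first
```

Swap the toy vectors for real embeddings (and a vector database for the list) and you have the skeleton of a semantic search feature.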
Have fun experimenting, and remember: always consult real health professionals for medical advice!