# Context Managers

Using Catsu with context managers for automatic cleanup
Catsu supports Python context managers (the `with` statement) for automatic resource cleanup.
## Why Use Context Managers?
Context managers provide:
- Automatic cleanup of resources
- Proper connection closing
- Exception-safe code
- Cleaner, more Pythonic syntax
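These guarantees come from Python's context-manager protocol: `__exit__` runs whether the block finishes normally or raises. A minimal sketch with a hypothetical `DummyClient` (illustrative only, not Catsu's actual implementation):

```python
# Illustrative sketch of the context-manager protocol a client-style
# object implements. DummyClient is hypothetical, not part of Catsu.
class DummyClient:
    def __init__(self):
        self.closed = False

    def __enter__(self):
        return self  # the object bound by "as client"

    def __exit__(self, exc_type, exc_value, traceback):
        self.close()   # runs on normal exit AND when the body raises
        return False   # do not suppress the exception

    def close(self):
        self.closed = True


client = DummyClient()
try:
    with client:
        raise RuntimeError("request failed")
except RuntimeError:
    pass

print(client.closed)  # → True: cleanup ran despite the exception
```

Because `__exit__` is called even on the exception path, resources are released without any explicit `try`/`finally` in your own code.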
## Synchronous Context Manager

```python
from catsu import Client

with Client() as client:
    response = client.embed(
        "openai:text-embedding-3-small",
        "Hello, world!"
    )
    print(response.embeddings)

# Client is automatically cleaned up here
```

## Asynchronous Context Manager
```python
import asyncio
from catsu import Client

async def main():
    async with Client() as client:
        response = await client.aembed(
            "openai:text-embedding-3-small",
            "Hello, async world!"
        )
        print(response.embeddings)
    # Client is automatically cleaned up here

asyncio.run(main())
```

## Configuration with Context Managers
You can still pass configuration parameters:
```python
with Client(max_retries=5, timeout=60) as client:
    response = client.embed("openai:text-embedding-3-small", "Text")
```

## Multiple Requests
```python
with Client() as client:
    # Process multiple requests
    response1 = client.embed("openai:text-embedding-3-small", "Text 1")
    response2 = client.embed("voyageai:voyage-3", "Text 2")
    response3 = client.embed("cohere:embed-v4.0", "Text 3")

    total_tokens = sum([
        response1.usage.tokens,
        response2.usage.tokens,
        response3.usage.tokens,
    ])
    print(f"Total tokens: {total_tokens}")
```

## Async with asyncio.gather()
```python
import asyncio
from catsu import Client

async def main():
    async with Client() as client:
        # Parallel requests inside one context manager
        responses = await asyncio.gather(
            client.aembed("openai:text-embedding-3-small", "Query 1"),
            client.aembed("openai:text-embedding-3-small", "Query 2"),
            client.aembed("openai:text-embedding-3-small", "Query 3"),
        )
        for i, response in enumerate(responses):
            print(f"Query {i+1}: {len(response.embeddings[0])} dimensions")

asyncio.run(main())
```

## When to Use Context Managers
Use context managers when:
- You want automatic cleanup
- You are working with limited-scope operations
- You need exception-safe code
- You want to follow Python best practices
Regular initialization is fine for:
- Long-lived applications
- Reusing the same client across multiple functions
- Cases where cleanup isn't critical
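For long-lived applications that still want deterministic cleanup, the standard library's `contextlib.ExitStack` can hold a context manager open across function boundaries and close it once at shutdown. A sketch using a stand-in `DummyClient` (hypothetical, not part of Catsu):

```python
# Sketch: contextlib.ExitStack keeps a context manager open for the
# lifetime of an application and guarantees cleanup when the stack is
# closed. DummyClient is a hypothetical stand-in for any context-managed
# client, not Catsu's actual class.
from contextlib import ExitStack

class DummyClient:
    def __init__(self):
        self.closed = False
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        self.closed = True
        return False

stack = ExitStack()
client = stack.enter_context(DummyClient())  # entered, stays open

# ... the client can now be reused across many functions ...

stack.close()  # run once at application shutdown
print(client.closed)  # → True
```

This gives long-lived code the same cleanup guarantee as a `with` block without confining the client to a single lexical scope.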
## Best Practices

```python
# Good: context manager for scoped operations
def embed_batch(texts):
    with Client() as client:
        return client.embed("openai:text-embedding-3-small", texts)

# Also good: reuse one client for multiple operations
client = Client()

def embed_query(text):
    return client.embed("voyageai:voyage-3", text, input_type="query")

def embed_document(text):
    return client.embed("voyageai:voyage-3", text, input_type="document")
```

## Next Steps
- `embed()` Method - Synchronous embedding generation
- `aembed()` Method - Asynchronous embedding generation
- Best Practices - Optimize your usage