Welcome to Catsu
High-performance embeddings client for multiple providers

Catsu is a unified, Rust-powered client for accessing multiple embedding providers through a single, consistent API. It supports 11 major embedding providers and over 60 models, providing:
- Unified Interface: Single API for all providers
- High Performance: Rust core with Python bindings
- Automatic Retry Logic: Built-in exponential backoff
- Cost Tracking: Automatic usage and cost calculation
- Async Support: Both sync and async methods
- NumPy Integration: Convert embeddings to NumPy arrays (async and NumPy usage are sketched after the quick example)
Quick Example
```python
from catsu import Client

# Initialize the client (reads API keys from environment)
client = Client()

# Generate embeddings
response = client.embed(
    "openai:text-embedding-3-small",
    ["Hello, embeddings!"]
)

# Access your embeddings
embedding = response.embeddings[0]  # List of floats
print(f"Embedding: {embedding[:5]}...")  # First 5 dimensions
print(f"Dimensions: {response.dimensions}")

# Check usage
print(f"Tokens used: {response.usage.tokens}")
```
Get Started
- Installation: Install Catsu and configure API keys
- Quick Start: Generate your first embeddings in minutes
- Client API: Learn about the Client class and methods
- Providers: Explore all 11 supported providers
Why Catsu?
The world of embedding API clients is broken: provider-specific libraries are inconsistent, universal clients don't focus on embeddings, and there's no single source of truth for model metadata. Catsu fixes this with a clean, consistent interface across all major providers while handling retries, error handling, and cost tracking for you.
Ready to get started? Head to the Installation page!
