ai · infrastructure · python · litellm · ollama
February 2026 · Active

LiteLLM Multi-Provider Router

Centralized AI model routing proxy with automatic fallback across Gemini, DeepSeek, Claude, and local Ollama. Handles rate limits, budget caps, and provider outages transparently. Every AI tool in my stack hits this before going to the cloud.
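The routing behavior described above maps naturally onto a LiteLLM proxy config. The sketch below is illustrative, not the project's actual config: model names, environment-variable keys, and limits are placeholders, and the exact settings keys (`fallbacks`, `max_budget`, `budget_duration`) follow LiteLLM's documented proxy config conventions.

```yaml
# config.yaml — hypothetical sketch of a LiteLLM proxy with fallback routing
model_list:
  - model_name: default
    litellm_params:
      model: gemini/gemini-1.5-flash
      api_key: os.environ/GEMINI_API_KEY
      rpm: 60                              # per-deployment rate limit
  - model_name: deepseek
    litellm_params:
      model: deepseek/deepseek-chat
      api_key: os.environ/DEEPSEEK_API_KEY
  - model_name: claude
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: local
    litellm_params:
      model: ollama/llama3                 # local Ollama, no API key needed
      api_base: http://localhost:11434

litellm_settings:
  # on rate limit or provider outage, retry down this list in order
  fallbacks: [{"default": ["deepseek", "claude", "local"]}]
  max_budget: 25                           # spend cap in USD
  budget_duration: 30d                     # budget resets monthly

general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY  # clients authenticate to the proxy
```

Client tools then point their OpenAI-compatible base URL at the proxy, so fallback and budget enforcement happen in one place rather than in every tool.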

Built with

Python
LiteLLM