Ishaan

@ishaan_jaff

Followers: 3K · Following: 5K · Media: 529 · Statuses: 2K

Co-Founder LiteLLM (YC W23) - Python SDK & LLM Gateway to Call 100+ LLMs in 1 format, set Budgets https://t.co/nXsBde05K7

San Francisco
Joined November 2017
@ishaan_jaff
Ishaan
2 years
🚅 LiteLLM v1.2.0 🚅 Use Azure, OpenAI, Cohere, Anyscale, Anthropic, Hugging Face - 100+ LLMs as a drop-in replacement for OpenAI. ⚡️ Use LiteLLM to 20x your rate limits using load balancing + queuing.
github.com
Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq] - BerriAI/litellm
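A minimal sketch of the drop-in pattern and the load-balancing Router described in the tweet above, using the litellm Python SDK; the model names, keys, and Azure deployment details are placeholders, not a tested configuration.

```python
# Sketch only: model ids, keys, and endpoints below are placeholders.
import os
from litellm import completion, Router

# Drop-in replacement: same OpenAI-style call shape, provider picked from the
# model prefix (needs the matching provider API key in the environment).
resp = completion(
    model="anthropic/claude-3-haiku-20240307",  # or "azure/<deployment>", "huggingface/<repo>", ...
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)

# Load balancing: two deployments registered under one model_name; the Router
# spreads traffic across them (the "20x your rate limits" idea).
router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "azure/my-gpt-35-deployment",            # placeholder deployment
                "api_key": os.environ.get("AZURE_API_KEY", ""),
                "api_base": "https://my-endpoint.openai.azure.com",
                "api_version": "2024-02-15-preview",
            },
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "openai/gpt-3.5-turbo",
                "api_key": os.environ.get("OPENAI_API_KEY", ""),
            },
        },
    ]
)
resp = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)
```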
@ishaan_jaff
Ishaan
9 hours
- [Docs] Add details on when to use specific health endpoints.
- [Bug Fix] Fix setting models on the fallbacks UI.
@ishaan_jaff
Ishaan
9 hours
- [Bug Fix] Gemini CLI - support for using tools with Gemini CLI (h/t Przemek Pietrzkiewicz).
- [Feat] Background Health Checks - allow disabling background health checks for a specific (h/t Andrés Carrillo López).
- [Bug Fix] Support for using Vertex AI models with Gemini CLI.
@ishaan_jaff
Ishaan
1 day
[Bug Fix] Infra - ensure that stale Prisma clients disconnect their DB connections.
@ishaan_jaff
Ishaan
1 day
- [Feat] v2 updates - track DAU, WAU, MAU for coding tool usage + show daily usage per user.
- [Feat] UI + backend - add a tab for user agent activity.
- [Feat] Allow redacting message / response content for specific logging integrations - DD LLM Observability.
@ishaan_jaff
Ishaan
2 days
- [Bug Fix] Gemini CLI - the Gemini custom API request had an incorrect authorization format.
- [Infra] Loosen MCP Python version restrictions.
- [Feat] MLflow Logging - allow adding tags for MLflow logging requests.
@ishaan_jaff
Ishaan
3 days
- [Bug Fix] Pass-through logging handler, Vertex AI - ensure multimodal embedding responses are logged.
- [Feat] Add cost tracking support for Google AI Studio image generation.
@ishaan_jaff
Ishaan
3 days
- [Bug Fix] gemini-2.5-flash with the merge_reasoning_content_in_choices parameter did not work (h/t Przemek Pietrzkiewicz).
- [Docs] OpenWebUI - show how to include reasoning content for Gemini models.
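For context, a hedged sketch of the parameter named in that fix: merge_reasoning_content_in_choices comes from the item above, while the model id and the reasoning_effort value are illustrative assumptions rather than a confirmed configuration.

```python
# Hedged sketch: merge_reasoning_content_in_choices is taken from the fix above;
# the model id and reasoning_effort value are illustrative assumptions.
import litellm

resp = litellm.completion(
    model="gemini/gemini-2.5-flash",
    messages=[{"role": "user", "content": "Briefly: why is the sky blue?"}],
    reasoning_effort="low",                    # assumption: request reasoning output
    merge_reasoning_content_in_choices=True,   # fold reasoning into message.content
)
print(resp.choices[0].message.content)
```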
@ishaan_jaff
Ishaan
7 days
- [Feat] UI - allow adding a LiteLLM Auto Router on the UI.
- [Feat] Edit Auto Router settings on the UI.
- [LLM Translation] Bug fix for Anthropic tool calling.
- [Feat] Backend Router - add an Auto Router powered by semantic-router.
@ishaan_jaff
Ishaan
7 days
Get started with the Auto Router here (docs below). Other improvements on this release 👇
docs.litellm.ai
LiteLLM can auto select the best model for a request based on rules you define.
@ishaan_jaff
Ishaan
7 days
Today we're launching the @LiteLLM Auto Router - LiteLLM can now auto-select the best model for a given request. You define a set of keywords that route to a specific model, and LiteLLM picks the best model for each request (powered by @AurelioAI_).
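To make the keyword-to-model idea concrete, here is a toy, purely illustrative router in plain Python. It is not the LiteLLM Auto Router API (that is configured on the proxy and powered by semantic-router; see docs.litellm.ai), and every model name and keyword here is hypothetical.

```python
# Toy illustration of "define keywords -> route to a specific model".
# Not the LiteLLM Auto Router API; all names below are hypothetical.
from typing import Dict, List

ROUTES: Dict[str, List[str]] = {
    "claude-3-5-sonnet": ["code", "refactor", "stack trace"],
    "gpt-4o-mini": ["summarize", "translate", "rewrite"],
}
DEFAULT_MODEL = "gpt-4o"

def pick_model(prompt: str) -> str:
    """Return the first model whose keywords appear in the prompt."""
    text = prompt.lower()
    for model, keywords in ROUTES.items():
        if any(kw in text for kw in keywords):
            return model
    return DEFAULT_MODEL

print(pick_model("Please refactor this function"))  # -> claude-3-5-sonnet
print(pick_model("Summarize this article"))         # -> gpt-4o-mini
print(pick_model("What's the weather like?"))       # -> gpt-4o
```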
@ishaan_jaff
Ishaan
9 days
- [Feat] Add Recraft API - Image Edits Support.
@ishaan_jaff
Ishaan
9 days
- [Feat] Track cost + add tags for health checks done by the LiteLLM Proxy (h/t Andrés Carrillo López).
- [Bug Fix] Azure Key Vault not in image - add azure-keyvault==4.2.0 to the Docker image (h/t Ryan McLaughlin).
- [Feat] Add cost tracking for new vertex_ai/llama-3 API models.
@ishaan_jaff
Ishaan
10 days
- [QA] Allow viewing redacted standard callback dynamic params.
- [Docs] LiteLLM load test benchmarks from the latest release.
docs.litellm.ai
Benchmarks for LiteLLM Gateway (Proxy Server) tested against a fake OpenAI endpoint.
@ishaan_jaff
Ishaan
10 days
- [Azure OpenAI Feature] Support DefaultAzureCredential without hard-coded environment variables.
- [Feat] Add Fireworks - fireworks/models/kimi-k2-instruct.
- [Docs] Show the correct list of Vertex AI Mistral models.
- [Bug Fix] Gemini leaking file descriptors on sync calls with litellm.completion.
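For reference, a small sketch of what DefaultAzureCredential buys you: credentials are resolved from the environment (managed identity, Azure CLI login, workload identity, ...) instead of hard-coded variables. The scope shown is the standard Azure OpenAI / Cognitive Services scope; how LiteLLM wires the credential in is covered in its Azure docs.

```python
# Sketch: acquire an Azure AD token without hard-coded secrets.
# Requires the azure-identity package.
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()  # tries managed identity, CLI login, env vars, ...
token = credential.get_token("https://cognitiveservices.azure.com/.default")
print("token acquired, expires at:", token.expires_on)
```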
@ishaan_jaff
Ishaan
10 days
✨ New Image Generation API on @LiteLLM - today we're launching support for @recraftai. This API is great for designers + developers looking to use image generation. You can get started with Recraft Image Generation here. Other improvements 👇
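A hedged sketch of calling the new provider through litellm.image_generation, LiteLLM's OpenAI-style image API; the "recraft/..." model string is an assumed placeholder, so check the LiteLLM docs for the exact provider prefix and model ids.

```python
# Sketch only: the model id below is an assumed placeholder, not a confirmed name.
import litellm

resp = litellm.image_generation(
    model="recraft/recraftv3",   # assumption: provider-prefixed model id
    prompt="A flat vector illustration of a lighthouse at dusk",
)
print(resp.data[0].url)          # OpenAI-style image response
```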
@ishaan_jaff
Ishaan
13 days
- [Feat] LLM API Endpoints - expose an OpenAI-compatible /vector_stores/{vector_store_id}/search endpoint.
- [Feat] UI Vector Stores - allow adding Vertex RAG Engine, OpenAI, Azure.
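A hedged sketch of hitting that OpenAI-compatible search route through a locally running LiteLLM proxy; the base URL, virtual key, vector store id, and request body shape are assumptions for illustration only.

```python
# Sketch: query a vector store via the proxy's OpenAI-compatible route.
# Base URL, key, store id, and body shape are illustrative assumptions.
import requests

BASE_URL = "http://localhost:4000"              # assumed local proxy address
headers = {"Authorization": "Bearer sk-1234"}   # assumed proxy virtual key

resp = requests.post(
    f"{BASE_URL}/v1/vector_stores/vs_abc123/search",
    headers=headers,
    json={"query": "refund policy"},            # assumed OpenAI-style body
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```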
@ishaan_jaff
Ishaan
13 days
- [Feat] Add the azure_ai/grok-3 model family + cost tracking.
- [Bug Fix] s3 v2 log uploader crashes when used with guardrails.
- [Feat] UI - allow clicking into Vector Stores.
@ishaan_jaff
Ishaan
14 days
- [Feat] Bedrock Guardrails - allow disabling the exception on the 'BLOCKED' action.
- [Refactor] Use the existing config structure for Bedrock vector stores.
@ishaan_jaff
Ishaan
14 days
Docs to get started are below. Other improvements on this release:
- [Refactor] Vector Stores - use the VectorStorePreCallHook class for all vector store integrations.
- [Feat] Proxy - new LLM API routes /v1/vector_stores and /v1/vector_stores/vs_abc123/search.
docs.litellm.ai
@ishaan_jaff
Ishaan
15 days
- [Bug Fix] SCIM - add GET /ServiceProviderConfig (h/t Moe Kazem).
- [Feat] UI - add an end_user filter on the UI.
- [Feat] New vector store - PG Vector.
- [Bug Fix] grok-4 does not support the stop param.
- [New Model] Add together_ai/moonshotai/Kimi-K2-Instruct.