# ProxyAPI - AI Model Proxy & Discovery Platform

Python 3.11+ · License: MIT · FastAPI · Docker

A comprehensive AI model proxy and discovery platform that provides unified access to multiple AI providers including OpenAI, Anthropic, Azure OpenAI, Cohere, and more.

## ✨ Key Features

- 🔍 **Automatic Model Discovery**: Real-time discovery and cataloging of AI models from all configured providers
- 🚀 **High-Performance Proxy**: Intelligent routing with circuit breakers, caching, and connection pooling
- 📊 **Comprehensive Monitoring**: Prometheus metrics, health checks, and detailed analytics
- 🧪 **Chaos Engineering**: Fault injection and resilience testing
- 💰 **Cost Optimization**: Context condensation and smart caching to reduce API costs
- 🔒 **Enterprise Security**: Rate limiting, authentication, and audit logging

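The circuit-breaker idea behind the routing layer can be sketched in a few lines. This is an illustrative sketch of the general pattern only, not ProxyAPI's actual implementation; the class name and thresholds are hypothetical.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after N consecutive failures,
    allow a probe request again after a cooldown (half-open)."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def allow_request(self):
        if self.opened_at is None:
            return True
        # Half-open: permit a probe once the cooldown has elapsed
        return time.monotonic() - self.opened_at >= self.reset_timeout

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()

breaker = CircuitBreaker(failure_threshold=2, reset_timeout=60.0)
breaker.record_failure()
breaker.record_failure()
print(breaker.allow_request())  # False: circuit is open
```

A proxy wraps each upstream call with `allow_request()` and records the outcome, so a provider that keeps failing is skipped until its cooldown expires.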
## 📋 Table of Contents

- [🚀 Quick Start](#-quick-start)
- [📦 Installation](#-installation)
- [💻 Basic Usage](#-basic-usage)
- [📖 Documentation](#-documentation)
- [🔧 Advanced Features](#-advanced-features)
- [🤝 Contributing](#-contributing)
- [📞 Support](#-support)
- [📄 License](#-license)

## 🚀 Quick Start

### Docker (Recommended)

```bash
# Clone the repository
git clone https://github.com/your-org/proxyapi.git
cd proxyapi

# Start with Docker Compose
docker-compose up -d

# Access the web interface
open http://localhost:8000
```

### Manual Setup

```bash
# Install dependencies
pip install -r requirements.txt

# Set environment variables
export OPENAI_API_KEY="your-openai-key"
export API_KEY="your-proxy-key"

# Start the application
python main.py
```

That's it! Your proxy API is now running at http://localhost:8000.

## 📦 Installation

### Prerequisites

- Python 3.11+
- Docker & Docker Compose (recommended)
- 2GB RAM minimum, 4GB recommended

### Option 1: Docker Installation (Recommended)

```bash
# Clone repository
git clone https://github.com/your-org/proxyapi.git
cd proxyapi

# Configure environment
cp .env.example .env
# Edit .env with your API keys

# Start services
docker-compose up -d
```

### Option 2: Manual Installation

```bash
# Install Python dependencies
pip install -r requirements.txt

# For enhanced performance (optional)
pip install httpx[http2] aiofiles watchdog psutil

# Configure providers
cp config.yaml.example config.yaml
# Edit config.yaml with your API keys

# Start application
python main_dynamic.py
```

### Configuration

Create a config.yaml file with your provider configurations:

```yaml
providers:
  - name: "openai"
    type: "openai"
    api_key_env: "OPENAI_API_KEY"
    models:
      - "gpt-3.5-turbo"
      - "gpt-4"
    enabled: true

  - name: "anthropic"
    type: "anthropic"
    api_key_env: "ANTHROPIC_API_KEY"
    models:
      - "claude-3-haiku"
      - "claude-3-sonnet"
    enabled: true
```

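At startup, each provider's credentials are resolved from the environment variable named by `api_key_env`. That resolution step can be sketched as follows, using a Python dict that mirrors the YAML above; `resolve_providers` is a hypothetical helper for illustration, not part of ProxyAPI's API.

```python
import os

# Python mirror of the YAML config above (illustrative only).
# "anthropic" is disabled here just to show the filtering step.
CONFIG = {
    "providers": [
        {
            "name": "openai",
            "type": "openai",
            "api_key_env": "OPENAI_API_KEY",
            "models": ["gpt-3.5-turbo", "gpt-4"],
            "enabled": True,
        },
        {
            "name": "anthropic",
            "type": "anthropic",
            "api_key_env": "ANTHROPIC_API_KEY",
            "models": ["claude-3-haiku", "claude-3-sonnet"],
            "enabled": False,
        },
    ]
}

def resolve_providers(config):
    """Return enabled providers, each with its API key read from the
    environment variable named by api_key_env."""
    resolved = []
    for provider in config["providers"]:
        if not provider.get("enabled"):
            continue
        entry = dict(provider)
        entry["api_key"] = os.environ.get(provider["api_key_env"], "")
        resolved.append(entry)
    return resolved

os.environ["OPENAI_API_KEY"] = "sk-example"
print([p["name"] for p in resolve_providers(CONFIG)])  # ['openai']
```

Keeping keys in environment variables rather than in `config.yaml` means the config file can be committed safely while secrets stay out of version control.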
## 💻 Basic Usage

### Chat Completions

```python
import requests

# Make a chat completion request
response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "X-API-Key": "your-proxy-key"
    },
    json={
        "model": "gpt-4",
        "messages": [
            {"role": "user", "content": "Hello, how are you?"}
        ]
    }
)

print(response.json())
```

### Model Discovery

```python
import requests

# Get all available models
response = requests.get("http://localhost:8000/v1/models")
models = response.json()

for model in models["data"]:
    print(f"{model['id']}: {model['description']}")
```

### Health Check

```bash
# Quick health check
curl http://localhost:8000/health

# Detailed system status
curl http://localhost:8000/v1/health
```

## 📖 Documentation

### For New Users

### For Developers

### Advanced Topics

### Deployment & Operations

### Development

## 🔧 Advanced Features

### Model Discovery System

Automatically discovers and catalogs available AI models from all configured providers with real-time pricing and capabilities.

```bash
# Refresh model cache
curl -X POST http://localhost:8000/v1/models/refresh

# Search models by capabilities
curl "http://localhost:8000/v1/models/search?supports_vision=true&max_cost=0.01"
```
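The same kind of capability filtering can be applied client-side over a `/v1/models`-style payload. A sketch; the fields `supports_vision` and `cost_per_token` are assumptions for illustration, not ProxyAPI's documented schema.

```python
# Illustrative client-side filter over a /v1/models-style payload.
models = {
    "data": [
        {"id": "gpt-4", "supports_vision": False, "cost_per_token": 0.03},
        {"id": "gpt-4o", "supports_vision": True, "cost_per_token": 0.005},
        {"id": "claude-3-haiku", "supports_vision": True, "cost_per_token": 0.00025},
    ]
}

def search_models(payload, supports_vision=None, max_cost=None):
    """Return IDs of models matching the given capability filters."""
    matches = []
    for model in payload["data"]:
        if supports_vision is not None and model["supports_vision"] != supports_vision:
            continue
        if max_cost is not None and model["cost_per_token"] > max_cost:
            continue
        matches.append(model["id"])
    return matches

print(search_models(models, supports_vision=True, max_cost=0.01))
# ['gpt-4o', 'claude-3-haiku']
```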

### Context Condensation

Automatically summarizes long contexts to reduce API costs and improve performance.

```python
import requests

# Long context is automatically handled
messages = [{"role": "user", "content": "Very long text..." * 1000}]

# The proxy condenses the context if needed
response = requests.post("http://localhost:8000/v1/chat/completions", json={
    "model": "gpt-4",
    "messages": messages
})
```
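The condensation idea can be sketched as: estimate the token cost of each message and fold older overflow into a placeholder summary. This is an illustrative sketch only; the real service presumably uses proper tokenization and LLM-based summarization rather than this heuristic.

```python
def estimate_tokens(text):
    # Crude heuristic: roughly 4 characters per token for English text
    return max(1, len(text) // 4)

def condense(messages, budget=1000):
    """Keep the newest messages whole; fold older overflow into a single
    placeholder summary (a stand-in for real LLM summarization)."""
    total = 0
    kept = []
    for message in reversed(messages):
        cost = estimate_tokens(message["content"])
        if total + cost > budget and kept:
            kept.append({"role": "system",
                         "content": "[earlier conversation summarized]"})
            break
        total += cost
        kept.append(message)
    return list(reversed(kept))

history = [
    {"role": "user", "content": "x" * 4000},
    {"role": "assistant", "content": "y" * 4000},
    {"role": "user", "content": "What did we decide?"},
]
print(condense(history, budget=1200)[0]["content"])
# [earlier conversation summarized]
```

Walking newest-to-oldest keeps the messages most relevant to the next completion intact, which is why condensation can cut API costs without hurting answer quality much.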

### Monitoring & Metrics

Comprehensive monitoring with Prometheus metrics and health checks.

```bash
# Get metrics
curl http://localhost:8000/metrics

# Prometheus format
curl http://localhost:8000/metrics/prometheus
```
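To scrape these metrics continuously, a Prometheus job can point at the Prometheus-format endpoint. A config fragment sketch; the job name, interval, and target are placeholders to adjust for your deployment.

```yaml
# prometheus.yml (fragment) - adjust the target to where the proxy runs
scrape_configs:
  - job_name: "proxyapi"
    metrics_path: "/metrics/prometheus"
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:8000"]
```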

## 🤝 Contributing

We welcome contributions! Please see our Contributing Guide for details.

### Development Setup

```bash
# Clone repository
git clone https://github.com/your-org/proxyapi.git
cd proxyapi

# Install development dependencies
pip install -r requirements-dev.txt

# Run tests
pytest tests/

# Run linting
flake8 src/
black src/
mypy src/
```

## 📞 Support

### Getting Help

### Community

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • OpenAI for GPT models and API
  • Anthropic for Claude models
  • Microsoft for Azure OpenAI
  • FastAPI for the excellent web framework
  • All contributors who helped make this possible

⭐ Star this repository if you find it useful!
