
shaungehring/rapidai

RapidAI 🚀

PyPI version · Python 3.10+ · License: MIT

Production-ready Python framework for building AI applications fast

RapidAI is designed for one thing: getting from idea to deployed AI application in under an hour. When your boss asks you to POC the latest AI tool, this is the framework you reach for.

Vision

A web framework that bridges the gap between Flask's simplicity and Django's batteries-included approach, optimized specifically for modern AI development. Think of it as "the Rails of AI apps": convention over configuration, but for LLM-powered applications.

✨ Features

  • 🤖 Zero-config LLM integration - Built-in support for Anthropic Claude, OpenAI, Cohere with unified interface
  • 📡 Streaming by default - SSE/WebSocket streaming built into routes, not bolted on
  • 🔄 Background jobs - Async task processing with automatic retry and job tracking
  • 📊 Built-in monitoring - Token usage, cost tracking, and metrics dashboard
  • 🎨 UI components - Pre-built chat interfaces with customizable themes
  • 📚 RAG system - Document loading, embeddings, vector DB integration for retrieval
  • 📝 Prompt management - Version control and templating for prompts with Jinja2
  • 💾 Smart caching - Semantic caching using embedding similarity
  • 🧪 Testing utilities - TestClient, MockLLM, MockMemory for easy testing
  • ⚡ CLI tool - Project templates, dev server, deployment, and more

🚀 Quick Start

Simple Chat Endpoint

from rapidai import App, LLM

app = App()
llm = LLM("claude-3-haiku-20240307")

@app.route("/chat", methods=["POST"])
async def chat(message: str):
    response = await llm.complete(message)
    return {"response": response}

if __name__ == "__main__":
    app.run()

With Streaming

from rapidai import App, LLM

app = App()
llm = LLM("claude-3-haiku-20240307")

@app.route("/chat", methods=["POST"])
async def chat(message: str):
    async for chunk in llm.stream(message):
        yield chunk

if __name__ == "__main__":
    app.run()
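Each chunk yielded by a streaming route is delivered to the browser as a Server-Sent Events frame; the framework handles that framing internally. As a framework-independent sketch of what the wire format looks like:

```python
def sse_frame(chunk: str) -> str:
    """Frame a text chunk as a Server-Sent Events message.

    Multi-line chunks become one `data:` line per line of text,
    and each message is terminated by a blank line, per the SSE spec.
    """
    lines = chunk.split("\n")
    return "".join(f"data: {line}\n" for line in lines) + "\n"

# A completion streamed as "Hel" then "lo" arrives as two frames:
# "data: Hel\n\n" followed by "data: lo\n\n"
```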

With Background Jobs

from rapidai import App, background

app = App()

@background(max_retries=3)
async def process_document(doc_id: str):
    # Long-running task runs in background
    await analyze_document(doc_id)

@app.route("/process", methods=["POST"])
async def start_processing(doc_id: str):
    job = await process_document(doc_id)  # returns a job handle; the task runs in the background
    return {"job_id": job.id, "status": job.status}
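The retry behavior behind a decorator like `@background(max_retries=3)` boils down to re-invoking the coroutine with backoff. A generic sketch of that logic (the names below are illustrative, not RapidAI internals):

```python
import asyncio
import functools

def with_retries(max_retries: int = 3, base_delay: float = 0.01):
    """Retry an async function with exponential backoff (illustrative sketch)."""
    def decorator(fn):
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return await fn(*args, **kwargs)
                except Exception:
                    if attempt == max_retries:
                        raise  # out of retries: surface the failure
                    await asyncio.sleep(base_delay * 2 ** attempt)
        return wrapper
    return decorator

@with_retries(max_retries=2)
async def flaky(state: dict):
    # Fails twice, then succeeds — exercises the retry path.
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("transient failure")
    return "done"
```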

With Monitoring

from rapidai import App, LLM, monitor

app = App()
llm = LLM("claude-3-haiku-20240307")

@app.route("/chat", methods=["POST"])
@monitor()  # Automatically tracks tokens and costs
async def chat(message: str):
    return await llm.complete(message)

@app.route("/metrics")
async def metrics():
    return app.get_metrics_html()  # Built-in dashboard
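Under the hood, cost tracking is just token counts multiplied by per-token rates. A minimal accumulator with made-up illustrative rates (always check your provider's current pricing):

```python
class CostTracker:
    """Accumulate token usage and estimated spend (illustrative sketch).

    The rates below are placeholders for illustration, not real prices.
    """
    RATES = {  # USD per 1M tokens: (input, output) — illustrative only
        "example-model": (0.25, 1.25),
    }

    def __init__(self):
        self.input_tokens = 0
        self.output_tokens = 0

    def record(self, model: str, input_tokens: int, output_tokens: int) -> float:
        """Record a request and return its estimated cost in USD."""
        self.input_tokens += input_tokens
        self.output_tokens += output_tokens
        rate_in, rate_out = self.RATES[model]
        return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000

tracker = CostTracker()
cost = tracker.record("example-model", input_tokens=1000, output_tokens=500)
```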

📦 Installation

pip install rapidai-framework

Optional Dependencies

Install with specific features:

# Anthropic Claude support
pip install "rapidai-framework[anthropic]"

# OpenAI support
pip install "rapidai-framework[openai]"

# RAG (document loading, embeddings, vector DB)
pip install "rapidai-framework[rag]"

# Redis (for caching and memory)
pip install "rapidai-framework[redis]"

# Everything
pip install "rapidai-framework[all]"

# Development tools
pip install "rapidai-framework[dev]"

📋 What's Included

Core Framework

  • App class - Fast async web server with routing
  • LLM clients - Anthropic Claude, OpenAI, Cohere with unified interface
  • Streaming - Built-in SSE support for real-time responses
  • Memory - Conversation history (in-memory and Redis)
  • Caching - Semantic caching with embedding similarity
  • Config - Environment-based configuration with Pydantic
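Semantic caching differs from exact-match caching: a new prompt hits the cache when its embedding is close enough to a previously cached one. A dependency-free sketch of the lookup, with toy vectors standing in for real embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class SemanticCache:
    """Cache keyed by embedding similarity rather than exact text match."""

    def __init__(self, threshold: float = 0.95):
        self.threshold = threshold
        self.entries = []  # list of (embedding, cached_response)

    def get(self, embedding):
        # Return the cached response whose embedding is most similar,
        # but only if similarity clears the threshold.
        best = max(self.entries, key=lambda e: cosine(e[0], embedding), default=None)
        if best and cosine(best[0], embedding) >= self.threshold:
            return best[1]
        return None

    def put(self, embedding, response: str):
        self.entries.append((embedding, response))

cache = SemanticCache(threshold=0.95)
cache.put([1.0, 0.0], "cached answer")
```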

Advanced Features

  • Background jobs - @background decorator with retry logic and job tracking
  • Monitoring - @monitor decorator with token/cost tracking and HTML dashboard
  • RAG system - Document loading (PDF, DOCX, TXT, HTML, MD), embeddings, vector DB
  • Prompt management - Template-based prompts with Jinja2 and versioning
  • UI components - Pre-built chat interfaces with themes and customization
  • Testing utilities - TestClient, MockLLM, MockMemory for easy testing

Developer Tools

  • CLI tool - rapidai new, rapidai dev, rapidai deploy, rapidai test
  • Project templates - Chatbot, RAG, Agent, API templates
  • Type hints - Full type coverage for IDE support
  • Documentation - Complete guides and API references at https://shaungehring.github.io/rapidai/

Status

Version: 1.0.0 - Production Ready 🎉

See CHANGELOG.md for release notes.

💡 Use Cases

Perfect for building:

  • 🤖 Chat applications - Customer support bots, AI assistants
  • 📚 RAG systems - Document Q&A, knowledge bases
  • 🔧 Internal tools - AI-powered dashboards and workflows
  • 📊 Data processing - Background jobs for document analysis
  • 🌐 AI APIs - REST endpoints with LLM integration
  • 🎯 Rapid prototypes - POCs and MVPs in under an hour

🎯 Philosophy

  1. Convention over configuration - Sensible defaults, minimal boilerplate
  2. Provider agnostic - Swap OpenAI for Anthropic with one line
  3. Async-first - Built on modern async/await patterns
  4. Type-safe - Full type hints for excellent IDE support
  5. Batteries included - Everything you need, nothing you don't
  6. Production ready - Monitoring, testing, deployment from day one

📚 Documentation

Complete documentation available at https://shaungehring.github.io/rapidai/

🛠️ CLI Tool

RapidAI includes a powerful CLI for project scaffolding and management:

# Create a new project from template
rapidai new my-chatbot --template chatbot

# Start development server with hot reload
rapidai dev

# Run tests
rapidai test

# Deploy to cloud platforms
rapidai deploy --platform vercel

# Generate documentation
rapidai docs

Available templates:

  • chatbot - Simple chat application
  • rag - RAG system with document Q&A
  • agent - AI agent with tools
  • api - REST API with LLM endpoints

👨‍💻 Development

# Clone the repository
git clone https://github.com/shaungehring/rapidai.git
cd rapidai

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install in editable mode with dev dependencies
pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install

# Run tests
pytest

# Run tests with coverage
pytest --cov=rapidai tests/

# Type check
mypy rapidai

# Lint and format
ruff check rapidai
ruff format rapidai

🚀 Publishing

RapidAI is available on PyPI. To publish a new version:

# Test on TestPyPI first
./scripts/publish.sh test

# Publish to production PyPI
./scripts/publish.sh prod

See PUBLISHING.md for complete publishing guide.

📄 License

MIT License - see LICENSE file for details.

🙏 Contributing

We welcome contributions! Whether it's:

  • 🐛 Bug fixes
  • ✨ New features
  • 📚 Documentation improvements
  • 🧪 Test coverage
  • 💡 Ideas and suggestions

See CONTRIBUTING.md for guidelines on how to contribute.

⭐ Show Your Support

If you find RapidAI helpful, please consider:

  • ⭐ Starring the GitHub repository
  • 📢 Sharing with your network
  • 🐛 Reporting issues you encounter
  • 💡 Suggesting new features

Built with ❤️ for AI engineers who move fast

Version: 1.0.0 | Status: Production Ready 🎉
