**Introducing Open Bias: A Reliable Proxy for AI Agents**
In the rapidly evolving landscape of artificial intelligence (AI) and machine learning (ML), the reliability and safety of AI agents is crucial. As applications rely more heavily on Large Language Models (LLMs), teams need a way to monitor model behavior and enforce policies on it automatically. Enter Open Bias: a transparent proxy that monitors LLM API calls and intervenes automatically when deviations occur.
**What is Open Bias?**
Open Bias is an open-source project that provides a reliability layer for AI agents. It uses LiteLLM as its proxy layer, letting you define policies in YAML and evaluate every response against built-in or custom rubrics. The default Judge engine scores each response with a sidecar LLM and intervenes when violations are detected.
**How Does Open Bias Work?**
Open Bias uses a modular architecture comprising four policy engines: Judge, FSM, LLM, and NeMo Guardrails. Each engine has its own mechanism for scoring responses and intervening when necessary. Because evaluation happens off the hot path, the proxy layer adds virtually no latency to your LLM calls in the default configuration.
**Policy Engines**
* **Judge Engine (default)**: Scores each response against built-in or custom rubrics using a sidecar LLM.
* **FSM (Finite State Machine) Engine**: Evaluates responses based on states, transitions, and constraints defined in YAML.
* **LLM Engine**: Uses an LLM to classify responses and detect drift.
* **NeMo Guardrails Engine**: Wraps NVIDIA NeMo Guardrails for content safety, dialog rails, and topical control.
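The core pattern shared by these engines is score-then-intervene. The project's internal API isn't shown here, so the sketch below is purely illustrative: `Verdict`, `evaluate`, and `stub_judge` are hypothetical names, and the stub stands in for the sidecar LLM the Judge engine would actually call.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    score: float   # 0.0 (clear violation) .. 1.0 (fully compliant)
    reason: str

def evaluate(response: str, rubric: str,
             judge: Callable[[str, str], Verdict],
             threshold: float = 0.5) -> str:
    """Score a response against a rubric; intervene below the threshold."""
    verdict = judge(response, rubric)
    if verdict.score < threshold:
        # A real engine would queue a deferred intervention here;
        # for illustration we just flag the response.
        return f"[intervention: {verdict.reason}]"
    return response

def stub_judge(response: str, rubric: str) -> Verdict:
    """Toy judge standing in for a sidecar LLM call."""
    if "refund" in response.lower():
        return Verdict(0.0, "mentions refunds")
    return Verdict(1.0, "ok")

print(evaluate("We offer full refunds!", "Never promise refunds.", stub_judge))
# → [intervention: mentions refunds]
```

The same loop generalizes to the other engines: only the `judge` callable changes (an FSM transition check, a classifier LLM, or a NeMo Guardrails evaluation).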
**Configuration**
Everything lives in `openbias.yaml`, which has smart defaults. The minimal config is just a policy list.
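The exact schema isn't reproduced here, so the field names below are illustrative assumptions; a minimal policy list might look something like this:

```yaml
# openbias.yaml — minimal sketch (field names are assumptions, not the real schema)
policies:
  - name: no-refund-promises
    engine: judge            # the default engine
    rubric: "Never promise refunds or discounts."
  - name: stay-on-topic
    engine: judge
    rubric: "Only discuss billing questions."
```

Models and API keys are auto-detected, so they need not appear in the file at all.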
**Performance**
Open Bias adds virtually no latency to your LLM calls in the default configuration:
* **Sync pre-call**: Applies deferred interventions (prompt string manipulation; microseconds).
* **LLM call**: Forwarded directly to the provider via LiteLLM.
* **Async post-call**: Response evaluation runs in a background `asyncio.Task`.
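The three steps above can be sketched in a few lines of asyncio. This is not Open Bias's actual code, just a minimal model of the pattern: the call is forwarded immediately, evaluation runs in a background task, and any resulting intervention is applied to the *next* call's prompt.

```python
import asyncio

PENDING: list[str] = []  # deferred interventions queued by post-call checks

async def call_llm(prompt: str) -> str:
    """Stand-in for the call forwarded directly to the provider via LiteLLM."""
    await asyncio.sleep(0)
    return f"response to: {prompt}"

async def post_call_check(response: str) -> None:
    """Background evaluation; a real judge would query a sidecar LLM here."""
    if "forbidden" in response:
        PENDING.append("System note: steer away from that topic.")

async def proxy_call(prompt: str) -> str:
    # Sync pre-call: apply deferred interventions (cheap string manipulation).
    while PENDING:
        prompt = f"{PENDING.pop(0)}\n{prompt}"
    response = await call_llm(prompt)               # forwarded directly
    asyncio.create_task(post_call_check(response))  # evaluated off the hot path
    return response
```

Because `post_call_check` runs after the response has already been returned, the caller never waits on the judge.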
**Key Features**
* Zero-latency proxy layer
* Four policy engines, each with its own scoring and intervention mechanism
* YAML-first configuration with auto-detection of models and API keys
* OpenTelemetry tracing
**Conclusion**
Open Bias provides a reliability proxy for AI agents, adding a safety and policy layer around LLM calls. Its modular architecture and multiple policy engines make it a versatile tool for developers who need to enforce policies on their AI agents, and its open-source nature makes it easy to extend and audit.
**Get Started with Open Bias**
* Install Open Bias using pip: `pip install openbias`
* Configure your policy in `openbias.yaml`
* Point your LLM client at the proxy
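Since the proxy speaks the OpenAI-compatible chat API (via LiteLLM), pointing a client at it amounts to swapping the base URL. The address, port, and path below are assumptions for illustration; check the project docs for the actual defaults.

```python
import json
from urllib import request

# Hypothetical proxy address — the real default port/path may differ.
PROXY_URL = "http://localhost:4000/chat/completions"

def build_chat_request(model: str, user_message: str) -> request.Request:
    """Build an OpenAI-compatible chat request aimed at the Open Bias proxy."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return request.Request(
        PROXY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("gpt-4o-mini", "Summarize our refund policy.")
# urllib.request.urlopen(req) would send the call through the proxy,
# where policies from openbias.yaml are applied transparently.
```

Any OpenAI SDK or HTTP client works the same way: only the base URL changes, so no application code needs to know the proxy exists.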
Join the conversation on our community forum or contribute to the project on GitHub.