DeepTeam

DeepTeam

Category: AI Security
License: Free (Open-Source)

DeepTeam is an open-source LLM red-teaming framework developed by Confident AI that helps security teams identify and mitigate vulnerabilities in large language models before attackers exploit them. It has 1.3k GitHub stars and 187 forks.

GitHub: confident-ai/deepteam | Latest Release: v1.0.4 (November 2025)

With over 40 built-in attack simulations covering prompt injection, PII leakage, jailbreaks, and more, DeepTeam provides comprehensive security testing aligned with the OWASP Top 10 for LLMs 2025.

What is DeepTeam?

DeepTeam addresses a critical gap in AI security: systematically testing LLM-based applications for vulnerabilities.

Traditional security testing tools were not designed for the unique attack surface of language models, leaving organizations blind to prompt injection attacks, data exfiltration risks, and jailbreak attempts.

The framework simulates real-world adversarial attacks against your LLM applications, generating detailed reports on discovered vulnerabilities.

DeepTeam goes beyond detection by offering optional guardrails that can be deployed to prevent attacks in production.

Its modular architecture allows security researchers to customize attacks for specific use cases and extend the framework with new attack types.

Built by the team behind DeepEval, the leading LLM evaluation framework, DeepTeam brings the same rigor to security testing that DeepEval brings to quality evaluation.

Key Features

Comprehensive Attack Library

DeepTeam ships with 40+ attack simulations that cover the full spectrum of LLM vulnerabilities.

Attack categories include prompt injection (direct, indirect, and encoded), jailbreaking techniques, PII and sensitive data leakage, harmful content generation, and system prompt extraction.

Each attack type includes multiple variants to test different bypass techniques.

OWASP Top 10 Coverage

The framework maps directly to the OWASP Top 10 for LLMs 2025, ensuring your testing covers industry-recognized security risks.

This includes prompt injection (LLM01), insecure output handling (LLM02), training data poisoning (LLM03), and other critical vulnerability categories.

Modular Attack Design

Security teams can create custom attack modules tailored to their specific LLM applications.

The modular architecture allows for easy extension, letting you add domain-specific attacks or modify existing ones to match your threat model.

Optional Guardrails

Beyond testing, DeepTeam provides guardrails that can be deployed to protect LLM applications in production.

These guardrails detect and block attacks in real-time, offering a defense layer for applications that cannot be immediately patched.

Installation

Install DeepTeam via pip:

pip install deepteam

For the latest development version:

pip install git+https://github.com/confident-ai/deepteam.git

How to Use DeepTeam

Basic Vulnerability Scan

from deepteam import DeepTeam
from deepteam.attacks import PromptInjection, Jailbreak, PIILeakage

# Initialize the red-teaming engine
dt = DeepTeam()

# Define your target LLM endpoint
target = dt.create_target(
    model_name="your-llm",
    api_endpoint="https://api.example.com/chat",
    api_key="your-api-key"
)

# Run attack simulations
results = dt.scan(
    target=target,
    attacks=[
        PromptInjection(),
        Jailbreak(),
        PIILeakage()
    ]
)

# Generate security report
results.to_report("security_report.html")

Custom Attack Configuration

from deepteam.attacks import PromptInjection

# Configure specific attack parameters
injection_attack = PromptInjection(
    variants=["direct", "indirect", "encoded"],
    max_attempts=100,
    success_threshold=0.1
)

results = dt.scan(target=target, attacks=[injection_attack])

Integration

CI/CD Pipeline Integration

Add DeepTeam to your GitHub Actions workflow:

name: LLM Security Scan
on: [push, pull_request]

jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Install DeepTeam
        run: pip install deepteam
      - name: Run Security Scan
        env:
          LLM_API_KEY: ${{ secrets.LLM_API_KEY }}
        run: python scripts/run_security_scan.py
      - name: Upload Report
        uses: actions/upload-artifact@v4
        with:
          name: security-report
          path: security_report.html

Pre-Deployment Gates

# Fail deployment if critical vulnerabilities found
if results.has_critical_vulnerabilities():
    raise SecurityException("Critical vulnerabilities detected")

When to Use DeepTeam

DeepTeam is the right choice when you need to systematically test LLM applications for security vulnerabilities before production deployment.

It excels in organizations building customer-facing chatbots, AI assistants, or any application where prompt injection could lead to data leakage or unauthorized actions.

Consider DeepTeam if you need OWASP Top 10 for LLMs coverage, want to integrate security testing into your CI/CD pipeline, or require an extensible framework for custom attack development.

Its open-source nature makes it accessible for teams of all sizes while providing enterprise-grade coverage.

For teams new to LLM security, DeepTeam provides a structured approach to understanding your attack surface.

For mature security organizations, it offers the flexibility to build custom attack libraries that match your specific threat models.