Protect AI Guardian is an enterprise ML security gateway that enforces security policies on machine learning models before they enter your production environment.
Scanning over 35 model formats for vulnerabilities including deserialization attacks, architectural backdoors, and malicious code injections, Guardian acts as the gatekeeper for your AI supply chain.
What is Protect AI Guardian?
Guardian addresses a critical vulnerability in ML pipelines: the inherent trust organizations place in downloaded models.
Whether pulling models from Hugging Face, internal registries, or third-party vendors, every model file poses potential security risks.
Pickle files can execute arbitrary code the moment they are loaded, model architectures can contain hidden backdoors, and even safetensors files, though designed to avoid code execution, can carry tampered or backdoored weights.
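To see why pickle-based formats are so risky, consider a minimal sketch of how a model file can carry an execution payload. Python's pickle protocol lets any object define __reduce__, which the unpickler calls at load time; the command below is a harmless echo, purely for illustration.

import os
import pickle

class MaliciousModel:
    """Masquerades as a model object but hijacks deserialization."""
    def __reduce__(self):
        # pickle will call os.system("echo pwned") when this payload is loaded
        return (os.system, ("echo pwned",))

payload = pickle.dumps(MaliciousModel())

# Anyone who simply loads the "model" runs the attacker's command
pickle.loads(payload)

Because the payload fires on load, the only safe way to catch it is to inspect the file statically rather than deserializing it.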
Guardian provides automated security scanning that integrates into existing ML workflows.
Built on the open-source ModelScan project and powered by threat intelligence from over 17,000 security researchers through the huntr bug bounty community, Guardian identifies vulnerabilities that traditional security tools miss because they were not designed for ML artifacts.
Guardian now features over 500 threat scanners and has contributed 2,520+ CVE records to the security community.
Protect AI was acquired by Palo Alto Networks in 2025, integrating Guardian into their broader cybersecurity portfolio.
Guardian works alongside two complementary Protect AI products: Recon for automated red teaming of AI applications, and Layer for runtime security monitoring.
Key Features
Comprehensive Model Scanning
Guardian scans 35+ model formats including:
- Deep Learning: PyTorch (.pt, .pth), TensorFlow (SavedModel, .h5), ONNX, Keras
- Serialization: Pickle, Joblib, Dill, CloudPickle
- LLM Formats: GGUF, GGML, Safetensors
- Frameworks: Scikit-learn, XGBoost, LightGBM, Hugging Face Transformers
Vulnerability Detection
Guardian identifies critical security issues:
- Deserialization Attacks: Detects code execution payloads in pickle files (see the opcode-scanning sketch after this list)
- Architectural Backdoors: Identifies hidden layers or modified weights
- Malicious Operations: Flags suspicious TensorFlow/PyTorch operations
- Supply Chain Risks: Tracks model provenance and dependencies
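The deserialization checks rest on the same principle as the open-source ModelScan project: walk the pickle opcode stream and flag imports of dangerous callables without ever loading the file. Below is a rough, self-contained sketch of that idea; the blocklist is a small illustrative subset, not Guardian's actual rule set.

import pickletools

# Illustrative blocklist; a real scanner covers far more callables
UNSAFE_GLOBALS = {
    ("os", "system"),
    ("builtins", "eval"),
    ("builtins", "exec"),
    ("subprocess", "Popen"),
}

def scan_pickle(path: str) -> list[tuple[str, str]]:
    """Flag dangerous imports in a pickle without ever loading it."""
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    # Walk the opcode stream; GLOBAL carries "module name" as its argument.
    # (Protocol 4+ uses STACK_GLOBAL, which a real scanner also tracks.)
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL" and arg:
            module, _, name = str(arg).partition(" ")
            if (module, name) in UNSAFE_GLOBALS:
                findings.append((module, name))
    return findings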
Policy Enforcement
Define and enforce organizational security policies:
# Example policy configuration
policies:
  - name: block-pickle
    description: Block all pickle-based models
    action: deny
    rules:
      - format: pickle
        severity: critical
  - name: require-signing
    description: Require model signatures from approved sources
    action: deny
    rules:
      - signed: false
        source: external
Risk Scoring
Guardian assigns risk scores based on the following factors (an illustrative sketch follows this list):
- Detected vulnerabilities and their severity
- Model format inherent risks
- Source provenance and trust level
- Historical threat intelligence
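Guardian's exact scoring model is not public. Purely to illustrate how these factors might combine into a single number, here is a toy weighted score; the weights, factor names, and thresholds are invented for this sketch and are not Guardian's algorithm.

# Toy risk score: blend factor scores (0.0 = safe, 1.0 = worst case)
SEVERITY_SCORES = {"none": 0.0, "low": 0.3, "medium": 0.6, "high": 0.8, "critical": 1.0}

def risk_score(worst_severity: str, format_risk: float,
               source_trust: float, prior_incidents: int) -> float:
    """Blend the four factors into a single 0-100 score (illustrative only)."""
    score = (
        0.5 * SEVERITY_SCORES[worst_severity]   # detected vulnerabilities
        + 0.2 * format_risk                     # e.g. pickle > safetensors
        + 0.2 * (1.0 - source_trust)            # provenance / trust level
        + 0.1 * min(prior_incidents / 5, 1.0)   # historical threat intelligence
    )
    return round(100 * score, 1)

# A pickle model from an untrusted source with a critical finding
print(risk_score("critical", format_risk=0.9, source_trust=0.2, prior_incidents=3))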
How to Use Guardian
Web Console
Access Guardian through the web interface:
- Navigate to the Guardian console
- Upload models or connect to your registry
- Review scan results and risk scores
- Configure policies and enforcement rules
- Export compliance reports
CLI Integration
# Install Guardian CLI
pip install protectai-guardian
# Authenticate
guardian auth login
# Scan a local model
guardian scan ./models/classifier.pkl
# Scan with policy enforcement
guardian scan ./models/ --policy production-policy.yaml
# Generate compliance report
guardian report --format pdf --output compliance-report.pdf
Python SDK
from protectai import Guardian

# Initialize client
client = Guardian(api_key="your-api-key")

# Scan a model file
result = client.scan_model("./models/transformer.pt")

# Check scan results
if result.is_safe:
    print("Model passed security scan")
else:
    for finding in result.findings:
        print(f"Issue: {finding.type} - {finding.severity}")
        print(f"Details: {finding.description}")

# Enforce policy
policy_result = client.enforce_policy(
    model_path="./models/classifier.pkl",
    policy="production"
)
Registry Integration
Connect Guardian to your model registries:
# Scan models from Hugging Face
result = client.scan_from_hub("microsoft/DialoGPT-large")

# Scan from MLflow
result = client.scan_from_mlflow(
    tracking_uri="http://mlflow.internal",
    model_name="production-classifier",
    version=3
)

# Scan from S3
result = client.scan_from_s3(
    bucket="ml-models",
    key="models/latest/model.pkl"
)
Integration
CI/CD Pipeline
Add Guardian to your ML pipeline:
name: ML Model Security
on: [push]
jobs:
  model-security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Guardian
        uses: protectai/guardian-action@v1
        with:
          api-key: ${{ secrets.GUARDIAN_API_KEY }}
      - name: Scan Models
        run: |
          guardian scan ./models/ \
            --policy production \
            --fail-on high
      - name: Upload Results
        uses: actions/upload-artifact@v4
        with:
          name: guardian-results
          path: guardian-report.json
MLOps Platform Integration
Guardian integrates with major platforms:
- Hugging Face Hub: Automatic scanning of downloaded models
- MLflow: Model registry integration with pre-deployment gates (sketched after this list)
- SageMaker: Model package validation
- Databricks: Unity Catalog scanning
- Weights & Biases: Artifact security scanning
Kubernetes Admission Controller
Block unsafe models from deploying:
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: guardian-model-validator
webhooks:
  - name: models.guardian.protectai.com
    clientConfig:
      service:
        name: guardian-webhook
        namespace: ml-security
    # The fields below are required for a valid webhook; the rule shown is an
    # illustrative choice that intercepts pod creation
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
When to Use Guardian
Guardian is essential for organizations that consume ML models from external sources and need to validate their security before deployment.
It fits enterprises downloading models from Hugging Face, purchasing models from vendors, or maintaining internal model registries where provenance must be verified.
Consider Guardian if your organization faces regulatory requirements around AI security, operates in industries where model tampering could cause harm, or has experienced supply chain security incidents.
The platform particularly benefits teams with mature MLOps practices who need enterprise-grade security controls.
For organizations already using Protect AI’s open-source tools like ModelScan or LLM Guard, Guardian provides a commercial platform that unifies security capabilities with policy enforcement, compliance reporting, and enterprise integrations.