We're releasing openclaw-shield, an open source security plugin that adds guardrails to OpenClaw agents. It prevents secret leaks, PII exposure, and destructive command execution.

The Risk

AI agents operating on behalf of users can access files, run shell commands, and produce text responses. Without guardrails, they can read .env files and output raw API keys, display Social Security numbers or credit card numbers, execute destructive commands like rm -rf, or exfiltrate credentials by embedding them in shell commands. That's what openclaw-shield prevents.
 
It applies five layers of defense-in-depth security, each of which can be toggled independently:
 
  • Prompt Guard - injects security policy into the agent context before each turn
  • Output Scanner - redacts secrets and PII from tool output before transcript persistence
  • Tool Blocker - blocks dangerous tool calls at the host level before execution
  • Input Audit - logs inbound messages and flags any secrets users accidentally send
  • Security Gate - requires the agent to call a gate tool before exec or file-read, returning ALLOWED or DENIED
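
To make the Security Gate concrete, here is a minimal sketch of the idea: a check that vets a proposed exec or file-read against deny patterns and returns ALLOWED or DENIED. The function name, action shape, and patterns below are illustrative assumptions, not openclaw-shield's actual API or rule set.

```typescript
// Hypothetical sketch of a Security Gate-style check (not openclaw-shield's API).
type GateVerdict = "ALLOWED" | "DENIED";

// Simplified subset of destructive-command patterns.
const DESTRUCTIVE_PATTERNS: RegExp[] = [
  /\brm\s+-[a-z]*r[a-z]*f\b/i, // rm -rf and similar flag combinations
  /\bmkfs(\.\w+)?\b/i,         // filesystem formatting
  /\bdd\s+if=/i,               // raw disk writes
];

// Paths that commonly hold credentials.
const SENSITIVE_PATHS: RegExp[] = [/\.env$/, /id_rsa$/, /\.aws\/credentials$/];

function securityGate(action: { kind: "exec" | "file-read"; target: string }): GateVerdict {
  const rules = action.kind === "exec" ? DESTRUCTIVE_PATTERNS : SENSITIVE_PATHS;
  return rules.some((re) => re.test(action.target)) ? "DENIED" : "ALLOWED";
}

console.log(securityGate({ kind: "exec", target: "rm -rf /var/data" })); // DENIED
console.log(securityGate({ kind: "file-read", target: "./README.md" })); // ALLOWED
```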
Across these layers, openclaw-shield detects AWS keys, GitHub tokens, Stripe keys, JWTs, private keys, and more. It catches PII including emails, SSNs, credit card numbers, and phone numbers. It also blocks destructive commands like rm, format, mkfs, and dd, plus any custom patterns you define.
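
The detection itself is pattern-based. As a rough illustration, here is a simplified redaction pass of the kind the Output Scanner applies before tool output is persisted; these patterns are a small illustrative subset, not the plugin's actual rule set.

```typescript
// Hypothetical sketch of regex-based redaction (not openclaw-shield's actual rules).
const REDACTION_RULES: Array<{ label: string; pattern: RegExp }> = [
  { label: "AWS_ACCESS_KEY", pattern: /\bAKIA[0-9A-Z]{16}\b/g },
  { label: "GITHUB_TOKEN",   pattern: /\bghp_[A-Za-z0-9]{36}\b/g },
  { label: "SSN",            pattern: /\b\d{3}-\d{2}-\d{4}\b/g },
  { label: "EMAIL",          pattern: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g },
];

// Replace each match with a labeled placeholder before the text reaches the transcript.
function redact(output: string): string {
  return REDACTION_RULES.reduce(
    (text, rule) => text.replace(rule.pattern, `[REDACTED:${rule.label}]`),
    output,
  );
}

console.log(redact("key=AKIAABCDEFGHIJKLMNOP, contact admin@example.com"));
// -> "key=[REDACTED:AWS_ACCESS_KEY], contact [REDACTED:EMAIL]"
```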
 

Installation Is One Line

openclaw plugins install @knostic/openclaw-shield
 
No build step, no external dependencies, no database. Defaults are secure out of the box.
 

Critical Known Limitations

OpenClaw is updated constantly, and without community contributions openclaw-shield will fall out of date within days. We've already had to update it several times. PRs are welcome and encouraged.
 

More Open Source Tools to Secure OpenClaw

 
 

Knostic: Discovery and Control for the Agent Layer

If you're looking for visibility and control over your coding agents, MCP servers, and IDE extensions, from Cursor and Claude Code to Copilot, check out what we're building at https://www.getkirin.com/