A decade of platform safety teaches you one pattern: powerful systems with broad permissions and insufficient controls always end the same way. The only question is whether you build the enforcement layer before or after the first major incident.
When I started using AI agents with MCP servers and third-party tools, I recognized that pattern immediately: broad access, no runtime controls, no audit trail, and an ecosystem growing faster than the governance around it.
AgentWard is the enforcement layer. It scans what your agent can reach, generates least-privilege policies, and enforces them in code — outside the LLM context window, where prompt injection can't reach. Open-source, because this problem is too important to gate behind a paywall.
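To make the architecture concrete, here is a minimal sketch of what deny-by-default enforcement outside the model can look like. This is an illustration of the general technique, not AgentWard's actual API; the `Policy` class, `enforce` function, and the tool and path names are all invented for the example. The key property is that the check runs in ordinary code before the tool executes, so an injected prompt has no channel to override it.

```python
# Illustrative sketch of least-privilege tool-call enforcement.
# Not AgentWard's real API: Policy, enforce, and the names below
# are hypothetical, chosen only to show the pattern.

from dataclasses import dataclass, field


@dataclass
class Policy:
    # Map of tool name -> set of allowed targets (paths, hosts, etc.).
    # Anything not listed is denied by default.
    allowed: dict[str, set[str]] = field(default_factory=dict)


class PolicyViolation(Exception):
    pass


def enforce(policy: Policy, tool: str, target: str) -> None:
    """Check a tool call against the policy before it runs.

    This runs in code, outside the LLM context window, so a
    prompt-injected instruction cannot talk its way past it.
    """
    if target not in policy.allowed.get(tool, set()):
        raise PolicyViolation(f"{tool} -> {target} is not permitted")


# Least-privilege policy: the agent may read exactly one file.
policy = Policy(allowed={"read_file": {"/srv/app/config.yaml"}})

enforce(policy, "read_file", "/srv/app/config.yaml")  # permitted

try:
    enforce(policy, "read_file", "/etc/passwd")  # denied by default
except PolicyViolation as e:
    print("blocked:", e)
```

The point of the sketch is the placement of the check, not its sophistication: because enforcement sits between the agent and the tool rather than inside the prompt, the allow-list holds even when the model is manipulated.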