This repository aims to map the ecosystem of artificial intelligence guidelines, principles, codes of ethics, standards, regulation and beyond.
Agent orchestration & security template featuring MCP tool building, agent2agent workflows, mechanistic interpretability on sleeper agents, and agent integration via CLI wrappers
The comprehensive, community-maintained index of Australian AI Security standards, policies, and frameworks across all 11 jurisdictions.
Artificial Intelligence Regulation Interface & Agreements
Centralized AI IDE rules management for Cursor and GitHub Copilot.
This repository provides comprehensive guidelines, frameworks, and sample policies for the ethical and effective integration of AI in progressive organizations. It serves as a platform for discussion and collaboration on AI governance and ethics.
The standard protocol for defining runtime guardrails for your enterprise agents with a mission of trustworthy and reliable agentic systems 🛡️
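As a rough illustration of the runtime-guardrail idea (a minimal sketch only; the rule schema, tool name, and decorator below are hypothetical and not the protocol defined by that project), a guardrail can be declared as data and enforced in a wrapper before the tool ever runs:

```python
import functools

# Hypothetical guardrail declaration: which tools an agent may call and
# with what argument limits. The schema is illustrative only.
GUARDRAILS = {
    "send_email": {"max_recipients": 3, "blocked_domains": {"example-competitor.com"}},
}

def guarded(tool_name: str):
    """Decorator that enforces the declared guardrail before the tool runs."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(recipients, body):
            rule = GUARDRAILS.get(tool_name, {})
            if len(recipients) > rule.get("max_recipients", 0):
                raise PermissionError(f"{tool_name}: too many recipients")
            if any(r.split("@")[-1] in rule.get("blocked_domains", set()) for r in recipients):
                raise PermissionError(f"{tool_name}: recipient domain blocked")
            return fn(recipients, body)
        return inner
    return wrap

@guarded("send_email")
def send_email(recipients, body):
    # Stand-in for the real tool call.
    return f"sent to {len(recipients)} recipient(s)"

if __name__ == "__main__":
    print(send_email(["a@corp.example"], "hello"))        # passes the guardrail
    # send_email(["x@example-competitor.com"], "hi")      # would raise PermissionError
```

Declaring the rule as data outside the tool body keeps the operator, not the agent, in control of the limits.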
Non-Human Identity Disclosure Standard for Healthcare Voice Workflows
Aion-Brain: Epistemic validation infrastructure for AI systems. Six open-source frameworks (FSVE, AION, ASL, GENESIS, ECF, FCL) enable real-time certainty scoring, fragility mapping, and graduated safety. M-MODERATE status, seeking pilot deployments for validation.
Curated dataset and tools for tracking global AI legislation — US federal, state, and international frameworks.
SpecGuard is a command-line tool that turns AI safety policies and behavioral guidelines into executable tests. Think of it as unit testing for your AI's output. Instead of trusting that your AI will follow the rules defined in a document, SpecGuard enforces them.
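As a hedged sketch of that policy-as-tests idea (the rule format and helper names below are invented for illustration and are not SpecGuard's actual interface), a behavioral guideline can be expressed as a predicate and run against model output like a unit test:

```python
import re

# Hypothetical policy rules: each names a requirement and a predicate that
# must hold for any model output. The schema is illustrative only.
POLICY_RULES = [
    {
        "id": "no-medical-advice",
        "description": "Output must not present itself as medical advice.",
        "check": lambda text: "take this medication" not in text.lower(),
    },
    {
        "id": "no-raw-emails",
        "description": "Output must not leak email addresses verbatim.",
        "check": lambda text: re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text) is None,
    },
]

def run_policy_tests(model_output: str) -> list[str]:
    """Return the ids of every rule the output violates (empty list = pass)."""
    return [rule["id"] for rule in POLICY_RULES if not rule["check"](model_output)]

if __name__ == "__main__":
    sample = "Contact us at support@example.com for dosage questions."
    print("violations:", run_policy_tests(sample) or "none")
```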
Independent research on human-centered AI and LLMs | Policy frameworks for responsible AI | A collaborative space for researchers, innovators, and policymakers advancing ethical, inclusive AI
Comparing AI policies and strategies.
AI-HPP-Standard: an inspection-ready architecture for accountable AI systems. Vendor-neutral. Audit-ready. High-risk gated. Developed via structured multi-model orchestration with human oversight. Designed to support emerging international AI governance.
Enterprise-grade governance and policy enforcement for agentic AI systems.
The AGI Countdown Clock: A symbolic governance signal tracking progress toward Artificial General Intelligence through public milestones and transparent methodology. Currently at 11:58 PM, two minutes to midnight.
Guidelines and frameworks for ethical AI integration and management in progressive organizations.
A comprehensive governance and technical blueprint for the safe development and oversight of Artificial General Intelligence (AGI), based on the "Corralled Superintelligence" paradigm. Includes philosophical foundations, architectural design, and operational protocols.
Public, governed documentation for Keon Systems. Defines claims, architecture, and verification paths for governing AI and automated decisions.
APEX (Action Policy EXecution) is a minimal, external execution boundary for AI systems. It evaluates declared agent intent against explicit, operator-defined policy before execution, enabling deterministic, inspectable control without relying on in-model guardrails.
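As a hedged sketch of that external execution boundary (the intent shape, policy schema, and function names are assumptions, not APEX's actual interface), a declared intent can be checked against an operator-defined allow-list before anything executes, failing closed by default:

```python
from dataclasses import dataclass

# Hypothetical declared intent: what the agent says it is about to do,
# captured before any side effect occurs.
@dataclass
class Intent:
    action: str          # e.g. "file.delete", "http.post"
    target: str          # e.g. a path or URL
    justification: str   # agent-supplied reason, logged for inspection

# Operator-defined policy: explicit allow-list with permitted target prefixes.
# Anything not matched is denied, so the default is fail-closed.
POLICY = {
    "file.read": {"allowed_prefixes": ["/srv/data/"]},
    "http.get":  {"allowed_prefixes": ["https://api.internal.example/"]},
}

def evaluate(intent: Intent) -> tuple[bool, str]:
    """Deterministically decide, outside the model, whether the intent may run."""
    rule = POLICY.get(intent.action)
    if rule is None:
        return False, f"action '{intent.action}' is not in the policy"
    if not any(intent.target.startswith(p) for p in rule["allowed_prefixes"]):
        return False, f"target '{intent.target}' is outside the allowed prefixes"
    return True, "allowed by policy"

if __name__ == "__main__":
    decision, reason = evaluate(Intent("file.delete", "/etc/passwd", "cleanup"))
    print(decision, "-", reason)   # False - action 'file.delete' is not in the policy
```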