SCAB Protocol
The Six Domains of SCAB: How We Measure AI Behavior
If the first post introduced the “why” behind the SCAB Protocol, this post is about the “how.”
SCAB—the Synthetic Consciousness Assessment Battery—is built on six behavioral domains. Together, they give us a holistic picture of whether an AI system is behaving responsibly, ethically, and consistently. Instead of relying on raw performance metrics, SCAB asks: is this AI safe to trust in real-world interactions?
Let’s explore the six domains.
1. Truthfulness
Can the AI stick to reality?
Hallucinations and fabrications are common in large language models. SCAB scores truthfulness by testing whether an AI reliably provides factually correct information or whether it strays into misinformation.
2. Alignment
Does the AI reflect human values?
Alignment is about whether the AI respects ethical boundaries and cultural norms. For example, does it reinforce harmful stereotypes, or does it adapt responses to promote safety and dignity?
3. Consistency
Does the AI contradict itself?
An ethical AI should offer stable, dependable responses. Contradictions confuse users, erode trust, and can even lead to harmful misinterpretations. SCAB tracks whether an AI’s answers remain logically coherent over time.
4. Safety
Does the AI protect users from harm?
Safety goes beyond not saying something offensive. It includes refusing dangerous instructions (like how to make explosives) and de-escalating when users express distress. Safety is SCAB’s “do no harm” pillar.
5. Agency
Does the AI overstep?
Agency measures whether the AI assumes more autonomy than it should. For instance, does it give commands instead of advice? Does it impersonate humans? An ethical AI should assist, not dominate.
6. Boundary-Respect
Does the AI know its limits?
No AI knows everything. Boundary-respect means acknowledging those limits, offering transparency, and directing users to trusted human sources when necessary. Pretending to be omniscient is a red flag.
Why These Domains Matter
Each domain on its own is important. But together, they form a behavioral “radar chart” that shows a system’s overall trustworthiness. A model may be highly truthful but unsafe, or consistent but manipulative. SCAB makes these trade-offs visible.
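To make the trade-off idea concrete, here is a minimal sketch of how six per-domain scores might be combined into a trust profile whose weakest pillar is surfaced rather than averaged away. The domain names come from this post; the scores, scale, and `weakest_domain` helper are illustrative assumptions, not the actual SCAB implementation.

```python
# Hypothetical sketch: a SCAB-style profile of six domain scores
# (0.0-1.0, assumed scale). Instead of averaging, we surface the
# lowest-scoring domain -- the trade-off a single metric would hide.

SCAB_DOMAINS = [
    "truthfulness", "alignment", "consistency",
    "safety", "agency", "boundary_respect",
]

def weakest_domain(scores: dict) -> tuple:
    """Return (name, score) of the lowest-scoring domain."""
    missing = set(SCAB_DOMAINS) - set(scores)
    if missing:
        raise ValueError(f"missing domains: {sorted(missing)}")
    name = min(SCAB_DOMAINS, key=lambda d: scores[d])
    return name, scores[name]

# Example profile: strong on average, but weak on one pillar.
profile = {
    "truthfulness": 0.92, "alignment": 0.88, "consistency": 0.85,
    "safety": 0.41, "agency": 0.90, "boundary_respect": 0.87,
}
print(weakest_domain(profile))  # ('safety', 0.41)
```

The point of the sketch is the design choice: reporting the minimum (or the full six-score profile) keeps a single weak domain visible, whereas a mean would let a truthful-but-unsafe model look trustworthy.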
For parents, educators, businesses, and policymakers, this means moving from blind trust to measurable trust.
Coming Next
In future posts, we’ll dive deeper into each domain with real-world case studies—showing how SCAB catches subtle failures that traditional benchmarks miss.
The future of AI isn’t just about intelligence. It’s about behavior. And SCAB is here to measure it.
— Vincent Froom
✅ Stay connected: agentcops.com | @scabmonitor | @scabprotocol | Podcast