External Signals
Independent validations, expert interactions, and public resonance points from the development of EVIDE and the </AI> Protocol.
This page collects the external signals and real-world interactions that emerged during that development.
These are not testimonials. They are points where the model was challenged, interpreted, and validated by external actors.
Each signal is documented at the level of observable interaction, not claimed outcome.
All external references are included with consent or derived from public interactions. Interpretations are limited to observable alignment points.
Contents
Architecture
Boundary Validations
L2 → L3 Boundary
Layer Boundary Validation with Stone Shi (TGTRACING / CLARIXO)
Stone Shi, founder of TGTRACING and CLARIXO, a live AI SaaS with real agent behavior, engaged in a detailed technical exchange around the L2 → L3 boundary in the EVIDE framework. The validation was conducted on live evidence objects, not hypothetical scenarios.
- Confirmed that internal traceability, even if complete and structured, does not cross into independent evidentiary status without a trust-separated anchoring layer
- Identified the precise closure trigger: transition from behavioral completion to responsibility closure under identified authority
- Validated the distinction between Layer 2 (closure-ready state) and Layer 3 (independently verifiable evidentiary object)
- Produced a field-level mapping from live TGTRACING evidence records into the EVIDE payload, with simulation of the anchoring step and the resulting evidentiary delta
- Validated the two-level model internal to L3: Level 1 determines attribution validity, Level 2 determines the evidentiary strength of the object once externalized
- Identified and tested three operational states: rejected at Level 1, accepted with full evidentiary strength, accepted with degraded evidentiary strength
- Confirmed the cooperation model: TGTRACING establishes runtime truth, EVIDE anchors responsibility at the trust boundary
Key alignment point from the exchange with Stone Shi: "We can do the data layer, but the trust layer is not something we can fully create alone from within the same originating system boundary."
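The two-level model and the three operational states tested in the exchange can be sketched as a small state function. This is an illustrative sketch only: the function name, parameters, and state labels below are assumptions for illustration, not the actual EVIDE schema.

```python
# Illustrative sketch of the two-level model internal to L3.
# Level 1 gates attribution validity; Level 2 grades the evidentiary
# strength of the object once externalized. All names here are
# assumptions, not EVIDE schema identifiers.

def classify_l3_state(attribution_valid: bool, visibility_complete: bool) -> str:
    if not attribution_valid:
        return "rejected_at_level_1"        # object never gains evidentiary status
    if visibility_complete:
        return "accepted_full_strength"     # independently verifiable deposit
    return "accepted_degraded_strength"     # anchored, but with declared gaps

# The three operational states identified in the TGTRACING exchange:
print(classify_l3_state(False, True))   # rejected at Level 1
print(classify_l3_state(True, True))    # accepted with full evidentiary strength
print(classify_l3_state(True, False))   # accepted with degraded evidentiary strength
```

The point of the sketch is that Level 1 is a hard gate while Level 2 is a grade: failing Level 1 produces no evidentiary object at all, whereas incomplete visibility still produces one, just weaker.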
External Validation · v2.0
"Strong architectural move. The important part is not adding more governance machinery, but making the limits of observation themselves part of the evidence. 'Unverifiable' is not failure — it is disciplined honesty when a gate cannot safely claim full visibility. That prevents one of the most dangerous errors in distributed systems: treating formal closure as equivalent to stable closure. v1.9 made the boundary explicit. v2.0 makes the evidentiary quality of that boundary explicit. That is where evidentiary integrity becomes real."
Technical Reference
Schema Reference
Public Edition
EVIDE API Documentation v2.0
Operational intake architecture for externally anchored evidentiary deposits.
- canonicalization rules and SHA-256 evidentiary hashing
- classification_context v1.8 and threshold attribution structure
- closure-state boundary semantics and boundary_readiness quality layer
- identity-bound intake model
- handoff v2.0: structured boundary_readiness with gate identity, visibility surface, and unresolved signals
- schema evolution history (v1.0 → v2.0)
- FEDIS compatibility and interoperability boundary requirements
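The first bullet can be illustrated with a minimal canonicalize-then-hash sketch. The actual canonicalization rules are defined in the API documentation; this example only assumes deterministic JSON serialization (sorted keys, no insignificant whitespace) before SHA-256 hashing, and the payload fields are invented for illustration.

```python
import hashlib
import json

def evidentiary_hash(payload: dict) -> str:
    # Deterministic serialization: sorted keys, compact separators.
    # A stand-in for the canonicalization rules in the API documentation.
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Key order must not change the hash, or two honest serializations of
# the same evidence object would anchor as different objects:
a = evidentiary_hash({"decision_id": "d-001", "closure_state": "ready"})
b = evidentiary_hash({"closure_state": "ready", "decision_id": "d-001"})
assert a == b
```

Without a canonical form, byte-level hashing is fragile: semantically identical deposits could fail verification purely because of serialization order.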
Technical Note
v1.0 — May 2026
The Hidden Governance Risk: When Evidentiary Integrity Is Not Enough
A technical note on continuity-substrate instability and synthetic coherence in distributed AI governance.
- Introduces synthetic coherence as a distinct governance failure category
- Defines the inversion problem: replayability reinforcing the appearance of integrity rather than exposing degradation
- Frames the "unverifiable" state as a pro-compliance architectural response under EU AI Act Art. 14
- Maps upstream admissibility formation and downstream evidentiary anchoring as complementary continuity layers
Expert Interaction
External Expert Signals
Architecture Collaboration
RANKIGI × EVIDE — Interface Mapping (v0.3)
Direct technical collaboration with Wesley Snow, Founder of RANKIGI, on the interoperability boundary between an execution-layer system (KYA / RANKIGI) and an evidentiary responsibility layer (EVIDE). The collaboration produced a formal interface mapping document defining the handshake between execution proof, responsibility closure, and evidentiary portability.
External Domain Contribution
HR Governance × EVIDE — Schema Co-development (v1.2 → v1.8)
Saly Man, AI Governance Architect specializing in EU AI Act compliance for HR and Recruitment AI systems, contributed as an external domain expert to the development of EVIDE across seven schema iterations. The collaboration emerged from a structured technical exchange on making human oversight demonstrable in high-impact AI decisions — specifically in HR screening, candidate evaluation, and override scenarios.
- Introduced the distinction between intervention traceability and decision accountability, which became foundational to the EVIDE architecture
- Identified the anchoring threshold as an operational governance decision, not a technical constraint
- Introduced the taxonomy drift / inter-reviewer consistency distinction, leading to intervention.taxonomy_version and intervention.classification_status
- Identified authority fragmentation as a different evidentiary condition from authority absence, directly leading to threshold_authority in v1.8
- Introduced the concept of authority incoherence at the closure point — competing conditions that cannot all be satisfied simultaneously — identified as a candidate for explicit modeling in a future EVIDE iteration
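The fields this collaboration produced can be placed in context with a hypothetical payload fragment. The field names intervention.taxonomy_version, intervention.classification_status, and threshold_authority come from the notes above; the surrounding structure and all values are illustrative assumptions, not the published schema.

```python
# Hypothetical v1.8 closure fragment. Field names taxonomy_version,
# classification_status and threshold_authority appear in the
# collaboration notes; structure and values here are assumptions only.
closure = {
    "intervention": {
        "taxonomy_version": "hr-screening-2.1",   # guards against taxonomy drift
        "classification_status": "consistent",    # inter-reviewer consistency marker
    },
    "threshold_authority": {
        # distinguishes fragmented authority from absent authority:
        # multiple holders exist, but none alone can close responsibility
        "state": "fragmented",
        "holders": ["hiring_manager", "compliance_officer"],
    },
}
print(closure["threshold_authority"]["state"])
```

The design point is that fragmentation is recorded as its own evidentiary condition rather than collapsed into "no authority present", which is exactly the distinction credited above.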
Public Architecture Signal
Graham Brimage - Execution Boundary Semantics
Public post that articulated — independently and without prior coordination — the precise distinction between execution authorization and downstream evidentiary preservation. The post framed the governance question as "was this allowed to happen?" rather than "what happened?", and positioned the execution boundary as the point where a decision must stand without reconstruction.
- Identified the shift from upstream model quality to boundary-condition sufficiency as the operative governance question
- Articulated that inputs will remain probabilistic and incomplete — the critical variable is what condition reaches the execution boundary
- Separated execution authorization from evidentiary preservation as distinct but related responsibilities
- Formulated "the proof must be there before the decision moves forward" — independently convergent with EVIDE's reconstruction_independence requirement
"You don't need to trust the system. You need to evaluate whether the condition under which it acted is sufficient."
In Progress
Ongoing Validation Threads
This section tracks active validation threads currently in progress. These are not yet confirmed for public reference, but represent ongoing technical exchanges, boundary tests, and cross-model alignments with external contributors. Signals are included here only once a stable point of convergence is reached or explicit consent for public reference is obtained.
Architecture
EVIDE vs Execution Certification
Boundary Clarification
Why certifying a decision is not the same as certifying its execution
A growing class of systems focuses on proving that an AI pipeline ran correctly: execution logs, reproducible outputs, runtime audit trails. These are necessary. But they answer a different question from the one EVIDE addresses. Execution systems prove what happened. EVIDE proves who was responsible, and that this responsibility was formally closed and independently verifiable before any dispute arose.
"Execution evidence explains what happened. Responsibility closure explains who stands behind the outcome."
Most systems reconstruct responsibility after dispute. EVIDE + DAPI bind responsibility before dispute. This document explores the architectural boundary in detail.
Press & Media
Press & Media Citations
External sources that have cited or referenced EVIDE, the </AI> Protocol, or related concepts in editorial, journalistic, or research contexts.
Press Citation
PPC Land
Coverage of the collapse of the Brussels AI Act negotiations and the August 2, 2026 enforcement deadline remaining in force.
"The real risk is not the deadline shifting, but being unprepared when it doesn't." — Emanuel Celano, cited in PPC Land
Public Resonance
Public Signals & Adoption Interest
Launch of the </AI> Protocol
35,000+ views
Human in the Loop
Active engagement from AI governance, compliance, and legal professionals
The AI Act requires human supervision
Declaring oversight is not enough
EVIDE JSON 1.7 makes the structure of a decision visible and provable.
Insurance AI - Logged vs. Defensible
Engagement from insurance and compliance professionals on post-decision auditability
HR & Recruitment AI - When a rejected candidate asks why: logged vs. defensible
LEGAL, COMPLIANCE & PROFESSIONAL SERVICE AI - When a regulatory investigation is open: logged vs. defensible
Banking & Financial Services AI - When a rejected application is audited: logged vs. defensible
Architectural Roadmap
EVIDE v2.0 — Roadmap
In Architectural Definition
boundary_readiness Quality Layer — Gate Qualification Framework
v2.0 addresses a structural limitation of the v1.9 string model:
boundary_readiness: "verified" implicitly assumes complete gate visibility — a condition that does not hold in systems with partial telemetry or black-box upstream components.
"v1.9 made the boundary explicit. v2.0 makes the evidentiary quality of the boundary explicit."
The core change:
boundary_readiness is promoted from a string to a structured object with four canonical states — candidate, verified, verified_partial, unverifiable — each with declared gate identity, visibility surface, and unresolved signals. The architectural trigger was a signal from Dan Storbaek identifying the partial visibility problem.
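The promotion from string to structured object can be sketched as follows. The four canonical states are from the roadmap above; the field names gate_id, visibility_surface, and unresolved_signals are illustrative renderings of "gate identity, visibility surface, and unresolved signals", not confirmed v2.0 identifiers.

```python
from dataclasses import dataclass, field

# The four canonical states named in the v2.0 roadmap.
CANONICAL_STATES = {"candidate", "verified", "verified_partial", "unverifiable"}

@dataclass
class BoundaryReadiness:
    # v1.9 carried only the string state; v2.0 promotes it to an object
    # that declares who observed, what was visible, and what was not.
    state: str
    gate_id: str                                      # declared gate identity
    visibility_surface: list = field(default_factory=list)
    unresolved_signals: list = field(default_factory=list)

    def __post_init__(self):
        if self.state not in CANONICAL_STATES:
            raise ValueError(f"unknown boundary_readiness state: {self.state}")
        # "verified" claims complete gate visibility, so it cannot
        # simultaneously declare unresolved signals.
        if self.state == "verified" and self.unresolved_signals:
            raise ValueError("verified state cannot carry unresolved signals")

# Partial telemetry: the gate declares what it saw and what it could not.
br = BoundaryReadiness(
    state="verified_partial",
    gate_id="gate-upstream-07",
    visibility_surface=["runtime_log", "override_channel"],
    unresolved_signals=["blackbox_model_internal_state"],
)
print(br.state)
```

This makes the limits of observation part of the evidence itself: a gate with partial telemetry emits verified_partial with its unresolved signals listed, instead of overclaiming verified.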