How AI Auto-Remediation Actually Works -- and When to Trust It
When we say acessio.ai auto-remediates 87% of accessibility issues, the natural question is: how does it know when to fix automatically versus when to escalate to a human? The answer is confidence gating -- every potential fix is assigned a confidence score between 0.0 and 1.0.
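The gating logic itself is simple to sketch. The names below are illustrative, not acessio.ai's actual API: any fix whose score clears the configured threshold is applied automatically, and everything else goes to a human queue.

```python
from dataclasses import dataclass

@dataclass
class ProposedFix:
    issue: str
    confidence: float  # score in [0.0, 1.0]

def gate(fix: ProposedFix, threshold: float = 0.92) -> str:
    """Auto-apply above the threshold; escalate to a human otherwise."""
    return "auto_apply" if fix.confidence >= threshold else "escalate"

print(gate(ProposedFix("missing-alt", 0.96)))   # auto_apply
print(gate(ProposedFix("aria-live", 0.60)))     # escalate
```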
How confidence scores are calculated
Confidence scoring draws on three signals. First, violation confidence -- how certain is the model that this element violates a specific WCAG criterion? Second, fix confidence -- given the violation, how certain is the proposed fix? Third, context confidence -- is the element in a context where auto-remediation is safe?
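The article does not state how the three signals are combined, but one plausible sketch is a multiplicative combination, which is conservative: any single weak signal drags the overall score down below the gate.

```python
def combined_confidence(violation: float, fix: float, context: float) -> float:
    """Multiply the three signals (an assumption, not the documented formula).
    Each input is a probability-like score in [0.0, 1.0]."""
    return violation * fix * context

# Three strong signals still barely clear a 0.92 gate:
score = combined_confidence(0.98, 0.97, 0.99)
print(round(score, 3))  # 0.941
```

Under this scheme a model that is 98% sure of the violation but only 70% sure the fix is safe scores 0.98 × 0.70 × 0.99 ≈ 0.68 and escalates, which matches the intuition behind the three-signal design.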
Issue types by auto-fix safety tier
Tier 1 -- Always safe (threshold 0.85+): Missing lang attribute, missing alt on decorative images, incorrect role attributes, missing form labels with unambiguous label text available.
Tier 2 -- Safe with high confidence (threshold 0.92+): Alt text generation via Gemini Vision, colour contrast fixes via CSS variable updates, focus indicator injection on interactive elements.
Tier 3 -- Escalate to human (never auto-fixed): ARIA live region semantics, complex keyboard interaction patterns, skip navigation link placement in app shells.
Setting your threshold
Start at the default 0.92 threshold and monitor results for your first two weeks. Most teams settle between 0.88 and 0.95 after their first sprint cycle.
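A simple tuning heuristic during that monitoring window: if reviewers revert many auto-applied fixes, raise the threshold; if they approve nearly every escalation unchanged, lower it. The function and the 2% / 95% cut-offs below are hypothetical illustrations, not product guidance; the 0.88–0.95 bounds come from the typical range above.

```python
def suggest_threshold(current: float, revert_rate: float,
                      escalation_approval_rate: float) -> float:
    """Nudge the gate by 0.01 per review cycle, clamped to [0.88, 0.95]."""
    if revert_rate > 0.02:                 # too many bad auto-fixes
        return min(round(current + 0.01, 2), 0.95)
    if escalation_approval_rate > 0.95:    # humans rubber-stamping everything
        return max(round(current - 0.01, 2), 0.88)
    return current

print(suggest_threshold(0.92, revert_rate=0.05,
                        escalation_approval_rate=0.80))  # 0.93
```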