Breakdown

Despite the potential of machine learning models like GitHub Copilot to produce code, accessibility remains a challenge. In this article, Alastair Campbell explains that because AI relies on statistical patterns in existing code, and truly accessible code is rare, it cannot be expected to prioritize accessibility. In short, AI's reliance on averages doesn't help make things accessible.

Key points:

  • Automated accessibility testing tools may not always capture nuanced accessibility issues, such as incorrect alt text for images.

  • While machine learning holds potential for improving accessibility testing, practical implementations have yet to fully materialize.

  • AI-generated code may not prioritize accessibility because most code isn't accessible (see the sketch after this list).
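
To make that last point concrete, here is a minimal, hypothetical sketch (TypeScript using the standard DOM API; the element contents, image path, and labels are invented for illustration and do not come from the article) of the kind of "average" markup a model trained on typical code tends to reproduce, next to an accessible equivalent:

    // Hypothetical illustration only: typical vs. accessible markup.
    // Typical pattern: a clickable <div> and an image with unhelpful alt text.
    // Many automated checkers pass this, since an alt attribute is present.
    function buildTypicalCard(): HTMLElement {
      const card = document.createElement("div");

      const img = document.createElement("img");
      img.src = "/photos/team.jpg";
      img.alt = "image"; // present, so tools are satisfied, but it describes nothing

      const action = document.createElement("div");
      action.textContent = "Learn more";
      action.onclick = () => console.log("clicked"); // not focusable, no keyboard support

      card.append(img, action);
      return card;
    }

    // Accessible equivalent: native semantics and meaningful alt text.
    function buildAccessibleCard(): HTMLElement {
      const card = document.createElement("div");

      const img = document.createElement("img");
      img.src = "/photos/team.jpg";
      img.alt = "Four team members standing outside the office"; // describes the image

      const action = document.createElement("button"); // focusable, announced as a button
      action.textContent = "Learn more";
      action.addEventListener("click", () => console.log("clicked"));

      card.append(img, action);
      return card;
    }

The first version is the statistically common one, and it also illustrates the first bullet: an automated tool sees that alt text exists and moves on, even though "image" tells a screen-reader user nothing.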

Highlights:

Accessibility is by definition non-typical usage, therefore applying an average does not work.

The WebAIM analysis of a million homepages is pretty good evidence that most code is not accessible.