AI-powered solutions to accessibility: Avoiding discrimination
Part two: Avoiding discrimination through AI
In the previous article, I set out (broadly, with many unanswered questions!) how an AI-aided browser could not only support adherence to WCAG standards, but go further by building a living training dataset of the preferences of users with varied disabilities, which could be used to adapt and translate communications and modes to suit individual needs.
I’ve since learned that there are already early real-world examples of this application of AI. Be My Eyes, a well-loved app for people who are blind or have low vision, uses an AI-powered assistant to read out product labels or announce the arrival of an Uber through number-plate recognition, for example.
I also learned of an experimental hardware product, the Rabbit R1, a hybrid device that uses a Large Action Model (LAM) to act as a personal assistant, enabling Rabbit owners (parents?!) to speak simple commands to complete multi-step personal tasks such as filing tax returns, ordering flowers, finding fixes for common household problems or doing the weekly shop.
Other exciting ways AI is starting to support accessibility include:
- Summarising text content in an easily understandable way for people who find it difficult to engage with long or complex information
- Regenerating or creating content in alternative formats – text to speech and vice versa, audio to text, and extracting meaning from images or moving images
- Isolating speech from background sounds and converting unclear speech into clear speech (e.g. Google Parrotron)
- Sensory applications involving hardware for haptic feedback
While these ground-breaking research applications are impressive and reveal the real potential of AI when it is used responsibly and ethically rather than for vapid socials (anthropomorphised pet memes and fake Trump videos, I’m looking at you!), there are some fundamental stumbling blocks to implementing AI in mainstream societal systems.
Why is there a significant risk of discrimination in AI?
Disabilities, of every type and severity, are unique to each person.
Conversely, AI is rule-based: it works by calculating decisions from commonalities, patterns and averages. Unless correctly trained, it doesn’t understand what a ‘reasonable adjustment’ is, for example. In other words, it does the opposite of what is needed to make systems inclusive.
Additionally, most of our societal systems don’t currently prioritise or reflect the needs of people with disabilities or differences, so models built from these systems won’t either, unless manually and thoughtfully adapted. This is deeply ‘baked in’ across all types of public data, including written information, academic studies, reports, images, videos, audio and more.
In his report on AI and the rights of persons with disabilities, Gerard Quinn, the former UN Special Rapporteur, confirms this. While AI-enabled systems clearly offer new opportunities for inclusion, there are (avoidable) risks that developers of AI-based tools need to be absolutely aware of. These risks stem from the limited capacity of poorly considered AI systems to account for the full range of human diversity in their algorithms and rules.
"The data underlying artificial intelligence algorithms can reflect and incorporate ableist (and ageist) biases. Disability may be 'perceived' by the technology as deviant and therefore undesirable…"
You can read more about it in this article from the United Nations.
Worryingly, many businesses and public sector organisations are currently – and often urgently – planning how they can best use AI to improve their operations, sometimes without an understanding of this concept. The first stop on the road to AI efficiency is often to look at statistical averages and the ‘bell curve’ of typical users.
Let’s look at some reported ways that ‘averages’ can turn out to be discriminatory when AI has been incorrectly or insufficiently trained and then adopted into public-facing systems:
- Facial recognition that misinterprets faces because of physical differences or skin tone
- Unreliable biometric voice recognition, or responses incorrectly ‘heard’, because of a speech impediment (or even a regional accent)
- A screening system that draws a negative conclusion about a job candidate because of gaps in their employment history (without knowing why those gaps exist)
- AI-powered financial lending that inadvertently discriminates against an applicant with autism because of the different style or content of their responses
The above examples are all real-world problems that have occurred, and they demonstrate types of ‘bias’. Bias occurs when training data is not representative of the real-world population and rules are created that exclude people based on their response or input – David Walliams’ now infamous ‘Computer Says No’ sketch springs to mind.
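To make ‘representativeness’ a little more concrete, here is a minimal sketch in Python (with invented numbers purely for illustration, not real statistics) of the kind of pre-training check a team could run: compare each group’s share of the training data against an estimate of its share of the real-world population, and flag anything badly under-represented.

```python
# Minimal, illustrative check: is each group's share of the training data
# roughly in line with its share of the reference population?
# All figures below are invented for the example, not real statistics.

training_counts = {          # rows of training data per group
    "no_disability": 9200,
    "visual_impairment": 310,
    "hearing_impairment": 280,
    "speech_difference": 90,
    "cognitive_difference": 120,
}

population_share = {         # assumed share of the real-world population
    "no_disability": 0.78,
    "visual_impairment": 0.05,
    "hearing_impairment": 0.06,
    "speech_difference": 0.04,
    "cognitive_difference": 0.07,
}

total = sum(training_counts.values())

for group, count in training_counts.items():
    data_share = count / total
    expected = population_share[group]
    if data_share / expected < 0.8:  # under-represented by more than 20%
        print(f"WARNING: {group} is under-represented "
              f"({data_share:.1%} of data vs {expected:.1%} of population)")
```

A check like this is only a starting point: the reference figures, the groupings and the 20% tolerance are all assumptions that would need to be agreed with affected users and domain experts.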
Joking aside, even more complex and serious discriminatory risks exist when combinations of different ‘protected characteristics’ such as gender, race, age, ethnicity and disability are assessed together in a system, leaving some people even more vulnerable and marginalised, often with little or no way of understanding why a decision has been made, because the AI system may not have been designed to explain its process.
This is known as ‘intersectional discrimination’.
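To see why intersectional discrimination is so easy to miss, consider a hypothetical screening system whose approval rates look broadly acceptable for each characteristic taken on its own, yet collapse for one particular combination. The Python sketch below uses entirely fabricated counts to show the kind of check that surfaces this.

```python
# Entirely fabricated, illustrative counts from a hypothetical screening system:
# {(gender, disability_status): (approved, total_applicants)}
outcomes = {
    ("female", "disabled"):     (2, 6),
    ("female", "non-disabled"): (6, 6),
    ("male",   "disabled"):     (5, 6),
    ("male",   "non-disabled"): (4, 6),
}

overall_approved = sum(a for a, _ in outcomes.values())
overall_total = sum(t for _, t in outcomes.values())
overall_rate = overall_approved / overall_total  # roughly 71% here

# Looked at one characteristic at a time, the rates appear broadly acceptable...
for index, name in [(0, "gender"), (1, "disability")]:
    for group in {key[index] for key in outcomes}:
        a = sum(v[0] for k, v in outcomes.items() if k[index] == group)
        t = sum(v[1] for k, v in outcomes.items() if k[index] == group)
        print(f"{name}={group}: {a / t:.0%} approved")

# ...but checking every combination reveals a problem.
for key, (a, t) in outcomes.items():
    rate = a / t
    if rate < 0.8 * overall_rate:  # 'four-fifths' style rule of thumb
        print(f"Possible intersectional bias for {key}: "
              f"{rate:.0%} approved vs {overall_rate:.0%} overall")
```

The exact threshold (here a ‘four-fifths’ style rule of thumb) and the choice of which combinations to examine are assumptions for the sake of the example; in practice they would need careful design with affected users and legal guidance.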
There are many more well-publicised examples of this phenomenon causing undesired and unethical results across large organisations, including Amazon’s AI hiring tool, a US criminal justice assessment tool, several national and international financial lending institutions, and commercially available facial recognition software that was found to be racially biased.
How to avoid bias and discrimination
Avoiding bias is a complex and multifaceted topic that could fill an entire book, let alone a humble article. However, there are some high-level principles and questions that should be central to the development of any AI solution:
- Is the data really free of bias? Don't lock out people with disabilities: ensure datasets include a wide range of perspectives and demographics, respect 'protected characteristics' from the outset, and address user needs through early, in-depth research and data testing
- Use the correct learning model: supervised and unsupervised models can both accommodate inclusion, but they achieve it in different ways
- Consider how ongoing data is collected and whether that method is accessible; provide multiple ways for data to be input rather than relying on one that may exclude people
- Use detection tools and designed checks to identify and rectify bias during development, should it be present
- Maintain ongoing human monitoring and auditing of live system outputs to detect emerging bias and correct wrong decisions as data and systems evolve (a minimal sketch of such a check follows this list)
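As a rough illustration of those last two points, here is a minimal Python sketch (hypothetical function names and invented data, not a production design) of a periodic audit that compares each group's outcome rate in the current review period against an earlier, audited baseline and escalates large drops for human review.

```python
from collections import defaultdict

def outcome_rates(decisions):
    """Positive-outcome rate per group, from (group, outcome) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {group: positives[group] / totals[group] for group in totals}

def detect_emerging_bias(baseline, current, max_drop=0.15):
    """Flag any group whose positive-outcome rate has fallen by more than
    `max_drop` (absolute) since the audited baseline period."""
    base, now = outcome_rates(baseline), outcome_rates(current)
    return [
        f"{group}: {base[group]:.0%} at baseline -> {now[group]:.0%} now"
        for group in base
        if group in now and base[group] - now[group] > max_drop
    ]

# Invented data: (user group, positive outcome?) pairs from two review periods.
baseline_period = (
    [("screen-reader user", True)] * 8 + [("screen-reader user", False)] * 2
    + [("no assistive tech", True)] * 17 + [("no assistive tech", False)] * 3
)
current_period = (
    [("screen-reader user", True)] * 5 + [("screen-reader user", False)] * 5
    + [("no assistive tech", True)] * 18 + [("no assistive tech", False)] * 2
)

for alert in detect_emerging_bias(baseline_period, current_period):
    print("Escalate for human review:", alert)
```

In a real system, the grouping, thresholds and escalation route would be designed with affected users, and a check like this would feed human review rather than automatically changing decisions.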
The future of inclusive AI
This may sound somewhat polarised, but AI really does raise the stakes: new or replacement systems will tend to be either revolutionary or fairly catastrophic.
Done well, with good governance and policy, an AI-supported system can improve efficiency and genuinely celebrate the diversity of our human spectrum, levelling the playing field. It could allow our future technology to live up to the original spirit of the internet articulated by Tim Berners-Lee:
"Freedom of connection, with any application, to any party, is the fundamental social basis of the Internet, and, now, the society based on it."
However, done badly, without governance or appropriate levels of research and testing, AI-supported software has the power not only to produce very visible discrimination at a systemic level but also to further compound existing marginalisation and exclusion. The probable scale of most public AI systems means rework or refactoring will be lengthy, complex and therefore expensive.
In case you missed it, check out part one of this series, which focuses on the ultimate accessibility browser and reframing the problem.