Artificial intelligence offers significant potential to address some of the key challenges faced by people with disabilities, particularly by enabling scalable technology solutions and personalized user experiences. However, as with previous technological shifts, there is a risk that people with disabilities may be excluded due to barriers in design, implementation, or access.
Where AI Meets Accessibility: Considerations for Higher Education, a new resource developed by Teach Access in partnership with Every Learner Everywhere, explores the intersection of artificial intelligence and accessibility, with a particular focus on the needs of people with disabilities. The comprehensive resource — drawing from the expertise of fifteen contributors — covers assistive technologies, digital accessibility, frameworks for accessible design, legal considerations, quality considerations, and policy recommendations. It is also a practical toolkit for incorporating accessible AI in higher education, weaving in example activities, discussion questions, and reading lists.
One section of Where AI Meets Accessibility explores the potential risks, limitations, and hazards of AI for people with disabilities, including algorithmic bias and ableist assumptions. For example, one damaging assumption is that disability is a uniform experience, forgetting that each person’s experience with disability is unique. Even when individuals share a similar disability, diagnosis, or condition, their needs, preferences, and challenges may vary significantly. Ableist assumptions like this can shape the design, procurement, and implementation of learning technologies in ways that undermine the potential those technologies have to benefit people with disabilities.
Below are excerpts from the sections of Where AI Meets Accessibility that outline issues influencing the ethical implementation of AI in higher education.
Perpetuating biases and ableist assumptions
Despite AI’s potential to transform education, it often falls short of addressing the needs of people with disabilities. For example, some automated speech recognition systems struggle to accurately interpret speech patterns of people with speech impairments. This shortcoming stems largely from the lack of representation of disabled perspectives in AI development processes. Because AI systems rely on datasets curated by humans, any existing biases or omissions in the data are inevitably reflected in the technology. The underrepresentation of people with disabilities in these datasets can be attributed to two key factors.
First, people with disabilities are often not considered a “profitable” user group. Large technology companies, which hold the resources to develop AI models and applications, tend to prioritize innovations that promise high financial returns. Myths persist in the industry that people with disabilities do not form a significant market or that they lack purchasing power. As a result, fewer technologies are designed to meet their unique needs.
Second, people with disabilities are rarely prioritized in the design of AI technologies. While discussions about inclusion in AI often focus on race and gender, they frequently overlook disability. Like other marginalized groups, disabled people are underrepresented on design and development teams. This lack of representation means AI systems fail to reflect their experiences. Moreover, the discrimination disabled people face differs fundamentally from other forms of bias, making it essential to center their voices in inclusion efforts.
When people with disabilities are excluded from the creation of AI systems, the resulting datasets and design processes become less inclusive. This exclusion perpetuates harmful stereotypes and limits access to innovations that could enhance the educational experience for students with disabilities. AI systems often assume a “one-size-fits-all” model, which overlooks the diverse needs of students with disabilities, including the range of accommodations they may need to succeed.
For example, automated grading systems may evaluate student responses based on specific patterns, such as the structure or formatting of written answers. This approach can disadvantage students with cognitive disabilities, dyslexia, or processing difficulties, as they may need more time or have alternative ways of expressing their ideas. As a result, these systems could unfairly penalize students who deviate from the expected norms of response.
Ongoing research by Dr. Vaishnav Kameswaran at the University of Maryland [one of the co-authors of this excerpt] highlights how the use of AI in hiring, particularly automated video interview systems, can discriminate against people with disabilities. These platforms assess candidate suitability based on behavioral, prosodic, and lexical features, such as the amount of eye contact a candidate maintains. These features are then abstracted into qualities like engagement and enthusiasm, which contribute to a candidate’s suitability score. This approach is inherently ableist, as it prioritizes “normative” characteristics that may be discriminatory.
Moreover, these AI tools shift the power dynamic, often overlooking the specific needs of people with disabilities. Traditionally, interviews, including those for educational opportunities, are two-way exchanges, allowing candidates to assess whether institutions can provide the necessary accommodations. With AI systems prioritizing efficiency and objectivity, people with disabilities may be denied this opportunity, further exacerbating inequities in higher education and the labor market. Therefore, it is crucial to explore how AI systems can be made accountable to prevent further discrimination against people with disabilities.
How higher education can work to keep AI accessible
AI policy discussions must always include accessibility and ethical considerations. Faculty and administrators who specialize in these areas should share their insights with policymakers to help them understand the critical importance of inclusion in AI. Frameworks like the Web Content Accessibility Guidelines (WCAG) should be referenced when discussing policies for AI technologies, ensuring that all tools meet the necessary accessibility standards. Ethical considerations, such as those outlined in the UNESCO Recommendation on the Ethics of Artificial Intelligence, should be part of these discussions as well.
Higher education institutions should collaborate with AI product developers to enhance the accessibility of these tools, ensuring that they are inclusive and beneficial for all users. Additionally, faculty in technology-related disciplines should actively encourage student research and innovation aimed at improving accessibility. This includes both making existing AI tools more accessible and developing new tools to address accessibility barriers. Through these initiatives, institutions can contribute to creating more inclusive technologies while empowering students to prioritize accessibility in their future work.
Download Where AI Meets Accessibility

Editor’s note: The material in this article is excerpted and adapted from Where AI Meets Accessibility: Considerations for Higher Education, developed by Teach Access. Contributors to the sections excerpted above include Rua Mae Williams, Tessa Wolf, and Vaishnav Kameswaran.