
AXSChat Podcast
Podcast by Antonio Santos, Debra Ruh, Neil Milliken: Connecting Accessibility, Disability, and Technology
Welcome to a vibrant community where we explore accessibility, disability, assistive technology, diversity, and the future of work. Hosted by Antonio Santos, Debra Ruh, and Neil Milliken, our open online community is committed to crafting an inclusive world for everyone.
Accessibility for All: Our Mission
Believing firmly that accessibility is not just a feature but a right, we leverage the transformative power of social media to foster connections, promote in-depth discussions, and spread vital knowledge about groundbreaking work in access and inclusion.
Weekly Engagements: Interviews, Twitter Chats, and More
Join us for compelling weekly interviews with innovative minds who are making strides in assistive technology. Participate in Twitter chats with contributors dedicated to forging a more inclusive world, enabling greater societal participation for individuals with disabilities.
Diverse Topics: Encouraging Participation and Voice
Our conversations span an array of subjects linked to accessibility, from technology innovations to diverse work environments. Your voice matters! Engage with us by tweeting using the hashtag #axschat and be part of the movement that champions accessibility and inclusivity for all.
Be Part of the Future: Subscribe Today
We invite you to join us in this vital dialogue on accessibility, disability, assistive technology, and the future of diverse work environments. Subscribe today to stay updated on the latest insights and be part of a community that's shaping the future inclusively.
How We Build AI That Includes The Outliers
AI loves the average—and that’s exactly why too many people get left out. We sit down with David Banes, chair of the Equitable AI Alliance, to explore how we move disability from the margins of tech conversation to the center of how AI is built, funded, and deployed. From Mobile World Congress to major health and education forums, we share what it takes to get lived experience on main stages and why those introductions from sponsors and allies change the room.
We dig into the mechanics of inclusion: design for outliers to include everyone, co‑design with disabled people across the entire product lifecycle, and demand transparency in both datasets and model reasoning. David breaks down where bias shows up most—recruitment tools, university admissions, assessment systems—and how domain‑specific AI can misread faces, voices, and behavior as errors rather than human difference. We talk candidly about the privacy paradox: anonymized data protects people but can hide whether disability is represented at all. The path forward blends informed consent, easy‑read terms, and community governance with rigorous accessibility testing and evaluation against disability‑relevant metrics.
Culture shapes everything, so we confront how ideas like “independence” vary by region and why global perspectives must steer inclusive AI. You’ll hear about the Alliance’s open Resource Hub, growing webinar series, and practical ways organizations can partner to raise standards across industry. If you care about accessible technology, ethical AI, and building systems that actually work for real people, this conversation gives you a roadmap and a reason to act.
Subscribe for future episodes, share this one with a colleague shaping AI policy or product, and leave a review to help more listeners find these conversations. Your introduction might be the bridge that puts disability on the next big stage.
Follow axschat on social media.
Bluesky:
Antonio https://bsky.app/profile/akwyz.com
Debra https://bsky.app/profile/debraruh.bsky.social
Neil https://bsky.app/profile/neilmilliken.bsky.social
axschat https://bsky.app/profile/axschat.bsky.social
LinkedIn
https://www.linkedin.com/in/antoniovieirasantos/
https://www.linkedin.com/company/axschat/
Vimeo
https://vimeo.com/akwyz
https://twitter.com/axschat
https://twitter.com/AkwyZ
https://twitter.com/neilmilliken
https://twitter.com/debraruh
Hello and welcome to the AXSChat show. We're delighted to welcome back David Banes. Dave has been on the show multiple times. We're recording this show just after M-Enabling, so Dave and I are in different hotel rooms in DC, just to make sure that the rumors don't start. So, Dave, welcome back. Tell us a little bit about what you're up to right now, because you've been doing all sorts of things around the world.
David Banes: The main thing I'm working on is my role as chairperson of the Equitable AI Alliance, which is a Zero Project and Seneca Trust initiative exploring the impact of AI on the lives of people with disabilities. But alongside that, I'm working on various projects in Europe. We've been doing a project with Global Symbols on creating symbols for communication, using generative AI to develop wider vocabulary and linguistically and culturally appropriate symbols for different communities. And then the normal mishmash of things that one does as a consultant.
Neil Milliken: Excellent. Well, we're all friends of the Zero Project here, and we have been, let's say, a minor contributor to the Equitable AI Alliance. I think we'd like to do more in that space in the coming year, and that's something I've talked to Zero about. But can you explain a little more about what the Alliance is trying to do in terms of getting disability topics onto the agenda in some of these discussions that are happening about AI?
David Banes: Yeah. The Alliance was established, Neil, really to explore both the opportunities that artificial intelligence offers, but also to look at how we mitigate and address some of the risks and challenges that are implicit and explicit in AI for people with disabilities. The Alliance is quite a wide, diverse set of stakeholders. It includes some of the technology companies, such as Atos, but also Microsoft and Google. It includes disabled persons' organizations, the World Blind Union, organizations of the Deaf, and G3ICT, but also researchers, consultants, and academics working in the field. What we really wanted to do was, one, as I say, to amplify the opportunities, the things that are actually happening with AI now, and make people aware of the many, many benefits that AI is bringing, but also to break out of the disability and diversity circuit of events, conferences, and stakeholders, and really try to raise the issue within other communities. That includes technology and AI conferences, but also sectoral events, such as education, employment, health, and independent living conferences, and to place people with lived experience and their allies into those events. We've diversified that a little more recently. We're now running our own webinars, which are attracting quite a wide audience, but we're also looking to place speakers and guests on podcasts and video casts, both within the disability community, to build capacity within the sector, and, again, outside of our echo chamber, where we can have greater influence.
Neil Milliken: Yeah. Whether it's AI or other topics, that mainstreaming, taking disability lived experience and accessibility practitioner knowledge to some of these mainstream conferences, has always been something of a challenge, and it's a topic we've all discussed here as something that's been needed for a long time. I think I've seen a couple of instances where we've managed to get people onto the agenda of some of the mainstream conferences. What are some of the target conferences where you would really like to see the disability community have a voice?
David Banes: Some of the big ones we've been successful at have been things like Mobile World Congress, and we had speakers at SuperAI in Singapore. But it is quite challenging to get into those major events; disability and inclusion is probably not high on the agenda. And I have to say that's equally true elsewhere. We've also spoken at some diversity, equity, and inclusion events, and even at those, the disability perspective is often quite low on the agenda among speakers; other diversity issues have a much higher profile. I don't think that's changed for a number of years, so again, we're really trying to push that forward. We're going to be at Online Educa Berlin, which is a mainstream education event, and we've just recently spoken at GCPR, a mainstream health event full of doctors and broader health professionals, not specifically disability professionals.
Antonio Santos: We know that some of the big tech conferences are sponsored by organizations that we know care about accessibility and disability. I'm not going to name the vendors, but that can somehow be a bridge to get some attention, if they are sponsoring and getting time on stage.
David Banes: Using those... sorry, Antonio, I'm not hearing you clearly. The volume's a little low for me. I probably should have put my hearing aids in. Okay.
Neil Milliken: Antonio, are you able to repeat that? We can't hear you at all. We've lost you. Are you muted?
Debra Ruh: Yeah, yeah. There you go. There you go.
Antonio Santos: We know that some of those tech conferences are sponsored by known vendors that care about accessibility and disability. Could they be a means of access to those conferences, to bring the topic up?
David Banes: Yeah. One of the things that is always really important is introductions to the conferences by people who know them. That helps us establish our credibility and encourages organisers to think, yes, these people can provide high-quality speakers who will be engaging for our audience. So one of the things we ask people to do, if they have a good contact at a major conference, wherever it is in the world, because this is a global perspective, is to make an introduction for us, and we can follow that up. That ambassadorial role, if you like, from our wider community makes a big difference in terms of us being heard at an early stage in the planning of conferences.
Debra Ruh: David, I know you're not feeling very well, so thank you so much for joining us anyway; I just want to note that for the audience. I was looking up the Equitable AI Alliance, and I actually found another one that's focused on education. I know the California education system and San Diego University created something called the Equitable AI Alliance, EAIA. So I want to make sure the audience knows that one is different, even though we appreciate these universities stepping up, because AI is going to change everything, and one thing we want to do is make sure that we humans are involved in what it's going to do. Looking at the people involved in your Alliance, it's very Western-focused, although I know you have somebody from Kenya and somebody from Austria, for example. How will you make sure that more voices are heard from outside the Western countries, even though we still have lots to say in the Western countries? And how do you make sure you don't get confused with other initiatives, like that California one focused on education? I think it's a win for everybody no matter what, but I was curious how that's going to work, because there are real efforts being made by people from our community. I also want to say one more thing. You commented that, as usual when we're talking about diversity, even though that's a really weird word in the United States right now, with DEI and woke, disability often wasn't included in those conversations, or was barely included. But something I keep thinking about when I'm talking about AI right now is that it is truly about human inclusion. It's not about diversity and DEI.
It is truly about this: why would you build any technology that doesn't include humans at every single phase of our lives, in any way that we can facilitate as human beings? I think it's illogical. I just want to put that out there. But I was curious about a few of those things. Excuse me.
David Banes: Yeah, let me try to remember them all. So, absolutely, we're very aware of the San Diego alliance, and we wish them well; they do come up when I search. My advice to anybody searching for us is to look for the Zero Project Equitable AI Alliance, but also to join our Disability Inclusive AI group on LinkedIn. That is a very good route, and it's separate from just searching for the Alliance. We've got just over a thousand members of the LinkedIn group at the moment, and it's quite active, so that would certainly be one way of getting round some of that barrier. The diversity issue is an interesting one. I've always been quite struck by the work of Jutta Treviranus, for many, many years and for many, many reasons, and I do think the point she has made for some time stands: what AI generally does is look for the median, the average, and what it tends to do is exclude what she calls the outliers, the things that don't quite fit the pattern. The point she's made, which I would really press on developers, is: start from those outliers. If you build to include the outliers, you include everybody in your planning. That might affect your weighting, it might affect the data set, but also, in terms of the algorithms you're writing, making sure that you're not excluding idiosyncratic data, if you like, as you search and interpret it is quite important. I think the questions of discrimination and bias are complex. Let's take AI which is specifically designed for people with disabilities. To be honest, there isn't really a great deal of discrimination and bias in those tools, and we're seeing more and more assistive technologies with AI elements built in. This is why we think there's a lot to be learned from that experience.
Where we have a problem, to some extent, is with mainstream tools that should be inclusive. But even here, I think the scenario is a little more complex. If I ask any of the big tools, Copilot, Gemini, ChatGPT, about disability, most of what I get back is quite reasonable. It's not particularly biased or discriminatory; it draws from quite wide large language models, and it gives me good answers. Ask any of them for the best ways to support people with disabilities into employment and you'll get some pretty accurate information. Where we start to hit a problem is when we become domain-specific. The classic example that everybody talks about, quite rightly, is employment and recruitment, where a person's data is misinterpreted by the AI because it doesn't fit the normal pattern, whether that's facial disfigurement, not giving eye contact, speech patterns, and so on. But we can see those same patterns beginning to emerge and become problematic in other areas. AI in education would be a really good example, where people's interviews to enter higher education will encounter exactly the same issues as in employment, but there are also problems in assessing students' work. What's the place of AI in it? How do we assess students' work when it's been written with various forms of AT, including AT that incorporates AI? These are things we're beginning to struggle with, and they arise where the AI is being honed down, being made more specific, whether generative AI, agentic AI, conversational AI, and so on. These are all really quite big issues and challenges, but they do begin to focus our attention on where the biggest single problems are. One of the things we've been talking about a little here at M-Enabling, as well as the data and the interrogation of that data, is transparency.
Transparency of what is in your data set, but also transparency of what's often referred to as the chain of thought: how does the AI get from the data to an answer? What's the chain of thought that it employs? If we have transparency in that chain of thought and how it works, then we begin to see whether or not there are issues in it that are creating bias and discrimination. So, lots of things to think about.
Antonio Santos: David, don't you think that the quality of the data we are getting is somehow a mirror of the areas in which we have achieved progress and the areas where we haven't?
David Banes: I think data about disability is quite widely available. As I say, when we talk at those mainstream events about disability, that's fine. But we have this interesting challenge around privacy and knowing whether a data set includes people with disabilities. Here's the problem: if we treat all data as anonymized and private, how do we know which data represents people with disabilities? All they are is outliers. They're the oddities; they don't fit the mainstream pattern, they don't fit the average, and that's why they get excluded. Many, many people with disabilities that I talk to recognize the huge potential that AI brings, and as a result they bring a slightly different perspective to privacy. But as we move into more and more policy around AI regulation, it might be that we're actually creating a challenge in knowing whether data sets include data from people with disabilities at all, because we can't identify them; they're anonymized. For policymakers, this is an interesting problem, and I'm sure it's not the only outlier group for which this is true. How do we know whether the data included comes from men or women, or whatever, if it's all been anonymized?
Neil Milliken: Yeah, I think this is a dilemma that's been around for a long time. We had really quite fierce debates about browser sniffing for AT: the intent of the developer is to help improve an experience, but at the same time, many people don't wish their disability status to be known or disclosed. I was struck at the conference by the keynote on, I think, the first day, from Meta's accessibility lead, who had the glasses on and was showing the wonderful potential they have as an assistive technology. And I thought, this is great, and I'm never going to use it, because I don't trust having Mark Zuckerberg on my face; you don't have that transparency and that explanation of the chain of thought and all the rest of it. The other thing you don't have is this: I do sometimes give away my data, because I understand that I'm making an exchange of my personal data for something that is more valuable to me. But people are extremely cautious now because of what's happening in geopolitics, where you can give your data to one organization and it can transfer to another. A good example is 23andMe, the gene sequencing startup that everybody was really interested in: I'm going to find out where I came from, my ancestry, all of these things. Then they go bankrupt, and the data that people thought they'd given to a company that would keep it secure is now on the open market. So it's something that requires really significant thought: how we handle this, and how we can ensure that people who do give that data away aren't at a later date making themselves vulnerable because of some of the things going on in society right now.
David Banes: Yeah, I do agree. Fundamentally, data protection principles should be sufficient for AI data. One of the problems is that most of us give up data without actually reading what it could be used for, and the terms and conditions sometimes feel as if they're deliberately written to make sure that you don't read them.
Neil Milliken:Of course.
David Banes: One of the things you can do, if you're thinking about one of these tools, which I would advocate for and have told many people with disabilities to do, is actually to paste the terms and conditions into ChatGPT or similar and ask it to tell you the key points: create an easy-read version for me. If the developer, manufacturer, or vendor hasn't done so, we can use AI to give us that summary and understand what our data might be used for. Easy read is actually quite good for some of these terms. But I think the interesting problem we have, and it is an interesting problem, is that if we as people with disabilities don't allow our data to be used, the AI cannot get better. It can only learn from the information we let it use. That's one challenge. The second challenge, and you touched upon it there, Neil, is: what is the balance point for different groups in society? The balance between benefit and risk may be quite different for some people with disabilities than it is for others. The genetic example is an interesting one. If you have a history of genetic conditions in your family and you put your DNA through a gene sequencing system, yes, you may find out some of those things, as well as your family history and where you came from. But that data might, as you say, end up in the hands of insurance companies, who may then choose to deny you access to health care and health insurance because you're high risk. What does that mean for people with disabilities? What is the balance of risk and benefit there? If they're applying for health insurance and they have a disability, they probably already have to declare it when making those applications, so the risk for them may actually be slightly less than it is for other people.
But finding that balance, not for the entire population but for different sectors of the population, is even more challenging when we have all of these different dimensions and weightings to take into account.
Debra Ruh: David, I know we want to be thoughtful and not keep you on air too long, but I want to ask you a really, really hard question. First, I want to say that I appreciate the collaboration I see coming from the Equitable AI Alliance, because I believe the only way forward is for more of us to collaborate. I look forward to seeing even more diversity in what you're doing, and I know that's what you're working on. This is why I'm glad that you and other groups are there, because this is really complicated. So, David, here is my question: how do we get AI right when we as a society don't even know how to do this ourselves? Society doesn't know how to include the outliers. We know how to make all digital technology accessible, we know how to do that as a society, and yet we don't. I think I've told this story on air already, so I apologize to the audience if I have, but we were playing a silly little Google game. It was a little app that would take a picture of you and trash-talk you: oh, your hair is so white, insulting you a little, but all in good nature. So I took a picture of me and it trashed me, and others in the room did the same. And then I took a picture of Sarah, my daughter with Down syndrome, and the app came back and said it is not appropriate to make fun of people with disabilities. I actually got a big lecture from Google about not making fun of my daughter with Down syndrome, but my daughter had actually wanted to play the game too. I loved that Google was so thoughtful about it, but my daughter wanted to play anyway. It's such a slippery slope.
So, giving some love to Google, you see Google trying to get it right. How in the world do we get this right with AI when we don't know how to get it right in society? I just want to say thank y'all for coming together to explore these things.
David Banes: Well, one of the big problems with AI is that it can amplify existing problems because of the way in which it filters data. The humor example is a really interesting problem. We've seen a growth of comedians with disabilities, and there is an interesting, subtle difference between laughing with people and laughing at people. That gets even more complex when comedians are laughing at themselves. So how do we work our way through that? I think one of the biggest problems is defining what we mean by "right". There are some underlying principles which I think are important. The first is transparency. Regardless of what else we think is right, transparency is an underlying principle that can be applied through the whole chain of developing AI. The other principle I would apply in terms of people with disabilities is co-design. It's not just about taking their information, it's not just about asking them to do testing; it's about involving them through the entire process of development. We've known that in other areas of technology; AT companies have been co-designing for most of the life of their businesses.
Antonio Santos: Sure.
David Banes: So I think that co-design for AI, engaging the outliers in that design from the beginning, is also very, very helpful. But there's another problem as we go forward, which is global perspectives. The concept of what is right and wrong, and how we feel about disability as well as other issues, varies according to where in the world we are, according to culture and so on. What do we mean by independence? As you know, I spent a long time in the Middle East, and independence for people with disabilities was almost always framed as independence within the family, which is very different from what is often discussed in the West, which is how you get away from your family and live in a flat by yourself. So there are lots of concepts that are culturally charged, and when we look at data, and at that concept of co-design and what is right, we need to understand that there isn't one single answer to that question.
Neil Milliken: Absolutely. It's far, far more nuanced and far more varied. David, thank you so much for coming and doing this when you're feeling a little unwell. We really appreciate you sharing the work with us, and I think the feedback from the community is super important. Over the last few days it was really a live discussion at the M-Enabling conference.
David Banes: I think one of the things Debra touched upon was this need for cooperation and collaboration. Absolutely. If people come to our Equitable AI Alliance website, one of the things they'll find there is our resource hub. What we've been doing is gathering resources that other people have written and developed and pulling them into a single resource hub that you can use to help build capacity within the disability community, both to inform people with disabilities about the issues and to advocate for what needs to be done, what we believe can be done, to improve AI. That's freely and openly available to anybody who wants to use it. If people want to know more, and of course if you want us to organise a webinar for a business, a company, or an organization, we would be very happy to do so, to go into the issues in a bit more depth within a specific context. And join us online, both in the LinkedIn group and for some of the webinars we're going to run, which will be done in partnership with others. Our next one will be with Jutta Treviranus, and we will be talking about practical steps to implement inclusive and accessible AI.
Neil Milliken: That's fantastic. I may need to mark my diary for that one, because I'm a huge fan, and I'll email you. Excellent. Thank you so much, David; it's been a real pleasure. Thanks to Amazon for continuing to support us, and we look forward to continuing this work in partnership together.
David Banes: Likewise. Take care, everybody.