AXSChat Podcast

Confusing, Hopeful, and Necessary: Our AI Accessibility Journey

Antonio Santos, Debra Ruh, Neil Milliken

The most valuable accessibility conversations are happening in unexpected places. In this reflection on recent events, the AXSChat team shares how conferences focused on broader topics yielded surprising insights about disability inclusion, corporate transparency, and AI's potential.

At Fair Cultures Barcelona, discussions about corporate transparency revealed why accessibility progress remains challenging despite public commitments. Organizations often present themselves as champions of inclusion while simultaneously supporting trade organizations with conflicting priorities. This disconnect helps explain why even with C-suite buy-in, meaningful accessibility implementation remains difficult as different departments operate with competing objectives.

Meanwhile, the Siemens AI Summit showcased how AI can revolutionize personalization in prosthetics, creating solutions tailored to individual needs rather than one-size-fits-all approaches. The beauty of these conversations? They happened at an AI conference rather than an accessibility event, exposing new audiences to disability-centered thinking.

We explore the concept of "peer normativity" – ensuring disabled users have experiences equal in quality to non-disabled peers, even if those experiences differ. This shift from identical experiences to equivalent quality acknowledges the complexity of meeting diverse needs.

While celebrating AI's potential to transform accessibility, we also grapple with serious questions about data privacy and corporate responsibility. As we increasingly rely on AI assistants that require access to deeply personal information, can we trust the organizations behind these tools? And as these technologies increasingly provide pre-packaged answers, how do we ensure future generations develop the critical thinking skills to question what they're being presented?

Join us for this thought-provoking discussion about finding accessibility insights in unexpected places, the promise and perils of AI, and the importance of critical thinking in our increasingly algorithmic world.

Support the show

Follow axschat on social media.
Bluesky:
Antonio https://bsky.app/profile/akwyz.com

Debra https://bsky.app/profile/debraruh.bsky.social

Neil https://bsky.app/profile/neilmilliken.bsky.social

axschat https://bsky.app/profile/axschat.bsky.social


LinkedIn
https://www.linkedin.com/in/antoniovieirasantos/
https://www.linkedin.com/company/axschat/

Vimeo
https://vimeo.com/akwyz

https://twitter.com/axschat
https://twitter.com/AkwyZ
https://twitter.com/neilmilliken
https://twitter.com/debraruh

Neil Milliken:

Hello and welcome to AXSChat. I'm here today with Debra and Antonio. We're just recording a short one, a reflection on some of the events that we've been to over the last couple of weeks, because, although there have been accessibility-centric events, we've just had Global Accessibility Awareness Day, I think, from the conversation I've just had with Debra and Antonio before we came online, that some of the richest conversations and the most stimulating ideas have come from events where accessibility hasn't been the main focus. So I went to an event in Barcelona called Fair Cultures, which was looking at a range of topics, from DEI to regulations to lobbying, and it brought together a mixture of technology people, people thinking about policy, and people implementing different programs, all looking at how we can create a culture of fairness and equity, but also trying to understand the new world that we're operating in now. And it's really multiple worlds that we're operating in now, because you've got the world that has been turned upside down, if you like, in North America, with the politically driven changes in attitudes towards diversity, equity and inclusion, and you've got the counterbalance to that going on in Europe, where there were actually discussions about how you might accelerate and defend what you might consider European values. As a Brit who still considers himself European,

Neil Milliken:

that was of interest to me, and I was particularly interested in the whole idea of lobbying and transparency. One of the most interesting talks for me was by a professor at one of the big business schools who also runs something called Good Lobby, which looks at corporate transparency and maps what organizations do versus what they say. This was really interesting to me, because organizations spend a lot of time positioning themselves as sustainable, inclusive and all of these things. But they also belong to trade bodies and other organizations, and when you start looking at what those organizations that they put money into are asking for, quite often it is at odds with the companies' stated aims. So this particular organization is building a map of this and helping people to understand the complexity of what organizations are doing.

Neil Milliken:

A company might have a statement on diversity and inclusion from the HR team or whoever's responsible, and another one on CSR, but they may be at odds with each other. And I think this plays in very nicely to some of the talks that we've had recently: why is it so hard to make progress on accessibility and disability inclusion? Why is it that when we make statements of intent, when we have C-level buy-in, it's still hard to do this stuff? I think some of that is because organizations are diverse, with different needs and different priorities inside the same organization. So for me, this was a fascinating topic. There were lots of other great speakers as well, but I thought that those organizational dynamics and the ideas of transparency, mapping and reporting are topics that are going to come more and more to the forefront, certainly in the EU, over the coming years. And I know, Antonio, you were also at a conference and were looking at different intersections.

Antonio Santos:

Yes, at the Siemens AI Summit there was a track on people and culture, and there were a few examples focused on how we make sure that we create and adjust technology to the needs of people. It was particularly focused on prosthetics: how can prosthetics be adjusted to the needs of the person? When people go into that journey, they have different questions. A parent might ask: will I be able to carry my daughter in my arms? Other people might have different questions: will I be able to run? Will I be able to practice sports?

Antonio Santos:

And they were trying to find ways to answer those questions and use AI to adjust prosthetics to the needs of each individual, regardless of their age and based on their profile. And the fact that you're able to talk about this at an AI event that is not about accessibility actually connects a lot with people who often don't attend accessibility conferences, so I think that's one of the best elements. People end up connecting with the fact that users have these types of needs, and they become more aware that they're able to use AI not just to create a bot, but to do other things.

Debra Ruh:

I'm just a little confused, because I don't understand why we would ever be creating anything, say a prosthetic, without considering the person we were creating it for. I just find the conversations at a lot of these conferences very interesting right now. I also think this is certainly not just about big corporations, which neither one of you was saying. But I'm also hopeful. I have not attended any of those types of conferences lately, sort of deliberately, but I was recently speaking at the World Vaccine Congress. One thing that I'm hopeful about: I'm glad that big corporate brands are starting to figure out that you've got to really, really consider personalization. I know people already were, but don't y'all think it sounds weird that it's like, oh, they're only now figuring out that we really need to make sure prosthetics, for example, are being personalized? So I think a lot of this confusion that we're experiencing as a world is maybe adding to these conversations in a way that makes them more valuable for human beings. I've gone to so many accessibility conferences, and I don't really find them that valuable for me, just where I am in my life. But I think accessibility conferences are valuable to a lot of people. At the same time, I think the conversations have to shift and change.

Debra Ruh:

And I know, neil, you just had a conversation with Steve Tyler that Antonio and I couldn't join. Where you were he was talking aboutall, were both talking about sort of some accidental findings of accessibility and I know with GAD we celebrated. We have a lot that we can celebrate, but I still think a lot of the conversations that I hear happening right now with a lot of brands, they're very interesting and a little bit confusing, because I think people are trying to figure out how do we do our job despite all this stuff? How do we have better impacts? You know what are the right ways to go? I think it's a really good time for our community. I really do.

Neil Milliken:

So it was interesting, because I thought we were going to talk about accidental accessibility, and actually what it was was unexpected accessibility, because clearly the companies that provided the products to Steve were very intentional. He said that his new router turned up and it had Braille on all of the ethernet ports, all of this kind of stuff. That doesn't happen by accident. Neither does an app that works out that you're using assistive tech. That's intentional. But it's coming in areas that may have been unexpected before, maybe because we've spent too long expecting accessibility not to be good, so that when companies actually do it, it comes as a surprise rather than an expectation. But it was interesting to see that.

Neil Milliken:

And we also talked about the idea of peer normativity. Now, that's a slightly academic term. It was coined for accessibility by Kevin Kerry, who was chair of the RNIB, a friend of mine, and has been on AXSChat a few years back. The idea is that not everybody necessarily has exactly the same experience, but you should have an equivalent quality of experience to your peers who don't have a disability. So if everybody's experience sucks and your experience sucks too, that's equal treatment: peer-normative equivalence of experience. But if the average non-disabled user's experience is fantastic, then we should expect, as users of assistive tech, an equally fantastic experience, even if it is a different one.

Neil Milliken:

And so I think that sometimes we get hung up on the idea that you can make one app that's going to work perfectly for everyone, and sometimes that's not possible because you've got all of these different needs. That harks back to the conversation we had a couple of weeks back with Tristan Lavender, where we were talking about cognitive accessibility, and there the needs don't meet and sometimes you have to meet in the middle. So I think that sometimes we actually have to think about what the intent is, and whether it's compliance or delivering quality experiences for people. And I think what Antonio was getting at was not that we don't understand that different people have different needs, but that by using AI you can probably deliver that personalization more quickly and better. And you're bringing in people who were not in the disability field but are coming to it through the application of AI, waking them up to thinking about this, and that's what's exciting about engaging with people.

Debra Ruh:

And AI allows you to do things that you could not do before. It's just interesting watching it unfold because, you know, with accessibility we're still a comparatively small team around the world, but we need everybody going into conferences that aren't about accessibility and helping them understand. Like, once again, the vaccines: I don't usually go and speak at a vaccine conference, but I was really, really surprised at the number of healthcare providers that were stepping in, going, all right, how do we improve the patient experience? And so I'm actually getting a little hopeful about it. And I'm also hopeful, Neil, I'm just going to say this, in that I love all of the work that so many big brands have done. I'm so grateful for it, but I'm also excited about seeing medium and smaller brands get more innovative and join these conversations, whether or not they're being had with words like accessibility and disability inclusion, as long as we're actually being more successful at making human lives better using technology, because I do believe in that and I know that AI can improve it.

Debra Ruh:

I also saw that there are, of course, some AI tools coming up specifically for accessibility, and I know there's been some controversy in our community about stuff like this. But I do think that's interesting as well, because I don't really care who wins at the end of the day, as long as we make things accessible to the community and society. I'm not as interested in one accessibility brand winning over another. I'm more interested in the impact, as I know we all are. That's why we've done this show for 11 years. But it is hopeful. I think I'm being a little hopeful lately. What do you all think?

Neil Milliken:

I swing between being hopeful and despairing. I think there are really amazing capabilities in these tools, and some of the stuff that we're doing with AI is wonderful, but I'm also mindful of the way that some of the companies that own these AI tools have been mining data, and of how they have not necessarily behaved responsibly. So when we're thinking about the impact on society, some of those things make me feel somewhat uncomfortable. As we start using AI agents more, agentic AI being the real buzz right now, and we have multi-layered, complex AI assistants able to do things on your behalf, you're surrendering huge amounts of very deeply personal data to these organizations. That requires a level of trust and security beyond anything we've seen before, and I don't know that the organizations asking us to surrender this data have really proven that they can be trusted on some of this stuff.

Debra Ruh:

I don't trust.

Antonio Santos:

I think it's important for everyone who is leading organizations, or making decisions in organizations, to have a personal perspective on these tools. Why is that? Because in the beginning we all get excited with a new technology, and then we go through a curve. When we get hands-on experience, we start to realize the benefits and we also start to understand the weaknesses and the problems. We were so excited at the beginning, and then some of that excitement might fade away. Or you might say, oh, I was really excited with ChatGPT in the beginning, and today ChatGPT is actually not my go-to app to get the best results, I prefer others. So having this hands-on experience, and then some critical thinking about how all this works, is really, really important in order to have the big picture, to make better decisions and to use the tools better.

Debra Ruh:

I agree, and I'm just going to say: I am absolutely in love with ChatGPT. Training it yourself, all those rich conversations I've had with ChatGPT add value to the conversation I get, whereas I don't have that relationship with the five others I'm using. I've been talking with ChatGPT for years, and I also use several others, but I'm having wonderful, wonderful results with it. I sometimes don't know how you build that; that's just me not knowing, and I don't know everything. But I am very, very hopeful about what it can do to improve these conversations, so that it's less about compliance, which we've done so aggressively in the United States with accessibility, and you know what's happening, especially in Europe. So I'm actually hopeful. But back over to you, Neil, thank you.

Neil Milliken:

I'm not beating up on AI tools, but there are also lots of unresolved things that we need to look at.

Neil Milliken:

And this is where ethicists really need to come in and come to the fore. We need to engage early in this stuff, and this is something that organizations need to be taking seriously. That was another area the conference was covering; it was around corporate ethics and things like that. So I thought it was a really interesting blend. It started us off thinking more about how you might govern around this stuff, and also about the sort of impact that organizations are having on society.

Neil Milliken:

A final thought before we leave, because we're a short one today. We're of an older generation that grew up without this technology. We grew up having to question, having to read, having to check references. Now, when you type stuff into a search window, you get a summary, and what you often don't get is all of the references and everything else. So we need to be teaching younger generations critical thinking and the ability to question and dig behind the results, because otherwise you get reinforcement, and you could quite easily end up with people taking everything just as it's presented to them. That's something that I think we as societies need to be thinking about: how we equip people with the mental tools and the social capacity to use these new technology tools. So, with that, I'd like to thank our friends at Amazon and MyClearText for keeping us on air and keeping us captioned, and I look forward to continuing the conversation on social media.

People on this episode