
AXSChat Podcast
Podcast by Antonio Santos, Debra Ruh, Neil Milliken: Connecting Accessibility, Disability, and Technology
Welcome to a vibrant community where we explore accessibility, disability, assistive technology, diversity, and the future of work. Hosted by Antonio Santos, Debra Ruh, and Neil Milliken, our open online community is committed to crafting an inclusive world for everyone.
Accessibility for All: Our Mission
Believing firmly that accessibility is not just a feature but a right, we leverage the transformative power of social media to foster connections, promote in-depth discussions, and spread vital knowledge about groundbreaking work in access and inclusion.
Weekly Engagements: Interviews, Twitter Chats, and More
Join us for compelling weekly interviews with innovative minds who are making strides in assistive technology. Participate in Twitter chats with contributors dedicated to forging a more inclusive world, enabling greater societal participation for individuals with disabilities.
Diverse Topics: Encouraging Participation and Voice
Our conversations span an array of subjects linked to accessibility, from technology innovations to diverse work environments. Your voice matters! Engage with us by tweeting using the hashtag #axschat and be part of the movement that champions accessibility and inclusivity for all.
Be Part of the Future: Subscribe Today
We invite you to join us in this vital dialogue on accessibility, disability, assistive technology, and the future of diverse work environments. Subscribe today to stay updated on the latest insights and be part of a community that's shaping the future inclusively.
Accessibility Washing Won't Fix Broken Systems
Lia Raquel Neves, founder of EITIC Consulting, offers a thought-provoking exploration into the ethical dimensions of artificial intelligence and its profound implications for accessibility and inclusion. Drawing from her background in philosophy and bioethics, Lia challenges the common assumption that technology is neutral, instead arguing that our creations inherently reflect our values, biases, and blind spots.
The conversation delves into crucial gaps between AI regulations and accessibility requirements. Lia points out that the European AI Act doesn't explicitly define disability as a risk factor, meaning systems that significantly impact disabled users might not be classified as high-risk. "This is not just a legal oversight," she explains, "it's an ethical failure." Without structural requirements prioritizing accessibility, technologies from virtual assistants to facial recognition systems continue to exclude people with disabilities.
When discussing data ethics, Lia confronts the uncomfortable reality of historical bias. Training AI on decades-old data inevitably reproduces historical patterns of discrimination and inequality. While diversity in datasets helps, Lia emphasizes it's insufficient alone: “We must actively detect offensive or discriminatory language and prevent models from amplifying harmful content.” She advocates for continuous human oversight, transparency, and creating mechanisms for people to challenge automated outcomes.
Perhaps most powerful is Lia's reflection on representation: "Digital accessibility is still seen as a technical requirement when it is, in fact, a matter of social justice." She notes how the invisibility of people with disabilities in media, business, and technology perpetuates exclusion, creating a cycle where decision-makers don't prioritize what they rarely encounter. True inclusion means asking who's missing from the data, who's excluded by design, and who's absent when systems are being developed.
Ready to dive deeper into creating ethical, inclusive technology? Connect with Lia on LinkedIn and join the conversation about building technology that truly serves everyone.
Follow axschat on social media.
Bluesky:
Antonio https://bsky.app/profile/akwyz.com
Debra https://bsky.app/profile/debraruh.bsky.social
Neil https://bsky.app/profile/neilmilliken.bsky.social
axschat https://bsky.app/profile/axschat.bsky.social
LinkedIn
https://www.linkedin.com/in/antoniovieirasantos/
https://www.linkedin.com/company/axschat/
Vimeo
https://vimeo.com/akwyz
Twitter
https://twitter.com/axschat
https://twitter.com/AkwyZ
https://twitter.com/neilmilliken
https://twitter.com/debraruh
Hello and welcome to AXSChat. I'm delighted that we're joined today by Lia Raquel Neves, who is the founder of EITIC Consulting. Now, if that sounds a bit like ethics, that's because it is related to ethics. So welcome, Lia. Can you please tell us a little bit about EITIC, what you're doing, a bit of your background, and how ethics and what you're doing relates to disability, because obviously we talk a lot on AXSChat about disability, inclusion, accessibility and so on. So welcome to the show.
Lia Raquel Neves:Hello, so my name is Lia. I live in Lisbon, in Portugal. I'm a light-skinned woman with straight, short, dark brown hair. I wear glasses. I have brown eyes. I'm 1.69 m tall. Today I'm wearing a light brown blazer with a white shirt. I'm in an indoor space with a neutral background, and it's a pleasure to be here with all of you.
Lia Raquel Neves:What can I say about my journey? So my journey into AI ethics began with a background in philosophy, where I developed critical thinking and an ethical lens that continue to guide my work today, and later I pursued a master's degree in public health focused on bioethics, which gave me a very practical understanding of how ethical dilemmas play out in science, medicine and technology. Eventually I moved into consulting, where I encountered more agile environments and the real challenges that organizations face, particularly around AI governance, accessibility and bias mitigation. That is what led me to found EITIC, to help companies and institutions bring ethics into action, not just as a branding exercise or a legal requirement, you know. So to me, AI ethics is not just about complying with the AI Act or the Accessibility Act, it is about ensuring that technology serves people fairly, inclusively and responsibly. But I think that this starts by recognizing that technology is not neutral. It reflects the values, choices and even the absences of those who design it, and I have tried to bring this perspective into the projects I have worked on and to conferences like Collision and Web Summit, talking about co-creating inclusive policies with a range of stakeholders. Because, at the end of the day, ethics is not just about knowing what is right, it's about making it work in practice, you know.
Lia Raquel Neves:But one thing is that this issue of disability and diversity has always been part of my life. I grew up with an education that embraced difference, but also with an acute awareness of the barriers that surround us. My grandmother, for instance, gradually lost part of her vision due to glaucoma, and my wife has keratoconus, a degenerative eye condition, and these and other experiences gave me a very direct understanding of challenges that go far beyond physical barriers, encompassing accessibility in healthcare, transport, social integration and, of course, fundamental rights. By coincidence or not, I started my career in research, always working at the intersection of health, technology and social sciences, and one of the most transformative projects I was involved in was called Intimacy and Disability.
Lia Raquel Neves:It was there that I began exploring the social model of disability from Michael Oliver and came face to face with the structural problems that we have, from the lack of research on hate crimes against disabled people to the absence of medical training tailored to their specific needs. You know, one of the main recommendations of that project, over a decade ago, remains relevant today: the need to expand rehabilitation services, assistive technologies and support for independent living. And in 2021, for instance, I worked with an NPO entirely run by people with multiple disabilities, promoting the independent living philosophy. The everyday reality in Portugal was extremely difficult, from the lack of access to personal assistance, which is often unaffordable, to the absence of basic accessible tools, you know.
Neil Milliken:I was interested by your statement that technology isn't neutral, because quite often people say that technology is neutral and it's the application of technology that isn't. So I was interested that you actually thought the technology itself wasn't neutral. Because, you know, like with AI, there are structural issues with AI, but you can apply it for either good purposes or ill intent, right? So I guess my question is: where is this lack of neutrality, in the technology itself or in the application?
Lia Raquel Neves:This is a tricky question, you know, because I believe that emerging technologies can be powerful and transformative. They can expand possibilities, but only if they don't reproduce or amplify old exclusions. That is where ethics and governance frameworks become essential, not to slow things down, but to make sure we are not leaving people behind. An ethics governance framework is a system of policies, procedures and structures that promotes ethical behavior in an organization. Technology is not neutral, in my perspective, not even in action.
Neil Milliken:No, no, not in action, no. But I mean, I think that to a certain extent our implicit biases and our experiences also shape how we create technology. And again, the actions that we take can be informed through ethics in how we design technology. Also, when you were mentioning the difference between ethics and compliance, it's usually the ethical thinking that gets you to the formulation of the guidelines and the regulations and so on. Now, I know that Debra is itching to ask a question, but maybe one last thing around bias: what are some of the things that you've been working on that address those biases? And then I'll hand over to Debra.
Debra Ruh:We have been talking about ethics, conscious and unconscious bias, AI and technology for as long as I've been in the field. I know, Neil and Antonio, we've had many, many conversations on AXSChat about this. But what I don't know, and when I think about it too much, Lia, it makes me nervous, is: have we gotten better because of everything we've tried to do to raise awareness about it? I know we haven't done enough, but have we made progress? I'd like to think we have, because I'm eternally optimistic, but I would just add that to Neil's question too.
Lia Raquel Neves:Thanks, thank you. So we could spend the whole weekend here talking about this, but I want to highlight one point. We have made a lot of advances related to technology guidelines, WCAG guidelines, etc.
Lia Raquel Neves:We must always keep in mind that multiple institutions operate in this space, from national market surveillance authorities to the European Commission, to the AI Office and the European Data Protection Supervisor, for instance. On top of that, each country brings its own regulatory and implementation context, which can lead to inconsistencies across countries, you know. So, even acknowledging the efforts of the regulators, I still find that the European Disability Forum, for instance, and other advocacy groups have repeatedly emphasized the importance of a human-centered approach, one that truly reflects the motto nothing about us without us. And, as I learned early in my career, there is a big difference between making policies with people and making policies for people. Now, when we look at the intersection between the AI Act and the Accessibility Act, we see ethical challenges in operational, legal and social forms. Although both aim to safeguard fundamental rights, I think they are not fully aligned, and this disconnect compromises digital inclusion. One key operational issue is that the AI Act doesn't explicitly define disability as a risk factor. Even systems that have a major impact on disabled users, in employment or education for example, might not be flagged as high risk, and that is not just a legal oversight, it's an ethical failure, in my view.
Lia Raquel Neves:I mean, without clear incentives or harmonized standards, accessibility can fall off the priority list, not necessarily out of bad intent, I'm not saying that, but because it's not structurally required. This is how we end up with interfaces like virtual assistants or facial recognition tools that exclude people with disabilities. But of course, there is also legal complexity. We usually ask these kinds of questions: if an AI system causes harm, who is responsible? The developer, the provider, the operator? This fragmentation can make accountability more difficult. And let's not forget that the Accessibility Act, while robust in many areas, doesn't fully cover emerging AI systems like adaptive machine learning or voice-based platforms. Even when AI is used to assess accessibility, for instance scanning websites for WCAG compliance, we must consider privacy, transparency and false positives. These are real risks.
Lia Raquel Neves:It reminds me of accessibility overlays, tools that promise to fix everything but often hide deeper, systemic accessibility failures. Some people call this accessibility washing. I don't know if I answered your question, but what I can say is that accessibility can't be a last-minute fix, you know. It needs to be embedded from the start, with the direct involvement of people with disabilities across the entire development life cycle. Ethics and accessibility are not two separate concerns. If we want AI to serve society fairly, accessibility must become a core compliance and design principle, not a decorative layer. You know, Antonio?
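To make Lia's point about automated WCAG scanning concrete, here is a minimal sketch, an editorial illustration rather than any tool discussed in the episode; it assumes Python with the requests and beautifulsoup4 packages, uses a placeholder URL, and flags only images that have no alt attribute at all.

```python
# A minimal, illustrative sketch of an automated accessibility check: it flags
# <img> elements that have no alt attribute. It assumes the third-party
# requests and beautifulsoup4 packages; the URL below is a placeholder.
import requests
from bs4 import BeautifulSoup

def images_missing_alt(url: str) -> list[str]:
    """Return the src of every image on the page that lacks an alt attribute."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [img.get("src", "<no src>")
            for img in soup.find_all("img")
            if not img.has_attr("alt")]

if __name__ == "__main__":
    for src in images_missing_alt("https://example.com"):
        print("Image with no alt attribute:", src)
```

A rule like this catches one narrow failure and cannot judge whether the alt text that is present is meaningful, which is exactly the gap of false positives and false negatives that makes automated scans and overlays no substitute for testing with disabled users.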
Debra Ruh:I know you had a question.
Antonio Santos:Well, I think one of the big difficulties that we have here, particularly with AI, is that we are using a new technology that comes into the hands of consumers, into the hands of every one of us, but not even those who created the systems know where the data is coming from.
Antonio Santos:It's not just that they don't know where the data is coming from, it's also that the data has a history that can go back 20 years, 30 years, sometimes even more, and it's not really possible to make historical data ethical. It's basically impossible, and a good start is just to admit that. No, you can't really make data that was collected in the 50s, the 60s, the 70s or the 80s ethical, because at the source there was large bias. If I could just take the example of historical data from the United States, from the human rights movement of those days: if we dump that data onto the web, of course it's completely unethical and there's no way to fix it. The only way is not to use it. So the dilemma is, what are we going to do? How are we going to accept this? Are we going to say this is going to delay technology advancements?
Antonio Santos:The issue is, how are we going to accommodate all this? Because in the end, there's probably a developer looking at the data with no idea what the data is about, but also down the line there's an executive saying, deliver me results, I want to see results. So it's quite a difficult issue, because technology is now completely embedded in our social fabric. This is not about a machine talking with a machine. It's a machine talking with humans, with effects that are unknown.
Lia Raquel Neves:Yes, I totally agree with your point about the data, the historical data, because the risks of bias and the reproduction of stereotypes in AI, especially in search engines or virtual assistants, are real and ongoing ethical challenges. I agree with that. I have some suggestions from an ethical point of view, because these systems, like I already said, are not neutral. They operate with data that reflects historical patterns, like you are saying, of discrimination and inequality, and for this reason we need ongoing ethical strategies to mitigate these kinds of failures.
Lia Raquel Neves:So, from a theoretical perspective, we could approach this through different philosophical traditions, which would make for a fascinating conversation, for sure, but in practical terms it starts with something very simple: including people with disabilities and other underrepresented communities in design projects, in technology development and also in policymaking, at the very least. Then, of course, we must talk about data. Systems must be trained with diverse and representative data sets across gender, ethnicity and other frequently marginalized factors. But don't get me wrong, diversity alone is not enough. We must also actively detect offensive or discriminatory language and prevent the model from amplifying harmful content. Sometimes we forget that the scale of the internet does not reflect the scales of real life, and that is a serious trap, since many communities are underrepresented in online data and therefore risk being misunderstood, misrepresented or ignored.
Neil Milliken:To your point about people being ignored and, you know, not represented on the internet, which is where the data has been scraped from for these large language models: talk about pictures of people with disabilities, right? There are far too few. They're just not represented in the images on the internet. And so what people often propose, when there is this lack of diversity in the data, is to create synthetic data. But, as one of our former AXSChat guests, Jutta Treviranus, has said, synthetic data is just fake data, right? Now, there may be use cases where you can create stuff that can help you shape things, but how do we deal with the ethical issues of making stuff up? Because synthetic data is just making stuff up. So what are the ethical questions around making up data in order to try and address some of these historical underrepresentations in the data that's already prevalent on the internet?
Lia Raquel Neves:This is a great question, and it's difficult to answer this question about synthetic data. What I can say from the ethical perspective is that we need balanced sampling methods. I know that this does not fully answer your question, but I can give you an option, an ethical option: stress testing for ethical risks such as bias, exclusion or misclassification, and safeguards to ensure minority perspectives are not diluted in statistical noise. But this is not enough, I know, and you probably also know this is not enough. When it's not possible to fully remove bias from training data, models must undergo rigorous audits, regular evaluations and real-world testing with diverse users, real users. We also need tools like customized instructions, debiasing methods such as counterfactual data augmentation, and fairness-aware algorithms to detect and fix structural flaws.
Lia Raquel Neves:But all of this depends on one key condition: continuous human oversight. AI systems should not be treated as autonomous or uncontestable entities, and this is a problem. They must remain open to human intervention and contestation. We must ensure that systems learn from human feedback, adapt to different contexts and evolve over time to improve inclusiveness. This brings us to the core of AI ethics. It's not enough to recognize the problems. We must ensure transparency, auditability and accountability, and that means explaining how decisions are made, allowing human review and making sure people can challenge automated outcomes.
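Counterfactual data augmentation, one of the debiasing methods Lia names, can be sketched in a few lines. This is an editorial illustration under simple assumptions (a hand-written word list and a toy corpus), not the method of any system discussed here.

```python
# A minimal, illustrative sketch of counterfactual data augmentation: for each
# training sentence we add a copy with gendered terms swapped, so a model is
# less likely to tie an attribute to one group. The word list and corpus are
# placeholders; real pipelines handle grammar, names and many more attributes.

SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "man": "woman", "woman": "man"}

def counterfactual(sentence: str) -> str:
    """Return the sentence with each listed gendered term replaced by its counterpart."""
    out = []
    for word in sentence.split():
        core = word.rstrip(".,!?")
        punct = word[len(core):]
        swap = SWAPS.get(core.lower())
        if swap is None:
            out.append(word)
        else:
            out.append((swap.capitalize() if core[0].isupper() else swap) + punct)
    return " ".join(out)

def augment(corpus: list[str]) -> list[str]:
    """Keep every original sentence and add its counterfactual twin."""
    return corpus + [counterfactual(s) for s in corpus]

if __name__ == "__main__":
    for line in augment(["He is a talented engineer.", "She stayed home with the children."]):
        print(line)
```

The idea is simply that every pattern the model sees for one group it also sees for the counterpart group; real pipelines extend this far beyond a short word list and combine it with the audits and human oversight Lia describes.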
Debra Ruh:Lia, I think that Neil and I must be on the same page today, well, I guess all three of us are, but I had a question. I find this a very interesting conversation that we're having, because once again, we've been working on this forever. And I said in the chat window: is it a synthetic output when all the data was programmed by humans? I don't know the answer, I was just thinking it in my head, because, once again, I get it when AI takes things and puts them together to make a picture or a graphic. Anyway, I just think we have a lot to explore as humans in the times ahead.
Debra Ruh:And Neil said in the chat: how can we have human intervention and oversight when the speed of decision-making is so fast? So just to bring that in too. Lia, you definitely need to come back on and talk about this again. But also, the question I was going to ask was: at the same time, it seems like AI, AI for good, can also help us get our hands around the ethics, but we as human beings aren't always ethical. So it's just such an important conversation, and I'm just so glad we have brilliant people like you working on it, Lia, because it feels a little over my head still.
Lia Raquel Neves:You know, I would love to come back and talk a little bit more about this. So I have a kind of analogy, because when we are talking about these fields and subjects, we are talking about real people. It's like when we talk about diversity, equity, inclusion and accessibility: AI plays a paradoxical role. In fact, it can either promote accessibility or reinforce exclusion, depending on how it's designed and implemented. Personally, I find it difficult to accept that digital accessibility is still seen as a technical requirement when it is, in fact, a matter of social justice. But this is my personal perspective. I mean, fundamental rights are not negotiable. This is what I think, but I know there are other people who don't think like me, and this is the problem. This is why an ethical approach to AI must go far beyond regulatory or technical compliance, because it must foster a permanent conversation about responsibility and social impact, because it is in these gray areas that most ethical failures emerge.
Antonio Santos:Let me bring some provocative thoughts. They don't necessarily relate to AI, but they relate to visibility, as an example.
Antonio Santos:Yeah, on Portuguese television, people with disabilities have no visibility. There's no visibility of people with disabilities in the media. There's no visibility of people with disabilities in many startup environments around the world. So entrepreneurs don't really know who a person with a disability is unless, for some reason, they have a family member who has a disability, or they themselves have a disability. The lack of visibility doesn't put these topics in people's heads.
Antonio Santos:The other thing is that we still have many events around disability that are basically very closed in on themselves, people with disabilities talking with people who already know the problem, and that's it. Very few of the big technology events, you know, like Web Summit, ever approach the topic of disability and inclusion. So it's almost like, if you don't really see it, you don't really think: why should I take care of a problem that is not in front of me? I think this also relates to some of the conversation that we're having here as well. You don't see it, so why should AI ethics care about visibility?
Lia Raquel Neves:This is tough, very difficult to approach, because it's the same strategy as with LGBT people: if I don't see it, it's not a problem. And it's the mindset of people. I always ask for data, because when we have data, we know and we can argue: show me your data, and we can see whether you have a problem. But people prefer to say, I don't have a lot of data, but this is not a problem. And this is very curious, because data goes hand in hand with the issue of accessibility in companies, the business side. We know, according to the World Health Organization, that more than 1.3 billion people worldwide live with some form of significant disability. In Portugal, our data from the last census, in 2021, shows 10.9%, I think, of the resident population aged 5 or over.
Antonio Santos:Yeah, but Lia, on that census they made a change in relation to the previous census, and suddenly a large number of people just disappears, because they reframed the question.
Lia Raquel Neves:Yeah, this is the problem. We know that a lot of the numbers don't correspond to reality, and this data also has ethical, social and economic implications.
Lia Raquel Neves:If we want to build AI systems and ethical digital infrastructures, we must face these numbers and act on them, and not just the numbers that we have, because the real numbers are a lot bigger and we know this. This is the business side. I don't want to say this in an inappropriate way, but the question is: innovation for whom? You know, at the end of the day, entrepreneurs, if technologies are not designed with accessibility in mind, we already know the risk: the reproduction or even amplification of exclusion. But when we talk, Antonio, about everyone, what are we saying? When we talk about everyone, I don't mean an abstract or idealized public. I mean real, diverse people with different bodies, trajectories, geographies, lived experiences. So this is what I mean: people with disabilities, racialized communities, LGBTQIA+ individuals, older adults, people with low digital access or digital literacy, and those living in rural or under-connected areas. It's not about counting heads.
Lia Raquel Neves:It's about ensuring that the groups historically underrepresented in data sets, testing panels and policy rooms are not just visible but actively shaping the outcomes. So for me, inclusion means asking who is missing from the data, who is excluded by design, who is not in the room when AI systems are being developed, regulated or deployed. So I think we can agree that representativeness is not a checkbox. It's a commitment to intersectionality and to how we share knowledge, distribute responsibility and build the future.
Neil Milliken:So that's a great point to end on, because we've unfortunately reached the end of our half hour. It's been fascinating, and we're definitely going to have to have you back to keep thinking about who needs to be in the room and who needs to be represented. I need to thank Amazon and MyClearText for keeping us on air and keeping us captioned, and thank you, Lia, for a fascinating, really thought-provoking conversation. I look forward to it continuing on social media. And just one last thing: please tell our audience how they can find you.
Lia Raquel Neves:So thank you so much for having me. It's been a real pleasure to be part of this conversation, and I would be happy to come back whenever the opportunity arises. If anyone would like to continue this discussion, feel free to reach out. I'm happy to share my contacts online, through the website, or you can find me on LinkedIn. I'm always happy to talk to people. So thank you so much.
Neil Milliken:The website is https://eitic.xyz, correct? Correct, super. Thank you so much and see you next time.