AXSChat Podcast

Are We Losing Ourselves to Artificial Intelligence?

Antonio Santos, Debra Ruh, Neil Milliken

The fascinating yet troubling relationship between artificial intelligence and human connection takes center stage in this thought-provoking discussion. Debra, Neil, and Antonio dive deep into the paradox of technology that promises connection while potentially undermining genuine human relationships.

When does AI shift from helpful tool to harmful substitute? We examine real-world cases where AI companions have impacted loneliness in unexpected ways, and confront the disturbing reality of deaths connected to AI counseling services. The conversation reveals how companies developing mental health technologies often prioritize technological innovation over psychological expertise, creating potentially dangerous gaps in understanding.

Beyond personal relationships, we explore how algorithmic bubbles increasingly narrow our information landscape, potentially weakening critical thinking skills and societal cohesion. The discussion turns particularly passionate when addressing how screen time and AI might be reshaping brain development, especially for younger generations who may be losing essential language processing and memory skills.

What becomes clear throughout our conversation is the urgent need for thoughtful, ethical approaches to AI development that prioritize human wellbeing over market capture. We challenge the quasi-religious belief in AI as the solution to everything, advocating instead for technology that genuinely serves humanity rather than replacing human connection or diminishing human capability.

Join us for this timely exploration of what it means to be human in an increasingly AI-powered world, and discover why protecting our humanity might be the most important consideration as we navigate this technological revolution.

Support the show

Follow axschat on social media.
Bluesky:
Antonio https://bsky.app/profile/akwyz.com

Debra https://bsky.app/profile/debraruh.bsky.social

Neil https://bsky.app/profile/neilmilliken.bsky.social

axschat https://bsky.app/profile/axschat.bsky.social


LinkedIn
https://www.linkedin.com/in/antoniovieirasantos/
https://www.linkedin.com/company/axschat/

Vimeo
https://vimeo.com/akwyz

https://twitter.com/axschat
https://twitter.com/AkwyZ
https://twitter.com/neilmilliken
https://twitter.com/debraruh

Neil Milliken:

Hello and welcome to AXSChat. It's just the three of us today, but we're covering, I think, what is a really interesting and important topic. We've seen an awful lot about AI in recent weeks and months, whether it be AI powering accessibility, whether it's the opportunities for creating new assistive technologies, but also AI and how we interact with technology and humans, right? Can AI supplement human relationships and interactions? Should we be using AI counselors? We've seen in the news recently the potential harms of some of that stuff, with reports of people taking their lives because they've been using AI counselors. We know that there are benefits and harms for sure, but let's dive a little deeper. I know, Debra, you were talking about this before we came on air, talking about loneliness, and obviously loneliness has a huge impact on people's health.

Debra Ruh:

It's such a fun topic too, and a big topic, and I've really been digging into it because I'm working on some work about AI. But I had made a decision that maybe AI can't help us with loneliness, and I had seen studies and reports that said that. I remember I was telling you, off air, there was an artificial intelligence seal that they created. It was like a real seal, but it was not real.

Neil Milliken:

I remember it. Yes.

Debra Ruh:

Yeah, it was really cool, and they were putting it in nursing homes and they thought, this is going to help the residents feel less lonely. But they found, after doing it and studying it, that that was not happening. But when a person was playing with the seal, which I would play with right now, and then went out into the common area, it would attract human beings to them to say, what is that? Oh, that's so cool. And that would help with the loneliness. So I had decided in my own head that AI cannot solve human loneliness. But then you're starting to see all this different stuff.

Debra Ruh:

I'm still not sure. I think it's the wrong question. I was asking, can AI solve human loneliness? I don't think that's the right question. But it is interesting because, like you said, there are people that are killing themselves because of AI. There is already a man that has married his AI. And, you know, humans are aggravating. We have all these different moods and all these different opinions, so I can see why certain people would be more drawn to an AI that tells us everything we want to hear. Anyway, it's just so interesting, the human dilemmas that we're going to deal with. I love that we're going to address it with accessibility, I love that we can address it with assistive technology, but I hope that what we do as well is focus on true human inclusion.

Debra Ruh:

I believe we don't even know what it means to truly be human now, and so then we're expanding what it means to be human along with this AI stuff that's here; it's not going away. I think we're going to learn so much as human beings, but I hope we always, always, always consider humans in every aspect of AI, and I don't see us doing that in all cases. Even in my country, I see Americans focusing only on, let's get there as fast as we can, let's make as much money as we can, as opposed to, wait a minute, is this something that is going to help society and help humans? So I think these are important times. Over to you, Antonio.

Antonio Santos:

I think that there's a lot of noise, as usual, on these topics. You have people who want to succeed. They want their companies to succeed, so they spare no effort in selling you snake oil. They need to make sure that you buy into their narratives, and I'm sure they know well, in the end, that some of the things they say are not real. That is just part of the story that they need to put out in order to sell it to the investors and to other companies.

Antonio Santos:

It's a known fact that many of the existing AI projects taking place at companies are not succeeding. So this makes the case that it's a young technology; it doesn't have enough maturity. People are jumping into it without realizing the consequences of failure and what might come out of this. But this is an area that is particularly important for everyone to read and study, to form their own opinions and to reflect on. This is not something where you just say, oh, I'm going to read this from this person, and that's how it is. You need to have that kind of interaction and feedback, and look at different sources, in order to make up your own opinion.

Antonio Santos:

About three years ago, I was at the Web Summit in Lisbon, and there were a number of startups trying to address mental health issues with apps and AI systems. As you know, I get curious, so: let's see who is part of your team, let's see who is in there. And the psychology element was outsourced. The teams had no expert in the field of mental health. This is something they would outsource in the cheapest possible way, so they might have doctors in different regions of the world working with them to develop the systems. Because they wouldn't get an expert doctor, they would just go the cheapest way possible to get advisors. And they were not really concerned about not having an expert on mental health in their team; they didn't see that as an issue or a problem. And that's exactly where I saw the issue.

Neil Milliken:

Yeah, I think the whole question around technology ethics is more pressing than ever because of the speed of deployment and the society-level impacts that it's having. It's really pressing that we do have some of these frameworks, and I think that a lot of the tech companies criticize Europe for wrapping them up in red tape. But if you look at the intent of that regulation, the regulation is there to protect human beings and well-being and the preciously delicate society that we live in. We've seen how delicate that balance is in society, because it's been destabilized by algorithms in the last decade. Because, whilst generative AI is the hot topic now, algorithms and AI go much, much deeper.

Neil Milliken:

It's the algorithms that power assistive tech. It's the algorithms that power what we see and what we consume, and that have set up this attention economy, right, that is driving people to be more and more polarized, because that's what gets reactions. So algorithms are already there. And even if, as Antonio mentioned previously, lots of AI projects and companies fail, and the Gen AI stuff may well crash and burn a bit, and we go into a dip where the hype hasn't met the expectations and people swear off it, the power of the algorithm is going to continue, and the impacts are already upon us. A great example is AI-powered search, the AI search summaries. Antonio, you were talking about this before, about Google serving you up geo-specific bubbles of information. But at least before you got links that you could then go and research. Now people are just getting stuff presented to them, and not everybody questions it. I know you want to come back on that.

Antonio Santos:

Because when you are using these systems, some of them start to know you. They start to know your political beliefs, they know your opinions, and then they will tend to narrow down the information they give you based on the knowledge that they have about you. So it means that, over time, you will get less knowledge that diverges from the knowledge you are already pursuing. But also, on the topic of enterprise and companies, there is such pressure just to go after AI that people might miss other things that are also relevant in order to do business. Everyone seems to be so focused on AI that they are leaving behind other areas, like sustainability, and other business topics that are also relevant.

Debra Ruh:

And you know what also is annoying me just a little bit? They're calling everything AI. Everything's AI now. We don't talk about technology or digital; everything's AI. But what we're labelling AI is, you know, just programmed software, and it's machine learning, and it's not all AI. Anyway, I think we're taking this term AI and putting it everywhere now, which is just adding to the confusion. You know what?

Debra Ruh:

I have a lot of empathy for parents right now, and I know quite a few parents that are not introducing screens to their children. They are not going to let them do it. They're being thoughtful about it in ways that, I mean... I remember, because I'm so much older, we were told, don't take your kids and park them in front of the television set. Gosh, you don't want to do that. So now we've got screens everywhere, and we don't even understand as humans what this is doing to our brains. And the studies we're seeing right now, it's very interesting how our brains are responding to just using AI or a chatbot, whether it's a ChatGPT, which gets all of the attention, a Gemini, any of the ones that are coming up.

Debra Ruh:

But it's just interesting, because, honestly, we do not know what is really going to happen to our brains with AI. We've already seen some really, really scary things happen. We've seen deaths, we've seen abuse, and we've only just started. So I hope, if nothing else, people are protecting themselves, their brains and their families' brains and their children's brains, because we don't know what they're programming in, and we know what they did before. There was a documentary called The Social Dilemma where these programmers are saying, we had no idea, we didn't know what was going to happen; we don't let our own children use these devices until they're in high school. So, if nothing else, I say to all of the human beings out there: protect yourself from this technology. We don't know what's being coded into it. I don't want to be paranoid, but I'm totally paranoid about it now.

Neil Milliken:

Yeah, that's a fair point. I mean, it's not just about AI and Gen AI, it's about how we live our lives through screens right now, and I was a technology optimist for most of my life.

Debra Ruh:

Me too.

Neil Milliken:

You know, most of my adult life, because, actually, growing up in the 70s there wasn't that much computing tech, right? But I had that road to Damascus moment and I was like, wow, this stuff's great, I love it. And we were on a journey of progress. It's much more recently that I've become more jaded about this. But I used to read Asimov, right, the big science fiction author who wrote I, Robot and Foundation, these massive, really well recognized series of books. And he wrote one, I think it was called The Naked Sun, which was about extreme isolation. There was a population of people on a planet, but they were so widely dispersed that they only communicated through holographic communication.

Neil Milliken:

And now, what are we doing? We're all spending all of our time not convening in person but talking through screens. It's been really handy for us. It's empowering for people with disabilities. The possibility of remote work is fantastic: I have an international team, and I couldn't coordinate with them in the same way if we didn't have these tools. But at the same time, you know, children who grew up during the COVID period don't have the social skills. Some of the younger ones are years behind on their language skills. So this does have an impact on how people's brains are wired and everything else. And we don't teach children the kinds of rhetorical questioning approaches and philosophical approaches that I think they're going to really need in a world of misinformation. So I think what we need to teach people is changing rapidly, and we need to have that curiosity, the constant reminders to question, and also to step away from the screen and have some human interaction. You know, stop phubbing people when you're in a restaurant. I'm guilty of this, playing on my phone whilst I'm out for lunch with my wife.

Debra Ruh:

Yeah, yeah, I think we're all guilty of it. But, you know, I just think about protecting ourselves as this is unfolding, and protecting our brains. Also, when I was researching for my new book, one thing I was seeing is that we're not able to process and think the same way. People are losing the ability to pull language into their brains and use words and language, because, I mean, there are some real, real dangers to our human brains. We have to continue to, you know, remember how to spell things. I have ADHD and dyslexia, so my brain does weird things to me already, but we are losing basic skills that we do need. We need language, and it's nice that we can talk the same language and all that stuff, but we need to use our brains. I'll give you an example.

Debra Ruh:

I am a flautist. I have played the flute since I was a very young girl. I stopped playing it for years, then I picked it up again. But what I did when I was learning it was memorize, memorize. I was in the marching band, and every week we had three new songs we had to memorize. I had to memorize three songs on my flute every single week. Not a problem, I did it. It was what I had to do. It's important to use our brains. Even if AI can help us appear even smarter, we still have to use our brains, and we have to engage in ways that empower our brains. So these are some big topics.

Antonio Santos:

I have a question for you about this. Let's say, tomorrow somebody is going to invite you to speak at an event on AI, the same to you, Neil, or somebody is going to invite you to a call about AI. Do you feel compelled to say good things about AI, or do you feel compelled to be divergent about it?

Debra Ruh:

Well, I feel compelled to be honest about it. I don't know if I'm answering that question, but I am obnoxiously optimistic, as Neil has coined me. I am. But at the same time, I think being like that now is just stupid. We just can't do that, because the reality is there are human lives at stake, and we are deciding how our cultures are going to work, how our societies are going to work. So, no, I would be both excited, because, Neil, you had said something about this.

Debra Ruh:

You know, when you were growing up as a child, you weren't being exposed to technology, because we didn't have a lot of it. But even as a child, my dad worked in technology. My dad managed an AT&T computer that was the size of three city blocks, and we would go to work with him sometimes. It was just all these metal boxes everywhere. So my dad taught me to love the bleeding edge of technology. But then I learned not to love that, because that's painful.

Debra Ruh:

But we are really at the bleeding edge of technology right now, and I think... I love AI. We can't put it back in the box. But I also know that we have got to think about what it means to be human. I mean, this ridiculousness, yes, I'm going to go there, with the DEI and woke debates that we're seeing, say, in the United States. In the first place, I don't even know why we have to have conversations about DEI and woke, when the reality is, why would you ever, ever build or create or program any technology that doesn't work for humans in every single aspect of their lives? Why would you not do that? I was a programmer. I programmed so that everybody could use my programs, to the best of my ability at the time. Why would you design anything that doesn't work for humans? Why are you building AI and not bringing in the humans that know about it, like in the mental health examples? Anyway, all right, I'm going to be quiet now. See? Going on mute right now.

Neil Milliken:

I quite like you telling off the tech industry. I think we should be building for humanity. Why are we building tech if it puts humans out of a job or puts humanity and society at a disadvantage? That's short-term thinking, even for the billionaires. Ultimately, things will turn on them too, because there's nothing left to support their businesses. So we ought to be looking holistically at this stuff. And, as Antonio observed, and you rightly as well, there has been this sort of laser focus on: AI is the thing, we need to engage in the arms race for AI, because we need to win this, because we're going to be the dominant player in the space forever, and I can sit on my pile of cash, thank you.

Debra Ruh:

Thank you.

Neil Milliken:

But it's having a massive impact on some of the other things that these companies are doing. Yes, like sustainability, but also quality, right? I mean, I use technology daily, and the speed at which we develop technology now stops us from producing high-quality products. I'm going to give Apple credit where credit is due: they take a lot more time, they're a lot more cautious in their approach to implementing new technologies than others, and, as a result, not as much stuff breaks. I mean, stuff does break, right? There are still bugs, there are all sorts of things.

Neil Milliken:

What I'm seeing at the moment is that there are daily changes in my working environment, and I never know what's where or what's working. It's unstable. Stuff just doesn't work in the same way, and we're paying for this. We're consumers. It's not like we signed up to be someone's lab, yet we are paying to be these companies' lab rats now. That was okay when it was freeware; you kind of accepted it, because, yes, we're the product and we're getting something for free. But when you're paying for the products and they're still cruddy, then I think that's a loss of focus on behalf of these large organizations.

Debra Ruh:

Well, and also, Neil, don't you think... I'm also tired of some of these brands, I'm not going to name them, thinking that we owe them something because they bothered to make something accessible. The reality is, you need to be making things accessible for all of your human customers, and if you aren't, we need to be talking about the ones that are. The elitism of the corporations is getting to me. I'm tired of that, so I'm not going to be brand loyal in the same way; I'm rethinking what it means to me to be brand loyal. I wrote a book on inclusion branding. It was certainly never meant for these brands to say, oh, I'll do that, then I can win work. No, we actually wanted to be included. Anyway, I'm very disappointed in these corporations, but more disappointed in the AI developers. But okay.

Neil Milliken:

Is it the developers or is it the leadership of the organizations?

Debra Ruh:

Because the developers are just doing what they're told to do.

Neil Milliken:

It's the leadership of the organizations, you're right, and it's also the finance behind it, because of the way that we structure the financing of technology development: we go for market capture, we do this whole blitzscaling to capture the market, we're going for the platform economy, and we must have the dominant platform. That creates a certain type of behavior.

Debra Ruh:

It's not the best technology that wins, it's the most well-funded, I agree. And I also want to say that those of us that have programmed and developed, what we do have is the ability to pick better employers. Maybe you have an employer right now that's asking you to do things that you know are not going to be good for humanity. Please, please, please look for another job. Please be ethical, please care about the human race, and don't work for these madmen that think it's all about the money. It's not all about the money. So I think each of us as human beings does need to really understand the roles that we're going to play right now. These times won't be forgotten, because we're deciding where the human race is going to go, and it's not looking so positive right now. And I'm optimistic.

Neil Milliken:

This is cheery, isn't it?

Debra Ruh:

No, there are some really good things happening with it as well. There really are. I love it, I love it, I love it. I use it constantly, I debate with it, I talk to it, I challenge it, I have fun with it. But I hate what it's done in terms of some behavior problems I've had with one of my grown children. So yeah, it's a love-hate relationship, I guess.

Neil Milliken:

Yeah, a love-hate relationship, I guess. And I think maybe this is what you were trying to get at in your question to Debra, Antonio: there is a sort of messianic belief in parts of the industry that AI is going to solve everything, that it's the answer to everything, and it becomes quasi-religious, right? So you must believe in the religion of AI, and if you don't, or you have a nuanced view on something... Actually, it's surprising; people are attacking each other for their views on AI, like the Catholics versus the Protestants. This is technology, people. It's not God.

Debra Ruh:

Yeah, so it's an interesting time to be alive.

Neil Milliken:

It may have godlike powers.

Debra Ruh:

Yes. Yeah, let's hear it for the human race, though. I know it's not perfect, but, my God, and the planet.

Antonio Santos:

It's actually very difficult to find someone who is able to pause and think about it in a kind of deeper way. All these debates on AI seem to be really, really light. You know, now we are talking here for half an hour, but sometimes you have these kinds of debates with 10 people and everyone comes with a very light approach. There's no really deep thinking. It's very rare to find someone doing that.

Debra Ruh:

Yeah, and if you do, let me know, because I'm looking for them. But what I'm hearing right now in some of these debates, oh my gosh: 90% of jobs will be gone in two years. What? Come on.

Neil Milliken:

I'm not saying that they're not going, but I think that the impact is going to be very sectoral as well. We've seen the impact in copywriting.

Debra Ruh:

Yes, yes.

Neil Milliken:

People in sort of creative writing jobs are impacted, yes, because…

Debra Ruh:

Marketing has been impacted.

Neil Milliken:

Absolutely. But what we've also seen is that the quality of the content that gets produced is slop. Now, you can use AI to create great content, but it requires some tuning and all the rest of it. But we are seeing the whole of the internet spewed out with AI slop, and it all sounds the same, and what that's doing is suppressing diversity, because people don't have their own voice anymore. So I think we've reached the end of our time. We've gone on wild tangents, but it's been fun. I need to thank MyClearText and Amazon for supporting us to stay on air and stay captioned.

Debra Ruh:

Yes.

Neil Milliken:

So look forward to the next time we speak.

Debra Ruh:

I'm hopeful.

People on this episode