
AXSChat Podcast
Podcast by Antonio Santos, Debra Ruh, Neil Milliken: Connecting Accessibility, Disability, and Technology
Welcome to a vibrant community where we explore accessibility, disability, assistive technology, diversity, and the future of work. Hosted by Antonio Santos, Debra Ruh, and Neil Milliken, our open online community is committed to crafting an inclusive world for everyone.
Accessibility for All: Our Mission
Believing firmly that accessibility is not just a feature but a right, we leverage the transformative power of social media to foster connections, promote in-depth discussions, and spread vital knowledge about groundbreaking work in access and inclusion.
Weekly Engagements: Interviews, Twitter Chats, and More
Join us for compelling weekly interviews with innovative minds who are making strides in assistive technology. Participate in Twitter chats with contributors dedicated to forging a more inclusive world, enabling greater societal participation for individuals with disabilities.
Diverse Topics: Encouraging Participation and Voice
Our conversations span an array of subjects linked to accessibility, from technology innovations to diverse work environments. Your voice matters! Engage with us by tweeting using the hashtag #axschat and be part of the movement that champions accessibility and inclusivity for all.
Be Part of the Future: Subscribe Today
We invite you to join us in this vital dialogue on accessibility, disability, assistive technology, and the future of diverse work environments. Subscribe today to stay updated on the latest insights and be part of a community that's shaping the future inclusively.
Democratizing AI for an Inclusive Society
We're thrilled to have Andrew Rogoyski from the Surrey Institute for People-Centered AI join us for a deep and insightful conversation about the transformative power of AI. Andrew brings with him the weight of the University of Surrey's three decades of AI research and a potent vision for harnessing this technology for the betterment of human lives. We embark on a journey, examining a new project dedicated to training the next generation of AI PhDs, with a special focus on creating social benefits and accessibility.
The conversation gets even more intriguing as we navigate the nuanced layers of AI, weighing its potential against the challenges it presents. Andrew helps us deconstruct the importance of keeping AI low-resource, democratizing access, and judiciously regulating its use. We delve into a fascinating exploration of how AI can be a game-changer for inclusive media, particularly for individuals with neurodiverse needs. From simplifying narratives and modifying information presentation to decluttering video content, the possibilities are endless. And finally, we mull over how AI could be the missing link that bridges different scientific disciplines, heralding a new era in academia.
Follow axschat on social media.
Bluesky:
Antonio https://bsky.app/profile/akwyz.com
Debra https://bsky.app/profile/debraruh.bsky.social
Neil https://bsky.app/profile/neilmilliken.bsky.social
axschat https://bsky.app/profile/axschat.bsky.social
LinkedIn
https://www.linkedin.com/in/antoniovieirasantos/
https://www.linkedin.com/company/axschat/
Vimeo
https://vimeo.com/akwyz
https://twitter.com/axschat
https://twitter.com/AkwyZ
https://twitter.com/neilmilliken
https://twitter.com/debraruh
AXSCHAT Andrew Rogoyski 11.08.23
NEIL: Hello and welcome to AXSChat. I'm delighted to be joined by Andrew Rogoyski of the Surrey AI Institute. I had a chat recently with Andrew about the work they are doing on AI for Good and the particular project they are looking to bring in, to train up PhDs in AI, particularly looking at work that will bring about social benefit, accessibility and so on. Andrew, it is great to have you with us. Can you tell us a bit more about yourself, your work and the project you are working to get off the ground right now?
ANDREW: Sure. Thank you, Neil. Thank you, everyone. So, Andrew Rogoyski from the AI Institute at Surrey, or to give you the full title, the Surrey Institute for People-Centred AI. The University of Surrey has been doing AI for 35, 36 years, long before it was fashionable to do so, in fact through previous fashions. We've been leaders, probably the lead in the UK, in areas like machine perception: video understanding, image understanding, body understanding. A couple of years ago we decided we needed to decide where we were going to go next with our AI work. We have about 200 researchers at the university, which is, you know, a decent-sized group. We really wanted to refocus our research efforts on how we bring AI to benefit human beings. We were very conscious that AI was gaining momentum in the commercial sector, and the scale and pace we now see was something we kind of thought was going to happen. We wanted to focus our efforts on what we can do as an academic institution. So we came up with the idea to create a new institute focused on people-centric, beneficial AI: where can AI really make a difference to human lives, to the human experience and so on? Also, within the university, instead of being pigeonholed away in the computer science and maths group, we wanted to become pan-university in our outlook. So we are now taking AI to pretty much every corner of the university. We have an extended group of over 100 academics who are now using AI in their own specialist fields of research, whether it is healthcare or performing arts, as well as traditional science and engineering and so on. That is what the institute was set up to do. Then recently, and I guess this is where Neil and I first started this conversation, in fact last year, in October... my role is what we call Innovation Director, so essentially building bridges between the outside world and the academic institute. I spent half of my career, or more than half of my career, in industry. I started in academia, went to industry, came back, and survived the previous AI winter in the '90s, having been involved in expert systems and very early neural networks, which were tiny in comparison to what we deal with now. I arranged a workshop with a number of different companies: I had a bank, I had a local council, I had the BBC and some others, and with a little bit of design and a little bit of happenstance we decided to focus it on accessibility issues, just to explore how different companies were dealing with them. We kind of triangulated between the different organisations and realised that there were some really big, unsolved issues that were causing people not to gain access to all sorts of information, media, et cetera, and that our expertise in AI could really help with some of those things. So some months later, there was a big research call and we put in a bid, with help from organisations such as yourselves; Neil has been one of our corporate champions helping us get this over the line, which we hope we will do. It is really looking at using AI to improve digital media inclusion. So, what does that mean? You pick it apart and you realise that we are all consumers of digital media, whether written, video, audio or web, everything that we depend on on a day-to-day basis, and there is a double-digit percentage of people that this information does not get to.
So that means that they don't necessarily get decent entertainment, they don't necessarily get decent educational opportunities, they don't necessarily get healthcare information, and so on, so it really matters. There is a whole variety of different reasons why people don't get that access. So, what we proposed is to build this centre with a large group of PhD students, running over a number of years, who really look at the detailed nuts and bolts of not only how to leverage AI to solve some of the problems, and we can talk about what some of those problems may be, but how to imbue in them, to bake into their DNA, inclusive thinking, so that when they go on to great careers in AI in the years to come, they will be thinking about inclusive design, they will be thinking about personalisation on a needs basis, and they will be bringing a completely different attitude to how technology is going to reach more people. At the moment, there is a danger that technology focuses on fewer people, rather than actually bringing a benefit to all. That is kind of where we got to with our conversation with Neil. So, very happy; I shall stop talking and give you guys a chance to talk to me.
NEIL: Thank you. So, we have talked quite a bit about AI on AXSChat over recent weeks and months, and only a couple of weeks back we were featuring work from Robotica that is looking at creating sign language avatars to give people better access to basic information. Your focus is in particular around media, but you explained that the definition of what constitutes media is pretty wide. So if people are interested in, you know, maybe PhD research or submitting problems to be solved, part of the issue is understanding whether it fits within your definition of media. Can you perhaps explain a bit about how you are defining media and how you widen the envelope to cover as much as possible?
ANDREW: It's a great question. Our intention is to cover as much as possible. We don't want to be prescriptive about what we include in media. It's essentially about how we get information across to human beings, so whatever form that takes, you know, whether it is traditional written text, whether it is video, or whether it is some other combination of that, it is all fair game. It is all to be included. So, any of that matters, whether it is playing a game, whether it is watching your favourite programme on the TV, whether it is looking up your health information from your GP, or whether it is your banking information helping you apply for a loan, whatever. All of that uses media in some shape or form. So it is just electronic communication.
DEBRA: Andrew, thank you so much for, first of all, everything that you are doing. I'm excited and terrified of Artificial Intelligence, like many people in the world. I look at the potential, and I can think of what we can already improve for our community; I can see, oh, this can improve captioning, it will improve sign language, it will improve so much, but at the same time it is such a big, huge change. One question I was going to ask you, and then you answered it, Andrew: I heard you talk a lot about the academics, and I appreciate that you are in academia and have worked in corporate. That makes me feel comfortable that you have that experience, because of course it is all about us. One thing I can't seem to wrap my head around, Andrew, is how do we really take a breath, take a pause and really look at what we are trying to solve? And then use AI to complement what we are trying to solve, when everybody is so terrified and saying, "I'm not going to talk to you about that, Andrew, as I don't want you to take away my business from me." It appears that society is breaking down, rethinking, revamping; it is an interesting time for Artificial Intelligence to be more meaningfully included within the conversation. That is something I am thinking so much about. And I'm writing a chapter in a book about digital health and thinking of that too, from the perspective of the humans. So, first of all, thank you, Andrew, for caring about the humans, for looking at it from the human side, and for partnering with, you know, brands like Atos and hopefully other brands. But how do we even get our minds around the conversations when they are just so big, so complicated? I guess it is like eating an elephant, right? So, I will let you tell us how to do it, Andrew, but thank you for what you are doing.
ANDREW: It is such a great question. There is a lot to unpack there. I think essentially it is how to keep this human-focused, to make sure that AI is for the benefit of us poor old human beings; that is part one, and it covers so many different topics. First, in an academic organisation, we are thinking, how do we remain relevant? How do we have impact in a world of AI when Silicon Valley, the hyper-scalers, are spending billions to develop AI at pace, as for them it is all about achieving market dominance? And it is not clear where they are going to end up. There is this kind of hint that the real game is to achieve artificial general intelligence, because the prize is so big there that anything goes. That's a gross generalisation, but there is a huge push in that direction. And academic institutions will say, "How do we add relevance to that?"
I came across a paper recently that shows that 10 years ago something like 75% of all of the major AI implementations globally were done by academia. So, you know, academics were leading the way in a number of AI fields. Now it is 4%. We are just being outpaced, outgunned by the huge amounts of money, the pace and the scale that have been pushed into that. So, in that fire hose of innovation driven by the hyper-scalers, how do we remain relevant? It is by asking some of the questions and looking at some of the topics that need examination: about trustworthy AI, about AI safety, about the impact of AI, about understanding the future of work, some of those deep questions that need understanding. That's one of the reasons why we came up with this inclusion theme: how do we really capitalise on the good things that AI can do, in order to improve people's lives? AI is being used every day, and has been for years and years, in all sorts of wonderful ways, from drug discovery to improvements in healthcare, diagnostics and so on, and we kind of take that for granted and we wouldn't want to give that back. But the generative AI revolution, which obviously started a number of years ago but really came to the public fore in November of last year, November 2022, has really fired people's imaginations and got a lot of businesses very excited about the potential of AI. Suddenly it is like a new thing, and having lived through a previous wave of AI, I am sort of thinking, OK, hold your horses here: there is hype, there is also great excitement, great opportunity and also some risk. So we need to remain cognizant of all of that.
So going back to your question: how do we remain human-focused? We have got to look at questions like, how do we control and regulate AI? Should we? In what ways? How do we do that in a global context where we in the UK, frankly, would not get much of a say in what happens in the US and China? How do we influence those steamrollers in ways that help us in the long run? We can influence local policy within the UK, but then what impact does that have on business? And how does it impact the economy when we are all working in global environments? It's a really complex, deep question. What we want to do is to be the clarion voice that keeps bringing it back to: "OK, this technology should be for the benefit of human beings; it should be solving problems like climate change, like poverty, like food production, disease control and so on." So keep bringing people back to those kinds of questions. How do we make AI low-resource, so we are not burning gigawatts of electricity to run these large language models? How do we democratise AI, so that people can use it in their own sphere of experience and need? As an academic institution, that's what we figure we are for: that, and producing the bright young minds that will then go into these jobs in industry, hopefully with different attitudes baked into their DNA, attitudes that from the ground up say, "We should not be doing it this way, we should be doing it differently; we have to remember what it is all for."
ANTONIO: Andrew, welcome. Around a month ago, I was talking with an AI start-up from Germany which is competing with OpenAI and ChatGPT. I was asking them, how do you plan to go forward? You are claiming that you have openness in your DNA, privacy, all of those concerns, but you are not making the product available to consumers. You know, OpenAI is; anyone can open an account on ChatGPT in 10 minutes. So you are claiming all of that, but you are not making it available to the larger audiences that could follow, somehow, your ethical path. And they were not able to reply beyond, "Oh, we have other priorities." So, don't you think that somehow we are criticising the Americans in some areas, but in fact they were the ones to bring this forward and somehow to democratise it? What credit do you think they deserve? And what can we learn? Sometimes I have this feeling that, once again, we are behind. How can we look at ourselves to see how we can improve, to avoid these things continuing to happen, where they are able to come forward and make things happen while we are still waiting for something and don't really make it happen?
ANDREW: That's another great question. I was reflecting on this the other day: how big science, big technology used to be driven by governments in the 1950s and 1960s. All of the big tech, when we are talking about space programmes, early computing, the nuclear programmes and so on, government was at the front; it was driving this. It seems that globally we have ceded leadership in technology to large corporates. And that has its benefits, but it has consequences. You are right, we can't stand at the sidelines and say, "Oh, it is terrible, we are being dominated by these big companies." Don't get me wrong, they have done some fantastic work, really interesting, really exciting work, and I believe that quite a lot of them are well-motivated. Some of them have very clear corporate dominance targets, but some of them are doing good things. The spin-out from OpenAI in the form of Anthropic has a very interesting model in Claude 2, in the way they are trying to build safety into the large language model. The recent switch to open source is interesting too. There are schools of thought about what should and should not be done with large language models becoming open source; Meta making Llama, or Llama 2, their large language model, open source, interestingly not just for research purposes but for commercial purposes as well, that is a big change. You can think about what the corporate strategy is behind that, but I learned that Alibaba have done the same; they have just released their big large language models on an open-source basis. So there is this deep competition to be at the leading edge, not to miss out on the next big thing, to be the dominant platform that everyone goes to. ChatGPT has already got great brand value. Everyone talks about ChatGPT as a sort of shorthand for large language models, but there is a bunch of large language models available, and related technologies that sit in layers and do the equivalent thing. It is a very exciting ecosystem. I think the concern is that in this febrile atmosphere of wanting to be first, wanting to be the greatest, wanting to be dominant, mistakes will be made; not enough time is put into thinking about some of the risks that we incur. And frankly, government regulation... if you are talking about the Europeans, who are probably at the leading edge of doing something, with the EU AI Act and their ambition to get that signed into law by the end of this year, I believe, that is a really accelerated programme. There are some good bits and bits you may not agree with, but it is really quite powerful in leading that. The Americans, even the Chinese, have strong ideas about what generative AI should be doing, and AI in general, but they are struggling to keep up. As ever, technology moves much faster than our ability to regulate it. So we have to think about how we prevent harms and risks to human beings from this emerging technology. It is an incredibly powerful technology, and it has the potential to transform economies, jobs of the future, existing jobs and so on. So, how do we get this right? Having seen some of the mistakes we have made with previous generations of technology, it is a really hard problem. There is no global authority on this, there is no magic wand; somebody has got to take charge to make sure we do it right.
That is why we, as an academic institution, and other academic institutions, kind of take it on ourselves to be the conscience, to ask some of the questions that say: you know, we need to be thinking about X, we have ideas about Y, to remind people of what we should be doing with these technologies. So, that's what we are trying to do.
NEIL: I think that, yes, we are all very much concerned about the potential harms; clearly there are massive potential benefits as well, and we all know that regulation usually lags well behind, so the speed at which the EU is acting is quite unprecedented, actually, if we look at other technologies. But then the speed of adoption is also unprecedented. Everything is speeding up, and that's a challenge. You mentioned the open sourcing of large language models, and I think this is one of the places where that double-edged sword applies most. By democratising AI and opening up the language models, more and more people can develop on them and innovate on them, and at the same time, what you are doing is loosening the ties of regulation; you are loosening the control of something that is inordinately powerful. Some people have said, "Oh, well, you know, it's a bit like nuclear." There is enormous power, but there was this understanding that you need restraint, like the whole MAD thing, mutually assured destruction: you have got nuclear weapons that can destroy the planet, but everyone knows that if you start a nuclear war, you are not going to benefit either, right? I think it is much harder to say that unleashing some malign AI is going to result in your own destruction in the way that using a nuclear weapon or a dirty bomb might do. Therefore, the restraint that people might have had about creating and using weapons of mass destruction does not necessarily apply in the same way to AI. That's why it's really important to have not just regulation; regulation is useless without some kind of enforcement, and enforcement and regulation require understanding and frameworks, which is where academia comes back in. So, what are we going to train people in? Not just how do we make AI, but how do we govern it, how do we conceptualise it? What are the fundamentals that you are teaching your next generation of academics, as a foundation of topics they need to consider when they are researching AI?
ANDREW: Great question. I'll circle back to it; I just wanted to pick up on something you said in your preamble.
NEIL: Sure.
ANDREW: There are some more tangible risks that are happening right now. The first one I'd point to, and you hinted at it, is the pace of adoption. I think there is a risk of shocks to some parts of the economy, to some business sectors, because change is happening so rapidly that any hope of adaptation in some areas is going to be blown away. There are business sectors that, frankly, in 12 months probably won't exist, because generative AI will be used by people to replace them. I was struck by a conversation I had earlier this week with a young video production company, saying, "We used to operate on a two-week cycle. Go into a client, they give us an idea, we mock up videos and get them back to them in a couple of weeks' time; that is our pitch, and they would buy at that point. Now we are dealing with competitors doing it in a day, because they are using generative AI. It has completely wrecked our business model, compressing 10 days into one or two, and we don't know how to adapt to that, and we don't know if there is a way out of it." It is a tiny, minuscule example, but I had another conversation with a student at another university doing a photography degree, starting a three-year degree, saying, "Is there any point in me finishing this degree? Everything I thought I would be doing can now be done by generative AI." I said, "Sure there is; you are a creative human being, you have to learn to adapt to using these tools, and there will still be that demand." But it is really tough. It is the pace of change that I think is a potential friction point. The other example is the growing impact on truth and information sources, and the potential for generative AI to really pollute and distort everyone's perception of what is going on in the world. We've seen the impact of misuse, malign use, of social media over recent years and its impact on Western democracies that, frankly, we thought were pretty rock solid and in some cases are now very fragile. The idea that AI could be played into that to make them more unstable is fairly frightening and very real. It does not have to be killer robots; it can be subtle, as in, where do I get my information, my truth, from? What does news look like? In terms of what we teach our students about inclusion, we are looking at it in three ways. We are looking at the AI technologies that can help solve problems. We are looking at the creative process: how do we create media, how do we help media creators to be inclusive and think about what they are doing? And then we look at design, at how we make sure that people are building inclusive approaches in from the ground up. What happens quite a lot with media creation, whatever form of media you are talking about, is that people design and build it for the main audience, as it were, and then bolt on additions, afterthoughts, to allow that media to be accessed by people with different needs. What we want to do is to make sure that the people who are designing and building that media have new tools that allow them to build it in from the ground up, so that every time they make a cut of a video or a programme they are putting together, they are layering in options and different ways of presenting information.
So, those are the three parts. To give examples of what we are looking at and what we hope to be looking at: thinking about how we design inclusive media for people with neurodiverse needs. Does that mean, for example, perhaps simplifying some narratives, or the voice or text? Does it mean changing the way that certain forms of information are presented? How do you do all of that? What is the most appropriate action to take to improve the experience? To declutter video, to make it visually less intrusive for some people? All of those things can be done using AI. We need to learn from communities who have needs and opinions about this, so that we can really focus our efforts on what makes the biggest difference, and then bring in the AI tools, develop the AI tools, develop new AI science that helps to solve the problems in a way that is as natural as possible within the creative process, so that for the creatives who are doing the original work it is second nature: they have tools and so on, so that they can say, "We are making this programme," and it automatically addresses a whole range, a whole spectrum of needs, so that people can consume it in different ways. I think one of the early realisations that we made a few months ago is that actually we all have needs. We all consume things in different ways; we have different preferences. So it is not about creating a marginalised community that says, "Here are people who are special and we need to develop technology that helps them." Actually, it is realising that everyone is special, and there are some people who have more challenging needs and are less well served by conventional media. So how do we bring the tools right throughout the process of creating media, from the early artistic concepts through to the distribution, the consumption and so on? How do we use AI to deliver all of that? All of that is the kind of thing that we are teaching our students and exploring with them. And we don't have all of the answers. We are setting out with open minds and open hearts. We want to hear from people who have opinions, who have experience, who have real-world challenges that they want to bring to us, that can feed into the challenges we want to address. So, if you are interested, please reach out.
ANTONIO: One of the things that I would like you to tell us about in this journey of trying to solve problems with AI for a good purpose: for many years, Andrew, and for most of the 20th century, we have had different branches of science, sometimes kept apart, but this is actually bringing them all together. How do you feel that this type of change that AI is creating is also bringing together different areas of science, and how do you see this influencing the future of academia and the way that we even consider the sciences as separate entities?
ANDREW: That is a really great question. It is something that is very close to our hearts. When we created the AI Institute, we deliberately created something that is multidisciplinary. It sounds obvious to the outside world; why wouldn't you do that? But in academic institutions it is fairly alien. A lot of academic institutions are very siloed; there is very deep expertise in a very narrow field, and people are kind of encouraged not to collaborate or go across disciplines. We decided to go counter-cultural to that with the institute.
With this centre for doctoral training that we want to create, if we are funded, fingers crossed, it is all about multidisciplinarity. One of the innovations that we put into the programme is that instead of looking at individual studentships concentrating on a single subject, and building our own micro-silos for all of those students, they will all be doing their own research but throughout their time they will be working on group projects which are stimulated and inspired by outside organisations. So every few months, certainly once a year, we are going to have a process where we put together multidisciplinary projects, where we pull students who may have psychology backgrounds, performing arts backgrounds or computer science backgrounds, and put them together in small groups to solve a partner-inspired problem as part of their training, their learning experience as doctoral candidates. That way they come out with not only great research credentials but really good team experiences, really good multidisciplinary experiences, and they will have worked with outside organisations that don't work like academia and are not afraid to ask some of the simple, blunt questions that sometimes we skirt around. So as an experience for students, they will be really supercharged when they come out of this process, with a great wealth of experience, and we hope this is a model that we will see increasingly; we want to seed it, to inspire other universities to follow the same sort of idea. And to your question, which is what do disciplines mean in the future, and how multidisciplinary will they be? I think it is a great question. Then there is what impact AI will have on education and the way we deliver education, which is a whole other topic we could discuss, as I think education may be one of the sectors most highly impacted by AI, and we don't quite realise it yet. So that, and the question of multidisciplinarity, I think, is really important, and something that many academic institutions struggle with, but it has to be on the future roadmap in order for universities, for tertiary education, to remain relevant.
NEIL: Excellent. Thank you. Great comments. It has been a fascinating talk; we hit the end of our time without me even noticing. I would like to say thank you very much, Andrew. Thank you to Amazon for supporting us and to MyClearText for helping us with captions and keeping us accessible. We really look forward to continuing the discussion on Twitter and to seeing what develops next and how those skills develop. So, thank you very much.
ANDREW: Thank you.