AXSChat Podcast

AXSChat Podcast with David, Jeremy, Cyan & Jeff from McGill

September 15, 2022. Antonio Santos, Debra Ruh and Neil Milliken talk with David, Jeremy, Cyan & Jeff from McGill.

David Brun
Founder, Gateway Navigation CCC Ltd. Specialist in designing indoor and outdoor audio-based navigation networks. My past experience as a small business owner, twenty years with TD Canada Trust in the Branch Network and Pacific Regional Office, and my authentic experience as a blind person, both as an advocate and a not-for-profit board member, have given me the perspective, passion and resources to work with a dynamic team of like-minded colleagues and partners to enhance mobility, employment and social independence for blind and disabled persons. Join us in creating real change for all Canadians.

Jeremy R. Cooperstock is a professor in the department of Electrical and Computer Engineering, a member of the Centre for Intelligent Machines, and a founding member of the Centre for Interdisciplinary Research in Music Media and Technology at McGill University. He directs the Shared Reality Lab, which focuses on computer mediation to facilitate high-fidelity human communication and the synthesis of perceptually engaging, multimodal, immersive environments. He led the development of the Intelligent Classroom, the world’s first Internet streaming demonstrations of Dolby Digital 5.1, multiple simultaneous streams of uncompressed high-definition video, a high-fidelity orchestra rehearsal simulator, a simulation environment that renders graphic, audio, and vibrotactile effects in response to footsteps, and a mobile game treatment for amblyopia. Cooperstock’s work on the Ultra-Videoconferencing system was recognized by an award for Most Innovative Use of New Technology from ACM/IEEE Supercomputing and a Distinction Award from the Audio Engineering Society. The research he supervised on the Autour project earned the Hochhausen Research Award from the Canadian National Institute for the Blind and an Impact Award from the Canadian Internet Registration Authority, and his Real-Time Emergency Response project won the Gold Prize (brainstorm round) of the Mozilla Ignite Challenge.

Cyan Kuo is a research professional with an eclectic background in education, information technology, and the arts. In the past, they have worked on projects such as a benchmarking paradigm for walking interfaces in virtual reality and using virtual reality for rehabilitating those with vestibular system sensory disorders. At McGill University’s Shared Reality Lab, they manage user testing and participant and community outreach on the IMAGE project, and make sure day-to-day lab activities run smoothly. They have an interest in multisensory aspects of video games and interactive media, and are a strong believer in technology for social good and engineering for inclusivity. Cyan has an Honours B.A. in Dramatic Arts and Humanities, an Honours B.Sc. in Cognitive Science and Computer Science and an M.Sc. in Computer Engineering.

Jeffrey R. Blum has worked in mobile software for over 25 years, starting as a Program Manager on Microsoft’s Windows Mobile team, followed by his role as Director of Product Design at Mindsurf Networks, a startup building PDA software for use in schools. After developing several mobile products for professional photog

Support the show

Follow axschat on social media
Twitter:

https://twitter.com/axschat
https://twitter.com/AkwyZ
https://twitter.com/neilmilliken
https://twitter.com/debraruh

LinkedIn
https://www.linkedin.com/in/antoniovieirasantos/
https://www.linkedin.com/company/axschat/

Vimeo
https://vimeo.com/akwyz




This is a draft transcript produced live at the event and corrected for spelling and basic errors. It is not a commercial transcript.

AXSCHAT with David, Jeff, Jeremy, Cyan

NEIL:

Hello and welcome to AXSChat. We have a completely full house today. So, what can I say, I don't think we have ever had quite so many people on one chat before. So, welcome to David, Jeremy, Jeff and Cyan. We are delighted to have you with us to tell us about your work on accessible images. This is really exciting but also seemingly a bit of a theme, because last week we were doing accessible SVGs, so obviously we are having a moment here in terms of accessibility and images. So, David, maybe you want to start and tell us a bit about the project and lead on to Jeremy and Jeff and Cyan, so that you can take us through what it is you're doing, because I think it's really...

DEBRA:

And also, Neil, I'm sorry to interrupt as I just did, but could we just take like two minutes for each of them to introduce themselves?

NEIL:

Absolutely, yes.

DEBRA:

Sorry, I know I'm so rude. But just so that you all can introduce yourselves quickly and then we can get into it; we are excited about the topic.

NEIL:

That's quite alright, maybe the King's English didn't actually translate because that's exactly what I meant.

DEBRA:

Very good. I love how you said that. Good job.

DAVID:

Great, well, good morning, at least in Vancouver here. About five years ago I founded, along with the Canadian Council of the Blind, a community contribution company called Gateway Navigation CCC Ltd, to try to address some of the accessibility issues around access to the built environment, which then also led to this project, which is about access to the internet. The project here, on which we have been collaborating with McGill University and the Shared Reality Lab, is called IMAGE, which stands for Internet Multimodal Access to Graphical Exploration. And with that I'll pass on to Jeremy for his introduction.

JEREMY:

I am always impressed that David actually remembers the acronym, and to his credit, he is the one who came up with it and we love it. So, thanks to David getting in touch with us many years back and talking about some of the issues faced by the blind and low vision community, we steered our discussion towards internet access, as David mentioned, and in particular graphics. Those using screen readers have access to all the text on web pages, but when you encounter an image, whether it be a photograph, a map, a chart, a line drawing and so forth, you're generally limited, unless the person who created that page has put in effective alt text that describes what is there. We set out to remedy the problem where a user is accessing a web page and doesn't get any description of that content. For roughly the last year and a half we have been using a variety of tools and technologies to try to interpret the contents of those graphics and render them through a rich audio description that isn't just spoken words but uses audio cues, so non-speech audio, representing things in a spatial audio manner so you hear directionality, and also, optionally, rendering information through tactile displays, the sense of touch. Right now, we have been working with two different technologies to do that: one being the Dot Pad, a refreshable pin array display, and the second being a lower cost, more consumer-affordable technology out of Montreal called the Haply 2DIY. It's a two-link planar robot arm sort of system that can track your position as you move it with your finger and can exert force feedback, and we are working with the company to render that as an effective interface, so that you can experience graphics both with sound and touch. We have with us two members of our team here: Jeff Blum, who is the project manager who deals with all things technical, and Cyan Kuo, who is our usability expert and user-facing engineer dealing with all of the testing and understanding of the user experience. And thank you for inviting us to participate today. It's a pleasure for us to speak with you.

NEIL:

Jeff, please tell us a little bit about you and your experience on this project?

JEFF:

Sure. So, I'm Jeff Blum. As Jeremy mentioned, I'm the technical project manager. One of the things I like to point out, in addition to what David and Jeremy have already said, is that IMAGE is designed to be open source, so we work in public, and anyone who wants to use our tools or expand upon them can do that, because they can get all of our source code and use it. In addition, we have created something that we view as a platform. We know that our research group, which is a relatively small group, is not going to be able to handle every type of graphic on the entire internet. That's just too large a task. We've seen a lot of research projects spin up and handle one particular type of graphic, run some user tests, write a paper and then disappear. So, the other element of IMAGE that interests me the most is that it's a platform and it's very modular. If you are a researcher and you want to extract information related to product photos or jewellery or memes or any number of other types of graphics, you can create a module that plugs into our architecture to do just that particular piece, and you get access to all the tools for the audio spatialisation and for talking to the haptic devices, and you can sit on top of that. So, we view it as an ongoing platform, a basis for this type of research and deployment to the community.
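As a rough illustration of the plug-in idea Jeff describes, here is a minimal sketch of what a handler module contract could look like. The interface names and fields are hypothetical and are not taken from the IMAGE codebase; the real schemas live in the project's open-source repositories.

```typescript
// Hypothetical sketch of a pluggable handler contract; names and fields are
// illustrative only and are not taken from the IMAGE codebase.

interface GraphicRequest {
  imageBase64: string;          // the graphic the user selected
  pageUrl: string;              // page context the graphic came from
  preprocessorData: Record<string, unknown>; // e.g. detected objects, map coordinates
}

interface Rendering {
  title: string;                // e.g. "Object layout (spatial audio)"
  description: string;          // plain-text summary for the screen reader
  audioUrl?: string;            // pre-rendered spatialised audio, if any
  hapticTrack?: unknown;        // payload for a haptic device, if supported
}

// A specialised module (product photos, memes, floor plans, ...) only has to
// answer two questions: "can I handle this?" and "what renderings do I produce?"
interface Handler {
  canHandle(req: GraphicRequest): boolean;
  render(req: GraphicRequest): Promise<Rendering[]>;
}

// Example: a trivial handler that produces a text-only rendering from a caption.
const captionHandler: Handler = {
  canHandle: (req) => "caption" in req.preprocessorData,
  render: async (req) => [{
    title: "Caption",
    description: String(req.preprocessorData["caption"]),
  }],
};

// The server would collect renderings from every handler that opts in.
async function renderAll(handlers: Handler[], req: GraphicRequest): Promise<Rendering[]> {
  const results = await Promise.all(
    handlers.filter((h) => h.canHandle(req)).map((h) => h.render(req)),
  );
  return results.flat();
}
```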

NEIL:

Okay. That sounds really interesting, and we will probably want to come back to that, but let's first do the round table and get to Cyan.

CYAN:

All right. Hi, I'm Cyan. I guess I'm the usability expert here with the IMAGE project, and I would like to add on to what Jeff was saying. Right now we are doing a lot of user research, and we are trying to make sure that everything is accessible to our intended audience. But even further than that, we are currently talking about expanding the tools so that they can be used to create content, so that people who are blind and low vision can create content that works in the IMAGE architecture, because for the most part the lab is a bunch of sighted people, and we want to give that power back to people who are experiencing vision loss.

NEIL:

Excellent. Thank you. So, there are a couple of things that have piqued my interest. One is that I think the open-source bit is important, and now I understand the connection with Mike, because Mike Gifford from the open-source world has been a big advocate of open-source accessibility for a long time, and the intention that it is something sustainable and scalable is really interesting to me. The second bit, which I've been fascinated with for an awful long time, is haptics. I'm really interested in haptics, and particularly some of the emergent technologies like mid-air haptics and ultrasonic arrays. Are you working with that yet? Because that's an area that I think has, you know, enormous potential long term for enabling people to engage with all kinds of things in a really quite deep way.

JEREMY:

So, we have not done any work with the Ultrahaptics technology or other ultrasonic haptics. I agree with you, it is very compelling; being able to manipulate or interact in a volumetric environment is compelling. I think the technology is still obviously developing to reach a point where it's strong enough and compelling enough to render those types of experiences. But for our needs, although we have dabbled in discussing the idea, well, images often are expressing depth. You have a picture of scenery, where there's obviously foreground and background. Perhaps it would help in conveying understanding if you experienced that with depth. But by and large we are talking about two-dimensional content in a web page, and therefore rendering that with a two-dimensional haptic reproduction system, or something where you experience it in a plane and have either raised pins or experience forces, is probably adequate. But we would certainly be keen to have the ability for plug-ins, as Jeff mentioned; if somebody wants to build an ultrasonic plug-in for our architecture to render haptics through whatever hardware or software platform, we should hopefully support that.

NEIL:

I mean, I think it's maybe still early days. If you think about the Gartner hype curve, it's probably at the peak of the hype curve, and we'll go through that trough of disillusionment before too long. But I am still hyped about it, and I still see enormous potential, and actually the technology is relatively affordable. So, I wonder how much of that might get embedded into mainstream devices, like we have with LiDAR in mobile phones.

JEREMY:

Once a mobile phone manufacturer says we are adding haptics it becomes widespread.

NEIL:

Well, certainly haptics is there, but not to... So, I remember going to an RNIB conference, what, 15 years ago, and they were talking about haptic feedback in mobile phones then. And you know, if you think about what happens on an iPhone these days, you've got the haptics on the device because there are so few buttons. It's all virtual. It's the illusion of touch that you get there rather than the actual pressing of buttons and everything else. So, that may be the case, but even so, and I will stop monopolising the conversation on haptics in a second, but even with something like a mobile phone, if you're saying that you only need two dimensions, could you convey the information that you're collecting in your project? A lot of blind users use their mobiles. Could you use your project and feed those haptics through a mobile device?

JEREMY:

Jeff, this is really your neck of the woods.

JEFF:

Sure. So, just to be very clear, unfortunately IMAGE does not currently work on mobile, so I'll get that right out there. It's a browser extension that currently works on laptop or desktop browsers like Chrome and Firefox. But we obviously have our eye on iOS; we hear again and again that this is the primary interface that many people use to access content. Fortunately, Apple made browser extensions available in Safari on iOS late last year. So, we are exploring that now in order to extend there, and obviously the experience is going to have to be somewhat different on a mobile device; it just works differently. For example, we use a context menu so you can tell us, this is the photograph that I'm interested in. That's not going to work on mobile. So, we have a project starting up with some students, hopefully very soon, to create that mobile experience, both from the design side, how it has to work differently, as well as the platform side, in order to get it to work with that extension. And one of the key things there is that the haptic engine on Apple devices is super good. So, one of the original things we had considered is, can we use that device, maybe even in tandem with your desktop browser, to give you a haptic experience while using other devices. So, I think it's not even just a matter of using it on those mobile devices directly, but maybe using it as a haptic device you already have, so it's essentially free, to make the experience richer even if you are using your desktop. So, I think there are a number of ways we can go there.

DEBRA:

So, this is Debra Ruh, and I'm a little tiny bit lost. So, would it be okay if we just backed up? Because I know that we talked off air, but I'm searching for an image in the background. Could we just talk about what the project is about, and I apologise if I'm being simple, and also why it was important to David to get involved? I know we are talking about haptics and objects and the internet, and I know we were talking about alt tags, Jeremy, but I'm just getting a little bit confused about the tool, so I apologise.

DAVID:

Sure, it's David. Thank you, Debra. The whole idea of the project developed out of a conversation I had with Jeremy about four years ago, when I had been at the large architectural design show here in Vancouver called BUILDEX. It was myself, Bill Taggart, who is a blind instructor in architectural design, and Albert Ruelle from the Canadian Council of the Blind getting together with a tech guy, and we were going to the Vancouver Convention Centre. So I went online just to get an understanding of where the room was that we were going to meet in, and of course, what I encountered, even though they commented on how accessible the venue is, was that the actual information was all graphical. I was just trying to understand how to find the room, where the washrooms were and that sort of stuff, and that was not accessible, and it's something that happens, I think, to everybody around the world who is blind on a daily basis. So, what I raised with Jeremy was, what can we do to address this issue? So that at some point I can go on to a website for a building or a shopping centre or someplace I'm going, and if there is a map, how can that be accessible to me? That was really what initiated this whole conversation, and with that I'll pass it back over to the IMAGE team, who have taken my complaints and attempted to develop a solution.

JEFF:

Cyan, do you want to give a description, or should I give a background?

CYAN:

Actually, I figured that it probably would be best if we just, can I share my screen on this platform?

NEIL:

You should be able to. At the bottom of the meeting there is a share option.

CYAN:

Give me one second. I don't think it's allowing me to do that. No. Yeah, just let me sort out the technical issues here.

JEFF:

While you're doing that, maybe I can just give a kind of nuts-and-bolts background to what IMAGE is from a practical perspective. So, if you go to our website, which is image.a11y.mcgill.ca, you'll see at the top there's a link to download the IMAGE Chrome browser extension. You fire up your Chrome browser, you install our extension, and what that's going to do is give you a new option on photographs. So, you go to a photograph and, as I use a mouse, I right click, but every operating system has its own keyboard equivalent for bringing up a context menu on a photograph. You do that and you get a new item there called "Get IMAGE rendering", and when you trigger that, it sends that photograph and some other information up to a server that we are running. It goes through a couple of stages: there is a stage where it extracts information and identifies what objects are there; if it's a map, it looks at where that is and queries some geographic databases to find out what streets are around and what points of interest are there; and then it goes through another stage where it actually renders the spatialised audio and the haptic experiences, and then it returns that back to your browser, where you can interact with it. And so, for one photograph you might actually get multiple renderings. You might get one that is focused on the objects, and you might get another one that is just a text description of it. So, all of these modules on our server work together to create that experience. From a nuts-and-bolts perspective, that's how you would actually interact with it. It's live now, so you can go there and give it a try today. I will comment that our main server might be going down for updates in a few minutes. But it is up right now. There might be a little...
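To make that round trip concrete, the sketch below shows how a browser extension could, in principle, package a selected photograph and ask a rendering server for results. The endpoint URL and JSON shape are assumptions for illustration only, not the actual IMAGE protocol.

```typescript
// Hypothetical client-side round trip; the endpoint and payload shape are
// assumptions, not the actual IMAGE protocol.

interface RenderingResponse {
  renderings: Array<{ title: string; description: string; audioUrl?: string }>;
}

async function requestRenderings(imgElement: HTMLImageElement): Promise<RenderingResponse> {
  // Draw the selected <img> onto a canvas so it can be serialised.
  // Note: cross-origin images need appropriate CORS headers for this to work.
  const canvas = document.createElement("canvas");
  canvas.width = imgElement.naturalWidth;
  canvas.height = imgElement.naturalHeight;
  canvas.getContext("2d")!.drawImage(imgElement, 0, 0);

  // Send the graphic plus page context to the rendering server.
  const response = await fetch("https://example-image-server.test/render", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      graphic: canvas.toDataURL("image/png"),
      sourceUrl: window.location.href,
      altText: imgElement.alt || null,
    }),
  });
  if (!response.ok) throw new Error(`Rendering request failed: ${response.status}`);

  // The server replies with one or more renderings (text, spatial audio, haptics)
  // that the extension then presents in its results pop-up.
  return (await response.json()) as RenderingResponse;
}
```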

CYAN:

I was going to say, Jeff, why would you do that to me now?

JEFF:

The texts are coming; they show up when they show up.

CYAN:

Okay. Yes, I figured that it would probably be best if I just walk you through a basic example. So, right now, I'm just on our web page, because I know these images work, by the way, and all I'm doing is bringing up a context menu and selecting "Get IMAGE rendering" on this photograph. "Image request sent." Are you able to hear that? "Processing data." Okay. "Processing data, processing data. Image results arrived." So, on my screen over here is a pop-up, and you get a text description of what the machine learning spits out, and the cool part over here is you can get spatialised audio of the different components in the picture. So, I'm just going to play the rendering. The photo that I just ran it on is a picture of my hometown, Toronto, Canada; it's a specific street, an intersection there. So, I'm just going to play this. "This outdoor photo contains the following outlines of regions: building [music], car [music] and signal. It also contains the following objects or people: six people, two umbrellas, two traffic lights, four cars, a bus and a backpack." So, if you're wearing headphones like I am, you should be able to hear the sound kind of sweeping left to right, and some people are able to pick out the elevation of the objects; if you can't, it's also reinforced by pitch. So, what we are trying to create is a non-textual, sort of visceral description of where things are, so you can understand the composition of elements in a photograph.
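The mapping Cyan describes, left-to-right position rendered as stereo pan and height reinforced by pitch, can be approximated with the standard Web Audio API. The frequency range and tone length below are arbitrary choices, not IMAGE's actual parameters.

```typescript
// Illustrative Web Audio sketch: play a short tone whose stereo pan follows an
// object's horizontal position and whose pitch rises with its vertical position.
// The frequency range and tone duration are arbitrary choices for this example.

function playObjectCue(
  ctx: AudioContext,
  x: number, // 0 = far left of the image, 1 = far right
  y: number, // 0 = bottom of the image, 1 = top
): void {
  const osc = ctx.createOscillator();
  const panner = ctx.createStereoPanner();

  panner.pan.value = x * 2 - 1;         // map [0, 1] to the [-1, +1] stereo field
  osc.frequency.value = 220 + y * 660;  // higher objects get a higher pitch

  osc.connect(panner).connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + 0.3);      // short 300 ms cue per object
}

// Example: announce a detected "traffic light" near the top right of the photo.
// Most browsers require the AudioContext to be created after a user gesture.
// playObjectCue(new AudioContext(), 0.85, 0.9);
```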

JEREMY:

I just want to jump in and comment, because Cyan is probably not aware of it, that through this podcast system it's not actually spatialised; I'm not actually hearing it in locations. It just sounds like it's coming through in mono to me. So, I would encourage people who are listening, if it's not spatialised in your head, go to our website and you can get these demos there either as recordings, or install the extension and you can get them live.

DAVID:

Great. This was an example of the spatial audio, but when I met Mike Gifford at the NFB convention in New Orleans, I was out front where people from McGill were giving demonstrations of IMAGE, which also included our use of the Dot prototype tablet, the refreshable pin array that Jeremy referred to earlier. And so, we were getting a lot of people and a lot of interest in that. But what also transpired, which we found out at the conference, was that HumanWare and the American Printing House for the Blind are working with the Dot prototype to create a refreshable braille and graphic display that will be coming out in a year or two. So, it was also very interesting that the haptic prototype, which is made in South Korea, looks like it may have a much larger life than what we initially thought with what was being done at McGill. Anyway, I will pass it back to the group now.

DEBRA:

Great comments though, those are very helpful David, very helpful also. Go ahead Antonio.

ANTONIO:

I'm very curious how this can fit into the day-to-day life of people who use the internet. Say, when I'm navigating on Twitter, or when I'm going to LinkedIn. How can this be brought to social apps?

DAVID:

Just from a user perspective, one of the things that we identified very early on in this project is that on many websites, about 80% of content is graphical, and where alt text has been put in there by people doing really good work, there is a level of understanding with that. But that is not always available, and it also doesn't really give you an ability to explore the information; it's more that the information is just being told to you. So, for things like graphics, in the case of the IMAGE project, the team has been looking at the use of pie charts. There is a format that provides the background information, and the system can be set up to take that information and provide it in a way that lets the user explore it in more detail. An example is a pie chart, where, for a sighted person, because at one time I was sighted, you could pick up information very quickly from a graphic like that. So, part of this is to use spatial audio to provide that nuance, quickly giving you the information and the comparison of sizes and so forth, done through spatial audio. The other issue is with IMAGE accessing things like Google Maps, where the graphic itself has largely not been accessible, and the team here has used really clever engineering to make it accessible using IMAGE. And actually, with that, because I'm not the expert on it, I'll pass it over to Jeff, who can explain it a little bit.

JEFF:

So, it's funny, because for the photographs, like Cyan was demonstrating, we use machine learning that extracts and detects the umbrellas and things in the photograph. For maps, we decided to do a bit of an end run. Instead of taking a picture of the map in JPEG or whatever and trying to make sense of that, what we do is try to determine the latitude and longitude of the centre point of that map, and then we query additional databases on our server in order to figure out the street layout and what is around there, so that we can render it. That has a lot of benefits in terms of accuracy, because we know exactly where it is if we can determine that. So, we try to be clever about how we use machine learning; it's not always the solution to the problem. And as David alluded to, for charts we use a format called Highcharts, where we can get to the underlying data. To expand on what David was saying, for a pie chart there is really a difference, right? Through a screen reader you might get a table of the data, but the information designer for that page chose not to show it as a table. They showed it as a pie chart because they wanted to emphasise certain aspects and relationships in the data, and that's where we go beyond what a screen reader might do of just reading out the information, to try to give you that same visceral experience of the pie chart by sonifying the sizes of the different wedges in a kind of pattern around your head. And last, before I shut up, Antonio, just to speak directly to your social media comment: social media is like any other content to us. If there is a photograph there, you can right click it or bring up the context menu, send it up and get these kinds of experiences for that content as well. But one of the things that excites me for the future is that maybe there are specific aspects of social media, and I know Jeremy hates this example, but I like to bring up memes. What if we had a specialised module on the server that recognised those memes and, instead of just saying this is a Rick Astley video, actually gave you a customised experience of a Rickroll? Things like that. So, I think there is more that can be done there. But again, that's why we are a platform. Eventually, if we get to scale, maybe Twitter decides they want to make a module for IMAGE, or run IMAGE on their own site and create these experiences using the tools we have. That would be part of the dream of the future of IMAGE.
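As an illustration of that end run, the sketch below queries open geographic data around a map's centre coordinates instead of running image recognition. It uses the public OpenStreetMap Overpass API purely as an example; the transcript does not say which geodata sources IMAGE actually queries.

```typescript
// Sketch of the "end run" for maps: skip image recognition entirely and query a
// geographic database around the map's centre point. The Overpass API is used
// here only as an illustrative open data source.

interface PointOfInterest {
  name: string;
  lat: number;
  lon: number;
}

async function pointsOfInterestNear(
  lat: number,
  lon: number,
  radiusMetres = 200,
): Promise<PointOfInterest[]> {
  // Overpass QL: named amenities within the given radius of the centre point.
  const query = `[out:json];node(around:${radiusMetres},${lat},${lon})[amenity][name];out;`;
  const response = await fetch("https://overpass-api.de/api/interpreter", {
    method: "POST",
    body: new URLSearchParams({ data: query }),
  });
  const result = await response.json();

  // Each element carries its coordinates and tags; keep just what a renderer needs.
  return result.elements.map((el: { lat: number; lon: number; tags: { name: string } }) => ({
    name: el.tags.name,
    lat: el.lat,
    lon: el.lon,
  }));
}

// Example (hypothetical coordinates near downtown Vancouver):
// pointsOfInterestNear(49.2827, -123.1207).then(console.log);
```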

NEIL:

Excellent. So, I think Cyan you wanted to just quickly demo the pie chart if you want to share your screen again?

CYAN:

Now that I know how to do it.

NEIL:

Yeah.

CYAN:

Also, I wanted to add, it would be a special treat for me if someone were to make a meme handler. Just saying. Yeah, so the nice thing about a pie chart is that, for sighted people like us, we are getting proportional data, right? And so, we as designers have tried to replicate the feeling of seeing all that proportional data. I'm just going to play this for you, and you're going to hear the different wedges as sounds; the amount of time it takes for each sound to play is proportional to the size of the wedge. So, just listen in, and I should have probably done that beforehand. But anyway.

JEFF:

I'm not hearing that unfortunately.

CYAN:

You're not?

JEFF:

Your sound was working before but I'm not getting it now.

CYAN:

Let me try again. Share screen and system audio. There we go. So, yeah, let me... "Image request sent. Image results arrived." "0-9 years, 7.5%; 10-19 years, 9.9%; 20-29 years, 16.8%; 30-39 years, 16.7%; 40-49 years, 16%; 50-59 years, 12.6%; 60-69 years, 7.4%; 70-79 years..." But you can hear the different wedges take different amounts of time, right? And I think that's what is particularly cool about our pie chart experience.

DAVID:

Sorry, Jeff, I was also going to say that if you are using a stereo headset and you're on the extension, you would get that in spatial audio as well. So, anyway. Jeff?

JEFF:

You've taken the words right out of my mouth, David. You are hearing that around your head, and it creates a different experience, not just the timing of it, but the space around you as it moves around your head. So, just to be clear, what you were hearing there is COVID data, the number of COVID cases in different age groups. And to be clear, this is a live demo, so I'm watching our server now and I'm seeing these requests come in. This is not something we had to create manually. This is the actual COVID data website; it uses the Highcharts format. We're actually sending that data up to our server, rendering it, and sending it back in the second or so before you saw it come back.
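A minimal sketch of the sonification Jeff and Cyan describe might look like the following: each wedge gets a tone whose duration is proportional to its share of the total, while the pan position steps across the stereo field so successive wedges seem to move around the listener. The total duration and pitch are arbitrary choices, not IMAGE's actual renderer.

```typescript
// Illustrative pie chart sonification: tone length is proportional to each
// wedge's share of the total, and the pan steps across the stereo field.
// Total duration and frequency are arbitrary choices for this example.

function sonifyPieChart(
  ctx: AudioContext,
  wedges: { label: string; value: number }[],
): void {
  const total = wedges.reduce((sum, w) => sum + w.value, 0);
  const totalSeconds = 6;                 // whole chart plays in roughly 6 s
  let startTime = ctx.currentTime;

  wedges.forEach((wedge, i) => {
    const duration = (wedge.value / total) * totalSeconds;
    const osc = ctx.createOscillator();
    const panner = ctx.createStereoPanner();

    // Spread wedges across the stereo field so the chart appears to move around you.
    panner.pan.value = -1 + (2 * i) / Math.max(wedges.length - 1, 1);
    osc.frequency.value = 330;            // one reference pitch for every wedge

    osc.connect(panner).connect(ctx.destination);
    osc.start(startTime);
    osc.stop(startTime + duration);
    startTime += duration;
  });
}

// Example with a few of the age-group shares read out in the demo (percent values):
// sonifyPieChart(new AudioContext(), [
//   { label: "0-9 years", value: 7.5 },
//   { label: "10-19 years", value: 9.9 },
//   { label: "20-29 years", value: 16.8 },
// ]);
```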

DAVID:

Yeah, and I would also add that we don't retain that information.

JEFF:

Yeah. Data privacy is very important to us, so we do the queries and then we get rid of that data. You could say we keep it temporarily, but we don't know what is coming up, so.

NEIL:

I think that transparency about data privacy is hugely important for a number of reasons. Antonio and I both work for a European company, and we constantly have challenges where data is being sent outside of the EU, or outside of the UK in my case. So, that transparency and the data retention policies and all of this kind of stuff make it possible for large organisations to green light the use of some of these tools, because we know that people could benefit from these things, but in an enterprise environment, if you can't document where the data is going, what the privacy rules are or how it's processed, then the likelihood is you will never get the tool allowed or permitted in that environment. So, kudos to you for having recognised that.

JEREMY:

I don't think, in fairness, that kudos are necessarily deserved, in the sense that, as academic researchers in a university, we have a research ethics board that has strict requirements for what we are allowed to do, and we have to go through all sorts of checks and safety concerns there. Then there is the add-on that when we make this available on the Google Play store, Google has its own policies related to data privacy that we have to respect as well. Now, we are happy to follow those, don't get me wrong. But I don't think we should take undeserved credit for doing so.

NEIL:

Yeah, fair enough. So, this is a fascinating topic, and I could carry on for ages, but unfortunately we have reached the end of the allotted half an hour. But I think that what you are doing is really interesting, and I have already downloaded the plug-in, so I'll have a play with it in Chrome. Oh, go on Debra, you want to ask one more question?

DEBRA:

And as I went to unmute, I turned my camera off. But the last question I would like to ask, and thank you, Neil: what can the community do to help you all with this? What can we do to really get behind you? We have a powerful community that believes in this, and I like what I'm seeing as we are rethinking what accessibility means and blending it into how we really make sure David has real access. So, I like that you are engaging with our community, bravo for that. But yeah, what is your ask? Come on, what is the ask?

CYAN:

Please download this extension, try it out and let us know. Communicate with us. I think that's the biggest thing. We can design all sorts of tests and tools to evaluate how good this is but it's only as good as the feedback we get really. Yeah, that's a big one.

JEFF:

I would say that Cyan nailed it. And realise that we consider this a beta; this is the beginning. So, we have got the platform going, we have some experiences, but we know they are not broad enough and we know they're not deep enough, and what we really need is that feedback on what is stopping you from using it. Is it a usability issue that you can't overcome? Is it the interface that we have? Is it that we don't handle the types of content you like? That's kind of opaque to us. So, as Cyan said, give us that feedback; don't just try it, say it's not interesting for me, and be done. Say what it was that would have made it really compelling for you.

CYAN:

One thing that happens a lot in usability is you come up with all sorts of tests, and they are canned; they're very limited in what they can do. The best thing to do is to get it into people's hands. Take it out into places where people are using it, with real people, and not just in the laboratory.

DEBRA:

And for clarity, when you say people using it, are you talking about individuals who are blind or have vision loss, or are you talking about designers who want to make sure that they are giving us graphics that work better for people? Who is the 'they'? And then I promise I'll go on mute before Neil kicks me off.

DAVID:

As the person who is connected with this as a user, of course my focus with the team has always been that this is only impactful if it makes sense to the users. So, we really need users to access it and give feedback to the team on what is working and what is not working, so that they can apply all of their expertise and knowledge to get it to the point where it really is impactful for the user. And my own belief is that if it's impactful to the user, then the rest will follow.

DEBRA:

I agree. Great job. Great job everyone.

NEIL:

Yeah. Really interesting stuff. So, thank you for sharing with us. Thank you to MyClearText for helping to keep us captioned and accessible all of this time. I also need to give a shout out and say thank you to those people who have already contributed to the AXSChat funding. If you haven't already, please do share this. What we will say also is that we don't want anyone in our community to put themselves in difficulty by contributing to what we do, so this is not a please go fund us at any expense. But if you can share it and find people with deep pockets who want to keep us on air, fantastic. So, on that note, I look forward to you joining us on Twitter for what will be, I'm sure, a little discussion.

JEFF:

Thank you.

JEREMY:

Thanks for having us on and we're looking forward to Tuesday.

DAVID:

Thank you.