AXSChat Podcast

What if deaf people could have truly private phone calls?

Antonio Santos, Debra Ruh, Neil Milliken

Communication is everything, but what if you couldn't make a private phone call? Tomer Aharoni, CEO and co-founder of Nagish, joins AXSChat to reveal how his team is revolutionizing accessibility for deaf and hard of hearing individuals.

For millions with hearing loss, the simple act of making a phone call has historically required dependence on others—interpreters, captioners, or family members. Tomer shares how Nagish disrupts this paradigm with AI-powered technology that enables truly private conversations. The service provides real-time captioning for incoming and outgoing calls while allowing non-verbal users to type responses that are converted to speech.

What began as a Columbia University side project has transformed into an FCC-certified service used daily by tens of thousands. Tomer details their remarkable journey—from building the first prototype in Facebook Messenger to securing critical certification after years of regulatory navigation. Throughout this evolution, one principle remained constant: developing directly alongside the deaf community.

The conversation explores Nagish's expanding ecosystem beyond phone calls, including solutions for in-person conversations, workplace communication, and even their groundbreaking virtual reality meditation experience "Silent Flow." Tomer explains their thoughtful design approach, making technology accessible without stigma, particularly for older users who might resist acknowledging hearing loss.

Looking toward the future, Tomer discusses how they're balancing innovation with privacy, using AI to improve context and accuracy without storing sensitive conversation data. While currently available only in the US due to regulatory requirements, Nagish has ambitious plans for international expansion.

Ready to experience communication transformed? Download Nagish from nagish.com and join the revolution in accessible technology that's breaking down barriers one conversation at a time.

Support the show

Follow axschat on social media.
Bluesky:
Antonio https://bsky.app/profile/akwyz.com

Debra https://bsky.app/profile/debraruh.bsky.social

Neil https://bsky.app/profile/neilmilliken.bsky.social

axschat https://bsky.app/profile/axschat.bsky.social


LinkedIn
https://www.linkedin.com/in/antoniovieirasantos/
https://www.linkedin.com/company/axschat/

Vimeo
https://vimeo.com/akwyz

https://twitter.com/axschat
https://twitter.com/AkwyZ
https://twitter.com/neilmilliken
https://twitter.com/debraruh

Neil Milliken:

Hello and welcome to AXS Chat. I'm delighted that we're joined today by Tomer Aharoni, who is the CEO of Nagish, and Tomer will explain to you what Nagish is shortly. Tomer, we've talked before coming on air, and I know you've already met Debra in person at Zero Project, and I somehow managed to miss you, because I was there this year as well. But welcome to the show. Can you tell us a bit about what Nagish is and the work that you're doing? Because it does sound really, really interesting.

Tomer Aharoni:

Of course. Hi, Neil, thank you for having me. So, as you said, I'm Tomer. I'm the co-founder and CEO of Nagish, which means accessible in Hebrew, and that's exactly what we're doing: we're making communication accessible for people with hearing loss. For those who don't know, today, if you're deaf, if you have a significant hearing loss, if you have a speech disability, your only way to communicate effectively is to rely on another person: a stenographer, a captioner, a sign language interpreter, maybe a family member.

Tomer Aharoni:

When we first came into this, it felt insane to us that deaf and hard of hearing people cannot have private conversations. We wanted to humanize the experience of communication. We know how crucial communication is during emergencies, and really at any time; communication is everything. So we decided to build something, and that something was the first iteration of Nagish, which is a completely private service, certified by the FCC, that allows deaf and hard of hearing individuals to place and receive calls privately. You can call any number and receive phone calls from any number, and Nagish will caption your calls in real time using AI. It's extremely accurate, extremely fast, and if you're non-verbal, or you prefer not to use your speech, you can actually use the keyboard to communicate and Nagish will generate speech for you. We did this as a side project for quite a long time, made it a company in 2022, got certified by the FCC in 2024, and then again in 2025. And now we offer this service to tens of thousands of people who use it daily to communicate.

Neil Milliken:

And you said we got certified by the FCC. I noticed there was a two-year gap there, because actually getting something certified by an organization such as the FCC is not a trivial thing. Maybe you can tell us a bit about what that involves. For context: relay services are the services that deaf and hard of hearing people use in order to communicate, and they usually involve humans, and the FCC is the Federal Communications Commission, the telecommunications regulator in the US; we have similar regulators in the UK. So maybe you could just tell us a bit about that, and then I know Antonio probably has a follow-up question.

Tomer Aharoni:

Absolutely. So, like you said, the FCC is the Federal Communications Commission. They're in charge of everything telecom and more in the US, and they have equivalents in almost every developed country. Really, they're in charge of everything from putting a stamp on an iPhone, to making sure that a new Wi-Fi signal is approved for the market and doesn't cause cancer and is efficient enough, to maintaining the regulation on relay services. Relay services are this class of services that are meant to let deaf and hard of hearing people communicate in a similar way to how hearing people communicate. Israel is actually a great example; I'm originally from Israel. If I'm deaf in Israel today and I want to place a phone call, I really have no options. I need to find someone who can help translate between me and a hearing person. Luckily, in the US that's not the case, because the FCC came in and said: we are going to reimburse providers who can help bridge these gaps. So over the last 30 years there have been a handful of providers, not too many, that operated call centers with either captioners, people who actually type, or stenographers, which are a slightly more professional version of that, or sign language interpreters who join a video call and translate between a deaf person and a hearing person.

Tomer Aharoni:

Now, when we first heard about this: I'm an engineer myself, I grew up in Israel, moved to New York 11 years ago, and studied computer science at Columbia. While at Columbia, together with Alon, my co-founder, also an engineer, we learned about this and thought we could probably do better. The year was 2019 when we first started working on this, and it just didn't make sense to us that a deaf individual cannot have a private conversation. So we decided that we were going to be the first AI-powered relay service. When we said that, everyone told us: yeah, that's not going to happen. Regulation moves slowly. It's going to take you 10 years to get certified. There are only four or five providers that are certified. Just don't do it.

Tomer Aharoni:

But we couldn't stop, because we started by putting the service out there. We built it with the deaf community. The first thing we did was reach out to the deaf community, contacting people and asking them: what do you need from your communication services? What do you want? And people at the time didn't even know what they wanted, because they didn't know it was possible. They didn't know it was possible to have a private conversation like that. So we built something.

Tomer Aharoni:

We hired Matt, who's still with us. He was our first employee and is our head of community today. Matt is deaf himself. And we built the first version of Nagish and just put it out there, no charge to users. But over time it became expensive and very time-demanding, and that's when we said: OK, we're taking the risk, we're leaving our jobs, we're applying to get certified by the FCC. And you're right, it did take us more than two years: actually two and a half years for the first certification and three and a half years for the second. But we did eventually get it, and now we can offer the service to every single individual in the US who's impacted by hearing loss, completely free of charge.

Antonio Santos:

You mentioned that one of the members of your team is deaf and is leading the community. Can you tell us about the process of developing the solution to reach where you are today? What were the dynamics? How have you engaged with users in order to perfect the product?

Tomer Aharoni:

Absolutely. One of the first experiences we had when we started: we tried to contact someone, his name was Jack if I recall correctly, and he was based in New York, running the local chapter of the National Association of the Deaf. We called Jack, and a woman answered the phone, and we were very surprised, because we thought Jack was a man's name. We just had a feeling that we were not talking with the right person, and we said: hello, is this Jack? And she said yes. And we said: OK, hi Jack, we wanted to let you know about this new service that we want to develop that would allow you to have phone calls. That's how we framed it back then; we didn't know which terms we should use. And she couldn't stop herself, and she immediately said: Jack already has something like that. In hindsight, we realized that that was the interpreter jumping in and telling us that Jack was already relying on relay services and was covered. Now, it's not something that usually happens; most interpreters are very professional. But it was such a unique point in time, and we were calling with so much excitement, that she just couldn't stop herself. We left that call, on one hand, very confused, not sure if Jack actually got our message; on the other hand, we realized that we had to develop this.

Tomer Aharoni:

So the next thing we did was start reaching out. We joined a bunch of groups on Facebook, groups like Deaf Night Out, deaf individuals who go out together in different cities across the US, and we just pinged members of those groups and told them: hey, we're two students from Columbia, we want to build something, we don't want to make any assumptions, we want to build this with the community. Can you tell us what you need? And we got a bunch of feedback, ranging from "I don't answer the phone, I just decline calls," or "when someone calls me, I decline the call and ask them to text me," to "I don't even have a smartphone." It was a really wide array of answers, and we decided to focus on the ones we could deal with, the ones that told us: I have a smartphone, I wish I could place phone calls, but right now my only option is to wait for an interpreter to become available.

Tomer Aharoni:

So the first iteration was very basic. It actually used Facebook Messenger, so there was no app. You'd log into Facebook Messenger, search for Nagish, and there was a bot behind the scenes that you would tell to start a call. The bot would ask you which phone number you'd like to call, the Nagish backend would call that number, and then you'd just communicate with someone over Facebook Messenger, which felt really clunky, and people hated it.

Tomer Aharoni:

So we got a bunch of feedback. The main feedback was: you need to develop an app. We need an app for this. So we built an app, put it on the App Store, and then, because we had worked so closely with that first group in the deaf community, we forgot that being deaf is not a binary condition. It's a wide range of conditions, and people have different needs.

Tomer Aharoni:

And one of the first feedback points that we got was: I can speak, why do I need to use the keyboard? And we were like: oh. The first few users we built it for preferred not to speak; they were profoundly deaf, so we never added audio capabilities. There was no audio, you had to type. And people complained. They were like: listen, I need your service, I need captioning for phone calls, but I don't want to use the keyboard, I want to speak using my voice. So we built what we call Nagish V2, which included an actual phone call behind the scenes. So you have the phone call, plus captioning, plus the keyboard, and you can type to speak. Then we got another set of feedback: the voices sounded robotic and weren't good enough, and the captioning engine wasn't accurate enough. So we built a captioning engine, and we included a new set of voices that sounded natural. And honestly, it's exactly the same process today. We just keep getting feedback from the community, and we keep improving and building more features and more products. The last product we built, Nagish Live, is exactly what we did for phone calls, but for in-person conversations, so now people can take Nagish out and have a conversation with someone in real life. We are now building Nagish for the workplace, exactly the same thing, only as an add-on for Zoom, Microsoft Teams, and Google Meet. And last week, actually two days ago, we introduced Silent Flow, which is something we are extremely passionate about.

Tomer Aharoni:

Silent Flow is a completely new product, something we've never done before. It uses virtual reality to let deaf individuals experience things that are really hard for them to experience day-to-day. So let me explain. Silent Flow, specifically, is a meditation experience. You put VR glasses on your face and go into this really calm world where you meet Glowy, a really cute character that looks a bit like a ghost, and it teaches you meditation in sign language using AI. It sees your hands, you can actually move your hands, we use hand gesture recognition, and we also look at your face to make sure that you follow. And what's really cool about it is that if you're deaf today and you want to take a meditation class, I really don't know many deaf meditation teachers, so you need to go with an interpreter, and it's not the best experience to go to a meditation class with an interpreter. Silent Flow is just a two-and-a-half-minute experience that we built to show what would be possible with technology going forward.

Debra Ruh:

Well, I'm so excited. Every time I've talked to you over the years, you always get me so stoked about where we could go. But I'd like to back up for a second, and then also talk about some of the specifics too, because, being here in the United States, I believe, and I hope I'm correct, Tomer, it's our 511.

Tomer Aharoni:

What do you mean by 511?

Debra Ruh:

And so I might not be correct. I know that there are different numbers, of course, that we have in the United States to do different things, for TTY and RTT; that might be 511, might be something else. There's a number that we have specifically for our deaf community, where they can call and get help to go into the systems that you're talking about. What is the number?

Tomer Aharoni:

So I believe that historically it was 711.

Debra Ruh:

711. All right, I'm so embarrassed. 511 is for if you want to dig and not dig up the electricity.

Tomer Aharoni:

Yeah, I think it's 711. But today most services actually offer mobile apps, so people just download the app, and that's mostly what they use.

Debra Ruh:

Right, but it was such a huge, huge, huge system for our country for so long, and so I just wanted to talk about that a little bit. So are you also part of that humongous infrastructure?

Tomer Aharoni:

No, we actually had no part in this, because we decided not to go that path.

Debra Ruh:

Probably because it was older and we were getting rid of it.

Tomer Aharoni:

Exactly.

Debra Ruh:

So I just wanted to ask, in case any other older American had thought about that. And I also want to agree with what you and Neil both said: it's a big deal to get an FCC certification. It's a very, very big deal. But Neil just had something smart to say in the chat, so I'm just going to say: 7-Eleven, right? It's not about the 7-Eleven stores.

Tomer Aharoni:

No.

Debra Ruh:

Right, always in the back to make me laugh. So anyway, this is what I'm curious about, Tomer, and I think I've asked you before and I've forgotten: why did y'all even start doing this? First of all, thank you for being such a big part of the community, and for reaching out to the community and understanding, probably in ways most of us don't, how diverse and proud this community is. I have lots of friends in the community, and they all think differently about it, they all have different opinions. It's really quite amazing, so I applaud it. But I wonder why you even decided to start going there. I've watched y'all do it, I see how committed you are to it and to the community, and I just also wanted to applaud that. So let me give the floor to you.

Tomer Aharoni:

Thank you. I honestly don't know; we were pulled into it. We were students, both me and Alon, and we learned about this problem, and as engineers, I think it combined two things. One, there was a problem that we felt we could, I don't want to use the term fix, but we could make better, we could make a slightly better reality. And on the other hand, there was a technical challenge, which personally challenged us.

Tomer Aharoni:

Honestly, we had no intention whatsoever to make something ongoing out of this. I always say we had no intention to make it a company; we had no intention to even make it a project. We wanted to build one thing, show that it's possible, and have someone else pick up on it. But as soon as we started reaching out to people from the deaf community, we felt the need was so strong we just couldn't stop working on it. So we found ourselves working nights and weekends. I graduated and accepted a full-time job, and on my first day on the job I asked their permission to keep working on this, again nights and weekends, and the same applies to Alon.

Tomer Aharoni:

And then COVID happened. When COVID happened, that's actually when we almost killed the project, because it became so expensive, and we were so burnt out, we were working like crazy, that we said: either we leave everything and do this, or we just kill the project. We looked at it back then and we had probably a hundred users, something very small. But because we only had a hundred users, we could see their faces; we knew each of them by name, and we couldn't live with the idea that we were going to turn off the service on them. So we decided to leave our jobs, take the risk, and make it a company. We were fortunate to have Comcast partner with us initially; they actually invested in Nagish and became our first investors, which allowed us to scale it in the very early days. Later on we were fortunate to bring in a bunch of other investors, some of the best investors out there, not just impact investors but real best-in-class investors who believe that you can build a sustainable business and, at the same time, make a change.

Tomer Aharoni:

Yeah, it just became the dream job.

Neil Milliken:

That's what it is. So it's a fantastic story, and I really have to applaud you for being able to navigate those complex regulatory systems, because they are really significant hurdles that you have to overcome.

Neil Milliken:

So you mentioned the diversity within the deaf and hard of hearing community and the different needs, and I'm thinking of the case of both my parents. They've aged into hearing loss, so they're not used to using captions, and they certainly wouldn't consider using text relay services or third parties to help them with phone calls, even though they themselves have been dealing with older relatives who have had even more profound hearing loss and have complained about the difficulties of communication. But they do use smartphones. So is this service available outside of the US, or is it still a US-only app? And how do you onboard older users, that population of people who are losing their hearing, onto a platform like Nagish? They may not consider themselves deaf; they've just got a little bit of difficulty hearing. How do you reach that audience, and how do you onboard the less tech-savvy onto your platform?

Tomer Aharoni:

It's a really good question, and one of the biggest challenges. If you look at the numbers in the US today, you have approximately 40 million Americans with some degree of hearing loss. If you look at the number of people who use relay services, we don't know the exact number, but it's way less: approximately 2 to 5% of that 40 million figure. The reason for that is, first, denial. From when someone starts to lose their hearing to the point where they need hearing aids, it takes them, on average, seven to eight years to take action.

Tomer Aharoni:

There is so much unnecessary stigma related to hearing loss. I've been dealing with it personally at home, with my father and now with my mom, and I've seen so much of it. When we designed Nagish, this was one of our guiding principles: it needs to feel exactly like the native phone app on your device. It cannot look different, it cannot feel different, it cannot be disorienting. It's a phone call experience, you know, with the added benefit of captioning, and that's been a guiding principle since then. All of our buttons are really big. We support larger font sizes.

Tomer Aharoni:

We don't make drastic changes, because they can disorient our users. We call it senior mode, but really we just don't make drastic changes for anyone. I'll give you an example: we wanted to change the icon of the app, just to a slightly different design. We are spending eight weeks slightly tweaking the icon every few days so that people don't get disoriented. So we make all of these very deliberate choices to ensure that people find it easy to use. But that's only about usage, after we've reached the people. The second aspect is: how do we get to these people? How do we position ourselves in a place where they understand there's no stigma? You just download an app, no one needs to know that you're using it. It will give you captions, it will give you freedom, it will be easier for you to communicate. It's our challenge, it's what we're doing every day, and it's been working pretty well, but we have a lot of work to do.

Neil Milliken:

So I know Antonio's got a question, but I have to praise you for that approach, that incrementalism and that care for taking people through change, because this is something that is not happening in tech in general, and change is actually very disabling for an awful lot of users. So thank you for that; I think it's absolutely best practice. Over to you, Antonio.

Antonio Santos:

I was thinking about emergency services. If you look at helplines around the world, this is a big problem for people, because most of the services available work through a phone call: you call an emergency service and somebody replies on the other side, and many people are restricted and unable to do that. And in some regions it's very difficult for the support services to identify that there's a deaf person on the other side who needs a different type of assistance and support. How do you find ways to integrate your technology into those existing systems?

Tomer Aharoni:

Yeah, so that's almost the same answer. When we developed Nagish, we said we cannot rely on the other side, on a hearing person who may have never communicated with a deaf person, to install something or make adaptations. That would be the best case, but in most cases you cannot expect corporations, small companies, or small businesses to make adaptations for the greater good. So the way we designed Nagish was, again, to increase the success rate of each call while only one side needs to use the service. A deaf or hard of hearing person downloads the app, and we have a set of tools for them, powered by AI, to make sure that the conversation goes smoothly.

Tomer Aharoni:

So one example of that: our non-verbal users can use a set of quick replies. As soon as a call connects, Nagish will send a message on their behalf; it's optional, they don't have to do it, but many choose to. It says: you're speaking to a deaf person, please be patient. Or even during the call: just a second, I'm reading your response, please hold on. So we added a set of responses. We also added a bunch of sonic cues that the hearing person hears so they know what's happening on the other side; we can actually give them cues about the background noises and everything that is happening in the deaf user's environment. That really improved the success rate of each call. And just like with anything else, we have a lot of work to do. It's progress, not perfection overnight, but we're getting there, and it's getting better and better all the time, without putting any requirements on the callee on the other side.

Debra Ruh:

I'd like to ask about AI. I would assume it's already starting to add a lot of value, but at the same time, I'm seeing, it can also add so much confusion and can be so misunderstood, even what we mean by that term, AI.

Tomer Aharoni:

Exactly. It's actually funny, because we've been using AI from day one. But AI is this umbrella term that became very, very popular in the last two years with the introduction of large language models, LLMs, and tools like ChatGPT. Now, when all of this first came out, the first thing we wanted to do, like many other app developers, was just put it in the app. But then we started thinking: what is going to be the value to the user from throwing AI everywhere in the app? So we said: we are actually going to improve our product using AI, but it's going to be completely invisible to the user. We don't want to confuse anyone. So we added a bunch of AI layers to improve the accuracy of captions and to include context, which was one of the biggest issues with automated speech recognition in the past.

Tomer Aharoni:

So I always give this example. Take the sentence "I'm not allergic to penicillin." If you want to check the accuracy of someone captioning that sentence, usually you would use word error rate: how many words were added, removed, or substituted incorrectly in the sentence. Now, a human may say: the person on the other end said that they're not allergic to penicillin. Technically, they completely changed the sentence; from a word error rate perspective, the accuracy is horrible, it's like 10%. But the meaning is there; it relays exactly the same meaning. An automated speech recognition engine may say "I am allergic to penicillin." It only changed one word instead of the entire sentence, so from a word error rate perspective it's more accurate, which is absurd, of course. LLMs actually give you that semi-human context window, where you can have an AI analyzing the call at every single moment and making sure that the context makes sense. That really improved the service significantly, and that's how we mostly use AI today.
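The mismatch Tomer describes is easy to demonstrate in code. Below is a minimal, generic sketch of the standard word-error-rate calculation (word-level Levenshtein distance divided by reference length) applied to his penicillin example; it is illustrative only, not Nagish's actual evaluation code, and the exact percentages depend on how the sentences are tokenized:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j          # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[-1][-1] / len(ref)

reference = "I'm not allergic to penicillin"
asr_output = "I am allergic to penicillin"   # meaning flipped, few word edits
paraphrase = ("the person on the other end said "
              "that they're not allergic to penicillin")  # meaning intact

print(f"ASR, meaning flipped:  WER = {wer(reference, asr_output):.0%}")
print(f"Human, meaning intact: WER = {wer(reference, paraphrase):.0%}")
```

By this metric, the meaning-flipping transcript scores far better than the faithful paraphrase, which is exactly the gap a semantic check, such as an LLM comparing meaning in context, is meant to close.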

Antonio Santos:

We know that over the last couple of years the topic of data sovereignty has become really important. So, in order to scale and to comply with privacy regulation around the world, how do you see that happening? Do you feel that regulation is in some cases an obstacle and a challenge? And where do you see the possibilities of bringing your solution to other regions?

Tomer Aharoni:

Yeah. So in our case, the choice was very easy. We never needed data for any training purposes. From day one, we designed the product in a way that is end-to-end secure, meaning that even if we wanted to, we don't have access to the call contents. We transcribe the audio in our cloud, but once we send you the text, it's deleted on the spot. So, both in transit and at the storage level, we never have access to the data.

Tomer Aharoni:

When we applied for FCC certification, this was actually a requirement, so it was a win-win. We said: you know what, it's actually nice that we don't have to contend with that question, because investors ask you: do you have any data play here, are you planning to do something with the data? And it was, in a way, very comforting to say: actually, the FCC doesn't allow that, so we're not going to do anything with the data, even though we never did anything with it before.

Tomer Aharoni:

This is where regulation came in and made a lot of sense. I'm not going to say regulation doesn't slow you down, because sometimes it's not enough that you don't retain data; you also need to prove that you don't retain data, and that can be really time-consuming. But overall it allows us to give our users the assurances they need, because replacing your phone app is a pretty big deal. You have your most sensitive calls on your phone app, so people need to be able to trust us. That regulation, that FCC stamp, those FCC requirements and the annual audits we go through, allow us to give our customers those reassurances.

Neil Milliken:

Thank you. It's a fascinating conversation, and I know we could probably ask a load more questions, but we've already hit the half-hour mark. So it remains for me to thank our friends at MyClearText and Amazon for keeping us on air and keeping us captioned, something that's very important to you, of course, and I look forward to continuing this conversation at some point. One last thing: can you tell people where they can get Nagish and which countries it's available in? We've talked about the FCC, so it's clearly available in North America. Is it only available in North America at the moment, and where can people get it?

Tomer Aharoni:

Yeah, so talking about regulation: Nagish is currently only available in the US and US territories. This is an FCC requirement, because there is a cost associated with every minute of service; we cannot offer the service abroad and then expect the FCC to pay for it. But it's something we're working on. We want to release the service in as many countries as possible. We will find a way to do that, potentially through new companies that will spring up with a different financial model, but we will offer the service in more countries in the coming years.

Neil Milliken:

Wonderful. Thank you so much, Tomer. It's been a great pleasure talking to you.

Tomer Aharoni:

And, by the way, if people want to try Nagish, it's available at nagish.com. That's N-A-G-I-S-H.
