CRAZY New Hearing Aid Technology! | πŸ”₯ Oticon More πŸ”₯

– In this video, I'm gonna
show you how crazy advancements in technology are completely changing the future of
hearing aids, coming up. (upbeat music) Hi guys, Cliff Olson, Doctor of Audiology and founder of Applied Hearing Solutions in Phoenix, Arizona. On this channel, I cover a bunch of hearing-related information to help make you a better-informed consumer. So if you're into that, make sure you hit that subscribe button, and don't forget to click the bell to receive a notification
every time I post a new video.

In case you have not
been paying attention, technology is changing at the
speed of light, and the effects of this technological advancement are impacting nearly everything, and hearing
aids are no exception. Sure, some of these
advancements are really small, but some of them are so big that they literally change the trajectory of all future hearing
aids that are developed. For instance, take the development
of digital hearing aids. 1996 was the first year
that we saw a 100% digital hearing aid hit the market.

Here we are over 20 years later and it is almost impossible
to find a hearing aid that does not use
digital sound processing. What about Bluetooth? Bluetooth hearing aids
first hit the market in 2014 and now you would be hard
pressed to find a hearing aid that does not use
Bluetooth to stream audio from your favorite smart
device into both of your ears. Just when you think
there isn't anything else that engineers could come up with to improve the performance
of hearing aids, they always come up
with some crazy feature that completely surprises us all. Yet, with all of this
technological advancement, hearing aids still rely on engineers writing algorithms to
tell those hearing aids which sounds are important and which sounds are not
important to amplify.

Now, I do not want to come across as implying that hearing
aid engineers are not good at what they do because
they absolutely are, but they are faced with an enormous task of having to tell a hearing
aid exactly which sounds to amplify and which sounds not to. This means that hearing aids are limited based on what engineers
can tell them to do. Let me give you an example. If I told you to describe the difference between a cat and a dog,
how would you do it? Well, you might say a cat has
a tail, it is covered in fur. It walks on four legs
and it has sharp teeth. A dog on the other hand has
a tail, is covered in fur, walks on four legs
and has sharp teeth. You get my point, right? It might be inherently
easy for you to identify the differences between a cat and a dog but it is much more difficult
when you have to describe these characteristics to
someone or something else so that they can actually understand. This is the problem presented to hearing aid engineers as they try to tell a hearing
aid which characteristics of sound are important to
amplify and which ones are not.

This is why Oticon, today's video sponsor, is now using deep learning
to train their new Oticon More hearing
aids to do this better. Now you might be thinking,
what is deep learning? Well, let me explain. Deep learning is a subset
of machine learning, which is a subset of
artificial intelligence. Artificial intelligence is a
technique to allow a machine to mimic human behavior. Machine learning is a technique to achieve artificial intelligence through algorithms trained with data.

Deep learning is a type of
machine learning that is inspired by the structure of the human brain. And this structure is called an artificial deep neural network. Let me give you an
example of deep learning with something you're
already familiar with, the post office. When you write an address on an envelope, if you're anything like me, the handwriting is barely legible to the point where you
can hardly even read it. Post offices do not use
human beings to sort letters. They use sorting machines
and these sorting machines need to recognize all different variations of handwritten letters and numbers. It would be virtually impossible for a human being to
program a sorting machine to be able to recognize all
of these different variations. Fortunately, engineers and programmers were able to develop a deep neural network, and when trained with enough data, a sorting machine could
become smart enough to identify all the different variations of letters and numbers. In fact, you could make the argument that a post office sorting machine trained with a deep neural
network would do a better job at identifying the different variations of letters and numbers
compared to a human.
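
To make the sorting-machine example concrete, here is a minimal sketch in Python with PyTorch of the same idea: a small feed-forward network that learns to classify 28x28 images of handwritten digits. The layer sizes, training loop, and stand-in data are my own illustrative assumptions, not anything from the video or from an actual postal system.

```python
# Illustrative sketch only: a tiny "deep" network for recognizing
# handwritten digits (0-9), in the spirit of the sorting-machine example.
# The training data here is random stand-in data; a real system would
# train on a labeled dataset of scanned digits such as MNIST.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),         # 28x28 pixel image -> 784-value vector
    nn.Linear(784, 128),  # hidden layer: learns its own features
    nn.ReLU(),
    nn.Linear(128, 10),   # one output score per digit 0-9
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 64 fake "images" with fake labels (assumption).
images = torch.randn(64, 1, 28, 28)
labels = torch.randint(0, 10, (64,))

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong were the guesses?
    loss.backward()                        # work out how to adjust the weights
    optimizer.step()                       # nudge the weights

print("final training loss:", loss.item())
```

The point is that nobody hand-writes rules like "a 7 has a horizontal bar"; the network is only shown labeled examples and graded on its guesses.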

And of course it would do this much faster as well. Now that all seems great, but what the heck does this have to do with deep learning inside of the Oticon More hearing aids? It's the same concept, but instead of Oticon engineers developing a deep neural network to identify the differences between a cat and a dog, or whether this letter's supposed to go to my parents or not, it's trained on how to identify different sounds. How much did they train it? Well, Oticon decided to train their deep neural network with a staggering 12 million sounds recorded from all over the world. This gave the deep neural network an abundance of different sounds for it to learn what makes speech speech and what makes noise noise.

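Oticon has not published how its network is built, so the following is only a hypothetical sketch of the general recipe described above: show a network labeled sound clips and let it learn for itself what separates speech from noise. The feature size, network shape, and data below are invented for illustration and do not reflect Oticon's actual system.

```python
# Hypothetical sketch of "learning speech vs. noise from labeled examples."
# Nothing here reflects Oticon's real network, features, or training data.
import torch
import torch.nn as nn

N_FEATURES = 64  # assumed size of a short spectral snapshot of a sound clip

classifier = nn.Sequential(
    nn.Linear(N_FEATURES, 32),
    nn.ReLU(),
    nn.Linear(32, 2),    # two classes: 0 = noise, 1 = speech
)

optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in "sound scenes" with labels; a real system would use millions
# of labeled real-world recordings instead of random numbers.
sounds = torch.randn(4096, N_FEATURES)
labels = torch.randint(0, 2, (4096,))

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(classifier(sounds), labels)
    loss.backward()
    optimizer.step()
```

Notice that the engineers never write down what makes speech speech; the network infers that from the labeled examples.
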
So which characteristics of sound did the network actually use to identify speech? That's the thing, we don't know. It might seem crazy, but when you develop a deep
neural network that can learn on its own, you grade the
network based on its performance rather than how it accomplishes its task. A deep neural network learns a lot like the way a child acquires language: children are not directly taught most of it; rather, they pick it up through interaction and exposure. The more you expose a child to language, the more opportunity they have to learn and develop properly. Once Oticon's deep neural
network has been trained with these 12 million sounds, it has developed the ability
to identify the characteristics that make each one of these sounds unique. Not only did this type of deep learning require a lot of computing power, it also required a lot of man and woman power to develop.
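
As a rough illustration of grading by performance rather than by mechanism, here is a hypothetical evaluation step in the same Python/PyTorch style: the trained classifier is scored on sounds it never saw during training, and the millions of learned weights are never inspected individually. The model and test data are again invented stand-ins, not anything from Oticon.

```python
# Illustrative: judge a trained network by its results on unseen sounds,
# not by reading the rules it learned (there are no human-readable rules).
import torch
import torch.nn as nn

# Stand-in for a classifier trained as in the earlier sketch; re-created
# here (untrained) only so this example runs on its own.
classifier = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

# Held-out test clips the network never saw during training (random stand-ins).
test_sounds = torch.randn(256, 64)
test_labels = torch.randint(0, 2, (256,))

with torch.no_grad():                      # no learning here, just grading
    predicted = classifier(test_sounds).argmax(dim=1)
    accuracy = (predicted == test_labels).float().mean().item()

# The "grade" is a single performance number; how the network gets there
# stays opaque, much like asking a child to explain their own grammar.
print(f"speech-vs-noise accuracy on unseen sounds: {accuracy:.1%}")
```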

In fact, the development
of this deep neural network and the Oticon More hearing aid took anywhere between 20 and 50 employees over 500 man-years to complete. And when all of the deep learning is complete, all of this information is uploaded onto the Polaris platform chip inside of the Oticon More hearing aid, which makes this hearing aid one of the smartest, if not the smartest, hearing aid out there. Armed with everything it's learned, the Oticon More can deliver 30% more sound to the brain and increase speech understanding by an additional 15% over the previous Opn S 1 devices that relied on sound processing algorithms programmed by engineers.

More simply stated, a hearing aid that uses deep learning
outperforms a hearing aid that uses algorithms
developed by engineers. When all is said and
done, it is hard to know if you are in the middle of a
technological paradigm shift, but it is my belief that we will look back on this moment right now as the turning point
where deep neural networks and deep learning completely change the future of how we hear. Until that time, I'm just
gonna continue to be amazed by all the crazy hearing aid technology that's out there right now. That's it for this video. If you have any questions, leave them in the comment section below.

If you like the video, please share it, and if you wanna see other
videos just like this one, go ahead and hit that subscribe button. Also feel free to check out my website, DrCliffaud.com. (upbeat music).
