Sound Evolution from EPOS – UC Today

Posted: July 31, 2020 at 6:34 pm

As any hearing aid user learns fast, the auditory experience is created in the brain, not the ear. Surrounded by continual background noise in almost all environments, the human mind instinctively filters and pays attention to what is important, whether completely ignoring familiar traffic noises in favour of conversation, or picking out a mention of your own name across a crowded room.

Once you introduce the mediation of a device, however, whether headphones or a hearing device, this delicate interaction between our eardrums and our brain is disrupted, and any background noise has more impact. While headphones can physically block some ambient sounds, background noise, from transport to media to other people, has an inevitable impact on concentration and productivity, particularly in the workplace.

Active and hybrid noise-cancelling technologies are improving all the time. By filtering out lower-frequency sound waves and using feed-forward technology to directly cancel out unwanted inputs in the high-frequency spectrum, impressive results can be achieved, optimised for different environments. But for truly effective noise cancellation, EPOS is deploying artificial intelligence (AI) in its latest headsets, like the ADAPT 600, mimicking the way the human brain learns to filter and prioritise audio inputs, like that flight-path noise in your new apartment which, after a few weeks, you simply don't notice.
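To make the feed-forward idea concrete, here is a minimal sketch of adaptive noise cancellation using a standard LMS filter in Python. It illustrates the general technique only, not EPOS's implementation; the function, signal names, and parameter values are all hypothetical.

```python
# Minimal sketch of feed-forward adaptive noise cancellation with an LMS filter.
# Illustrative only; none of these names or values describe EPOS's actual system.
import numpy as np

def lms_anc(reference, primary, num_taps=32, mu=0.01):
    """Subtract an adaptive estimate of the noise from the signal at the ear.

    reference: noise picked up by the outward-facing (feed-forward) microphone
    primary:   signal at the ear (desired audio plus noise leaking through)
    Returns the residual the listener would hear after cancellation.
    """
    w = np.zeros(num_taps)                    # adaptive filter weights
    residual = np.zeros(len(primary))
    for n in range(num_taps, len(primary)):
        x = reference[n - num_taps:n][::-1]   # most recent reference samples
        y = w @ x                             # estimated noise at the ear
        e = primary[n] - y                    # residual after cancellation
        w += 2 * mu * e * x                   # LMS weight update
        residual[n] = e
    return residual

# Toy usage: a steady low-frequency hum plus a speech-like tone burst
fs = 16_000
t = np.arange(fs) / fs
noise = 0.5 * np.sin(2 * np.pi * 120 * t)                      # e.g. engine hum
speech = 0.2 * np.sin(2 * np.pi * 440 * t) * (t % 0.5 < 0.25)  # intermittent tone
cleaned = lms_anc(reference=noise, primary=speech + noise)
```

Because the speech is uncorrelated with the reference microphone, the filter converges on the hum and leaves the wanted signal largely intact, which is the basic mechanism behind feed-forward cancellation.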

Jesper Kock

As VP of R&D Jesper Kock described in a recent interview, the AI learns what the user needs it to do, in a process which is not unlike human learning: "When you're a parent to a small child, you teach them how to ride a bike by teaching them the basics. You look after them and give them feedback."

"We start in just the same way with an AI neural network: we teach it about EPOS sound quality and performance in our products. Then we teach it what we want to aim for, and the system ultimately becomes self-learning, arriving at solutions and detail that we couldn't have programmed."

So instead of having maybe 10 pre-configured noise reduction settings in a really high-end hybrid noise reduction headset, the AI can react in real time to continually changing inputs in the user's actual surroundings. It can intelligently optimise not only for voice pick-up, but also to block out repetitive sounds and distractions, enabling users to eliminate the continual disruption and distraction that plagues the modern workplace and so many other environments.
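The contrast between a fixed menu of modes and continuous adaptation can be sketched in a few lines. The snippet below is purely illustrative; the preset names, attenuation values, and update rule are assumptions, not anything drawn from the EPOS product.

```python
# Hypothetical contrast: fixed presets vs. continuous adaptation to the soundscape.
# Names and values are invented for illustration; they do not describe EPOS's system.
import numpy as np

PRESETS = {"office": 0.3, "commute": 0.7, "cafe": 0.5}  # fixed attenuation levels

def preset_attenuation(environment: str) -> float:
    # Traditional approach: a small menu of modes the user selects in advance.
    return PRESETS[environment]

def adaptive_attenuation(frame: np.ndarray, prev: float, rate: float = 0.1) -> float:
    # Adaptive approach: estimate how noisy the current audio frame is and
    # nudge the attenuation toward it, so the setting tracks the environment
    # continuously instead of jumping between a handful of presets.
    rms = np.sqrt(np.mean(frame ** 2))
    target = np.clip(rms / 0.5, 0.0, 1.0)  # map loudness onto a 0..1 attenuation
    return prev + rate * (target - prev)   # smooth, frame-by-frame update

# Toy usage: attenuation drifts upward as a noisier frame arrives
level = 0.3
for frame in [np.random.randn(256) * 0.05, np.random.randn(256) * 0.4]:
    level = adaptive_attenuation(frame, level)
```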

In an era where many of us are learning to work from new locations, having tech which comes on the journey with us, both literally and metaphorically, is a powerful success factor. Adaptive noise cancellation technology which continually adjusts to changes as you experience them brings a new dimension of focus and peace, in any environment.

And the future will only get smarter, as Kock elaborated:

"I can see AI providing input on other parameters, for example, reacting to the way you talk: your tone of voice, the words that you use, identifying if you are tired or angry or anxious."

"The device will know more about yourself than you do, and will be able to provide advice to you as a result."

These kinds of advanced biometrics may sound dangerously deep in the uncanny valley, but the explosion of AI voice assistants in the home as well as the workplace demonstrates growing acceptance of voice-driven technologies, which are enhancing our environments in undreamed-of ways. Recent global events have only accelerated existing trends.

"I think the 2020s are set to be a truly transformative decade when it comes to tech empowering the workplace," Kock continued. "The overarching objective to encourage greater tech collaboration and integration in our daily lives has never felt so present. These trends will not only optimise work performance and productivity but also dramatically improve employee health and well-being."
