Artificial intelligence app helps blind people

A team of university students from the University of Auckland and Auckland University of Technology has created a smartphone app which uses artificial intelligence to help blind people make better visual sense of the world.

The app, which has been designed for Windows phones, allows visually impaired users to take a photo of their surroundings, such as a piece of clothing, and verbally describes the item to them. All the user needs to do is tap anywhere on the phone's screen to capture a photo. Images are then compressed to around 50KB to 70KB and sent to the app's server for analysis. Artificial intelligence is then used to detect colours, text, darkness and brightness in the image, and a verbal description of the photo is sent back to the user.
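The article does not detail how the server's analysis works, but the kind of colour and brightness detection it describes can be sketched as follows. This is an illustrative example only, assuming raw RGB pixel data; the palette, function names and thresholds are not MobileEyes' actual implementation.

```python
from collections import Counter

# A coarse, assumed palette for naming colours.
PALETTE = {
    "red": (255, 0, 0),
    "green": (0, 255, 0),
    "blue": (0, 0, 255),
    "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def nearest_colour_name(rgb):
    """Map an (R, G, B) triple to the closest palette name by squared distance."""
    return min(
        PALETTE,
        key=lambda name: sum((a - c) ** 2 for a, c in zip(PALETTE[name], rgb)),
    )

def describe_pixels(pixels):
    """Return a short verbal description: dominant colour plus overall brightness."""
    names = Counter(nearest_colour_name(p) for p in pixels)
    dominant = names.most_common(1)[0][0]
    # Perceived brightness via the standard Rec. 601 luma weighting.
    luma = sum(0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels) / len(pixels)
    tone = "bright" if luma > 170 else "dark" if luma < 85 else "medium"
    return f"Mostly {dominant}, {tone} image"
```

A real pipeline would run analysis like this server-side on the uploaded 50KB–70KB image and pass the resulting string to the phone's text-to-speech engine.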

"There are many things that computer or artificial intelligence can do, so it's just using computer algorithms you can get some information about an image. [For example], what are the most significant colours in an image, or is there any printed text that you can read out, or detect darkness and brightness and so on," Aakash Polra, MobileEyes team leader, told Computerworld Australia.

In cases where the image cannot be analysed automatically, it is sent to a human helper, such as a friend on Facebook or a volunteer who helps blind people, who identifies the image and sends a verbal description back to the blind user.

"Humans answer the images where the [app] can't, so it's a mixture of human intelligence and computer intelligence," Polra said.
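The mixture Polra describes is essentially a fallback: try the automated analysis first, and queue the image for a human helper only when the machine cannot answer. A minimal sketch of that routing, with assumed function and queue names for illustration:

```python
from queue import Queue

# Images awaiting a volunteer or Facebook friend (illustrative).
human_queue = Queue()

def machine_describe(image):
    """Stand-in for the server-side AI; returns None when it cannot help."""
    return image.get("auto_description")

def describe(image):
    """Computer intelligence first, human intelligence as the fallback."""
    description = machine_describe(image)
    if description:
        return description
    human_queue.put(image)  # a helper answers asynchronously later
    return "Sent to a human helper for a description"
```

In practice the human answer would arrive asynchronously and be read aloud to the user when a helper responds, rather than returned immediately.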

The app took around seven to eight months to develop, with trials carried out to discover its limitations and bugs; it was recently trialled by around 20 users from the Royal New Zealand Foundation of the Blind. There are also plans to extend the app to other countries, including Australia.

"We have been in constant touch with the users, getting their constant feedback and improving the product as we go," Polra said.

Throughout the trials, Polra said issues which had not been evident at the concept phase were raised. For example, one user took a photo of a shoe and the app simply described it as a shoe.

"She said, 'I already know that because I can touch and feel that it's a shoe, but I want to know what colour it is. What are the patterns on it?'" Polra said.

"We realised what kind of information is required by the blind users and, as we talked with more and more people, we learnt more about how the product can be useful in their daily lives and what features we needed to add, so we started adding them, developing them and testing them as well."
