People with disabilities, such as those who are visually impaired, should also be able to experience technology. Making this possible requires technology designed specifically to address their needs.
Scientists are currently developing a new adaptive mobile technology that will enable blind people to use devices such as smartphones and tablets. The team, based at the University of Lincoln, UK, and funded by a Google Faculty Research Award, specialises in computer vision and machine learning.
The goal of this project is to embed a smart vision system in mobile devices to help people with sight problems navigate unfamiliar indoor environments.
HOW TO MAKE THE BLIND SEE
The team plans to use the colour and depth sensor technology inside new smartphones and tablets to enable 3D mapping and localisation, navigation, and object recognition, building on preliminary work on assistive technologies carried out by the Lincoln Centre for Autonomous Systems.
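The article does not describe the team's algorithms, but the core idea of using a depth sensor for navigation can be sketched simply: scan the depth frame for the nearest obstacle in the user's path. The function name, thresholds, and synthetic frame below are illustrative assumptions, not the project's actual code.

```python
import numpy as np

def nearest_obstacle(depth_map, threshold_m=1.5):
    """Return the distance (in metres) to the closest point in the
    central region of a depth frame, or None if nothing is nearer
    than the threshold. depth_map is an HxW array of metres;
    zeros mark invalid sensor readings (an assumed convention)."""
    h, w = depth_map.shape
    centre = depth_map[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    valid = centre[centre > 0]      # drop invalid (zero) readings
    if valid.size == 0:
        return None
    nearest = float(valid.min())
    return nearest if nearest <= threshold_m else None

# Synthetic 8x8 depth frame: background 3 m away, one object at 0.9 m
frame = np.full((8, 8), 3.0)
frame[4, 4] = 0.9
print(nearest_obstacle(frame))  # 0.9
```

A real system would run this over a live sensor stream and combine it with localisation and object recognition, but the obstacle-proximity check captures the basic sensing step.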
Artificial intelligence could also be a valuable tool for this development. In addition, the team plans to build an advanced interface that relays information to users through vibrations, sounds, or spoken words.
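The article names three feedback modalities (vibration, sound, and speech) but not how the system chooses between them. One plausible design, sketched here with entirely illustrative distance thresholds, is to escalate from speech to haptics as an obstacle gets closer:

```python
def feedback_for(distance_m):
    """Map an obstacle distance to a feedback cue for the user.
    The three modalities come from the article; the thresholds
    and messages are hypothetical."""
    if distance_m is None:
        return ("speech", "Path clear")
    if distance_m < 0.5:
        return ("vibration", "strong pulse")   # urgent: haptic alert
    if distance_m < 1.5:
        return ("sound", "warning beep")       # approaching obstacle
    return ("speech", f"Obstacle {distance_m:.1f} metres ahead")

print(feedback_for(0.3))   # ('vibration', 'strong pulse')
print(feedback_for(None))  # ('speech', 'Path clear')
```

On an actual device these cues would be dispatched to the platform's vibration and text-to-speech services rather than returned as tuples.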
Project lead Dr. Nicola Bellotto, an expert on machine perception and human-centred robotics from Lincoln’s School of Computer Science, said: “This project will build on our previous research to create an interface that can be used to help people with visual impairments.”
“There are many visual aids already available, from guide dogs to cameras and wearable sensors. Typical problems with the latter are usability and acceptability.
“If people were able to use technology embedded in devices such as smartphones, it would not require them to wear extra equipment which could make them feel self-conscious.
“There are also existing smartphone apps that are able to, for example, recognise an object or speak text to describe places. But the sensors embedded in the device are still not fully exploited.
“We aim to create a system with ‘human-in-the-loop’ that provides good localisation relevant to visually impaired users and, most importantly, that understands how people observe and recognise particular features of their environment,” said Bellotto.