PeopleLens: How Artificial Intelligence Can Enhance the Experience of People With Visual Impairment

Artificial intelligence has been shown to improve many aspects of the human experience, from efficiency to safety. It can also improve the lives of people with disabilities. One example of this is PeopleLens, a new piece of technology being developed by Microsoft. PeopleLens is a device that helps people with visual impairments or blindness better understand their surroundings and communicate with those around them.

How it works:

The device combines a phone app with augmented-reality glasses to track nearby people's locations, names, and where they are looking. PeopleLens gives different audio cues based on distance: within 10 meters, a registered person triggers a percussive bump, and within 4 meters, a registered person's name is also spoken aloud. One caveat here is that the person's ears need to be detected by the system as well. Furthermore, the device uses distinct sounds to help guide the wearer's gaze: woodblock sounds for registered people and clicks for unregistered people.
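The cue logic described above can be sketched in code. This is a hypothetical illustration, not Microsoft's implementation: the `DetectedPerson` data model and `cues_for` function are invented, the 10-meter and 4-meter thresholds come from the description above, and I've assumed the unregistered-person click also applies within the 10-meter range.

```python
# Hypothetical sketch of PeopleLens-style distance-based audio cues.
# The data model and function names are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectedPerson:
    distance_m: float      # estimated distance from the wearer
    name: Optional[str]    # set only if the person is registered
    ears_detected: bool    # whether the system can see the person's ears

def cues_for(person: DetectedPerson) -> list:
    """Return the ordered audio cues the wearer would hear."""
    cues = []
    if person.name is not None:  # registered person
        if person.distance_m <= 10:
            cues.append("percussive_bump")   # presence cue within 10 m
            cues.append("woodblock")         # gaze-guidance sound
        if person.distance_m <= 4 and person.ears_detected:
            cues.append("name:" + person.name)  # name spoken within 4 m
    else:  # unregistered person: click gaze-guidance sound
        if person.distance_m <= 10:          # assumed same range
            cues.append("click")
    return cues
```

For example, a registered person standing 3 meters away with ears visible would produce a bump, a woodblock, and their spoken name, while an unregistered passerby would produce only clicks.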

This image, included in a Microsoft Research publication about PeopleLens, shows how the device tracks various aspects of people's body positions.

PeopleLens effectively simulates eye contact, an important yet sometimes overlooked part of human interaction. The device notifies the wearer when someone is looking at them, giving them awareness of those around them as well as an opportunity to connect. Being able to make eye contact helps with initiating conversations and creating mutual attention between people. By signaling both the wearer and the person being perceived, the device allows for more spontaneous interaction. Knowing where people are located relative to oneself also allows for better head positioning, which can improve clarity in communication; for example, speech is usually easier to understand when two people are facing each other rather than facing away from one another.
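The mutual-attention idea above can be sketched as a small geometric check. Everything here is an assumption for illustration: the function names, the angle-based representation of gaze, and the 15-degree tolerance are all invented, not taken from PeopleLens itself.

```python
# Hypothetical sketch of mutual-gaze ("simulated eye contact") signaling.
# Angles are compass bearings in degrees; tolerance is an invented value.

def is_looking_at(gaze_deg, bearing_deg, tolerance_deg=15.0):
    """True if a gaze direction points within tolerance of a target bearing."""
    diff = (gaze_deg - bearing_deg + 180) % 360 - 180  # shortest angular difference
    return abs(diff) <= tolerance_deg

def eye_contact_signals(wearer_gaze_deg, person_gaze_deg,
                        bearing_to_person_deg, tolerance_deg=15.0):
    """Return which parties should be signaled, per the two-way cue idea."""
    wearer_looking = is_looking_at(wearer_gaze_deg,
                                   bearing_to_person_deg, tolerance_deg)
    # The other person sees the wearer along the opposite bearing.
    person_looking = is_looking_at(person_gaze_deg,
                                   (bearing_to_person_deg + 180) % 360,
                                   tolerance_deg)
    signals = []
    if person_looking:
        signals.append("notify_wearer")   # someone is looking at you
    if wearer_looking and person_looking:
        signals.append("notify_partner")  # mutual attention established
    return signals
```

The two-way signaling mirrors the point above: the wearer learns someone is attending to them, and the other person learns the attention is returned, which is what makes a spontaneous greeting possible.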


Here's a short video of PeopleLens in action: [embedded video]



This diagram, included in Microsoft's research blog post about PeopleLens, provides a nice overview of how the device works.

    Discussion

One concern I considered was the registration of people into the device's database. Because this technology tracks people's faces for recognition, among other things, it could raise privacy concerns about the data being used in ways those who registered did not intend. In a world where Clearview has caused quite a stir with its collection and use of facial recognition data, this concern may seem to have merit. However, the device only recognizes people who have registered of their own accord, which mitigates part of the concern. A closer look revealed that this is even less of an issue here: the device does not store images at all, but instead uses each photo to compute a vector for processing, which removes the possibility of the photos being reused in other contexts. There may be other areas of AI research where this remains an issue, however. Regardless, the development of technology like PeopleLens paves the way for further inclusion and recognition of people with visual impairments, which can be seen as a greater good.

Another issue I considered is that the frequent audio cues may become disruptive in certain situations. Perhaps a tactile or vibration component could be worked in, or a set of earbuds or headphones might alleviate this concern, although I'm uncertain how that would affect the spatialized audio. Reading about this reminded me of the difficult sensors of the robot lab, but at least PeopleLens has professionals working on it...
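The "vectors instead of photos" idea mentioned above can be sketched as follows. This is a toy illustration, not PeopleLens's actual pipeline: real systems derive the vector from a trained face-embedding network, whereas the `embed` function here just computes simple pixel statistics so the example runs; the `Registry` class, similarity threshold, and all names are invented.

```python
# Hypothetical sketch: register people as embedding vectors and discard
# the photos, so no image ever persists in the database.

import math

def embed(photo_pixels):
    """Stand-in for a face-embedding model: maps an image to a fixed-length
    vector. A real system would use a trained network; this toy version
    aggregates pixel statistics purely for illustration."""
    n = len(photo_pixels)
    mean = sum(photo_pixels) / n
    var = sum((p - mean) ** 2 for p in photo_pixels) / n
    return [mean, math.sqrt(var), min(photo_pixels), max(photo_pixels)]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class Registry:
    """Stores only embedding vectors; the original photos are discarded."""

    def __init__(self, threshold=0.99):
        self.vectors = {}
        self.threshold = threshold  # invented similarity cutoff

    def register(self, name, photo_pixels):
        self.vectors[name] = embed(photo_pixels)
        # photo_pixels is not retained anywhere after this point

    def identify(self, photo_pixels):
        """Return the best-matching registered name, or None."""
        probe = embed(photo_pixels)
        best_name, best_sim = None, self.threshold
        for name, vec in self.vectors.items():
            sim = cosine_similarity(probe, vec)
            if sim >= best_sim:
                best_name, best_sim = name, sim
        return best_name
```

The privacy property comes from the one-way nature of the mapping: the stored vectors support comparison but cannot be turned back into a photo usable in other contexts.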

The development of technologies such as PeopleLens promotes accessibility and helps those with disabilities better interact with and participate in the world and people around them. These technologies can also benefit people who are not affected, since they are often multipurpose, and the underlying research can carry over to related projects that even more people can make direct use of. I'm curious whether anyone has other examples of technologies like PeopleLens, or of accessibility-focused technologies that led to improvements in more general-use technology. The reverse is also possible: adapting general-use technology to assist people with disabilities. One example of this I found was the use of Amazon's Echo device to help a child with cerebral palsy communicate. Overall, I believe PeopleLens is an interesting piece of technology that demonstrates the use of AI in a positive way.

