
ForeSight: Assistance for the Visually Impaired

Milan Wilborn, Ed Bayes, Nick Collins, and Anirban Ghosh

What is the product?

ForeSight is a hands-free, discreet wearable device that gives users haptic feedback about the objects around them. It combines soft robotics, non-linear mechanical structures, and computer vision. A phone camera detects the environment around the user in real time and feeds this information to a wearable device, which conveys where objects are through soft actuators that inflate and contract. The closer the user gets to an object, the stronger the response they feel. Different types of actuators in different places provide different signals according to the type of stimulus.
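The core interaction can be summarized as a mapping from estimated object distance to actuator intensity. The sketch below is purely illustrative: the sensing range, the linear falloff, and the function name are assumptions, not the team's actual firmware logic.

```python
# Minimal sketch of the proximity-to-feedback mapping (illustrative only;
# the 3 m sensing range and linear falloff are assumptions).

def feedback_intensity(distance_m: float, max_range_m: float = 3.0) -> float:
    """Map an estimated object distance to an actuator intensity in [0, 1].

    Objects at or beyond max_range_m produce no feedback; the closer the
    object, the stronger the inflation of the corresponding soft actuator.
    """
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - (distance_m / max_range_m)


# Example: an object estimated at 0.5 m within a 3 m sensing range
# yields ~0.83, i.e. a strong inflation response.
print(feedback_intensity(0.5))
```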

[Image: project poster]

User Research

There is a spectrum of visually impaired users, from 100% blindness to low vision (where print is still readable).
Within O&M (orientation and mobility), there is further market differentiation – some users need a physical aid, and some don't.


In O&M there are two main tools – guide dogs and the cane – which also carry social and cultural weight because of their legal status and their role in identity. We decided to complement the cane rather than replace it. Canes have shortcomings of their own: they are of little use for detecting anything above or behind the user.

Current assistive technology also has shortcomings, especially vibration-based feedback. Through our conversations with the Massachusetts Association for the Blind and Visually Impaired, we found that most vision assistance devices use vibrating motors, which can be alarming when navigating public spaces.

Roughly 285 million people worldwide are visually impaired, and 82% of people with blindness are 50 years or older.

We also considered extreme users of vision assistance technology:

  • Military personnel and law enforcement (stealth operations)

  • Visually impaired athletes

  • Phone addicts (to increase their spatial awareness)

  • Firefighters

[Image: chart of visual impairment statistics]

Prototyping and early sketches

For the inflatables: we used silicone air chambers bonded to spandex (actuators based on work from the Soft Robotics Toolkit, https://softroboticstoolkit.com/).

For computer vision: we used YOLO, a real-time object detection system built on deep neural networks, to identify objects and estimate how far away they are.
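As a rough illustration of this stage of the pipeline, the sketch below runs a pretrained YOLO model on camera frames and converts each detection into a crude proximity value. The ultralytics wrapper, the yolov8n.pt weights, and the bounding-box-height heuristic are stand-ins chosen for brevity; the project's exact YOLO version and distance-estimation method may well differ.

```python
# Illustrative vision-pipeline sketch: detect objects in camera frames with a
# YOLO model and derive a rough proximity proxy from bounding-box size.
# Model choice and the proximity heuristic are assumptions, not the
# project's exact implementation.

import cv2
from ultralytics import YOLO  # modern YOLO wrapper, used here for brevity

model = YOLO("yolov8n.pt")   # any pretrained YOLO detection model
cap = cv2.VideoCapture(0)    # phone/webcam stream

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    result = model(frame, verbose=False)[0]
    frame_height = frame.shape[0]

    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        label = model.names[int(box.cls[0])]
        # Crude proximity proxy: taller boxes are treated as closer objects.
        proximity = min(1.0, (y2 - y1) / frame_height)
        print(f"{label}: proximity ~ {proximity:.2f}")

cap.release()
```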

[Image: YOLO object detection demo]

To inflate the actuators, we used DC motor pumps driven by an Arduino Micro.
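A minimal sketch of how the detection side could talk to the pump controller follows, assuming a one-byte serial protocol in which the host sends a PWM duty value (0–255) to the Arduino Micro. The port name, baud rate, and protocol are all assumptions for illustration.

```python
# Hedged sketch of the host-side link to the pump controller: send a single
# intensity byte over USB serial to the Arduino Micro driving the DC motor
# pumps. Port name, baud rate, and the one-byte protocol are assumptions.

import serial  # pyserial

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # port varies by machine

def set_pump_intensity(intensity: float) -> None:
    """Scale a 0-1 feedback intensity to a PWM byte and send it to the pump."""
    pwm_value = max(0, min(255, int(intensity * 255)))
    arduino.write(bytes([pwm_value]))

# Example: drive the pump at roughly 83% duty for a nearby object.
set_pump_intensity(0.83)
```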


Initial Sketches

[Image: initial sketches]

Final Renders

Final Prototype:

[Image: final prototype]