Developing Smart Glasses to Guide Blind People


Hardware: Jetson Nano 4GB (latest version)

I am a hobbyist interested in making smart glasses that help blind people. The glasses consist of a mic and a wide-angle camera. The main functions of my glasses are:

  • Search and Navigate: where is the door? What objects are around me? (voice feedback with directions)
  • Scan: object detection (with voice output).
  • Read: text such as a newspaper or a book (with a narrating voice).
  • Recognize friends’ faces and save them for later use.
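As a rough sketch of how voice commands could be routed to these four modes, here is a minimal keyword-based dispatcher in pure Python. The keyword table and mode names are made up for illustration; in a real build, the transcript would come from whichever speech-to-text platform is chosen:

```python
# Hypothetical dispatcher mapping a transcribed voice command to one of
# the four glass modes. Keywords and mode names are illustrative only,
# not part of any SDK.

KEYWORDS = {
    "where": "navigate",
    "around": "scan",
    "read": "read",
    "who": "faces",
    "remember": "faces",
}

def dispatch(transcript: str) -> str:
    """Return the mode that should handle a transcribed command."""
    for word in transcript.lower().split():
        for key, mode in KEYWORDS.items():
            if word.startswith(key):
                return mode
    return "scan"  # default: describe the surroundings

print(dispatch("Where is the door?"))   # -> navigate
print(dispatch("Read this newspaper"))  # -> read
```

A real system would likely replace the keyword table with an intent classifier, but even this simple routing layer keeps the four features cleanly separated.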

While I’m still in the early stages of the project, I have been feeling kind of lost. I am not sure which platform would give me the right resources and save me time. So far I am thinking of using Clara Guardian and the Transfer Learning Toolkit. Am I picking the right platforms? I would like to hear opinions from anyone who has experience. My glasses mainly work with voice commands. That said, which platform is used for conversational voice commands?

Hi @alrah003, you can find some other blind-assistant devices on this page to see how they did it:

Typically, there will be vision DNNs running in PyTorch, TensorFlow, or TensorRT that detect objects, etc.
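To give a feel for the voice-feedback side of such a pipeline, here is a pure-Python sketch (independent of any particular DNN framework) that turns a detection's bounding box into a spoken direction cue. The detection values and frame width are made up for illustration:

```python
# Turn a detection's horizontal bounding box into a left/center/right
# voice cue. Detections are (label, x_min, x_max) in pixels; the frame
# width is an assumed camera resolution, not from any real device.

FRAME_WIDTH = 1280  # assumed camera frame width in pixels

def direction_cue(label, x_min, x_max, frame_width=FRAME_WIDTH):
    """Map the box center to one of three spoken directions."""
    center = (x_min + x_max) / 2
    if center < frame_width / 3:
        side = "to your left"
    elif center > 2 * frame_width / 3:
        side = "to your right"
    else:
        side = "ahead of you"
    return f"{label} {side}"

print(direction_cue("door", 900, 1200))  # -> door to your right
print(direction_cue("chair", 0, 300))    # -> chair to your left
```

The string returned here would then be handed to a text-to-speech engine; the detection tuples themselves would come from whatever DNN the glasses run.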

For voice, you can try out a project I have in beta here:

I won’t guarantee this, but you should probably know that Clara is not intended for low-power use. Take a look at the image here:

Part of this is just an AGX Xavier (which you would probably want for running models, but not for training them), plus an RTX 6000 (much, much more power-hungry, and good for training, but probably not needed for running models).

You can always train models on a separate system. Note that the AGX in that image does not have a heat sink, and that the kit includes a special carrier board (with the Mellanox connector) which won’t be very useful to you in the field (unless your user carries an automobile-sized battery and a large backpack).

Although the AGX dev kit is not intended for resale (as an end product you would use the commercial module on a third-party carrier board), it uses far less power and has a heat sink and carrier board, which makes it quite suitable for an end user to carry around. It is a bit heavier than the other dev kits due to the heat sink, so you could possibly use the smaller Xavier NX dev kit (or the commercial module on a third-party carrier board) just as well. Right now I think the most powerful unit that also maintains low weight (in terms of dev kits) is the Xavier NX, and it might be your best bet since the models would be pre-trained.
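As a rough illustration of why the power budget matters for a wearable, a simple runtime = capacity / draw estimate; the wattages below are nominal power-mode budgets and the battery capacity is an assumed figure, not a measured spec:

```python
# Rough wearable battery-runtime estimate: hours = battery Wh / device W.
# Wattages are nominal power-mode budgets (assumed for illustration),
# ignoring regulator losses, camera, and other peripherals.

BATTERY_WH = 50.0  # assumed capacity of a typical USB power bank

def runtime_hours(battery_wh: float, device_watts: float) -> float:
    """Idealized hours of runtime for a given power draw."""
    return battery_wh / device_watts

for name, watts in [("Jetson Nano (10 W mode)", 10.0),
                    ("Xavier NX (15 W mode)", 15.0),
                    ("AGX Xavier (30 W mode)", 30.0)]:
    print(f"{name}: ~{runtime_hours(BATTERY_WH, watts):.1f} h")
```

Even this back-of-the-envelope math shows why the lower-power dev kits are the more realistic choice for something a person carries all day.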

Hi Dusty,
That’s so awesome to see you here. I have been watching your tutorials; they’re insightful.

I downloaded the jetson-voice repo and tried to run the container, but I ran into a problem (see the pics below). I also reported an issue on GitHub for the same problem.

Are you aware of this issue when running the jetson-voice Docker container? Is it on my end, or is the container down?

Thanks again in advance for your response.

Hey linuxdev, thanks for your insights. I just realized that Clara wouldn’t run on the Jetson Nano and, as you said, it is intended for higher-performance kits such as the Xavier NX. Would you recommend any conversational voice-command platforms for the Jetson Nano?

I have not experimented with that, so I don’t have any specific recommendation. There are probably a number of voice solutions, and the Nano (or the more powerful NX) should be plenty powerful for them, but I can’t point you to a particular one.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.