Hi Jetson community,
Sorry if this question is a bit off-topic. I would like to hear your opinions on the project described below.
I intend to build a visual assistant for visually impaired people, using an NVIDIA Jetson Xavier NX single-board computer as the brain of the system. The main application will include the You Only Look Once (YOLO) real-time object detection system, optical character recognition (OCR), text-to-speech (TTS), spatial audio for navigation, and other sub-applications if needed. The main programming language will be Python, and the camera is an Intel RealSense depth camera.

My questions are:
1. How should I integrate all of these sub-programs into one application? I am planning to isolate every sub-program in its own Docker container. What do you think about that approach?
2. How should I implement the client side of the application? Balena? Streamlit?
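To make the integration question more concrete, here is a rough sketch of the pattern I have in mind: each subsystem runs independently and exchanges small JSON-like messages over a queue (a detector publishes, a TTS service consumes). This is only an illustration with placeholder functions and in-process threads; in the Dockerized version the queue would presumably be replaced by a message broker such as MQTT or ZeroMQ between containers.

```python
import queue
import threading

def detector(out_q):
    # Stand-in for the YOLO loop: publish each detected object as a message.
    for label in ["person", "door", "chair"]:
        out_q.put({"source": "yolo", "label": label})
    out_q.put(None)  # sentinel: detection stream finished

def speaker(in_q, spoken):
    # Stand-in for the TTS service: consume messages and "speak" them.
    while True:
        msg = in_q.get()
        if msg is None:
            break
        spoken.append(f"I see a {msg['label']}")

def run_pipeline():
    # Wire the two subsystems together through a FIFO queue.
    q = queue.Queue()
    spoken = []
    producer = threading.Thread(target=detector, args=(q,))
    consumer = threading.Thread(target=speaker, args=(q, spoken))
    producer.start()
    consumer.start()
    producer.join()
    consumer.join()
    return spoken

if __name__ == "__main__":
    print(run_pipeline())
    # → ['I see a person', 'I see a door', 'I see a chair']
```

The idea is that OCR, spatial audio, and any other sub-application could each be another producer or consumer on the same bus, which seems to map naturally onto one container per service.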
Thanks in advance for your recommendations!
Best regards,
Shakhizat