Object Detection and communication

Hi,
I’m an engineering student in electronics and I’m doing an internship on computer vision for autonomous vehicles. I’m currently looking for an embedded solution for computer vision.
My goal is to use a Jetson Nano on a prototype and perform real-time object detection.
My question is: is the Jetson Nano capable of handling object detection, a little computation, AND communication?
For example, if based on the result of the object detection I want to run an algorithm that decides which command (small data, nothing complex) to send to the motors: is the Jetson Nano capable of doing all that? If yes, how much does this impact the object detection (I suppose there will be a drop in FPS)?
Also, if I want to send a video stream showing the object detection to a screen, is it still possible to do all this with the Jetson Nano? If yes, how much does this impact the object detection (again, I suppose there will be a drop in FPS)?

Sincerely,

Hi @olivier.berton.leclercq, yes you should be able to run real-time object detection and communication on the Nano. See here for a tutorial on object detection:

You can then perform some post-processing to determine which motor commands to issue, and send those commands to the motors. Motor control is typically low-bandwidth and low-overhead, and it runs on the CPU (whereas the object detection runs on the GPU), so it shouldn’t really impact the performance of the object detection. Which interface do you use to communicate with your motors?
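
For illustration, here is a minimal sketch of that detect → decide → command loop, assuming the jetson-inference Python API from the tutorial, a CSI camera, and a motor controller reached over the Nano’s UART with pyserial. The /dev/ttyTHS1 device, baud rate, and the STOP/GO commands are placeholders for whatever your hardware actually expects:

```python
import jetson.inference
import jetson.utils
import serial   # pyserial, for a UART-connected motor controller (placeholder interface)

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")                       # or "/dev/video0" for a USB camera
motors = serial.Serial("/dev/ttyTHS1", baudrate=115200, timeout=0.1)

while True:
    img = camera.Capture()          # capture a frame
    detections = net.Detect(img)    # GPU inference

    # CPU post-processing: trivial example logic - stop if any detection
    # covers more than 25% of the frame, otherwise keep going.
    blocked = any(d.Area > 0.25 * img.width * img.height for d in detections)

    # The command itself is only a few bytes per frame - negligible load.
    motors.write(b"STOP\n" if blocked else b"GO\n")
```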

To send a compressed video stream (e.g. H.264/H.265 over RTP/RTSP) you could use GStreamer and Jetson’s hardware video codecs. The video codecs are independent hardware units in the Tegra SoC and do not use the GPU either. See the L4T Accelerated GStreamer User Guide for example GStreamer pipelines that use the hardware encoder - you would want to replace the filesink output with rtph264pay ! udpsink ... elements, for example.
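
Alternatively, if you end up using the jetson-inference Python API from the tutorial anyway, you can get such a stream without writing the pipeline by hand by pointing its videoOutput at an rtp:// URI, which sets up a hardware-encoded H.264 RTP pipeline internally. A sketch, assuming a recent jetson-inference version with the videoSource/videoOutput API (the IP and port below are placeholders for the machine that will display the stream):

```python
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")
stream = jetson.utils.videoOutput("rtp://192.168.1.100:1234")   # remote viewer's IP:port (placeholder)

while stream.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)    # draws the boxes/labels on img by default
    stream.Render(img)              # encoded by the hardware encoder and sent over RTP
    stream.SetStatus("Object Detection | {:.0f} FPS".format(net.GetNetworkFPS()))
```

On the receiving machine you would then open the stream with a GStreamer pipeline or a player such as VLC; since the encoding happens on the dedicated encoder block, the FPS impact on the detection side should be modest.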