Read camera images in real time from Jetson Nano storage

I have an issue and need your advice on how to address it.

A Raspberry Pi camera controller dumps images into the built-in storage of my Jetson Nano B01 board. I need to process these images and run inference on them immediately. My questions are:

  • How does the Jetson Nano know that a new set of images has been dumped?
  • Can the Pi camera controller send a notification to the Nano so that it kick-starts the inference program to process them?

Are there better ways of handling these images?
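On the first question, a simple approach is to watch the dump directory for new files from the inference process itself. Below is a minimal polling sketch in Python; the directory path and file extensions are assumptions you would adjust to your setup, and on Linux an inotify-based watcher (e.g. the `watchdog` package or `inotifywait`) would react faster than polling:

```python
import os
import time
from pathlib import Path

# Hypothetical drop directory -- adjust to wherever the Pi controller
# writes images on the Nano's storage.
DUMP_DIR = "/home/nano/camera_dump"
IMAGE_EXTS = (".jpg", ".jpeg", ".png")

def snapshot(dump_dir):
    """Return the current set of file names in the dump directory."""
    return set(os.listdir(dump_dir))

def new_images(dump_dir, seen, exts=IMAGE_EXTS):
    """Return (sorted names of newly added images, updated snapshot)."""
    current = snapshot(dump_dir)
    fresh = sorted(n for n in current - seen if n.lower().endswith(exts))
    return fresh, current

def run_inference(image_path):
    """Placeholder: hand the image to your actual inference code."""
    print(f"running inference on {image_path}")

def watch_loop(dump_dir=DUMP_DIR, poll_interval=0.5):
    """Poll the directory and process each new image as it appears."""
    seen = snapshot(dump_dir)
    while True:  # inotify would avoid this sleep-based polling
        time.sleep(poll_interval)
        fresh, seen = new_images(dump_dir, seen)
        for name in fresh:
            run_inference(Path(dump_dir) / name)
```

One caveat with any directory watcher: the Pi controller should write each image to a temporary name and rename it into place when complete, so the watcher never picks up a half-written file.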


If you use a Raspberry Pi Camera V2, it is enabled by default; you can try this command and check whether a camera preview appears:

$ gst-launch-1.0 nvarguscamerasrc ! nvoverlaysink

For inference on Jetson platforms, we suggest trying the DeepStream SDK. You may want to take a look at the documentation first:
NVIDIA Metropolis Documentation
If you use JetPack 4.6.2, please check DeepStream 6.0.1.

But my workflow is a bit different. My Raspberry Pi camera is connected to a Pi controller board; the Jetson Nano does not have direct access to the camera. Hence, the Pi controller dumps images onto the Jetson Nano board, and from there I need to pick them up for processing.

This would need other users to share their experience. Generally, input sources are connected to the Jetson Nano through an I/O interface, and we can capture through v4l2 or Argus. If the source is not directly connected, we can stream over UDP or RTSP.
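Since the images already land on the Nano's storage, a lighter-weight variant of the UDP option is to have the Pi controller send a small notification datagram after each dump, answering the second question in the original post. Below is a sketch of both sides in Python; the port number and message format are assumptions, not anything standardized:

```python
import socket

# Hypothetical port the Nano listens on for dump notifications.
NANO_PORT = 50007

def notify_nano(filename, host="127.0.0.1", port=NANO_PORT):
    """Pi-controller side: announce that `filename` has been written.

    Call this after the image file is fully written (or renamed into
    place) on the Nano's storage.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(filename.encode("utf-8"), (host, port))

def wait_for_notification(port=NANO_PORT, timeout=5.0):
    """Nano side: block until one dump notification arrives.

    Returns the announced file name; the inference program would call
    this in a loop and process each file as it is announced.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("0.0.0.0", port))
        s.settimeout(timeout)
        data, _addr = s.recvfrom(4096)
        return data.decode("utf-8")
```

UDP datagrams can be dropped, so it is worth keeping the directory scan as a fallback; the notification then just reduces latency rather than being the sole trigger.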

Your setup is unique; let's see if other users have tried a similar setup.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.