RAM Optimization for taking snapshots with a Raspberry Pi Camera (CSI)

Hi everybody,

This is my first topic on this forum, because I could not find a solution here or elsewhere for taking a single picture (snapshot) every 7 to 60 seconds with a Raspberry Pi camera (IMX219) on the Jetson Nano (4GB) without using too much RAM. The interval depends on the number of ROIs in the picture: my main process normally takes approximately 7 s per ROI, and I currently use 9 ROIs, which is the maximum.

On a Raspberry Pi it is easy to take a single picture with e.g. raspistill, but on the Jetson Nano it seems you have to go through a GStreamer pipeline (?), and GStreamer needs a lot of RAM: at least 250 MB (my Python process plus the nvargus daemon), and even more (500 MB) judging by some graphs - see further below for details.

To take one snapshot, I currently use Python code based on the CSI-Camera example with the following GStreamer pipeline at the full camera resolution (3280x2464):

    def _gstreamer_pipeline(self):
        # Based on the CSI-Camera example: capture into NVMM memory,
        # convert to BGR in system memory, keep only the newest frame.
        return (
            "nvarguscamerasrc sensor_id=%d ! "
            "video/x-raw(memory:NVMM), "
            "width=(int)%d, height=(int)%d, "
            "format=(string)NV12, framerate=(fraction)%d/1 ! "
            "nvvidconv flip-method=%d ! "
            "video/x-raw, format=(string)BGRx ! "
            "videoconvert ! "
            "video/x-raw, format=(string)BGR ! appsink max-buffers=1 drop=True"
            % (
                self._sensor_id,
                self._width,
                self._height,
                self._framerate,
                self._flip_method,
            )
        )

I also tried to run the following command directly every x seconds (e.g. via Python's os.system), but it takes a lot of time because the pipeline has to be initialized on every run:

gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! 'video/x-raw(memory:NVMM), width=3280, height=2464, framerate=(fraction)1/1, format=NV12' ! nvvidconv flip-method=0 ! nvjpegenc ! filesink location=img.jpg -e
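One way to avoid paying that initialization cost on every shot is to open the pipeline once and just pull the newest frame whenever a snapshot is due. A minimal sketch of that idea, assuming an OpenCV build with GStreamer support (the stock JetPack build usually has it); the helper function works with anything exposing the cv2.VideoCapture interface:

```python
import time


def take_snapshot(cap):
    """Return the most recent frame from an already-open capture, or None.

    `cap` is anything exposing the cv2.VideoCapture interface
    (grab() -> bool, retrieve() -> (bool, frame)).
    """
    # With "appsink max-buffers=1 drop=True" the sink keeps only the
    # newest buffer, so a single grab/retrieve yields the latest image.
    if not cap.grab():
        return None
    ok, frame = cap.retrieve()
    return frame if ok else None


if __name__ == "__main__":
    import cv2  # needs an OpenCV build with GStreamer support

    pipeline = (
        "nvarguscamerasrc sensor_id=0 ! "
        "video/x-raw(memory:NVMM), width=(int)3280, height=(int)2464, "
        "format=(string)NV12, framerate=(fraction)21/1 ! "
        "nvvidconv flip-method=0 ! "
        "video/x-raw, format=(string)BGRx ! videoconvert ! "
        "video/x-raw, format=(string)BGR ! "
        "appsink max-buffers=1 drop=True"
    )
    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
    try:
        for i in range(5):
            frame = take_snapshot(cap)
            if frame is not None:
                cv2.imwrite("img%02d.jpg" % i, frame)
            time.sleep(7)  # anywhere in the 7-60 s window
    finally:
        cap.release()
```

This does not reduce GStreamer's baseline memory footprint, but the start-up latency is paid only once instead of on every snapshot.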

But with this one-shot command I also have a problem with the automatic gain control (the first frames come out badly exposed), so I need to capture a number of buffers first, which takes even more time:

gst-launch-1.0 nvarguscamerasrc sensor_id=0 num-buffers=60 ! 'video/x-raw(memory:NVMM), width=3280, height=2464, framerate=(fraction)1/1, format=NV12' ! nvvidconv flip-method=0 ! nvjpegenc ! multifilesink location=img%02d.jpg -e
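If the pipeline is kept open between snapshots, the auto-exposure/AWB warm-up only has to happen once, when the camera starts; afterwards the algorithms stay converged. A sketch of the warm-up step, assuming the cv2.VideoCapture interface (the warm-up count of 30 is a guess; the gst-launch workaround above uses 60 buffers):

```python
def read_settled_frame(cap, warmup=30):
    """Discard `warmup` frames so auto-exposure/AWB can converge, then
    return the next frame (or None on failure).

    `cap` follows the cv2.VideoCapture interface. Only needed right
    after the pipeline is opened; later snapshots can skip the warm-up.
    """
    for _ in range(warmup):
        if not cap.grab():  # grab() just pulls a buffer, no conversion to numpy
            return None
    ok, frame = cap.read()
    return frame if ok else None
```

At 21 fps a 30-frame warm-up costs under 2 s, and only on the first capture, instead of a full pipeline start plus 60 buffers per snapshot.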

I would like to know whether there is an alternative method that does not use a GStreamer pipeline. If not, is there a pipeline that does the same basic processing without using so much memory?

Thank you in advance for your help,
Kind regards,

FYI: my main Docker process, “value” (a neural network implemented with TF 1.15), processed a dataset of approximately 800 images (run from bash); the available RAM dropped from 400 MB to 800 MB of usage, and the swap also grew to 1.6-1.8 GB. After that, I started processing the images captured by the camera in real time, and the available memory decreased to approximately 300 MB. So my current estimate is that the camera processing takes about 500 MB, even though htop only shows 126 MB for my Python code and 125 MB for the nvargus daemon - probably because GStreamer uses some memory on the GPU side (not sure?).

Maybe using a smaller sensor output size would help to reduce the memory usage.
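To illustrate the suggestion above: a full-resolution 3280x2464 BGR frame alone is roughly 24 MB (3280 × 2464 × 3 bytes), and every element that touches it in system memory multiplies that cost. A sketch of a pipeline builder where nvvidconv downscales in hardware before videoconvert ever sees the data - all parameter names and default values here are illustrative, not from the original post:

```python
def snapshot_pipeline(capture_w=3280, capture_h=2464,
                      out_w=1640, out_h=1232,
                      sensor_id=0, fps=21, flip=0):
    """Build a capture pipeline where nvvidconv downscales on the
    hardware converter, so videoconvert and appsink only ever handle
    the smaller BGR frames in system memory.
    """
    return (
        "nvarguscamerasrc sensor_id=%d ! "
        "video/x-raw(memory:NVMM), width=(int)%d, height=(int)%d, "
        "format=(string)NV12, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        # nvvidconv scales NVMM -> system memory down to out_w x out_h
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
        "videoconvert ! video/x-raw, format=(string)BGR ! "
        "appsink max-buffers=1 drop=True"
        % (sensor_id, capture_w, capture_h, fps, flip, out_w, out_h)
    )
```

At 1640x1232 each BGR frame is about 6 MB instead of about 24 MB, so the buffers held by videoconvert and appsink shrink by roughly a factor of four.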

Could you check the Argus/MMAPI samples for JPEG encoding:


Thank you very much for your reply. I will have a look at it!
Regarding the resolution, I would like to keep the highest resolution for the moment (even if it may not be necessary in all our cases).