Arducam IMX477 4K@30fps with Jetson Nano

I'm using an Arducam IMX477 on a Jetson Nano 4GB Developer Kit.
On paper, the Arducam IMX477 can capture images at 30 fps at 4K resolution, but I couldn't reach 30 fps; the best result I get is 13 fps.
I'm using GStreamer and OpenCV to read the data.
The image dimensions are 3280x2464 with 3 color channels (the resolution is actually lower than 4K).

The performance bottleneck should be in copying data from the NVMM buffer to the CPU buffer. Please check whether you can reach the target frame rate by running:

$ gst-launch-1.0 nvarguscamerasrc ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 -v

And then try

$ gst-launch-1.0 nvarguscamerasrc ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 -v

Yes, it worked fine and reached 30 fps at 4K resolution.
Thank you.
How can I reach this frame rate in a Python script?
I really need it. Please help me.

Please apply the pipeline string to this sample and check whether it can achieve the target frame rate:
OpenCV Video Capture with GStreamer doesn't work on ROS-melodic - #3 by DaneLLL

It takes high CPU usage. Please execute sudo nvpmodel -m 0 and sudo jetson_clocks to run the CPU cores at maximum clock.

Thanks for your response, and sorry for the delay.
It doesn't reach 30 fps at 4K resolution with this script.
It only reaches 13 fps in the best case.
What else can I do?

Is there any way to access the raw camera data before OpenCV, as some data type like a byte string, to at least skip encoding and converting the raw data to a numpy array? Is it possible to access that data in Python?
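On the byte-string idea: once you do have raw BGR bytes (e.g. from an appsink buffer), wrapping them in a numpy array is essentially free, because np.frombuffer creates a view over the bytes without copying. The expensive part is the upstream BGRx-to-BGR conversion on the CPU, not this step. A minimal sketch with synthetic data (the dimensions are dummy values, not the sensor's):

```python
import numpy as np

# Pretend these raw bytes came from a GStreamer buffer (8-bit BGR).
width, height = 4, 2
raw = bytes(range(width * height * 3))  # 24 bytes of dummy pixel data

# frombuffer creates a read-only view over the bytes -- no copy is made.
frame = np.frombuffer(raw, dtype=np.uint8).reshape(height, width, 3)

print(frame.shape)           # (2, 4, 3)
print(frame[0, 0].tolist())  # first pixel's B, G, R values: [0, 1, 2]
```

So the numpy-array step itself is not what costs you the frame rate.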

This is expected, since OpenCV only accepts CPU buffer data in BGR format, which takes significant CPU usage. If your use case uses CUDA filters in OpenCV, please check this:
Nano not using GPU with gstreamer/python. Slow FPS, dropped frames - #8 by DaneLLL

It runs a GStreamer pipeline and maps the NvBuffer to cv::cuda::GpuMat, which eliminates the additional memory copy. Alternatively, check whether the functions you need are supported in VPI:
VPI - Vision Programming Interface: Main Page
so that you can switch to VPI.

On the topic of getting the image buffer from GStreamer in Python, try this:

import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstVideo', '1.0')
gi.require_version('GstBase', '1.0')

from gi.repository import GLib, GObject, Gst, GstBase, GstVideo
import numpy as np  # used later in the appsink callback


Then define the pipeline:

pipeline = ('nvarguscamerasrc name=src_camera sensor_id=%d ! '
            'video/x-raw(memory:NVMM), width=(int)%d, height=(int)%d, framerate=(fraction)%d/1 ! '
            'nvvidconv output-buffers=1 ! '
            'video/x-raw, format=(string)BGRx ! videoconvert ! '
            'video/x-raw, format=(string)BGR ! '
            'appsink max-buffers=1 drop=true name=appsink emit-signals=True'
            % (camera_number, frame_width, frame_height, frame_rate))
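For concreteness, here is the string assembled with example values (sensor 0, the 3280x2464 mode at 30 fps — substitute your own settings); printing it is a quick way to sanity-check the caps before handing it to GStreamer:

```python
# Example values -- substitute your own camera settings.
camera_number, frame_width, frame_height, frame_rate = 0, 3280, 2464, 30

pipeline = ('nvarguscamerasrc name=src_camera sensor_id=%d ! '
            'video/x-raw(memory:NVMM), width=(int)%d, height=(int)%d, '
            'framerate=(fraction)%d/1 ! '
            'nvvidconv output-buffers=1 ! '
            'video/x-raw, format=(string)BGRx ! videoconvert ! '
            'video/x-raw, format=(string)BGR ! '
            'appsink max-buffers=1 drop=true name=appsink emit-signals=True'
            % (camera_number, frame_width, frame_height, frame_rate))

print(pipeline)  # one long launch description, same syntax as gst-launch-1.0
```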

Then initialize GStreamer and set up the pipeline:

Gst.init(None)
gst_pipeline = Gst.parse_launch(pipeline)

Define the callback function:

def on_new_sample(sink):
    sample = sink.emit('pull-sample')
    buffer = sample.get_buffer()

    caps = sample.get_caps()
    height = caps.get_structure(0).get_value("height")
    width = caps.get_structure(0).get_value("width")

    # Map the buffer so its data is readable from the CPU.
    success, map_info = buffer.map(Gst.MapFlags.READ)
    if not success:
        print("Buffer data could not be mapped.")
        return Gst.FlowReturn.ERROR

    # Wrap the mapped memory in a numpy array (no copy is made).
    image = np.ndarray(
        shape=(height, width, 3),
        dtype=np.uint8,
        buffer=map_info.data)

    # ... process `image` here; copy it if you keep it past this callback ...

    buffer.unmap(map_info)
    return Gst.FlowReturn.OK

Attach the callback function:

appsink = gst_pipeline.get_by_name('appsink')
appsink.connect('new-sample', on_new_sample)

Start the pipeline:

gst_pipeline.set_state(Gst.State.PLAYING)
These are just the core functions you need; there is a lot more to play around with and explore. With this code, every time the pipeline receives a new buffer, the callback fires and puts the BGR image into the numpy array. These callbacks are asynchronous (they run on a GStreamer streaming thread), so access to shared data should be made thread-safe.
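One common way to make that hand-off thread-safe is a small bounded queue between the callback and your processing loop, dropping stale frames so the consumer always sees fresh data (mirroring appsink's max-buffers=1 drop=true). A standard-library sketch — the producer thread here simulates the GStreamer streaming thread:

```python
import queue
import threading

# Bounded queue: holds at most one frame, mirroring appsink max-buffers=1.
frames = queue.Queue(maxsize=1)

def deliver(frame):
    """Called from the producer thread (the appsink callback in real code)."""
    try:
        frames.put_nowait(frame)
    except queue.Full:
        # Drop the stale frame so the consumer always sees fresh data.
        try:
            frames.get_nowait()
        except queue.Empty:
            pass
        frames.put_nowait(frame)

# Simulate the streaming thread delivering two frames back to back.
producer = threading.Thread(target=lambda: [deliver(b'frame-1'), deliver(b'frame-2')])
producer.start()
producer.join()

latest = frames.get()  # your processing loop reads here
print(latest)          # b'frame-2' -- the stale frame was dropped
```

In the real callback, remember to copy the numpy array before enqueuing it, since the mapped buffer is invalid once unmapped.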

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.