CSI video/image capture using Nano

I am new to the Nano and even to Linux. I have a Sony IMX camera, and I can capture video on the Nano from the command line. However, when I try the same thing from Python, it does not work: the exact command that works in the terminal fails when used from Python. I have tried NanoCamera as well as other GitHub libraries.

Also, I want to save the video in MP4 format. It would be great if someone could suggest the command to use to get maximum quality. For image processing I will be working on individual frames, so I want the frames in a raw format to keep the image quality high.

I was able to make it work. The issue was that I was not using sudo. Once I started running the Python program with sudo, I can see the video and capture photos.

Can someone suggest the command to use to get high-quality video or frames?
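For readers following along, here is a minimal sketch of the kind of Python capture script discussed in this thread: it builds an nvarguscamerasrc GStreamer pipeline string for cv2.VideoCapture. The resolution and framerate below are placeholder values, not the exact settings used in this thread; pick a mode your sensor actually reports.

```python
# Sketch: build a GStreamer pipeline string for a CSI camera on Jetson Nano.
# The width/height/framerate are placeholders; adjust them to one of the
# sensor modes reported by `v4l2-ctl --list-formats-ext`.

def gstreamer_pipeline(capture_width=1920, capture_height=1080,
                       display_width=1920, display_height=1080,
                       framerate=30, flip_method=0):
    """Build the GStreamer pipeline string used by cv2.VideoCapture."""
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width=(int){capture_width}, "
        f"height=(int){capture_height}, format=(string)NV12, "
        f"framerate=(fraction){framerate}/1 ! "
        f"nvvidconv flip-method={flip_method} ! "
        f"video/x-raw, width=(int){display_width}, "
        f"height=(int){display_height}, format=(string)BGRx ! "
        "videoconvert ! video/x-raw, format=(string)BGR ! appsink"
    )

# Typical usage on the Nano (requires OpenCV built with GStreamer support):
#   import cv2
#   cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
#   ok, frame = cap.read()
#   cap.release()
```

If OpenCV reports that it cannot open the pipeline, check that your cv2 build lists GStreamer under `cv2.getBuildInformation()`.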

hello karun003,

please share the commands you use to capture images for reference, and could you please attach the capture results that show the bad quality?
according to the Camera Architecture Stack, if you access the camera sensor with libargus, the ISP is also involved, which produces better-quality frames.

Thanks, Jerry, for your response. I am using the below Python program to capture the image.

In addition, I am using cv2.imwrite(..., [cv2.IMWRITE_PNG_COMPRESSION, 0]). Please check the attached generated image for your reference.

hello karun003,

that’s a Python script that enables a GStreamer pipeline and uses the nvarguscamerasrc plugin to access the camera sensor.
could you please access the L4T Multimedia API and enable the argus_camera application.
the argus_camera application has a user interface that lets you enable noise reduction and other camera controls.

Hi Jerry,

My idea is to capture video and pass the frames through OpenCV to find certain objects. In this case, do you think the argus approach works? I am planning to cover an area of around 15x10 meters and want to recognise small objects, so I want the quality to be as good as possible. Can you please advise? Since processing happens on the Nano, I am not bothered about the size of the video/frames. At the same time, I want to record the video for future reference.


I changed to a higher resolution, and the image quality looks better; I am inferring this from the image size.

def gstreamer_pipeline(
    capture_width=3820,
    capture_height=2464,
    display_width=3820,
    display_height=2464,

However, the image colour has changed to grey, as you can see in the attached image. I tried PNG, JPEG, and TIFF, but in vain. What might be the issue?

hello karun003,

it depends on the sensor output frames.
you may check the sensor capability for all supported sensor modes.
please install v4l-utils (i.e. sudo apt-get install v4l-utils) for the necessary v4l2 tools.
you can then check the sensor modes,
for example,

$ v4l2-ctl -d /dev/video0 --list-formats-ext
        Index       : 0
        Type        : Video Capture
        Pixel Format: 'BG10'
        Name        : 10-bit Bayer BGBG/GRGR
                Size: Discrete 2592x1944
                        Interval: Discrete 0.033s (30.000 fps)
                Size: Discrete 2592x1458
                        Interval: Discrete 0.033s (30.000 fps)
                Size: Discrete 1280x720
                        Interval: Discrete 0.008s (120.000 fps)


Below is the response I got from the mentioned command:

        Index       : 0
        Type        : Video Capture
        Pixel Format: 'RG10'
        Name        : 10-bit Bayer RGRG/GBGB
                Size: Discrete 3264x2464
                        Interval: Discrete 0.048s (21.000 fps)
                Size: Discrete 3264x1848
                        Interval: Discrete 0.036s (28.000 fps)
                Size: Discrete 1920x1080
                        Interval: Discrete 0.033s (30.000 fps)
                Size: Discrete 1280x720
                        Interval: Discrete 0.017s (60.000 fps)
                Size: Discrete 1280x720
                        Interval: Discrete 0.017s (60.000 fps)
Does this mean the sensor supports a 10-bit pixel format? When I try running:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420_10LE' ! omxh265enc ! matroskamux ! filesink location=test_10bit.mkv -e

it appears to run properly, but it generates 0 bytes of data. Could you please help me with a command for fetching high-quality video/frames?

hello karun003,

that’s a 10-bit RGGB Bayer sensor.
once the camera stream goes through the nvarguscamerasrc plugin, it is processed by the internal ISP, and you can handle it as YUV420.
may I know why you are using the video converter to change the format to I420_10LE?
here are sample commands to enable the sensor to do video recording.
for example,

$ gst-launch-1.0 nvarguscamerasrc sensor-id=0 num-buffers=300 ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1' ! nvtee ! omxh264enc bitrate=20000000 ! qtmux ! filesink location=video.mp4
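For scripting, the same kind of recording pipeline can be driven from Python via subprocess; below is a minimal sketch based on the sample command above. The sensor-id, bitrate, resolution, and output path are placeholders to adjust for your setup.

```python
import shlex
import subprocess

def build_record_cmd(sensor_id=0, num_buffers=300, bitrate=20000000,
                     out="video.mp4"):
    """Assemble the gst-launch-1.0 argument list for H.264 recording.

    With subprocess (no shell), the caps string needs no quoting:
    gst-launch-1.0 joins all its arguments before parsing the pipeline.
    """
    pipeline = (
        f"nvarguscamerasrc sensor-id={sensor_id} num-buffers={num_buffers} ! "
        "video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1 ! "
        f"nvtee ! omxh264enc bitrate={bitrate} ! qtmux ! "
        f"filesink location={out}"
    )
    return ["gst-launch-1.0"] + shlex.split(pipeline)

# Typical usage on a Jetson with the camera attached:
#   subprocess.run(build_record_cmd(), check=True)
```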

I used this command to try to get better-quality images. From your response, it looks like the sensor does not support this feature.
Currently, I am using the below command for recording video:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=(int)3264, height=(int)1848, format=(string)NV12, framerate=(fraction)28/1' ! nvv4l2h265enc bitrate=8000000 ! h265parse ! qtmux ! filesink location=/media/D6D86ADBD86AB97F/Vidd/h265enc/3264X1848X28.mp4 -e

Though it is recording video, I feel the quality can still be enhanced further. Do you see any issue with the above command?

Also, when I try running the sample you shared, it stops after recording 11-12 seconds. If I remove num-buffers=300, it works fine.

hello karun003,

that’s not a failure; since you specified num-buffers=300, the GST pipeline terminates after 300 capture buffers.
you should also note that my sample uses the h264 encoder while you enabled the h265 encoder; that’s why you see different-quality images.

Thanks, Jerry, for the clarification. Does it mean that h264 has better quality than h265?

For quality tuning, please refer to this post:

Under identical configuration, H265 should be slightly better than H264. You may compare the PSNR.
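For a concrete sense of the PSNR comparison mentioned here, below is a small pure-Python sketch. The 8-bit peak value of 255 is assumed; a real comparison would run this over full decoded frames from each encoder.

```python
import math

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of 8-bit pixel values; higher means closer to the
    reference (identical inputs give infinity)."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return math.inf
    return 10 * math.log10(peak ** 2 / mse)

# Toy example: a distorted copy differing by 1 in every pixel.
ref = [10, 20, 30, 40]
dist = [11, 21, 31, 41]
print(round(psnr(ref, dist), 2))  # MSE = 1 -> 10*log10(255^2) ≈ 48.13 dB
```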

Thanks, Danelll. I am looking for a command that, at a high level, gives good-quality frames. I will probably check the one you sent.

As I am new to this field, can you share some documentation on how I can explore the different pipelines available, along with information on system resource utilisation?

Please check the Multimedia section in the developer guide.
