Can I capture still images during video streaming in GStreamer?

I want to capture a number of still images during video streaming. The video stream is full size (2592 x 1944, the same as the still image size) and goes through a GStreamer pipeline. Would you please give me your suggestions?
Thanks

Hi,
Please refer to this sample:

It demonstrates video preview + JPEG encoding. You may try to build/run it first, and then customize the video preview into UDP streaming. You may replace nvcamerasrc with nvarguscamerasrc, since nvcamerasrc is deprecated.

For UDP streaming, you may replace nvoverlaysink with

nvv4l2h264enc ! h264parse ! rtph264pay ! udpsink
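
For a quick standalone test of the streaming part, the full pipeline might look like this (a sketch; the client IP and port are placeholders you would adapt):

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=2592, height=1944, framerate=30/1' ! nvv4l2h264enc ! h264parse ! rtph264pay ! udpsink host=<CLIENT_IP> port=5000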

Hi DaneLLL,
Thank you for your reply.
I have downloaded gst_jpg_on_demand.zip.
I did some changes as below:

  1. changed width=2592, height=1944.

  2. launch_stream changed from:

    launch_stream
    << "nvcamerasrc ! "
    << "video/x-raw(memory:NVMM), width="<< w <<", height="<< h <<", framerate=30/1 ! "
    << "tee name=t1 "
    << "t1. ! queue ! nvoverlaysink "
    << "t1. ! queue ! nvvidconv ! "
    << "video/x-raw, format=I420, width="<< w <<", height="<< h <<" ! "
    << "appsink name=mysink ";

to

launch_stream
<< "v4l2src device=/dev/video0 ! "
<< "video/x-raw, width="<< w <<", height="<< h <<", framerate=28/1 ! "
<< "tee name=t1 "
<< "t1. ! queue ! nvvidconv ! video/x-raw(memory:NVMM), format=(string)I420 ! nvv4l2h265enc ! h265parse ! rtph265pay ! udpsink clients=192.168.18.18:5201 sync=false "
<< "t1. ! queue ! nvvidconv ! "
<< "video/x-raw, format=I420, width="<< w <<", height="<< h <<" ! "
<< "appsink name=mysink ";

Below is the output from the terminal:

Preview string: v4l2src device=/dev/video0 ! video/x-raw, width=2592, height=1944, framerate=28/1 ! tee name=t1 t1. ! queue ! nvvidconv ! video/x-raw(memory:NVMM), format=(string)I420 ! nvv4l2h265enc ! h265parse ! rtph265pay ! udpsink clients=192.168.18.18:5201 sync=false t1. ! queue ! nvvidconv ! video/x-raw, format=I420, width=2592, height=1944 ! appsink name=mysink 
JPEG encoding string: appsrc name=mysource ! video/x-raw,width=2592,height=1944,format=I420,framerate=1/1 ! nvjpegenc ! multifilesink location=snap-%03d.jpg 
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 8 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 8 

NVMEDIA: H265 : Profile : 1 
going to exit 

On my laptop, I launched GStreamer to view the streamed video, but no video was received.

I tried the GStreamer command below on the Jetson Nano.

gst-launch-1.0 -v v4l2src device=/dev/video0 ! "video/x-raw, format=(string)UYVY, width=(int)2592, height=(int)1944,framerate=28/1" ! nvvidconv ! "video/x-raw(memory:NVMM),format=(string)I420" ! nvv4l2h265enc ! h265parse ! rtph265pay ! udpsink clients=192.168.18.18:5201 sync=false

My laptop received the video stream from the Jetson Nano, which proves that the streaming pipelines on both the Jetson Nano and the PC are correct.
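
(For reference, the laptop-side command is not shown in this thread; a typical receiver for such a stream would be something like the sketch below, assuming a software H265 decoder is available on the laptop:

gst-launch-1.0 udpsrc port=5201 ! 'application/x-rtp, media=video, encoding-name=H265, clock-rate=90000' ! rtph265depay ! h265parse ! avdec_h265 ! videoconvert ! autovideosink )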

Questions:

  1. Why does the launch stream from the application program not stream the video?
  2. I can’t find any saved picture. Which command is used to capture the still image?

Thanks

The issue might be that nvvidconv expects at least one of its input or output to be in NVMM memory.
You may use a double nvvidconv, or move the tee after the first copy into NVMM memory.
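
For instance, the appsink branch with a double nvvidconv would become something like this sketch (first copy into NVMM memory, then back to CPU memory):

t1. ! queue ! nvvidconv ! video/x-raw(memory:NVMM), format=I420 ! nvvidconv ! video/x-raw, format=I420, width=2592, height=1944 ! appsink name=mysink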

Hi,
You may confirm the mode is configured correctly in the v4l2 source. Please refer to the Jetson Nano FAQ:
Q: I have a USB camera. How can I launch it on Jetson Nano?
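
For example, you can list the modes the sensor actually exposes with:

v4l2-ctl -d /dev/video0 --list-formats-ext

and check that the format/width/height/framerate in your caps match one of them.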

Hi
Thanks for your info.
I have updated the code as below:

launch_stream
<< "v4l2src device=/dev/video0 ! "
<< "video/x-raw, width="<< w <<", height="<< h <<", format=UYVY, framerate=28/1 ! "
<< "nvvidconv ! video/x-raw(memory:NVMM), format=(string)I420 ! "
<< "tee name=t1 "
<< "t1. ! queue ! nvv4l2h265enc ! h265parse ! rtph265pay ! udpsink clients=192.168.18.18:5201 sync=false "
<< "t1. ! queue ! "
<< "appsink name=mysink ";

It only streams one frame to the remote device over WiFi, and I can’t find the saved video file on the Jetson Nano.
This is not what I expected.

What I expected is:

  1. stream video continuously to the remote device.
  2. capture still images on the Jetson Nano on demand.

Would you please give me your suggestion on how I can do this?
Thanks.

My camera is a MIPI camera, an e-CAM50_CUNANO from e-con Systems.
It has an issue with nvv4l2camerasrc at the 2592 x 1944 resolution. Please refer to

Therefore, I can only use v4l2src to get the expected resolution.

You could try this pipeline:

gst-launch-1.0 -ev v4l2src device=/dev/video0 ! video/x-raw, format=UYVY, width=2592, height=1944,framerate=28/1 ! tee name=cam ! queue ! nvvidconv ! 'video/x-raw(memory:NVMM),format=(string)I420' ! nvv4l2h265enc maxperf-enable=true insert-vui=true insert-sps-pps=1 ! tee name=h265_stream ! queue ! h265parse ! rtph265pay ! udpsink clients=192.168.18.18:5201       h265_stream. ! queue ! h265parse ! matroskamux ! filesink location=test_h265.mkv     cam. ! queue ! fakesink

and if it works, you would just replace fakesink by appsink in your application, assuming it can process frames in UYVY format.

Thank you for your help.
Your script works in the terminal. It streamed the video to the other device and saved a video file named test_h265.mkv.
The saved video can also be played back with the command below:

gst-launch-1.0 filesrc location=test_h265.mkv ! matroskademux ! h265parse ! omxh265dec ! nvoverlaysink

My camera supports the UYVY format.

After replacing fakesink with appsink, this is the output info:
Preview string: v4l2src device=/dev/video0 ! video/x-raw, format=UYVY, width=2592, height=1944, framerate=28/1 ! tee name=cam ! queue !nvvidconv ! video/x-raw(memory:NVMM), format=(string)I420 ! nvv4l2h265enc maxperf-enable=true insert-vui=true insert-sps-pps=1 !tee name=h265_stream ! queue ! h265parse ! rtph265pay ! udpsink clients=192.168.18.18:5201 h265_stream. ! queue ! h265parse ! matroskamux ! filesink location=test_h265.mkv cam. ! queue ! appsink name=mysink

JPEG encoding string: appsrc name=mysource ! video/x-raw,width=2592,height=1944,format=I420,framerate=1/1 ! nvjpegenc ! multifilesink location=/home/ion/snap-test.jpg

Opening in BLOCKING MODE

NvMMLiteOpen : Block : BlockType = 8

===== NVMEDIA: NVENC =====

NvMMLiteBlockCreate : Block : BlockType = 8

NVMEDIA: H265 : Profile : 1

It only streams one frame, and the saved video has only one frame as well. No saved still image file is available.

The attached file is the source code.
main.cpp (4.5 KB)

Seems like typos… be sure to have a space before and after each ! in the pipeline. Yours seems to be lacking one before nvvidconv and before tee name=h265_stream.

Does it help?

Hi,
You can add a debug print in new_buffer() to check if buffers are received in the appsink.
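A minimal version of that callback with a debug print might look like the sketch below (the exact signature in the sample may differ):

static GstFlowReturn new_buffer(GstAppSink *appsink, gpointer user_data)
{
    // Pull the sample that triggered the new-sample signal
    GstSample *sample = gst_app_sink_pull_sample(appsink);
    if (sample) {
        GstBuffer *buffer = gst_sample_get_buffer(sample);
        // Debug print: confirm buffers actually reach the appsink
        g_print("appsink received buffer of size %" G_GSIZE_FORMAT "\n",
                gst_buffer_get_size(buffer));
        gst_sample_unref(sample);
    }
    return GST_FLOW_OK;
}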

Thank you very much for your kind help.
I have put your GStreamer pipeline into an OpenCV VideoCapture as below:

cv::VideoCapture camera("v4l2src device=/dev/video0 \
                        ! video/x-raw, format=UYVY, width=2592, height=1944, framerate=28/1 \
                        ! tee name=cam !  queue \
                        ! nvvidconv ! video/x-raw(memory:NVMM), format=(string)I420 ! nvv4l2h265enc maxperf-enable=true insert-vui=true insert-sps-pps=1 \
                        ! tee name=h265_stream ! queue ! h265parse ! rtph265pay ! udpsink clients=192.168.18.18:5201 \
                        h265_stream. ! queue ! h265parse ! matroskamux ! filesink location=test_h265.mkv \
                        cam. ! queue \
                        ! nvvidconv ! video/x-raw(memory:NVMM), format=I420 \
                        ! nvvidconv ! video/x-raw, format=BGRx \
                        ! videoconvert ! video/x-raw,format=BGR \
                        ! appsink", CAP_GSTREAMER);

I tested it. It can stream the video to the other device over WiFi, save the video file, and capture the frames as I expected.
But playing back the saved video hits an issue, with the error messages below:

Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...

(gst-launch-1.0:10127): GStreamer-CRITICAL **: 10:30:11.419: gst_caps_is_empty: assertion 'GST_IS_CAPS (caps)' failed

(gst-launch-1.0:10127): GStreamer-CRITICAL **: 10:30:11.420: gst_caps_truncate: assertion 'GST_IS_CAPS (caps)' failed

(gst-launch-1.0:10127): GStreamer-CRITICAL **: 10:30:11.420: gst_caps_fixate: assertion 'GST_IS_CAPS (caps)' failed

(gst-launch-1.0:10127): GStreamer-CRITICAL **: 10:30:11.420: gst_caps_get_structure: assertion 'GST_IS_CAPS (caps)' failed

(gst-launch-1.0:10127): GStreamer-CRITICAL **: 10:30:11.420: gst_structure_get_string: assertion 'structure != NULL' failed

(gst-launch-1.0:10127): GStreamer-CRITICAL **: 10:30:11.420: gst_mini_object_unref: assertion 'mini_object != NULL' failed
NvMMLiteOpen : Block : BlockType = 279 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 279 
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Allocating new output: 2592x1952 (x 9), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3605: Send OMX_EventPortSettingsChanged: nFrameWidth = 2592, nFrameHeight = 1944 

Below is my playback script:
gst-launch-1.0 filesrc location=test_h265.mkv ! matroskademux ! h265parse ! omxh265dec ! nvoverlaysink

The recorded video plays fine on screen, so why the GStreamer-CRITICAL messages? How can I fix them?
Thanks

The messages are related to omxh265dec. Use nvv4l2decoder and you’ll get rid of this:

gst-launch-1.0 filesrc location=test_h265.mkv ! matroskademux ! h265parse ! nvv4l2decoder ! nvoverlaysink

Also note that you may save one nvvidconv by moving the tee after it:

cv::VideoCapture camera("v4l2src device=/dev/video0 \
                        ! video/x-raw, format=UYVY, width=2592, height=1944, framerate=28/1 \
                        ! nvvidconv ! video/x-raw(memory:NVMM), format=I420 \
                        ! tee name=camNVMM ! queue ! nvv4l2h265enc maxperf-enable=true insert-vui=true insert-sps-pps=1 \
                        ! tee name=h265_stream ! queue ! h265parse ! rtph265pay ! udpsink clients=192.168.18.18:5201 \
                        h265_stream. ! queue ! h265parse ! matroskamux ! filesink location=test_h265.mkv \
                        camNVMM. ! queue ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink", cv::CAP_GSTREAMER);

Thank you very much.
Both of your scripts work well. But I hit another issue (I did not check it before): the CPU usage and the achieved frame rate.

  1. Running the video streaming script from the terminal, with appsink replaced by fakesink: the average CPU usage is around 50% on each core. The video received on the laptop is about 23 fps.

  2. Running the video stream from the OpenCV VideoCapture, without displaying the captured video frames on screen: the average CPU usage is around 85% on each core. The video received on the laptop is about 20 fps.

  3. Running the video stream from the OpenCV VideoCapture, and displaying every captured video frame on screen: the average CPU usage is around 70% on each core. The video received on the laptop is about 10 fps.

Even in case 1, it can’t achieve the 28 fps the camera is capable of. It is NOT a camera exposure time issue, as I have set the camera exposure mode to manual and the exposure time to 31 ms.

My questions are:

  1. How can I achieve the streaming frame rate of 28 fps?
  2. How can I reduce the CPU usage?
  3. Why is the CPU usage in case 3 less than in case 2, while the achieved frame rate is only half of case 2? It is NOT a CPU temperature issue, as all 4 CPU temperatures are below 45 degrees in both cases. It is NOT a CPU frequency issue, as the frequency is 1479 MHz in both cases.

Thanks

You may have a look at this post.
You may also create a new topic for this.

Hi,
When using OpenCV on Jetson platforms, you will see significant CPU usage due to generating buffers in BGR format. The nvvidconv plugin supports BGRx but not BGR; this is a limitation of the hardware VIC engine. You have to use the videoconvert plugin for converting BGRx to BGR, which incurs a high CPU load.
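
You can see that cost in isolation with a quick test, e.g. (a sketch; videotestsrc stands in for the camera, and you can watch the load with tegrastats while it runs):

gst-launch-1.0 videotestsrc num-buffers=300 ! 'video/x-raw, width=2592, height=1944, format=BGRx' ! videoconvert ! video/x-raw, format=BGR ! fakesink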

Hi DaneLLL,
Thank you for pointing out where the high CPU usage comes from.
I replaced capturing a BGR image for my image processing:

nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink

with capturing a downscaled grey image for my image processing:

nvvidconv ! video/x-raw, format=GRAY8, width=640, height=480 ! appsink

which can achieve the expected video streaming frame rate.
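
For reference, the full capture string then looks roughly like this (a sketch keeping the rest of the pipeline from the earlier post unchanged):

cv::VideoCapture camera("v4l2src device=/dev/video0 \
                        ! video/x-raw, format=UYVY, width=2592, height=1944, framerate=28/1 \
                        ! nvvidconv ! video/x-raw(memory:NVMM), format=I420 \
                        ! tee name=camNVMM ! queue ! nvv4l2h265enc maxperf-enable=true insert-vui=true insert-sps-pps=1 \
                        ! tee name=h265_stream ! queue ! h265parse ! rtph265pay ! udpsink clients=192.168.18.18:5201 \
                        h265_stream. ! queue ! h265parse ! matroskamux ! filesink location=test_h265.mkv \
                        camNVMM. ! queue ! nvvidconv ! video/x-raw, format=GRAY8, width=640, height=480 ! appsink", cv::CAP_GSTREAMER);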


Hi,
The GRAY format should take less CPU usage than BGR. Good to know that you worked out a valid solution.

Hi Honey_Patouceul,
Based on your GStreamer code above, I can capture every frame (at the full size the camera supports) for my image algorithm. I can also save a captured frame as a JPG image using OpenCV imwrite at a specific point. But this method has an issue of heavy CPU usage. I don’t know if it is possible to split the appsink into two:

  1. convert to a smaller (640x480) grey image for video image processing, reading each frame from this branch.
  2. keep the other branch as a separate video source, so that I can create another VideoCapture object to grab the required frame and save it as a JPG image using imwrite.

If this is possible, would you please let me know how?
Thank you very much.

You may first save some CPU usage by using nvv4l2camerasrc, which may be able to read your V4L2 UYVY camera and provide frames directly into NVMM memory, instead of v4l2src + nvvidconv. Any conversion done with nvvidconv may also save CPU usage, as it would be done by dedicated HW.

I don’t clearly understand your use case, but be aware that using videoconvert at high resolution × framerate will result in significant CPU load.
So using nvvidconv to convert into GRAY8 is a good solution if your detection is done in monochrome.
However, if you try to start another GStreamer VideoCapture from the OpenCV detection, it might take about 2 s to open, so it would probably be too late.

You may give further details about what image quality you would need, and how many frames after detection, for better advice, but I think you would have to program with the GStreamer framework (or Jetson MMAPI) for these kinds of usage.
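
As a rough starting point, a two-appsink pipeline in the GStreamer framework could look like the sketch below (element names det/snap and the snapshot trigger are just for illustration; error handling is mostly omitted; build against gstreamer-app-1.0):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

int main(int argc, char **argv)
{
    gst_init(&argc, &argv);

    // One NVMM tee feeding two appsinks: a small GRAY8 branch for detection,
    // and a full-resolution branch that only keeps the latest frame so it
    // can be grabbed on demand without back-pressuring the capture.
    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "v4l2src device=/dev/video0 ! "
        "video/x-raw, format=UYVY, width=2592, height=1944, framerate=28/1 ! "
        "nvvidconv ! video/x-raw(memory:NVMM), format=I420 ! tee name=t "
        "t. ! queue ! nvvidconv ! video/x-raw, format=GRAY8, width=640, height=480 ! "
        "appsink name=det "
        "t. ! queue leaky=downstream max-size-buffers=1 ! nvvidconv ! "
        "video/x-raw, format=I420 ! appsink name=snap drop=true max-buffers=1",
        &err);
    if (!pipeline) {
        g_printerr("parse error: %s\n", err->message);
        return -1;
    }

    GstElement *det  = gst_bin_get_by_name(GST_BIN(pipeline), "det");
    GstElement *snap = gst_bin_get_by_name(GST_BIN(pipeline), "snap");
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    for (int i = 0; i < 300; i++) {
        // Detection branch: pull every small GRAY8 frame
        GstSample *s = gst_app_sink_pull_sample(GST_APP_SINK(det));
        if (!s)
            break;
        // ... run detection on the mapped buffer here ...
        gboolean want_snapshot = (i % 100 == 0); // illustration only
        gst_sample_unref(s);

        if (want_snapshot) {
            // Snapshot branch: grab the latest full-resolution frame
            GstSample *full =
                gst_app_sink_try_pull_sample(GST_APP_SINK(snap), GST_SECOND);
            if (full) {
                // ... push this buffer into a JPEG encoder, e.g. the
                // appsrc ! nvjpegenc pipeline from the sample above ...
                gst_sample_unref(full);
            }
        }
    }

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(det);
    gst_object_unref(snap);
    gst_object_unref(pipeline);
    return 0;
}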