Thinning frames from an IP camera using OpenCV and GStreamer

I’m trying to develop an application that processes frames from an IP camera.
The IP camera streams at 30fps (fixed).
However, the processing in my application works at 10fps.
Therefore, it is necessary to thin out the frames.
How can I achieve this?
For now, I am considering an approach that uses multithreading.

Development Environment
HW : Xavier NX
SW : JetPack 4.4

On Jetson platforms, you would need to run GStreamer + OpenCV like:
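The pipeline itself is not quoted in full in this thread; a hedged sketch of the typical form, reconstructed from the fragments quoted later (the RTSP URL is a placeholder, and the decoder element, omxh264dec on JetPack 4.x, is an assumption):

```shell
gst-launch-1.0 rtspsrc location='rtsp://<camera-url>' latency=300 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! 'video/x-raw, format=(string)BGRx' ! videoconvert ! 'video/x-raw, format=(string)BGR' ! appsink
```

The same string (without the gst-launch-1.0 prefix) would be passed to cv::VideoCapture with the GStreamer backend.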

This is not an optimal solution, because there is a memory copy in:

nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw,format=BGR

We would suggest using pure GStreamer or jetson_multimedia_api. If your use case is deep learning inference, you can try the DeepStream SDK.

Thank you for your suggestion.
I use a GStreamer pipeline with OpenCV VideoCapture.
Your code covers the pass-through situation (IP cam -> display).
But my goal is "IP cam @30fps -> some processing (needs 10fps = 100ms) -> display, other".

You may try to use the videorate plugin:

... ! video/x-raw, framerate=30/1 ! videorate ! video/x-raw, framerate=10/1 ! ....
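Placed in a complete pipeline, videorate would sit in the decoded-video part of the chain; a hedged sketch (the RTSP URL is a placeholder, and drop-only=true tells videorate to only drop frames, never duplicate them):

```shell
gst-launch-1.0 rtspsrc location='rtsp://<camera-url>' latency=300 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! videorate drop-only=true ! 'video/x-raw, framerate=10/1' ! videoconvert ! ximagesink sync=false
```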

Thanks, Honey_Patouceul.
I tried the command below, but it failed.
Is the insertion point correct?

gst-launch-1.0 rtspsrc location=rtsp://@/h264 latency=300 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! videoconvert ! "video/x-raw, format=(string)BGR, framerate=25/1" ! videorate ! "video/x-raw, framerate=10/1" ! ximagesink sync=false

Please try nvv4l2decoder and configure the property:

  drop-frame-interval : Interval to drop the frames, e.g. a value of 5 means every 5th frame will be given by the decoder, rest all dropped
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 30 Default: 0

For 30fps -> 10fps, please set drop-frame-interval=3.

I tried your suggestion. However, it does not work.

The pipeline given to OpenCV VideoCapture is as below.
"rtspsrc location=rtsp:// latency=0 ! rtph264depay ! queue ! h264parse ! nvv4l2decoder drop-frame-interval=5 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink"

The solution provided by @DaneLLL would be much more efficient than videorate.
Just remove h264parse before nvv4l2decoder (this is a known issue in nvv4l2decoder with H264 streams in byte-stream format).
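With h264parse removed, the pipeline string passed to cv::VideoCapture would become (a sketch based on the pipeline quoted above; the RTSP URL is elided as in the original):

```shell
# Pipeline string for cv::VideoCapture (no h264parse before nvv4l2decoder)
rtspsrc location=rtsp:// latency=0 ! rtph264depay ! queue ! nvv4l2decoder drop-frame-interval=5 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink
```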

I tried the following pipeline.
In gst-launch, it does work.
But OpenCV VideoCapture does not (with ximagesink changed to appsink).

gst-launch : OK
gst-launch-1.0 rtspsrc location=rtsp:// latency=300 ! rtph264depay ! nvv4l2decoder drop-frame-interval=10 ! nvvidconv ! videoconvert ! ximagesink

OpenCV : NG
rtspsrc location=rtsp:// latency=300 ! rtph264depay ! nvv4l2decoder drop-frame-interval=10 ! nvvidconv ! video/x-raw,format=(string)BGRx ! videoconvert ! video/x-raw,format=(string)BGR ! appsink

The following works for me:

  const char *gst = "rtspsrc location=rtspt:// latency=300 ! rtph264depay ! nvv4l2decoder drop-frame-interval=3 ! nvvidconv interpolation-method=5 ! video/x-raw, format=BGRx, width=1280, height=720 ! videoconvert ! video/x-raw, format=BGR ! appsink";
  cv::VideoCapture cap (gst, cv::CAP_GSTREAMER);
  if (!cap.isOpened ()) {
    std::cout << "Failed to open camera." << std::endl;
    return (-1);
  }

  std::cout << "Video Capture opened (backend: " << cap.getBackendName() << ")" << std::endl;
  unsigned int width = (unsigned int) cap.get (cv::CAP_PROP_FRAME_WIDTH);
  unsigned int height = (unsigned int) cap.get (cv::CAP_PROP_FRAME_HEIGHT);
  unsigned int fps = (unsigned int) cap.get (cv::CAP_PROP_FPS);
  unsigned int pixels = width * height;
  std::cout << "Frame size : " << width << " x " << height << ", " << pixels << " Pixels @" << fps << " FPS" << std::endl;

The problem may be that the framerate is not set. You may try to specify one, such as:
... ! rtph264depay ! video/x-h264, clock-rate=90000, framerate=24/1 ! nvv4l2decoder ...

I tried to set the frame rate as below, but it did not play.
"rtspsrc location=rtsp:// latency=300 ! rtph264depay ! video/x-h264, clock-rate=90000, framerate=30/1 ! nvv4l2decoder drop-frame-interval=10 ! nvvidconv ! video/x-raw, format=BGRx, width=1920, height=1080, framerate=30/1 ! videoconvert ! video/x-raw, format=BGR ! appsink"

And my OpenCV code is as below.
// get pipeline from command line
pipeline = std::string(argv[1]);
std::cout << pipeline << std::endl;
cap = cv::VideoCapture(pipeline);
if (!cap.isOpened()) {
  std::cout << "rtsp open failed\n";
  return 0;
}
cv::namedWindow("demo", cv::WINDOW_AUTOSIZE);

First try my pipeline; it uses a public online sample, so it should work for you if your Jetson is connected to the internet.
You should specify the GStreamer API for VideoCapture:

cap = cv::VideoCapture(pipeline, cv::CAP_GSTREAMER);

Also uncomment the waitKey call, which is mandatory after imshow (you may use 1 ms instead of 10).


I modified the code as you said (added cv::CAP_GSTREAMER and uncommented waitKey), and now it works!
Thank you !