OpenCV build script

I see it’s built with FFmpeg. Can you run this and see what you get?

 $ opencv_version --verbose | grep FFMPEG
    FFMPEG:                      YES

Unless you have Nvidia’s version of ffmpeg installed and built OpenCV against it, I’d recommend using GStreamer pipelines to decode instead.

Converting to BGR can be a problem, however, since Nvidia’s nvvidconv only supports four-channel BGRA output and VideoCapture does not accept it. You can try something like uridecodebin ... ! ... format=I420 ! nvvidconv ! appsink with VideoCapture instead, but then you’re stuck having to convert YUV to BGR in software, which is slow AF.
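A rough sketch of that I420 approach, in case it helps (the URI is a placeholder, the exact caps may need adjusting, and it assumes OpenCV was built with GStreamer support):

import cv2

# hypothetical pipeline: uridecodebin picks the HW decoder, nvvidconv copies
# the frame out to system memory as I420, and appsink hands the raw buffer
# to VideoCapture
pipeline = ('uridecodebin uri=rtsp://<your_cam_IP>:<port>/<path> '
            '! nvvidconv ! video/x-raw, format=I420 ! appsink')
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, yuv = cap.read()
if ok:
    # this software conversion is the slow part mentioned above
    bgr = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR_I420)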

Depending on what you want to do, you might be better off using DeepStream instead, with OpenCV inside gstreamer elements if necessary.

Mixing OpenCV with Nvidia’s stuff is a recipe for performance disaster, since you have to keep copying between CPU and GPU to get anything done. At least if you do it in a GStreamer worker thread, it doesn’t necessarily have to block the video pipeline.

Using OpenCV to convert from I420 to BGR after capture is often slower than using nvvidconv to produce BGRx and a CPU videoconvert to strip the extra fourth byte and produce BGR. A queue between videoconvert and appsink may also help performance.
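In other words, the tail of the pipeline would look something like this (a sketch; the elements before nvvidconv depend on your source):

... ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! queue ! appsink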

However, your case is different, since you are sending processed frames to an RTSP server. You may want to explain your use case further, but if your processing is some kind of filtering, you could also look into the nvivafilter plugin and use it in your test-launch pipeline.

[EDIT: Not sure I’ve correctly understood your case. If you want to read an RTSP feed from an IP camera, uridecodebin as mentioned by @mdegans would probably be a good choice. Depending on your processing, DeepStream might be a good choice too. Please tell us more.]

Hi,
Thanks for the response, you two. Sorry, I’m a beginner to all this; pardon me if I say something dumb.

Well, I’ll explain my use case here and look thoroughly into the solutions you provided above.
To put it simply, I’m developing a GUI in PyQt (Python) that shows and captures (on a button press) the video feed from an IP camera (I’m using a Hikvision IP cam) and also controls its PTZ.

So far, I have been able to stream the video feed by passing the RTSP URL of my IP cam directly to the VideoCapture() function and getting the frames with read() in a while loop. This is working fine, but I know it’s only using the CPU of the Jetson Xavier NX.
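Roughly, what I have now is along these lines (simplified):

import cv2

# plain RTSP URL: OpenCV uses its ffmpeg backend, so decoding runs on the CPU
cap = cv2.VideoCapture('rtsp://<user>:<password>@<cam_IP>:<port>/<path>')
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ... hand the frame over to the GUI / capture logic ...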

My aim is to get the most out of the Xavier NX by utilizing the GPU for this purpose. I tried constructing a GStreamer pipeline with rtspsrc ... ! ... appsink (after studying it only briefly), but I couldn’t get it working with OpenCV VideoCapture(). That’s why I tried updating OpenCV from the default 4.1.1 to 4.4 with CUDA support, to somehow use the GPU for my RTSP stream through VideoCapture().

Could you suggest somewhere to start researching? Again, thanks a lot for your time.
Kindly let me know if any more information is required.

P.S.: I’m using QThreads to separate the read() section from the GUI thread.
Also, I want to do YOLO object detection on this feed from the IP cam.
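For context, the capture thread is set up roughly like this (a simplified sketch; the class and signal names are just illustrative):

import cv2
import numpy as np
from PyQt5.QtCore import QThread, pyqtSignal

class CaptureThread(QThread):
    # emits each decoded frame to the GUI thread
    frame_ready = pyqtSignal(np.ndarray)

    def __init__(self, url, parent=None):
        super().__init__(parent)
        self.url = url
        self.running = True

    def run(self):
        cap = cv2.VideoCapture(self.url)  # CPU (ffmpeg) decode for now
        while self.running and cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            self.frame_ready.emit(frame)  # GUI thread displays/saves it
        cap.release()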

Decoding an H264 stream on Jetson wouldn’t use the GPU. The standard ffmpeg version uses the CPU only. NVIDIA’s ffmpeg version, or jocover’s version with nvmpi support, would be able to use the HW decoder (NVDEC), but you would have to rebuild OpenCV to support such an ffmpeg version.

The simplest way is to use a gstreamer pipeline leveraging this HW decoder for opencv capture:

import cv2

cap = cv2.VideoCapture('rtspsrc location=rtsp://<your_cam_IP>:<port>/<path> latency=500 ! application/x-rtp, media=video, encoding-name=H264 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink', cv2.CAP_GSTREAMER)

# If your IP cam requires login/password:
cap = cv2.VideoCapture('rtspsrc location=rtsp://<user>:<password>@<your_cam_IP>:<port>/<path> latency=500 ! application/x-rtp, media=video, encoding-name=H264 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink', cv2.CAP_GSTREAMER)
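Then read frames as usual; a minimal loop would be:

if not cap.isOpened():
    print('Failed to open pipeline')
else:
    while True:
        ok, frame = cap.read()  # frame is a BGR numpy array
        if not ok:
            break
        # ... process/display frame ...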

OMX plugins are being deprecated. You may try nvv4l2decoder instead of omxh264dec.
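For example, the same capture with the decoder swapped (an untested sketch, same structure as above):

cap = cv2.VideoCapture('rtspsrc location=rtsp://<your_cam_IP>:<port>/<path> latency=500 ! application/x-rtp, media=video, encoding-name=H264 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink', cv2.CAP_GSTREAMER)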

Hey, I just built it using your script. I am using JetPack 4.5, and it installed into Python 2.7 instead of 3.6. There wasn’t even a 2.7 before I ran your script!! JetPack 4.5 only has 3.6, so why does it now have a 2.7, and why can I only import cv2 in 2.7?!

EDIT: I tried to install following these instructions: https://pythops.com/post/compile-deeplearning-libraries-for-jetson-nano

And I noticed that during the install it says this:

--   Python 3:
--     Interpreter:                 /usr/bin/python3 (ver 3.6.9)
--     Libraries:                   NO
--     numpy:                       NO (Python3 wrappers can not be generated)
--     install path:                -
-- 
--   Python (for build):            /usr/bin/python2.7

Even though numpy is installed.

This is a fresh install of jetpack 4.5 following official nvidia install instructions on this forum. There should be nothing wrong… please help

Also, since installing, I cannot import TensorFlow; it exits saying core dumped, illegal operation… EDIT: fixed by re-installing numpy again…

Still don’t know why OpenCV won’t install on Python 3.6 for me…

EDIT: FIXED! For anyone else having trouble this method worked for me: https://pysource.com/2019/08/26/install-opencv-4-1-on-nvidia-jetson-nano/

Just to report that I think I have successfully used GitHub - mdegans/nano_build_opencv: Build OpenCV on Nvidia Jetson Nano with JetPack 4.5 and OpenCV 4.5.2.
JC

Extremely sorry for the delay, and thank you so much. I was able to leverage the GPU by using this GStreamer pipeline. But I did face some issues, like the following:
1. I was unable to stream video from the Hikvision IP cam using the GStreamer pipeline. I got the stream when I tried with a different IP cam.
2. Even though video streaming was successful with the latter IP cam, the terminal displayed some errors, given below:

Opening in BLOCKING MODE
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 

(python3:15395): GStreamer-CRITICAL **: 17:42:26.320: gst_mini_object_unref: assertion 'mini_object != NULL' failed
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
[ WARN:0] global /tmp/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp (1044) open OpenCV | GStreamer warning: unable to query duration of stream
[ WARN:0] global /tmp/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp (1081) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=0, duration=-1

Although it’s working, I’m curious to solve these errors. It would be great if you could let me know your thoughts, and also suggest some resources to understand more about the topic. I did try to learn about GStreamer and got the basic idea (a USB cam worked). I’d like to understand more about pipelines with an RTSP source on Jetson.

These are normal warnings. A live source has no duration.

Not sure what the cause is here. You may post your command or code so that it can be reproduced and investigated.