OpenCV build script

I’m seeing it is built with ffmpeg. Can you run this and see what you get?

 $ opencv_version --verbose | grep FFMPEG
    FFMPEG:                      YES

Unless you have Nvidia’s version of ffmpeg installed and built OpenCV against it, I’d recommend using GStreamer pipelines to decode instead.

Converting to BGR can be a problem, however, since Nvidia’s nvvidconv only supports BGRA and VideoCapture does not. You can try something like uridecodebin ... ! ... format=I420 ! nvvidconv ! appsink with VideoCapture instead, but then you’re stuck converting YUV to BGR in software, which is painfully slow.
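As an untested sketch of that idea (the placeholders and element order are assumptions, and the elided "..." parts are camera-dependent, so adapt them to your setup):

```python
# Hypothetical sketch of the uridecodebin-to-appsink approach described above.
# <your_cam_IP> and <path> are placeholders, not real values.
pipeline = (
    "uridecodebin uri=rtsp://<your_cam_IP>/<path> "
    "! nvvidconv "
    "! video/x-raw, format=I420 "
    "! appsink"
)
# cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
# Each I420 frame would then need a software conversion on the CPU, e.g.:
# bgr = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)
print(pipeline)
```

The CPU-side cvtColor at the end is exactly the software YUV-to-BGR step that makes this route slow.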

Depending on what you want to do, you might be better off using DeepStream instead, with OpenCV inside gstreamer elements if necessary.

Mixing OpenCV with Nvidia’s stuff is a recipe for performance disaster, since you have to keep copying between CPU and GPU to get anything done. At least if you’re doing it in one of GStreamer’s worker threads, it doesn’t necessarily have to block the video pipeline.

Using OpenCV to convert from I420 to BGR after OpenCV video capture may often be slower than using nvvidconv to produce BGRx format and a CPU videoconvert to strip the extra fourth byte and produce BGR. A queue between videoconvert and appsink may also help performance.
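A sketch of that element chain (untested here; the placeholders in angle brackets and the decoder choice are assumptions):

```python
# Hypothetical sketch: nvvidconv outputs BGRx, then a CPU videoconvert strips
# the fourth byte to produce BGR, with a queue before appsink as suggested.
pipeline = (
    "rtspsrc location=rtsp://<your_cam_IP>:<port>/<path> latency=500 "
    "! rtph264depay ! h264parse ! omxh264dec "
    "! nvvidconv ! video/x-raw, format=BGRx "
    "! videoconvert ! video/x-raw, format=BGR "
    "! queue ! appsink"
)
# cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
print(pipeline)
```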

However, your case is different, since you are sending processed frames to an RTSP server. You should explain your use case in more detail, but if your processing is some kind of filtering, you may also try the nvivafilter plugin and use it in your test-launch pipeline.

[EDIT: Not sure I’ve correctly understood your case. If you want to read an RTSP feed from an IP camera, uridecodebin as mentioned by @mdegans would probably be a good choice. Depending on your processing, DeepStream might be a good choice. Please tell us more.]

Hi,
Thanks for the response, you two. Sorry, I’m a beginner at all this; pardon me if I say something dumb.

Well, I’ll explain my use case here and look thoroughly into the solutions you provided above.
To put it simply, I’m developing a GUI in Python with PyQt that shows and captures (on a button press) the video feed from an IP camera (I’m using a Hikvision IP cam) and also controls its PTZ.

So far, what I was able to do was stream the video feed by pasting the RTSP URL of my IP cam directly into the VideoCapture(-url-) function and getting the frames with read() in a while loop. This is somehow working fine, but I’m fairly sure it’s using only the CPU of the Jetson Xavier NX.

My aim is to get the most performance out of the Xavier NX by utilizing the GPU for this purpose. I tried constructing a GStreamer pipeline with rtspsrc..!..appsink (after studying it sparsely), but I couldn’t get it working with OpenCV’s VideoCapture(). That’s why I tried updating OpenCV from the default 4.1.1 to 4.4 with CUDA support, to somehow use the GPU for my RTSP stream via VideoCapture().

Could you suggest somewhere to start researching? Again, thanks a lot for your time.
Kindly let me know if any more information is required.

P.S.: I’m using QThreads to separate the read() section from the GUI thread.
Also, I want to do YOLO object detection on this feed from the IP cam.
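For context, the read()-in-a-worker pattern described above can be sketched generically like this (plain threading and a stub frame source stand in for QThread and cv2.VideoCapture, neither of which is used here):

```python
import queue
import threading

def capture_loop(source, frames, stop):
    """Read frames in a worker thread so the GUI thread never blocks on I/O."""
    while not stop.is_set():
        ok, frame = source.read()
        if not ok:
            break
        # Drop the oldest frame if the consumer can't keep up, to bound latency.
        if frames.full():
            try:
                frames.get_nowait()
            except queue.Empty:
                pass
        frames.put(frame)

# Stub standing in for cv2.VideoCapture, for illustration only.
class FakeCapture:
    def __init__(self, n_frames):
        self.n_frames = n_frames
    def read(self):
        if self.n_frames == 0:
            return False, None
        self.n_frames -= 1
        return True, "frame"

stop = threading.Event()
frames = queue.Queue(maxsize=4)
worker = threading.Thread(target=capture_loop, args=(FakeCapture(3), frames, stop))
worker.start()
worker.join()
print(frames.qsize())  # → 3; a GUI thread would poll this queue for display
```

With PyQt the same loop would live in a QThread and hand frames to the GUI via a signal instead of a queue.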

Decoding an H264 stream on Jetson wouldn’t use the GPU. The standard ffmpeg version would use the CPU only. NVIDIA’s ffmpeg version, or jocover’s version with nvmpi support, would be able to use the HW decoder (NVDEC), but you would have to rebuild OpenCV to support such an ffmpeg version.

The simplest way is to use a gstreamer pipeline leveraging this HW decoder for opencv capture:

cap = cv2.VideoCapture('rtspsrc location=rtsp://<your_cam_IP>:<port>/<path> latency=500 ! application/x-rtp, media=video, encoding-name=H264 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink', cv2.CAP_GSTREAMER)

# If your IP cam requires login/password:
cap = cv2.VideoCapture('rtspsrc location=rtsp://<user>:<password>@<your_cam_IP>:<port>/<path> latency=500 ! application/x-rtp, media=video, encoding-name=H264 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink', cv2.CAP_GSTREAMER)

OMX plugins are being deprecated. You may try nvv4l2decoder instead of omxh264dec.
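So the only change to the capture line above is swapping that one element; a hypothetical sketch (placeholders in angle brackets as before):

```python
# The same capture pipeline as above, with the deprecated OMX decoder
# swapped for nvv4l2decoder. Everything else is unchanged.
pipeline = (
    "rtspsrc location=rtsp://<your_cam_IP>:<port>/<path> latency=500 "
    "! application/x-rtp, media=video, encoding-name=H264 "
    "! rtph264depay ! h264parse "
    "! nvv4l2decoder "
    "! nvvidconv ! video/x-raw, format=BGRx "
    "! videoconvert ! video/x-raw, format=BGR ! appsink"
)
# cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
print("omxh264dec" in pipeline)  # → False
```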

Hey, I just built it using your script. I am using JetPack 4.5, and it installed into Python 2.7 instead of 3.6. There wasn’t even a 2.7 before I ran your script! JetPack 4.5 only has 3.6, so why does it now have 2.7, and why can I only import cv2 in 2.7?

EDIT: I tried to install following these instructions: https://pythops.com/post/compile-deeplearning-libraries-for-jetson-nano

And I notice on install it says this:

 Python 3:
--     Interpreter:                 /usr/bin/python3 (ver 3.6.9)
--     Libraries:                   NO
--     numpy:                       NO (Python3 wrappers can not be generated)
--     install path:                -
-- 
--   Python (for build):            /usr/bin/python2.7

Even though numpy is installed.

This is a fresh install of jetpack 4.5 following official nvidia install instructions on this forum. There should be nothing wrong… please help

Also, since installing, I cannot import tensorflow; it exits saying “core dumped: illegal operation”… EDIT: fixed by re-installing numpy…

I still don’t know why OpenCV won’t install for Python 3.6 for me…

EDIT: FIXED! For anyone else having trouble this method worked for me: https://pysource.com/2019/08/26/install-opencv-4-1-on-nvidia-jetson-nano/

Just to report that I think I have successfully used GitHub - mdegans/nano_build_opencv (Build OpenCV on Nvidia Jetson Nano) with JetPack 4.5 and OpenCV 4.5.2.
JC

Extremely sorry for the delay, and thank you so much. I was able to leverage the GPU by using this GStreamer pipeline. But I did face some issues, like the following:
1. I was unable to stream video from the Hikvision IP cam using the GStreamer pipeline. I got the stream when I tried with a different IP cam.
2. Even though the video streaming was successful with the latter IP cam, the terminal displayed some errors, given below:

Opening in BLOCKING MODE
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 

(python3:15395): GStreamer-CRITICAL **: 17:42:26.320: gst_mini_object_unref: assertion 'mini_object != NULL' failed
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
[ WARN:0] global /tmp/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp (1044) open OpenCV | GStreamer warning: unable to query duration of stream
[ WARN:0] global /tmp/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp (1081) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=0, duration=-1

Although it’s working, I’m curious to resolve these errors. It would be great if you could let me know your thoughts and also suggest some resources to understand more about the topic. I did try to learn about GStreamer and got the basic idea (a USB cam worked). I’d like to understand more about pipelines with an RTSP source on Jetson.

These are normal warnings. A live source has no duration.

Not sure what the cause is here. Please post your command or code so that it can be reproduced and investigated.

Hi @mdegans, I have posted an issue on github (Build OpenCV with CUDA on Jetson AGX Xavier · Issue #65 · mdegans/nano_build_opencv · GitHub) and would be very thankful for your advice! Thank you :)

Hello @mdegans , thank you for your script.
I got a Jetson Xavier (JetPack 4.6) from a colleague who had already tried to install OpenCV with CUDA support.
I used your script, and it ran without problems, but CUDA support was available in Python 2.7 only; Python 3.6 always used my colleague’s OpenCV version.
After several attempts, I finally removed all OpenCV/cv2 files in /usr/ in order to get rid of all OpenCV (removing with apt did not help). But even after that, python3 did not get any bindings.
After that, I tried your script in a venv, but with the same result.

i put the logs of my tries here: opencv-cuda-installation-logs

what information do i have to look for, to get python3 bindings?

Getting version '4.4.0' of OpenCV
cmake flags: 
        
        -D BUILD_EXAMPLES=OFF
        -D BUILD_opencv_python2=ON
        -D BUILD_opencv_python3=ON
        -D CMAKE_BUILD_TYPE=RELEASE
        -D CMAKE_INSTALL_PREFIX=/usr/local
        -D CUDA_ARCH_BIN=5.3,6.2,7.2
        -D CUDA_ARCH_PTX=
        -D CUDA_FAST_MATH=ON
        -D CUDNN_VERSION='8.2'
        -D EIGEN_INCLUDE_PATH=/usr/include/eigen3 
        -D ENABLE_NEON=ON
        -D OPENCV_DNN_CUDA=ON
        -D OPENCV_ENABLE_NONFREE=ON
        -D OPENCV_EXTRA_MODULES_PATH=/tmp/build_opencv/opencv_contrib/modules
        -D OPENCV_GENERATE_PKGCONFIG=ON
        -D WITH_CUBLAS=ON
        -D WITH_CUDA=ON
        -D WITH_CUDNN=ON
        -D WITH_GSTREAMER=ON
        -D WITH_LIBV4L=ON
        -D WITH_OPENGL=ON
        -D BUILD_PERF_TESTS=OFF
        -D BUILD_TESTS=OFF

results in

-- General configuration for OpenCV 4.4.0 =====================================
--   Version control:               4.4.0
-- 
[...]
--   OpenCV modules:
--     To be built:                 alphamat aruco bgsegm bioinspired calib3d ccalib core cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev datasets dnn dnn_objdetect dnn_superres dpm face features2d flann freetype fuzzy gapi hdf hfs highgui img_hash imgcodecs imgproc intensity_transform line_descriptor ml objdetect optflow phase_unwrapping photo plot python2 quality rapid reg rgbd saliency sfm shape stereo stitching structured_light superres surface_matching text tracking video videoio videostab xfeatures2d ximgproc xobjdetect xphoto
--     Disabled:                    world
--     Disabled by dependency:      -
--     Unavailable:                 cnn_3dobj cvv java js julia matlab ovis python3 ts viz
[...]
--   NVIDIA CUDA:                   YES (ver 10.2, CUFFT CUBLAS FAST_MATH)
--     NVIDIA GPU arch:             53 62 72
--     NVIDIA PTX archs:
[...]
--   Python 2:
--     Interpreter:                 /usr/bin/python2.7 (ver 2.7.17)
--     Libraries:                   /usr/lib/aarch64-linux-gnu/libpython2.7.so (ver 2.7.17)
--     numpy:                       /usr/lib/python2.7/dist-packages/numpy/core/include (ver 1.13.3)
--     install path:                lib/python2.7/dist-packages/cv2/python-2.7
-- 
--   Python 3:
--     Interpreter:                 /home/vc-forecr/ximea-opencv/venv01/bin/python3 (ver 3.6.9)
--     Libraries:                   NO
--     numpy:                       NO (Python3 wrappers can not be generated)
--     install path:                -
-- 
--   Python (for build):            /usr/bin/python2.7
[...]

Hello,

I tried using this script to update my Nano (previously set up with JetPack 4.6.1) from OpenCV 4.1.1 (which comes with JetPack) to 4.5.5. I ran the script “as is” but entered 4.5.5 at the command line when running it. The script ran for 6 hours and seemed to complete, but when I rebooted the Nano and checked the version, it still said 4.1.1. I tried to find the log file (I remember answering “no” to the question “do you want to erase the temp files”), but it is not in the tmp folder of the Nano. Any ideas as to what might be wrong? Thank you.

OK, another piece of info, and I see it is similar to what someone else said earlier…
If I check the OpenCV version with Python 2.7, I get 4.5.5. If I check it with Python 3.6.9, I get 4.1.1.

In my case, I want to develop in C++ rather than Python, so how do I know which OpenCV version will be available to me? It would be nice, though, if Python 3.6.9 referred to the same OpenCV version as Python 2.7, so any suggestion on how to fix that is still also welcome.

Thank you

Hello @mdegans
I have another small update. I just installed VS Code on the same Nano, opened a new folder, and configured the project as C++ using the GCC 7.5.5 compiler. When I did that, messages in the VS Code window said:
Found CUDA (version “10.2”)
Found OpenCV (version “4.5.5”)

This info seems to conflict with what I got earlier using cv::CV_VERSION (that way, I got 4.1.1 as the answer). Other than adding VS Code, I haven’t changed the Nano. How can I tell which version of OpenCV will really be used?

Thanks

Looks like you ran the script from within a venv. What happens is that OpenCV’s cmake files run some python to find the package path, and if you have a venv active, it’ll use that path for installation.

You may be able to just copy the cv2 python package (folder) from your venv to where you want it.

You’ll want to check the output of /usr/local/bin/opencv_version (the script’s OpenCV) and /usr/bin/opencv_version (preinstalled) and your Python paths (sys.path, which is just a list) to figure out what’s going on. Python will import the first cv2 it finds in sys.path, so the easiest way around that could just be to create a venv and copy the cv2 folder where you want it in the venv.

Modifying anything systemwide is likely to have side-effects, but if you need a specific cv2, you can do something like this as a hack.

import sys
import os

# the parent folder of `cv2`, not `/bla/bla/cv2` itself
CV2_MODULE_PATH = "/path/_containing_/desired/cv2/folder/" 

if os.path.isdir(CV2_MODULE_PATH):
    # prepend CV2_MODULE_PATH so it is loaded first
    sys.path.insert(0, CV2_MODULE_PATH)
else:
    sys.stderr.write(f"`cv2` folder not found in {CV2_MODULE_PATH}.\n")
    sys.exit(1)

import cv2

That is typed in the browser so there is possibly a typo or something, but you get the idea.

Dear mdegans, thanks for your reply. Sorry that I did not post an update. In the meantime, I reset the whole filesystem to factory settings, created a fresh python3 venv, and ran your script therein with success. python3 in the venv got its bindings as intended.

Thank you for your help. I ran the opencv_version script in each of the two locations (/usr/local/bin and /usr/bin) and read “4.5.5” in each one, so at this point I am just going to assume I am OK for C++ and continue on. I like your suggestion/example about checking the path in Python and forcing it to be the one I want if necessary; I will keep that idea in my back pocket.