I am using the OpenCV library with the VideoCapture class to read an MP4 file for tracking and detection purposes. I have already built OpenCV from source with CUDA enabled. However, the video FPS is horrendously low, which does not make sense given the Jetson TX2's capabilities. Am I doing something wrong? I have tested the same code on my own laptop (i5-8250U) and it runs very smoothly. My CPU and GPU usage is as follows. What am I doing wrong?
Have you tried to run with jetson_clocks?
Yes, I already ran jetson_clocks.sh, but the frames are still displaying slowly.
Could you share the code you are running?
Sorry, the previous attachment formats were not allowed to be sent. I have zipped the files and sent it over.
Vehicle_Count.zip (4.72 KB)
Hi,
If all of your code is based on OpenCV, then I can only suggest measuring the elapsed time of each stage in your code and comparing the results with the desktop version.
I also noticed that the CPU usage on one core is quite heavy; are there any other ways to distribute the load? The TX2 tech sheet does not say much besides clocking and mode switching. I suspect the bottleneck might be the initial video-capture pipeline, so I may have to use GStreamer or something else.
Hi,
The CPU load looks balanced.
But if you want to configure the core assignment manually, you can try the taskset tool.
I think that when you create a VideoCapture from filename.mp4, OpenCV will use FFmpeg for demuxing and decoding.
But the Ubuntu apt version of FFmpeg on Jetson is CPU-only.
The easiest way would be to use a gstreamer pipeline as input:
cv::VideoCapture capVideo("filesrc location=CarsDrivingUnderBridge.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink ", cv::CAP_GSTREAMER);
so that H264 decoding and conversion to BGRx is done by dedicated HW.
Just to clarify, if it is a stream instead of a file, will they also use ffmpeg by default? Is there a way to check or some link that indicates opencv uses ffmpeg by default in video capture? For example, now i am kind of side tracking a little, if on windows i do not have ffmpeg, what will video capture use?
For an RTSP stream on Linux, it may use FFmpeg as well.
I'm not familiar with OpenCV on Windows; I guess it would use Media Foundation, but I'm not sure. It is one of the available backends.
You may use getBackendName() function of VideoCapture for checking such as:
std::cout << "Video Capture opened (backend: " << cap.getBackendName() << ")" << std::endl;
When running this pipeline, I got this error: Error opening bin: no element “nvv412decoder”. Is there a way to get this element? I checked, and my OpenCV build has GStreamer support. I tried googling, but there isn't anything on it. I am a total newbie at GStreamer, sorry.
There is a typo here: change the ‘1’ (one) to ‘l’ (lowercase L), i.e. nvv4l2decoder.
Now that I have used the GStreamer pipeline, the video frames are still taking quite a while to display. The clip I used is 30 fps, but the frame rate from imshow() is definitely not 30 fps.
My guess is that I might have to start doing some kind of multithreading in the code.
The bottleneck may be in cv::imshow(). It is CPU-only and not very efficient on Jetson for large resolutions.
You may try a VideoWriter with a GStreamer pipeline to a video sink such as this one