Sample accelerated OpenCV+encoding code fails to run on the Nano

Hi all, and I hope you can give me some pointers.

What I’m after is simple sample/pilot code that will capture from Argus, run through CUDA OpenCV, and then encode, without shuffling frames between CPU and GPU memory. In other words, hardware capture into GpuMats, running a filter (I need more than that, but if I can get this working I’m probably fine), and then hardware encoding back to a file or network connection.
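
For concreteness, here is a rough sketch of the zero-copy step I mean, pieced together from the Jetson EGLImage/CUDA interop samples. The pitch-linear RGBA layout, the helper’s shape, and the initialized EGLDisplay are my assumptions, not verified working code, and error checking is elided:

// Sketch: map an NVMM buffer's dmabuf fd as an EGLImage, register it with
// CUDA, and view the RGBA plane as a cv::cuda::GpuMat without copying.
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <cuda_runtime.h>
#include <cudaEGL.h>
#include <nvbuf_utils.h>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudafilters.hpp>

void process_nvmm_buffer(EGLDisplay disp, int dmabuf_fd, int width, int height,
                         cv::cuda::Filter &filter) {
    cudaFree(0);  // make sure a CUDA context is current for the driver API

    EGLImageKHR egl_image = NvEGLImageFromFd(disp, dmabuf_fd);
    CUgraphicsResource resource = nullptr;
    cuGraphicsEGLRegisterImage(&resource, egl_image,
                               CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
    CUeglFrame frame;
    cuGraphicsResourceGetMappedEglFrame(&frame, resource, 0, 0);

    // View plane 0 as a GpuMat: no copy, the GpuMat aliases the NVMM pixels.
    cv::cuda::GpuMat img(height, width, CV_8UC4,
                         frame.frame.pPitch[0], frame.pitch);

    cv::cuda::GpuMat tmp;
    filter.apply(img, tmp);   // filter, then write back into the NVMM buffer
    tmp.copyTo(img);

    cudaDeviceSynchronize();  // finish all GPU work before unmapping
    cuGraphicsUnregisterResource(resource);
    NvDestroyEGLImage(disp, egl_image);
}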

I’ve tried pretty much every sample I could find, and they consistently fail at runtime in ways that suggest, somehow, that CUDA isn’t happy with the memory it’s being given from the video output. The errors usually look like:

terminate called after throwing an instance of 'cv::Exception'
  what():  OpenCV(4.5.4) /home/stuart/opencv_contrib/modules/cudafilters/src/cuda/row_filter.hpp:172: error: (-217:Gpu API call) unspecified launch failure in function 'caller'

which does not give me a whole lot of clues. Even tracing the GStreamer pipeline doesn’t show what’s happening. If I skip applying the OpenCV filter, everything completes OK, and creating the filter itself succeeds either way.

Having tried that, and continued searching, I found a completely different sample that seems closer: the opencv_nvgstenc sample.

This time, I am getting assertion failures:

OpenCV(4.5.4) /home/stuart/opencv/modules/videoio/src/cap_gstreamer.cpp:1611: error: (-215:Assertion failed) frameSize.width > 0 && frameSize.height > 0 in function 'open'

And yet, this is with a valid width and height passed in. This is definitely frustrating, as it looks like this sample might do exactly what I need. (Except not really, because even here it’s using Mat rather than GpuMat, which is what I’m after.)
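
For reference, here is the shape of VideoWriter open that this assertion guards; a positive cv::Size has to reach open(), so somewhere my width/height must be getting lost. The pipeline string and 30 fps here are just illustrative:

#include <opencv2/videoio.hpp>
#include <string>

cv::VideoWriter make_writer(int width, int height) {
    std::string pipe =
        "appsrc ! videoconvert ! video/x-raw,format=BGRx ! "
        "nvvidconv ! video/x-raw(memory:NVMM),format=NV12 ! "
        "nvv4l2h264enc bitrate=8000000 ! h264parse ! qtmux ! "
        "filesink location=out.mp4";
    cv::VideoWriter w(pipe, cv::CAP_GSTREAMER, 0 /*fourcc unused*/, 30.0,
                      cv::Size(width, height), /*isColor=*/true);
    CV_Assert(w.isOpened());  // fails if width/height arrived as 0 or negative
    return w;
}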

I’ve attached the samples I’ve been trying to get working.

gst_cv_gpumat.cpp (4.3 KB)
opencv_gst_encoder.cpp (6.1 KB)

I know this is a common use case, because I see a lot of questions asking this in one form or another, but each resolution seems to be slightly different, and I can’t get any of them working (I’m not a CUDA expert). All I want is a proof of concept that OpenCV with CUDA can actually run on GpuMats between an accelerated decoder and an accelerated encoder.

Does anybody have any hints and tips?

Hi,
The encoder does not support RGBA. Please try linking one more nvvidconv to convert to NV12, like:

... ! nvvidconv name=myconv ! video/x-raw(memory:NVMM),format=RGBA ! nvvidconv ! video/x-raw(memory:NVMM),format=NV12 ! nvv4l2h264enc bitrate=8000000 ! ...

You may also try the reference sample:
Nano not using GPU with gstreamer/python. Slow FPS, dropped frames - #8 by DaneLLL
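
For reference, this is roughly how the suggested pipeline can be driven from C++ with gst_parse_launch(); the resolution, num-buffers, and output filename are placeholders:

#include <gst/gst.h>

int main(int argc, char **argv) {
    gst_init(&argc, &argv);
    GError *err = nullptr;
    GstElement *pipeline = gst_parse_launch(
        "nvarguscamerasrc num-buffers=300 ! "
        "video/x-raw(memory:NVMM),width=1920,height=1080,format=NV12 ! "
        "nvvidconv name=myconv ! video/x-raw(memory:NVMM),format=RGBA ! "
        "nvvidconv ! video/x-raw(memory:NVMM),format=NV12 ! "
        "nvv4l2h264enc bitrate=8000000 ! h264parse ! qtmux ! "
        "filesink location=out.mp4", &err);
    if (!pipeline) {
        g_printerr("parse error: %s\n", err->message);
        return 1;
    }
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Wait for EOS (num-buffers makes the source send one) so qtmux can
    // write the MP4 index; tearing down early leaves an unplayable file.
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_EOS | GST_MESSAGE_ERROR));
    if (msg) gst_message_unref(msg);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(bus);
    gst_object_unref(pipeline);
    return 0;
}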

That certainly helps, in that it at least now runs and appears to capture something. What doesn’t yet happen is any useful encoding. I am pretty sure I’m using that sample wrongly as a starting point. As I said, what I need is camera → decoder → OpenCV+CUDA → encoder. The encoder is now running and generating a valid mp4, but one with no non-black pixels!

I did start with exactly that reference sample (and every other sample I could put my paws on), but I did have to extend it to get the encoding part happening. And that’s now where I am stuck.

The all-black pixels may make sense, as I guess NvDestroyEGLImage throws everything away a few lines later. Or maybe it’s in the nature of GStreamer probes. Or maybe it’s something else I missed.
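
For what it’s worth, this is the probe structure I’m working from (the names are mine, not from any official sample). My understanding is that the GpuMat is only a view of the NVMM buffer, so an in-place write ought to survive NvDestroyEGLImage, which only drops the mapping, and the modified buffer should flow on to the encoder when the probe returns GST_PAD_PROBE_OK. Clearly something isn’t reaching the encoder, though:

#include <EGL/egl.h>
#include <gst/gst.h>
#include <nvbuf_utils.h>
#include <opencv2/cudafilters.hpp>

// From the sketch in my first post: wraps the fd in a GpuMat and filters
// it in place inside the NVMM buffer.
void process_nvmm_buffer(EGLDisplay disp, int dmabuf_fd, int width, int height,
                         cv::cuda::Filter &filter);

static GstPadProbeReturn on_conv_buffer(GstPad *pad, GstPadProbeInfo *info,
                                        gpointer /*user_data*/) {
    GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER(info);
    GstMapInfo map;
    if (gst_buffer_map(buf, &map, GST_MAP_READWRITE)) {
        int fd = -1;
        ExtractFdFromNvBuffer(map.data, &fd);  // dmabuf fd of this NVMM frame
        // process_nvmm_buffer(disp, fd, width, height, *filter);
        gst_buffer_unmap(buf, &map);
    }
    return GST_PAD_PROBE_OK;  // pass the (modified) buffer on to the encoder
}

// Attached to the src pad of the RGBA nvvidconv ("myconv" above):
//   GstPad *pad = gst_element_get_static_pad(myconv, "src");
//   gst_pad_add_probe(pad, GST_PAD_PROBE_TYPE_BUFFER, on_conv_buffer,
//                     NULL, NULL);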

The other sample should do the bridge back to encoding, but as I said, that fails too, in a different way, so I’m still stalled on that one, even though it does perhaps outline an alternative approach within OpenCV.

Hi,
Could you try a Gaussian filter? The Sobel filter makes the whole frame dark, which is probably why you see a completely dark scene. A Gaussian filter would blur the frames.
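
For example, a small sketch contrasting the two (the kernel sizes and CV_8UC4 frames are illustrative assumptions):

#include <opencv2/cudafilters.hpp>

void demo_filters(const cv::cuda::GpuMat &src, cv::cuda::GpuMat &dst) {
    // Gaussian blur: output keeps the frame's brightness, easy to verify.
    auto gauss = cv::cuda::createGaussianFilter(CV_8UC4, CV_8UC4,
                                                cv::Size(7, 7), 1.5);
    // Sobel: output is near zero away from edges, so frames look black.
    auto sobel = cv::cuda::createSobelFilter(CV_8UC4, CV_8UC4, 1, 0, 3);

    gauss->apply(src, dst);  // swap in sobel->apply(src, dst) to compare
}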

I think that’s it. Major progress: I have basic working GpuMat OpenCV processing integrated into video, and it is capable of blurring a video.

