Ubuntu GStreamer warning: Error opening bin: no element "nvvideoconvert"

I’d like to ingest an RTSP video stream and “leave” the frames in GPU memory, ready for TensorRT inference. I have Ubuntu Server 18.04 with CUDA 11.1, driver 455, and TensorRT 7.2.1.1.

Cuda compilation tools, release 11.1, V11.1.105
Build cuda_11.1.TC455_06.29190527_0

I set up GStreamer on Ubuntu Server 18.04 with the following official command (here):

sudo apt-get install libgstreamer1.0-0 gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-doc gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-qt5 gstreamer1.0-pulseaudio

But the following pipeline (I believe the demo stream is publicly accessible)

gst-launch-1.0 rtspsrc location=rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov latency=10 num-buffers=256 ! decodebin !  nvvideoconvert ! video/x-raw\(memory:NVMM\),format=BGRx ! fakesink name=s

gives me the error:

GStreamer warning: Error opening bin: no element "nvvideoconvert"

Also, cv2.getBuildInformation() reports this:

  Video I/O:
    DC1394:                      YES (2.2.5)
    FFMPEG:                      YES
      avcodec:                   YES (57.107.100)
      avformat:                  YES (57.83.100)
      avutil:                    YES (55.78.100)
      swscale:                   YES (4.8.100)
      avresample:                YES (3.7.0)
    GStreamer:                   YES (1.14.5)
    v4l/v4l2:                    YES (linux/videodev2.h)

Do I need DeepStream? If not, how can I solve this? I don’t think DeepStream supports my current CUDA version.

nvvideoconvert is developed and published by NVIDIA as part of DeepStream, so you must install DeepStream to use nvvideoconvert. And no, the latest DeepStream SDK does not support CUDA 11.1.
Please refer to the following table for the driver and software version dependencies: Quickstart Guide — DeepStream 6.3 Release documentation
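You can confirm for yourself which of these elements your GStreamer installation can see with gst-inspect-1.0. A small loop like the following (element names taken from the DeepStream plugin set; videoconvert is the stock CPU element from gstreamer1.0-plugins-base) reports each one as found or missing:

```shell
# Check whether the relevant elements are registered with GStreamer.
# nvvideoconvert and nvv4l2decoder ship with DeepStream; videoconvert
# is the CPU fallback from gstreamer1.0-plugins-base.
for elem in nvvideoconvert nvv4l2decoder videoconvert; do
  if gst-inspect-1.0 "$elem" >/dev/null 2>&1; then
    echo "$elem: found"
  else
    echo "$elem: MISSING"
  fi
done
```

If DeepStream is installed but the NVIDIA elements still show as missing, make sure its plugin directory is visible to GStreamer (e.g. via GST_PLUGIN_PATH).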

Thank you @Fiona.Chen. Do you know if there is an alternative way (without DeepStream) to ingest an RTSP video and “leave” the frames in GPU memory?
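For example, would something like OpenCV’s cudacodec module work? A rough sketch (untested; it assumes an OpenCV build with CUDA and NVCUVID enabled, which the build info I posted above does not show):

```python
# Sketch: decode an RTSP stream straight into GPU memory via NVDEC using
# OpenCV's cudacodec module. Requires OpenCV built with WITH_CUDA and
# WITH_NVCUVID; the stock pip wheel does not include it.
try:
    import cv2
except ImportError:  # OpenCV not installed at all
    cv2 = None

RTSP_URL = "rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov"

def open_gpu_reader(url):
    """Return a cudacodec VideoReader, or None if this build can't provide one."""
    if cv2 is None or not hasattr(cv2, "cudacodec"):
        return None
    # Frames from nextFrame() are cv2.cuda_GpuMat objects: they stay in
    # device memory and can be handed to TensorRT without a CPU round trip.
    return cv2.cudacodec.createVideoReader(url)

if __name__ == "__main__":
    reader = open_gpu_reader(RTSP_URL)
    if reader is None:
        print("cudacodec not available in this OpenCV build")
    else:
        ok, gpu_frame = reader.nextFrame()
        print("decoded frame on GPU:", ok)
```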

DeepStream can do the job. We have a sample app that shows how to “ingest an RTSP video and leave it in GPU memory, ready for TensorRT inference”: DeepStream Reference Application - deepstream-app — DeepStream 6.1.1 Release documentation
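Once a matching DeepStream release is installed, a pipeline along these lines keeps the decoded frames in NVMM (device) memory end to end. This is a sketch, not the sample app’s exact pipeline: the element names come from the DeepStream plugin set, and the URL is the demo stream from your question:

```shell
# Assumed DeepStream-style pipeline: NVDEC hardware decode (nvv4l2decoder)
# into NVMM buffers, then nvvideoconvert to BGRx while staying on the GPU.
PIPELINE='rtspsrc location=rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov latency=10 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvideoconvert ! video/x-raw(memory:NVMM),format=BGRx ! fakesink'
echo "$PIPELINE"
# Run it with:
#   gst-launch-1.0 $PIPELINE
# (left unquoted so word splitting supplies the individual pipeline tokens)
```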

Please read the whole DeepStream SDK documentation carefully: Welcome to the DeepStream Documentation — DeepStream 6.1.1 Release documentation