Window playback using nvvideosink

Hi,

I am using the following pipeline to playback video from the on board camera on a Jetson TX1:

gst-launch-1.0 nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvoverlaysink -e

However, this pipeline uses the nvoverlaysink. I need to have a window playing back the camera stream. I tried to use nvvideosink but there are no examples on how to use it within a pipeline. I have three questions:

  1. Is nvvideosink the right element to use for window playback?
  2. If not, which one should I try instead?
  3. If yes, how do I use it? I tried the following pipeline, but I get errors which basically mean that I am not setting certain properties correctly.

gst-launch-1.0 nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvvideosink
Setting pipeline to PAUSED …
nvvideosink: display is not set display=(nil) and/or nvvideosink: stream is not set stream=(nil) or ERROR: Pipeline doesn't want to pause.
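My reading of that error is that display and stream are pointer-typed properties (EGL handles) that must be supplied by an application via g_object_set(), which gst-launch cannot do. A toy sketch of that behaviour (NvVideoSinkModel is purely illustrative, not a real API):

```python
# Toy model only -- NvVideoSinkModel is NOT a real GStreamer API; it just
# mimics how a sink with unset pointer properties refuses to change state.

class NvVideoSinkModel:
    def __init__(self):
        # In the real element these would be EGLDisplay/EGLStream handles,
        # settable only from application code, never from gst-launch.
        self.display = None
        self.stream = None

    def set_state_paused(self) -> str:
        if self.display is None:
            return "nvvideosink: display is not set display=(nil)"
        if self.stream is None:
            return "nvvideosink: stream is not set stream=(nil)"
        return "PAUSED"

print(NvVideoSinkModel().set_state_paused())
# prints the same "display is not set" message as the real pipeline above
```

If that reading is right, the takeaway would be that nvvideosink is meant for applications that set up an EGLStream themselves, not for gst-launch.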

Thanks in advance

Looking at the multimedia guide (http://developer.download.nvidia.com/embedded/L4T/r23_Release_v1.0/L4T_Tegra_X1_Multimedia_User_Guide.pdf), I think you’re supposed to use "nveglglessink" for windowed playback; its usage looks the same, down to the -e flag at the end.

So try:

gst-launch-1.0 nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nveglglessink -e

That is an older version of the guide, but I think the same command would apply to the new version of L4T.

Hi,

Have you actually tried that successfully before suggesting it? I too have read the manual, but when I try to execute that pipeline I get the following error:

WARNING: erroneous pipeline: could not link nvvconv0 to eglglessink0

If I remove the nvvidconv element I get the following error:

WARNING: erroneous pipeline: could not link nvcamerasrc0 to eglglessink0

I copy-pasted your answer just in case, but it does not work. I would never post a question before reading the manual, so please try to test things before suggesting them.

The reason I asked for the nvvideosink was in case it takes advantage of the NVMM memory.

Update:

What seems to be working is the following pipeline:

gst-launch-1.0 nvcamerasrc fpsRange="30.0 30.0" intent=3 ! nvvidconv flip-method=6 ! 'video/x-raw, width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)30/1' ! nveglglessink -e

However, this pipeline does not use NVMM memory. If you run gst-inspect-1.0 on nveglglessink you can see that its sink pad cannot accept video/x-raw(memory:NVMM), whereas nvvideosink can. The above pipeline does not use any special memory, hence it works.
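To make the caps mismatch concrete, here is a toy model (plain Python, not GStreamer) of how a pad link fails when the memory features of the two caps differ. can_link is a hypothetical helper; real caps negotiation is of course much richer than this:

```python
# Toy model of GStreamer caps negotiation, reduced to memory features only.
# A link requires the caps features of the two pads to intersect.

def can_link(src_caps: str, sink_caps: str) -> bool:
    """Return True if the two caps strings share the same memory feature."""
    def features(caps: str) -> str:
        # "video/x-raw(memory:NVMM)" -> "memory:NVMM";
        # plain "video/x-raw" implies ordinary system memory.
        if "(" in caps:
            return caps[caps.index("(") + 1 : caps.index(")")]
        return "memory:SystemMemory"
    return features(src_caps) == features(sink_caps)

# nvvidconv NVMM output vs. nveglglessink's plain sink pad: no link.
print(can_link("video/x-raw(memory:NVMM)", "video/x-raw"))               # False
# nvvidconv NVMM output vs. nvvideosink, which accepts NVMM: link works.
print(can_link("video/x-raw(memory:NVMM)", "video/x-raw(memory:NVMM)"))  # True
```

That first False is, in miniature, the "could not link nvvconv0 to eglglessink0" error from earlier in the thread.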

I would appreciate if somebody from NVIDIA could contribute any example demonstrating the use of nvvideosink.

Hello,
you can try this pipeline:

DISPLAY=:0 gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), format=(string)I420, framerate=(fraction)30/1' ! nvegltransform ! nveglglessink -e

br
ChenJian

Hi jachen,

I tried that pipeline and it works, but it does what I suspected: it makes heavy use of the GPU. If you benchmark your pipeline against one that uses nvoverlaysink, you will notice that average GPU usage goes up from 0% to almost 40%, while AVP usage drops significantly. I have better uses for the GPU, hence the question about nvvideosink. Thanks for the help, much appreciated.

Still the question is open. Has anybody managed to get the nvvideosink to work? We would appreciate some input from NVIDIA.

Thanks

Hello piperak, not sure if you want to hear from me again, but perhaps the increase in GPU usage is because of interaction with the window manager? I’d be curious if there is a difference between running the pipeline in the Ubuntu GUI (which, I believe, is GPU-accelerated) and running it in a bare X session.

Sorry, as before, I don’t have time to test this out right now.

Hi,

Can you let me know how to enable window playback on the Jetson Nano, since nvoverlaysink is the only option there? I am currently using DeepStream SDK 5 with Python.

I am looking for a solution to use in my Python application, with a format similar to the sample application at https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps/deepstream-test3. I am using a Jetson Nano on JetPack 4.4 with DeepStream 5, where the nveglglessink command throws an error. It is recommended to use nvoverlaysink in my environment.
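As the earlier replies in this thread suggest, nveglglessink on Jetson seems to need nvegltransform in front of it (it cannot take NVMM buffers directly). A minimal sketch of assembling the display tail of a pipeline-description string under that assumption; the helper and the nvarguscamerasrc example line are illustrations, not verified against DeepStream 5:

```python
# Sketch: assemble the display tail of a gst-launch-style pipeline string.
# Assumption (from this thread): on Jetson, nveglglessink must be fed
# through nvegltransform; on dGPU the transform would be unnecessary.

def display_tail(on_jetson: bool) -> str:
    return "nvegltransform ! nveglglessink" if on_jetson else "nveglglessink"

# Hypothetical camera pipeline for a Nano; element names are assumptions.
pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=(int)1280, height=(int)720 ! "
    + display_tail(on_jetson=True)
)
print(pipeline)
```

The deepstream-test3 sample linked above does something similar in spirit, choosing its sink elements based on whether it is running on a Jetson (aarch64) or a dGPU platform.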