Unable to use fakesink with deepstream-test3-app

• Hardware Platform: Jetson Xavier NX
• DeepStream Version: 5.0
• JetPack Version: 4.4
• TensorRT Version: 7.1

I want to run DeepStream on my Jetson in a headless configuration. I don’t want to render any output; I only want to use probes to count objects. I am working from the test3 app, since it is most similar to what I eventually want to do. I have tried changing this code

sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");

to

sink = gst_element_factory_make ("fakesink", "fakesink");
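For reference, the surrounding code in the unmodified sample looks roughly like this on Jetson, where the OSD output is fed through an EGL transform before the sink (a simplified sketch, not the sample verbatim):

#ifdef PLATFORM_TEGRA
  /* Tegra-only: converts the OSD output for the EGL display sink,
   * i.e. nvosd -> nvegltransform -> nveglglessink */
  transform = gst_element_factory_make ("nvegltransform", "nvegl-transform");
#endif
  sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");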

When I compile and run the app with an RTSP URL, it seems to do a single inference and then quit:

Decodebin child added: source
Running...
Decodebin child added: decodebin0
Decodebin child added: rtph264depay0
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
In cb_newpad
Frame Number = 0 Number of objects = 4 Vehicle Count = 0 Person Count = 4
0:00:07.340585688  2645   0x55b3a71000 WARN                 nvinfer gstnvinfer.cpp:1946:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Internal data stream error.
0:00:07.340669977  2645   0x55b3a71000 WARN                 nvinfer gstnvinfer.cpp:1946:gst_nvinfer_output_loop:<primary-nvinference-engine> error: streaming stopped, reason error (-5)
ERROR from element primary-nvinference-engine: Internal data stream error.
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1946): gst_nvinfer_output_loop (): /GstPipeline:dstest3-pipeline/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason error (-5)
Returned, stopping playback
Deleting pipeline

I have also tried unsetting DISPLAY based on other suggestions in this forum, with the same results.

How can I modify the test3 app to not output to the display?

Hi,
Please try the following change:

  /* Finally render the osd output */
#ifdef PLATFORM_TEGRA
  transform = gst_element_factory_make ("queue", "nvegl-transform");
#endif
  sink = gst_element_factory_make ("fakesink", "nvvideo-renderer");
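With this change the rest of the Tegra-specific code in the sample can stay as it is, since the queue is only a pass-through stand-in for nvegltransform and fakesink simply discards whatever reaches it. Roughly (a simplified sketch of the linking, assuming the element variables from the sample):

#ifdef PLATFORM_TEGRA
  /* nvosd -> queue (pass-through) -> fakesink (drops buffers) */
  if (!gst_element_link_many (nvosd, transform, sink, NULL)) {
    g_printerr ("Elements could not be linked. Exiting.\n");
    return -1;
  }
#endif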

This doesn’t work. It produces the same output, quitting after the first frame.

Hi,
Please try setting sync=false on the sink:
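For example (a minimal sketch, assuming the sink variable from the test3 code, placed right after the sink is created):

  /* Do not synchronize buffers against the pipeline clock; the sink
   * consumes them as soon as they arrive. */
  g_object_set (G_OBJECT (sink), "sync", FALSE, NULL);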

Hi Dane, thank you for your reply.

Setting sync=false also did not work. I’m actually unsure why it would have any effect in my scenario, since sync is documented as “Indicates how fast the stream is to be rendered”.

To state my problem again: I would like to run deepstream-test3-app with no display output, just the console output to print the objects detected.

Are you or anyone else able to reproduce this problem? I can trivially reproduce it using the deepstream-test3-app with no modifications whatsoever besides changing “nveglglessink” to “fakesink”.

Let me know. Thanks!

Hi,
We don’t observe the issue with the public URI:

deepstream-test3$ ./deepstream-test3-app rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov

There might be an issue with your source stream.

Thank you Dane.

I still observe the same error with this public RTSP stream. Have you tried to reproduce it on the same platform as mine, Xavier NX with JetPack 4.4 and DeepStream 5.0?

Can you or anyone explain what reason error (-5) means?
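(As far as I can tell, -5 is the numeric value of GST_FLOW_ERROR in GStreamer’s GstFlowReturn, i.e. the generic fatal flow return, which doesn’t say much about the actual cause. A quick standalone check, assuming a standard GStreamer 1.x install:)

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);
  /* Prints "error": -5 maps to GST_FLOW_ERROR in GstFlowReturn. */
  g_print ("%s\n", gst_flow_get_name ((GstFlowReturn) -5));
  return 0;
}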

$ ./deepstream-test3-app rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov
Now playing: rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov,
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:09.258838046  8263   0x55b0073810 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 1]: deserialized trt engine from :/home/gabe/ds5forum/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:09.259112185  8263   0x55b0073810 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 1]: Use deserialized engine model: /home/gabe/ds5forum/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:09.272249060  8263   0x55b0073810 INFO                 nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dstest3_pgie_config.txt sucessfully
Decodebin child added: source
Running...
WARNING from element source: Could not read from resource.
Warning: Could not read from resource.
Decodebin child added: decodebin0
Decodebin child added: decodebin1
Decodebin child added: rtph264depay0
Decodebin child added: rtpmp4gdepay0
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: aacparse0

(deepstream-test3-app:8263): GStreamer-WARNING **: 22:57:22.639: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstlibav.so': /usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block
Decodebin child added: nvv4l2decoder0

(deepstream-test3-app:8263): GStreamer-WARNING **: 22:57:22.690: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstlibav.so': /usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block
Opening in BLOCKING MODE 
Decodebin child added: faad0
In cb_newpad
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
In cb_newpad
Frame Number = 0 Number of objects = 0 Vehicle Count = 0 Person Count = 0
0:00:16.309297114  8263   0x55afdd6000 WARN                 nvinfer gstnvinfer.cpp:1946:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Internal data stream error.
0:00:16.309483352  8263   0x55afdd6000 WARN                 nvinfer gstnvinfer.cpp:1946:gst_nvinfer_output_loop:<primary-nvinference-engine> error: streaming stopped, reason error (-5)
ERROR from element primary-nvinference-engine: Internal data stream error.
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1946): gst_nvinfer_output_loop (): /GstPipeline:dstest3-pipeline/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason error (-5)
Returned, stopping playback
Frame Number = 1 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 2 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 3 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Deleting pipeline

Hi,
The error is similar to what you get when running the default sample (with nveglglessink) without setting export DISPLAY=:0 (or :1). Is it possible you are not running the modified and rebuilt test3 app?

We tried this on r32.4.2 with both Xavier and Xavier NX. Both work fine with fakesink.

Hi,
Any follow-ups on this issue?
I am getting the same problem with Xavier NX and DeepStream 5.1.

I get the error when I modify the EGL sink in the deepstream-test3 example to a fakesink.