Raw tensor output

Hi,
I have a Jetson Xavier NX and I am trying to create a simple GStreamer pipeline: PGIE → Tracker → SGIE. The secondary inference is an age classifier that operates on the object detections from the primary inference and the object tracker. I need to read the raw tensor data from the age classifier, but when I enable the parameter "output-tensor-meta=1", the pipeline freezes. It seems that the secondary inference exports tensor metadata six times and then perhaps a buffer fills up, which stalls the whole GStreamer pipeline. My pipeline is based on the deepstream-test2 sample app, which freezes as well. However, when I use a different model, or when I set "output-tensor-meta=0", everything works fine. Unfortunately, I need the raw data from the age classifier and I haven't been able to find a solution to this problem for a couple of days. Do you have any suggestion how to solve this, please?
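For reference, this is roughly how I enable the raw tensor output and hook the probe in my app (a simplified sketch of my code; the element variable and callback name are placeholders, not from the sample):

#include <gst/gst.h>

/* Probe that walks the per-object tensor meta (body omitted here). */
static GstPadProbeReturn
sgie_src_pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer u_data);

static void
enable_sgie_tensor_meta (GstElement *sgie)
{
  /* Ask gst-nvinfer to attach the raw output tensors as NvDsInferTensorMeta
   * (i.e. output-tensor-meta=1). */
  g_object_set (G_OBJECT (sgie), "output-tensor-meta", TRUE, NULL);

  /* Read the meta from a buffer probe on the SGIE src pad. */
  GstPad *src_pad = gst_element_get_static_pad (sgie, "src");
  gst_pad_add_probe (src_pad, GST_PAD_PROBE_TYPE_BUFFER,
      sgie_src_pad_buffer_probe, NULL, NULL);
  gst_object_unref (src_pad);
}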

Best Regards,
Daniel


Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)

• Hardware Platform (Jetson / GPU)
Jetson Xavier NX Developer Kit

• DeepStream Version
5.0

• JetPack Version (valid for Jetson only)
Jetpack 4.4 [L4T 32.4.3]

• TensorRT Version
TensorRT: 7.1.3.0
CUDA: 10.2.89

• NVIDIA GPU Driver Version (valid for GPU only)
32.4.3

The secondary inference model is in .onnx format.

Any error log? You can “export GST_DEBUG=3” before you run the app.

Hi,
thank you for the hint. I didn't know about the GST_DEBUG parameter. There are some errors now:

Using winsys: x11 

(deepstream-demographics-app:28978): GLib-CRITICAL **: 11:48:59.385: g_strrstr: assertion 'haystack != NULL' failed
[WS Protocol] Connect, config file: ../config//ws_proto.json
[WS Protocol] host: 127.0.0.1, port: 4685
[WS Protocol] Connecting to ws://127.0.0.1:4685/ 
[WS Protocol] Connection to server established.
0:00:04.093938591 28978   0x559277e270 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/jetson-analytics/deepstream_demographics/model_age_v3.onnx_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input.1         3x224x224       
1   OUTPUT kFLOAT 662             101             

0:00:04.094210688 28978   0x559277e270 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/sources/jetson-analytics/deepstream_demographics/model_age_v3.onnx_b1_gpu0_fp16.engine
layer size: 2, fullframe: 0
input.1
662
0:00:04.101569187 28978   0x559277e270 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary1-nvinference-engine> [UID 2]: Load new model:sgie1_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
0:00:04.339283427 28978   0x559277e270 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/jetson-analytics/deepstream_demographics/resnet34_peoplenet_pruned.etlt_b4_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 12x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 3x34x60         

0:00:04.339544740 28978   0x559277e270 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/sources/jetson-analytics/deepstream_demographics/resnet34_peoplenet_pruned.etlt_b4_gpu0_fp16.engine
layer size: 3, fullframe: 1
input_1
output_bbox/BiasAdd
output_cov/Sigmoid
0:00:04.343455863 28978   0x559277e270 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:pgie_config.txt sucessfully
0:00:04.344737309 28978   0x559277e270 WARN                 basesrc gstbasesrc.c:3583:gst_base_src_start_complete:<source> pad not activated yet
Decodebin child added: source
Decodebin child added: decodebin0
0:00:04.346355908 28978   0x559277e270 WARN                 basesrc gstbasesrc.c:3583:gst_base_src_start_complete:<source> pad not activated yet
Running...
Decodebin child added: qtdemux0
0:00:04.365786336 28978   0x7f080760f0 WARN                 qtdemux qtdemux.c:3031:qtdemux_parse_trex:<qtdemux0> failed to find fragment defaults for stream 1
Decodebin child added: multiqueue0
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
Opening in BLOCKING MODE 
0:00:04.392379261 28978   0x7f0c02e630 WARN                    v4l2 gstv4l2object.c:4430:gst_v4l2_object_probe_caps:<nvv4l2decoder0:src> Failed to probe pixel aspect ratio with VIDIOC_CROPCAP: Unknown error -1
0:00:04.392469758 28978   0x7f0c02e630 WARN                    v4l2 gstv4l2object.c:2372:gst_v4l2_object_add_interlace_mode:0x7f00042e90 Failed to determine interlace mode
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
0:00:04.500281818 28978   0x7f0c02e630 WARN                    v4l2 gstv4l2object.c:4430:gst_v4l2_object_probe_caps:<nvv4l2decoder0:src> Failed to probe pixel aspect ratio with VIDIOC_CROPCAP: Unknown error -1
0:00:04.500405786 28978   0x7f0c02e630 WARN                    v4l2 gstv4l2object.c:2372:gst_v4l2_object_add_interlace_mode:0x7f00042e90 Failed to determine interlace mode
In cbNewpad
0:00:04.503948011 28978   0x7f0c02e630 WARN            v4l2videodec gstv4l2videodec.c:1618:gst_v4l2_video_dec_decide_allocation:<nvv4l2decoder0> Duration invalid, not setting latency
0:00:04.504506990 28978   0x7f0c02e630 WARN          v4l2bufferpool gstv4l2bufferpool.c:1057:gst_v4l2_buffer_pool_start:<nvv4l2decoder0:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:04.506961465 28978   0x5591e76ad0 ERROR            egladaption gstegladaptation.c:659:gst_egl_adaptation_choose_config:<nvvideo-renderer> Could not find matching framebuffer config
0:00:04.507030874 28978   0x5591e76ad0 ERROR            egladaption gstegladaptation.c:672:gst_egl_adaptation_choose_config:<nvvideo-renderer> Couldn't choose an usable config
0:00:04.507057402 28978   0x5591e76ad0 ERROR          nveglglessink gsteglglessink.c:2707:gst_eglglessink_configure_caps:<nvvideo-renderer> Couldn't choose EGL config
0:00:04.507079834 28978   0x5591e76ad0 ERROR          nveglglessink gsteglglessink.c:2767:gst_eglglessink_configure_caps:<nvvideo-renderer> Configuring caps failed
0:00:04.507140346 28978   0x55920851e0 ERROR          nveglglessink gsteglglessink.c:2812:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:04.507279035 28978   0x55920851e0 ERROR          nveglglessink gsteglglessink.c:2812:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:04.507332251 28978   0x55920851e0 WARN                GST_PADS gstpad.c:4226:gst_pad_peer_query:<nvegl-transform:src> could not send sticky events
0:00:04.509661030 28978   0x55920851e0 ERROR          nveglglessink gsteglglessink.c:2812:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:04.523840521 28978   0x7f00007ed0 WARN          v4l2bufferpool gstv4l2bufferpool.c:1503:gst_v4l2_buffer_pool_dqbuf:<nvv4l2decoder0:pool:src> Driver should never set v4l2_buffer.field to ANY
0:00:04.524388971 28978   0x55920851e0 ERROR          nveglglessink gsteglglessink.c:2812:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:04.524722189 28978   0x55920851e0 ERROR          nveglglessink gsteglglessink.c:2812:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:04.524831021 28978   0x55920851e0 ERROR          nveglglessink gsteglglessink.c:2812:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:04.524914222 28978   0x55920851e0 ERROR          nveglglessink gsteglglessink.c:2812:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:04.525220687 28978   0x55920851e0 ERROR          nveglglessink gsteglglessink.c:2812:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:04.525417648 28978   0x55920851e0 ERROR          nveglglessink gsteglglessink.c:2812:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:04.535345887 28978   0x55920851e0 ERROR          nveglglessink gsteglglessink.c:2812:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:04.536689477 28978   0x55920851e0 ERROR          nveglglessink gsteglglessink.c:2812:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:04.536809382 28978   0x55920851e0 ERROR          nveglglessink gsteglglessink.c:2812:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
KLT Tracker Init
face: 7
face: 1
face: 7
face: 1
!!!!!!!attach_tensor_output_meta!!!!!!!
!!!!!!!attach_tensor_output_meta!!!!!!!
!!!!!!!attach_tensor_output_meta!!!!!!!
!!!!!!!attach_tensor_output_meta!!!!!!!
!!!!!!!attach_tensor_output_meta!!!!!!!
!!!!!!!attach_tensor_output_meta!!!!!!!

If you only add "output-tensor-meta=1" to the deepstream-test2 sample code, it does not cause such a problem. Either there is a problem with the age classification model, or there is something wrong with your processing of the tensor meta. The information provided is not enough to tell what is wrong.

Please check " Tensor Metadata" part of https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream%20Plugins%20Development%20Guide/deepstream_plugin_details.html#wwpID0E0YFB0HA

Hi,
thank you for the fast response. It seems strange to me too. I have read the documentation and I am still not able to find the problem. Even if I don't read the metadata from a probe, the pipeline freezes; if I disable output-tensor-meta, it works fine. Maybe I will run some tests with the deepstream-app application instead of our pipeline.

Hi,
I have finally found a solution to this strange problem. I'll try to explain briefly why the pipeline freezes and what my solution is. As I mentioned above, when I set "output-tensor-meta=1", the pipeline freezes after a couple of detections. The reason is simple but hard to find in the gst-nvinfer plugin code.

When I enable exporting of tensor metadata, NvDsInferTensorMeta keeps a reference to the tensor_out_object (GstNvInferTensorOutputObject). As a result, all resources allocated for a batch are released only after the whole batch has been processed. However, the batch also draws on nvinfer->pool, which has a limited size, and each object in the batch takes one buffer from that pool. When there are more objects in the batch than the pool can hold, a deadlock occurs: the next object in the batch waits for a free buffer from the pool, but no buffer is returned until the whole batch has been processed. In other words, the code in gst_nvinfer_process_objects should cap the number of objects per batch according to the pool size. I only ran into this because my cluster settings weren't ideal and the primary inference sent too many objects to the secondary inference, but it should have worked regardless.
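To make the deadlock easier to picture, here is a simplified illustration (my own sketch, not the actual gst-nvinfer code; the pool is modelled as a blocking GAsyncQueue and submit_batch is a stand-in for queueing the batch for inference):

#include <glib.h>

#define POOL_SIZE 6   /* illustrative; in my run the tensor meta was attached
                       * six times before the freeze, see the log above */

typedef struct {
  GAsyncQueue *free_bufs;   /* blocking pop == waiting for a free buffer */
} Pool;

/* Stand-in: pushes the batch for inference; the buffers it holds are
 * returned to pool->free_bufs only after the whole batch is processed. */
static void submit_batch (Pool *pool, GPtrArray *batch);

/* Deadlocking pattern: acquire a buffer for every object before submitting
 * anything. With more objects than POOL_SIZE, the pop below blocks forever,
 * because no buffer can come back until the batch is submitted. */
static void
process_objects_broken (Pool *pool, guint num_objects)
{
  GPtrArray *batch = g_ptr_array_new ();
  for (guint i = 0; i < num_objects; i++)
    g_ptr_array_add (batch, g_async_queue_pop (pool->free_bufs));
  submit_batch (pool, batch);   /* never reached when num_objects > POOL_SIZE */
}

/* Fixed pattern: never let one batch hold more buffers than the pool has,
 * so earlier sub-batches get processed and their buffers are recycled. */
static void
process_objects_fixed (Pool *pool, guint num_objects)
{
  GPtrArray *batch = g_ptr_array_new ();
  for (guint i = 0; i < num_objects; i++) {
    g_ptr_array_add (batch, g_async_queue_pop (pool->free_bufs));
    if (batch->len == POOL_SIZE) {
      submit_batch (pool, batch);   /* consumes the batch, recycles buffers */
      batch = g_ptr_array_new ();
    }
  }
  if (batch->len > 0)
    submit_batch (pool, batch);
  else
    g_ptr_array_free (batch, TRUE);
}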

I have fixed the code myself, but I hope this short description helps you improve the gst-nvinfer plugin in one of the next releases.

Regards,
Daniel
