Get the full vector of probabilities for every class for one classifier in DeepStream

I have a detector linked to a tracker, which is then linked to a classifier.
The detector detects characters.
The tracker tracks them and gives a unique ID to every one of them.
The classifier classifies each character into the appropriate class (‘A’, ‘B’, …).

I just want the full vector of the output layer (softmax layer) of the classifier.

It has dimensions [1, 26] and contains [probability of belonging to ‘A’, probability of belonging to ‘B’, …].

I followed the tutorial provided here: deepstream_python_apps/custom_parser_guide.md at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub

But the l_user variable is always None. It does not contain any data.

How do I get this vector?

Hi @bilel_bj,
Please provide the setup info as in other topics.

For your question, did you set “output_tensor_meta: true” as in deepstream_python_apps/dstest_ssd_nopostprocess.txt at 5cb4cb8be92e079acd07d911d265946580ea81cd · NVIDIA-AI-IOT/deepstream_python_apps · GitHub?

This is the setup of my Xavier NX:

  • NVIDIA Jetson Xavier NX (Developer Kit Version)
    • Jetpack UNKNOWN [L4T 32.4.4]
    • NV Power Mode: MODE_15W_6CORE - Type: 2
    • jetson_stats.service: active
  • Libraries:
    • CUDA: 10.2.89
    • cuDNN: 8.0.0.180
    • TensorRT: 7.1.3.0
    • Visionworks: 1.6.0.501
    • OpenCV: 4.1.1 compiled CUDA: NO
    • VPI: 0.4.4
    • Vulkan: 1.2.70

I put output-tensor-meta=1 inside the config file of the classifier.
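For reference, the [property] group of my classifier config looks roughly like this (a sketch: the engine file name and the IDs are placeholders; output-tensor-meta=1 is the key that matters here, and network-type=1 marks the nvinfer instance as a classifier):

[property]
gpu-id=0
model-engine-file=character_classifier.engine
batch-size=1
network-type=1
process-mode=2
operate-on-gie-id=1
gie-unique-id=2
# ask nvinfer to attach its raw output tensors as NvDsInferTensorMeta
output-tensor-meta=1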

But the variable l_user (l_user = obj_meta.obj_user_meta_list) always returns None. It does not contain the NvDsInferTensorMeta at all.

What is the problem?

Can you reproduce this issue with the deepstream-ssd-parser sample?

The example does not work; it generates this error:

(python3:16826): GStreamer-WARNING **: 09:09:46.960: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstlibav.so': /usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block
Unable to create Encoder
If the following error is encountered:
/usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block
Preload the offending library:
export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1
Traceback (most recent call last):
  File "deepstream_ssd_parser.py", line 458, in <module>
    sys.exit(main(sys.argv))
  File "deepstream_ssd_parser.py", line 363, in main
    encoder.set_property("bitrate", 2000000)
AttributeError: 'NoneType' object has no attribute 'set_property'

Could this error be linked to my problem?

Please do check the README:

  1. Add to LD_PRELOAD:
    /usr/lib/aarch64-linux-gnu/libgomp.so.1
    This is to work around the following problem with TLS usage limitation:
    91938 – libgomp (and libitm) DSOs are incorrectly built with initial-exec tls-model

It was done, but now it generates this error:

ERROR: failed to load model: ssd_inception_v2_coco_2018_01_28, nvinfer error:NVDSINFER_TRTIS_ERROR

In my case I am working with TensorRT engines for the classifier and the detector, not with the Triton Server.

/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-trtis/README


Preparing TensorRT, Tensorflow, ONNX models

  1. Go to samples directory and run the following command.
    $ ./prepare_ds_trtis_model_repo.sh
    All the sample models should be downloaded/generated into
    samples/trtis-model-repo directory.

I ran this sample. It is very heavy on the GPU (it is based on the Triton server).

The l_user variable is not None; it returns this value:

l_user <pyds.GList object at 0x7eccd66730>

But I ran another sample, deepstream-test2-tensor-meta (which runs on TensorRT like my example), and there the l_user variable can be retrieved. It prints this message on the screen:

Inside l_user = obj_meta.obj_user_meta_list Loop
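For reference, the part of that sample's probe which produces this print is roughly the following (a sketch: obj_meta comes from the enclosing object-meta loop, and I assume pyds exposes NVDSINFER_TENSOR_OUTPUT_META as in the tensor-meta samples):

l_user = obj_meta.obj_user_meta_list
while l_user is not None:
    user_meta = pyds.NvDsUserMeta.cast(l_user.data)
    # keep only the tensor output meta that nvinfer attaches when output-tensor-meta=1
    if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
        print("Inside l_user = obj_meta.obj_user_meta_list Loop")
        tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
    try:
        l_user = l_user.next
    except StopIteration:
        break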

By the way, it shows these warning messages while running my app:

Unknown or legacy key specified 'output_tensor_meta' for group [property]
Unknown or legacy key specified 'is-classifier' for group [property]

In my case l_user is always None. Why?

The property is "output-tensor-meta", not "output_tensor_meta"; could you double check?

Yes, fixed, but that did not resolve the issue. I needed to attach another probe function to the sink pad of the plugin that comes just after the classifier plugin. It works now: l_user is no longer always None, and it returns data related to the detected character.
Thanks for your help.
I have another issue, presented here: Read the buffer data recuperated from NvDsInferLayerInfo (in Python)

Could you help me with it?

Is the issue of this topic solved?

Yes, it is solved. The solution is to attach another SGIE probe to the sink pad of the plugin that directly follows the classifier plugin. In my case it was the nvvidconv sink pad:

vidconvsinkpad = nvvidconv.get_static_pad("sink")
if not vidconvsinkpad:
    sys.stderr.write(" Unable to get sink pad of nvvidconv \n")
vidconvsinkpad.add_probe(Gst.PadProbeType.BUFFER, sgie_sink_pad_buffer_probe, 0)

Then access it inside the probe using:

gst_buffer = info.get_buffer()
batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
l_frame = batch_meta.frame_meta_list
frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
l_obj = frame_meta.obj_meta_list
obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)  # cast the object meta before reading its user meta list
l_user = obj_meta.obj_user_meta_list
user_meta = pyds.NvDsUserMeta.cast(l_user.data)
tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)

The data is stored in tensor_meta.
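From there the output layers can at least be enumerated (a sketch using the pyds helper from the ssd-parser sample; reading the raw softmax buffer itself is the part covered by the separate topic below):

# tensor_meta is the NvDsInferTensorMeta obtained above
for i in range(tensor_meta.num_output_layers):
    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
    # for this classifier there should be a single softmax layer with 26 values
    print(i, layer.layerName)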

There is still a technical problem with accessing the output layer from this variable. This is addressed in a separate topic: Read the buffer data recuperated from NvDsInferLayerInfo (in Python)