Generated classifier engine's output isn't visible in the DeepStream pipeline

Hello!

I’m currently reworking the deepstream-test3 sample (which runs detection on multiple streams) to include a tracker and a secondary classifier, as in deepstream-test2. This works with the models provided in the sample, but I want to use a classifier I trained in PyTorch.

So far, I’ve checked the following:

  • The PyTorch model gives nonzero outputs when run on test data.
  • The converted ONNX model opens in netron.app and can be imported with a fixed batch size of 1.
  • DeepStream can generate a TensorRT engine file from the ONNX file with the proper batch size (force-implicit-batch-dim is turned off).
  • I can load the engine file with TensorRT directly and run inference on an input (and get nonzero outputs).
  • The pipeline works when the SGIE config uses the Secondary_VehicleType classifier provided by NVIDIA, even with a batch size of 1 and float32 processing mode.
  • When I switch to the custom model, though, no classifications are added to the OSD, even with the classifier threshold set to 0. The original video, along with bounding boxes and tracks, is still visible.
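Roughly, the engine check looks like this (the engine file name is a placeholder; TensorRT 8.5 Python API assumed):

```python
import tensorrt as trt

# Sanity-check the generated engine outside DeepStream: list every I/O
# tensor with its mode (INPUT/OUTPUT) and shape. "classifier.engine"
# is a placeholder for the actual engine file.
logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open("classifier.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    print(name, engine.get_tensor_mode(name), engine.get_tensor_shape(name))
```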

I’m not sure what else I can do to figure out how my model’s output differs from what DeepStream expects (the model returns a list containing a single 2D tensor of shape [batch_size, 8]).
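A standalone check of the ONNX model looks roughly like this (the file name and input size are placeholders for my actual ones; onnxruntime assumed installed):

```python
import numpy as np
import onnxruntime as ort

# Run the exported model outside DeepStream and confirm the output
# shape. "model.onnx" and the 224x224 input size are placeholders.
sess = ort.InferenceSession("model.onnx")
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape)

x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # batch size 1
outputs = sess.run(None, {inp.name: x})
print("output shape:", outputs[0].shape)  # expected: (1, 8)
print("output:", outputs[0])
```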

Thanks!
TJ

I guess it may be because the configuration file needs to be modified to match your model.

Since I don’t know your model’s inputs and outputs, you can refer to the following document for the relevant configuration keys.

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html

The nvinfer plugin is open source, so you can also debug by adding logs to its source.
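If you are unsure what the model exposes, you can print its input/output names and shapes with a short script like the sketch below (model.onnx is a placeholder for your file) and check them against your config:

```python
import onnx

# Print the input/output names and shapes of an exported ONNX model so
# the nvinfer config can be filled in to match. "model.onnx" is a
# placeholder for the actual file.
model = onnx.load("model.onnx")
onnx.checker.check_model(model)  # sanity-check the graph first

for inp in model.graph.input:
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print("input :", inp.name, dims)

for out in model.graph.output:
    dims = [d.dim_param or d.dim_value for d in out.type.tensor_type.shape.dim]
    print("output:", out.name, dims)
```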

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
• The pipeline being used

Hey! Sorry for the late reply. To answer your questions:

  • Hardware Platform: Jetson Xavier NX (Ubuntu 20.04)
  • DeepStream Version: 6.2
  • JetPack Version: 5.4
  • TensorRT Version: 8.5.2.2-1
  • Environment: created with the DeepStream container from jetson-containers (https://github.com/dusty-nv/jetson-containers)
  • Issue Type: Bug
  • How to reproduce this issue:
    • Start with deepstream-test2
    • Train a PyTorch classifier model and export it to ONNX (PyTorch 2.2.0+cu121, onnx 1.15.0); a sketch of the export step is shown after this list. (I’m not sure how to share the ONNX model; I can do so if needed.) Also create a labels.txt file listing the labels.
    • In dst2_sgie_config.yml, change the model path to said ONNX model, set the batch size to the correct number, set network-mode=0 for fp32 processing, and point the label file path at labels.txt.
    • Run the app with ./deepstream-test2-app ./dst2_config.yml. The stream runs with the display, but the classifier results never appear on it.
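Roughly, the export step looks like this (the model, file names, input size, and opset below are placeholders for my actual ones):

```python
import torch
import torchvision

# Placeholder export sketch: a stand-in 8-class classifier exported
# with a fixed batch of 1, as in my setup. Explicit input/output
# names make the tensors easy to match against the nvinfer config.
model = torchvision.models.resnet18(num_classes=8)
model.eval()

dummy = torch.randn(1, 3, 224, 224)  # fixed batch size of 1
torch.onnx.export(
    model,
    dummy,
    "classifier.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
```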

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

You can analyze this by working through the following questions:

1. Is the detector working, and can you see the bounding boxes?

2. Does your classifier require post-processing to produce classification labels? (See the sketch after these questions.)

3. What are the class IDs of the objects output by the detector? Which detector class is your classifier meant for? Do you need to configure the operate-on-class-ids parameter?
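On question 2: as far as I know, the default classifier parsing in nvinfer treats the output layer values as probabilities and compares them against classifier-threshold, so a model that outputs raw logits may never produce a label. A rough Python illustration of that logic (a conceptual sketch only, not DeepStream’s actual parser code; the label set and threshold are placeholders):

```python
import numpy as np

# Conceptual sketch only: NOT DeepStream's actual parser code. It
# illustrates why raw logits can fail: the default parsing expects
# probability-like scores, so a softmax must already be applied
# (inside the network or in a custom parser).
def parse_classifier_output(raw, labels, threshold=0.51):
    """Return (label, confidence), or None if nothing clears the threshold."""
    probs = np.exp(raw - raw.max())
    probs /= probs.sum()            # softmax the raw 8-element vector
    best = int(probs.argmax())
    if probs[best] < threshold:
        return None                 # no label is attached to the object
    return labels[best], float(probs[best])

labels = [f"class_{i}" for i in range(8)]  # placeholder label set
logits = np.random.randn(8)                # stand-in for the [1, 8] output
print(parse_classifier_output(logits, labels))
```

If your exported model does not end in a softmax, adding one to the ONNX graph, or parsing the raw tensor yourself via a custom parse function, should make the scores comparable to the threshold.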

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.