I’m currently reworking the deepstream-test3 sample (detection on multiple streams) to include a tracker and a secondary classifier, as in deepstream-test2. This works with the provided models, but I wanted to use a classifier I trained in PyTorch.
So far, I’ve checked the following:
The PyTorch model gives nonzero outputs when run on test data.
The converted ONNX model loads in netron.app and can be imported with a fixed batch size of 1.
DeepStream can generate a TensorRT engine file from the ONNX file with the proper batch size (force-implicit-batch-dim is turned off).
I can load the engine file with TensorRT and run inference on an input (and get nonzero output).
The pipeline works when the SGIE config uses the Secondary_VehicleTypes classifier provided by NVIDIA, even with a batch size of 1 and FP32 processing mode.
When I switch to the custom model, though, no classifications are added to the OSD, even with the classifier threshold set to 0. The original video, along with bounding boxes and tracks, is still visible.
I’m not sure what else I can do to figure out why my model output differs from what DeepStream expects (it is a list containing a single 2D tensor of shape [batch_size, 8]).
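One thing I looked at on the parsing side: as I understand it (an assumption, not confirmed from the nvinfer source), the default classifier parsing reads the output buffer as per-class scores, takes the argmax, and keeps the label only if its score exceeds classifier-threshold. If the ONNX model ends in raw logits rather than a softmax, the top score can be negative, so no label survives even at threshold 0. A minimal sketch of that failure mode:

```python
import math

def parse_classifier(scores, threshold):
    """Mimic (my understanding of) nvinfer's default classifier parsing:
    pick the max-scoring class, keep it only if its score > threshold."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return best if scores[best] > threshold else None

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Raw logits straight from a PyTorch classification head: the max value
# can be negative, so the parser drops the label even with threshold 0.
logits = [-4.1, -0.3, -2.7, -5.0, -1.9, -3.3, -0.8, -2.2]
print(parse_classifier(logits, threshold=0.0))           # None

# After softmax the same output parses as class 1.
print(parse_classifier(softmax(logits), threshold=0.0))  # 1
```

If this is the cause, appending a softmax to the model before export (or using a custom parse function) should make the labels appear.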
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (For bugs: include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing.)
• Requirement details (For new requirements: include the module name, i.e. which plugin or which sample application, and the function description.)
• The pipeline being used
Train a PyTorch classifier model and export it to ONNX (PyTorch 2.2.0+cu121, ONNX 1.15.0). (I’m not sure how to share the ONNX model; I can do so if needed.) Also create a labels.txt file listing the labels.
In dst2_sgie_config.yml, change the path to point to that ONNX model, set the batch size to the correct number, set network-mode: 0 for FP32 processing, and set the labels.txt file path.
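The changed SGIE properties ended up roughly as below. Paths, the batch size, and the gie ids are placeholders; the property names follow the nvinfer configuration reference:

```yaml
property:
  onnx-file: /path/to/classifier.onnx
  labelfile-path: /path/to/labels.txt
  batch-size: 16
  network-mode: 0        # 0 = FP32
  network-type: 1        # 1 = classifier
  process-mode: 2        # 2 = secondary, operate on detected objects
  gie-unique-id: 2
  classifier-threshold: 0.0
```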
Run the app with ./deepstream-test2-app ./dst2_config.yml. The stream runs with display, but the classifier results do not appear on the display.
There has been no update from you for a while, so we are assuming this is no longer an issue and closing this topic. If you need further support, please open a new one. Thanks.
You can analyze it from the following questions:
1. Is the detector working, and can you see the bounding boxes?
2. Does your classifier require post-processing to get classification labels?
3. What are the class IDs of the objects output by the detector?
4. Which category is your classifier for? Do you need to configure the operate-on-class-ids parameter?
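On question 4: if the classifier should only run on some of the detector's classes, the SGIE config needs operate-on-class-ids set to those class ids. The ids below are examples only:

```yaml
property:
  operate-on-gie-id: 1       # unique id of the primary detector
  operate-on-class-ids: 0    # detector class id(s) to classify, e.g. 0 = vehicle
```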