Inaccurate GIE classification after converting to TensorRT

• Hardware Platform (Jetson / GPU): Jetson Orin Nano
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only):
• TensorRT Version: 8.5.2.1
• Issue Type( questions, new requirements, bugs): Question/Bug

Hi there, I have been facing an issue with the accuracy of classification models (Caffe and ONNX) after converting them to TensorRT engines. As in this post, I first tried using a custom classification model (yolov8-cls) to perform gender classification as a Secondary GIE, but was having issues with inaccurate predictions. More specifically, the model output was always the same class (1: Male) with a confidence of 1.

I tested the ONNX model on the Jetson Orin Nano using this script:
testONNX.zip (835 Bytes)
. This performed as expected. I then converted it to a TensorRT engine using this command:

trtexec --onnx=genderClassification.onnx --saveEngine=genderClassification_fp16.trt --fp16

Using the following script, I tested the performance of the TensorRT engine, and it exhibited the inaccurate behaviour described above:
testTRT.zip (1.1 KB)

Following the suggestions in this post, I then tried implementing the Secondary_CarColor model as described in this FAQ post. I used the following command to build the TensorRT engine from the provided files:

trtexec --deploy=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarColor/resnet18.prototxt --output=predictions/Softmax --maxBatch=1 --saveEngine=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarColor/resnet18.caffemodel_b1_gpu0_int8.engine

I then used the following command to test the engine:

gst-launch-1.0 filesrc location=blueCar.jpg ! jpegdec ! videoconvert ! video/x-raw,format=I420 ! nvvideoconvert ! video/x-raw\(memory:NVMM\),format=NV12 ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! nvinfer config-file-path=./dstest_appsrc_config.txt ! nvvideoconvert ! video/x-raw\(memory:NVMM\),format=RGBA ! nvdsosd ! nvvideoconvert ! video/x-raw,format=I420 ! jpegenc ! filesink location=out.jpg

Similarly to the behaviour exhibited by my custom model, this TensorRT engine also displays very poor accuracy, consistently labelling cars as “gold”.


I also tried building the engine using the --fp16 flag, with similar results.

Any assistance would be greatly appreciated

Your image preprocessing in testONNX.py and testTRT.py are different. Why?

testONNX.py

image = Image.open(image_path).resize((320, 320))
image_data = np.array(image).astype(np.float32)
# Normalize image data if required by your model
image_data = image_data / 255.0
image_data = np.transpose(image_data, (2, 0, 1))  # HWC to CHW
image_data = np.expand_dims(image_data, axis=0)  # Add batch dimension

testTRT.py

image = Image.open('/home/aizatron/Downloads/blueCar.jpg').resize((224, 224)).convert('RGB')
image_np = np.array(image).astype(np.float32)
image_np = np.transpose(image_np, (2, 0, 1)).ravel()

Please dump the input tensor in your two files to check and guarantee the inputs to the two models are just the same.
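One way to follow this suggestion is to dump the preprocessed tensor from each script to disk and diff the two dumps offline. A minimal sketch (the file names and tolerance are arbitrary placeholders, not part of the original scripts):

```python
import numpy as np

def dump_tensor(tensor, path):
    """Save the exact array that is about to be fed to the model."""
    np.save(path, np.asarray(tensor, dtype=np.float32))

def compare_dumps(path_a, path_b, atol=1e-5):
    """Return (match, message) for two dumped input tensors."""
    a, b = np.load(path_a), np.load(path_b)
    if a.shape != b.shape:
        return False, f"shape mismatch: {a.shape} vs {b.shape}"
    if np.allclose(a, b, atol=atol):
        return True, "inputs match"
    return False, f"max abs diff: {np.abs(a - b).max():.6f}"

# Example: a normalized tensor vs one missing the /255 normalization
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
dump_tensor(x, "onnx_input.npy")
dump_tensor(x * 255.0, "trt_input.npy")  # simulates a preprocessing mismatch
print(compare_dumps("onnx_input.npy", "trt_input.npy"))
```

If the dumps differ, the accuracy gap comes from preprocessing rather than from the engine conversion itself.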

Hi, thanks for your response.

My apologies, I made changes to the preprocessing in the testTRT file while I was experimenting with the Secondary_CarColor example. The testONNX file was only used with my custom model, hence the differences in image size etc. I believe the more pressing issue is why the inaccurate classification behaviour is exhibited when I use the sample provided in the FAQ. In that case I ran the GStreamer pipeline directly, as instructed in the linked post, and not by using my Python scripts.

I’m confused by your description. Have you made sure that the preprocessing you configured for DeepStream nvinfer is the same as what you used in the Python code with the ONNX model?

DeepStream also uses TensorRT. As you said, testTRT.py also exhibited the inaccuracy, so we need to identify which part causes it. That is why I checked the preprocessing in the two Python files. Please provide the exact code you are using, or guarantee yourself that the inputs to the two models (ONNX model and TRT model) are exactly the same.

For now, let us ignore the python scripts that I wrote and my custom classification model and rather shift the focus on why the inaccurate classification is occurring when I follow the instructions laid out in the FAQ post. When I followed those instructions, I used only the resources provided in the Secondary_CarColor example. I made no changes to any of the configuration files.

I had to run the following command, which is not listed in the FAQ post, in order to build the engine, as the engine file was not found:

trtexec --deploy=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarColor/resnet18.prototxt --output=predictions/Softmax --maxBatch=1 --saveEngine=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarColor/resnet18.caffemodel_b1_gpu0_int8.engine

After this, I tested the performance using the gstreamer command listed in the FAQ post. It continuously resulted in cars being classified as gold. This leads me to believe that the issue is not with preprocessing of the input data, since I made no changes to the pipeline or configuration files.

Hopefully that clarifies things for you.

  1. The DeepStream nvinfer plugin can generate the TensorRT engine automatically; you don’t need to generate it with “trtexec”. The steps in the FAQ are enough.
  2. Your “trtexec” command is wrong.
/usr/src/tensorrt/bin/trtexec --deploy=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarColor/resnet18.prototxt --output=predictions/Softmax --maxBatch=1 --saveEngine=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarColor/resnet18.caffemodel_b1_gpu0_fp32.engine --model=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarColor/resnet18.caffemodel
  3. Please raise TensorRT-related topics in the TensorRT forum: Latest Deep Learning (Training & Inference)/TensorRT topics - NVIDIA Developer Forums

Thank you @Fiona.Chen . When I tried following the FAQ, I got an error saying the engine file was not found, which is why I tried using the trtexec command to build the engine.

I will try with the corrected trtexec command and report back.

That is just a warning; the engine will be generated automatically. You can ignore such log messages.

Thanks again, @Fiona.Chen. I was able to get the Secondary_CarColor example working as expected. Once that was working, I tried using the same process to test my custom classification model.

I made changes to the config file to suit my application. Here is the config file:
dstest_appsrc_configCustom.txt (3.3 KB)

I used this command to run the pipeline:

gst-launch-1.0 filesrc location=personFemale1.jpeg ! jpegdec ! videoconvert ! video/x-raw,format=I420 ! nvvideoconvert ! video/x-raw\(memory:NVMM\),format=NV12 ! mux.sink_0 nvstreammux name=mux batch-size=1 width=320 height=320 ! nvinfer config-file-path=./dstest_appsrc_configCustom.txt ! nvvideoconvert ! video/x-raw\(memory:NVMM\),format=RGBA ! nvdsosd ! nvvideoconvert ! video/x-raw,format=I420 ! jpegenc ! filesink location=out.jpg

This resulted in the same behaviour I initially described, where the output is continuously labelled as Male (class 0). Following your advice to ensure the inputs to the two models are the same, I combined my two previous test scripts into this script:
testONNX_TRT.zip (1.4 KB)

Using this script, I got the expected output using both the ONNX model and TensorRT engine, with both classes being detected accurately.

I then wrote the following python script to allow me to insert a custom probe to understand why the GStreamer pipeline was not working:
testGST.zip (1.5 KB)

With this,

l_user = frame_meta.frame_user_meta_list
if l_user is None:
    print("No metadata found")

I was able to determine that the nvinfer plugin does not appear to be outputting any metadata. This appears to be the reason that the output is always labelled as Class 0 (Male) when using the GStreamer pipeline.

I am not sure if I have set up my configuration file incorrectly or if there is some other cause of the issue.
Please advise.

Do you know what the following codes in your testONNX_TRT.py mean?

def preprocess_image(image_path, img_size=(320, 320)):
    image = Image.open(image_path).resize(img_size).convert('RGB')
    image_data = np.array(image).astype(np.float32)
    # Normalize image data if required by your model
    image_data = image_data / 255.0
    image_data = np.transpose(image_data, (2, 0, 1))  # HWC to CHW
    image_data = np.expand_dims(image_data, axis=0)  # Add batch dimension
    return image_data

Please read DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums and fill in the correct preprocessing parameters in the nvinfer configuration file. Each model has its own preprocessing algorithm; please make sure you know the preprocessing algorithm your model needs, or consult whoever provided the model to you.

I have attempted to adjust the preprocessing parameters in the config file. Attached is the new config file.
dstest_appsrc_configCustom1.txt (1.8 KB)

If I set net-scale-factor=0.00392156862 (equivalent to the normalization used in my Python script), then no class is detected (no box is drawn around the object).
If I leave net-scale-factor=1, then I get the same behaviour as previously described.
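For reference, nvinfer’s documented preprocessing scales each pixel as y = net-scale-factor * (x - mean), so net-scale-factor=0.00392156862 (i.e. 1/255) with zero offsets should correspond to the `image_data / 255.0` step in the Python script. A minimal numpy sketch of that scaling (the parameter names mirror the config keys, but this is only an illustration, not the nvinfer source):

```python
import numpy as np

def nvinfer_scale(pixels, net_scale_factor=1.0 / 255.0, offsets=(0.0, 0.0, 0.0)):
    """Sketch of the per-pixel scaling nvinfer applies:
    y = net-scale-factor * (x - offsets), per channel, HWC layout."""
    pixels = np.asarray(pixels, dtype=np.float32)
    return net_scale_factor * (pixels - np.array(offsets, dtype=np.float32))

x = np.array([[[0.0, 128.0, 255.0]]], dtype=np.float32)  # one RGB pixel
print(nvinfer_scale(x))  # approximately [0, 0.502, 1]
```

Note that besides the scale factor, the channel order (model-color-format) and input resolution in the config also have to match what the Python script feeds the ONNX model.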

I am at a complete loss. Any further assistance will be appreciated.

I can provide all the necessary files to see if you can recreate the issue on your side if needed.

According to your Python code, your model’s output needs an argmax step. Please customize your own classifier postprocessing function.

How do I implement my own postprocessing function?

I don’t think the argmax will make a difference, since the model is consistently returning a confidence score of 1.00 for the incorrect class when using nvinfer.

Take NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream (github.com) as an example: the LPR model is a classifier model whose output tensor needs custom postprocessing. The customized function and configurations are in the sample.

gst-nvinfer is open source; please read the source code to understand the default classifier postprocessing inside it. The default classifier postprocessing does not match your model, so you need to customize the postprocessing according to your model.
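A real gst-nvinfer classifier parser has to be a C/C++ function registered via parse-classifier-func-name and custom-lib-path in the config file, but the math such a parser would perform for an argmax-style model can be sketched in Python (the label names below are hypothetical, chosen to match this thread’s two-class gender model):

```python
import numpy as np

def parse_classifier(output, labels):
    """Sketch of argmax-style classifier postprocessing:
    softmax the raw output (in case the model emits logits rather than
    probabilities), then pick the highest-probability label."""
    logits = np.asarray(output, dtype=np.float32)
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    probs = exp / exp.sum()
    idx = int(np.argmax(probs))
    return labels[idx], float(probs[idx])

labels = ["female", "male"]  # hypothetical label file contents
print(parse_classifier([0.2, 2.5], labels))
```

The default parser instead compares each output value against classifier-threshold, which is why a model emitting raw (unnormalized) scores can appear to report a fixed class with confidence 1.00.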