TAO classification model outputs only one class during inference in DeepStream

I have trained a binary image classification model using TAO Toolkit 3.22.05; PFA the training config truck_jun02_efficientnet_b0_classifier.txt (1.2 KB). The model is used as an SGIE in DeepStream 6.0; PFA the Gst-nvinfer configuration file truck_classifier.conf (1.6 KB) and the labels file classmap.txt (10 Bytes). However, for any given video the model only outputs a single class, ‘front’. I had evaluated the model in TAO Toolkit and also tested it on a few test images, and it gave fairly accurate results there.
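For reference, the truck_classifier.conf follows the usual TAO classifier SGIE pattern, roughly along these lines (an illustrative sketch, not the exact attachment; the model key, thresholds, and GIE IDs are placeholders, and the blob names match the engine info printed later in this thread):

[property]
gpu-id=0
# TAO classification preprocessing (BGR, caffe-style mean subtraction)
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
tlt-encoded-model=./efficientnet_b0_080_v1_224.etlt
tlt-model-key=<your-ngc-key>
labelfile-path=./classmap.txt
uff-input-blob-name=input_1
output-blob-names=predictions/Softmax
batch-size=1
network-mode=0
# network-type=1 marks this nvinfer instance as a classifier,
# process-mode=2 makes it run as a secondary GIE on detected objects
network-type=1
process-mode=2
gie-unique-id=2
operate-on-gie-id=1
classifier-async-mode=0
classifier-threshold=0.5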

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)

• DeepStream Version

• JetPack Version (valid for Jetson only)

• TensorRT Version

• NVIDIA GPU Driver Version (valid for GPU only)

• Issue Type (questions, new requirements, bugs)

• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)

• Requirement details (This is for new requirements. Include the module name, for which plugin or which sample application, and the function description.)

• Hardware Platform (GPU)

• DeepStream Version (6.0)

• TensorRT Version (8.0.1-1+cuda11.3)

• NVIDIA GPU Driver Version (470.57.02)

• Issue Type (bugs)

• How to reproduce the issue? (Train an image classification model with the efficientnet-b0 backbone available from NGC on TAO Toolkit 3.22.05 using the above-mentioned training config. Then export the model to .etlt format and deploy it in a DeepStream 6.0 Python application, based on the applications found in GitHub - NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications, reading classification data from NvDsObjectMeta.classifier_meta_list; see the probe sketch below.)
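For context, the classification results are read with a GStreamer pad probe; a minimal sketch following the deepstream_python_apps patterns (the function name and attachment point are mine, not from the actual app):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

def sgie_src_pad_probe(pad, info, u_data):
    # Walk batch -> frame -> object -> classifier -> label metadata.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            l_cls = obj_meta.classifier_meta_list
            while l_cls is not None:
                cls_meta = pyds.NvDsClassifierMeta.cast(l_cls.data)
                l_label = cls_meta.label_info_list
                while l_label is not None:
                    label_info = pyds.NvDsLabelInfo.cast(l_label.data)
                    # result_label / result_prob carry the winning class.
                    print(label_info.result_label, label_info.result_prob)
                    l_label = l_label.next
                l_cls = l_cls.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK

(The official samples additionally wrap each cast and .next in try/except StopIteration; that is omitted here for brevity.)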

You can use this sample to test the classification model directly.

I am getting the following error after running:

gst-launch-1.0 filesrc location=146_224_224.jpg ! jpegdec ! videoconvert ! video/x-raw,format=I420 ! nvvideoconvert ! video/x-raw\(memory:NVMM\),format=NV12 ! mux.sink_0 nvstreammux name=mux batch-size=1 width=224 height=224 ! nvinfer config-file-path=./dstest_appsrc_config.txt ! nvvideoconvert ! video/x-raw\(memory:NVMM\),format=RGBA ! nvdsosd ! nvvideoconvert ! video/x-raw,format=I420 ! jpegenc ! filesink location=out.jpg

Unknown or legacy key specified ‘is-classifier’ for group [property]
Setting pipeline to PAUSED …
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1484 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-classify-pgie-test/efficientnet_b0_080_v1_224.etlt_b1_gpu0_fp32.engine open error
0:00:00.284752124 3903 0x5579c1708840 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-classify-pgie-test/efficientnet_b0_080_v1_224.etlt_b1_gpu0_fp32.engine failed
0:00:00.284788108 3903 0x5579c1708840 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-classify-pgie-test/efficientnet_b0_080_v1_224.etlt_b1_gpu0_fp32.engine failed, try rebuild
0:00:00.284796351 3903 0x5579c1708840 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:661 INT8 calibration file not specified/accessible. INT8 calibration can be done through setDynamicRange API in ‘NvDsInferCreateNetwork’ implementation
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:1204 INT8 calibration file not specified. Trying FP16 mode.
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:1224 FP16 not supported by platform. Using FP32 mode.
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
0:00:26.109448462 3903 0x5579c1708840 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /opt/paralaxiom/vast/platform-BETA-VAST-5.0.0.9/nvast/ds_vast_pipeline/efficientnet_b0_080_v1_224.etlt_b1_gpu0_int8.engine successfully
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 2x1x1

0:00:26.116754610 3903 0x5579c1708840 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:./dstest_appsrc_config.txt sucessfully
Pipeline is PREROLLING …
ERROR: from element /GstPipeline:pipeline0/GstNvStreamMux:mux: memory type configured and i/p buffer mismatch ip_surf 0 muxer 3
Additional debug info:
gstnvstreammux.c(609): gst_nvstreammux_chain (): /GstPipeline:pipeline0/GstNvStreamMux:mux
ERROR: pipeline doesn’t want to preroll.
Setting pipeline to NULL …
Freeing pipeline …

Gst-nvinfer config: dstest_appsrc_config.txt (3.5 KB)

It is because nvvideoconvert’s output memory type is different from nvstreammux’s on DS6.2.
You can try … ! nvvideoconvert nvbuf-memory-type=3 ! …
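Applied to the command above, that becomes the following (nvbuf-memory-type=3 selects CUDA unified memory on dGPU, matching the muxer’s type 3 reported in the error):

gst-launch-1.0 filesrc location=146_224_224.jpg ! jpegdec ! videoconvert ! video/x-raw,format=I420 ! nvvideoconvert nvbuf-memory-type=3 ! video/x-raw\(memory:NVMM\),format=NV12 ! mux.sink_0 nvstreammux name=mux batch-size=1 width=224 height=224 ! nvinfer config-file-path=./dstest_appsrc_config.txt ! nvvideoconvert ! video/x-raw\(memory:NVMM\),format=RGBA ! nvdsosd ! nvvideoconvert ! video/x-raw,format=I420 ! jpegenc ! filesink location=out.jpg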

Thanks, I am able to get the output. How can I do this for a video, and also for a directory with a set of test images?

  1. Please ensure your model can output the right results using the command above.
  2. If using video or images, you need to add a detection model before the classification model. Please refer to the DeepStream sample /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2, which has one detection model and three classification models; see the example pipeline below.
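As a rough, untested sketch (the config name is the one shipped with deepstream-test2; the H.264 elementary stream is illustrative), a video pipeline with the detector PGIE feeding your classifier SGIE would look like:

gst-launch-1.0 filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! nvinfer config-file-path=dstest2_pgie_config.txt ! nvinfer config-file-path=./truck_classifier.conf ! nvvideoconvert ! video/x-raw\(memory:NVMM\),format=RGBA ! nvdsosd ! nveglglessink

Make sure operate-on-gie-id in the classifier config matches the PGIE’s gie-unique-id. For a directory of test images, multifilesrc with a filename pattern (e.g. multifilesrc location=img_%03d.jpg ! jpegdec ! videoconvert ! …) can replace the filesrc front end, keeping the rest of the pipeline the same.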
