Accuracy of model "CarMake" is only about 35% with my test dataset

**• Hardware Platform (Jetson / GPU)** GPU
**• DeepStream Version** 5.0
**• JetPack Version (valid for Jetson only)**
**• TensorRT Version** 7.0
**• NVIDIA GPU Driver Version (valid for GPU only)** 440.33.01
**• Issue Type (questions, new requirements, bugs)** Questions

Hi sirs,
I used the model under samples/models/Secondary_CarMake/ to test my own dataset, for example:


→ inferred as “mercedes”

→ inferred as “kia”

→ inferred as “honda”
My preprocessing steps are as follows (a rough sketch of steps 1–3 is shown after the list):

  1. Detect cars with an object detector
  2. Crop each car from the original picture
  3. Resize the crop to width 224 and height 224
  4. Run inference with the following command:
    gst-launch-1.0 filesrc location={Car Image} ! jpegparse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=224 height=224 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test2/dstest2_pgie2_config.txt ! queue ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! jpegenc ! filesink location=result.jpg
    dstest2_pgie2_config.txt is here:
    dstest2_pgie2_config.txt (3.8 KB)
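
As a rough illustration of steps 1–3 (not your exact code), here is a minimal crop-and-resize sketch using OpenCV; the detector output format, file names, and box values are hypothetical placeholders:

    import cv2

    def crop_and_resize(image_path, box, size=(224, 224)):
        """Crop a detected car out of the original picture and resize it
        to the classifier's 224x224 input resolution."""
        img = cv2.imread(image_path)      # original frame (BGR)
        x, y, w, h = box                  # hypothetical detector output: top-left corner + width/height
        car = img[y:y + h, x:x + w]       # step 2: cut the car from the original picture
        return cv2.resize(car, size)      # step 3: resize to 224x224

    # Hypothetical usage; the box would come from your car detector (step 1).
    crop = crop_and_resize("frame_0001.jpg", box=(120, 80, 300, 200))
    cv2.imwrite("car_crop.jpg", crop)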

Since the reported accuracy of the model is about 91%, I am confused by my inference results.
Is there any step I am missing, or is there some limitation of the model?

Thanks

Hey customer, may I know how you calculated the accuracy figures, i.e. the 91% and the 35%?

Hi Bcao,

Thanks for your reply.
The 91% is from the NGC model page:
https://ngc.nvidia.com/catalog/models/nvidia:tlt_vehiclemakenet
(see the Methodology and KPI section).
The 35% is the number of correctly classified images divided by the total number of images:
Correct images: images for which the model's inference is correct
Total images: total number of images inferred
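
For reference, a minimal sketch of that calculation, assuming the ground-truth labels and model predictions are available as two parallel lists (all names and values below are hypothetical toy data):

    # One ground-truth label and one model prediction per test image (toy data).
    ground_truth = ["mercedes", "kia", "honda", "toyota"]
    predictions  = ["mercedes", "bmw", "honda", "ford"]

    correct = sum(gt == pred for gt, pred in zip(ground_truth, predictions))
    accuracy = correct / len(ground_truth)   # correct images / total images
    print(f"accuracy = {accuracy:.2%}")      # 50.00% for this toy data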

Thanks

Hi sir,
Is there any update?

Thanks

Hey @Morganh, can we calculate the accuracy the way the user did? Is this result expected?

@chris5_lin
According to your comments above, you want to compare against https://ngc.nvidia.com/catalog/models/nvidia:tlt_vehiclemakenet
So, please follow /opt/nvidia/deepstream/deepstream/samples/configs/tlt_pretrained_models/README to download the necessary models, and then run the command below:
$ deepstream-app -c deepstream_app_source1_trafficcamnet.txt

In dstest2_pgie2_config.txt (3.8 KB), you were using a Caffe model instead of the TLT model, so you were not testing the TLT model at all.
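
For illustration, this is roughly how the two cases differ in an nvinfer config: a Caffe classifier is loaded through model-file/proto-file, while a TLT (.etlt) classifier such as VehicleMakeNet is loaded through tlt-encoded-model and tlt-model-key. The paths and the key value below are placeholders; take the exact values from the sample configs referenced by the tlt_pretrained_models README:

    # Caffe-based CarMake classifier (what dstest2_pgie2_config.txt points at) -- placeholder paths
    [property]
    model-file=<path>/Secondary_CarMake/resnet18.caffemodel
    proto-file=<path>/Secondary_CarMake/deploy.prototxt
    labelfile-path=<path>/Secondary_CarMake/labels.txt

    # TLT VehicleMakeNet classifier (the model the 91% NGC figure refers to) -- placeholder paths/key
    [property]
    tlt-encoded-model=<path>/vehiclemakenet/resnet18_vehiclemakenet_pruned.etlt
    tlt-model-key=<model key from the README>
    labelfile-path=<path>/vehiclemakenet/labels.txt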

@chris5_lin
Here is more reference material about how to run inference with a classification model.