AP for class B is 0.0 at each epoch

Hi,
I'm new to TLT.
I am training an SSD + ResNet18 model using TLT.
When I evaluated the model:

*******************************
Using TLT model for inference, setting batch size to the one in eval_config: 16
Producing predictions: 100%|██████████████████████| 4/4 [00:20<00:00,  5.04s/it]
Start to calculate AP for each class
*******************************
Bottle        AP    0.89
Brand Label    AP    0.0
              mAP   0.445
*******************************

My KITTI format data looks like this (input image shape is 1920 x 1280):

Bottle 0.0 0 0.0 1448 206 1614 914 0.0 0.0 0.0 0.0 0.0 0.0 0.0
Bottle 0.0 0 0.0 1252 184 1448 944 0.0 0.0 0.0 0.0 0.0 0.0 0.0
Bottle 0.0 0 0.0 952 146 1186 978 0.0 0.0 0.0 0.0 0.0 0.0 0.0
Bottle 0.0 0 0.0 590 114 826 1002 0.0 0.0 0.0 0.0 0.0 0.0 0.0
Bottle 0.0 0 0.0 140 84 378 1026 0.0 0.0 0.0 0.0 0.0 0.0 0.0
Brand Label 0.0 0 0.0 1452 550 1606 842 0.0 0.0 0.0 0.0 0.0 0.0 0.0
Brand Label 0.0 0 0.0 1254 554 1410 862 0.0 0.0 0.0 0.0 0.0 0.0 0.0

The training log CSV looks like this:

epoch,AP_bottle,AP_brand label,loss,lr,mAP,validation_loss
20,0.9793069238741927,0.0,7.4368989424619505,0.02,0.48965346193709636,21.568314722606114
30,0.9759595853804064,0.0,6.759253094812673,0.02,0.4879797926902032,21.532711333158066
40,0.9737941030851158,0.0,6.133726015836299,0.02,0.4868970515425579,16.648316070741537
50,0.9856092576032248,0.0,5.520587285678229,0.02,0.4928046288016124,21.027197858508753
60,0.9822448704289952,0.0,5.159150861308188,0.02,0.4911224352144976,15.822613143191045
70,0.9869596004172352,0.0,4.582064160364186,0.002114743,0.4934798002086176,12.074139727621663
80,0.9885224518705049,0.0,4.5301909322490195,5e-05,0.49426122593525246,11.478109178494433

Thanks

@Morganh
Any suggestions?

Could you please try to modify Brand Label to Brand or Brand_Label in the label files?

In all the KITTI format labels as well?

Yes, what I suggested is to modify your KITTI format label files.
For example,
Brand Label 0.0 0 0.0 1254 554 1410 862 0.0 0.0 0.0 0.0 0.0 0.0 0.0
to
Brand 0.0 0 0.0 1254 554 1410 862 0.0 0.0 0.0 0.0 0.0 0.0 0.0

Please modify the training spec as well.
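
For example, the rename can be done across all label files with a one-line sed (a sketch; the label directory `/workspace/data/labels` is a placeholder, adjust it to your dataset):

```shell
# Replace the two-word class name "Brand Label" with "Brand_Label"
# at the start of every line, in place, across all KITTI label files.
# Requires GNU sed for -i; the path below is an assumed placeholder.
sed -i 's/^Brand Label /Brand_Label /' /workspace/data/labels/*.txt
```

The same substitution with `Brand ` as the replacement works if you prefer the single-word class name.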


Will try this.

[quote=“Srbh23, post:1, topic:181489”]
Input Image shape is 1920 x 1280.
[/quote]

@Morganh
Will it work for this input size?

It can. See SSD — Transfer Learning Toolkit 3.0 documentation

But why is it giving the AP for one class and not for the other one?
Both classes are in the same image.

No luck!! Still the same.

Did you modify the label in your validation dataset?

The KITTI format label has 15 fields. So if you do not modify, it has 16 fields which are not expected. See Data Annotation Format — Transfer Learning Toolkit 3.0 documentation
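
A quick way to catch this is to count fields per line: every KITTI label line should have exactly 15 whitespace-separated fields, and a class name containing a space shows up as 16 (a sketch; the label path is an assumed placeholder):

```shell
# Print any KITTI label line that does not have exactly 15 fields.
# A two-word class name such as "Brand Label" pushes the count to 16.
# The path /workspace/data/labels is a placeholder for your label dir.
awk 'NF != 15 { print FILENAME ": line " FNR " has " NF " fields" }' /workspace/data/labels/*.txt
```

If this prints nothing, all label lines have the expected field count.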

Thanks it worked!!

@Morganh
Do we have TLT 3.0 image with TensorRT 7.2.3?

Right now it is:

root@c0a449e97e12:/workspace# dpkg -l | grep TensorRT
ii libnvinfer-dev 7.2.1-1+cuda11.1 amd64 TensorRT development libraries and headers
ii libnvinfer-plugin-dev 7.2.1-1+cuda11.1 amd64 TensorRT plugin libraries
ii libnvinfer-plugin7 7.2.1-1+cuda11.1 amd64 TensorRT plugin libraries
ii libnvinfer7 7.2.1-1+cuda11.1 amd64 TensorRT runtime libraries
ii libnvonnxparsers-dev 7.2.1-1+cuda11.1 amd64 TensorRT ONNX libraries
ii libnvonnxparsers7 7.2.1-1+cuda11.1 amd64 TensorRT ONNX libraries
ii libnvparsers-dev 7.2.1-1+cuda11.1 amd64 TensorRT parsers libraries
ii libnvparsers7 7.2.1-1+cuda11.1 amd64 TensorRT parsers libraries
root@c0a449e97e12:/workspace#

Won't it cause issues while inferencing via DeepStream? We will export the model engine with TensorRT 7.2.1, but our DeepStream setup has 7.2.3.

Please copy the etlt file to the machine where you want to run inference. Then on that machine, download the corresponding tlt-converter (TensorRT — Transfer Learning Toolkit 3.0 documentation) and run it to generate the TensorRT engine for inference use.

@Morganh Okay, the tlt-converter issue is resolved.

We are getting class IDs of 1 and 2 from DeepStream inference instead of 0 and 1 out of that NMS,
and our labels.txt has only two classes.
To work around it we used a labels.txt like:

unknown
classA
classB

How do we resolve this? Is it related to exporting the model from TLT?

There is no update from you for a period, assuming this is not an issue any more.
Hence we are closing this topic. If need further support, please open a new one.
Thanks

Sorry, I cannot get your point.
You already trained two classes - bottle and brand_label. Right?
When you run inference with deepstream, what is the issue?

BTW, what is the “unknown”?

It should not be related to exporting.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.