Training Object Detection with pretrained ResNet-18, following the DetectNet_v2 example

Hello,
I just trained an object detection model for 2 classes. After finishing training, I wanted to run the detect_net_inference_imagefeeder app. I modified the detect_net_inference.subgraph.json file, adding a second label under the detection decoder.
But when I try to run the app, I get an error message:

Where can I set the number of classes to detect? I can't find any file where I can edit the number of classes.

Thanks in advance!

Hi markus,
Could you refer to https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#intg_detectnetv2_model?
It will run inference with the DeepStream SDK.

You can edit the number of classes in the DeepStream config file.
For example:

num-detected-classes=3
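A minimal sketch of the relevant [property] section for a TLT DetectNet_v2 model (the model key, file names, and paths below are placeholders; the blob names assume the standard DetectNet_v2 outputs):

[property]
tlt-model-key=<your_key>
tlt-encoded-model=resnet18_detector.etlt
labelfile-path=labels.txt
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
num-detected-classes=2

num-detected-classes should match the number of classes in your label file.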

Hi Morganh, thanks for the quick reply. I am currently facing another problem when running the evaluation step in the Jupyter notebook example. The Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation describes adding the following lines to the detectnet_v2_train_resnet18_kitti.txt file:

validation_data_source: {
tfrecords_path: "path to testing tfrecords root"
image_directory_path: ""
}

But I don't have a "path to testing tfrecords root". Is there a way to generate those files?

Please see https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#dataloader
You can use part of the training tfrecords for validation:

validation_fold: 0
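In context, this goes in the dataset_config section of the training spec, roughly as follows (paths are placeholders):

dataset_config {
data_sources {
tfrecords_path: "/path/to/tfrecords/kitti_trainval/*"
image_directory_path: "/path/to/training/data"
}
validation_fold: 0
}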

Or set it as below:

validation_data_source: {
tfrecords_path: "path to testing tfrecords root"
image_directory_path: ""
}

That means if you have a new tfrecord, you can set it as the validation source.
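To generate a new tfrecord from a KITTI-format validation split, you can run the dataset converter, roughly as below (the spec file and output path are placeholders):

tlt-dataset-convert -d kitti_val_spec.txt -o /path/to/tfrecords/kitti_val/kitti_val

The spec file points to the image and label directories of your validation split.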

Thanks. I have another question: when I visualize inferences in the Jupyter notebook with tlt-infer on the test data for the object, I get quite accurate bounding boxes. However, if I export the .tlt model and use the detect_net_inference_imagefeeder app, I get completely different bounding boxes, which don't fit the object at all. When I used the imagefeeder app before, I got the same results as in the Jupyter notebook with tlt-infer. The only things I changed in the app files were the path to the exported .etlt model, the class label name, and the folder of test images. Any ideas on that? Thanks in advance.
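For reference, I exported the model roughly like this (the key and file names are placeholders):

tlt-export detectnet_v2 -m resnet18_detector.tlt -o resnet18_detector.etlt -k $KEY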

What is the "detect_net_inference_imagefeeder app"?

I am following this example: https://docs.nvidia.com/isaac/isaac/packages/detect_net/doc/detect_net.html#tensorrt-inference-on-tlt-models. The app is included in the Isaac SDK download folder.

You mentioned, "When I used the imagefeeder app before, I got the same results as in the Jupyter notebook with tlt-infer." At that time, which model were you exporting to the app?

The model I trained in the Jupyter notebook for my own object, for which I generated simulated training data in Unity3D. When I then exported this trained model, I got the same results in the Jupyter notebook visualization and when using the detect_net_inference_imagefeeder app.

I am not focusing on Isaac SDK support, so I need to spend some time checking the issue you mentioned.
This kind of issue mostly results from post-processing. Is there any code or config for post-processing in this Isaac SDK app?
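For comparison, on the TLT side post-processing is controlled by the postprocessing_config block of the spec; a rough sketch (the class name and threshold values are placeholders):

postprocessing_config {
target_class_config {
key: "class_1"
value {
clustering_config {
coverage_threshold: 0.005
dbscan_eps: 0.15
dbscan_min_samples: 0.05
minimum_bounding_box_height: 20
}
}
}
}

If the app applies different thresholds or clustering than tlt-infer, the boxes can differ even with the same model.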

Okay. No, there is no post-processing config in the app file.

Can you share detect_net_inference_imagefeeder.app.json and the TLT training spec?
