The engine trained and deployed with TLT runs incorrectly in DeepStream

I used the KITTI dataset to train a model with 6 classes, and used deepstream_tlt_apps to successfully convert the .etlt file into an engine, but the following error occurred when it was used in DeepStream:


[screenshot of the error]

Any idea? Thanks.

Which width/height did you train at? Please attach your DeepStream config file too.

deepstream_app_config_fasterRCNN.txt (2.6 KB) default_spec_resnet50.txt (3.7 KB) pgie_frcnn_tlt_config.txt (2.6 KB)

Please modify your DeepStream config file.
Since you trained a model at 1248x384, you need to set

uff-input-dims=3;384;1248;0
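For context, in the Gst-nvinfer config the `uff-input-dims` property takes the form `channels;height;width;input-order` (0 = NCHW), so the height and width must match the training resolution, not the video resolution. A minimal sketch of the relevant section (the other keys are placeholders):

```
[property]
# channels;height;width;input-order (0 = NCHW)
uff-input-dims=3;384;1248;0
# placeholder paths, assuming a FasterRCNN .etlt export
tlt-encoded-model=faster_rcnn_resnet50.etlt
tlt-model-key=<your-encoding-key>
```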

Thank you for your reply. The detections are correct now, but the video stream output on Xavier is still very choppy, only 5 FPS.

Regarding FPS, that is a separate topic. Consider the following:

  1. The model you trained is 1248x384. Training a smaller model would improve the FPS.
  2. Is your model pruned and retrained?
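As a rough illustration of the pruning step mentioned in item 2 (the exact command-line flags vary across TLT versions, so treat the flags, filenames, and key below as placeholders rather than an exact recipe):

```shell
# Prune the trained model (threshold, paths, and key are placeholders)
tlt-prune -m faster_rcnn_resnet50.tlt \
          -o faster_rcnn_resnet50_pruned.tlt \
          -pth 0.1 \
          -k $ENCODING_KEY
# ...then retrain the pruned model with your training spec before exporting to .etlt
```

A higher pruning threshold removes more weights and gives faster inference, at some cost in accuracy; retraining afterwards recovers most of the lost precision.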

I have trained a detectnet_v2 model on a KITTI-formatted dataset that shows 58% average precision during evaluation in TLT, but when run on video it cannot detect a single object properly. Here is the training config file: resnet_train.txt (3.0 KB)
My input images are 1280x720, so where should I set the width/height parameters in the training config file?
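For reference, in a typical detectnet_v2 training spec the input resolution is set under `augmentation_config` (a sketch assuming the standard TLT spec layout; both dimensions should be multiples of 16, which 1280x720 satisfies):

```
augmentation_config {
  preprocessing {
    output_image_width: 1280
    output_image_height: 720
    output_image_channel: 3
  }
}
```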

@neuroSparK
Please create a new forum topic. Your issue is different from @Baot's.

Hi Morganh,
I now have a new problem. I am using my own dataset (KITTI format) to train a new model, but TLT does not seem to recognize my dataset folder, as shown below:

[screenshot of the log]

Any idea?

From the log

Train : 0 val: 0

no training or validation samples were found. Please check your spec and confirm the dataset paths are correct.
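Before editing the spec, it can help to verify the dataset directories from the same environment TLT runs in. A minimal sketch (the `images`/`labels` subdirectory names are assumptions about a KITTI-style layout):

```python
import os

# Sketch: sanity-check that a KITTI-style dataset root contains non-empty
# "images" and "labels" directories (the layout and names are assumptions).
def check_kitti_dirs(root):
    """Return {subdir: bool} indicating each dir exists and is non-empty."""
    report = {}
    for sub in ("images", "labels"):
        path = os.path.join(root, sub)
        report[sub] = os.path.isdir(path) and len(os.listdir(path)) > 0
    return report
```

If either entry comes back False inside the container, TLT will see 0 training samples regardless of what the spec says.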

I think my file path is correct:

[screenshot of the spec]
Please double-check the quotes.
They should be straight quotes (" ") instead of curly "smart" quotes (“ ”).
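Curly quotes often sneak in when a spec is pasted from a rich-text editor, and they are easy to miss by eye. A small sketch for locating them in a spec file's text:

```python
# Sketch: scan a TLT spec file's text for "curly" quotes that break parsing.
CURLY = {"\u201c", "\u201d", "\u2018", "\u2019"}  # “ ” ‘ ’

def find_curly_quotes(text):
    """Return (line, column, char) for every curly quote found in text."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for col, ch in enumerate(line, 1):
            if ch in CURLY:
                hits.append((lineno, col, ch))
    return hits

# Example: a spec line pasted from a rich-text editor (path is hypothetical)
bad = 'image_directory_path: \u201c/workspace/data\u201d'
for lineno, col, ch in find_curly_quotes(bad):
    print(f"line {lineno}, col {col}: curly quote {ch!r}")
```

Replacing each reported character with a straight `"` makes the spec parseable again.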

In the end, I solved the above problem by restarting the Jupyter notebook service; maybe something abnormal had happened in the SSH terminal.