The engine converted from a TLT-trained model runs incorrectly in DeepStream on the Jetson platform

I used TLT to train Mask R-CNN on the COCO 2017 dataset, and the errors when running in DeepStream are as follows:


Any ideas? Thanks.

Where did you run inference? On a Jetson device?

Yep, Xavier NX.

How did you generate the TRT engine? On the host PC?

Oh, do I need to run tlt-converter on the Xavier to get the engine file?

Yes, please use the Jetson version of tlt-converter to generate the TRT engine on your NX.
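A rough sketch of that conversion step, scripted from Python on the NX, in case it helps. The key, input dimensions, output node names, and file names below are assumptions based on the usual MaskRCNN export settings; replace them with the values from your own export:

    # Build the TensorRT engine on the NX by invoking the Jetson build of tlt-converter.
    import subprocess

    etlt_model = "model.etlt"          # exported MaskRCNN model (placeholder name)
    engine_out = "model.fp16.engine"   # engine file to generate (placeholder name)
    key = "<encryption key used at export time>"

    subprocess.run([
        "tlt-converter",
        "-k", key,
        "-d", "3,832,1344",            # C,H,W; must match the training input size (assumed default here)
        "-o", "generate_detections,mask_head/mask_fcn_logits/BiasAdd",  # typical MaskRCNN outputs; check your export log
        "-t", "fp16",                  # build an FP16 engine
        "-m", "1",                     # max batch size for the engine
        "-e", engine_out,
        etlt_model,
    ], check=True)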

Hi Morganh, I ran tlt-converter on the Xavier to generate the engine file. It works now, but the FPS is only 3. What is the problem?

Refer to https://developer.nvidia.com/blog/training-instance-segmentation-models-using-maskrcnn-on-the-transfer-learning-toolkit/
I'm afraid your result is similar to the numbers reported there. (Please run it with batch size 2.)

Currently, MaskRCNN does not support pruning yet. So, if you want a higher FPS, you can train a smaller network.

MaskRCNN

  • Input size: C * W * H (where C = 3, W >= 128, H >= 128, and W and H are multiples of 32); see the quick check after this list
  • Image format: JPG
  • Label format: COCO detection
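
As a quick sanity check on the input-size requirement above, here is a small sketch (not part of TLT, just for illustration) that verifies whether a candidate resolution satisfies the constraints:

    def valid_maskrcnn_input(c, w, h):
        # C must be 3; W and H must be at least 128 and multiples of 32.
        return c == 3 and w >= 128 and h >= 128 and w % 32 == 0 and h % 32 == 0

    print(valid_maskrcnn_input(3, 1344, 832))  # True: both 1344 and 832 are multiples of 32
    print(valid_maskrcnn_input(3, 1000, 600))  # False: neither 1000 nor 600 is a multiple of 32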