Issues with TensorRT on Drive PX2

I am currently developing an application on the Drive PX2 based on this repo:

https://github.com/dusty-nv/jetson-inference

I am currently facing some issues with TensorRT, which I posted here:

https://github.com/dusty-nv/jetson-inference/issues/134

In summary, there are two issues:

  1. Does the Drive PX2 not support FP16?

The call to

builder->platformHasFastFp16()

returns false.
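For context, this is roughly how the check is used before choosing the build precision. This is a hedged sketch, not the exact code from jetson-inference; it assumes TensorRT 2.x headers and libraries are available, and the Logger class is just a minimal ILogger implementation for illustration:

```cpp
#include <iostream>
#include "NvInfer.h"

// Minimal logger; TensorRT requires an nvinfer1::ILogger implementation.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);

    if (builder->platformHasFastFp16())
    {
        // Enable FP16 kernels (TensorRT 2.x API).
        builder->setHalf2Mode(true);
        std::cout << "building in FP16" << std::endl;
    }
    else
    {
        // Fall back to FP32 when the platform reports no fast FP16,
        // which is what I am seeing on the Drive PX2.
        std::cout << "no fast FP16, building in FP32" << std::endl;
    }

    builder->destroy();
    return 0;
}
```

On my PX2, the else branch is always taken.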

  2. When loading a network I trained myself, I get the following error:

conv1: ERROR - 32-bit weights not found for 32-bit model

This is similar to the following issue:

https://github.com/dusty-nv/jetson-inference/issues/83

Does anyone have any clues how to solve such issues?

BTW, it runs the downloaded pretrained model perfectly; only my own trained model has this problem, even though the model parameters are exactly the same.

Hi,

For question 1, please check the reply by dusty-nv:

For question 2, TensorRT 2.1 does not support caffemodels trained with NvCaffe-0.16.
Please remember to use NvCaffe-0.15 for training.

Thanks.