Convert trained model to TensorRT / UFF

We trained our model using Keras (tf.keras or standalone keras) and converted it to ONNX.
The question is: how do we create the TensorRT representation of the model on the DPX2?

The tensorRT_optimization tool needs the model in UFF format, but it's not documented how to obtain the UFF file.

Hi,

May I confirm that you are using TensorFlow-based Keras (tf.keras)?
If so, you can follow this tutorial to convert your frozen model (.pb) into UFF:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#mnist_uff_sample
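The overall flow described in the tutorial (freeze the tf.keras model to a .pb graph, then run it through the UFF converter) can be sketched roughly as below. This is a minimal sketch, assuming TensorFlow 1.x, the `uff` wheel already installed, and hypothetical file names (`model.h5`, `model.pb`, `model.uff`); adjust paths and output-node names to your own model.

```shell
# Step 1: freeze the tf.keras model into a single .pb graph (inline Python).
# Assumes TF 1.x APIs (tf.graph_util) and a hypothetical saved model 'model.h5'.
python - <<'EOF'
import tensorflow as tf
from tensorflow.python.framework import graph_io

tf.keras.backend.set_learning_phase(0)          # inference mode, no dropout/BN updates
model = tf.keras.models.load_model('model.h5')  # hypothetical model file

sess = tf.keras.backend.get_session()
# Fold variables into constants so the graph is self-contained
frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph_def, [out.op.name for out in model.outputs])
graph_io.write_graph(frozen, '.', 'model.pb', as_text=False)
EOF

# Step 2: convert the frozen graph to UFF.
# convert-to-uff is the CLI installed by the uff wheel from the SDK.
convert-to-uff model.pb -o model.uff
```

If convert-to-uff cannot infer the output node automatically, you can pass it explicitly with `-O <node_name>`.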

Thanks.

OK, I'll try this.
One question: on the host system, TensorRT is not installed by DriveInstall, right?
The path /usr/src/tensorrt/samples/sampleUffMNIST does not exist.

Hi,

You can get the package installed with SDK Manager:
https://developer.nvidia.com/nvidia-drive-downloads

Thanks.

Dear JHaselberger,

You can find the TensorRT samples under /usr/local/cuda/dl/samples after installing with SDK Manager.
Please check https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#working_tf for converting your Keras model to UFF. For this you need the convert-to-uff tool, which is installed from the wheel file provided in /usr/local/cuda/dl/uff (refer to step 7 in https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html#installing-tar).
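The steps above can be sketched as follows. This is a hedged sketch, not a verified recipe: the exact wheel filename depends on your SDK and Python version, so the wildcard below is an assumption you should resolve against the actual contents of /usr/local/cuda/dl/uff.

```shell
# Install the UFF converter from the wheel shipped with the SDK
# (the exact wheel name varies by release; list the directory first).
ls /usr/local/cuda/dl/uff
pip install /usr/local/cuda/dl/uff/uff-*.whl

# Sanity check: the convert-to-uff CLI should now be on PATH.
convert-to-uff --help

# The TensorRT samples installed by SDK Manager live here:
ls /usr/local/cuda/dl/samples
```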