How to pass weights to custom TRT network layers for a Keras model (TensorFlow backend)

I have been experimenting with building a custom TRT network on a Jetson Nano: I first train a Keras (TensorFlow backend) model on the MNIST dataset, then extract the weights layer by layer and pass them to a manually constructed TRT network via the TRT Python API. However, I always get an error for the weights array that I pass.
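For reference, here is a minimal sketch (with illustrative shapes for a single Dense layer) of the weight hand-off I am attempting. One pitfall I suspect: Keras stores a Dense kernel as (inputs, outputs), while TensorRT's `add_fully_connected` expects contiguous row-major (outputs, inputs) float32 arrays, so a transpose is needed before wrapping the array:

```python
import numpy as np

# Hypothetical weights from one Keras Dense layer (e.g. layer.get_weights()):
# the kernel has shape (in, out).
rng = np.random.default_rng(0)
kernel = rng.standard_normal((784, 10)).astype(np.float32)  # (in, out)
bias = rng.standard_normal(10).astype(np.float32)

# Keras computes: y = x @ kernel + bias
x = rng.standard_normal((1, 784)).astype(np.float32)
y_keras = x @ kernel + bias

# TensorRT's IFullyConnectedLayer computes y = W @ x + b with a row-major
# (out, in) weight matrix, so pass the transposed, C-contiguous kernel:
trt_kernel = np.ascontiguousarray(kernel.T)  # (out, in)
y_trt = trt_kernel @ x[0] + bias

assert np.allclose(y_keras[0], y_trt, atol=1e-5)

# In the actual network build this would look like (not run here):
#   fc = network.add_fully_connected(input_tensor, 10,
#                                    trt.Weights(trt_kernel),
#                                    trt.Weights(bias))
```

The numpy check confirms that the transposed kernel reproduces the Keras output, which is what the TRT layer should compute if the weights are handed over in the right layout and dtype.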

In the TRT Developer Guide there is only one example of building a custom engine this way (network_api_pytorch_mnist), and it is for PyTorch.

Could you please provide an elaborated example of this process for a Keras model (TensorFlow backend)?

It would be quite helpful, as I have many other trained models that I need to port to the Jetson Nano.

Thanks in advance!

If required, I can share my source code and the output of the entire process.


It’s recommended to use our UFF parser for the conversion.

TensorFlow is an operation-based framework, which makes it complicated to extract the weights for each layer.
For this reason, we developed the UFF parser, which can parse a TensorFlow frozen model into the UFF format and then convert it into a TensorRT engine.
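A rough sketch of that flow is below, assuming the `uff` package and TensorRT Python bindings that ship with JetPack; the frozen-graph path and input/output node names are hypothetical and must match your own model. The imports are kept inside the function so the sketch can be read without TensorRT installed:

```python
def build_engine_from_frozen_graph(frozen_pb="model.pb",
                                   input_name="input_1",
                                   output_name="dense_2/Softmax"):
    """Convert a TensorFlow frozen graph to a TensorRT engine via UFF."""
    import uff
    import tensorrt as trt

    # 1. TensorFlow frozen graph -> UFF serialized model (in memory).
    uff_model = uff.from_tensorflow_frozen_model(frozen_pb, [output_name])

    # 2. UFF -> TensorRT network via the UFF parser.
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network()
    parser = trt.UffParser()
    parser.register_input(input_name, (1, 28, 28))  # CHW shape for MNIST
    parser.register_output(output_name)
    parser.parse_buffer(uff_model, network)

    # 3. Build the engine.
    builder.max_workspace_size = 1 << 28
    return builder.build_cuda_engine(network)
```

With this route the parser handles the per-layer weight layout for you, so there is no need to transpose or hand-copy Keras weights into the network definition.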