Hey! I am currently taking NVIDIA's DLI course at my university, and we must deploy a deep learning model on the edge. I also have to containerize my program so that it is portable to other Jetsons, or at least to another Jetson Nano.

I have a DeepLabV3+ model saved as an .h5 file. From my research I've concluded that I need to convert the .h5 model to a TensorRT model; if that's not actually required, please let me know, as it would make my life much easier. I've seen people recommend one of NVIDIA's special containers that ships with TensorFlow/TensorRT, but I'm confused about how to build a container that has TensorRT, OpenCV, and tensorflow.keras inside it.

Links to instructions for correctly building TensorRT and OpenCV on the Nano would also help; I keep getting weird errors with OpenCV when following the instructions at github.com/JetsonHacksNano/buildOpenCV. Thanks! Any help is appreciated.
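For the container part of the question: rather than building TensorRT and OpenCV from source inside your own image, you can start from one of NVIDIA's L4T base images, which already bundle them. A minimal Dockerfile sketch, assuming the `l4t-ml` image (the tag must match the JetPack/L4T release on your Nano; `r32.7.1-py3` here is just an example, and `infer.py` is a hypothetical inference script):

```dockerfile
# Sketch only: pick the l4t-ml tag that matches your L4T version
# (check with: cat /etc/nv_tegra_release). l4t-ml already includes
# TensorFlow, TensorRT Python bindings, and OpenCV, so nothing needs
# to be built from source.
FROM nvcr.io/nvidia/l4t-ml:r32.7.1-py3

WORKDIR /app
COPY model.h5 infer.py ./

# Extra pure-Python dependencies only; avoid pip OpenCV wheels on
# aarch64, since the image's cv2 is already installed.
RUN pip3 install tf2onnx

CMD ["python3", "infer.py"]
```

Run the container with `docker run --runtime nvidia ...` so the GPU is exposed inside it.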
I am moving this topic to the TensorRT category for better visibility.
You can convert the model along the path Keras (.h5) → ONNX → TensorRT.
Please refer to the following links:
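As a sketch of that path, assuming `tf2onnx` is installed (`pip3 install tf2onnx`) and your DeepLabV3+ .h5 file loads as a standard Keras model (the filenames here are placeholders):

```shell
# 1) Keras (.h5) -> ONNX
python3 -m tf2onnx.convert --keras deeplabv3plus.h5 \
    --output deeplabv3plus.onnx --opset 13

# 2) ONNX -> TensorRT engine, using trtexec, which ships with JetPack
#    (on the Nano it lives under /usr/src/tensorrt/bin). --fp16 reduces
#    memory use and is usually a large speedup on the Nano.
/usr/src/tensorrt/bin/trtexec --onnx=deeplabv3plus.onnx \
    --saveEngine=deeplabv3plus.engine --fp16
```

Note that a serialized TensorRT engine is specific to the GPU and TensorRT version it was built with, so for portability across Jetsons you would ship the ONNX file in the container and build the engine on each target device.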