I am looking to run inference on an SSD-based model trained in Keras, using TensorRT. The expected pipeline is as follows:
Keras .h5 file -> frozen .pb -> .uff -> TensorRT UFF parser (C++)
Training is complete, and inference in Python gives the desired results, so that part is done. Next I need to do the inference with TensorRT. I found sampleUffSSD in the TensorRT samples, but as far as I can tell it only supports converting the stock MobileNet SSD v2 .pb file, whereas my model is custom-made and has some unsupported layers / layer-name changes.
My current approach is to use the graphsurgeon tool to add support for the custom layers and to implement those layers as plugins in the C++ source. However, I am getting dimensionality errors while parsing the UFF model / building the engine.
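My graphsurgeon preprocessing script is modeled on the config.py that ships with sampleUffSSD; a trimmed sketch follows (the node names, input shape, and plugin parameters are placeholders for my network, not the sample's exact values):

```python
# config.py -- graphsurgeon preprocessing passed to convert-to-uff via -p.
# Node names, input shape, and plugin parameters are placeholders.
import graphsurgeon as gs
import tensorflow as tf

# Replace the TF placeholder with an explicit Input node (CHW, 300x300 here).
Input = gs.create_node("Input", op="Placeholder",
                       dtype=tf.float32, shape=[1, 3, 300, 300])

# Map unsupported ops / namespaces onto TensorRT plugin nodes.
PriorBox = gs.create_plugin_node("GridAnchor", op="GridAnchor_TRT",
                                 numLayers=6, minSize=0.2, maxSize=0.95)
NMS = gs.create_plugin_node("NMS", op="NMS_TRT",
                            shareLocation=1, backgroundLabelId=0,
                            confidenceThreshold=1e-8, nmsThreshold=0.6,
                            topK=100, keepTopK=100, numClasses=91)

namespace_plugin_map = {
    "image_tensor": Input,                   # placeholder names from my graph
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
}

def preprocess(dynamic_graph):
    # Collapse whole namespaces into the plugin nodes defined above.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Drop any graph outputs left dangling after the collapse.
    dynamic_graph.remove(dynamic_graph.graph_outputs,
                         remove_exclusive_dependencies=False)
```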
Basically, I am stuck at building the engine from the UFF file.
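For reference, the C++ side where the failure occurs looks roughly like this (a TensorRT 5-era sketch; the input/output names and dimensions are placeholders and must match what the graphsurgeon script wrote into the UFF, which is where I suspect my dimensionality error comes from):

```cpp
// Minimal UFF parse + engine build. Names and dims are placeholders.
#include <iostream>
#include "NvInfer.h"
#include "NvInferPlugin.h"
#include "NvUffParser.h"

using namespace nvinfer1;
using namespace nvuffparser;

class Logger : public ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    // Register the stock *_TRT plugins (GridAnchor_TRT, NMS_TRT, ...).
    initLibNvInferPlugins(&gLogger, "");

    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    IUffParser* parser = createUffParser();

    // Must agree with the Input node created in config.py (CHW order).
    parser->registerInput("Input", Dims3(3, 300, 300), UffInputOrder::kNCHW);
    parser->registerOutput("NMS");

    if (!parser->parse("ssd_model.uff", *network, DataType::kFLOAT)) {
        std::cerr << "UFF parse failed" << std::endl;
        return 1;
    }

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 28);   // 256 MiB scratch space
    ICudaEngine* engine = builder->buildCudaEngine(*network);
    if (!engine) {
        std::cerr << "engine build failed" << std::endl;
        return 1;
    }

    // ... create an execution context and run inference ...
    engine->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
```

The parse/build calls above are where I hit the dimensionality errors.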
Is there any documentation or reference that explains how to implement a model with custom layers and run inference via TensorRT?