TensorRT Optimizer Tool arguments for a semantic segmentation model

Hi,

I am trying to run my own TensorFlow model on DriveWorks (1.5). The model has been converted to the UFF format and includes only TensorRT-supported operations. As per the documentation, the first step is to use the TensorRT optimizer tool to convert UFF models. However, I am not able to work out what the arguments to the binary should be, given that my model is supposed to do semantic segmentation. For example, what should the value of
--outputBlobs=bboxes,coverage be for semantic segmentation?

Also, what does --inputBlobs=data0,data1 indicate, and how do I find these values for my own model?
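
In case it is useful to be concrete, this is a rough sketch of how I am currently trying to guess the names from my frozen graph (TF 1.x API; frozen_model.pb stands in for my actual model file). It lists the placeholders as input candidates and the nodes that no other node consumes as output candidates:

import tensorflow as tf

# Load the frozen graph (TF 1.x API); "frozen_model.pb" is a placeholder path.
graph_def = tf.GraphDef()
with open("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Every node name that some other node consumes as an input
# (strip control-dependency "^" prefixes and ":0" output suffixes).
consumed = {i.lstrip("^").split(":")[0] for n in graph_def.node for i in n.input}

for n in graph_def.node:
    if n.op == "Placeholder":       # graph inputs -> candidates for --inputBlobs
        print("input :", n.name)
    elif n.name not in consumed:    # nothing consumes it -> candidate for --outputBlobs
        print("output:", n.name)

Is that the right way to obtain the --inputBlobs / --outputBlobs values?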

Lastly, if there is a sample application that uses a custom model with DriveWorks, I could refer to that as well.

Hi,

The arguments are similar to those of the TensorRT executable.
Please check this document for more information:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html
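
For a segmentation network with a single input and a single output, the optimizer call might look like the line below. Please treat it as a sketch only: apart from --inputBlobs/--outputBlobs from your post, the binary name and the remaining flags reflect my reading of the DriveWorks TensorRT Optimization Tool page, and the blob names (input_1, prob) and dimensions are placeholders to replace with the names printed for your own graph:

tensorRT_optimization --modelType=uff --uffFile=segnet.uff --inputBlobs=input_1 --inputDims=3x480x960 --outputBlobs=prob --out=segnet.bin

--inputBlobs lists the input (placeholder) tensors the network is fed with; data0,data1 is simply an example network with two inputs. --outputBlobs lists the tensors TensorRT should expose as outputs; for semantic segmentation this is typically the final per-pixel class or probability layer, while bboxes,coverage are the outputs of a detection network.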

Thanks.