There is an easy way to inspect the TensorRT engine.
$ tao-converter -k tlt_encode -p input_1,1x3x544x960,1x3x544x960,1x3x544x960 -t fp32 peoplesemsegnet.engine peoplesemsegnet.etlt
$ python -m pip install polygraphy --index-url https://pypi.ngc.nvidia.com
$ polygraphy inspect model peoplesemsegnet.engine
[I] ==== TensorRT Engine ====
    Name: Unnamed Network 0 | Explicit Batch Engine

    ---- 1 Engine Input(s) ----
    {input_1 [dtype=float32, shape=(1, 3, 544, 960)]}

    ---- 1 Engine Output(s) ----
    {softmax_1 [dtype=float32, shape=(1, 544, 960, 2)]}

    ---- Memory ----
    Device Memory: 398991360 bytes

    ---- 1 Profile(s) (2 Binding(s) Each) ----
    - Profile: 0
        Binding Index: 0 (Input) [Name: input_1] | Shapes: min=(1, 3, 544, 960), opt=(1, 3, 544, 960), max=(1, 3, 544, 960)
        Binding Index: 1 (Output) [Name: softmax_1] | Shape: (1, 544, 960, 2)

    ---- 45 Layer(s) ----
As shown in https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/configs/peopleSemSegNet_tao/pgie_peopleSemSegNet_tao_config.txt#L29, the input tensor is RGB, and it is in CHW order.
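As a sketch of what that layout implies for preprocessing (normalization scale and offsets are omitted here; the real values come from the DeepStream config, e.g. net-scale-factor, not from this example), an RGB HWC image can be arranged into the engine's 1x3x544x960 input like this:

```python
import numpy as np

def preprocess(rgb_hwc: np.ndarray) -> np.ndarray:
    """Arrange an RGB HWC uint8 image into the engine's NCHW float32 layout.

    Normalization is intentionally left out -- check the model config for
    the actual scale/offset values before feeding the engine.
    """
    assert rgb_hwc.shape == (544, 960, 3)
    chw = np.transpose(rgb_hwc, (2, 0, 1)).astype(np.float32)  # HWC -> CHW
    return chw[np.newaxis, ...]                                # add batch dim

# Dummy image at the engine's expected resolution
img = np.zeros((544, 960, 3), dtype=np.uint8)
batch = preprocess(img)
print(batch.shape)  # (1, 3, 544, 960)
```

The transpose is the key step: most image loaders return HWC, while this engine binding expects CHW.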
The output tensor of peoplesemsegnet is the category label (person or background) for every pixel in the input image, i.e. a semantic segmentation mask of people. The output order is HWC.
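Since the softmax output is HWC with two channels, a per-pixel label map can be recovered with an argmax over the last axis. A minimal sketch (which channel index means "person" is an assumption to verify against the model card):

```python
import numpy as np

def to_mask(softmax_out: np.ndarray) -> np.ndarray:
    """Collapse the (1, 544, 960, 2) softmax output to a (544, 960) label map.

    Each pixel gets the index of its highest-probability class; confirm
    against the model card which index corresponds to "person".
    """
    assert softmax_out.shape == (1, 544, 960, 2)
    return np.argmax(softmax_out[0], axis=-1).astype(np.uint8)

# Dummy engine output where channel 1 always wins
out = np.zeros((1, 544, 960, 2), dtype=np.float32)
out[..., 1] = 1.0
mask = to_mask(out)
print(mask.shape)  # (544, 960)
```

Because the channel axis is last (HWC), the argmax runs over axis -1; for a CHW output it would run over the channel axis instead.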