TensorRT: stack: stack only support for Constants nodes as input for now

ENV:
tensorflow=1.14.0
tensorRT=6.0.1.5
cuda=10.0
cudnn=7.5
google object detection api: latest

error info:

WARNING: To create TensorRT plugin nodes, please use the create_plugin_node function instead.
UFF Version 0.6.5
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: 1
      }
      dim {
        size: 3
      }
      dim {
        size: 640
      }
      dim {
        size: 640
      }
    }
  }
}
]

Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_conf as custom op: FlattenConcat_TRT
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:179: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.

Warning: No conversion function registered for layer: GridAnchor_TRT yet.
Converting GridAnchor as custom op: GridAnchor_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_loc as custom op: FlattenConcat_TRT
DEBUG [/usr/local/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:96] Marking ['NMS'] as outputs
No. nodes: 396
UFF Output written to /media/york/F/GitHub/tensorflow/train_model/ssd_mobilenet_v1_fpn_shared_trafficlight/export_train_bdd100k_baidu_truck_zl004_class4_wh640640_depth1.0_level35_num1_focal_trainval299287_step320000_ssd_anchor_for_tensorrt_640640_548/frozen_inference_graph.uff
UFF Text Output written to /media/york/F/GitHub/tensorflow/train_model/ssd_mobilenet_v1_fpn_shared_trafficlight/export_train_bdd100k_baidu_truck_zl004_class4_wh640640_depth1.0_level35_num1_focal_trainval299287_step320000_ssd_anchor_for_tensorrt_640640_548/frozen_inference_graph.pbtxt
TensorRT inference engine settings:

  • Inference precision - DataType.FLOAT
  • Max batch size - 1

Building TensorRT engine. This may take few minutes.
[TensorRT] ERROR: UffParser: Parser error: FeatureExtractor/MobilenetV1/fpn/top_down/nearest_neighbor_upsampling/stack: stack only support for Constants nodes as input for now
[TensorRT] ERROR: Network must have at least one output
Traceback (most recent call last):
  File "/media/york/F/GitHub/tensorflow/models/research/uff_ssd-TensorRT-6.0.1.5/detect_objects_trafficlight.py", line 255, in <module>
    main()
  File "/media/york/F/GitHub/tensorflow/models/research/uff_ssd-TensorRT-6.0.1.5/detect_objects_trafficlight.py", line 229, in main
    batch_size=args.max_batch_size)
  File "/media/york/F/GitHub/tensorflow/models/research/uff_ssd-TensorRT-6.0.1.5/utils/inference.py", line 117, in __init__
    engine_utils.save_engine(self.trt_engine, trt_engine_path)
  File "/media/york/F/GitHub/tensorflow/models/research/uff_ssd-TensorRT-6.0.1.5/utils/engine.py", line 132, in save_engine
    buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'

Process finished with exit code 1

Hi,

The Stack operation currently only supports a List[Constant] as input, and its output is itself a Constant, so it will not work with layers that expect a Tensor input. That is why the UFF parser rejects the stack node inside nearest_neighbor_upsampling; with nothing parsed, TensorRT reports "Network must have at least one output", the engine build returns None, and save_engine() then fails with AttributeError: 'NoneType' object has no attribute 'serialize'.

Please refer to the link below for the UFF parser Stack operation:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/uff/Operators.html#stack
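
If you want to keep the UFF path, one possible workaround is to re-export the frozen graph with an upsampling implementation that avoids tf.stack. The sketch below is an assumption: it presumes the model comes from the TensorFlow Object Detection API (the failing node name points at its nearest_neighbor_upsampling op in object_detection/utils/ops.py) and replaces the stack-based version with an equivalent reshape-and-broadcast form:

    import tensorflow as tf

    def nearest_neighbor_upsampling(input_tensor, scale):
        # Nearest-neighbor upsampling of a [batch, height, width, channels]
        # tensor by an integer scale factor, using only reshape and a
        # broadcast multiply so no tf.stack node appears in the graph.
        with tf.name_scope('nearest_neighbor_upsampling'):
            batch, height, width, channels = input_tensor.get_shape().as_list()
            batch = -1 if batch is None else batch
            # Insert singleton axes and broadcast against ones to replicate
            # every pixel scale x scale times.
            expanded = tf.reshape(input_tensor,
                                  [batch, height, 1, width, 1, channels])
            replicated = expanded * tf.ones([1, 1, scale, 1, scale, 1],
                                            dtype=input_tensor.dtype)
            return tf.reshape(replicated,
                              [batch, height * scale, width * scale, channels])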

Another alternative is to convert your model to ONNX with tf2onnx and then convert to TensorRT using the ONNX parser. Any layers that are not supported need to be replaced by custom plugins.
https://github.com/onnx/tensorflow-onnx
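For example, a hypothetical conversion command (the tensor names below are the usual Object Detection API inputs/outputs, not the NMS_TRT plugin node, which only exists in the graphsurgeon-preprocessed UFF graph; adjust them to your model):

    python -m tf2onnx.convert --graphdef frozen_inference_graph.pb \
        --inputs image_tensor:0 \
        --outputs detection_boxes:0,detection_scores:0,detection_classes:0,num_detections:0 \
        --output frozen_inference_graph.onnx --opset 11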
ONNX also has an operator similar to numpy.stack, ConcatFromSequence:
https://github.com/onnx/onnx/blob/master/docs/Operators.md#ConcatFromSequence
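
Once you have an ONNX file, the engine is built with the ONNX parser instead of the UFF parser. A minimal sketch against the TensorRT 6 Python API (file names are placeholders):

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
    # ONNX models require an explicit-batch network in TensorRT 6.
    EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

    def build_engine_from_onnx(onnx_path):
        with trt.Builder(TRT_LOGGER) as builder, \
             builder.create_network(EXPLICIT_BATCH) as network, \
             trt.OnnxParser(network, TRT_LOGGER) as parser:
            builder.max_workspace_size = 1 << 30  # 1 GiB
            with open(onnx_path, 'rb') as f:
                if not parser.parse(f.read()):
                    # Surface the parser errors instead of silently returning
                    # None, which is what led to the AttributeError above.
                    for i in range(parser.num_errors):
                        print(parser.get_error(i))
                    return None
            return builder.build_cuda_engine(network)

    engine = build_engine_from_onnx('frozen_inference_graph.onnx')
    if engine is not None:
        with open('frozen_inference_graph.trt', 'wb') as f:
            f.write(engine.serialize())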

Thanks