Linux version : Ubuntu 16.04 LTS
GPU type : GeForce GTX 1080
NVIDIA driver version : 410.72
CUDA version : 9.0
CUDNN version : 7.0.5
Python version [if using python] : 3.5.2
TensorFlow version : tensorflow-gpu 1.9
TensorRT version : 5.0.2.6
Actual problem:
I tried the example script in the samples/python/uff_ssd folder. The script downloads the SSD Inception model, creates a UFF parser, builds a TensorRT engine, and runs inference on an image.
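For reference, the core of that pipeline (UFF parse, then engine build) can be sketched with the TensorRT 5 Python API like this. The helper below is my own simplification, not the sample's exact code, and the paths/names are placeholders:

```python
def build_engine(uff_path, input_name="Input", input_shape=(3, 300, 300)):
    """Parse a UFF file and build a TensorRT engine (TensorRT 5 API sketch)."""
    import tensorrt as trt  # imported lazily so the sketch stays self-contained

    logger = trt.Logger(trt.Logger.WARNING)
    with trt.Builder(logger) as builder, \
         builder.create_network() as network, \
         trt.UffParser() as parser:
        builder.max_batch_size = 1
        builder.max_workspace_size = 1 << 30  # 1 GiB of build scratch space
        parser.register_input(input_name, input_shape)  # CHW, matching the deduced input node
        parser.register_output("NMS")
        parser.parse(uff_path, network)
        # build_cuda_engine returns None if parsing or building failed
        return builder.build_cuda_engine(network)
```

The sample then serializes the returned engine to disk and runs inference on it.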
These are the results of that:
Preparing pretrained model
Downloading /home/teai/TensorRT/TensorRT-5.0.2.6/targets/x86_64-linux-gnu/samples/python/uff_ssd/utils/../workspace/models/ssd_inception_v2_coco_2017_11_17.tar.gz
Download progress [==================================================] 100%
Download complete
Unpacking /home/teai/TensorRT/TensorRT-5.0.2.6/targets/x86_64-linux-gnu/samples/python/uff_ssd/utils/../workspace/models/ssd_inception_v2_coco_2017_11_17.tar.gz
Extracting complete
Removing /home/teai/TensorRT/TensorRT-5.0.2.6/targets/x86_64-linux-gnu/samples/python/uff_ssd/utils/../workspace/models/ssd_inception_v2_coco_2017_11_17.tar.gz
Model ready
WARNING: To create TensorRT plugin nodes, please use the `create_plugin_node` function instead.
UFF Version 0.5.5
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
attr {
key: "dtype"
value {
type: DT_FLOAT
}
}
attr {
key: "shape"
value {
shape {
dim {
size: 1
}
dim {
size: 3
}
dim {
size: 300
}
dim {
size: 300
}
}
}
}
]
=========================================
Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_conf as custom op: FlattenConcat_TRT
Warning: No conversion function registered for layer: GridAnchor_TRT yet.
Converting GridAnchor as custom op: GridAnchor_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_loc as custom op: FlattenConcat_TRT
No. nodes: 563
UFF Output written to /home/teai/TensorRT/TensorRT-5.0.2.6/targets/x86_64-linux-gnu/samples/python/uff_ssd/utils/../workspace/models/ssd_inception_v2_coco_2017_11_17/frozen_inference_graph.uff
UFF Text Output written to /home/teai/TensorRT/TensorRT-5.0.2.6/targets/x86_64-linux-gnu/samples/python/uff_ssd/utils/../workspace/models/ssd_inception_v2_coco_2017_11_17/frozen_inference_graph.pbtxt
TensorRT inference engine settings:
* Inference precision - DataType.FLOAT
* Max batch size - 1
Building TensorRT engine. This may take few minutes.
TensorRT inference time: 4 ms
Detected car with confidence 97%
Total time taken for one image: 54 ms
Now, instead of downloading a pre-trained model, I trained my own object detection model on a custom dataset using SSD Inception as the architecture. I commented out the download step in the script and pointed it at my trained .pb file instead. But I am getting the following errors:
WARNING: To create TensorRT plugin nodes, please use the `create_plugin_node` function instead.
UFF Version 0.5.5
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
attr {
key: "dtype"
value {
type: DT_FLOAT
}
}
attr {
key: "shape"
value {
shape {
dim {
size: 1
}
dim {
size: 3
}
dim {
size: 300
}
dim {
size: 300
}
}
}
}
]
=========================================
Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_conf as custom op: FlattenConcat_TRT
Warning: No conversion function registered for layer: GridAnchor_TRT yet.
Converting GridAnchor as custom op: GridAnchor_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_loc as custom op: FlattenConcat_TRT
No. nodes: 781
UFF Output written to /home/teai/TensorRT/TensorRT-5.0.2.6/targets/x86_64-linux-gnu/samples/python/uff_ssd/utils/../workspace/models/ssd_inception_v2_coco_2017_11_17/frozen_inference_graph.uff
UFF Text Output written to /home/teai/TensorRT/TensorRT-5.0.2.6/targets/x86_64-linux-gnu/samples/python/uff_ssd/utils/../workspace/models/ssd_inception_v2_coco_2017_11_17/frozen_inference_graph.pbtxt
TensorRT inference engine settings:
* Inference precision - DataType.FLOAT
* Max batch size - 1
[TensorRT] ERROR: Parameter check failed at: ../builder/Layers.h::setAxis::315, condition: axis>=0
[TensorRT] ERROR: Concatenate/concat: all concat input tensors must have the same dimensions except on the concatenation axis
[TensorRT] ERROR: UFFParser: Parser error: BoxPredictor_0/ClassPredictor/BiasAdd: The input to the Scale Layer is required to have a minimum of 3 dimensions.
Building TensorRT engine. This may take few minutes.
[TensorRT] ERROR: Network must have at least one output
Traceback (most recent call last):
File "detect_objects.py", line 193, in <module>
main()
File "detect_objects.py", line 166, in main
batch_size=parsed['max_batch_size'])
File "/home/teai/TensorRT/TensorRT-5.0.2.6/targets/x86_64-linux-gnu/samples/python/uff_ssd/utils/inference.py", line 69, in __init__
engine_utils.save_engine(self.trt_engine, trt_engine_path)
File "/home/teai/TensorRT/TensorRT-5.0.2.6/targets/x86_64-linux-gnu/samples/python/uff_ssd/utils/engine.py", line 83, in save_engine
buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'
I have been stuck on this issue for a long time. Could anyone help me resolve it?
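In case it helps, the plugin-node mapping the sample applies before UFF conversion looks roughly like this. This is a sketch with the stock COCO values (not my exact config file), and I assume some of these parameters (e.g. numClasses) would need different values for a custom-trained model:

```python
def ssd_plugin_nodes():
    """Sketch of the uff_ssd sample's plugin-node config (stock COCO values)."""
    import graphsurgeon as gs  # ships with the TensorRT 5 UFF converter

    channels, height, width = 3, 300, 300
    Input = gs.create_plugin_node(
        name="Input", op="Placeholder",
        shape=[1, channels, height, width])
    PriorBox = gs.create_plugin_node(
        name="GridAnchor", op="GridAnchor_TRT",
        numLayers=6, minSize=0.2, maxSize=0.95,
        aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
        variance=[0.1, 0.1, 0.2, 0.2])
    NMS = gs.create_plugin_node(
        name="NMS", op="NMS_TRT",
        shareLocation=1, varianceEncodedInTarget=0,
        backgroundLabelId=0, confidenceThreshold=1e-8,
        nmsThreshold=0.6, topK=100, keepTopK=100,
        numClasses=91,  # COCO: 90 classes + background; a custom model has its own count
        inputOrder=[0, 2, 1], confSigmoid=1, isNormalized=1)
    concat_box_loc = gs.create_plugin_node(
        name="concat_box_loc", op="FlattenConcat_TRT")
    concat_box_conf = gs.create_plugin_node(
        name="concat_box_conf", op="FlattenConcat_TRT")
    return {"Input": Input, "GridAnchor": PriorBox, "NMS": NMS,
            "concat_box_loc": concat_box_loc, "concat_box_conf": concat_box_conf}
```

These nodes correspond to the NMS_TRT, GridAnchor_TRT, and FlattenConcat_TRT custom ops shown in the conversion log above.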