sampleUffSSD

The frozen graph used in the NVIDIA TensorRT sample “sampleUffSSD” is generated from what source code? Is the source code available somewhere so I can reproduce the same frozen graph?

Why is there a “variance” parameter in GridAnchor_TRT? Does that mean GridAnchor_TRT is a customized implementation of the Multiple Grid Anchor Generator (https://github.com/tensorflow/models/blob/master/research/object_detection/anchor_generators/multiple_grid_anchor_generator.py)?

Hello,

The frozen graph used in the NVIDIA TensorRT sample is from http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2017_11_17.tar.gz , which is distributed by Google, and the source is here: https://github.com/tensorflow/models/tree/master/research/object_detection

The code in https://github.com/tensorflow/models/tree/master/research/object_detection has different versions… is the latest multiple grid anchor generator version used in the sample?

Also, “variance” does not appear in that source code implementation. Is it something added in the TensorRT plugin implementations of “GridAnchor_TRT” and “NMS_TRT”?
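For background (this is about the SSD convention in general, not a claim about the plugin internals): in SSD-style detectors, the box regression targets are divided by fixed scaling constants called “variances”, commonly (0.1, 0.1, 0.2, 0.2) for (cx, cy, w, h). A minimal NumPy sketch of that encode/decode convention:

```python
import numpy as np

# SSD-style box coding: offsets between a ground-truth box and its matched
# anchor are divided by fixed "variances" before regression. Boxes are
# (cx, cy, w, h). These variance values are the common defaults, not
# necessarily what GridAnchor_TRT uses.
def encode(gt, anchor, variances=(0.1, 0.1, 0.2, 0.2)):
    gcx, gcy, gw, gh = gt
    acx, acy, aw, ah = anchor
    return np.array([
        (gcx - acx) / aw / variances[0],
        (gcy - acy) / ah / variances[1],
        np.log(gw / aw) / variances[2],
        np.log(gh / ah) / variances[3],
    ])

def decode(offsets, anchor, variances=(0.1, 0.1, 0.2, 0.2)):
    tx, ty, tw, th = offsets
    acx, acy, aw, ah = anchor
    return np.array([
        acx + tx * variances[0] * aw,
        acy + ty * variances[1] * ah,
        aw * np.exp(tw * variances[2]),
        ah * np.exp(th * variances[3]),
    ])
```

Decoding an encoded box recovers the original, so whichever side (plugin or training code) applies the variances, both sides must agree on the values.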

Also, what is the recommended way to debug TensorRT? Since only header files are provided in the TensorRT library… it is hard to see where things went wrong…

Not sure which version was used to generate the frozen graph… It’s generated by TensorFlow; we don’t have visibility into that.

Regarding debugging TRT, the logger is probably the best place to start: https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/infer/Core/Logger.html
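With TensorRT installed, you would pass a `trt.Logger` constructed at a verbose severity to the builder/parser, or subclass `trt.ILogger` with a `log(self, severity, msg)` method to capture everything. As a stand-in sketch of the severity-filtering pattern (pure Python, since TensorRT itself may not be installed here):

```python
# Stand-in sketch of the TensorRT logger pattern. With TensorRT available you
# would subclass trt.ILogger and implement log(self, severity, msg) the same
# way. (Note: TensorRT's own Severity enum orders the other way, with
# INTERNAL_ERROR as the lowest value; this sketch just shows the filtering idea.)
VERBOSE, INFO, WARNING, ERROR, INTERNAL_ERROR = range(5)

class FilteringLogger:
    def __init__(self, min_severity=WARNING):
        self.min_severity = min_severity
        self.records = []  # keep messages so failures can be inspected later

    def log(self, severity, msg):
        # Keep everything at or above the configured severity.
        if severity >= self.min_severity:
            self.records.append((severity, msg))
            print(msg)

# For debugging a parse failure, use the most verbose setting so the parser's
# per-layer messages are not filtered out.
logger = FilteringLogger(min_severity=VERBOSE)
logger.log(INFO, "Begin parsing model...")
logger.log(ERROR, "UFFParser: Parser error: ...")
```

The same idea applies in C++: implement `nvinfer1::ILogger::log` and pass your logger to `createInferBuilder` and the UFF parser, with the threshold set to the most verbose level while debugging.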

Is it possible to get a copy of the TensorFlow code version that generates the frozen graph for sampleUffSSD? Also, is it possible to access the source code of nvinfer and the nvinfer plugins (not just the header files)? I really hope to speed up adopting TensorRT. Thank you!

I keep getting errors like this:
I0111 09:28:02.778165 5198 resnet_trt_detector.cc:451] Done with parsing uff
I0111 09:28:02.778441 5198 resnet_trt_detector.cc:457] path_to_uff_file: /home/lyft/ssd_tmp/ssd/model.uff
I0111 09:28:03.140018 5198 common.h:106] Begin parsing model…
ERROR: UFFParser: Parser error: FeatureExtractor/resnet_v1_34/resnet_v1_34_fpn/top_down/nearest_neighbor_upsampling/Reshape: Order size is not matching the number dimensions of TensorRT
F0111 09:28:04.002177 5198 common.h:112] Failed to parse the network
*** Check failure stack trace: ***
*** Aborted at 1547227683 (unix time) try “date -d @1547227683” if you are using GNU date ***
@ 0x7f4163069e39 avsoftware::common_internal::failFunc()
@ 0x7f41449105cd google::LogMessage::Fail()
@ 0x7f4144912433 google::LogMessage::SendToLog()
@ 0x7f414491015b google::LogMessage::Flush()
@ 0x7f4144912e1e google::LogMessageFatal::~LogMessageFatal()
@ 0x7f4144b57d25 avsoftware::perception::tensorrt::loadUffAndCreateEngine()
@ 0x7f4144b5966c avsoftware::perception::tensorrt::run()
@ 0x562a0934f49f main
@ 0x7f414137f830 __libc_start_main
@ 0x562a0934f649 _start

Is there a way to know what dimension and order TensorRT is expecting? Generating the uff is not a problem; the failure happens while parsing the network, when “loadUffAndCreateEngine” is called…
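For what it’s worth, the op named in the error is the reshape trick the TF object detection code uses to implement nearest-neighbor upsampling: it goes through a rank-6 intermediate tensor, which is more dimensions than the UFF parser’s Reshape supports. A NumPy sketch of that trick (illustrative only; the actual graph uses TF ops):

```python
import numpy as np

def nearest_neighbor_upsample(x, scale=2):
    # The TF object-detection FPN code implements nearest-neighbor upsampling
    # as reshape -> broadcast-multiply by ones -> reshape. The intermediate
    # tensor is rank 6, which is what trips up the UFF parser's Reshape.
    n, h, w, c = x.shape
    x = np.reshape(x, (n, h, 1, w, 1, c))                  # rank-6 intermediate
    x = x * np.ones((1, 1, scale, 1, scale, 1), x.dtype)   # replicate pixels
    return np.reshape(x, (n, h * scale, w * scale, c))

a = np.arange(4, dtype=np.float32).reshape(1, 2, 2, 1)
up = nearest_neighbor_upsample(a)
print(up.shape)  # (1, 4, 4, 1)
```

If that is the cause here, a common workaround is to replace the subgraph with an op or plugin TensorRT understands (e.g. via graphsurgeon, the same mechanism sampleUffSSD’s config.py uses for other unsupported ops), rather than trying to make the rank-6 Reshape parse.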

@tianyun06,
Did you find a way to port your TensorFlow object detection model to C++?