The frozen graph used in the NVIDIA TensorRT sample “sampleUffSSD” is generated from what source code? Is the source code available somewhere so I can reproduce the same frozen graph?
Is it possible to get a copy of the TensorFlow code version that generates the frozen graph for sampleUffSSD? Also, is it possible to access the source code of nvinfer and the nvinfer plugins (not just the header files)? I really hope this could speed up my adoption of TensorRT. Thank you!
I keep getting an error like this:
I0111 09:28:02.778165 5198 resnet_trt_detector.cc:451] Done with parsing uff
I0111 09:28:02.778441 5198 resnet_trt_detector.cc:457] path_to_uff_file: /home/lyft/ssd_tmp/ssd/model.uff
I0111 09:28:03.140018 5198 common.h:106] Begin parsing model…
ERROR: UFFParser: Parser error: FeatureExtractor/resnet_v1_34/resnet_v1_34_fpn/top_down/nearest_neighbor_upsampling/Reshape: Order size is not matching the number dimensions of TensorRT
F0111 09:28:04.002177 5198 common.h:112] Failed to parse the network
*** Check failure stack trace: ***
*** Aborted at 1547227683 (unix time) try “date -d @1547227683” if you are using GNU date ***
@ 0x7f4163069e39 avsoftware::common_internal::failFunc()
@ 0x7f41449105cd google::LogMessage::Fail()
@ 0x7f4144912433 google::LogMessage::SendToLog()
@ 0x7f414491015b google::LogMessage::Flush()
@ 0x7f4144912e1e google::LogMessageFatal::~LogMessageFatal()
@ 0x7f4144b57d25 avsoftware::perception::tensorrt::loadUffAndCreateEngine()
@ 0x7f4144b5966c avsoftware::perception::tensorrt::run()
@ 0x562a0934f49f main
@ 0x7f414137f830 __libc_start_main
@ 0x562a0934f649 _start
Is there a way to know what dimensions and order TensorRT is expecting? Parsing the UFF file itself is not a problem. The problem happens while parsing the network, when “loadUffAndCreateEngine” is called…
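For context on why this particular Reshape fails: the TensorFlow Object Detection API's `nearest_neighbor_upsampling` op is typically implemented with a reshape to a rank-6 tensor followed by a tile and a second reshape, and the UFF parser only handles lower-rank tensors, hence the "Order size is not matching the number dimensions" error. A minimal NumPy sketch of the two equivalent formulations (the input shape and scale factor here are made up for illustration, and this is an assumption about your model, not a confirmed diagnosis):

```python
import numpy as np

# Hypothetical NHWC feature map: batch=1, h=2, w=2, c=3
x = np.arange(1 * 2 * 2 * 3).reshape(1, 2, 2, 3)
scale = 2

# Reshape/tile formulation (similar in spirit to the Object Detection
# API's nearest_neighbor_upsampling): the intermediate tensor is rank 6,
# which is what the UFF parser chokes on.
up_rank6 = (
    np.tile(x.reshape(1, 2, 1, 2, 1, 3), (1, 1, scale, 1, scale, 1))
    .reshape(1, 2 * scale, 2 * scale, 3)
)

# Reshape-free formulation: repeat along the spatial axes only, so no
# tensor ever exceeds rank 4.
up_repeat = np.repeat(np.repeat(x, scale, axis=1), scale, axis=2)

# Both produce the same nearest-neighbor-upsampled output.
assert np.array_equal(up_rank6, up_repeat)
print(up_repeat.shape)
```

If the graph can be re-exported with an upsampling implementation that avoids the rank-6 intermediate (or if that node is mapped to a custom plugin via graphsurgeon before conversion), the UFF parser no longer sees the offending Reshape. Which route applies to your resnet_v1_34 FPN model is something you would need to verify against the exporting code.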