Hello, I want to train a model (SSD MobileNet V2) on my own data and run it on a Jetson Xavier.
First, we generated frozen_inference_graph.pb from model.ckpt-0000.
We want to use this weight file for object detection on the Xavier with the detectNet C++ code.
To do this, the .pb file must be converted to a .uff file, but the following error occurred:
UFFParser: Validator error: FeatureExtractor/MobilenetV2/expanded_conv_15/output: Unsupported operation Identity
failed to parse UFF model 'networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff'
device GPU, failed to load networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
detectnet -- failed to initialize.
detectnet-camera: failed to load detectNet model
Currently, the UFF conversion is performed on a different PC than the Xavier, and the TensorRT version on that PC is 6.0.1.5.
The Xavier's TensorRT version is 5.0.3.
I have searched for this problem a number of times.
I think the Xavier's TensorRT version (5.0.3) does not support the Identity op, so we should upgrade TensorRT. Is this right?
I hope you are staying safe from the coronavirus, and I will wait for your reply.
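For reference, the conversion on the PC was run along these lines. This is only a sketch of our command: the file names are from our setup, and the `NMS` output node and `config.py` preprocessor come from the standard SSD conversion configs, so adjust them to match yours.

```shell
# Run on the host PC. The UFF converter / TensorRT version used here
# should match the TensorRT version on the Xavier, or the parser on
# the Jetson may reject the resulting file.
# File names below are from our setup (adjust as needed).
convert-to-uff frozen_inference_graph.pb \
    -o ssd_mobilenet_v2_coco.uff \
    -O NMS \
    -p config.py
```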
Hi, I applied your comment to config.py, converted the .pb to .uff, and retried detectNet on the Jetson Xavier. We now have two .uff files.
One comes from the frozen_inference_graph.pb provided with the pretrained model.
The other comes from the frozen_inference_graph.pb produced by training on our own data.
But the following error occurred (pretrained frozen_inference_graph.pb → converted .uff file):
[TRT] Parameter check failed at: …/builder/Layers.h::setAxis::315, condition: axis>=0
[TRT] retrieved Input tensor "Input": 3x300x300
[TRT] device GPU, configuring CUDA engine
[TRT] device GPU, building FP16: ON
[TRT] device GPU, building INT8: OFF
[TRT] device GPU, building CUDA engine (this may take a few minutes the first time a network is loaded)
[TRT] Concatenate/concat: all concat input tensors must have the same dimensions except on the concatenation axis
[TRT] Could not compute dimensions for Concatenate/concat, because the network is not valid
[TRT] device GPU, failed to build CUDA engine
[TRT] device GPU, failed to load networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
detectNet -- failed to initialize.
detectnet-camera: failed to load detectNet model
Also, the following error occurred (our own frozen_inference_graph.pb → converted .uff file):
[TRT] UFFParser: Validator error: Cast: Unsupported operation _Cast
[TRT] failed to parse UFF model 'networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff'
[TRT] device GPU, failed to load networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
detectNet -- failed to initialize.
detectnet-camera: failed to load detectNet model
Hi, I applied your comment to config.py, and we converted the .pb to .uff.
But the following error occurred (pretrained frozen_inference_graph.pb → converted .uff file):
[TRT] device GPU, building CUDA engine (this may take a few minutes the first time a network is loaded)
detectnet-camera: nmsPlugin.cpp:135: virtual void nvinfer1::plugin::DetectionOutput::configureWithFormat(const nvinfer1::Dims*, int, const nvinfer1::Dims*, int, nvinfer1::DataType, nvinfer1::PluginFormat, int): Assertion `numPriors * numLocClasses * 4 == inputDims[param.inputOrder[0]].d[0]' failed.
Aborted (core dumped)
Also, the following error occurred (our own frozen_inference_graph.pb → converted .uff file):
[TRT] UFFParser: Parser error: BoxPredictor_0/Reshape: Reshape: -1 dimension specified more than 1 time
[TRT] failed to parse UFF model 'networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff'
[TRT] device GPU, failed to load networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
detectNet -- failed to initialize.
detectnet-camera: failed to load detectNet model
numClasses is the number of classes your model was trained for. Please remember to count the background class.
inputOrder is the order of the NMS plugin's inputs for ssd_mobilenet_v2.
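For example, the relevant part of config.py might look like the fragment below. The values here are placeholders (a hypothetical model with 3 object classes plus background), and inputOrder depends on how your particular graph was converted, so verify it against your own model rather than copying these numbers.

```python
# config.py fragment passed to convert-to-uff via -p.
# All values are illustrative, not taken from the poster's model.
import graphsurgeon as gs

NMS = gs.create_plugin_node(
    name="NMS", op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=4,          # 3 object classes + 1 background class
    inputOrder=[0, 2, 1],  # loc/conf/priorbox order -- check your graph
    confSigmoid=1,
    isNormalized=1)
```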
Thank you for your answer. I solved this problem for the pretrained model's frozen_inference_graph.pb.
But the frozen_inference_graph.pb trained on our own data still has a problem:
[TRT] UFFParser: Parser error: BoxPredictor_0/Reshape: Reshape: -1 dimension specified more than 1 time
[TRT] failed to parse UFF model 'networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff'
[TRT] device GPU, failed to load networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
detectNet -- failed to initialize.
detectnet-camera: failed to load detectNet model
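While searching about this parser message, I learned that a reshape can only infer one dimension: a single -1 is filled in from the total element count, but two -1s are ambiguous, which is what the parser is rejecting. A quick NumPy illustration of the same rule (NumPy is just for illustration here, not part of the TensorRT pipeline):

```python
import numpy as np

a = np.arange(12)

# One -1 is fine: the dimension is inferred from the total size (12 / 3 = 4).
print(a.reshape(3, -1).shape)   # (3, 4)

# Two -1s are ambiguous -- the same rule the UFF parser enforces.
try:
    a.reshape(-1, -1)
except ValueError as e:
    print("rejected:", e)
```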
I think the problem is in how we generate the frozen_inference_graph from the model.ckpt-0000 checkpoint.
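For reference, we exported the frozen graph with the TensorFlow Object Detection API roughly as follows. This is only a sketch: the pipeline config path, checkpoint prefix, and output directory are from our setup and will differ for you.

```shell
# Export step from the TensorFlow Object Detection API
# (run from the models/research directory of the TF models repo).
# Paths below are from our setup (adjust as needed).
python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path training/ssd_mobilenet_v2_coco.config \
    --trained_checkpoint_prefix training/model.ckpt-0000 \
    --output_directory exported_model
```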