Input filename: /opt/nvidia/deepstream/deepstream-5.1/sources/apps/main/mobilenet/mob.onnx
ONNX IR version: 0.0.4
Opset version: 9
Producer name: tf2onnx
Producer version: 1.9.2
Domain:
Model version: 0
Doc string:
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 1 output network tensors.
0:02:04.535115865 144 0x7ff1780022d0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 2]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.1/sources/apps/main/mobilenet/mob.onnx_b1_gpu0_fp16.engine successfully
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: StatefulPartitionedCall/StatefulPartitionedCall/predict/MobilenetV3/Logits/Squeeze: reshaping failed for tensor: StatefulPartitionedCall/StatefulPartitionedCall/predict/MobilenetV3/Logits/Conv2d_1c_1x1/BiasAdd:0
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: shapeMachine.cpp (160) - Shape Error in executeReshape: reshape would change volume
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: Instruction: RESHAPE{1 1001 1 1} {8 1001}
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT inputs 224x224x3
1 OUTPUT kFLOAT logits 1001
0:02:04.539625870 144 0x7ff1780022d0 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:875> [UID = 2]: RGB/BGR input format specified but network input channels is not 3
ERROR: nvdsinfer_context_impl.cpp:1158 Infer Context prepare preprocessing resource failed., nvinfer error:NVDSINFER_CONFIG_FAILED
And if I use the TensorRT-generated engine as the model-engine-file, I get something like this:
WARNING: nvdsinfer_backend.cpp:162 Backend context bufferIdx(0) request dims:1x3x224x224 is out of range, [min: 8x224x224x3, max: 8x224x224x3]
0:00:00.966294670 193 0x7fbdd80022d0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1659> [UID = 2]: backend can not support dims:3x224x224
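I assume this pre-built engine was created with a fixed 8x224x224x3 NHWC profile, which is why DeepStream's 1x3x224x224 request falls outside its range. If reusing a pre-built engine is the right route, I imagine it would have to be rebuilt with a batch-1 NCHW profile, roughly like the trtexec sketch below; the input name "inputs" and the shapes are guesses based on the messages above, and the model would first need to be exported with NCHW inputs:

# Rough sketch only (untested): rebuild the engine with a batch-1 NCHW
# profile so it matches what nvinfer requests. "inputs" is the tensor
# name from the layer dump above; adjust it to the real input name.
trtexec --onnx=mob.onnx \
        --fp16 \
        --minShapes=inputs:1x3x224x224 \
        --optShapes=inputs:1x3x224x224 \
        --maxShapes=inputs:1x3x224x224 \
        --saveEngine=mob_b1_nchw_fp16.engine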
Since the examples I used for conversion do not cover this case, I am looking for some help here: how can I deploy a TF SavedModel downloaded from TF Hub, how should I convert it, and what config should I use in DeepStream?
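To make the question concrete, the direction I have been experimenting with is roughly the following. This is only a sketch: the SavedModel directory and the input tensor name "inputs:0" are placeholders, and I am not sure these are the right options.

# Sketch of the conversion step (untested): export the TF Hub SavedModel
# to ONNX and transpose the NHWC input to NCHW so nvinfer sees 3x224x224.
# "./mobilenet_v3" and "inputs:0" are placeholders for the real SavedModel
# directory and input tensor name.
python -m tf2onnx.convert --saved-model ./mobilenet_v3 --output mob.onnx \
    --opset 11 --inputs-as-nchw inputs:0

On the DeepStream side, I was thinking of a minimal secondary-classifier nvinfer config along these lines (again just a sketch; the threshold and mode values are examples, not something I found documented for this model):

[property]
gpu-id=0
onnx-file=mob.onnx
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# 1 = classifier
network-type=1
# 2 = run as secondary inference on detected objects
process-mode=2
# 0 = RGB
model-color-format=0
# C;H;W, assuming the NCHW export above
infer-dims=3;224;224
classifier-threshold=0.2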
Could you share the setup information with us first?
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: inputs: for dimension number 1 in profile 0 does not match network definition (got min=3, opt=3, max=3), expected min=opt=max=224).
Then I tried changing the TensorRT-related settings in the config to something like this: