• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.1 (Triton container)
• TensorRT Version: 8.2.5-1
• NVIDIA GPU Driver Version (valid for GPU only): 510.47.03
• Issue Type (questions, new requirements, bugs): Questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
@NVES
I am just using the DeepStream Triton container, with nvinfer only for inference (which uses TensorRT). I get this error:
ERROR: [TRT]: [shuffleNode.cpp::symbolicExecute::387] Error Code 4: Internal Error (Reshape_75: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2])
After the operation above, I used the model in DeepStream, but the result is still the same:
0:00:00.365195508 4367 0x6739d50 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
ERROR: [TRT]: [shuffleNode.cpp::symbolicExecute::387] Error Code 4: Internal Error (Reshape_75: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2])
ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 61 [Pad -> "onnx::Concat_218"]:
ERROR: [TRT]: ModelImporter.cpp:774: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:775: input: "x1"
input: "onnx::Pad_216"
input: "onnx::Pad_217"
output: "onnx::Concat_218"
name: "Pad_86"
op_type: "Pad"
attribute {
name: "mode"
s: "constant"
type: STRING
}
ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: ModelImporter.cpp:179 In function parseGraph:
[6] Invalid Node - Pad_86
[shuffleNode.cpp::symbolicExecute::387] Error Code 4: Internal Error (Reshape_75: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2])
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:315 Failed to parse onnx file
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:966 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:799 failed to build network.
0:00:19.977414541 4367 0x6739d50 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:20.008574321 4367 0x6739d50 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:20.008649896 4367 0x6739d50 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:20.008713164 4367 0x6739d50 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-nvinference-engine> error: Failed to create NvDsInferContext instance
0:00:20.008838534 4367 0x6739d50 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-nvinference-engine> error: Config file path: fold.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-nvinference-engine:
Which version of PyTorch are you using to generate the ONNX model?
Could you please re-export the ONNX model with the latest PyTorch and opset versions, and then try the same steps mentioned above again with the new model?
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing...
1 : sample_720p.mjpeg
Starting pipeline
0:00:07.000338400 10562 0x4e26b50 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
ERROR: [TRT]: [shuffleNode.cpp::symbolicExecute::387] Error Code 4: Internal Error (Reshape_83: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2])
ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 61 [Pad -> "onnx::Concat_226"]:
ERROR: [TRT]: ModelImporter.cpp:774: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:775: input: "x1"
input: "onnx::Pad_224"
input: "onnx::Pad_225"
output: "onnx::Concat_226"
name: "Pad_94"
op_type: "Pad"
attribute {
name: "mode"
s: "constant"
type: STRING
}
ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: ModelImporter.cpp:179 In function parseGraph:
[6] Invalid Node - Pad_94
[shuffleNode.cpp::symbolicExecute::387] Error Code 4: Internal Error (Reshape_83: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2])
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:315 Failed to parse onnx file
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:966 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:799 failed to build network.
0:00:23.187896384 10562 0x4e26b50 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:23.214256270 10562 0x4e26b50 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:23.214333580 10562 0x4e26b50 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:23.214363958 10562 0x4e26b50 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-nvinference-engine> error: Failed to create NvDsInferContext instance
0:00:23.214462597 10562 0x4e26b50 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-nvinference-engine> error: Config file path: fold.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-nvinference-engine:
Config file path: fold.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED