Failed to parse ONNX file in DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU
• DeepStream Version
5.0.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
7.1
• NVIDIA GPU Driver Version (valid for GPU only)
440
• Issue Type (questions, new requirements, bugs)
QUESTIONS

I can parse the ONNX file with my own code, but DeepStream fails to parse the same file. The error is as follows:
ONNX IR version: 0.0.6
Opset version: 11
Producer name: pytorch
Producer version: 1.7
Domain:
Model version: 0
Doc string:

WARNING: [TRT]: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: [TRT]: …/builder/cudnnBuilderUtils.cpp (427) - Cuda Error in findFastestTactic: 700 (an illegal memory access was encountered)
ERROR: [TRT]: …/rtSafe/safeRuntime.cpp (32) - Cuda Error in free: 700 (an illegal memory access was encountered)
terminate called after throwing an instance of 'nvinfer1::CudaError'
what(): std::exception
Aborted (core dumped)
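
For context, my standalone parsing follows the standard TensorRT 7.x Python flow. A minimal sketch along those lines (not my exact code; the file names and workspace size are placeholders):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, engine_path):
    # ONNX models must be imported into an explicit-batch network in TRT 7.x
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(flags) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        # Parse the ONNX file and report any parser errors
        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return False
        config = builder.create_builder_config()
        config.max_workspace_size = 1 << 30  # 1 GiB (placeholder)
        engine = builder.build_engine(network, config)
        if engine is None:
            return False
        # Serialize the engine so it can be reused without rebuilding
        with open(engine_path, "wb") as f:
            f.write(engine.serialize())
        return True

build_engine("model.onnx", "model.engine")  # placeholder paths
```

This flow completes without errors for me outside DeepStream.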

Questions:

  1. What is the best way to solve this problem?
  2. I have converted the ONNX model to a TensorRT engine file locally with no problem, and I have also tested it successfully. Can I use the converted TRT engine file in DeepStream directly, and how? I tried configuring it with model-engine-file, but with no luck so far (a sketch of the config I tried is below).
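
For question 2, this is roughly the nvinfer configuration I tried. A sketch of the relevant [property] section, with placeholder file names and batch size (my understanding from the nvinfer documentation is that the engine is loaded only if it matches the configured batch size, precision, and GPU; otherwise nvinfer rebuilds it from the model file):

```
[property]
gpu-id=0
# Pre-built engine; loaded only if it matches batch-size/precision/GPU below
model-engine-file=model_b1_gpu0_fp32.engine
# Keeping the ONNX path lets nvinfer rebuild the engine if loading fails
onnx-file=model.onnx
# Must match the batch dimension the engine was built with
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=0
```

Is this the right way to wire in a pre-built engine?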

I suspect this problem is related to the batch-size setting.
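
If that hunch is right, one thing I plan to try is rebuilding the engine with trtexec in explicit-batch mode on the same GPU that DeepStream runs on, then pointing model-engine-file at the result. A sketch of the command (file names are placeholders):

```
trtexec --onnx=model.onnx --explicitBatch --workspace=1024 --saveEngine=model.engine
```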
