While converting ONNX to a TRT engine, I got this error:
----------------------------------------------------------------
[W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[07/23/2021-21:21:16] [E] [TRT] (Unnamed Layer* 370) [Shuffle]: at most one dimension may be inferred
ERROR: onnx2trt_utils.cpp:1498 In function scaleHelper:
[8] Assertion failed: dims.nbDims == 4 || dims.nbDims == 5
When I used a fixed batch size, e.g. 1, there was no problem. Any suggestions for this problem?
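For context, the model was exported with a dynamic batch axis roughly as in the sketch below (a minimal reconstruction with a hypothetical stand-in network; only the input name "input" and the 3x112x112 input size match the real model):

import torch
import torch.nn as nn

# Stand-in network (hypothetical); the real model is a face-quality network.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),
    nn.Flatten(),
    nn.Linear(8 * 110 * 110, 1),
).eval()

dummy = torch.randn(1, 3, 112, 112)  # batch of 1 at export time

# Axis 0 is declared dynamic so trtexec can build with min/opt/max batch shapes.
torch.onnx.export(
    model,
    dummy,
    "eqface_dy.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=11,
)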
Hi,
Request you to share the ONNX model and the script if not shared already so that we can assist you better.
Alongside, you can try a few things:
1) Validate your model with the below snippet:
check_model.py
import sys
import onnx
filename = sys.argv[1]  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
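(Run it as python check_model.py your_model.onnx; onnx.checker raises a ValidationError if the model is malformed and prints nothing otherwise.)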
2) Try running your model with the trtexec command. https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, request you to share the trtexec "--verbose" log for further debugging.
Thanks!
/opt/TensorRT/bin/trtexec --onnx=/opt/rida/models/face_score/eqface_dy.onnx --minShapes=input:1x3x112x112 --optShapes=input:4x3x112x112 --maxShapes=input:8x3x112x112 --shapes=input:5x3x112x112 --verbose >> trt.log
[W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[E] [TRT] (Unnamed Layer* 370) [Shuffle]: at most one dimension may be inferred
ERROR: onnx2trt_utils.cpp:1498 In function scaleHelper:
[8] Assertion failed: dims.nbDims == 4 || dims.nbDims == 5
[E] Failed to parse onnx file
[E] Parsing model failed
[E] Engine creation failed
[E] Engine set up failed
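Since the error points at a Shuffle (Reshape) layer, a rough way to narrow down which node it comes from is to dump every Reshape node's constant target shape, as in the diagnostic sketch below (it assumes each target shape is stored as an initializer; node names are whatever the exporter produced):

import sys
import onnx
from onnx import numpy_helper

model = onnx.load(sys.argv[1])
inits = {t.name: numpy_helper.to_array(t) for t in model.graph.initializer}

for node in model.graph.node:
    if node.op_type != "Reshape":
        continue
    shape = inits.get(node.input[1])  # second input of Reshape is the target shape
    print(node.name or "<unnamed>", "target shape:",
          None if shape is None else shape.tolist())
    # More than one -1 in a target shape is what triggers
    # "at most one dimension may be inferred"; with a dynamic batch the
    # parser can also end up in this state when the batch size was baked
    # in at export time.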
Thank you for sharing the ONNX model, but we are unable to download it. It looks like the given password is wrong. Meanwhile, we recommend you please try the latest TensorRT version, 8.0.1, and let us know if you still face this issue.
We don't have plans for TensorRT 8 yet because of our current production environment, but I will first try it with TensorRT 8 and report the result to you.
Sorry, we are having difficulty downloading the model from Baidu; it asks us to install tools. Could you please share it over Google Drive or Dropbox?
Which version of TensorRT are you currently using?
Thank you for sharing the ONNX model. We are unable to reproduce the issue; we could successfully build the TRT engine on version 7.2.3.4 and on the latest TRT version 8.
Hi @spolisetty, I am using TensorRT 7.1.3.4. Now I have a problem downloading 7.2.3.4. When I tried to log in on NVIDIA, the backend service seemed to be down, with this error:
{"errors":[], "error":{"zz":{"statuscode":"503","message":"Service Unavailable -- No backend server is available to handle this request."}}}
We need to deploy this ONNX model later on the Jetson platform, which currently only supports TensorRT 7.1.3.0. If it only works on 7.2.3.4 or later versions, how could we handle this problem?
@spolisetty Today I downloaded the same version as yours; however, it still did not work. The log is attached below. What could be the reason for this?
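In case it helps anyone hitting the same assertion: a common cause is a flatten exported as x.view(x.size(0), -1), which bakes the traced batch size into the Reshape; re-exporting with torch.flatten(x, 1) usually avoids it. Alternatively, the exported model can be patched so each two-element Reshape target becomes [0, -1] (0 copies the batch dimension, so only one dimension is inferred). A minimal, unverified sketch of that patch (it blindly rewrites every two-element Reshape target, so in practice you would match only the failing node; the output filename is a placeholder):

import sys
import numpy as np
import onnx
from onnx import numpy_helper

model = onnx.load(sys.argv[1])
inits = {t.name: t for t in model.graph.initializer}

for node in model.graph.node:
    if node.op_type != "Reshape" or node.input[1] not in inits:
        continue
    t = inits[node.input[1]]
    shape = numpy_helper.to_array(t)
    if shape.size == 2:
        # Rewrite e.g. [1, 512] or [1, -1] as [0, -1]: 0 copies the incoming
        # (dynamic) batch dim, leaving only one dimension to be inferred.
        t.CopyFrom(numpy_helper.from_array(np.array([0, -1], dtype=np.int64), t.name))

onnx.save(model, "eqface_dy_patched.onnx")  # hypothetical output name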