Description
When I execute the following command, I get the following errors:
./export_engine2 bevdet_lt_depth.yaml img_stage_lt_d.onnx bev_stage_lt_d.onnx _lt_d_test 0
onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
10: Could not find any implementation for node {ForeignNode[344…Unsqueeze_253 + Unsqueeze_254]}.
10: [optimizer.cpp::computeCosts::3869] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[344…Unsqueeze_253 + Unsqueeze_254]}.)
Failed to build TensorRT engine.
TensorRT encountered issues when converting weights between types and that could affect accuracy.
If this is not the desired behavior, please modify the weights or retrain with regularization to adjust the magnitude of the weights.
Check verbose logs for the list of affected weights.
- 28 weights are affected by this issue: Detected subnormal FP16 values.
- 16 weights are affected by this issue: Detected values less than smallest positive FP16 subnormal value and converted them to the FP16 minimum subnormalized value.
I suspect that the libnvonnxparsers-dev shipped with TensorRT-8.6.1.6 does not support my ONNX model.
The same conversion runs successfully in Python with TensorRT-8.6.1.6 and onnx==1.14.0.
If I want to use libnvonnxparsers-dev to export my TRT engine, which version of libnvonnxparsers-dev should I use?
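In case it helps, below is a minimal sketch of the C++ parse-and-build path I expect my export tool to follow. This is an illustration assuming the standard TensorRT 8.6 nvonnxparser API, not my actual script; the file names and the 4 GiB workspace size are placeholders. It also prints the linked TensorRT version via getInferLibVersion(), which I use to confirm the C++ build links the same 8.6.1 libraries as the working Python environment:

// Minimal sketch: ONNX -> serialized TensorRT engine with nvonnxparser (TensorRT 8.6 API).
// "model.onnx" / "model.engine" and the workspace size are placeholders.
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <fstream>
#include <iostream>
#include <memory>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        // Print everything so the failing node ("Could not find any implementation
        // for node ...") and the affected FP16 weights show up in the output.
        std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    // Confirm which TensorRT library the binary actually links against.
    std::cout << "Linked TensorRT version: " << getInferLibVersion() << std::endl;

    auto builder = std::unique_ptr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(gLogger));
    const auto flags =
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(flags));
    auto parser =
        std::unique_ptr<nvonnxparser::IParser>(nvonnxparser::createParser(*network, gLogger));

    // Parse with verbose reporting; this is where the INT64->INT32 cast warning appears.
    if (!parser->parseFromFile("model.onnx",
                               static_cast<int>(nvinfer1::ILogger::Severity::kVERBOSE))) {
        std::cerr << "Failed to parse ONNX model" << std::endl;
        return 1;
    }

    auto config = std::unique_ptr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig());
    // 4 GiB workspace; a too-small workspace can also trigger
    // "Could not find any implementation for node" errors.
    config->setMemoryPoolLimit(nvinfer1::MemoryPoolType::kWORKSPACE, 1ULL << 32);

    auto serialized =
        std::unique_ptr<nvinfer1::IHostMemory>(builder->buildSerializedNetwork(*network, *config));
    if (!serialized) {
        std::cerr << "Failed to build TensorRT engine" << std::endl;
        return 1;
    }

    std::ofstream out("model.engine", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()),
              static_cast<std::streamsize>(serialized->size()));
    return 0;
}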
Environment
TensorRT Version: TensorRT-8.6.1.6
GPU Type: RTX 3080
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version: Ubuntu 22.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
My ONNX file:
My ONNX-to-engine script:
Steps To Reproduce
Run:
./export_engine2 bevdet_lt_depth.yaml img_stage_lt_d.onnx bev_stage_lt_d.onnx _lt_d_test 0
The full error output is shown in the Description above.