Error when running executeV2() with a dynamic-shape model from an ONNX file

error:

[W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[E] [TRT] ../rtSafe/cuda/genericReformat.cu (1262) - Cuda Error in executeMemcpy: 11 (invalid argument)
[E] [TRT] FAILED_EXECUTION: std::exception

I used NetworkDefinitionCreationFlag::kEXPLICIT_BATCH on TensorRT 7 (Ubuntu 16, GPU 1080).
My model: https://1drv.ms/u/s!AhFk3ICqlZI2irgOxoSOSIY80QLWHA?e=5idBBf
How can I fix this? Thanks.

Hi,

I was able to repro your issue and am looking into it. This seems to be common for users who trained their model with Keras; did you use Keras as well?

Similar issues from other users:

Hi @NVES_R, yes, I used Keras to train the model and tf2onnx to export the ONNX file. If I convert from TF to UFF it runs fine, but UFF does not support dynamic shapes.
I am waiting for an answer, thanks.

Hi anhtu812,

I believe I made a mistake before. I can successfully parse your model using TensorRT 7 with trtexec.

Can you try running:

trtexec --onnx=detection_model.onnx --explicitBatch

It works for me:

$ trtexec --onnx=detection_model.onnx --explicitBatch
...
----------------------------------------------------------------
Input filename:   detection_model.onnx
ONNX IR version:  0.0.6
Opset version:    7
Producer name:    tf2onnx
Producer version: 1.5.3
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------
...
...
&&&& PASSED TensorRT.trtexec # trtexec --onnx=detection_model.onnx --explicitBatch
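As a side note, trtexec can also exercise dynamic shapes directly by supplying a shape range per input; the flags below exist in the TensorRT 7 trtexec, but the input name `input` and the dimension values are placeholders that would need to match the actual input tensor of detection_model.onnx:

```shell
# Build and run with an optimization profile from the command line.
# "input" and the NxCxHxW values are placeholders; substitute the real
# input tensor name and shape range of detection_model.onnx.
trtexec --onnx=detection_model.onnx --explicitBatch \
        --minShapes=input:1x3x224x224 \
        --optShapes=input:1x3x416x416 \
        --maxShapes=input:1x3x608x608
```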

You did not test with dynamic shapes.
I used:

// Explicit batch is required for dynamic shapes
network = builder->createNetworkV2(1U << static_cast<int>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH));
config = builder->createBuilderConfig();
// One optimization profile covering the min/opt/max shape of each input
profile = builder->createOptimizationProfile();
profile->setDimensions(input_names[i].c_str(), OptProfileSelector::kMIN, dims1);
profile->setDimensions(input_names[i].c_str(), OptProfileSelector::kOPT, dims2);
profile->setDimensions(input_names[i].c_str(), OptProfileSelector::kMAX, dims3);
config->addOptimizationProfile(profile);
parser = nvonnxparser::createParser(*network, gLogger);
parser->parseFromFile(model_path.c_str(), 3);  // 3 = log verbosity level
engine = builder->buildEngineWithConfig(*network, *config);

I think the problem is the allocation size of the output buffer.
How do I get the output shape for allocation?

I fixed this error, thanks.

Hello,
I have the same problem. How did you solve it?

Thanks!