Question about TensorRT

Please provide the following info (check/uncheck the boxes after creating this topic):
Software Version
DRIVE OS Linux 5.2.6
DRIVE OS Linux 5.2.0
[y] DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version
other

Target Operating System
[y] Linux
QNX
other

Hardware Platform
[y] NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)
other

SDK Manager Version
[y] 1.6.1.8175
1.6.0.8170
other

Host Machine Version
[y] native Ubuntu 18.04
other

When I quantize the model to INT8, I get the following error:
engine.cpp(692) -cuda error in commonEmitTensor: 1 (invalid argument)
FAILED_ALLOCATION: std::exception

My model input dims are 40000x32x10.
Is the 40000 treated as the batch size?
Should the input dims be set to 1x40000x32x10 instead?

Dear @wang_chen2,
For NCHW, 1x40000x32x10 means C = 40000.

It looks like an issue with memory allocation. Is the above the full log? Could you share your model (or a dummy model) to reproduce the issue?
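A minimal sketch (not from the thread), assuming the Python onnx package is available: printing the declared input shape shows directly whether the leading 40000 is a real tensor dimension or a batch dimension. The file name below is a placeholder.

```python
# Hypothetical check, not part of the original discussion.
import onnx

model = onnx.load("model.onnx")  # placeholder path
for inp in model.graph.input:
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)  # e.g. input [40000, 32, 10]
```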

Sorry, I cannot share the model.

I know this, but my model dims are 40000x32x10. In this case, what is the meaning of the 40000?

Dear @wang_chen2,
Is it an ONNX model? Please clarify whether your model works fine with FP32 for the given input dimensions.

My model dims are 40000x32x10. In this case, what is the meaning of the 40000?

I would expect it to be C = 40000 and N = 1.

Yes, it is an ONNX model. It works fine with FP16 or INT8 in TensorRT 7.2.3.
But on Xavier with TensorRT 6…, it fails to convert to INT8.

So, is my ONNX model input of 40000x32x10 OK?

Dear @wang_chen2,
Could you check the issue on DRIVE OS 5.2.6?

Is the DRIVE OS 5.2.6 docker the one named “nvidia gpu driver” on NGC?

Hi, @SivaRamaKrishnaNV
single_7728_pfe_64.onnx (4.2 KB)
This is the model; you can reproduce this issue on DRIVE OS 5.2.0.

Dear @wang_chen2,
I don’t see any issue on the target using ./trtexec --onnx=/home/nvidia/single_7728_pfe_64.onnx --explicitBatch --int8 on DRIVE OS Linux 5.2.6. Could you upgrade to DRIVE OS 5.2.6 if possible?
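For reference, a minimal sketch of roughly what that trtexec command does through the TensorRT Python API (as available in TensorRT 6/7). This is an illustrative assumption, not the exact code trtexec runs, and a meaningful INT8 build would normally also need a calibrator or per-tensor dynamic ranges.

```python
# Rough Python-API equivalent of:
#   ./trtexec --onnx=/home/nvidia/single_7728_pfe_64.onnx --explicitBatch --int8
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(EXPLICIT_BATCH)   # explicit-batch network
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("/home/nvidia/single_7728_pfe_64.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30                 # 1 GiB; adjust for the target
config.set_flag(trt.BuilderFlag.INT8)               # INT8 precision
# config.int8_calibrator = my_calibrator            # needed for real INT8 scales

engine = builder.build_engine(network, config)
print("engine built:", engine is not None)
```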

Hi, because we need to use DriveWorks, I cannot upgrade to DRIVE OS 5.2.6 right now.
I will try it in the future, thank you very much.


Dear @wang_chen2,
Just to clarify, single_7728_pfe_64.onnx has a batch size of 64, and the batch size does not change across the network. You can see the batch size of 64 when you view the network in Netron.
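A minimal sketch (my addition, not part of the answer above) that confirms the same thing without Netron: running ONNX shape inference and printing the inferred shapes shows the leading dimension of 64 on the input and on the intermediate tensors.

```python
# Hypothetical helper; assumes the onnx package and the attached model file.
import onnx
from onnx import shape_inference

model = shape_inference.infer_shapes(onnx.load("single_7728_pfe_64.onnx"))

def dims(value_info):
    return [d.dim_param or d.dim_value for d in value_info.type.tensor_type.shape.dim]

for inp in model.graph.input:
    print("input ", inp.name, dims(inp))
for vi in model.graph.value_info:     # intermediate tensors with inferred shapes
    print("tensor", vi.name, dims(vi))
for out in model.graph.output:
    print("output", out.name, dims(out))
```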