I’m trying to do INT8 calibration with DLA using a RetinaNet model exported in ONNX format. I use a batch size of 1 and the latest TensorRT version available.
I get this error:
NVMEDIA_DLA : 717, ERROR: setInputTensorDesc failed
NVMEDIA_DLA : 801, ERROR: SetInputTensorDesc failed for tensor: 7. status: 0x0.
NVMEDIA_DLA : 967, ERROR: BindArgs failed (Input). status: 0x7.
Segmentation fault (core dumped)
What could cause this?
Thanks in advance.
This error is related to the input format.
The data format for the input tensor is set to kNCxHWx by the compiler, since that is the only format DLA supports for layers other than convolution.
However, it is mapped to a tensor with the kNCHW format at runtime.
Currently, DLA does not support format conversion at runtime.
NCHW <=> NCxHWx conversion is planned for a future release.
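To make the difference concrete: in NCxHWx, channels are grouped into vectors of x (x = 16 for DLA fp16, x = 32 for DLA INT8) and interleaved as the innermost dimension. A minimal sketch of the two offset computations (helper names are illustrative, not TensorRT API):

```cpp
#include <cassert>
#include <cstddef>

// Linear offset of element (n, c, h, w) in plain NCHW layout.
size_t offsetNCHW(size_t n, size_t c, size_t h, size_t w,
                  size_t C, size_t H, size_t W) {
    return ((n * C + c) * H + h) * W + w;
}

// Linear offset in NCxHWx: channels are split into ceil(C/x) groups of
// size x, and the x channels of a group are interleaved at the
// innermost position (after w).
size_t offsetNCxHWx(size_t n, size_t c, size_t h, size_t w,
                    size_t C, size_t H, size_t W, size_t x) {
    size_t groups = (C + x - 1) / x;  // ceil(C / x), padded channel groups
    return (((n * groups + c / x) * H + h) * W + w) * x + (c % x);
}
```

So an element that is contiguous in one layout is generally not contiguous in the other, which is why the runtime cannot simply reinterpret the buffer and a real conversion pass is needed.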
Thank you for your answer.
Which layer(s) typically use the kNCHW format? If we omit these layers and only feed DLA tensors in the kNCxHWx format, would it be possible to use DLA then?
Yes, that should work if you use kNCxHWx as the input format.
Would you mind sharing the detailed procedure to reproduce the use case?
Then we can check and give more suggestions.
Thanks for your answer.
I used this method to do INT8 calibration: https://github.com/NVIDIA/retinanet-examples/tree/master/extras/cppapi. I also added the following lines of code when building the engine to add DLA support:
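(The exact snippet did not survive in the post; what follows is a sketch of the typical DLA-enabling calls on the TensorRT 5/6-era builder API, where `builder` is the `nvinfer1::IBuilder` used by the export tool and `calibrator` is the INT8 calibrator from the linked example.)

```cpp
// Sketch only: enable INT8 and route layers to DLA on the builder.
builder->setInt8Mode(true);
builder->setInt8Calibrator(calibrator);
builder->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);
builder->setDLACore(0);            // target DLA core 0
builder->allowGPUFallback(true);   // run unsupported layers on the GPU
```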
Then I used the export command as described in the aforementioned link. The model I used is more or less the same as RetinaNet-50; however, I cannot share its details publicly.
Thanks for the help.
Sorry for the late update.
After confirming internally, this issue requires a fix in our internal source.
For tensors without a vector dimension (< 3), TensorRT falls back to the plain NCHW format.
However, DLA requires kNCxHWx.
We are planning to implement an NCHW <=> NCxHWx converter in a future release.
Please wait for our announcement, and sorry for any inconvenience this causes.
I am trying to use the DLAs on Xavier AGX with NCHW and fp16, but I keep getting the unsupported input tensor error message reported in this post.
I am trying to determine if there is anything we can do to get this to work, short of waiting for the next release with the converter (coming when??). There seemed to be some hope in this post that something could be done if the proper input tensor format (kNC2HW2) is configured.
I tried changing the input binding to use (force) kNC2HW2 instead of kLINEAR, but that didn’t work (same error). Maybe this is due to the “fallback” to NCHW mentioned in this post?
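For reference, what I tried looks roughly like this (a sketch against the TensorRT 6+ API; `network` is the `INetworkDefinition` before building, and I am assuming kCHW16 is the right enum for the NC2HW2-style fp16 vector format):

```cpp
// Sketch only: request a vectorized fp16 format on the network input
// at build time, rather than changing the runtime binding.
nvinfer1::ITensor* input = network->getInput(0);
input->setType(nvinfer1::DataType::kHALF);
// kCHW16: channels packed in vectors of 16, as used by DLA fp16.
input->setAllowedFormats(
    1U << static_cast<uint32_t>(nvinfer1::TensorFormat::kCHW16));
```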
Could we do this conversion in a plugin, before the first layer is executed on the DLA?
A converter provided by the TensorRT library would be nice (i.e., no application code change); it is just a matter of when it becomes available in a GA release.
Any status/hints/guidance you could share would be most valuable.