I have tried to create a TensorRT model using DLA and GPU on Xavier with JetPack 4.5.1, but I get an error. The layer causing the problem is a Conv2D connected to a Reshape layer (Lambda). Is there any solution for this problem?
I have attached two images: the first shows the error and the second shows the model summary.
Does the model work in GPU mode? If so, could you share the detailed input/output dimensions of the reshape layer?
There is a similar known issue related to DLA INT8 mode, and the corresponding fix will be available in an upcoming release:
Yes, my model works well in TensorRT GPU mode. The goal of the reshape layer is to keep the temporal information while reducing the tensor to four dimensions: the input shape is (None, 25, 60, 60, 1) and the output shape is (None, 60, 60, 1). This way, we move the temporal information into the batch dimension.
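If it helps, here is a minimal Keras sketch of what I mean (shapes as above; layer sizes are illustrative):

```python
import tensorflow as tf

# Input: (batch, 25, 60, 60, 1) -> fold the 25 temporal steps into the batch.
inputs = tf.keras.Input(shape=(25, 60, 60, 1))
# The Lambda reshape moves time into the batch dimension: (batch * 25, 60, 60, 1).
folded = tf.keras.layers.Lambda(lambda x: tf.reshape(x, (-1, 60, 60, 1)))(inputs)
conv = tf.keras.layers.Conv2D(16, 3, padding="same")(folded)
model = tf.keras.Model(inputs, conv)
model.summary()
```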
A solution could be to map some layers to DLA and others to the GPU, but I can't find any information about this. Is there an example or a way to do it?
Could you run the model with --dumpProfile in GPU mode to identify the exact layer used for the reshaping?
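For example, assuming the model has been exported to ONNX as model.onnx (filename illustrative):

```
trtexec --onnx=model.onnx --dumpProfile
```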
We ask because, based on the document below, we don't support the cross-batch reshape operation:
Do you mean that GPU mode supports the cross-batch reshape operation but DLA doesn't?
Just in case you missed my previous question: is there any example or way to map some layers to DLA and others to the GPU? I can't find any information about it.
Have you verified the output to see if it is correct?
Based on our documentation, cross-batch reshaping is not supported on either GPU or DLA.
It might still appear to work if the actual reshaping behavior happens to match the intended cross-batch result.
But it's recommended to double-check the correctness first.
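For example, a quick NumPy check of the intended fold-into-batch semantics (shapes assumed from your post) could look like this:

```python
import numpy as np

# (batch, time, H, W, C) -> (batch * time, H, W, C)
x = np.random.rand(2, 25, 60, 60, 1).astype(np.float32)
folded = x.reshape(-1, 60, 60, 1)

# Row-major reshape interleaves time within batch: folded[b * 25 + t] == x[b, t].
assert np.array_equal(folded[25 + 3], x[1, 3])
```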
For layer-level placement, you can find some information in the document below.
It requires you to define the layers with the TensorRT API directly rather than using the onnx-parser:
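A rough sketch of what this could look like with the TensorRT Python API (toy network; layer shapes and weights are illustrative, and DLA requires FP16 or INT8 plus a DLA-equipped device such as Xavier):

```python
import numpy as np
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

# Toy network: input -> reshape (shuffle) -> conv.
inp = network.add_input("input", trt.float32, (25, 1, 60, 60))
shuffle = network.add_shuffle(inp)
shuffle.reshape_dims = (25, 1, 60, 60)  # placeholder reshape
w = np.ones((16, 1, 3, 3), dtype=np.float32)
b = np.zeros((16,), dtype=np.float32)
conv = network.add_convolution(shuffle.get_output(0), 16, (3, 3), w, b)
network.mark_output(conv.get_output(0))

config = builder.create_builder_config()
config.default_device_type = trt.DeviceType.DLA   # run layers on DLA by default
config.DLA_core = 0
config.set_flag(trt.BuilderFlag.FP16)             # DLA requires FP16 or INT8
config.set_flag(trt.BuilderFlag.GPU_FALLBACK)     # allow fallback for unsupported layers

# Pin the reshape (shuffle) layer to the GPU explicitly.
for i in range(network.num_layers):
    layer = network.get_layer(i)
    if layer.type == trt.LayerType.SHUFFLE:
        config.set_device_type(layer, trt.DeviceType.GPU)

engine = builder.build_engine(network, config)
```

With GPU_FALLBACK enabled, TensorRT will also move any other DLA-incompatible layers to the GPU automatically.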