Unsupported shuffle layers when running on DLA

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson AGX Orin
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1
• TensorRT Version: 8.5.2
I have converted a FaceNet model (face recognition) from PyTorch to ONNX for use in a DeepStream configuration file. However, when the config file builds an engine to run on DLA, shuffle layers that were not part of the original model fall back to the GPU. Shuffle layers are added 28 times and constant layers are added 28 times, yet the model itself has only 22 layers, as shown below.

What is the exact DLA requirement that causes these shuffle/constant layers to be added? Is it due to the reconstruction process? This many shuffle layers causes a processing delay, since they fall back to the GPU and force data to be transferred back and forth between the GPU and DLA.
According to the TensorRT documentation, DLA does support shuffle layers, so why do they fall back to the GPU in my case?
FaceNet_DLAEngine_unsupportedLayers.txt (9.8 KB)

I have attached the log file generated during engine file creation.
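As a quick sanity check, the verbose build log can be scanned to count the inserted layers. A minimal sketch (the exact log line format varies between TensorRT versions, so the sample lines below are fabricated placeholders; adjust the patterns to your actual log):

```python
import re

def count_inserted_layers(log_text):
    """Count Shuffle and Constant layers mentioned in a TensorRT build log.

    Assumes lines that mention 'Shuffle' or 'Constant' correspond to the
    layers TensorRT inserted around DLA regions; tune the patterns to
    match your log format.
    """
    counts = {"Shuffle": 0, "Constant": 0}
    for line in log_text.splitlines():
        for kind in counts:
            if re.search(kind, line):
                counts[kind] += 1
    return counts

# Fabricated log excerpt for illustration only:
sample_log = """\
[TRT] Layer 'shuffle_before_conv1' (Shuffle) device type: GPU
[TRT] Layer 'constant_pad_value' (Constant) device type: GPU
[TRT] Layer 'conv1' (Convolution) device type: DLA
"""
print(count_inserted_layers(sample_log))
```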

Hi,

Usually, a shuffle layer is used for reshape and transpose.
They might be added to convert between the CHW and HWC formats for compatibility.
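To illustrate what such a shuffle layer does, the CHW-to-HWC conversion is just an axis permutation; a quick NumPy sketch:

```python
import numpy as np

# A 3-channel 2x4 tensor in CHW layout (channels first),
# as PyTorch/ONNX models typically use.
chw = np.arange(3 * 2 * 4).reshape(3, 2, 4)

# If the target requires HWC (channels last), the inserted
# shuffle layer performs exactly this permutation.
hwc = chw.transpose(1, 2, 0)

print(chw.shape)  # (3, 2, 4)
print(hwc.shape)  # (2, 4, 3)

# The data is unchanged; only the memory layout differs.
assert hwc[0, 0, 1] == chw[1, 0, 0]
```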

You can find the details of the DLA support matrix below:

Thanks.

Does this mean that these shuffle layers also exist in the engine file created to run the model on the GPU, not only on DLA?
And is there a way to avoid the creation of these shuffle layers? As I mentioned, the model itself consists of 22 layers, so generating 28 shuffle layers exceeds even the model's own layer count.

Hi,

Does this mean that these shuffle layers also exist in the engine file created to run the model on the GPU, not only on DLA?

Please build the engine for the GPU and check the compile log.
GPU and DLA have quite different support matrices, so the underlying implementations won't be the same.
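For example, with trtexec you can build both variants and compare the verbose logs (file names here are placeholders):

```shell
# Build for GPU only and capture the build log
trtexec --onnx=facenet.onnx --saveEngine=facenet_gpu.engine \
        --verbose > gpu_build.log 2>&1

# Build for DLA with GPU fallback allowed
trtexec --onnx=facenet.onnx --saveEngine=facenet_dla.engine \
        --useDLACore=0 --allowGPUFallback \
        --verbose > dla_build.log 2>&1

# Compare which shuffle layers each build contains
grep -i shuffle gpu_build.log dla_build.log
```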

We have a repo to demonstrate how to optimize the model for DLA.
Please give it a check:

Thanks.

When an operator shows “REF” in the TRT column and “Native” in the DLA column, what does this mean?

Hi,

It means the operator is supported natively by DLA, but the compiler doesn't enable that function yet.
If you want to request support for a certain layer, please file an RFE.

Thanks.