**• Hardware Platform (Jetson / GPU):** Jetson AGX Orin
**• DeepStream Version:** 6.2
**• JetPack Version (valid for Jetson only):** 5.1
**• TensorRT Version:** 8.5.2
I have converted a FaceNet model (face recognition) from PyTorch to ONNX for use in a DeepStream configuration file. However, when I run the config file to build an engine targeting the DLA, Shuffle layers that are not part of the original model fall back to the GPU.
As per the TensorRT documentation, the DLA does support Shuffle layers, so why do they fall back to the GPU in my case? Additionally, how were these Shuffle layers introduced into the model, given that the original model does not contain any? FaceNet_DLAEngine_unsupportedLayers.txt (9.8 KB)
I have attached the log file generated during engine file creation.
What exactly does the DLA require that causes these Shuffle/Constant layers to be added? Is it due to the reconstruction process? This many Shuffle layers have caused processing delays, because falling back to the GPU means repeated back-and-forth data transfers between the GPU and the DLA.
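To quantify how much is falling back, a small script can summarize the builder log. This is only an illustration: the `[DlaLayer]`/`[GpuLayer]` markers below mimic what the TensorRT 8.x builder / trtexec prints in verbose mode, and the layer names are made up rather than taken from my actual log:

```python
# Illustrative only: the [DlaLayer]/[GpuLayer] markers mimic what the
# TensorRT 8.x builder / trtexec prints in verbose mode; the layer
# names here are invented for the example.
import re

sample_log = """\
[V] [TRT] ---------- Layers Running on DLA ----------
[V] [TRT] [DlaLayer] {ForeignNode[Conv_0...Relu_42]}
[V] [TRT] ---------- Layers Running on GPU ----------
[V] [TRT] [GpuLayer] SHUFFLE: Reshape_5
[V] [TRT] [GpuLayer] SHUFFLE: Transpose_6
"""

dla_layers = re.findall(r"\[DlaLayer\]\s+(.+)", sample_log)
gpu_layers = re.findall(r"\[GpuLayer\]\s+(.+)", sample_log)

print(f"{len(dla_layers)} DLA segment(s), {len(gpu_layers)} GPU-fallback layer(s)")
for layer in gpu_layers:
    print("fell back to GPU:", layer)
```

Each separate GPU-fallback layer between DLA segments implies an extra DLA-to-GPU-and-back transfer, which is where the delay comes from.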
In the layer support matrix, I see Concat is marked "(See REF)" in the TensorRT column and "Native" in the DLA column. What does this mean?
In my case it still fell back to the GPU. If it is native to the DLA, why did it fall back?