Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 5.0.0
Hey!
We have been running inference with a custom model (in ONNX format) in a DeepStream pipeline on a Jetson Nano. We have a question about the effect on memory and speed if the input matrix shape for inference is changed, say increased from (32, 128) to (64, 256).
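For context, this is how we are estimating the raw size difference between the two shapes (a minimal sketch assuming a single FP32 tensor; the actual TensorRT engine will also allocate memory for weights, activations, and workspace, which this does not capture):

```python
# Rough element-count and byte comparison for the two input shapes.
# Assumption: 4 bytes per element (FP32), batch size 1.
old_shape = (32, 128)
new_shape = (64, 256)

def fp32_bytes(shape):
    n = 1
    for d in shape:
        n *= d
    return n * 4  # 4 bytes per FP32 element

print(fp32_bytes(old_shape))  # 16384 bytes (16 KiB)
print(fp32_bytes(new_shape))  # 65536 bytes (64 KiB), i.e. 4x the elements
```

So the input tensor itself grows by 4x; we would like to understand how that translates to overall pipeline memory and inference speed.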
How is the matrix loaded into memory when the pipeline runs?
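In case it helps, this is how we inspect the input shape declared in our ONNX file using the `onnx` Python package (the filename `model.onnx` is a placeholder for our actual model):

```python
import onnx

# Load the model and print each graph input's name and declared dims.
model = onnx.load("model.onnx")
for inp in model.graph.input:
    dims = [d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)
```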