Effect of a Bigger Inferencing Matrix in the DeepStream Pipeline

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 5.0.0
Hey!

We have been running inference with a custom model (in ONNX format) in a DeepStream pipeline on a Jetson Nano. We have a question about what effect it would have on memory usage and speed if the matrix shape used for inference were increased, say from (32, 128) to (64, 256).
How is the matrix loaded when the pipeline is running?
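In case it helps frame the question, here is a minimal back-of-the-envelope sketch (plain Python, not DeepStream API code) of how the raw input-tensor footprint grows when the shape changes from (32, 128) to (64, 256), assuming dense FP32 elements. The real memory and latency impact on the Nano also depends on the model's layers, activations, and TensorRT workspace, which this does not capture.

```python
# Rough estimate of how the input tensor footprint grows with the matrix shape.
# Assumes a single dense FP32 tensor (4 bytes per element); actual usage inside
# TensorRT/DeepStream also includes weights, activations, and workspace memory.

def tensor_bytes(shape, bytes_per_element=4):
    """Return the raw size in bytes of a dense tensor with the given shape."""
    size = bytes_per_element
    for dim in shape:
        size *= dim
    return size

old_shape = (32, 128)
new_shape = (64, 256)

old_bytes = tensor_bytes(old_shape)   # 32 * 128 * 4 = 16,384 bytes (16 KiB)
new_bytes = tensor_bytes(new_shape)   # 64 * 256 * 4 = 65,536 bytes (64 KiB)

print(f"{old_shape}: {old_bytes} bytes")
print(f"{new_shape}: {new_bytes} bytes")
print(f"growth factor: {new_bytes / old_bytes:.0f}x")  # 4x more elements
```

So the input itself is 4x larger; compute cost typically grows at least proportionally to the element count, though the exact factor depends on the network.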

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Sorry, what do you mean by "matrix shape"?