Effect of a Bigger Inference Matrix in a DeepStream Pipeline

Hey!

We have been running inference with a custom model (in ONNX format) in a DeepStream pipeline on a Jetson Nano. We have a question about the effect on memory and speed if the input matrix shape used for inference is increased, say from (32, 128) to (64, 256).
How is the matrix loaded when running in the pipeline?
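As a rough way to compare the two shapes outside the pipeline, something like the sketch below could work. It is only a minimal example using onnxruntime on the CPU; the file name `model.onnx`, the single float32 input, and the assumption that the model was exported with dynamic axes (so both shapes are accepted) are placeholders, not details from our actual setup.

```python
# Minimal latency-comparison sketch (assumptions: model.onnx exists, has one
# float32 input, and was exported with dynamic axes so both shapes are valid).
import time
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name

for shape in [(32, 128), (64, 256)]:
    x = np.random.rand(*shape).astype(np.float32)
    sess.run(None, {input_name: x})            # warm-up run
    start = time.perf_counter()
    for _ in range(100):
        sess.run(None, {input_name: x})
    elapsed = (time.perf_counter() - start) / 100
    print(f"shape {shape}: {elapsed * 1000:.2f} ms per run")
```

This only gives a ballpark for compute cost per shape; memory behaviour inside the DeepStream pipeline itself would still need to be checked on the device.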

Hi @guneet,

This appears to be a DeepStream-related issue. Please raise it in the forum linked below.

Thanks