Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output

I tried going into the config_infer file and adding the workspace-size parameter under the [property] section, based on a previous post about this issue, but it doesn't work for me. It is not a major breaking issue right now, but I don't want to bottleneck anything.

Yet I still get the error.
My Jetson Nano has 4 GB of RAM, so I set the parameter as workspace-size=2500.
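For reference, this is a rough sketch of how I added it under the [property] section of the config_infer file (the other keys shown are placeholders, not my exact config):

[property]
gpu-id=0
batch-size=1
# 2 = FP16 inference on the Nano
network-mode=2
# workspace size for the TensorRT builder, in MB
workspace-size=2500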

My setup:
1) Jetson Nano B01
2) DeepStream SDK 5.0
3) YOLOv3-tiny detection model

So my question is: where should I be setting this parameter so it actually takes effect?

Which previous post are you referring to? Can you provide the link?

The link to the previous post

Are you using deepstream-app? Can you upload your config file? Do our sample config files work on your platform?

I looked into this further and found the problem: I had carelessly not specified the engine name and path in both config_infer.txt and the deepstream_app config.txt, so it kept building a new engine each time. Now it's fixed and the error no longer shows up. Thank you for nudging me toward the solution.
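In case it helps anyone else, this is roughly what I added; the engine and file names below are just from my setup, not definitive:

# in config_infer.txt
[property]
model-engine-file=model_b1_gpu0_fp16.engine

# in the deepstream_app config.txt
[primary-gie]
config-file=config_infer.txt
model-engine-file=model_b1_gpu0_fp16.engine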

Things are working fine for me memory-wise, but a TensorRT warning does pop up when I run inference. It's just a warning, though:
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors
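If I understand it correctly, this warning means the serialized .engine file was built on a different device (or TensorRT version) than the one it is running on, so letting it be regenerated on the Nano itself should clear it. Something along these lines in config_infer.txt (path is a placeholder):

# temporarily comment this out (or delete the old .engine file)
# so the engine is rebuilt and serialized on the Nano,
# then point it back at the newly generated engine
#model-engine-file=model_b1_gpu0_fp16.engine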