“INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.”
What does this mean? I see it sometimes when starting deepstream-app, and on every run with the objectdetection_SSD sample config.
Is there a setting we need to change, or is it safe to ignore this?
There is a “workspace” parameter that limits the maximum amount of memory TensorRT may use.
This message indicates that the workspace is not large enough for TensorRT to reach optimal performance.
A: Some TensorRT algorithms require additional workspace on the GPU. The method IBuilderConfig::setMaxWorkspaceSize() controls the maximum amount of workspace that may be allocated, and will prevent algorithms that require more workspace from being considered by the builder. At runtime, the space is allocated automatically when creating an IExecutionContext. The amount allocated will be no more than is required, even if the amount set in IBuilderConfig::setMaxWorkspaceSize() is much higher. Applications should therefore allow the TensorRT builder as much workspace as they can afford; at runtime TensorRT will allocate no more than this, and typically less.
If I try a bigger number (on a Jetson Xavier NX) I get this error:
ERROR: [TRT]: ../rtSafe/safeRuntime.cpp (25) - Cuda Error in allocate: 2 (out of memory)
I have tried setting it just a little higher: workspace-size=600000000 #600MB
Memory usage at the moment is 1.6 GB / 7.8 GB.
Actually, more testing shows that the mobilenet_ssd config won’t run at all if I have that parameter set. Maybe the units are wrong and it needs to be 600 instead of 600000000?
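The units hypothesis is worth a quick sanity check. The Gst-nvinfer plugin documentation describes workspace-size as a value in MB, so if that is how the plugin interprets it, 600000000 would request an allocation far beyond any Jetson’s memory, which would explain the out-of-memory error:

```python
# If workspace-size is interpreted in MB (as the nvinfer plugin docs
# describe), 600000000 requests an enormous workspace allocation.
requested_mb = 600_000_000
requested_bytes = requested_mb * 1024 * 1024

# Convert to terabytes to see the scale of the request.
requested_tb = requested_bytes / 1024**4
print(f"{requested_tb:.1f} TB requested")  # ≈ 572.2 TB
```

A value of 600 would instead request roughly 600 MB, which is what the poster intended.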
I don’t notice this message when loading a .engine model file directly. So the limited performance probably only applies to the Caffe/ONNX-to-.engine conversion, which only needs to be done once.
I get the same error. I tried going to the infer config file and adding the workspace-size parameter under the [property] group, yet I still get the error.
My Jetson Nano has 4 GB of RAM, so I set the parameter as workspace-size=2500.
My setup:
1) Jetson Nano B01
2) DeepStream SDK 5.0
3) YOLOv3-tiny detection model
So my question is: where can I change it? The config_infer.txt file doesn’t seem to have any effect.
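For reference, this is roughly where the key belongs in the infer config. This is only a sketch: the group name [property] and the workspace-size key follow the Gst-nvinfer configuration documentation, while the model path is a placeholder, not a real file from any sample:

```
[property]
# Placeholder engine path for illustration only
model-engine-file=model_b1_gpu0_fp16.engine
# Workspace size in MB, per the Gst-nvinfer plugin documentation
workspace-size=600
```

Note that if a prebuilt .engine file already exists and is loaded directly, the builder step is skipped, so changing workspace-size would have no visible effect until the engine is rebuilt.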