I am building a runtime engine with TensorRT from a .onnx file (YOLOv4). The engine builds successfully; however, even when I give the workspace 3 GB (3000 MB in the command), it prints a message while building saying
Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
I suspect the problem is that there is some configuration file somewhere that puts a limit on the maximum workspace TensorRT is able to use. Sadly, I can't find such a file :( . Any help would be much appreciated!
Environment
TensorRT Version: 7.1.3
GPU Type: Volta (arch 7.2)
Nvidia Driver Version: whatever ships with JetPack 4.4 (L4T 32.4.3) on the Jetson Xavier NX
CUDA Version: 10.2.89
CUDNN Version: 8.0.0.180
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): N/A
PyTorch Version (if applicable): 1.6.0
Relevant Files
test1.onnx file here:
Steps To Reproduce
After building trtexec from /usr/src/tensorrt/samples/trtexec, run the trtexec command to build an engine from test1.onnx.
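The exact command wasn't captured above; an invocation along these lines (the `--workspace` value is in MB on TensorRT 7.x, and the engine file name is a placeholder) matches what is described:

```shell
# Build a TensorRT engine from the ONNX model, allowing up to 3 GB
# of builder workspace; --verbose shows which tactics were skipped.
/usr/src/tensorrt/bin/trtexec \
    --onnx=test1.onnx \
    --saveEngine=test1.engine \
    --workspace=3000 \
    --verbose
```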
Hi @AakankshaS, yes, I have read that thread. The flag for the trtexec command is workspace, not workspace-size; it is also in MB, and I have given it 3 GB as mentioned above. That discussion is about the DeepStream SDK, while I am asking about TensorRT. The file they referenced for changing that parameter is not in the TensorRT package.
I was having the same problem when I tried to convert the YOLOv4 ONNX model to TensorRT with trtexec.
Using onnx-tensorrt solved the problem for me; at least, that sort of warning message didn't come up anymore.
Increasing workspace looks like the only solution.
Some TensorRT algorithms require additional workspace on the GPU. Applications should therefore allow the TensorRT builder as much workspace as they can afford; at runtime TensorRT will allocate no more than this, and typically less.
However, this is an info message conveying that the resulting engine might not be the best-optimized model given the available workspaceSize.
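For anyone building the engine through the API rather than trtexec, the workspace limit is set on the builder config. A minimal sketch with the TensorRT 7.x Python API (the file names are placeholders, and this needs a machine with TensorRT installed):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

builder = trt.Builder(TRT_LOGGER)
# ONNX parsing requires an explicit-batch network.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("test1.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse the ONNX file")

config = builder.create_builder_config()
# Allow up to 3 GB of builder workspace; TensorRT will allocate
# no more than this at runtime, and typically less.
config.max_workspace_size = 3 << 30

engine = builder.build_engine(network, config)
with open("test1.engine", "wb") as f:
    f.write(engine.serialize())
```

Note that on a shared-memory device like the Xavier NX, the GPU and CPU draw from the same physical RAM, so the workspace actually available can be less than what you request.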
Thanks!
Hi @MostafaTheReal
Did you solve this issue?
I used 8 GB when converting my model, but it still gives me the same errors:
Calling isShapeTensor before the entire network is constructed may result in an inaccurate result.
also
Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
@NVES
So what should we pass as an argument to trtexec if we get this error? [TRT] Tactic Device request: 3246MB Available: 1536MB. Device memory is insufficient to use tactic.