I am building a TensorRT runtime engine from a .onnx file (YOLOv4). The engine builds successfully, but even when I give the build a 3 GB workspace (--workspace=3000, in MB), it prints this message while building:
Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
I suspect the problem is that there is a configuration file somewhere that caps the maximum workspace TensorRT is allowed to use. Sadly I can't find any such file :( . Any help would be much appreciated!
TensorRT Version: 7.1.3
GPU Type: Volta (compute capability 7.2)
Nvidia Driver Version: whatever ships with JetPack 4.4 (L4T 32.4.3) on the Jetson Xavier NX
CUDA Version: 10.2.89
CUDNN Version: 8.0.0 (bundled with JetPack 4.4)
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): N/A
PyTorch Version (if applicable): 1.6.0
test1.onnx file here:
Steps To Reproduce
- After building trtexec from /usr/src/tensorrt/samples/trtexec, run this trtexec command to build an engine from the ONNX file:
/usr/src/tensorrt/bin/trtexec --onnx=test1.onnx --explicitBatch --saveEngine=Yolov4_DLA1.trt --useDLACore=1 --workspace=3000 --fp16 --allowGPUFallback
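For reference, the same build can also be expressed with the TensorRT Python API, which makes the workspace setting explicit in code. This is only a sketch assuming the TensorRT 7.1 Python bindings that ship with JetPack 4.4; the flag and attribute names below (`max_workspace_size`, `DLA_core`, `BuilderFlag.GPU_FALLBACK`) are the TRT 7 API names, and the file names mirror the trtexec command above:

```python
import tensorrt as trt

# Verbose logger, matching the "please check verbose output" hint in the warning
TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

def build_engine(onnx_path="test1.onnx", workspace_mb=3000, dla_core=1):
    builder = trt.Builder(TRT_LOGGER)
    # Explicit-batch network, equivalent to trtexec's --explicitBatch
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(str(parser.get_error(0)))

    config = builder.create_builder_config()
    config.max_workspace_size = workspace_mb << 20   # MiB -> bytes; --workspace=3000
    config.set_flag(trt.BuilderFlag.FP16)            # --fp16
    config.set_flag(trt.BuilderFlag.GPU_FALLBACK)    # --allowGPUFallback
    config.default_device_type = trt.DeviceType.DLA  # run layers on the DLA
    config.DLA_core = dla_core                       # --useDLACore=1
    return builder.build_engine(network, config)

if __name__ == "__main__":
    engine = build_engine()
    # Serialize to disk, equivalent to --saveEngine=Yolov4_DLA1.trt
    with open("Yolov4_DLA1.trt", "wb") as f:
        f.write(engine.serialize())
```

Running this with the verbose logger shows per-tactic messages during the build, which should make it clear whether the workspace limit being hit is the 3000 MB value passed in or some smaller internal limit.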