When I convert the ONNX model using trtexec (from the tar file) with this command:
./trtexec --explicitBatch --onnx=duke_onnx.onnx --minShapes=input:1x3x288x144 --optShapes=input:5x3x288x144 --maxShapes=input:30x3x288x144 --saveEngine=duke_more_space_output_16_2.engine
it gives me the mentioned warning: "[TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output." and the accuracy of the model drops, so I think the accuracy loss is directly related to that warning. How can I give the workspace more memory with this conversion method?
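For what it's worth, trtexec has a `--workspace` flag (the value is in MiB for TensorRT 7's trtexec) that raises the builder's workspace limit. A minimal sketch of the same conversion with a larger workspace, reusing the paths and shapes from the command above, might look like:

```shell
# Same conversion as above, but with the builder workspace raised to 4 GiB.
# --workspace takes the size in MiB on TensorRT 7 (default is only 16 MiB),
# which is usually why "Some tactics do not have sufficient workspace" appears.
./trtexec --explicitBatch \
  --onnx=duke_onnx.onnx \
  --minShapes=input:1x3x288x144 \
  --optShapes=input:5x3x288x144 \
  --maxShapes=input:30x3x288x144 \
  --workspace=4096 \
  --saveEngine=duke_more_space_output_16_2.engine
```

The workspace only caps what the builder may use while auto-tuning tactics, so setting it larger than needed is harmless as long as it fits in GPU memory alongside the network itself.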
** I'll post a snippet from the terminal.
• Hardware Platform: GPU
• DeepStream Version: 5.0
• TensorRT Version: 7.0.0.11
I've used an RTX 2080 Super that has 12 GB of RAM. I gave it a workspace of 8 GB for the conversion, with a maximum shape of 2 streams (batch size 2), and here's the command:
./trtexec --explicitBatch --onnx=duke_onnx.onnx --minShapes=input:1x3x288x144 --optShapes=input:1x3x288x144 --maxShapes=input:2x3x288x144 --saveEngine=duke_8g_fp16.engine --workspace=8192 --fp16
But it still gives me these errors:
[08/09/2021-07:33:35] [W] [TRT] Calling isShapeTensor before the entire network is constructed may result in an inaccurate result.