I’ve converted the EfficientNetB0 model from TensorFlow to ONNX (using tf2onnx) and I’m using code very similar to sampleOnnxMNIST.cpp to run the model; however, for some batches it prints the following:
“Cublas Algo ID 11 doesn’t suit current config, setting to default algo”
Is this something to worry about? Performance/speed is still adequate, but I’d like it to be as optimal as possible. I tried to Google this but didn’t find anything explaining what the message means.
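For context, the conversion used tf2onnx’s command-line converter. A typical invocation looks like this (the model path and opset here are placeholders, not necessarily the exact values from my run):

```shell
# Convert a TensorFlow SavedModel to ONNX with tf2onnx.
# --saved-model points at the SavedModel directory; --opset picks
# the ONNX opset version; --output names the resulting .onnx file.
python -m tf2onnx.convert \
    --saved-model ./efficientnetb0_savedmodel \
    --opset 13 \
    --output efficientnetb0.onnx
```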
TensorRT Version: 7.2.3.4
GPU Type: Tesla V100 32 GB
Nvidia Driver Version: 418.126.02 on the host (not sure if it’s different inside the container)
CUDA Version: 11.1 (in container)
CUDNN Version: 8.0.5 (in container)
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): N/A
TensorFlow Version (if applicable): 2.5.0
PyTorch Version (if applicable): N/A
Baremetal or Container (if container, which image + tag): nvidia/cuda:11.1-cudnn8-devel-ubuntu18.04
I’m not running Python; I’m using the C++ sample file. The container I’m running is the same as the TRT NGC containers: I copied the Dockerfile and added other dependencies. I’m also not running a custom model; it’s EfficientNetB0.
Based on the info in the description, it looks like you’re using the CUDA container. Please share complete verbose logs and a minimal repro script/model with us for better debugging.
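TensorRT emits messages like the cuBLAS one through the logger you pass to the builder/runtime, so capturing verbose logs mostly means lowering the logger’s reportable severity to kVERBOSE. Here is a self-contained sketch of that filtering pattern; the Severity enum is a local stand-in whose values mirror nvinfer1::ILogger::Severity (it is not the real TensorRT type, just here so the sketch compiles without NvInfer.h):

```cpp
#include <iostream>

// Stand-in mirroring nvinfer1::ILogger::Severity values
// (lower value = more severe).
enum class Severity {
    kINTERNAL_ERROR = 0,
    kERROR = 1,
    kWARNING = 2,
    kINFO = 3,
    kVERBOSE = 4
};

// True when a message at `sev` passes a logger whose threshold is `reportable`.
inline bool shouldLog(Severity sev, Severity reportable) {
    return static_cast<int>(sev) <= static_cast<int>(reportable);
}

// Same shape as the samples' Logger class, but with the threshold set
// to kVERBOSE so messages like the cuBLAS algo notice are printed
// rather than filtered out.
struct VerboseLogger {
    Severity reportable = Severity::kVERBOSE;
    void log(Severity sev, const char* msg) {
        if (shouldLog(sev, reportable)) {
            std::cout << msg << "\n";
        }
    }
};
```

With the real API, the equivalent is deriving from nvinfer1::ILogger and passing the instance to createInferBuilder/createInferRuntime; the TensorRT samples’ Logger already does this, so raising its reportable severity to kVERBOSE before rebuilding is usually enough to produce the full log.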