TensorRT 8 & WSL2 issues

The hardware environment is: Windows with WSL2; RTX 2060; Ubuntu 20.04.
The software environment is:

numpy 1.20.2
nvidia-cublas-cu11 2021.10.25
nvidia-cuda-runtime-cu11 2021.10.25
nvidia-cuda-runtime-cu115 11.5.117
nvidia-cudnn-cu11 2021.12.8
nvidia-pyindex 1.0.9
pycuda 2020.1
pytools 2021.2.9
PyYAML 6.0
requests 2.26.0
six 1.16.0
torch 1.9.0+cpu
torchvision 0.10.0+cpu

Testing the TensorRT 8 & PyTorch sample:

cd xxxx/TensorRT/samples/python/network_api_pytorch_mnist/
python3 sample.py

The error is:

Train Epoch: 2 [57600/60000 (96%)] Loss: 0.047320

Test set: Average loss: 0.0623, Accuracy: 9798/10000 (98%)

/opt/TensorRT/samples/python/network_api_pytorch_mnist/sample.py:45: DeprecationWarning: Use add_convolution_nd instead.
conv1 = network.add_convolution(input=input_tensor, num_output_maps=20, kernel_shape=(5, 5), kernel=conv1_w, bias=conv1_b)
/opt/TensorRT/samples/python/network_api_pytorch_mnist/sample.py:46: DeprecationWarning: Use stride_nd instead.
conv1.stride = (1, 1)
/opt/TensorRT/samples/python/network_api_pytorch_mnist/sample.py:48: DeprecationWarning: Use add_pooling_nd instead.
pool1 = network.add_pooling(input=conv1.get_output(0), type=trt.PoolingType.MAX, window_size=(2, 2))
/opt/TensorRT/samples/python/network_api_pytorch_mnist/sample.py:49: DeprecationWarning: Use stride_nd instead.
pool1.stride = (2, 2)
/opt/TensorRT/samples/python/network_api_pytorch_mnist/sample.py:53: DeprecationWarning: Use add_convolution_nd instead.
conv2 = network.add_convolution(pool1.get_output(0), 50, (5, 5), conv2_w, conv2_b)
/opt/TensorRT/samples/python/network_api_pytorch_mnist/sample.py:54: DeprecationWarning: Use stride_nd instead.
conv2.stride = (1, 1)
/opt/TensorRT/samples/python/network_api_pytorch_mnist/sample.py:56: DeprecationWarning: Use add_pooling_nd instead.
pool2 = network.add_pooling(conv2.get_output(0), trt.PoolingType.MAX, (2, 2))
/opt/TensorRT/samples/python/network_api_pytorch_mnist/sample.py:57: DeprecationWarning: Use stride_nd instead.
pool2.stride = (2, 2)
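Incidentally, the deprecation warnings above come from the old 2-D layer APIs; they can be silenced by switching to the `*_nd` variants. A minimal sketch of the conv/pool portion, assuming the sample's weights dict (the key names, input name, and input shape here are my assumptions, not taken from the sample):

```python
def populate_network_nd(network, trt, weights):
    """Sketch: the sample's conv/pool layers rebuilt with the
    non-deprecated *_nd TensorRT APIs. `weights` is assumed to hold
    numpy arrays under hypothetical state_dict-style keys."""
    # Assumed NCHW input shape for MNIST; the real sample defines its own.
    input_tensor = network.add_input(name="data", dtype=trt.float32,
                                     shape=(1, 1, 28, 28))
    conv1 = network.add_convolution_nd(input=input_tensor, num_output_maps=20,
                                       kernel_shape=(5, 5),
                                       kernel=weights["conv1.weight"],
                                       bias=weights["conv1.bias"])
    conv1.stride_nd = (1, 1)          # replaces the deprecated .stride
    pool1 = network.add_pooling_nd(input=conv1.get_output(0),
                                   type=trt.PoolingType.MAX,
                                   window_size=(2, 2))
    pool1.stride_nd = (2, 2)
    conv2 = network.add_convolution_nd(pool1.get_output(0), 50, (5, 5),
                                       weights["conv2.weight"],
                                       weights["conv2.bias"])
    conv2.stride_nd = (1, 1)
    pool2 = network.add_pooling_nd(conv2.get_output(0),
                                   trt.PoolingType.MAX, (2, 2))
    pool2.stride_nd = (2, 2)
    return pool2
```

This only silences the warnings; it does not address the Convolution tactic errors, which appear regardless of which API is used.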
[12/29/2021-19:10:37] [TRT] [W] GPU error during getBestTactic: (Unnamed Layer* 0) [Convolution] : invalid argument
[12/29/2021-19:10:37] [TRT] [W] GPU error during getBestTactic: (Unnamed Layer* 2) [Convolution] : invalid argument
[12/29/2021-19:10:37] [TRT] [E] 10: [optimizer.cpp::computeCosts::2011] Error Code 10: Internal Error (Could not find any implementation for node (Unnamed Layer* 2) [Convolution].)
[12/29/2021-19:10:37] [TRT] [E] 2: [builder.cpp::buildSerializedNetwork::609] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed. )
Traceback (most recent call last):
File "/opt/TensorRT/samples/python/network_api_pytorch_mnist/sample.py", line 118, in <module>
File "/opt/TensorRT/samples/python/network_api_pytorch_mnist/sample.py", line 102, in main
engine = build_engine(weights)
File "/opt/TensorRT/samples/python/network_api_pytorch_mnist/sample.py", line 85, in build_engine
return runtime.deserialize_cuda_engine(plan)
TypeError: deserialize_cuda_engine(): incompatible function arguments. The following argument types are supported:
1. (self: tensorrt.tensorrt.Runtime, serialized_engine: buffer) -> tensorrt.tensorrt.ICudaEngine

Invoked with: <tensorrt.tensorrt.Runtime object at 0x7fa49fb7a0f0>, None
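For what it's worth, the TypeError is only a secondary symptom: build_serialized_network() returns None when the build fails (as it does here after the Convolution tactic errors), and that None is then handed to deserialize_cuda_engine(). A small guard makes the real failure obvious (a sketch, not the sample's actual code):

```python
def deserialize_checked(runtime, plan):
    """Guard against a failed build: TensorRT's build_serialized_network()
    returns None on failure, and passing None on to
    deserialize_cuda_engine() produces the confusing
    'incompatible function arguments' TypeError seen above."""
    if plan is None:
        raise RuntimeError("engine build failed (serialized plan is None); "
                           "see the [TRT] [E] messages in the log")
    return runtime.deserialize_cuda_engine(plan)
```

With this check, the script would stop at a clear RuntimeError instead of the misleading TypeError, though the underlying Convolution failure still has to be resolved.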

Hi, please refer to the links below to perform inference in INT8.


I just want to test the official TensorRT 8.2 Python (PyTorch) sample "network_api_pytorch_mnist".
Given the error above, I want to confirm whether a WSL2 incompatibility is the cause.


TensorRT is supported on WSL2, but it looks like there are known issues related to Convolution on the WSL2 platform.
Please refer to the known-issues section in the following release notes doc for more info.

Thank you.