Access violation in sample_onnx_mnist (Windows)

Description

While running sample_onnx_mnist from Visual Studio, the following error message appears:

Exception thrown at 0x00007FF78E9C4CD6 in sample_onnx_mnist.exe: 0xC0000005: Access violation reading location 0x0000000000000001.

Console output:

c:\vs\SDKs\TensorRT-8.0.1.6\bin>sample_onnx_mnist.exe
&&&& RUNNING TensorRT.sample_onnx_mnist [TensorRT v8001] # sample_onnx_mnist.exe
[07/08/2021-15:33:13] [I] Building and running a GPU inference engine for Onnx MNIST

Environment

TensorRT Version: 8.0.1.6
GPU Type: RTX 2070
Nvidia Driver Version: 465.89
CUDA Version: 11.3
CUDNN Version: v8.2.0.53
Operating System + Version: Win10
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

No relevant files here.

Steps To Reproduce

  • Open sample_onnx_mnist.sln
  • Add the path to the cuDNN library files
  • Build
  • Run under the debugger

Bonus - stack trace

The steps above don't give you the exact error location, just a disassembly. However, I configured the project to include debug info (see "How to: Debug a Release Build" in the Microsoft Docs). Long story short, here is the stack trace:

[Inline Frame] sample_onnx_mnist.exe!nvinfer1::IBuilder::createNetworkV2(unsigned int) Line 8009
	at c:\vs\sdks\tensorrt-8.0.1.6\include\nvinfer.h(8009)
sample_onnx_mnist.exe!SampleOnnxMNIST::build() Line 111
	at c:\vs\sdks\tensorrt-8.0.1.6\samples\sampleonnxmnist\sampleonnxmnist.cpp(111)
sample_onnx_mnist.exe!main(int argc, char * * argv) Line 394
	at c:\vs\sdks\tensorrt-8.0.1.6\samples\sampleonnxmnist\sampleonnxmnist.cpp(394)
[Inline Frame] sample_onnx_mnist.exe!invoke_main() Line 78
	at d:\agent\_work\2\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl(78)
sample_onnx_mnist.exe!__scrt_common_main_seh() Line 288
	at d:\agent\_work\2\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl(288)
kernel32.dll!00007ffb2e597034()
ntdll.dll!00007ffb2fba2651()

Hi,
Please refer to the installation steps from the link below in case you have missed anything.

However, the suggested approach is to use the TensorRT NGC containers to avoid any system-dependency issues.

To run the Python samples, make sure the TensorRT Python packages are installed when using the NGC container:
/opt/tensorrt/python/python_setup.sh

If you are trying to run a custom model, please share your model and script with us so that we can assist you better.
Thanks!

Hi,
Thanks for the quick response.

It seems my environment was a mess: multiple versions of CUDA/cuDNN/TensorRT etc. were registered in %PATH%, so the loader could resolve the TensorRT and CUDA DLLs from mismatched installations. Good old-fashioned cleanup of %PATH% solved the issue.
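
For anyone hitting the same thing: a quick way to spot this kind of %PATH% pollution is to list every PATH directory that contains a given runtime DLL; more than one hit means the loader may pick a version you did not build against. A minimal Python sketch (the DLL names below are illustrative examples for CUDA 11.x / cuDNN 8 / TensorRT 8; adjust them to whatever is installed on your machine):

```python
import os

def find_all_on_path(filename):
    """Return every directory on PATH that contains `filename`.

    More than one hit for the same CUDA/cuDNN/TensorRT DLL usually
    means multiple toolkit versions are registered on PATH and the
    Windows loader may resolve the wrong one at runtime.
    """
    hits = []
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        if directory and os.path.isfile(os.path.join(directory, filename)):
            hits.append(directory)
    return hits

if __name__ == "__main__":
    # Example DLL names only; substitute the versions you actually use.
    for dll in ("cudart64_110.dll", "cudnn64_8.dll", "nvinfer.dll"):
        dirs = find_all_on_path(dll)
        status = "OK" if len(dirs) <= 1 else "DUPLICATE"
        print(f"{dll}: {status} ({len(dirs)} hit(s))")
        for d in dirs:
            print(f"    {d}")
```

Any DLL reported as DUPLICATE is a candidate for the cleanup described above: keep only the directory belonging to the toolkit version you built against.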