Running ./sample_mnist fails

Hi all,
I followed the instructions (https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html#installing-tar) to install TensorRT step by step, but running sampleMNIST fails.
The error output is:
&&&& RUNNING TensorRT.sample_mnist # ./sample_mnist [-h]
[I] Building and running a GPU inference engine for MNIST
[E] [TRT] engine.cpp (185) - cuBLAS Error in initializeCommonContext: 1 (Could not initialize cublas, please check cuda installation.)
[E] [TRT] engine.cpp (185) - cuBLAS Error in initializeCommonContext: 1 (Could not initialize cublas, please check cuda installation.)
&&&& FAILED TensorRT.sample_mnist # ./sample_mnist [-h]
It looks like something is wrong with the CUDA installation. I installed CUDA 9.0.176 and cuDNN 7.5, and added the following to ~/.bashrc:
export PATH=/usr/local/cuda-9.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64:$LD_LIBRARY_PATH
I can import tensorflow-gpu in Python 3.5 successfully, so CUDA 9.0 seems to be installed correctly. How can I fix this TensorRT error? Thanks.
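One thing worth checking directly: the cuBLAS error usually means the dynamic loader cannot find libcublas at runtime, which happens when the CUDA lib64 directory is missing from LD_LIBRARY_PATH (for example, because a colon separator was left out). A small sketch, using a hypothetical helper name, to verify the entry is really there:

```shell
# Check whether a directory appears as one entry of a colon-separated
# search path. (path_contains is a hypothetical helper for this check.)
# Usage: path_contains "$LD_LIBRARY_PATH" /usr/local/cuda-9.0/lib64
path_contains() {
  case ":$1:" in
    *":$2:"*) echo yes ;;
    *)        echo no  ;;
  esac
}

# Verify the CUDA lib64 directory (default tar-install location from
# the question) is actually on LD_LIBRARY_PATH:
path_contains "$LD_LIBRARY_PATH" /usr/local/cuda-9.0/lib64
```

If this prints `no` in the shell where you run ./sample_mnist, the exports in ~/.bashrc are not taking effect (or are concatenating entries without a colon).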

Hi @wennysprin, have you also added the TensorRT directories to PATH and LD_LIBRARY_PATH? The CUDA, cuDNN, and TensorRT versions also need to be consistent with one another.
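For the tar install, the TensorRT lib directory must be on LD_LIBRARY_PATH in addition to the CUDA one. A minimal sketch, assuming the tar was extracted to a hypothetical $HOME/TensorRT-5.x (substitute your actual path and version):

```shell
# Hypothetical extraction path -- replace with your real TensorRT directory.
# Note the colons so existing entries are appended to, not overwritten.
export LD_LIBRARY_PATH=$HOME/TensorRT-5.x/lib:/usr/local/cuda-9.0/lib64:$LD_LIBRARY_PATH
```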

BTW, trying the NGC Docker images first is recommended; it saves you from configuring the environment yourself:

http://ngc.nvidia.com

https://github.com/NVIDIA/nvidia-docker/
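For example, something like the following pulls and runs a TensorRT container (the tag below is illustrative; pick one from the NGC registry whose CUDA version matches your host driver):

```shell
# Illustrative tag -- check the NGC page for a tag compatible with your driver.
docker pull nvcr.io/nvidia/tensorrt:19.05-py3
nvidia-docker run -it --rm nvcr.io/nvidia/tensorrt:19.05-py3
```

Inside the container, CUDA, cuDNN, cuBLAS, and TensorRT are already installed with matching versions, so the samples should build and run without touching LD_LIBRARY_PATH.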