LNK2001 unresolved external symbol "class sample::Logger sample::gLogger" (?gLogger@sample@@3VLogger@1@A)

Description

Hello, I am trying to compile sampleOnnxMNIST as described in the NVIDIA guide, but I get this error:
LNK2001 unresolved external symbol "class sample::Logger sample::gLogger" (?gLogger@sample@@3VLogger@1@A)

What might be the issue?

Environment

TensorRT Version: 8.2.2.1
GPU Type: NVIDIA GeForce GTX 1080 Ti
Nvidia Driver Version: NVIDIA GeForce GTX 1080 Ti
CUDA Version: cuda 11.03
CUDNN Version: cudnn
Operating System + Version: Windows 10
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 2.4.0
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,

Could you please let us know which method or steps you followed to install TensorRT?
If you built from the TensorRT repo, we recommend posting your concern on https://github.com/NVIDIA/TensorRT/issues to get better help.

Thank you.

Thank you for the reply. The problem was that I hadn't linked the libraries properly. It works well now.
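For anyone else hitting this LNK2001 on Windows: it typically means the TensorRT import libraries, or the samples' common logger source, are not on the linker line. A sketch of the MSVC-side fix (library names are the usual ones shipped with TensorRT 8.x on Windows; verify them against your own <TensorRT>/lib directory):

```cpp
// Sketch only: MSVC pragmas that pull in the TensorRT import libraries.
// Equivalent to adding these under Linker > Input > Additional Dependencies.
#pragma comment(lib, "nvinfer.lib")       // core inference runtime
#pragma comment(lib, "nvonnxparser.lib")  // ONNX parser used by sampleOnnxMNIST
#pragma comment(lib, "cudart.lib")        // CUDA runtime

// Note: sample::gLogger itself is defined in the samples' common code
// (samples/common/logger.cpp), so that file must be compiled and linked
// into the project as well; including the headers alone is not enough.
```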


Hello, I have a question.

I am currently working on a project that applies a DNN for control. I have already built my network and successfully generated the TRT engine from the ONNX file using the TensorRT C++ API. The network has 2 inputs and 1 output; all inputs are sensor data (force, position). The model performs well during the training and testing phases. My question is: how do I feed the sensor readings into the input tensors for real-time inference? I will be grateful for any help; I have been struggling with this for a long time, and all the examples I have found are about images.

Also, any related code will be appreciated.

Thank you

Hi,
Please refer to the below link for the sample guide.

Refer to the installation steps from the link in case you are missing anything.

However, the suggested approach is to use TRT NGC containers to avoid any system-dependency issues.

To run the Python samples, make sure the TRT Python packages are installed when using the NGC container:
/opt/tensorrt/python/python_setup.sh

If you are trying to run a custom model, please share your model and script with us so that we can assist you better.
Thanks!
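Regarding the sensor-input question above: TensorRT does not care where the input data comes from; an image and a force/position reading are both just floats copied into the engine's input device buffers before execution. A minimal sketch, assuming a TensorRT 8.2 engine whose binding names are "force", "position", and "output" (all names and shapes here are hypothetical; query the real ones from your engine):

```cpp
// Sketch: feeding live sensor readings into a TensorRT engine each control cycle.
// Assumes `engine` and `context` were already created from your serialized plan.
#include <cuda_runtime_api.h>
#include <NvInfer.h>
#include <vector>

void runInference(nvinfer1::ICudaEngine* engine,
                  nvinfer1::IExecutionContext* context,
                  const std::vector<float>& forceSample,     // latest force reading(s)
                  const std::vector<float>& positionSample,  // latest position reading(s)
                  std::vector<float>& output)                // network output for the controller
{
    // Binding names are placeholders; use the tensor names you assigned
    // when exporting the ONNX model.
    int forceIdx = engine->getBindingIndex("force");
    int posIdx   = engine->getBindingIndex("position");
    int outIdx   = engine->getBindingIndex("output");

    void* buffers[3] = {nullptr, nullptr, nullptr};
    cudaMalloc(&buffers[forceIdx], forceSample.size() * sizeof(float));
    cudaMalloc(&buffers[posIdx],   positionSample.size() * sizeof(float));
    cudaMalloc(&buffers[outIdx],   output.size() * sizeof(float));

    // Copy the current sensor samples host -> device.
    cudaMemcpy(buffers[forceIdx], forceSample.data(),
               forceSample.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(buffers[posIdx], positionSample.data(),
               positionSample.size() * sizeof(float), cudaMemcpyHostToDevice);

    // Synchronous execution; usually adequate for a control loop.
    context->executeV2(buffers);

    // Copy the result back to the host.
    cudaMemcpy(output.data(), buffers[outIdx],
               output.size() * sizeof(float), cudaMemcpyDeviceToHost);

    for (void* b : buffers) cudaFree(b);
}
```

In a real control loop you would allocate the device buffers once at startup and repeat only the memcpy/execute steps each cycle; per-call allocation is shown here only to keep the sketch self-contained.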

I am hitting the same problem, but I can't find which library is the cause. Looking forward to your help.