When trying to link TensorRT 5.1.2.2 statically and use the ONNX parser, I get many undefined references like the following:
/usr/lib/x86_64-linux-gnu/libnvonnxparser_static.a(NvOnnxParser.cpp.o):(.data.rel.ro._ZTVN8onnx2trt21TypeSerializingPluginE[_ZTVN8onnx2trt21TypeSerializingPluginE]+0x48):
undefined reference to `onnx2trt::PluginAdapter::initialize
And indeed, scanning the static libs in /usr/lib/x86_64-linux-gnu and /usr/local/cuda/lib64, the definitions cannot be found in any static lib provided by the libnvinfer-dev package.
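(For reference, one way to check whether a symbol is actually defined in a static archive is to dump and demangle its symbol table with nm; the grep pattern below is just an example. A "T" or "W" entry means the symbol is defined there, while "U" means it is only referenced:)

nm -C /usr/lib/x86_64-linux-gnu/libnvonnxparser_static.a | grep "PluginAdapter::initialize"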
Building with shared libraries compiles successfully, however.
Any help with static linking of TensorRT would be much appreciated.
The version that I use is 7.2.3.4 and I experience similar linking issues.
Some “undefined reference” issues are fixed by adding the CUDA 11 libraries, but I think that might be the wrong approach.
Hi,
We request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:
1) Validate your model with the below snippet:
check_model.py
import sys
import onnx

# Path to your ONNX model, passed as the first command-line argument
filename = sys.argv[1]
model = onnx.load(filename)
# Raises an exception if the model is invalid
onnx.checker.check_model(model)
2) Try running your model with the trtexec command: https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, we request you to share the trtexec --verbose log for further debugging.
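For example, a typical invocation (the model filename here is a placeholder) looks like:

trtexec --onnx=your_model.onnx --verbose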
Thanks!
How is the ONNX model relevant to this discussion?
The issue is with static linking against TensorRT. When I link dynamically (against the .so libraries), the program compiles successfully.
In my case, the original issue was caused by passing the linked libraries to the linker in the wrong order. The order in which static libraries are passed to the linker matters. The issue should be fixable by adjusting the order of the libraries; in my case, the following order works:
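Roughly, and only as an illustrative sketch (the exact library names, order, and additional dependencies are assumptions and depend on your TensorRT/CUDA installation), the parser archive goes before the TensorRT core archives, which go before the CUDA runtime. With GNU ld you can also wrap mutually dependent static archives in --start-group/--end-group so the linker rescans them and the exact order matters less:

# Hypothetical link line; adjust library names and paths to your installation
g++ main.o -o app \
  -Wl,--start-group \
  -lnvonnxparser_static -lnvinfer_plugin_static -lnvinfer_static \
  -Wl,--end-group \
  -L/usr/local/cuda/lib64 -lcudart_static \
  -lpthread -ldl -lrt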