I followed the "How to Speed Up Deep Learning Inference Using TensorRT" blog post (https://devblogs.nvidia.com/speed-up-inference-tensorrt/) and ran into this linker error:
cc -L/home/beiqi.qh/workplace/cuda-9.0/lib64 -L/home/beiqi.qh/workplace/cuda-9.0/lib64/stubs -L/home/beiqi.qh/workplace/tensor_rt/TensorRT-5.0.2.6/lib -L/home/beiqi.qh/workplace/tensor_rt/code-samples/posts/TensorRT-introduction/onnx -L/home/beiqi.qh/workplace/tensor_rt/code-samples/posts/TensorRT-introduction/opt/lib simpleOnnx_1.o ioHelper.o -Wl,--start-group -lnvinfer -lnvonnxparser -lcudart_static -lrt -ldl -lpthread -lprotobuf -lonnx -lonnx_proto -lstdc++ -lm -Wl,--end-group -o simpleOnnx_1
/home/beiqi.qh/workplace/tensor_rt/code-samples/posts/TensorRT-introduction/onnx/libonnx_proto.a(onnx.pb.cc.o): In function `onnx::TypeProto::ByteSizeLong() const':
onnx.pb.cc:(.text+0x217): undefined reference to `google::protobuf::internal::WireFormat::ComputeUnknownFieldsSize(google::protobuf::UnknownFieldSet const&)'
/home/beiqi.qh/workplace/tensor_rt/code-samples/posts/TensorRT-introduction/onnx/libonnx_proto.a(onnx.pb.cc.o): In function `onnx::OperatorSetIdProto::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const':
onnx.pb.cc:(.text+0x255): undefined reference to `google::protobuf::internal::WireFormatLite::WriteStringMaybeAliased(int, std::string const&, google::protobuf::io::CodedOutputStream*)'
onnx.pb.cc:(.text+0x26b): undefined reference to `google::protobuf::internal::WireFormatLite::WriteInt64(int, long, google::protobuf::io::CodedOutputStream*)'
/home/beiqi.qh/workplace/tensor_rt/code-samples/posts/TensorRT-introduction/onnx/libonnx_proto.a(onnx.pb.cc.o): In function `onnx::OperatorSetIdProto::ByteSizeLong() const':
onnx.pb.cc:(.text+0x2f8): undefined reference to `google::protobuf::internal::WireFormat::ComputeUnknownFieldsSize(google::protobuf::UnknownFieldSet const&)'
/home/beiqi.qh/workplace/tensor_rt/code-samples/posts/TensorRT-introduction/onnx/libonnx_proto.a(onnx.pb.cc.o): In function `onnx::AttributeProto::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const':
onnx.pb.cc:(.text+0x37a): undefined reference to `google::protobuf::internal::WireFormatLite::WriteFloat(int, float, google::protobuf::io::CodedOutputStream*)'
onnx.pb.cc:(.text+0x3a9): undefined reference to `google::protobuf::internal::WireFormatLite::WriteInt64(int, long, google::protobuf::io::CodedOutputStream*)'
onnx.pb.cc:(.text+0x3d9): undefined reference to `google::protobuf::internal::WireFormatLite::WriteBytes(int, std::string const&, google::protobuf::io::CodedOutputStream*)'
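For what it's worth, undefined references into google::protobuf::internal at link time usually mean libonnx_proto.a was generated with one protobuf version but is being linked against a different libprotobuf. A rough way to check, as a sketch (the library paths below are assumptions taken from the link command above, and may differ on your machine):

```shell
# Version of the protoc that (presumably) generated onnx.pb.cc
protoc --version

# Protobuf symbols that libonnx_proto.a expects but does not define ('U' = undefined)
nm -C /home/beiqi.qh/workplace/tensor_rt/code-samples/posts/TensorRT-introduction/onnx/libonnx_proto.a \
  | grep ' U .*WireFormat' | sort -u

# Check whether the libprotobuf actually on the link line defines them ('T' = defined)
nm -C /home/beiqi.qh/workplace/tensor_rt/code-samples/posts/TensorRT-introduction/opt/lib/libprotobuf.a \
  | grep ' T .*ComputeUnknownFieldsSize'
```

If the symbols the archive needs are missing from the libprotobuf being linked, rebuilding onnx/onnx_proto and libprotobuf with the same protobuf release should resolve it.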
Hello,
It looks like there is a configuration issue with your environment. To avoid dependency issues like this, the easiest way to get started is to download the TensorRT container from the NVIDIA Container Registry, part of the NVIDIA GPU Cloud (NGC), which is free.
Regards,
NVES
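For anyone following along, pulling and running the TensorRT container from NGC looks roughly like this. The tag 19.01-py3 is only an example; check the NGC registry for the tags currently available, and note that older Docker versions (pre-19.03) use the nvidia-docker wrapper for GPU access:

```shell
# Log in to the NGC registry (requires a free NGC account and API key)
docker login nvcr.io

# Pull a TensorRT container image (tag is an example; pick one from NGC)
docker pull nvcr.io/nvidia/tensorrt:19.01-py3

# Run it interactively with GPU access
nvidia-docker run -it --rm nvcr.io/nvidia/tensorrt:19.01-py3
```

Inside the container, TensorRT, CUDA, and a matching protobuf are preinstalled, so the sample Makefiles link cleanly without building ONNX/protobuf yourself.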