OS: Ubuntu 16.04
GPU type: 1080 Ti
nvidia driver version: 390.59
CUDA version: 9
CUDNN version: 7.0.5
TensorRT version: TensorRT 5 GA
Hi.
I use the ONNX parser to convert an ONNX model into a serialized TRT engine file, and then load that TRT file directly for inference. I wrote this process with reference to the following.
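To illustrate the conversion step (ONNX model → serialized TRT plan file), here is a rough sketch of my own, not the referenced code. `writeEngineFile` is a hypothetical helper name, and the TensorRT 5 calls are shown only as comments since they require the TensorRT headers:

```cpp
#include <fstream>
#include <stdexcept>
#include <string>

// Write a serialized engine blob to a .trt plan file in binary mode.
void writeEngineFile(const std::string& path, const void* data, std::size_t size)
{
    std::ofstream file(path, std::ios::binary);
    if (!file)
        throw std::runtime_error("cannot open " + path);
    file.write(static_cast<const char*>(data),
               static_cast<std::streamsize>(size));
}

// Rough TensorRT 5 usage (commented out; needs NvInfer.h / NvOnnxParser.h):
//   auto builder = nvinfer1::createInferBuilder(gLogger);
//   auto network = builder->createNetwork();
//   auto parser  = nvonnxparser::createParser(*network, gLogger);
//   parser->parseFromFile("model_up.onnx", /*verbosity=*/0);
//   auto engine  = builder->buildCudaEngine(*network);
//   nvinfer1::IHostMemory* plan = engine->serialize();
//   writeEngineFile("model_up.trt", plan->data(), plan->size());
```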
mnist, vgg, etc. loaded successfully, but my custom model could not be loaded. On investigation, the problem seems to be related to the "Upsample" layer in the custom model: loading succeeded after deleting this layer.
Incidentally, when I skip deserializeCudaEngine (i.e. do not load from the TRT file) and instead use the following trt_engine built directly from the ONNX model, inference succeeds.
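For clarity, the failing path reads the whole .trt plan file into memory and passes it to deserializeCudaEngine. A minimal sketch of my own (not the code referenced above): `readEngineFile` is a hypothetical helper name, and the TensorRT call is shown as a comment since it needs the TensorRT runtime headers.

```cpp
#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

// Read a serialized .trt plan file into a byte buffer.
std::vector<char> readEngineFile(const std::string& path)
{
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file)
        throw std::runtime_error("cannot open " + path);
    std::streamsize size = file.tellg();
    file.seekg(0, std::ios::beg);
    std::vector<char> buffer(static_cast<std::size_t>(size));
    if (!file.read(buffer.data(), size))
        throw std::runtime_error("failed to read " + path);
    return buffer;
}

// Rough TensorRT 5 usage (commented out; needs NvInfer.h):
//   std::vector<char> plan = readEngineFile("model_up.trt");
//   nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
//   nvinfer1::ICudaEngine* engine =
//       runtime->deserializeCudaEngine(plan.data(), plan.size(), nullptr);
```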
Can you share the traceback from the segmentation fault? Can you share a repro of your ONNX model that exhibits the segfault? You can DM me directly if you'd like.
model.onnx is an ONNX model without an upsample layer.
model.trt is a TRT model converted from model.onnx.
This model can be loaded without error at deserializeCudaEngine.
model_up.onnx is an ONNX model with an upsample layer.
model_up.trt is a TRT model converted from model_up.onnx.
This model causes a segmentation fault at deserializeCudaEngine.
Our engineers are able to parse model_up.onnx, generate a TRT engine locally, and reload it with deserializeCudaEngine just fine, but loading your model_up.trt produces an error.