Hi guys:
My platform details:
- Ubuntu 18.04.2 LTS
- GTX 650
- Driver 418
- CUDA 10.0
- CUDNN 7.5.1
- PyTorch 1.0
- TensorRT 5.0.2
I’m trying to deploy a DeepLabV3 model for segmentation. My ONNX model imports into a TRT engine correctly, but I get a segmentation fault at context.enqueue. Here is my gdb info:
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fffd3a72700 (LWP 6545)]
[New Thread 0x7fffd3271700 (LWP 6546)]
[New Thread 0x7fffd2a70700 (LWP 6547)]
----------------------------------------------------------------
Input filename: ../../model/deeplabv3dupsample_bs1.onnx
ONNX IR version: 0.0.3
Opset version: 9
Producer name: pytorch
Producer version: 0.4
Domain:
Model version: 0
Doc string:
----------------------------------------------------------------
[New Thread 0x7fffd1b53700 (LWP 6549)]
Thread 1 "segNet" received signal SIGSEGV, Segmentation fault.
0x00007ffff0af389b in nvinfer1::rt::cuda::PluginLayer::execute(nvinfer1::rt::CommonContext const&, nvinfer1::rt::ExecutionParameters const&) const () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.5
(gdb) bt
#0 0x00007ffff0af389b in nvinfer1::rt::cuda::PluginLayer::execute(nvinfer1::rt::CommonContext const&, nvinfer1::rt::ExecutionParameters const&) const () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.5
#1 0x00007ffff0ab6c86 in nvinfer1::rt::ExecutionContext::execute(int, void**) () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.5
#2 0x00005555555616c7 in doInference(nvinfer1::IExecutionContext&, float*, float*, int) ()
#3 0x0000555555561907 in main ()
I don’t have any plugin layer in my network, so I’m confused why the crash is inside PluginLayer::execute.
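For context, my doInference follows the usual TensorRT 5 pattern. This is a simplified sketch, not my exact code — the binding names, input resolution, and class count are illustrative placeholders:

```cpp
#include <cuda_runtime_api.h>
#include <NvInfer.h>

// Sketch of doInference: allocate device buffers for the engine bindings,
// copy the input to the device, run enqueue on a stream, copy the output back.
void doInference(nvinfer1::IExecutionContext& context, float* input, float* output, int batchSize)
{
    const nvinfer1::ICudaEngine& engine = context.getEngine();
    // One input and one output binding (names and sizes are illustrative).
    const int inputIndex  = engine.getBindingIndex("input");
    const int outputIndex = engine.getBindingIndex("output");

    void* buffers[2];
    size_t inputSize  = batchSize * 3  * 513 * 513 * sizeof(float);  // illustrative
    size_t outputSize = batchSize * 21 * 513 * 513 * sizeof(float);  // illustrative
    cudaMalloc(&buffers[inputIndex], inputSize);
    cudaMalloc(&buffers[outputIndex], outputSize);

    cudaStream_t stream;
    cudaStreamCreate(&stream);
    cudaMemcpyAsync(buffers[inputIndex], input, inputSize, cudaMemcpyHostToDevice, stream);
    context.enqueue(batchSize, buffers, stream, nullptr);  // <-- SIGSEGV happens here
    cudaMemcpyAsync(output, buffers[outputIndex], outputSize, cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);

    cudaStreamDestroy(stream);
    cudaFree(buffers[inputIndex]);
    cudaFree(buffers[outputIndex]);
}
```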
I also generated the core dump file, but it’s 15 GB, so it’s not easy to share. Can you figure out this issue from the information above? Thanks!