TensorRT: Segmentation fault (core dumped) when running pytorch_to_trt.py example

I’m trying to run the pytorch_to_trt.py example to convert the example MNIST model written in PyTorch into a TensorRT inference engine on TensorRT 4.

My CUDA toolkit version is 9.2, and I am using Python 3.5. My machine has 2x TITAN Xp GPUs.

When running the pytorch_to_trt.py example, the application segfaults on this line:

builder.build_cuda_engine(network)
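
For context, the crash happens inside the engine build step of the sample. Below is a rough sketch of that step using the legacy trt.infer API, not the exact sample code; the logger setup, batch size, and workspace size are illustrative assumptions on my part.

import tensorrt as trt

# Sketch of the build step that crashes; names and sizes are illustrative only.
G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)
builder = trt.infer.create_infer_builder(G_LOGGER)
network = builder.create_network()

# ... convolution, pooling, fully connected, and activation layers are
# added to `network` from the PyTorch model weights here ...

builder.set_max_batch_size(1)
builder.set_max_workspace_size(1 << 20)
engine = builder.build_cuda_engine(network)  # <-- segfaults here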

Here are the last few lines of the output before it fails:

[TensorRT] INFO: --------------- Timing (Unnamed Layer* 3) [Pooling](8)
[TensorRT] INFO: Tactic -1 time 0.00512
[TensorRT] INFO: Tactic 5505281 time 0.004096
[TensorRT] INFO: Tactic 5570817 time 0.004096
[TensorRT] INFO: Tactic 5636353 time 0.004096
[TensorRT] INFO: Tactic 5701889 time 0.004096
[TensorRT] INFO: Tactic 5767425 time 0.004096
[TensorRT] INFO: Tactic 5832961 time 0.004096
[TensorRT] INFO: Tactic 5898497 time 0.004096
[TensorRT] INFO: Tactic 5964033 time 0.004096
[TensorRT] INFO: Tactic 6029569 time 0.003872
[TensorRT] INFO: Tactic 6095105 time 0.004096
[TensorRT] INFO: Tactic 6160641 time 0.004096
[TensorRT] INFO: Tactic 6226177 time 0.004096
[TensorRT] INFO: Tactic 6291713 time 0.004096
[TensorRT] INFO: Tactic 6357249 time 0.004096
[TensorRT] INFO: Tactic 6422785 time 0.005056
[TensorRT] INFO: Tactic 6488321 time 0.005088
[TensorRT] INFO: 
[TensorRT] INFO: --------------- Timing (Unnamed Layer* 4) [Fully Connected] + (Unnamed Layer* 5) [Activation](6)
Segmentation fault (core dumped)

I got the same error message when running the same example script with CUDA 9.0.

How can I resolve this issue?

I am hitting the same problem. I ran some experiments removing layers one at a time, and the problem seems to be in

relu1 = network.add_activation(fc1.get_output(0), trt.infer.ActivationType.RELU)

I hope someone can help resolve this issue.
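
To narrow it down further, here is a sketch of how the fully connected + ReLU pair could be isolated in a minimal network with the legacy trt.infer API. The input shape, layer width, and the random fc1_w / fc1_b weights are placeholders I made up for illustration, not the values from the sample, and the exact add_input signature may vary slightly between TensorRT releases.

import numpy as np
import tensorrt as trt

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)
builder = trt.infer.create_infer_builder(G_LOGGER)
network = builder.create_network()

# Placeholder input and randomly initialized weights, used only to isolate
# the fully connected + ReLU pair that seems to trigger the crash.
data = network.add_input("data", trt.infer.DataType.FLOAT, (1, 28, 28))
fc1_w = np.random.rand(500, 1 * 28 * 28).astype(np.float32)
fc1_b = np.random.rand(500).astype(np.float32)

fc1 = network.add_fully_connected(data, 500, fc1_w, fc1_b)
relu1 = network.add_activation(fc1.get_output(0), trt.infer.ActivationType.RELU)
network.mark_output(relu1.get_output(0))

builder.set_max_batch_size(1)
builder.set_max_workspace_size(1 << 20)
engine = builder.build_cuda_engine(network)  # does this minimal pair still segfault?

If this stripped-down network builds cleanly, the problem is more likely in how the weights are extracted and reshaped from the PyTorch model than in the activation layer itself.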