The inference of the inception_v3_retrained model starts and then crashes with a "Segmentation fault (core dumped)" error

I retrained the Inception v3 model on a new data set using Keras, converted the .h5 model file to a .pb file, deployed the .pb file on a Jetson TX2, and converted the .pb file to a .plan file.

I have done this process before, and my previously generated models (.plan format) work perfectly fine. However, when I use the newly generated model for inference, it starts classifying, gives results for the first image, and then crashes with the following error message: Segmentation fault (core dumped).

nvidia@tegra-ubuntu:~/rm_project/fedexte-image-classification-tx2$ ./inference_engine -d jetson_test/ test_gen2.plan label_rm.txt inception_v3_input dense_1/Softmax
Loading TensorRT engine from plan file…
Processing input…(filename: jetson_test//001355.jpg)
Processing time for image: 48.3886ms (20.666 fps)

Classified as: vehicle

Classification Distribution:
  0. background : 0.06691%
  1. person : 0.0439464%
  2. vehicle : 99.8891%
Segmentation fault (core dumped)

Any assistance will be much appreciated.

best regards


"Segmentation fault (core dumped)" is usually caused by an invalid memory access.
Do you run the TensorRT engine with customized source code?

If yes, would you mind validating the input/output buffers for nvinfer1::IExecutionContext::enqueue() first?
It's also recommended to check your plan file with our trtexec, as below:

$ /usr/src/tensorrt/bin/trtexec --loadEngine=jetson_test/test_gen2.plan


Hi AastaLLL,

Many thanks for your assistance. I found a bug in the CMakeCache files, and fixing it seems to have resolved the error.

I now have an Inception v3 architecture (custom-layer-trained) model deployed on the Jetson TX2 and am able to perform inference. I am thinking of moving to the NVIDIA Jetson Xavier, since it offers Tensor Cores and more CUDA cores than the TX2, enabling even greater performance. I currently freeze the graph, generate the plan file, and run an inference script on the Jetson TX2. Can I use the same project files to build the binary and generate the plan on the Xavier, or does deployment on the Xavier require a separate process?


If the JetPack version is the same, you should be able to reuse the same source.
But please note that the TensorRT engine needs to be regenerated, since it is hardware-dependent.
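In practice, regenerating the engine means re-running the plan generation step on the Xavier itself, because serialized TensorRT plans are tied to the GPU architecture they were built on. One way to do this with the bundled trtexec, assuming the frozen graph has first been converted to UFF (the file names and input shape below are hypothetical):

```
# Run on the Xavier itself; a TX2-built .plan will not deserialize here.
$ /usr/src/tensorrt/bin/trtexec --uff=model.uff \
    --uffInput=inception_v3_input,3,299,299 \
    --output=dense_1/Softmax \
    --saveEngine=test_gen2_xavier.plan
```

The inference script can then load the Xavier-built plan exactly as it loads the TX2 one today.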