“Segmentation Fault (core dumped)” error on the Jetson Nano 2GB

Requesting a solution for the “Segmentation Fault (core dumped)” error on the Jetson Nano 2GB while running prediction through a TF-TRT model.

I successfully converted my TF native FP32 model (mymodel.h5, size = 17.0 MB) to a TF-TRT FP32 model. Now I have an output directory named mymodel_saved_model_TFTRT_FP32, which contains assets, variables, and saved_model.pb. While using mymodel_saved_model_TFTRT_FP32 for prediction, I used the utility functions shown below. During execution of the python3 script, I got the error “Segmentation fault (core dumped)”. It also shows the linked TensorRT version: 8.0.1 and the loaded TensorRT version: 8.2.1. However, at the time of the TF native FP32 to TF-TRT FP32 conversion, I did not receive any TensorRT version information. Below, I have attached screenshots showing the error and the memory usage.
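
For reference, the conversion step followed the notebook’s TrtGraphConverterV2 pattern, roughly along these lines (the intermediate SavedModel directory name here is illustrative, not my exact path):

# Rough sketch of the TF-TRT FP32 conversion (directory names are illustrative).
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='mymodel_saved_model',      # native TF SavedModel exported from mymodel.h5
    precision_mode=trt.TrtPrecisionMode.FP32)
converter.convert()
converter.save('mymodel_saved_model_TFTRT_FP32')      # output directory mentioned above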

I did some research on this and found it may be caused by the Nano 2GB board’s RAM or by a TensorRT version mismatch, but I am not fully sure about this.

Note: https://github.com/tensorflow/tensorrt/blob/master/tftrt/benchmarking-python/image_classification/NGC-TFv2-TF-TRT-inference-from-Keras-saved-model.ipynb.

I followed this ipynb file and got stuck at cell no. [17]; everything before that completed successfully.



Environment:

Jetson Nano 2GB board
jetson-nano-jp461-sd-card-image
Architecture: arm64
Python3
TensorRT: 8.2.1
TensorFlow: 2.6.2+nv21.12
Ubuntu: 18.04
CUDA: 10.2

# Imports needed by the snippets below.
import time
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
from tensorflow.python.saved_model import tag_constants

def predict_tftrt(input_saved_model):
    """Runs prediction on a single image and shows the result.

    input_saved_model (string): Name of the input model stored in the current dir
    """
    img_path = './data/img0.JPG'  # Siberian_husky
    img = image.load_img(img_path, target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    x = tf.constant(x)

    # Load the TF-TRT SavedModel and inspect its serving signature.
    saved_model_loaded = tf.saved_model.load(input_saved_model, tags=[tag_constants.SERVING])
    signature_keys = list(saved_model_loaded.signatures.keys())
    print(signature_keys)

    infer = saved_model_loaded.signatures['serving_default']
    print(infer.structured_outputs)

    # Run inference and decode the top-3 predictions.
    labeling = infer(x)
    preds = labeling['predictions'].numpy()
    print('{} - Predicted: {}'.format(img_path, decode_predictions(preds, top=3)[0]))
    plt.subplot(2, 2, 1)
    plt.imshow(img)
    plt.axis('off')
    plt.title(decode_predictions(preds, top=3)[0][0][1])

def benchmark_tftrt(input_saved_model):
    saved_model_loaded = tf.saved_model.load(input_saved_model, tags=[tag_constants.SERVING])
    infer = saved_model_loaded.signatures['serving_default']

    N_warmup_run = 50
    N_run = 1000
    elapsed_time = []

    # Warm-up runs so engine build / caching does not skew the timings.
    for i in range(N_warmup_run):
        labeling = infer(batched_input)

    # Timed runs; report the moving average every 50 steps.
    for i in range(N_run):
        start_time = time.time()
        labeling = infer(batched_input)
        end_time = time.time()
        elapsed_time = np.append(elapsed_time, end_time - start_time)
        if i % 50 == 0:
            print('Step {}: {:4.1f}ms'.format(i, (elapsed_time[-50:].mean()) * 1000))

    print('Throughput: {:.0f} images/s'.format(N_run * batch_size / elapsed_time.sum()))

predict_tftrt('resnet50_saved_model_TFTRT_FP32')
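
Note: batched_input and batch_size in benchmark_tftrt are defined earlier in the notebook; a minimal sketch of how such a batch is built (image paths are placeholders, reusing the imports above) is:

# Sketch of how batched_input / batch_size are typically prepared (placeholder image paths).
batch_size = 8
batched_input = np.zeros((batch_size, 224, 224, 3), dtype=np.float32)
for i in range(batch_size):
    img = image.load_img('./data/img%d.JPG' % i, target_size=(224, 224))
    batched_input[i, :] = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
batched_input = tf.constant(batched_input)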

I request you to kindly help me in resolving the issue.

Thanking You

Hi,

It looks like your TensorFlow build is linked against a different TensorRT version from the one installed on the board.

How did you install TensorFlow?
It’s recommended to use our prebuilt package.

The compatible package of JetPack 4.6.1 can be found at the following link:
https://developer.download.nvidia.com/compute/redist/jp/v461/tensorflow/
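
For JetPack 4.6.1, the install command typically looks like the sketch below (please follow the instructions on the page above for the prerequisite apt packages):

sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v461 tensorflow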

Thanks.

Hello,

Thanks for the suggestions. Now the conversion is working perfectly fine. However, when the precision mode is set to INT8, I am getting lower throughput than with FP16. Does the Jetson Nano 2GB support INT8 precision mode?
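
For reference, the INT8 conversion was set up roughly like this (the calibration input function below is only a placeholder sketch, not my real calibration data):

# Rough sketch of the TF-TRT INT8 conversion; INT8 additionally needs a calibration input_fn.
import numpy as np
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='mymodel_saved_model',      # illustrative path
    precision_mode=trt.TrtPrecisionMode.INT8,
    use_calibration=True)

def calibration_input_fn():
    # Yield a few representative batches; random data is only a placeholder here.
    for _ in range(8):
        yield (np.random.random((1, 224, 224, 3)).astype(np.float32),)

converter.convert(calibration_input_fn=calibration_input_fn)
converter.save('mymodel_saved_model_TFTRT_INT8')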

Thank You

Hi,

Nano doesn’t have Tensor Cores, so it cannot support INT8 models.
Below is the TensorRT support matrix for your reference:
(Nano’s GPU architecture is 5.3)
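
If you want to confirm this on the device itself, the TensorRT Python bindings report fast-precision support directly (a sketch, assuming the python3-libnvinfer bindings from JetPack are installed):

# Query fast FP16/INT8 support reported by TensorRT on this GPU.
import tensorrt as trt

builder = trt.Builder(trt.Logger(trt.Logger.WARNING))
print('Fast FP16:', builder.platform_has_fast_fp16)
print('Fast INT8:', builder.platform_has_fast_int8)   # expected False on Nano (SM 5.3)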

Thanks.
