Inference issue with a TensorFlow-to-TensorRT converted model

My TX2 is flashed with the latest JetPack (4.2.2).

I converted a TensorFlow Keras model to a TensorRT graph using these parameters:

import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=output_names,
    max_batch_size=2,
    max_workspace_size_bytes=2 << 10,
    precision_mode='FP32'
)
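For context, the snippet above assumes frozen_graph and output_names were already produced by freezing the Keras model. A minimal sketch of that step, assuming TF 1.x as shipped with JetPack 4.2.2 (model_path is a hypothetical placeholder):

import tensorflow as tf

# load the trained Keras model (model_path is a placeholder)
model = tf.keras.models.load_model(model_path)
output_names = [out.op.name for out in model.outputs]

# freeze variables into constants so the graph def can be converted
sess = tf.keras.backend.get_session()
frozen_graph = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph_def, output_names)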

Then I load the graph and run inference in a for loop:

graph = tf.Graph()
with graph.as_default():
    with tf.Session(config=tf.ConfigProto(gpu_options=tf.GPUOptions(per_process_gpu_memory_fraction=0.50))) as sess:
        # obtain the corresponding input/output tensors
        tf.import_graph_def(relevance_graph, name='')
        input = sess.graph.get_tensor_by_name(input_name)
        output = sess.graph.get_tensor_by_name(output_name)
        out_pred = sess.run(output, { input_name: tensor })
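To clarify, the loop is structured roughly like this, with the graph import and session creation done once and only run() inside the loop (batches and preprocess are placeholder names for the items pulled off the queue):

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(relevance_graph, name='')
    config = tf.ConfigProto(
        gpu_options=tf.GPUOptions(per_process_gpu_memory_fraction=0.50))
    with tf.Session(graph=graph, config=config) as sess:
        input_t = sess.graph.get_tensor_by_name(input_name)
        output_t = sess.graph.get_tensor_by_name(output_name)
        for item in batches:
            tensor = preprocess(item)   # hypothetical per-item preprocessing
            out_pred = sess.run(output_t, {input_t: tensor})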

The problem is that when I run the code, the OS hangs or the worker (a Redis queue is used) dies.

What am I missing?

Hi,

How did you set up your TensorFlow package?
Do you use our official release?
Installing TensorFlow for Jetson Platform :: NVIDIA Deep Learning Frameworks Documentation
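If it helps, one quick way to check which TensorFlow build is currently installed (run inside python3):

import tensorflow as tf

# print the installed TensorFlow version and whether it was built with CUDA
print(tf.__version__)
print(tf.test.is_built_with_cuda())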

Thanks.