Inference with Inception_v2 on an OEM server with a V100.

Hello all,

I was trying to create a TensorRT engine inside the TensorRT container (image tensorrt:19.12-py3), following the guidelines of the NVIDIA-AI-IOT repo https://github.com/NVIDIA-AI-IOT/tf_trt_models#ic_models. But every time I execute the code below, my JupyterLab kernel dies.

import tensorflow.contrib.tensorrt as trt  # TF-TRT API (TensorFlow 1.x contrib)

# Convert the frozen TensorFlow graph into a TF-TRT optimized graph
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,       # frozen GraphDef of Inception_v2
    outputs=output_names,               # names of the output nodes
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,   # 32 MB TensorRT workspace
    precision_mode='FP16',
    minimum_segment_size=50             # minimum nodes per TensorRT subgraph
)
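
For reference, frozen_graph and output_names were produced with the image-classification helpers from the tf_trt_models repo README. This is roughly what I ran before the conversion step (a sketch; the exact parameter values such as num_classes may differ in my notebook):

from tf_trt_models.classification import download_classification_checkpoint, build_classification_graph

# Download the Inception_v2 checkpoint and build a frozen graph from it
checkpoint_path = download_classification_checkpoint('inception_v2')
frozen_graph, input_names, output_names = build_classification_graph(
    model='inception_v2',
    checkpoint=checkpoint_path,
    num_classes=1001
)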

Is there any possible solution for this, or is any special environment setup required to run this?