How can I perform inference with a TLT-exported detectnet_v2 .trt model in custom TensorFlow and Python code?

I have a detectnet_v2 model developed with TLT. I would like to know the steps involved in performing inference in Python on a dGPU, and in TensorFlow.
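For reference, here is a minimal sketch of what pure-TensorRT inference on a serialized .trt engine typically looks like in Python, using the `tensorrt` and `pycuda` packages. Everything here is an assumption about a generic setup: the input dimensions, the preprocessing (detectnet_v2 generally expects planar CHW, RGB, float32 in [0, 1], but the exact shape depends on your training spec), and the function names are illustrative, not taken from any TLT sample.

```python
import numpy as np

def preprocess(image_bgr, input_shape=(3, 544, 960)):
    """Convert an HxWx3 uint8 BGR image to the planar float32 CHW tensor
    detectnet_v2 typically expects (RGB, scaled to [0, 1]). The shape
    (3, 544, 960) is a placeholder; use the dims from your training spec.
    The image is assumed to be resized to (h, w) already."""
    img = image_bgr[..., ::-1].astype(np.float32) / 255.0  # BGR -> RGB, [0, 1]
    return np.ascontiguousarray(img.transpose(2, 0, 1))   # HWC -> CHW

def infer(engine_path, input_tensor):
    """Deserialize a serialized TensorRT engine and run one inference.
    Imports are local so the CPU-only preprocessing above is usable
    without a GPU or TensorRT installed."""
    import tensorrt as trt
    import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
    import pycuda.driver as cuda

    logger = trt.Logger(trt.Logger.WARNING)
    with open(engine_path, "rb") as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    with engine.create_execution_context() as context:
        # Allocate device buffers for every binding (inputs and outputs).
        bindings, host_outputs = [], []
        for i in range(engine.num_bindings):
            size = trt.volume(engine.get_binding_shape(i))
            dtype = trt.nptype(engine.get_binding_dtype(i))
            dev = cuda.mem_alloc(size * np.dtype(dtype).itemsize)
            bindings.append(int(dev))
            if engine.binding_is_input(i):
                cuda.memcpy_htod(dev, np.ascontiguousarray(input_tensor, dtype=dtype))
            else:
                host_outputs.append((dev, np.empty(size, dtype=dtype)))
        context.execute_v2(bindings)
        results = []
        for dev, out in host_outputs:
            cuda.memcpy_dtoh(out, dev)  # copy raw output back to host
            results.append(out)
    return results  # detectnet_v2 outputs (coverage, bbox) still need postprocessing
```

Note that the raw outputs of detectnet_v2 (coverage and bbox tensors) still require clustering/postprocessing to become final detections; that step is not shown here.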


I have cloned the repo and pulled the TensorRT container.
When I try to run the sample SSD model, I get the error below.

root@1c120ecdecac:/mnt# cd SSD_Model
root@1c120ecdecac:/mnt/SSD_Model# python
2020-12-01 14:28:46.867465: I tensorflow/stream_executor/platform/default/] Successfully opened dynamic library
Traceback (most recent call last):
File "", line 12, in
import utils.inference as inference_utils # TRT/TF inference wrappers
File "/mnt/SSD_Model/utils/", line 60, in
import utils.engine as engine_utils # TRT Engine creation/save/load utils
File "/mnt/SSD_Model/utils/", line 11, in
from utils.model import ModelData
File "/mnt/SSD_Model/utils/", line 9, in
import graphsurgeon as gs
File "/usr/lib/python3.5/dist-packages/graphsurgeon/", line 9, in
from graphsurgeon.StaticGraph import *
File "/usr/lib/python3.5/dist-packages/graphsurgeon/", line 7, in
from graphsurgeon._utils import _regex_list_contains_string, _generate_iterable_for_search, _clean_input_name
File "/usr/lib/python3.5/dist-packages/graphsurgeon/", line 2, in
from tensorflow import NodeDef
ImportError: cannot import name 'NodeDef'
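This ImportError usually means graphsurgeon (written against TensorFlow 1.x, where `NodeDef` was exported at the top level) is running against TensorFlow 2.x, where `NodeDef` is only reachable under `tensorflow.compat.v1`. Assuming that is the cause here, the usual options are to install a 1.x TensorFlow in the container, or to patch the failing import in the graphsurgeon source. A version-tolerant patch fragment for that one line could look like:

```python
# In graphsurgeon's _utils.py, replace `from tensorflow import NodeDef` with:
try:
    from tensorflow import NodeDef            # TF 1.x top-level export
except ImportError:
    from tensorflow.compat.v1 import NodeDef  # TF 2.x location
```

This is a fragment meant to be applied inside the installed graphsurgeon package, not a standalone script.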

It would also be helpful if you could point me to other sources with information about loading a .trt model using TF-TRT or TensorRT.
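On the TF-TRT side, one point worth noting: TF-TRT optimizes a TensorFlow SavedModel in place; it does not consume a standalone .trt engine file. A tlt-exported .trt engine is loaded with the TensorRT runtime directly (as in the pure-TensorRT path). A sketch of the TF-TRT conversion flow using the TF 2.x `TrtGraphConverterV2` API, with placeholder paths and a hypothetical wrapper function name, could look like:

```python
def convert_and_load(saved_model_dir, output_dir):
    """Optimize a TensorFlow SavedModel with TF-TRT, save the result,
    and return the default serving function for inference.
    Requires a GPU-enabled TensorFlow build with TensorRT support;
    imports are local so this file can be loaded without TensorFlow."""
    import tensorflow as tf
    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    # Convert the SavedModel: TensorRT-compatible subgraphs are replaced
    # with TRTEngineOp nodes; unsupported ops stay in TensorFlow.
    converter = trt.TrtGraphConverterV2(input_saved_model_dir=saved_model_dir)
    converter.convert()
    converter.save(output_dir)

    # Reload the optimized SavedModel and grab its serving signature.
    loaded = tf.saved_model.load(output_dir)
    return loaded.signatures["serving_default"]
```

Usage would then be along the lines of `infer_fn = convert_and_load("my_saved_model", "my_trt_saved_model")` followed by calling `infer_fn` on a batched input tensor; both directory names here are placeholders.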

Refer to the topic "Apart from Deepstream where else I can deploy tlt-converted models or .trt engine files"