How can I perform inference with a TLT-exported detectnet_v2 .trt model in custom TensorFlow and Python?

I have a detectnet_v2 model developed using TLT. I would like to know the steps involved in performing inference in Python on a dGPU, and in TensorFlow.
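
My rough understanding of the TensorRT Python workflow (without DeepStream) is something like the sketch below. This is my own unverified assumption of the flow, not working code: the engine path, batch size of 1, and the detectnet_v2 binding names (I believe input_1, output_cov/Sigmoid, output_bbox/BiasAdd) are guesses on my part, and this uses the implicit-batch API, so please correct me if this is the wrong approach.

# Sketch (my assumption, unverified): deserialize a tlt-converter engine
# and run one inference with the TensorRT Python API plus PyCUDA.
import numpy as np
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine produced by tlt-converter (path is a placeholder).
with open("detectnet_v2.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()
stream = cuda.Stream()

# Allocate host/device buffers for every binding: the input image plus the
# coverage and bbox outputs of detectnet_v2; names/shapes come from the engine.
host_bufs, dev_bufs, bindings = [], [], []
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# A preprocessed CHW float image would be copied into the input buffer here;
# zeros are used only to keep the sketch self-contained.
np.copyto(host_bufs[0], np.zeros(host_bufs[0].shape, host_bufs[0].dtype))

cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async(batch_size=1, bindings=bindings, stream_handle=stream.handle)
for host, dev in zip(host_bufs[1:], dev_bufs[1:]):
    cuda.memcpy_dtoh_async(host, dev, stream)
stream.synchronize()
# host_bufs[1:] now hold the raw coverage/bbox tensors to post-process.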

Reference:

https://forums.developer.nvidia.com/t/peoplenet-coverage-output-is-always-zero/160222/14

I have cloned the repo and pulled the TensorRT container.
When I try to run the sample SSD model, I get the error below.

root@1c120ecdecac:/mnt# cd SSD_Model
root@1c120ecdecac:/mnt/SSD_Model# python detect_objects_webcam.py
2020-12-01 14:28:46.867465: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
Traceback (most recent call last):
  File "detect_objects_webcam.py", line 12, in <module>
    import utils.inference as inference_utils # TRT/TF inference wrappers
  File "/mnt/SSD_Model/utils/inference.py", line 60, in <module>
    import utils.engine as engine_utils # TRT Engine creation/save/load utils
  File "/mnt/SSD_Model/utils/engine.py", line 11, in <module>
    from utils.model import ModelData
  File "/mnt/SSD_Model/utils/model.py", line 9, in <module>
    import graphsurgeon as gs
  File "/usr/lib/python3.5/dist-packages/graphsurgeon/__init__.py", line 9, in <module>
    from graphsurgeon.StaticGraph import *
  File "/usr/lib/python3.5/dist-packages/graphsurgeon/StaticGraph.py", line 7, in <module>
    from graphsurgeon._utils import _regex_list_contains_string, _generate_iterable_for_search, _clean_input_name
  File "/usr/lib/python3.5/dist-packages/graphsurgeon/_utils.py", line 2, in <module>
    from tensorflow import NodeDef
ImportError: cannot import name 'NodeDef'
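
I suspect the graphsurgeon package shipped in the container expects TensorFlow 1.x, where NodeDef was importable from the top-level package. If that is right, a possible (unverified) workaround would be editing the failing import in graphsurgeon:

# Possible workaround (unverified): in
# /usr/lib/python3.5/dist-packages/graphsurgeon/_utils.py, change
#     from tensorflow import NodeDef
# to the TF2-compatible import:
from tensorflow.compat.v1 import NodeDef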

It would also be helpful if you could point me to other sources with information about loading a .trt model using TF-TRT or TensorRT.
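
On the TF-TRT side, my understanding is that it optimizes an existing TensorFlow SavedModel rather than loading a standalone .trt engine, roughly like the sketch below (TF 1.x API; the paths are placeholders). Please correct me if that understanding is wrong.

# Sketch of TF-TRT conversion as I understand it (TF 1.x API; paths are
# placeholders). This converts a TF graph; it does not consume a .trt file.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverter(
    input_saved_model_dir="/path/to/saved_model",  # placeholder
    precision_mode="FP16")
converter.convert()                         # build the TRT-optimized graph
converter.save("/path/to/saved_model_trt")  # placeholder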

See also the related topic "Apart from Deepstream where else I can deploy tlt-converted models or .trt engine files".