TensorRT deployment with engine generated from TLT example

Looking for sample Python code to run TensorRT inference with an engine that was created with the TLT example.

After going through the TLT detectnet_v2 example, I was able to generate the engine file "resnet18_detector_baru.engine".
I would like to run the engine with TensorRT on a Jetson Nano. However, I'm not able to find examples or clear instructions on how to load the engine into TensorRT. Any suggestion or direction would be greatly appreciated.
Thanks,
Terry

The following two blog posts helped; however, they did not provide enough detail.

I ended up with the following code:

import os
import cv2
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine generated by TLT
f = open('/home/trafficcountuser/sl/testpy/resnet18_detector_baru.engine', 'rb')
runtime = trt.Runtime(TRT_LOGGER)
engine = runtime.deserialize_cuda_engine(f.read())

# Build the path to the test image relative to this script
sourceFileDir = os.path.dirname(os.path.abspath(__file__))
sourceFileDir = os.path.abspath(os.path.join(sourceFileDir, "../"))
fileName = os.path.join(sourceFileDir, "runTime", "Image640x360.bmp")
thisImage = cv2.imread(fileName)

result = engine.infer(thisImage)  # Single function for inference

There is an error on the last line:

Exception has occurred: AttributeError
'tensorrt.tensorrt.ICudaEngine' object has no attribute 'infer'

Any idea about this message?
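
(Side note for anyone hitting the same error: ICudaEngine in the TensorRT Python API has no infer() method. Inference goes through an execution context created with engine.create_execution_context(), plus explicitly allocated input/output device buffers; a fuller sketch of that flow appears later in this thread.)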

Hi @terry.l.lee,
Can you try running the below command on your engine?
trtexec --loadEngine=your_engine-file --batch=1
and see if you are getting the inference?
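For context: trtexec just loads the serialized engine and runs it end to end with generated input data, reporting latency and throughput; it does not do any DetectNet_v2-specific pre- or post-processing. If you also want to see the raw output tensors, adding --dumpOutput to the command above should print them.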

Thanks!

Hi @AakankshaS,
Thanks a lot for your help. After spending weeks on TLT, I feel that I'm close to getting it all working.

After building trtexec, I was able to run the command:
/usr/src/tensorrt/bin/trtexec --loadEngine=resnet18_detector_baru.engine --batch=1

The output seems to indicate that inference is working.

What is the next step?

I was trying to run the .engine file with "detect_objects.py"; however, I ran into a module import issue.

Under "inference.py", there is the line "import pycuda.driver as cuda".
I'm having some difficulty installing the pycuda module.
There are many posts discussing the installation, but I have not been able to make it work yet.

Please let me know if I'm on the right path.
Thanks,
Terry

Hi @terry.l.lee,
The issue is with your inference file then.
trtexec is an alternate option to get the inference.
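Regarding pycuda on Jetson: the pip build usually fails only because nvcc is not on the PATH when pip compiles it. Something along the lines of
export PATH=/usr/local/cuda/bin:$PATH
pip3 install pycuda
generally gets it to build, although the exact CUDA path depends on your JetPack installation.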
Have you created any file named tensorrt?

Thanks!

I have not created a file named tensorrt.
Is that on the Jetson Nano or during the TLT process?

Where can I find directions/instructions on creating the tensorrt file?
Thanks again.

Hi @AakankshaS,
I'm not having much luck with the next step after generating the .engine file. Please help out with an example or direction.
Thanks,
Terry

Hi @terry.l.lee,
I assume this example should be useful for you to proceed.

You may find more samples here.
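
At a high level, those samples all follow the same flow: deserialize the engine, create an execution context, allocate a host/device buffer pair for every binding, copy the preprocessed image into the input buffer, execute, and copy the outputs back. A rough sketch of that flow (the binding order, input resolution, and normalization below are illustrative only; the real values for your DetectNet_v2 model come from your training spec):

import numpy as np
import cv2
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # creates and activates a CUDA context

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine built by TLT
runtime = trt.Runtime(TRT_LOGGER)
with open("resnet18_detector_baru.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate one host/device buffer pair per binding (input first, then outputs)
host_bufs, dev_bufs, bindings = [], [], []
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host_mem = cuda.pagelocked_empty(size, dtype)
    dev_mem = cuda.mem_alloc(host_mem.nbytes)
    host_bufs.append(host_mem)
    dev_bufs.append(dev_mem)
    bindings.append(int(dev_mem))

# Preprocess the image to the network input (illustrative: 960x544, CHW, scaled to [0, 1];
# assumes the engine was exported with max batch size 1)
img = cv2.imread("Image640x360.bmp")
img = cv2.resize(img, (960, 544))
img = img.transpose((2, 0, 1)).astype(np.float32) / 255.0
np.copyto(host_bufs[0], img.ravel())

# Copy the input to the GPU, run inference, copy the outputs back
stream = cuda.Stream()
cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async(batch_size=1, bindings=bindings, stream_handle=stream.handle)
for h, d in zip(host_bufs[1:], dev_bufs[1:]):
    cuda.memcpy_dtoh_async(h, d, stream)
stream.synchronize()

# host_bufs[1:] now hold the raw coverage/bbox tensors; they still need the
# DetectNet_v2 post-processing (clustering/NMS) to turn them into boxes.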

Thanks!

Hi @AakankshaS
You said trtexec --loadEngine=your_engine-file --batch=1 can be used for inference? But I don't see the input and output in this short command line, and how does trtexec know what kind of inference to do, since there are many kinds of inference tasks?