Run engine trt file on image/video

Continuing the discussion from Engine Plan Inference on JetsonTX2:

Hi,
I’m trying to run an engine (.trt file) on an image, but I’m hitting a specific issue (see the link above).

To make it clearer: I used the retinanet-example repo to train a model on a host computer, then used its export command to convert the model to ONNX.
I sent the ONNX file to my Jetson TX2 (JetPack 4.4) and converted it to a .trt file there using the onnx-tensorrt repo.

You can see the scripts I used to run the engine, but I get an error I can’t find a solution to. I based my scripts on this page.

If anyone has a clear workflow for running inference from a .trt file, I’ll take it!

Thanks

Hi,

Please correct me if anything is missing.
It looks like you converted the ONNX model into a TensorRT engine on the desktop and tried to run inference with it on the TX2.

Please note that a TensorRT engine is not portable across devices or TensorRT versions.
You will need to regenerate the .trt file on the TX2 directly.

Thanks.

Hey,

no, as mentioned, I converted the .pth to ONNX on the host, then sent the ONNX file to the Jetson and converted it to a .trt file there (so the engine was built on the Jetson).

I tried both the onnx2trt and trtexec commands, but I get the same issue when trying to run inference.

[EDIT]: I also noticed in this tutorial that the authors use a library called “engine” via “import engine as eng”, but I can’t find any library with that name… Strangest of all, they define a function called “build_engine()” and then also call a function of the same name from the “engine” library…

Thanks

Hi,

The ‘engine’ is an object returned by the TensorRT API:

engine = trt_runtime.deserialize_cuda_engine(engine_data)

Thanks.

Hey,
I may not have been clear about my question.

I do get that this “engine”:

engine = trt_runtime.deserialize_cuda_engine(engine_data)

is an object returned by the call inside the function “build_engine”.

What I don’t get is why, since this function isn’t defined as a method of an “Engine” class, the authors use this import statement

import engine as eng

at the beginning of the script and then use:

engine = eng.build_engine(onnx_path,engine_name)

It doesn’t make sense to me.
Either they created a class named “Engine” with a method called “build_engine”, in which case they would need to create an instance like:

eng = Engine()
engine = eng.build_engine(onnx_path,engine_name)

Or they call the function “build_engine” directly, like this:

engine = build_engine(onnx_path,engine_name) 

I don’t understand the mix of the two being made here, and that’s one of the reasons I can’t reproduce the execution.

Also, did you have a look at the error I get in the linked issue?

Thanks

[EDIT]: I went through the workflow shown in the “Speeding up DeepLearning” link step by step to spot which command causes the problem, and I found it.

>>> context.execute(batch_size=1,bindings=[int(dinput1),int(doutput)])
[TensorRT] ERROR: ../rtSafe/safeContext.cpp (133) - Cudnn Error in configure: 7 (CUDNN_STATUS_MAPPING_ERROR)
[TensorRT] ERROR: FAILED_EXECUTION: std::exception
False

Loading the engine works, and so do creating and allocating the buffers that send the image to the GPU, but the problem occurs when I execute the inference.
What do you think of the error?

Thanks

Hey,
any idea about the issue?

Thanks

Hi,

Thanks for updating this.
The naming in this sample is a little bit confusing.

First of all, the first ‘engine’ appears in this function:

def load_engine(trt_runtime, plan_path):
    with open(plan_path, 'rb') as f:
        engine_data = f.read()
    engine = trt_runtime.deserialize_cuda_engine(engine_data)
    return engine

The engine used here is a local variable, so it won’t affect other usage.
However, the key point is that this file is stored as ‘engine.py’.

So the following line doesn’t import a TensorRT engine, but rather the functions implemented in the file engine.py.

import engine as eng

And in this section, eng refers to the module implemented in ‘engine.py’, while engine represents the compiled TensorRT engine:

 engine = eng.build_engine(onnx_path, shape= shape)
 eng.save_engine(engine, engine_name) 
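The pattern can be sketched in plain Python, with no TensorRT needed. The module below is a hypothetical stand-in for the tutorial’s engine.py (built in memory here just so the snippet is self-contained; in the tutorial it is simply a file next to the script):

```python
import sys
import types

# Stand-in for the tutorial's engine.py: a module that defines
# plain, module-level functions (not methods of an Engine class).
engine_module = types.ModuleType("engine")

def build_engine(onnx_path, engine_name):
    # Placeholder for the real TensorRT build logic.
    return f"engine built from {onnx_path}"

engine_module.build_engine = build_engine
sys.modules["engine"] = engine_module  # register it as importable

# Same pattern as in the tutorial: no class, no instance needed.
import engine as eng

result = eng.build_engine("model.onnx", "model.plan")
print(result)  # -> engine built from model.onnx
```

In other words, build_engine is a module-level function: “import engine as eng” makes it callable as eng.build_engine(…) without creating any class instance.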

Thanks.

And for this issue:

[TensorRT] ERROR: ../rtSafe/safeContext.cpp (133) - Cudnn Error in configure: 7 (CUDNN_STATUS_MAPPING_ERROR)

Could you first share your environment setup and the tutorial you used with us?
Thanks.