How to load an .etlt model in a Python script

Hi … I have trained a detectnet_v2 model with the TLT training toolkit on my custom dataset, and I am able to get a .etlt file.
My next step is to write a Python module that loads that .etlt model (file) and runs predictions on sample images with it.
How can I do this?

Any help will be appreciated.

You can generate a TensorRT engine from the .etlt model,
then load the engine in your Python script.
Reference: Run PeopleNet with tensorrt - #21 by carlos.alvarez
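Once the engine has been built on the target device, loading it can be sketched roughly like this (a sketch, not the referenced script itself: the helper name and path are placeholders, and it assumes the TensorRT Python bindings are installed on the device):

```python
import os


def load_engine(path):
    """Deserialize a serialized TensorRT engine file (hypothetical helper)."""
    if not os.path.exists(path):
        raise FileNotFoundError(path)
    # Import lazily so the check above runs even without TensorRT installed.
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    runtime = trt.Runtime(logger)
    with open(path, "rb") as f:
        engine = runtime.deserialize_cuda_engine(f.read())
    # deserialize_cuda_engine returns None (no exception) on failure,
    # e.g. when the engine was built for a different compute capability.
    if engine is None:
        raise RuntimeError("Engine deserialization failed; rebuild the engine on this device")
    return engine
```

Note that a failed deserialization yields `None` rather than an exception, which is why downstream code can later fail with a confusing `'NoneType'` error if the return value is not checked.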

Hi… as per your suggestion I am using the script provided in the link.
I have a model named:

After using this model I am getting the error given below:

[TensorRT] ERROR: INVALID_CONFIG: The engine plan file is generated on an incompatible device, expecting compute 5.3 got compute 7.2, please rebuild.
[TensorRT] ERROR: engine.cpp (1546) - Serialization Error in deserialize: 0 (Core engine deserialization failure)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
  File "", line 243, in <module>
    inputs, outputs, bindings, stream = allocate_buffers(trt_engine)
  File "", line 62, in allocate_buffers
    for binding in engine:
TypeError: 'NoneType' object is not iterable

Any help regarding this issue?

Which device are you running inference on?
For example, if you want to run inference on a Jetson Nano, you should download the Jetson version of tlt-converter, copy the .etlt model onto the Nano, and then generate the TRT engine on the Nano.

What is the way to convert the .etlt model into an engine file for Jetson Nano? I cannot find the correct converter. @Morganh

Refer to the TLT user guide: Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation

After following the documentation you suggested, I am getting this error.

./tlt-converter -k OGQ4ZGw5cXN2M3QwNDJsNGxpbnRsNXJuOHY6OTQ4ZmU2ZTAtZDcyYy00MzE0LTk1ZjEtMTgyMDJkYWFIMDgw -d 3,384,1248 -o output_bbox/BiasAdd,output_cov/Sigmoid -e /home/senquire-nano/Documents -b 1 input_file /home/senquire-nano/Documents/tlt_exp_niharika
[ERROR] UffParser: Unsupported number of graph 0
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped).

Any idea?


You are missing the .etlt model file in your command line.
Besides the TLT user guide, you can also launch the TLT docker and run its Jupyter notebook to see how to run tlt-converter.

More reference: How to run tlt-converter
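For reference, the converter expects the .etlt file as its first (positional) argument; a sketch with placeholder path and key (substitute your own):

```shell
./tlt-converter /path/to/resnet18_detector.etlt \
  -k <your_encoding_key> \
  -o output_bbox/BiasAdd,output_cov/Sigmoid \
  -d 3,384,1248 \
  -b 1 \
  -e /path/to/resnet18_detector.trt
```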


Still on the same page; I don't see exactly why this is happening.
senquire-nano@senquire-nano:~/Downloads$ ./tlt-converter ./resnet18_detector.etlt \

-o output_bbox/BiasAdd,output_cov/Sigmoid -d 3,384,1248 -e resnet18_detector.trt -b 1
[ERROR] UffParser: Unsupported number of graph 0
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)

Make sure your key is correct.

OK, checking it once again. Thanks.

That error is gone now, but when I run the converter I get this:

[WARNING] Int8 support requested on hardware without native Int8 support, performance will be negatively affected.
[INFO] Detected 1 inputs and 2 output network tensors.
[INFO] Starting Calibration with batch size 1.
[INFO] Post Processing Calibration data in 9.636e-06 seconds.
[INFO] Calibration completed in 96.0879 seconds.
[ERROR] Calibration failure occurred with no scaling factors detected. This could be due to no int8 calibrator or insufficient custom scales for network layers. Please see int8 sample to setup calibration correctly.
[ERROR] Builder failed while configuring INT8 mode.
[ERROR] Unable to create engine
Segmentation fault (core dumped)

The command which I ran is:
./tlt-converter ./resnet18_detector.etlt -k OGQ4ZGw5cXN2M3QwNDJsNGxpbnRsNXJuOHY6OTQ4ZmU2ZTAtZDcyYy00MzE0LTk1ZjEtMTgyMDJkYWFlMDgw -o output_bbox/BiasAdd,output_cov/Sigmoid -d 3,384,1248 -i nchw -m 64 -t int8 -e resnet18_detector.trt -b 1

You are missing "-c". For int8 mode, "-c" is needed.
Please refer to How to run tlt-converter
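A sketch of the corrected INT8 command; the key and the calibration cache file name (calibration.bin, produced during export) are placeholders for your own:

```shell
./tlt-converter ./resnet18_detector.etlt \
  -k <your_encoding_key> \
  -o output_bbox/BiasAdd,output_cov/Sigmoid \
  -d 3,384,1248 \
  -i nchw \
  -t int8 \
  -c calibration.bin \
  -m 64 \
  -b 1 \
  -e resnet18_detector.trt
```

Also note the earlier warning in your log: the Nano has no native INT8 support, so `-t fp16` may be the better choice on that device.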

@Morganh that error has gone; I am finally able to generate an engine (.trt) file using tlt-converter. Now I am trying to load the engine file in the PeopleNet script, but I get errors like:
Traceback (most recent call last):
  File "", line 250, in <module>
    detection_out, keepCount_out = predict(image, model_w, model_h)
  File "", line 115, in predict
    img = process_image(image, model_w, model_h)
  File "", line 96, in process_image
    image = Image.fromarray(
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'

Please debug it by yourself. I recall that other users can run that script well, according to their feedback.
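For what it's worth, that `'NoneType'` typically means the image read returned `None` (for example, `cv2.imread()` on a bad path fails silently) before the array ever reached `Image.fromarray()`. A small guard, with a hypothetical helper name:

```python
import numpy as np


def check_decoded(arr, path="<input>"):
    """Validate a decoded image array before handing it to Image.fromarray().

    cv2.imread() returns None instead of raising when the path is wrong,
    which later surfaces as the cryptic
    "int() argument must be ... not 'NoneType'" inside PIL.
    """
    if arr is None:
        raise FileNotFoundError(f"could not read image: {path}")
    return np.asarray(arr, dtype=np.uint8)
```

Checking the input path this way turns the confusing downstream TypeError into an immediate, readable error.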


@Morganh thanks a lot. I am able to run it end to end for detectnet.