tlt-infer doesn't work with etlt model

Hi,
When I run tlt-infer with the tlt model it works, but with the etlt model I get this error:

2020-06-22 13:00:55,383 [INFO] iva.detectnet_v2.scripts.inference: Overlain images will be saved in the output path.
2020-06-22 13:00:55,383 [INFO] iva.detectnet_v2.inferencer.build_inferencer: Constructing inferencer
2020-06-22 13:00:55,581 [INFO] iva.detectnet_v2.inferencer.trt_inferencer: Reading from engine file at: /workspace/tmp2/experiment_dir_final/resnet18_detector.etlt
[TensorRT] ERROR: ../rtSafe/coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
  File "/usr/local/bin/tlt-infer", line 8, in <module>
    sys.exit(main())
  File "./common/magnet_infer.py", line 56, in main
  File "./detectnet_v2/scripts/inference.py", line 194, in main
  File "./detectnet_v2/scripts/inference.py", line 117, in inference_wrapper_batch
  File "./detectnet_v2/inferencer/trt_inferencer.py", line 380, in network_init
AttributeError: 'NoneType' object has no attribute 'create_execution_context'

I ran this command in TLT 2.0:

tlt-infer detectnet_v2 -e /workspace/tmp2/detectnet_v2/specs/detectnet_v2_inference_kitti_etlt.txt \
                        -o /workspace/tmp2/output \
                        -i /workspace/tmp2/trainval/image \
                        -k KEY

Hi LoveNvidia,
Could you please share detectnet_v2_inference_kitti_etlt.txt?

detectnet_v2_inference_kitti_etlt.txt (2.2 KB)

Could you please generate a trt engine based on the etlt model, then set it in the spec instead? The etlt file is an exported, encrypted model rather than a serialized TensorRT engine, which is why deserialization fails with "Magic tag does not match".
trt_engine: "xxx.trt"
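
For reference, a minimal sketch of where trt_engine sits in the TLT 2.0 detectnet_v2 inference spec; the paths and values below are placeholders, not values from this thread:

inferencer_config {
  # ... image size, target classes, etc. as in your current spec ...
  tensorrt_config {
    parser: ETLT
    etlt_model: "/path/to/resnet18_detector.etlt"
    backend_data_type: FP32
    save_engine: true
    trt_engine: "/path/to/resnet18_detector.trt"
  }
}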

Yes, I can generate a trt engine based on the etlt model with tlt-converter on a Jetson Nano.

No, I mean generate the trt engine inside the docker and set it in the spec, then see if tlt-infer works.
Currently, you are setting

trt_engine: "xxx.etlt"

I generated the etlt model in the docker.

Please set "trt_engine" to a trt engine instead of the etlt file.

Because I converted to FP16 on the Jetson Nano, and my host GTX 1080 doesn't support FP16, I guess that will return an error.

Hi LoveNvidia,
On your host GTX, could you please try to generate a trt engine in FP32 mode? Then set the trt engine in the spec and run tlt-infer.
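
As a sketch, something like the following with tlt-converter on the host; the output node names and input dimensions below are typical detectnet_v2 placeholders, not values confirmed in this thread, so adjust them to your model:

tlt-converter /workspace/tmp2/experiment_dir_final/resnet18_detector.etlt \
              -k KEY \
              -o output_cov/Sigmoid,output_bbox/BiasAdd \
              -d 3,384,1248 \
              -t fp32 \
              -e /workspace/tmp2/experiment_dir_final/resnet18_detector.trt

Then point trt_engine in the spec at the generated .trt file.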

Also, you can run tlt-infer with the tlt model.
Set the tlt model in the spec, for example,

tlt_config {
  model: "/workspace/tlt-experiments/resnet18_detector_pruned.tlt"
}

In this way, no trt engine is needed.
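
If it helps, a sketch of how this alternative fits into the same spec, replacing the tensorrt_config block (the path is a placeholder):

inferencer_config {
  # ... other inferencer settings unchanged ...
  tlt_config {
    model: "/workspace/tlt-experiments/resnet18_detector_pruned.tlt"
  }
}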

I generated the trt engine on the host GTX, then ran tlt-infer with that trt engine, and it works.