Convert TensorRT engine from version 7 to 8

Hi,
I have a TensorRT (FP32) engine model for inference, converted with tlt-converter in TLT version 2. I have a Python script to run inference on the TRT engine, and I installed TensorRT in a virtual environment using this command:
pip3 install nvidia-tensorrt
It installed TensorRT version 8.0.0.3. Now, when I try to run inference on the same TensorRT engine file with TensorRT 8.0.0.3, I get this error:
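
For reference, the version that actually got installed can be confirmed with a quick check like this (a minimal sketch, nothing specific to my setup):

import tensorrt as trt
print(trt.__version__)  # prints 8.0.0.3 in my environment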

[TensorRT] VERBOSE: Registered plugin creator - ::GridAnchor_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::GridAnchorRect_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::NMS_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Reorg_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Region_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Clip_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::LReLU_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::PriorBox_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Normalize_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::ScatterND version 1
[TensorRT] VERBOSE: Registered plugin creator - ::RPROI_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::BatchedNMS_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::FlattenConcat_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::CropAndResize version 1
[TensorRT] VERBOSE: Registered plugin creator - ::DetectionLayer_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Proposal version 1
[TensorRT] VERBOSE: Registered plugin creator - ::ProposalLayer_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::ResizeNearest_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Split version 1
[TensorRT] VERBOSE: Registered plugin creator - ::SpecialSlice_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::InstanceNormalization_TRT version 1
[TensorRT] INFO: [MemUsageChange] Init CUDA: CPU +303, GPU +0, now: CPU 322, GPU 526 (MiB)
[TensorRT] INFO: Loaded engine size: 4 MB
[TensorRT] INFO: [MemUsageSnapshot] deserializeCudaEngine begin: CPU 325 MiB, GPU 526 MiB
[TensorRT] ERROR: 1: [stdArchiveReader.cpp::StdArchiveReader::34] Error Code 1: Serialization (Version tag does not match. Note: Current Version: 43, Serialized Engine Version: 89)
[TensorRT] ERROR: 4: [runtime.cpp::deserializeCudaEngine::74] Error Code 4: Internal Error (Engine deserialization failed.)

I used this code to load the TRT engine:
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")
trt_runtime = trt.Runtime(TRT_LOGGER)

with open(engine_path, "rb") as f:
    engine_data = f.read()

engine = trt_runtime.deserialize_cuda_engine(engine_data)
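
One note on the snippet above: deserialize_cuda_engine returns None when deserialization fails (as it does in the log), so a guard like this makes the failure explicit instead of crashing later (a sketch, reusing the variables above):

if engine is None:
    # The usual cause is an engine serialized with a different TensorRT
    # version than the one installed in this environment.
    raise RuntimeError("Engine deserialization failed - check that the engine "
                       "was built with the installed TensorRT version.")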

Is that a version mismatch problem?

Yes, please generate the TRT engine again.

OK, I tried this:

tlt tlt-converter -k $KEY \
    -d 3,320,480 \
    -o BatchedNMS \
    -e /home/recode/TLT3/trt.engine \
    -m 16 \
    -t fp32 \
    -i nchw \
    /yolo_mobilenet_v2_epoch_010.etlt

But I got an error like this:
2021-05-06 12:53:56,810 [INFO] root: No mount points were found in the /home/recode/.tlt_mounts.json file.
2021-05-06 12:53:56,810 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the ~/.tlt_mounts.json file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
2021-05-06 12:53:57,609 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
(launcher) recode@RecodePC:~$ /home/.tlt_mounts.json

The path is not correct.
Please read the TLT Launcher — Transfer Learning Toolkit 3.0 documentation and then map your local directory into the docker.

The Mounts parameter defines the paths on the local machine that should be mapped into the docker. This is a list of JSON dictionaries, each containing the source path on the local machine and the destination path that is visible to the TLT commands.
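
For instance, a minimal ~/.tlt_mounts.json could look like this (the paths are placeholders; use your own directories):

{
    "Mounts": [
        {
            "source": "/home/<user>/tlt-experiments",
            "destination": "/workspace/tlt-experiments"
        }
    ]
}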


Thank you @Morganh.
Where should I keep this tlt_mounts.json file?

The launcher instance can be configured in the ~/.tlt_mounts.json file.


@Morganh
Where can I find this JSON file, and how can I edit these paths?

Create it yourself.

Yes, I created the JSON and updated the paths, then ran this:
tlt tlt-converter -k $KEY \
    -d 3,960,1472 \
    -o BatchedNMS \
    -e /home/recode/TLT3/trt.engine \
    -m 16 \
    -t fp32 \
    -i nchw \
    /yolo_mobilenet_v2_epoch_010.etlt

Now it shows this:

2021-05-06 14:37:04,065 [INFO] root: No mount points were found in the /home/recode/.tlt_mounts.json file.
2021-05-06 14:37:04,065 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the ~/.tlt_mounts.json file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
Error: no input dimensions given
2021-05-06 14:37:04,786 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.

Can you paste the full command and full log?

(launcher) recode@RecodePC:~/tlt_cv_samples_v1.0.2$ tlt tlt-converter -k $KEY -d 3,960,1472 -o BatchedNMS -e /home/recode/TLT3/trt.engine -m 16 -t fp32 -i nchw /media/recode/DATA9/Jetson/tlt-experiments/install-docs-old/support_scripts/TRT_INFERENCE/fp16/yolo_mobilenet_v2_epoch_010.etlt
2021-05-06 14:37:04,065 [INFO] root: No mount points were found in the /home/recode/.tlt_mounts.json file.
2021-05-06 14:37:04,065 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the ~/.tlt_mounts.json file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
Error: no input dimensions given
2021-05-06 14:37:04,786 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.

This is my JSON file:

{
    "Mounts": [
        {
            "source": "/home/recode/tlt_cv_samples_v1.0.2",
            "destination": "/workspace/tlt-experiments/data"
        },
        {
            "source": "/home/recode/tlt_cv_samples_v1.0.2",
            "destination": "/workspace/tlt-experiments/results"
        },
        {
            "source": "/home/recode/tlt_cv_samples_v1.0.2/yolo_v3/specs",
            "destination": "/workspace/tlt-experiments/specs"
        }
    ],
    "Envs": [
        {
            "variable": "CUDA_DEVICE_ORDER",
            "value": "PCI_BUS_ID"
        }
    ],
    "DockerOptions": {
        "shm_size": "16G",
        "ulimits": {
            "memlock": -1,
            "stack": 67108864
        },
        "user": "1000:1000"
    }
}

Please note that the paths after the "tlt" command should be paths inside the docker. So please modify your command line: for example, change /home/recode/TLT3/trt.engine to something like /workspace/tlt-experiments/… But in your json file, I see that you did not map /home/recode/TLT3, so please modify the json file too.
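
Concretely, that means adding a mount entry along these lines to the Mounts list (a sketch; the destination is just an example path inside the docker):

{
    "source": "/home/recode/TLT3",
    "destination": "/workspace/tlt-experiments"
}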

Sure, thank you.

(launcher) recode@RecodePC:~/TLT3$ tlt tlt-converter -k $KEY -d 3,320,480 -o BatchedNMS -e /workspace/tlt-experiments/trt.engine -m 16 -t fp32 -i nchw /workspace/tlt-experiments/fp16/yolo_mobilenet_v2_epoch_010.etlt
2021-05-06 15:10:10,772 [INFO] root: No mount points were found in the /home/recode/.tlt_mounts.json file.
2021-05-06 15:10:10,772 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the ~/.tlt_mounts.json file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
[ERROR] UffParser: Unsupported number of graph 0
[ERROR] Failed to parse the model, please check the encoding key to make sure it’s correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
2021-05-06 15:10:21,485 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.

Now these errors came. The .etlt model was trained in TLT 2.0. Is that the problem?

Please check the key.
What is the $KEY? You can input it explicitly.
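
A quick way to rule out a shell-expansion issue is to print the variable right before running the converter (nothing TLT-specific here):

echo "$KEY"   # confirm the variable is set to the exact key used during export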

Yes. Before using it, I exported it like this:
export KEY=ZmpsbnVjNmJpbDdjdnAxYTHNdWViYTVsaXU6NDQwMzg3OWQtODQ2MS00YjNiLWEwNGEtZmVkZDdhZWUyY2U5

@Morganh
Actually, I am now able to convert the TensorRT engine with tlt-converter. But when I run inference with the Python script, it again shows the same error.

[TensorRT] INFO: Loaded engine size: 76 MB
[TensorRT] INFO: [MemUsageSnapshot] deserializeCudaEngine begin: CPU 307 MiB, GPU 648 MiB
[TensorRT] ERROR: 1: [stdArchiveReader.cpp::StdArchiveReader::34] Error Code 1: Serialization (Version tag does not match. Note: Current Version: 43, Serialized Engine Version: 96)
[TensorRT] ERROR: 4: [runtime.cpp::deserializeCudaEngine::74] Error Code 4: Internal Error (Engine deserialization failed.)

Where did you generate the TensorRT engine?
And where did you run inference?
If you run inference on a Nano, please generate the TRT engine directly on the Nano with the tlt-converter (Jetson version).
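
In general, the serialized engine and the TensorRT runtime that loads it must match, so a quick cross-check on the inference machine looks like this (a minimal sketch):

import tensorrt as trt
# The "Serialized Engine Version" in the error must correspond to this
# runtime version; if the engine was built under a different TensorRT
# (e.g. inside the TLT docker), regenerate it with a converter built
# against the version installed here.
print("Runtime TensorRT version:", trt.__version__)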

I am working on an x86_64 machine.
I created the TensorRT engine with TLT version 2.0, and I am able to run inference inside the TLT docker with my Python script.
Now I want to run the same inference script in a conda environment. In the conda environment I installed TensorRT, pycuda, and nvidia-pyindex with these commands:

pip3 install nvidia-pyindex
pip3 install nvidia-tensorrt
pip install pycuda

So when I run the same inference script in the conda environment, I get this error. How can I solve this? Please help me.

[TensorRT] INFO: Loaded engine size: 76 MB
[TensorRT] INFO: [MemUsageSnapshot] deserializeCudaEngine begin: CPU 307 MiB, GPU 648 MiB
[TensorRT] ERROR: 1: [stdArchiveReader.cpp::StdArchiveReader::34] Error Code 1: Serialization (Version tag does not match. Note: Current Version: 43, Serialized Engine Version: 96)
[TensorRT] ERROR: 4: [runtime.cpp::deserializeCudaEngine::74] Error Code 4: Internal Error (Engine deserialization failed.)