Convert to TensorRT engine (FP16) stops here

Please provide the following information when requesting support.

• Hardware (T4)
• Network Type (Mask_rcnn/Classification)
• TLT Version (3.22.05)

Dear professor:
I want to use Mask RCNN to segment our dataset. When I convert the engine, I run into the following problem:

2022-07-06 08:54:29,440 [INFO] root: Registry: ['nvcr.io']
2022-07-06 08:54:29,482 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit-tf:v3.22.05-tf1.15.5-py3
2022-07-06 08:54:29,492 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/d219/.tao_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
[INFO] [MemUsageChange] Init CUDA: CPU +473, GPU +0, now: CPU 484, GPU 858 (MiB)
[INFO] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 484 MiB, GPU 858 MiB
[INFO] [MemUsageSnapshot] End constructing builder kernel library: CPU 638 MiB, GPU 900 MiB
[INFO] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +809, GPU +350, now: CPU 1704, GPU 1250 (MiB)
[INFO] [MemUsageChange] Init cuDNN: CPU +126, GPU +58, now: CPU 1830, GPU 1308 (MiB)
[INFO] Local timing cache in use. Profiling results in this builder pass will not be stored.
[INFO] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.

It stops here. I waited a whole night, but there was no progress.
Please help me. Thank you very much.
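
For reference, a minimal sketch of a typical tao-converter invocation for a Mask RCNN .etlt in the TAO 3.x workflow (the key, input dimensions, and paths below are placeholders, not the exact values used in this run):

# Hypothetical example; adjust the key, input dims, and paths to your experiment
tao converter -k $KEY \
    -d 3,832,1344 \
    -o generate_detections,mask_fcn_logits/BiasAdd \
    -e /export/trt.fp16.engine \
    -t fp16 \
    -i nchw \
    -m 1 \
    /export/model.step-25000.etlt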

It is due to insufficient CPU memory.
Refer to the solution in Issue while converting maskrcnn model to trt from etlt on Laptops - #20 by Morganh
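
Since the engine build can exhaust host RAM, one common workaround for this kind of stall (a sketch assuming a Linux host; the 8 GB size is only an example) is to add swap space and rerun the conversion:

# Check current memory and swap usage on the host
free -h
# Create and enable an 8 GB swap file (adjust the size to your machine)
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile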


Thank you very much
