AssertionError: Max workspace size for TensorRT inference should be positive, got 0

Hi. I have a problem with tlt inference on efficientnet_b0. Without a trt_engine it works fine, but with a trt_engine I get this error:

2021-07-20 14:48:13,111 [INFO] __main__: Running inference with TensorRT as backend.
Traceback (most recent call last):
File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/faster_rcnn/scripts/inference.py", line 236, in <module>
File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/faster_rcnn/scripts/inference.py", line 90, in main
File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/faster_rcnn/spec_loader/spec_wrapper.py", line 589, in infer_workspace_size
AssertionError: Max workspace size for TensorRT inference should be positive, got 0.
Traceback (most recent call last):
File "/usr/local/bin/faster_rcnn", line 8, in <module>
sys.exit(main())
File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/faster_rcnn/entrypoint/faster_rcnn.py", line 12, in main
File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/common/entrypoint/entrypoint.py", line 296, in launch_job
AssertionError: Process run failed.

Run command: !faster_rcnn inference --gpu_index 0 -e $SPECS_DIR/default_spec_efficientnet_b0.txt

Thanks in advance.
trt.fp16.engine (9.4 MB)

faster_rcnn.ipynb (523.9 KB)
default_spec_efficientnet_b0.txt (4.9 KB)

Can you add the line below to the spec and try again?
max_workspace_size_MB: 2000
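
For orientation, a minimal sketch of the relevant part of the spec. I am assuming here that the parameter sits alongside the trt_inference block inside inference_config, and the engine path is a placeholder; check the schema for your TLT version:

inference_config {
  trt_inference {
    trt_engine: "/path/to/trt.fp16.engine"  # placeholder path to the engine you built
  }
  # Workspace TensorRT may allocate during inference, in megabytes.
  # The assertion in spec_wrapper.py fires when this is unset or 0.
  max_workspace_size_MB: 2000
}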


Yes, it works! Thanks!
Why is this not in the examples and documentation?

I will check internally about improving this.

You are running with the tlt 3.0-dp-py3 docker. Please update to the tlt 3.0-py3 docker.
But this parameter has been removed in the tlt 3.0-py3 docker.

End users can set
[--max_workspace_size <maximum workspace size>]
for export, see https://docs.nvidia.com/tlt/tlt-user-guide/text/object_detection/fasterrcnn.html#exporting-the-model

or
[-w <maximum workspace size>]
for tlt-converter, see https://docs.nvidia.com/tlt/tlt-user-guide/text/object_detection/fasterrcnn.html#using-the-tlt-converter

Both are documented properly.
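
For example, a rough sketch of both invocations. The model paths, $KEY, and the 1 GB size are placeholders, and some required flags are omitted, so check the pages above for the full argument list of your version:

# At export time (--max_workspace_size is given in bytes; 1 GB here):
faster_rcnn export -m /workspace/model.tlt \
                   -k $KEY \
                   -e $SPECS_DIR/default_spec_efficientnet_b0.txt \
                   --max_workspace_size 1073741824

# Or when building the engine with tlt-converter (-w, also in bytes;
# other required flags such as input dimensions and output nodes are omitted here):
tlt-converter /workspace/model.etlt -k $KEY \
              -e /workspace/trt.fp16.engine \
              -w 1073741824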
