TAO optical_inspection error

The following error was reported when converting an ONNX file to a TensorRT engine file.

2023-09-12 16:51:12,140 [TAO Toolkit] [INFO] root 160: Registry: ['nvcr.io']
2023-09-12 16:51:12,176 [TAO Toolkit] [INFO] nvidia_tao_cli.components.instance_handler.local_instance 360: Running command in container: nvcr.io/nvidia/tao/tao-toolkit:5.0.0-deploy
2023-09-12 16:51:12,214 [TAO Toolkit] [WARNING] nvidia_tao_cli.components.docker_handler.docker_handler 262: 
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/lab/.tao_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
2023-09-12 16:51:12,214 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 275: Printing tty value True
2023-09-12 08:51:14,302 [TAO Toolkit] [INFO] matplotlib.font_manager 1544: generated new fontManager
python /usr/local/lib/python3.8/dist-packages/nvidia_tao_deploy/cv/optical_inspection/scripts/gen_trt_engine.py  --config-path /specs --config-name experiment.yaml gen_trt_engine.onnx_file=/home/lab/Downloads/getstart5.0/to/local/tao-experiments/optical_inspection/results/export/oi_model.onnx gen_trt_engine.trt_engine=/home/lab/Downloads/getstart5.0/to/local/tao-experiments/optical_inspection/results/export/oi_model.engine
sys:1: UserWarning: 
'experiment.yaml' is validated against ConfigStore schema with the same name.
This behavior is deprecated in Hydra 1.1 and will be removed in Hydra 1.2.
See https://hydra.cc/docs/next/upgrades/1.0_to_1.1/automatic_schema_matching for migration instructions.
<frozen cv.common.hydra.hydra_runner>:99: UserWarning: 
'experiment.yaml' is validated against ConfigStore schema with the same name.
This behavior is deprecated in Hydra 1.1 and will be removed in Hydra 1.2.
See https://hydra.cc/docs/next/upgrades/1.0_to_1.1/automatic_schema_matching for migration instructions.
/usr/local/lib/python3.8/dist-packages/hydra/_internal/hydra.py:119: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default.
See https://hydra.cc/docs/next/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information.
  ret = run_job(
Log file already exists at /results/status.json
Starting optical_inspection gen_trt_engine.
[09/12/2023-08:51:16] [TRT] [I] [MemUsageChange] Init CUDA: CPU +3, GPU +0, now: CPU 45, GPU 1363 (MiB)
[09/12/2023-08:51:18] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +547, GPU +118, now: CPU 646, GPU 1479 (MiB)
Parsing ONNX model
[Errno 2] No such file or directory: '/home/lab/Downloads/getstart5.0/to/local/tao-experiments/optical_inspection/results/export/oi_model.onnx'
Error executing job with overrides: ['gen_trt_engine.onnx_file=/home/lab/Downloads/getstart5.0/to/local/tao-experiments/optical_inspection/results/export/oi_model.onnx', 'gen_trt_engine.trt_engine=/home/lab/Downloads/getstart5.0/to/local/tao-experiments/optical_inspection/results/export/oi_model.engine']
An error occurred during Hydra's exception formatting:
AssertionError()
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/hydra/_internal/utils.py", line 254, in run_and_report
    assert mdl is not None
AssertionError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "</usr/local/lib/python3.8/dist-packages/nvidia_tao_deploy/cv/optical_inspection/scripts/gen_trt_engine.py>", line 3, in <module>
  File "<frozen cv.optical_inspection.scripts.gen_trt_engine>", line 83, in <module>
  File "<frozen cv.common.hydra.hydra_runner>", line 99, in wrapper
  File "/usr/local/lib/python3.8/dist-packages/hydra/_internal/utils.py", line 389, in _run_hydra
    _run_app(
  File "/usr/local/lib/python3.8/dist-packages/hydra/_internal/utils.py", line 452, in _run_app
    run_and_report(
  File "/usr/local/lib/python3.8/dist-packages/hydra/_internal/utils.py", line 296, in run_and_report
    raise ex
  File "/usr/local/lib/python3.8/dist-packages/hydra/_internal/utils.py", line 213, in run_and_report
    return func()
  File "/usr/local/lib/python3.8/dist-packages/hydra/_internal/utils.py", line 453, in <lambda>
    lambda: hydra.run(
  File "/usr/local/lib/python3.8/dist-packages/hydra/_internal/hydra.py", line 132, in run
    _ = ret.return_value
  File "/usr/local/lib/python3.8/dist-packages/hydra/core/utils.py", line 260, in return_value
    raise self._return_value
  File "/usr/local/lib/python3.8/dist-packages/hydra/core/utils.py", line 186, in run_job
    ret.return_value = task_function(task_cfg)
  File "<frozen cv.common.decorators>", line 63, in _func
  File "<frozen cv.common.decorators>", line 48, in _func
  File "<frozen cv.optical_inspection.scripts.gen_trt_engine>", line 79, in main
  File "<frozen cv.optical_inspection.engine_builder>", line 74, in create_network
  File "<frozen cv.optical_inspection.engine_builder>", line 58, in get_onnx_input_dims
  File "/usr/local/lib/python3.8/dist-packages/onnx/__init__.py", line 169, in load_model
    s = _load_bytes(f)
  File "/usr/local/lib/python3.8/dist-packages/onnx/__init__.py", line 73, in _load_bytes
    with open(typing.cast(str, f), "rb") as readable:
FileNotFoundError: [Errno 2] No such file or directory: '/home/lab/Downloads/getstart5.0/to/local/tao-experiments/optical_inspection/results/export/oi_model.onnx'
Sending telemetry data.
Execution status: FAIL
2023-09-12 16:51:33,637 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 337: Stopping container.

The file does exist, and I used an absolute path:

lab@lab:~/Downloads/getstart5.0/to/local/tao-experiments/optical_inspection/results/export$ ls
oi_model.onnx

Where can I find more information about this optical_inspection issue? Please let me know. Thank you.

Please check whether the ONNX file is available at the path you passed in. It should be a path inside the Docker container, not a path on your local host. The mapping between host paths and container paths is defined in your ~/.tao_mounts.json file.
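As a sketch, a ~/.tao_mounts.json that maps the host experiment directory into the container might look like the following. The destination path /workspace/tao-experiments here is an assumed example, not taken from the log above; use whatever mount your own file defines:

```json
{
    "Mounts": [
        {
            "source": "/home/lab/Downloads/getstart5.0/to/local/tao-experiments",
            "destination": "/workspace/tao-experiments"
        }
    ],
    "DockerOptions": {
        "user": "1000:1000"
    }
}
```

With this mapping, the host file .../tao-experiments/optical_inspection/results/export/oi_model.onnx would be visible inside the container as /workspace/tao-experiments/optical_inspection/results/export/oi_model.onnx, and that container path is what gen_trt_engine.onnx_file must point to. The "user":"UID:GID" entry is the same option the warning at the top of the log refers to.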

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.