ValueError: Invalid infer image root - tao detectnet_v2 inference

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc) : x86_64 GPU machine
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) : Detectnet_v2
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file(If have, please share here)
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)

I’m getting a `ValueError: Invalid infer image root` error while running inference with a TAO-trained model.

Inferencing config:
detectnet_v2_infer_config.txt (4.7 KB)

Inferencing input:
archive.zip (8.3 MB)

inferencing log:
inferencing_log.txt (2.5 KB)

ValueError: Invalid infer image root /home/soundarrajan/detectnet_v2/inference/input/train_people.jpg
ValueError: Invalid infer image root /home/soundarrajan/detectnet_v2/inference/input/

Please check if the path is available

Hi @Morganh,

Yes, the given path is correct.

COMMAND: pwd
/home/soundarrajan/detectnet_v2/inference/input

COMMAND: ls -la
image

But still getting the same error…

Can you share the command how you run inference? I cannot find the command in the inference log.

Hi @Morganh,

Please find the inference command I’m using:

tao detectnet_v2 inference -e /home/soundarrajan/detectnet_v2/config/detectnet_v2_infer_config.txt -k tao_encode -i /home/soundarrajan/detectnet_v2/inference/input -o /home/soundarrajan/detectnet_v2/inference/output --log_file /home/soundarrajan/detectnet_v2/logs/inferencing_log.txt -v

The inference path does not need to be added to the tao_mounts file, right?
Or should the inference path also be added to the tao_mounts file?

You can run below in terminal to check if it is available.
method1:
$ tao detectnet_v2 run ls /home/soundarrajan/detectnet_v2/inference/input/train_people.jpg

method2:
$ tao detectnet_v2 run /bin/bash
then
# ls /home/soundarrajan/detectnet_v2/inference/input/train_people.jpg

Hi @Morganh,

Output for command:
tao detectnet_v2 run ls /home/soundarrajan/detectnet_v2/inference/input/train_people.jpg

2022-06-08 07:59:12,026 [INFO] root: Registry: ['nvcr.io']
2022-06-08 07:59:12,118 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit-tf:v3.22.05-tf1.15.4-py3
ls: cannot access '/home/soundarrajan/detectnet_v2/inference/input/train_people.jpg': No such file or directory
2022-06-08 07:59:12,867 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.

Output for command:
tao detectnet_v2 run /bin/bash

2022-06-08 08:00:47,147 [INFO] root: Registry: ['nvcr.io']
2022-06-08 08:00:47,236 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit-tf:v3.22.05-tf1.15.4-py3
groups: cannot find name for group ID 1015
I have no name!@512af6943bff:/workspace$ ls /home/soundarrajan/detectnet_v2/inference/input/train_people.jpg
ls: cannot access '/home/soundarrajan/detectnet_v2/inference/input/train_people.jpg': No such file or directory

What should I do now?

Please modify the tao_mounts.json to map it.

Yes, after adding the path to tao_mounts file it worked!

        {
            "source": "/home/soundarrajan/detectnet_v2/inference",
            "destination": "/home/soundarrajan/detectnet_v2/inference"
        }
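
For reference, such an entry goes inside the `Mounts` list of `~/.tao_mounts.json`. The surrounding structure below is a sketch; the first mount entry is illustrative and will differ depending on your own setup:

```json
{
    "Mounts": [
        {
            "source": "/home/soundarrajan/detectnet_v2",
            "destination": "/home/soundarrajan/detectnet_v2"
        },
        {
            "source": "/home/soundarrajan/detectnet_v2/inference",
            "destination": "/home/soundarrajan/detectnet_v2/inference"
        }
    ]
}
```

Every host path passed to a `tao` subcommand must resolve inside the container, which is why the inference input path had to appear here.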

Thanks

Hi @Morganh,

Inference is working now… The bounding boxes are also fine, but is there any way to get the label onto the predicted image? Not the KITTI dump; I want the predicted label in the annotated image itself.

annotated images:
2007_000762
2007_000663
2007_000333

For detectnet_v2, the output is only the annotated images and their label files. If you want to put the labels into the images, please write a script to do that based on the bbox coordinates.
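
Such a script can be a few lines of Python. The sketch below is a minimal, hypothetical example, assuming the standard KITTI label dump (one space-separated `.txt` per image, with the 2D bbox in columns 5–8); `draw_labels`, the paths, and the use of Pillow are all illustrative choices, not part of the TAO toolkit:

```python
def parse_kitti_labels(text):
    """Parse KITTI detection rows into (class_name, (x1, y1, x2, y2)) tuples.

    KITTI rows are space-separated; the 2D bbox occupies
    columns 5-8 (0-based field indices 4-7).
    """
    boxes = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) < 8:
            continue  # skip malformed or empty rows
        cls = fields[0]
        x1, y1, x2, y2 = (float(v) for v in fields[4:8])
        boxes.append((cls, (x1, y1, x2, y2)))
    return boxes


def draw_labels(image_path, label_path, out_path):
    """Draw each class name above its bbox (requires Pillow, an assumption)."""
    from PIL import Image, ImageDraw  # third-party: pip install pillow

    with open(label_path) as f:
        boxes = parse_kitti_labels(f.read())
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    for cls, (x1, y1, x2, y2) in boxes:
        draw.rectangle([x1, y1, x2, y2], outline="red", width=2)
        # Place the class name just above the top-left corner of the box.
        draw.text((x1, max(0, y1 - 12)), cls, fill="red")
    img.save(out_path)
```

Usage would be along the lines of `draw_labels("output/images/frame.jpg", "output/labels/frame.txt", "labeled/frame.jpg")`, looped over the inference output directory.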

So this constraint is only for detectnet_v2?

What about other detection models like YOLO, SSD, RetinaNet, etc.? Is it possible to put labels in the output image itself?

They will put the labels in the annotated images by default.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.