Predicting on an image with a TAO Toolkit v5.5.0-trained Mask2Former instance segmentation model (.pth)

Please provide the following information when requesting support.

• Hardware (x86 Linux, Ubuntu 22.04 Jammy, RTX 3090)
• Network Type (Mask2Former)
• Training spec file (spec.txt, 1.6 KB)

I have trained a Mask2Former .pth model on my custom dataset of 5 classes. Now I need to run inference with the trained model on an input image and get the predicted mask values. These operations (loading the model and running prediction) should live in a standalone .py app; I need to know how to do this.

You can refer to nvidia_tao_pytorch/cv/mask2former/scripts/inference.py in the NVIDIA/tao_pytorch_backend repo.

As you say, I am trying to use tao_pytorch_backend/nvidia_tao_pytorch/cv/mask2former/scripts/inference.py (NVIDIA/tao_pytorch_backend on GitHub) for inference with my trained model on my local system. For that I just followed the README of the repo. For getting started, it says to run:

source ${PATH_TO_REPO}/scripts/envsetup.sh

Terminal Output:
TAO pytorch build environment set up.

The following environment variables have been set:

NV_TAO_PYTORCH_TOP /home/vanorhq/tao_pytorch_backend

The following functions have been added to your environment:

tao_pt Run command inside the container.
(base) vanorhq@vanorhq:~$

Then, from the repo root, I ran pip install -e . to set it up for local use.

It is installed properly as well:

(base) vanorhq@vanorhq:~$ pip show nvidia-tao-pytorch
Name: nvidia-tao-pytorch
Version: 5.2.0.1
Summary: NVIDIA’s package for DNN implementation on PyTorch for use with TAO Toolkit.
Home-page:
Author: NVIDIA Corporation
Author-email:
License: NVIDIA Proprietary Software
Location: /home/vanorhq/tao_pytorch_backend
Editable project location: /home/vanorhq/tao_pytorch_backend
Requires:
Required-by:
(base) vanorhq@vanorhq:~$

But when I try the imports needed to run inference.py, I get the error below.

TAO already provides Docker images, so the easier way is to use the TAO launcher or docker run.
If you use the TAO launcher, then after installing it you can refer to the command mentioned in the notebook:
! tao model mask2former inference -e $SPECS_DIR/spec_inst.yaml inference.checkpoint=$RESULTS_DIR/train/mask2former_model.pth

If you use docker run, you can run the following:
$ docker run --runtime=nvidia -it --rm nvcr.io/nvidia/tao/tao-toolkit:5.5.0-pyt /bin/bash
Then run the inference command inside it:
# mask2former inference xxx

But my requirement is different; let me explain what I am trying to achieve.

We have a Python file named obj_detect_extract.py which should load the trained model and run prediction on an input image containing the object to be detected. As output, I need the predicted masks and classes in the scene.

Here is the key part: the model is the .pth Mask2Former trained with the TAO Toolkit Jupyter notebook, which is in the tao_launcher_starter_kit folder of the tao_tutorial repo.

Just give me a way to use the TAO Toolkit-trained .pth model in my obj_detect_extract.py file, run detection, and then extract the outputs I need, i.e. the predicted masks and classes.

It is possible. You can leverage the source code (tao_pytorch_backend/nvidia_tao_pytorch/cv/mask2former/scripts/inference.py on GitHub) to build a standalone script that runs inference against the .pth model.

Also, you can use trtexec (refer to "TRTEXEC with Mask2former" in the NVIDIA docs) to generate a TensorRT engine, and then leverage the tao-deploy code (tao_deploy/nvidia_tao_deploy/cv/mask2former/scripts/inference.py on GitHub) to build another standalone script that runs inference against the TensorRT engine.
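In either standalone script, the final step the question asks about (extracting masks and classes) follows the usual Mask2Former-style postprocessing: softmax the per-query class logits, drop the trailing "no object" class, keep confident queries, and binarize their mask logits. A rough NumPy sketch, where the shapes, the "no object" convention, and the threshold are all assumptions to check against your model's actual outputs:

```python
import numpy as np

def extract_instances(class_logits, mask_logits, score_thresh=0.5):
    """Turn raw query outputs into (labels, scores, binary masks).

    class_logits: [num_queries, num_classes + 1]; the last column is
                  assumed to be the 'no object' class.
    mask_logits:  [num_queries, H, W] mask logits per query.
    """
    # softmax over classes for each query
    e = np.exp(class_logits - class_logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    scores = probs[:, :-1].max(axis=1)    # best real-class probability
    labels = probs[:, :-1].argmax(axis=1)
    keep = scores > score_thresh          # drops low-confidence / 'no object'
    masks = (mask_logits[keep] > 0).astype(np.uint8)  # binarize at logit 0
    return labels[keep], scores[keep], masks
```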

Both repos' inference scripts (tao_pytorch_backend/nvidia_tao_pytorch/cv/mask2former/scripts/inference.py and tao_deploy/nvidia_tao_deploy/cv/mask2former/scripts/inference.py on GitHub) have an unresolved problem: a missing module named 'nvidia_tao_core'. The issue is still open on one of the repos: ModuleNotFoundError: No module named 'nvidia_tao_core' · Issue #26 · NVIDIA/tao_pytorch_backend · GitHub.

Because of this issue I am not able to run the inference.py file. Can you please look into this problem and help out if you can?

The import should be from nvidia_tao_pytorch.core.telemetry.telemetry import send_telemetry_data.
If you leverage the source code to build standalone inference code, you can simply ignore this send_telemetry_data.
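Until that import is fixed upstream, one generic workaround (a sketch, not an official fix) is to register a stub module under the missing name before importing anything from the repo, so any stray import of nvidia_tao_core resolves to a no-op placeholder:

```python
import sys
import types

def stub_module(name):
    """Register an empty placeholder module under the given name."""
    mod = types.ModuleType(name)
    sys.modules[name] = mod
    return mod

# Hypothetical stub for the missing package; any attribute the repo
# code actually uses (e.g. a telemetry helper) must be attached as a no-op.
core = stub_module("nvidia_tao_core")
core.send_telemetry_data = lambda *args, **kwargs: None  # do nothing

import nvidia_tao_core  # now resolves to the stub instead of raising
```

If the failing import targets a dotted submodule (e.g. nvidia_tao_core.something), that full dotted name needs its own stub entry in sys.modules as well.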

Thanks, now it is importing. But I can't see any results from my prediction; it prints out None. Can you help me out?

python_code_py_file.txt (2.1 KB)
terminal_output.txt (26.8 KB)
spec.txt (1.6 KB)

Can you debug your code inside the Docker container?
$ docker run --runtime=nvidia -it --rm nvcr.io/nvidia/tao/tao-toolkit:5.5.0-pyt /bin/bash

Then, run your standalone code.

BTW, the original code can be found in /usr/local/lib/python3.10/dist-packages/nvidia_tao_pytorch/cv/mask2former/

You can leverage the original code.