Please provide the following information when requesting support.
• Hardware (x86 Linux, Ubuntu 22.04 Jammy, RTX 3090)
• Network Type (Mask2Former)
• Training spec file: spec.txt (1.6 KB)
I have trained a Mask2Former .pth model on my custom dataset of 5 classes. Now I need to run inference with the trained model on an input image and get the predicted mask values. The whole workflow, loading the model and running the prediction, should live in a single .py app, and I need to know how to do that.
TAO already provides the dockers, so the easiest way is to use the tao launcher or docker run.
If you use the tao launcher, then after installing it you can refer to the command mentioned in the notebook:
! tao model mask2former inference -e $SPECS_DIR/spec_inst.yaml inference.checkpoint=$RESULTS_DIR/train/mask2former_model.pth
If you use docker run, you can run the following.
$ docker run --runtime=nvidia -it --rm nvcr.io/nvidia/tao/tao-toolkit:5.5.0-pyt /bin/bash
Then run the below command inside the container. # mask2former inference xxx
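If you ultimately want to drive this from a Python script rather than a notebook cell, a minimal sketch is to shell out to the same launcher command. This assumes the tao launcher is installed and configured, and the spec/checkpoint paths below are placeholders you would replace with your own.

import subprocess

SPEC = "/workspace/specs/spec_inst.yaml"                        # placeholder path
CHECKPOINT = "/workspace/results/train/mask2former_model.pth"   # placeholder path

# Same command as in the notebook, invoked from Python.
cmd = [
    "tao", "model", "mask2former", "inference",
    "-e", SPEC,
    f"inference.checkpoint={CHECKPOINT}",
]

# check=True raises CalledProcessError if the launcher exits with a non-zero status.
subprocess.run(cmd, check=True)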
But my requirement is different, so let me explain what I am trying to achieve.
We have a Python file named obj_detect_extract.py which should import the trained model and run prediction on an input image containing the objects to be detected; as output I need the predicted masks and classes in the scene.
Here is the key part: the model is the .pth Mask2Former trained with the TAO Toolkit Jupyter notebook from the tao_tutorial repo’s tao_launcher_starter_kit folder.
Just give me a way to use the TAO Toolkit .pth trained model in my obj_detect_extract.py file so I can run detection and then extract what I need, such as the predicted masks and classes.
It should be: from nvidia_tao_pytorch.core.telemetry.telemetry import send_telemetry_data.
If you leverage the source code to build standalone inference code, you can ignore this send_telemetry_data call.
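Below is a rough, non-authoritative sketch of what a standalone obj_detect_extract.py could look like. It assumes you copy or adapt the TAO Mask2Former model-building code into a local module (the import of build_mask2former is a hypothetical stand-in for that), that the checkpoint keeps its weights under a "state_dict" key, and that the raw outputs expose pred_logits / pred_masks as in the reference Mask2Former implementation; verify all of these against the nvidia_tao_pytorch source for your TAO version.

import torch
from PIL import Image
from torchvision import transforms

# Hypothetical local module: copy/adapt the Mask2Former model-building code from
# the nvidia_tao_pytorch source into it so the architecture matches training.
from mask2former_builder import build_mask2former

CHECKPOINT = "mask2former_model.pth"   # placeholder: your trained .pth
NUM_CLASSES = 5                        # matches the custom dataset


def load_model(checkpoint_path: str) -> torch.nn.Module:
    # Build the network exactly as the TAO training code does, then load weights.
    model = build_mask2former(num_classes=NUM_CLASSES)
    ckpt = torch.load(checkpoint_path, map_location="cpu")
    state_dict = ckpt.get("state_dict", ckpt)   # assumption: weights under "state_dict"
    model.load_state_dict(state_dict, strict=False)
    model.eval()
    return model


def predict(model: torch.nn.Module, image_path: str):
    # Preprocessing is an assumption; match the resolution/normalization in spec.txt.
    preprocess = transforms.Compose([
        transforms.Resize((1024, 1024)),
        transforms.ToTensor(),
    ])
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)      # shape [1, 3, H, W]
    with torch.no_grad():
        outputs = model(batch)
    # Key names follow the reference Mask2Former implementation; they may differ
    # in the TAO code, so inspect the outputs and adjust.
    classes = outputs["pred_logits"].softmax(-1).argmax(-1)   # [1, num_queries]
    masks = outputs["pred_masks"].sigmoid() > 0.5             # [1, num_queries, h, w]
    return classes[0], masks[0]


if __name__ == "__main__":
    model = load_model(CHECKPOINT)
    classes, masks = predict(model, "test_image.jpg")
    print("Predicted class index per query:", classes.tolist())
    print("Mask tensor shape:", tuple(masks.shape))

The exact output structure depends on the TAO Mask2Former head, so print the keys of outputs first and adjust the post-processing accordingly.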