Running inference outside of TLT pipeline / getting predictions

Hello,

We’ve been using TLT to run a number of experiments with the MaskRCNN framework in a research context, and a few things are preventing us from actually applying it at scale:

  1. There seems to be no way to get the actual predictions made by the model in either evaluate or inference mode (tlt-train maskrcnn, tlt-evaluate maskrcnn, tlt-infer maskrcnn). The only output is the annotated image samples. I am not sure whether the predictions become accessible in a DeepStream context, but I should be able to simply get predictions out of the model even when it has only just been initialized; otherwise it doesn’t make sense.

  2. There are no options to adjust visualization settings during inference, which we need in order to build custom visual output.

  3. There is no way to export the .tlt model to HDF5, SavedModel, or any other file format that can be read by open-source libraries.

Please give me a short answer as to whether there are any plans to address these issues, or whether I am simply unaware of capabilities that could help me.

Thanks for the answer

For 1) Could you follow the MaskRCNN — Transfer Learning Toolkit 3.0 documentation?
For 2) Could you give more details about the visualization settings you need? I will sync with the internal team.
For 3) Please see https://docs.nvidia.com/metropolis/TLT/tlt-user-guide/text/instance_segmentation/mask_rcnn.html#exporting-the-model and the MaskRCNN — Transfer Learning Toolkit 3.0 documentation again. The .tlt model can be exported to an .etlt file. Then you can generate a TensorRT engine based on the .etlt file.
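As a rough sketch, the export and conversion steps look something like the following (paths, the $KEY value, the input dimensions and the output node names are illustrative and depend on your training spec and TLT version; please double-check the flags with tlt mask_rcnn export --help and tlt-converter -h):

# Export the trained .tlt model to an encrypted .etlt file
tlt mask_rcnn export -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/model.step-$NUM_STEP.tlt \
                     -k $KEY \
                     -e $SPECS_DIR/maskrcnn_train_resnet50.txt \
                     -o $USER_EXPERIMENT_DIR/export/model.step-$NUM_STEP.etlt

# Build a TensorRT engine from the .etlt file on the deployment machine
tlt-converter -k $KEY \
              -d 3,832,1344 \
              -o generate_detections,mask_fcn_logits/BiasAdd \
              -e $USER_EXPERIMENT_DIR/export/model.step-$NUM_STEP.engine \
              -t fp16 \
              $USER_EXPERIMENT_DIR/export/model.step-$NUM_STEP.etlt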

Hi, thank you for the reply!

  1. I had checked the documentation before writing here, but I cannot find a way to output the model predictions (into JSON or any other format) alongside the annotated images when using tlt-infer for MaskRCNN.
  2. Basically two things: control over bounding-box line thickness and per-instance mask colorization;
  3. As I’ve mentioned, I do not need to prepare the model for DeepStream; I want to be able to work with the trained model outside of TLT. My understanding is that I cannot save a model trained with TLT in any format other than a .tlt or .etlt file.

If you do not want to run inference with DeepStream, you need to write a standalone script that runs inference against a TensorRT engine (this engine is generated by tlt-converter from the .etlt file).
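To make the standalone route concrete, here is a minimal sketch (not an official TLT script) that runs a serialized TensorRT engine from Python using the tensorrt and pycuda packages. The engine path, the placeholder input tensor, and the implicit-batch assumption are all hypothetical and depend on how you built the engine; the preprocessing and the decoding of the raw output tensors into boxes and masks are model-specific and must mirror what tlt-infer or DeepStream do.

# Minimal standalone inference sketch (assumptions: TensorRT 7.x-era Python API,
# an engine built by tlt-converter with an implicit batch dimension, placeholder paths).
import numpy as np
import pycuda.autoinit          # noqa: F401 - initializes the CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
ENGINE_PATH = "model.step-25000.engine"   # placeholder: your tlt-converter output

# Deserialize the engine and create an execution context.
with open(ENGINE_PATH, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host/device buffers for every binding (network inputs and outputs).
bindings, host_bufs, dev_bufs = [], {}, {}
for i in range(engine.num_bindings):
    name = engine.get_binding_name(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    size = trt.volume(engine.get_binding_shape(i))
    host_bufs[name] = np.zeros(size, dtype=dtype)
    dev_bufs[name] = cuda.mem_alloc(host_bufs[name].nbytes)
    bindings.append(int(dev_bufs[name]))

input_name = next(engine.get_binding_name(i)
                  for i in range(engine.num_bindings)
                  if engine.binding_is_input(i))

# Preprocess one image exactly as the training pipeline does (resize/pad to the
# network input size, normalize, CHW layout). A random tensor stands in here.
input_shape = engine.get_binding_shape(engine.get_binding_index(input_name))
image = np.random.rand(*input_shape).astype(host_bufs[input_name].dtype)
np.copyto(host_bufs[input_name], image.ravel())

# Copy input to the GPU, run inference, copy outputs back.
stream = cuda.Stream()
cuda.memcpy_htod_async(dev_bufs[input_name], host_bufs[input_name], stream)
# Use context.execute_async_v2(...) instead if your engine has an explicit batch dim.
context.execute_async(batch_size=1, bindings=bindings, stream_handle=stream.handle)
for name in host_bufs:
    if name != input_name:
        cuda.memcpy_dtoh_async(host_bufs[name], dev_bufs[name], stream)
stream.synchronize()

# The raw prediction tensors are now on the host; reshape/decode them and dump
# them to JSON or any format you like. The decoding into boxes, classes and
# masks is model-specific post-processing.
for name in host_bufs:
    if name != input_name:
        print(name, host_bufs[name].shape, host_bufs[name].dtype)

The custom MaskRCNN bbox/mask parser shipped with the DeepStream TLT sample apps can serve as a reference for implementing that final decoding step in Python.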

For reference, the MaskRCNN Jupyter notebook does something similar, but it uses tlt-infer, which is included in the docker:

# Running inference for detection on a dir of images
!tlt mask_rcnn inference -i $DATA_DOWNLOAD_DIR/raw-data/test2017 \
                         -o $USER_EXPERIMENT_DIR/maskrcnn_annotated_images \
                         -e $SPECS_DIR/maskrcnn_train_resnet50.txt \
                         -m $USER_EXPERIMENT_DIR/export/model.step-$NUM_STEP.engine \
                         -l $USER_EXPERIMENT_DIR/maskrcnn_annotated_labels \
                         -c $SPECS_DIR/coco_labels.txt \
                         -t 0.5 \
                         --include_mask

Ok, let me put it differently. If I do run inference with DeepStream, will I get the actual predictions made by the trained model, or will it output only annotated images, as is the case with tlt mask_rcnn inference?

By default, it will not output annotated images. It will show the actual predictions.
Have you run DeepStream before? Please feel free to give it a try.
Also, I suggest you follow the MaskRCNN — Transfer Learning Toolkit 3.0 documentation.

For the standalone way, you can also refer to How preform inference retinanet using a TLT export .engine file by python

No, I have not run DeepStream before; that’s why I am asking.

Okay, so let me summarize:

  1. tlt mask_rcnn inference outputs annotated images only;
  2. If I export the model, convert it to a TensorRT engine, and run inference in a DeepStream environment, I will get the predictions.