We’ve been using TLT to run a number of experiments with the MaskRCNN framework in a research context, and there are a few things limiting us from actually applying it at scale:
It seems there is no way to get the actual predictions made by the model in either evaluate or inference mode (tlt-train maskrcnn, tlt-evaluate maskrcnn, tlt-infer maskrcnn). The only outputs are annotated image samples. I am not sure whether the predictions become visible in a DeepStream context, but I should be able to get raw predictions directly from the model, even one that has only just been initialized; otherwise the workflow doesn’t make much sense for us. (The first sketch below this list illustrates what I mean by “actual predictions”.)
There are no options to adjust visualization settings during inference, which we need in order to build custom visualizations on top of the results.
There is no way to export the .tlt model to HDF5, SavedModel, or any other file format that can be read by open-source libraries. (The second sketch below shows how we would like to consume such an export.)
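
To clarify the first point, this is roughly the kind of machine-readable output we are looking for instead of annotated images. The file name and schema below are purely hypothetical, just to illustrate the request:

```python
# Illustration only: what "actual predictions" would look like to us.
# "predictions.json" and its schema are hypothetical, not an existing tlt-infer output.
import json

with open("predictions.json") as f:
    detections = json.load(f)

for det in detections:
    # One entry per detected instance: image id, class id, box, confidence score.
    print(det["image_id"], det["category_id"], det["bbox"], det["score"])
```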
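
And to clarify the third point, here is a minimal sketch of how we would consume an exported model with open-source tooling, assuming it could be exported as a TensorFlow SavedModel (the path, input shape, and output tensors are assumptions on my side):

```python
# Sketch under the assumption that a TensorFlow SavedModel export existed.
import numpy as np
import tensorflow as tf

model = tf.saved_model.load("exported_maskrcnn_savedmodel")   # hypothetical export path
infer = model.signatures["serving_default"]

# Look up the serving signature's input name instead of hard-coding it.
input_name = list(infer.structured_input_signature[1].keys())[0]

dummy_image = np.zeros((1, 832, 1344, 3), dtype=np.float32)   # input shape is an assumption
outputs = infer(**{input_name: tf.constant(dummy_image)})

# Raw prediction tensors (e.g. boxes, scores, classes, masks) would be available here.
print({name: tensor.shape for name, tensor in outputs.items()})
```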
Please give me a short answer: are there any plans to address these issues, or am I simply unaware of existing capabilities that could help?
Thanks in advance for the answer.