I have gone through the process of training a segmentation model (MaskRCNN) on the COCO-2017 dataset in TLT-3.0, using the default configuration provided with NVIDIA TLT-3.0, and I get errors when deploying the model trained on that default dataset.
Now I am trying to build a custom segmentation model from a custom dataset.
I have annotated the data using Intel CVAT and exported it to TFRecord format, which is fed into the config file for the training process.
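For context, the data section of my training spec follows the default TLT-3.0 MaskRCNN template; the file patterns, paths, and class count below are placeholders for my setup, not my exact values:

```
data_config {
  image_size: "(832, 1344)"
  augment_input_data: True
  # placeholder patterns pointing at the TFRecords exported from CVAT
  training_file_pattern: "/workspace/data/train*.tfrecord"
  validation_file_pattern: "/workspace/data/val*.tfrecord"
  val_json_file: "/workspace/data/annotations/instances_val.json"
  # single foreground class; whether BG must be counted is part of my question
  num_classes: 2
  skip_crowd_during_training: True
}
```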
My questions:
Are there any constraints I have to deal with before changing the training-steps parameter?
Can it take both JPEG and PNG as dataset formats?
How do I deploy the model into DeepStream, and which sample application is meant for segmentation?
Is it the deepstream-mrcnn-app or the deepstream-segmentation-test?
Is it mandatory for the dataset to have a BG (background) class for both semantic and instance segmentation models?
After successfully generating a model with the default/custom dataset, when I try to deploy it under DeepStream with a config file similar to the one shown here, https://developer.nvidia.com/blog/training-instance-segmentation-models-using-maskrcnn-on-the-transfer-learning-toolkit/, I end up getting errors such as:
a) While trying to run the model using the deepstream-segmentation-test application:
0:00:00.298115171 563 0x561db309c630 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Validator error: generate_detections: Unsupported operation _GenerateDetection_TRT
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:00.691056241 563 0x561db309c630 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
Segmentation fault (core dumped)
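From what I have read, the "Unsupported operation _GenerateDetection_TRT" error suggests the TensorRT OSS plugins required by MaskRCNN are not available to DeepStream, and that the nvinfer element needs the TLT custom parser library. This is a sketch of the [property] group I am trying, based on the MaskRCNN sample config in deepstream_tao_apps; the paths and key are placeholders and the exact values are assumptions on my part:

```
[property]
gpu-id=0
tlt-model-key=<my_model_key>
tlt-encoded-model=/path/to/model.etlt
labelfile-path=/path/to/labels.txt
# 3 = instance segmentation
network-type=3
output-blob-names=generate_detections;mask_head/mask_fcn_logits/BiasAdd
# TLT MaskRCNN post-processing from deepstream_tao_apps (assumed names)
parse-bbox-instance-mask-func-name=NvDsInferParseCustomMrcnnTLT
custom-lib-path=/path/to/libnvds_infercustomparser_tlt.so
```

Even with a config along these lines, the UFF parser error above still points at the _GenerateDetection_TRT plugin itself not being found at engine-build time.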
b) While trying to run the model using the deepstream-app:
ERROR from src_bin_muxer: Output width not set
Debug info: gstnvstreammux.c(2283): gst_nvstreammux_change_state (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstNvStreamMux:src_bin_muxer
App run failed
The errors were the same for both models, the one trained on the provided default dataset and the one trained on the custom dataset. The training process itself completed without any errors; the custom dataset is a single-class dataset.
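For error (b), I suspect the deepstream-app config is missing the streammux output resolution, since the error message points at the muxer's output width. A minimal sketch of the group I believe is required (the resolution values here are placeholders, not a verified fix):

```
[streammux]
gpu-id=0
batch-size=1
# without these two keys, deepstream-app fails with "Output width not set"
width=1280
height=720
```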