Hello NVIDIA. I followed the linked blog post to train a custom instance segmentation model with NVIDIA TAO's Mask R-CNN. However, both the article and the accompanying workflow contain only stubs on how to deploy the model with DeepStream on the Jetson Nano.
Both articles assert that deploying the model on DeepStream is a "very simple process": update the spec file that configures the gst-nvinfer element to point to the newly exported model. For reference, I exported the model as FP16 and received a .etlt file. However, I've been unable to find any such spec file, and the corresponding article in the DeepStream dev guide only seems to provide walkthroughs for pre-trained models.
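For anyone else stuck at this step: the "spec file" the article refers to is a gst-nvinfer configuration file. Below is a minimal sketch of what one might look like for a TAO Mask R-CNN .etlt model. All file paths, the model key, and the class count are placeholders you must replace; the parser function and output blob names are taken from NVIDIA's PeopleSegNet sample and may differ for your model/DeepStream version.

```ini
# Hedged sketch of a gst-nvinfer config for a TAO Mask R-CNN .etlt model.
# Paths, key, and class count below are placeholders; substitute your own.
[property]
gpu-id=0
net-scale-factor=0.017507
offsets=123.675;116.280;103.53
model-color-format=0

# Your exported TAO model and the key you used at export time
tlt-encoded-model=./your_model.etlt
tlt-model-key=your_tao_key
# Engine file gst-nvinfer builds on first run (FP16 export => network-mode=2)
model-engine-file=./your_model.etlt_b1_gpu0_fp16.engine
network-mode=2

labelfile-path=./labels.txt
num-detected-classes=2
# network-type=3 selects instance segmentation
network-type=3
segmentation-threshold=0.3

# Output tensors and the custom parser used by the PeopleSegNet sample
output-blob-names=generate_detections;mask_fcn_logits/BiasAdd
parse-bbox-instance-mask-func-name=NvDsInferParseCustomMRCNNTLTV2
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser.so
output-instance-mask=1
```

If your DeepStream installation ships the custom parser library under a different path (it varies by version), adjust custom-lib-path accordingly.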
I’m completely lost: is there a README that I can refer to for help with how to integrate and deploy my model with DeepStream? I’m using a Jetson Nano DevKit, for reference.
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (For bugs: which sample app is used, the content of the configuration files, the command line used, and other details needed to reproduce)
• Requirement details (For new requirements: the module name, i.e. which plugin or which sample application, and a description of the function)
• The pipeline being used
Hi, thanks for the link! The article mentions that you need two config files, one of which is deepstream_app_source1_peoplesegnet.txt, but I couldn't find this file anywhere on GitHub. Is there an up-to-date version or equivalent that I can use/modify for my own Mask R-CNN model (trained using NVIDIA TAO)?
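In case the sample file really is missing from the repo: it is just a top-level deepstream-app config that wires a source and a sink to the inference spec file. A minimal hedged sketch follows; the URI, resolutions, and the referenced inference config file name are placeholders you would replace with your own.

```ini
# Minimal deepstream-app config sketch; paths and URIs are placeholders.
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[source0]
enable=1
# type=3 is a URI (file) source
type=3
uri=file:///path/to/your_video.mp4
num-sources=1

[streammux]
batch-size=1
width=1280
height=720

[primary-gie]
enable=1
# Points at the gst-nvinfer spec file for your .etlt model
config-file=config_infer_primary_mrcnn.txt

[osd]
enable=1
display-mask=1

[sink0]
enable=1
# type=2 renders on screen (EGL sink)
type=2
sync=0
```

You would then run it with `deepstream-app -c <this_file>`; the heavy lifting (model, parser, thresholds) all lives in the gst-nvinfer config it references.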
Thanks, I'll try this soon. I removed the sample files to save space when installing DeepStream and TensorRT on the Jetson Nano, but I'll try reinstalling them.