Detectron2 Jetson NX

We are trying to run a Detectron2 (Mask R-CNN) model on a Jetson Xavier NX, but we are only getting 6 FPS. Does anyone have a recommendation for, or experience with, running this model on an NX?

Also, would DeepStream help with live video inference?

Thanks

Hi,

Here are some performance results for the NX:

For live video, yes, it is recommended to use DeepStream.

Thanks.

Thanks, AastaLLL. Now, for a real-time 30 FPS live video feed, which would be better: the new Jetson Xavier NX or the Jetson AGX Xavier? We would need to run Detectron2 (Mask R-CNN) inference.

@cpalomares, how did you convert Detectron2 (Mask R-CNN)? I am trying to convert a Detectron2 keypoint detector and running into some issues. My post is here. Could you please give me any advice? Thanks.

@cogbot,
we do not have it converted and working yet; we are just running Detectron2 natively on the board, no containers, but it is very slow… We are trying to get it converted.

I will share it if we are successful.

Hi,

We are going to check the Detectron2 model on the Xavier and NX.
We will share more information with you both.

Thanks.

@AastaLLL, that’s great. Thank you very much.

Thank you!!!

Hi, any update on this?
Thanks

Hi,

Sorry to keep you waiting.
We are still working on this and will keep you updated.

Thanks.

Hi, any update on this?
Thanks

Hi,

Sorry that we are still checking this.
We will share more information once we have finished.

Hi @AastaLLL, is there any update? Thanks

Same

Any updates on this?

Hi,

Sorry that we are still working on this.
We will update here once the customization is done.

Thanks.

Hi,
I’m trying to deploy a Detectron2-trained Faster R-CNN ResNet-50-FPN model to a Jetson AGX Xavier.
Are there any results on this? Could you please share some guidelines that I should follow?

Thanks.

Hi,

This may work, but it is not guaranteed.

  1. Convert your Detectron2 model into ONNX format first.
    You can find some information below:
    detectron2.export — detectron2 0.4 documentation

  2. Try to modify the TensorRT FasterRCNN (Caffe-based) sample.
    Although the file format is different, you should be able to get the required plugin layer from the sample.
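Step 1 above might look roughly like the following. This is a minimal sketch, assuming the `Caffe2Tracer` API from detectron2 0.4's `detectron2.export` module; the config choice, weights file (`model_final.pth`), sample image (`sample.jpg`), and output path are all placeholders, not something from this thread:

```python
# Hedged sketch: trace a Detectron2 Mask R-CNN and export it to ONNX.
import onnx
import torch
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.export import Caffe2Tracer, add_export_config
from detectron2.modeling import build_model
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.data.detection_utils import read_image

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg = add_export_config(cfg)            # adds export-specific config keys
cfg.MODEL.WEIGHTS = "model_final.pth"   # placeholder: your trained weights
cfg.MODEL.DEVICE = "cpu"                # trace on CPU; deploy the result on the Jetson

model = build_model(cfg)
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
model.eval()

# The tracer needs one real sample input to record the graph.
img = read_image("sample.jpg", format="BGR")  # placeholder image
inputs = [{"image": torch.as_tensor(img.astype("float32").transpose(2, 0, 1))}]

onnx_model = Caffe2Tracer(cfg, model, inputs).export_onnx()
onnx.save(onnx_model, "mask_rcnn.onnx")
```

Note that the exported graph still contains ops TensorRT has no native support for, which is why step 2 (the plugin layer) is needed.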

Thanks.

@AastaLLL, after the export, will it be a kind of TensorRT model executed by a TensorRT launcher?

Hi,

detectron2.export will output an ONNX-format model file.
You will also need to convert it with nvonnxparser and the required plugin library.
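The nvonnxparser step could be sketched with TensorRT's Python bindings along these lines (hedged: the file names are placeholders, the API shown is from the TensorRT 7.x era shipped on Jetson at the time, and any custom plugin library for the model's unsupported ops would have to be loaded, e.g. via ctypes, before parsing):

```python
# Hedged sketch: parse an exported ONNX file and build a serialized
# TensorRT engine. File names are placeholders for illustration.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")   # register TensorRT's built-in plugins

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("mask_rcnn.onnx", "rb") as f:       # placeholder ONNX file
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30           # 1 GiB build workspace
config.set_flag(trt.BuilderFlag.FP16)         # FP16 helps on Xavier/NX

engine = builder.build_engine(network, config)
with open("mask_rcnn.engine", "wb") as f:     # serialized engine for deployment
    f.write(engine.serialize())
```

The serialized engine can then be deserialized and run with a TensorRT runtime, or handed to DeepStream for the live-video pipeline discussed earlier in the thread.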

Thanks.