Convert GluonCV model to TensorRT engine

• Hardware Platform (Jetson / GPU): GTX 1650
• DeepStream Version: 5
• TensorRT Version: 7.0.0.11
• NVIDIA GPU Driver Version (valid for GPU only): 450.51
• Issue Type (questions, new requirements, bugs): question

I want to use slowfast_4x16_resnet50_kinetics400 from gluoncv.model_zoo in DeepStream. The model comes as two files (.params and .json). I tried to convert it to ONNX and then to a TensorRT engine, but I got the error attached to this post at the ONNX conversion stage.

Is there another way to convert this model into an engine so that it can run with DeepStream?

Hi,

It looks like the model is defined in the MXNet format.
If so, have you checked this page for the conversion?
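
If it helps, a minimal export sketch along the lines of that page (MXNet 1.x contrib API; the file names and the 36-frame clip length are assumptions, so match them to your GluonCV checkpoint):

```python
# Minimal MXNet -> ONNX export sketch (MXNet 1.x contrib API).
# The file names are placeholders for the GluonCV checkpoint files.
import numpy as np
from mxnet.contrib import onnx as onnx_mxnet

sym = './slowfast_4x16_resnet50_kinetics400-symbol.json'     # network definition
params = './slowfast_4x16_resnet50_kinetics400-0000.params'  # trained weights

# SlowFast consumes a 5-D clip: (batch, channels, frames, height, width).
# 36 frames (32 fast + 4 slow) is an assumption; use your actual clip length.
input_shape = (1, 3, 36, 224, 224)

onnx_mxnet.export_model(sym, params, [input_shape], np.float32,
                        'slowfast.onnx', verbose=True)
```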

Thanks.

Yes, that’s exactly the code I used to convert the model into ONNX. It gave me the error attached in the post.

Hi,

Based on their support matrix, the slice layer is supported, but with some limitations.
Could you first check whether your layer meets the requirements?

https://cwiki.apache.org/confluence/display/MXNET/ONNX+Operator+Coverage

Thanks.

I found that slice is partially supported, but couldn’t understand what the limitation is.
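
To see which slice nodes my model actually uses, I dumped them with this rough sketch (the file name may differ on your side):

```python
# List every slice-type node in the MXNet symbol JSON with its attributes,
# so the axis can be checked against the ONNX exporter's limitation.
import json

with open('slowfast_4x16_resnet50_kinetics400-symbol.json') as f:
    graph = json.load(f)

for node in graph['nodes']:
    if 'slice' in node['op'].lower():
        print(node['name'], node['op'], node.get('attrs', {}))
```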

Here are the parts of my model’s symbol .json that contain the slice layer:
(two screenshots of the slice nodes from the symbol JSON)

Hi,

The doc indicates that the slice operation needs to be applied on axis=1.
But based on your network architecture, the operation is applied on axis=2.
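
For example (a hedged illustration, not the exact GluonCV graph):

```python
# My understanding of the limitation: the MXNet 1.x ONNX exporter only maps
# slices along the channel dimension (axis=1). SlowFast splits the clip into
# its slow/fast pathways along time (axis=2), which the exporter cannot map.
import mxnet as mx

data = mx.sym.Variable('data')  # layout (N, C, T, H, W)

channel_slice = mx.sym.slice_axis(data, axis=1, begin=0, end=3)   # exportable
temporal_slice = mx.sym.slice_axis(data, axis=2, begin=0, end=4)  # rejected
```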

Thanks.


So there is no way to deploy this model with DeepStream?
I am now trying the PyTorch version of the same model from the Facebook repo instead of the MXNet one. I managed to get the ONNX file, but I still can’t convert it to an engine.
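For reference, this is roughly how I try to build the engine (a sketch with the TensorRT 7 Python API; the paths are placeholders):

```python
# Build a TensorRT engine from an ONNX file (TensorRT 7 Python API).
# The parser errors printed here are typically the unsupported-layer messages.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network(EXPLICIT_BATCH) as network, \
     trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open('slowfast.onnx', 'rb') as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))   # which layer/op failed
    builder.max_workspace_size = 1 << 30     # 1 GiB of build workspace
    engine = builder.build_cuda_engine(network)
    if engine:
        with open('slowfast.engine', 'wb') as out:
            out.write(engine.serialize())
```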
Here is the error:

Hi,

Could you check whether this model can run inference with TensorFlow?
If yes, you can use nvinferserver instead:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinferserver.html

Thanks.