How to implement YOLOv3 with TensorRT on TX2?

Hello, everyone

I want to speed up YOLOv3 on my TX2 by using TensorRT.

I have already converted the Darknet model to a Caffe model, and I can run YOLOv2 with TensorRT now.

I have looked at the DeepStream 2.0 YOLOv3 example, and it does not have an upsampling layer among its plugin layers.

I guess it uses deconvolution instead of upsampling. Is that right?

My prototxt is shown below. If so, could I directly change the "Upsample" layer to "Deconvolution" and rewrite the parameters?

layer {
    bottom: "layer85-conv"
    top: "layer85-conv"
    name: "layer85-act"
    type: "ReLU"
    relu_param {
        negative_slope: 0.1
    }
}
layer {
    bottom: "layer85-conv"
    top: "layer86-upsample"
    name: "layer86-upsample"
    type: "Upsample"
    upsample_param {
        scale: 2
    }
}
layer {
    bottom: "layer86-upsample"
    bottom: "layer62-shortcut"
    top: "layer87-route"
    name: "layer87-route"
    type: "Concat"
}

Hi,

Where is the upsampling layer in your model?
If the layer is at the end of the model, you can handle it with a third-party library rather than TensorRT.

If not, you can try to replace it with a deconvolution layer.
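
For example, a 2x nearest-neighbor upsample can usually be expressed as a grouped deconvolution with a 2x2 all-ones kernel and stride 2, so each input pixel is copied into a 2x2 block. Below is a rough sketch of a possible replacement for your layer86-upsample, assuming layer85-conv has 256 channels; adjust num_output and group to your actual channel count:

layer {
    bottom: "layer85-conv"
    top: "layer86-upsample"
    name: "layer86-upsample"
    type: "Deconvolution"
    convolution_param {
        # one 2x2 all-ones filter per channel keeps the channels independent
        num_output: 256    # assumed channel count of layer85-conv; change if yours differs
        group: 256
        kernel_size: 2
        stride: 2
        pad: 0
        bias_term: false
        weight_filler {
            type: "constant"
            value: 1
        }
    }
}

Note that weight_filler only takes effect when the network weights are (re)initialized. If you load weights from an existing caffemodel, you may need to write the constant 2x2 kernels into this layer yourself (for example with pycaffe) before building the TensorRT engine.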

Thanks.

Hi jack_gao, have you implemented YOLOv3 or YOLOv2 with TensorRT on TX2? If yes, please help.

Hi,

Here is a tutorial for YOLOv2/YOLOv3 for your reference:
[url]https://github.com/vat-nvidia/deepstream-plugins[/url]

Thanks.

The UPSAMPLE operation in the repo supports only square input. Could you please show how to replace the UPSAMPLE with a deconvolution layer? Thanks.

Hi,

Do you mean the upsample function here?
[url]https://github.com/vat-nvidia/deepstream-plugins/blob/master/sources/lib/trt_utils.cpp#L521[/url]

This function also checks the width and height dimensions of the input, so it is able to take non-square input.

Am I missing anything?

Thanks.