How to convert a general DenseBox Caffe model to be used in DeepStream

Hi, I want to know the steps to convert a general DenseBox model (Caffe framework preferred) to be used with DeepStream. I have gone through the reference sample ssd.cpp. Kindly support.
**• Hardware Platform (Jetson / GPU):** T4
**• DeepStream Version:** 4.0
**• JetPack Version (valid for Jetson only):**
**• TensorRT Version:** 5.1.5
**• NVIDIA GPU Driver Version (valid for GPU only):** 450.36.06

Hi,

First, it's recommended to upgrade to DeepStream 5.0 for overall improvements.
To run a customized model, please check this document for the detailed steps:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream_Development_Guide/deepstream_custom_model.html#wwpID0EEHA
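As a rough sketch, a Caffe model is wired into the nvinfer element through its config file. The file names, blob names, and the parser function/library names below are placeholders for illustration (and key names may differ slightly between DeepStream versions), so adapt them to your model:

```ini
[property]
gpu-id=0
net-scale-factor=1.0
# Caffe model pair
model-file=densebox.caffemodel
proto-file=deploy.prototxt
# serialized engine is generated on first run if absent
model-engine-file=densebox.caffemodel_b16_fp32.engine
batch-size=16
# 0=FP32, 1=INT8, 2=FP16
network-mode=0
# blobs to expose to the post-processing parser
output-blob-names=pixel-conv;bb-output
# custom bounding-box parser (hypothetical names)
parse-bbox-func-name=NvDsInferParseCustomDensebox
custom-lib-path=libnvdsinfer_custom_impl_densebox.so
```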

Thanks.

Hi,
I'd like to use 4.0 (hoping there are no extra issues). Also, to run a customized model, will Python be enough?
Can you also specify what advantages 5.0 has compared to 4.0? I thought 4.0 was more stable than 5.0.
Thanks

Hi,

You can find the new features and improvements for DeepStream 5.0 in our release notes:

Thanks.

Hi @AastaLLL,
I used the DenseBox model and the corresponding prototxt. I got the following error while converting it into an engine.
I'll paste the command and terminal log here:
&&&& RUNNING TensorRT.trtexec # ./trtexec --deploy=deploy.prototxt --model=densebox.caffemodel --output=pixel-loss,bb-tile --batch=16 --saveEngine=densebox.trt
[07/03/2020-11:10:39] [I] === Model Options ===
[07/03/2020-11:10:39] [I] Format: Caffe
[07/03/2020-11:10:39] [I] Model:densebox.caffemodel
[07/03/2020-11:10:39] [I] Prototxt: deploy.prototxt
[07/03/2020-11:10:39] [I] Output: pixel-loss bb-tile
[07/03/2020-11:10:39] [I] === Build Options ===
[07/03/2020-11:10:39] [I] Max batch: 16
[07/03/2020-11:10:39] [I] Workspace: 16 MB
[07/03/2020-11:10:39] [I] minTiming: 1
[07/03/2020-11:10:39] [I] avgTiming: 8
[07/03/2020-11:10:39] [I] Precision: FP32
[07/03/2020-11:10:39] [I] Calibration:
[07/03/2020-11:10:39] [I] Safe mode: Disabled
[07/03/2020-11:10:39] [I] Save engine: densebox.trt
[07/03/2020-11:10:39] [I] Load engine:
[07/03/2020-11:10:39] [I] Inputs format: fp32:CHW
[07/03/2020-11:10:39] [I] Outputs format: fp32:CHW
[07/03/2020-11:10:39] [I] Input build shapes: model
[07/03/2020-11:10:39] [I] === System Options ===
[07/03/2020-11:10:39] [I] Device: 0
[07/03/2020-11:10:39] [I] DLACore:
[07/03/2020-11:10:39] [I] Plugins:
[07/03/2020-11:10:39] [I] === Inference Options ===
[07/03/2020-11:10:39] [I] Batch: 16
[07/03/2020-11:10:39] [I] Iterations: 10
[07/03/2020-11:10:39] [I] Duration: 3s (+ 200ms warm up)
[07/03/2020-11:10:39] [I] Sleep time: 0ms
[07/03/2020-11:10:39] [I] Streams: 1
[07/03/2020-11:10:39] [I] ExposeDMA: Disabled
[07/03/2020-11:10:39] [I] Spin-wait: Disabled
[07/03/2020-11:10:39] [I] Multithreading: Disabled
[07/03/2020-11:10:39] [I] CUDA Graph: Disabled
[07/03/2020-11:10:39] [I] Skip inference: Disabled
[07/03/2020-11:10:39] [I] Input inference shapes: model
[07/03/2020-11:10:39] [I] Inputs:
[07/03/2020-11:10:39] [I] === Reporting Options ===
[07/03/2020-11:10:39] [I] Verbose: Disabled
[07/03/2020-11:10:39] [I] Averages: 10 inferences
[07/03/2020-11:10:39] [I] Percentile: 99
[07/03/2020-11:10:39] [I] Dump output: Disabled
[07/03/2020-11:10:39] [I] Profile: Disabled
[07/03/2020-11:10:39] [I] Export timing to JSON file:
[07/03/2020-11:10:39] [I] Export output to JSON file:
[07/03/2020-11:10:39] [I] Export profile to JSON file:
[07/03/2020-11:10:39] [I]
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 834:19: Message type "ditcaffe.LayerParameter" has no field named "gs_tiling_param".
[07/03/2020-11:10:40] [E] [TRT] CaffeParser: Could not parse deploy file
[07/03/2020-11:10:40] [E] Failed to parse caffe model or prototxt, tensors blob not found
[07/03/2020-11:10:40] [E] Parsing model failed
[07/03/2020-11:10:40] [E] Engine creation failed
[07/03/2020-11:10:40] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # ./trtexec --deploy=deploy.prototxt --model=densebox.caffemodel --output=pixel-loss,bb-tile --batch=16 --saveEngine=densebox.trt

Hi,

The error indicates that some parameter used in your caffe model is not supported.

Error parsing text-format ditcaffe.NetParameter: 834:19: Message type "ditcaffe.LayerParameter" has no field named "gs_tiling_param".

Could you share the layer definition with gs_tiling_param with us first?
Thanks.

Hi @AastaLLL, @kayccc,
Kindly find the layer definition with gs_tiling_param below:

gs_tiling_param {
  stride: 8
  reverse: true
}
Thanks in Advance

Hi,

Would you mind sharing the full layer definition as well?
Ex.

layer {
  name: "conv1"
  type: "Convolution"
  bottom: "scale"
  top: "conv1"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 2.0
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}

Thanks.

Hi @AastaLLL,
Kindly see the below definitions.

layer {
  name: "pixel-tile"
  type: "GSTiling"
  bottom: "pixel-conv"
  top: "pixel-conv-tiled"
  gs_tiling_param {
    stride: 8
    reverse: true
  }
}
layer {
  name: "bb-tile"
  type: "GSTiling"
  bottom: "bb-output"
  top: "bb-output-tiled"
  gs_tiling_param {
    stride: 8
    reverse: true
  }
}

Hi,

GSTiling is not a supported layer.
You can find the TensorRT support matrix here:
https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#supported-ops

Thanks.

Hi @AastaLLL,
Is there any way that I can use this layer?

Hi,

We can check this.
Please share the prototxt and caffemodel with us first.
Also, could you describe this layer in detail so we can see if an alternative can be applied?

Thanks.

Hi @AastaLLL,
I have shared it with you in a private message; kindly check.

Hi @AastaLLL,
Can you please check?

Hi,

Sorry for keeping you waiting.

It looks like GSTiling is a customized operation that is not present in official Caffe. (Please correct me if I'm wrong.)
We recommend using the earlier pixel-conv and bb-output blobs as the outputs, and adding the tiling implementation yourself:

$ /usr/src/tensorrt/bin/trtexec --deploy=./densebox.prototxt --output=pixel-conv --output=bb-output
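For the tiling step itself, here is a minimal NumPy sketch, assuming GSTiling with reverse: true behaves like a depth-to-space rearrangement, i.e. (N, C, H, W) -> (N, C/stride², H·stride, W·stride). The exact channel-to-pixel ordering of this custom layer is an assumption here, so verify it against the original Caffe implementation:

```python
import numpy as np

def gs_tiling_reverse(x, stride):
    """Sketch of GSTiling with reverse: true as a depth-to-space rearrangement.

    x: array of shape (N, C, H, W); C must be divisible by stride^2.
    Returns an array of shape (N, C // stride^2, H * stride, W * stride).
    """
    n, c, h, w = x.shape
    assert c % (stride * stride) == 0, "channels must be divisible by stride^2"
    c_out = c // (stride * stride)
    # view the channel axis as (stride, stride, c_out) sub-blocks
    x = x.reshape(n, stride, stride, c_out, h, w)
    # interleave the sub-blocks into the spatial dimensions
    x = x.transpose(0, 3, 4, 1, 5, 2)  # -> (N, c_out, H, stride, W, stride)
    return x.reshape(n, c_out, h * stride, w * stride)
```

With stride 8, a (1, 64, H, W) pixel-conv blob becomes a (1, 1, 8H, 8W) score map at the tiled resolution.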

Thanks.

Hi, I actually implemented this in TensorRT: https://github.com/nwesem/mtcnn_facenet_cpp_tensorRT. I want to know how I can implement the same in DeepStream. Can you help out?

Hi,
How do I implement the gs-tiling layer?

Hi,

There are two possible ways to achieve this:

1. Add GSTiling support as a TensorRT plugin.

2. Add the GSTiling support in a DeepStream bounding-box parser.
You can find samples that deal with custom outputs in the following folders:
/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_FasterRCNN
/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD
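For option 2, the parser would receive the raw pixel-conv/bb-output tensors and decode boxes itself. A hedged sketch in Python (DeepStream parsers are C++, but the decode logic is the same), assuming the usual DenseBox convention of a per-pixel score plus four corner offsets; the exact output convention and the function name here are assumptions, so verify against how the model was trained:

```python
import numpy as np

def parse_densebox(score_map, bbox_map, threshold=0.7):
    """Hypothetical DenseBox-style box decode.

    score_map: (H, W) per-pixel confidence, already tiled back to image scale.
    bbox_map:  (4, H, W) per-pixel offsets (dx1, dy1, dx2, dy2) from the
               pixel location to the box corners.
    Returns a list of (x1, y1, x2, y2, score) tuples.
    """
    boxes = []
    ys, xs = np.where(score_map >= threshold)  # pixels above threshold
    for y, x in zip(ys, xs):
        dx1, dy1, dx2, dy2 = bbox_map[:, y, x]
        # corners are offsets relative to the responding pixel
        boxes.append((x - dx1, y - dy1, x + dx2, y + dy2,
                      float(score_map[y, x])))
    return boxes
```

In a real deployment you would follow this with non-maximum suppression and port the same loop into the custom parser library referenced by the nvinfer config.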

Thanks.