Hi,
We have two custom layers in our Faster R-CNN Caffe model, a proposal layer and an ROI align layer, which are written in C++ and supported by Caffe. We are now using IPluginExt to make TensorRT support these two custom layers, but we have some questions:
- Do we still need to rewrite the layers in C++ in TensorRT style to get them supported, or can we directly register the two layers in TensorRT and modify the data interfaces?
- Do you have more specific documentation on adding custom layers? Or could we have a direct talk?
Thanks
Hi,
No, direct registration is not possible. It’s required to follow the TensorRT plugin API for the implementation.
Thanks.
Hi,
This week we will focus on solving the custom layer problems in our Faster R-CNN Caffe model. There are still many questions I am confused about, and I hope we can have a phone call to discuss them:
- The custom layers are already written as Caffe layers in C++ libraries from the training step, but TensorRT cannot recognize these custom layers, so we need to describe them to TensorRT in its own way. However, our custom layers contain many operations that need to run on the CPU, not in CUDA, so I don’t know whether TensorRT can handle them.
- I don’t know how to map the related Caffe parts to the corresponding TensorRT parts one to one; please see the attached pictures.
- There are many operations in the plugin samples, and I think many of them are not needed in our case. I need to know which operations are really required for adding custom layers.
I hope you can respond quickly, and we’d better have a phone call, which I think would be much more efficient.
Hi,
Do you have any feedback now?
Thanks
Hi,
I hope we can have direct contact.
In addition to the questions in my last reply, we also have the question below:
4) How do we transfer the bottom part (the input of a Caffe layer) and the top part (the output of a Caffe layer) to the corresponding TensorRT parts?
Thanks
Hi,
Could you share your network architecture? I’m trying to optimise a Faster R-CNN based network for my own use cases, so it would be helpful if we could solve our problems together.
Thanks.
Hi,
1. TensorRT only parses the model from Caffe into a TensorRT engine.
If there is any custom implementation, you will need to write it with the TensorRT plugin API.
2. TensorRT also has a Faster R-CNN sample.
It’s recommended to check whether you can use our plugin implementation directly:
/usr/src/tensorrt/samples/sampleFasterRCNN/
3. Here is a sample for writing a TensorRT plugin:
/usr/src/tensorrt/samples/samplePlugin/
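For orientation, below is a minimal sketch of what an IPluginExt subclass looks like with the TensorRT 5.x-era API used in samplePlugin. The class name ProposalPlugin and the fixed output shape are placeholder assumptions, not code taken from the samples:

// Minimal IPluginExt skeleton (TensorRT 5.x-era API); every method here is
// required by the interface. ProposalPlugin and the shapes are placeholders.
#include "NvInfer.h"
#include <cuda_runtime_api.h>

class ProposalPlugin : public nvinfer1::IPluginExt
{
public:
    int getNbOutputs() const override { return 1; }

    // Tells the builder the output ("top") shape, given the input ("bottom") shapes.
    nvinfer1::Dims getOutputDimensions(int index, const nvinfer1::Dims* inputs,
                                       int nbInputDims) override
    {
        return nvinfer1::DimsCHW(300, 5, 1);  // placeholder: 300 proposals x 5 values
    }

    bool supportsFormat(nvinfer1::DataType type,
                        nvinfer1::PluginFormat format) const override
    {
        return type == nvinfer1::DataType::kFLOAT
            && format == nvinfer1::PluginFormat::kNCHW;
    }

    void configureWithFormat(const nvinfer1::Dims* inputDims, int nbInputs,
                             const nvinfer1::Dims* outputDims, int nbOutputs,
                             nvinfer1::DataType type, nvinfer1::PluginFormat format,
                             int maxBatchSize) override {}

    int initialize() override { return 0; }
    void terminate() override {}
    size_t getWorkspaceSize(int maxBatchSize) const override { return 0; }

    // The forward pass; inputs/outputs are GPU pointers.
    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void* workspace, cudaStream_t stream) override
    {
        return 0;  // launch CUDA kernels or copy to the host here
    }

    // Only needed if the plugin has state to keep when the engine is
    // serialized; can stay empty otherwise.
    size_t getSerializationSize() override { return 0; }
    void serialize(void* buffer) override {}
};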
Thanks.
Hi,
Thanks for the response. I could use sampleFasterRCNN and run it on the TX2. More precisely, I’m looking for the process or steps to optimise a custom trained Faster R-CNN (maybe in Caffe or TF) with TensorRT.
Thank you.
Hi AastaLLL,
Please double-check my questions and attachments.
The samples don’t show the things below, or we are still confused about them:
- How to use TensorRT methods to get Caffe blob data, as shown in my attachments, including a layer’s bottom and top data and other blob data.
- Our custom layers have many OpenCV parts, which I suppose cannot be processed with CUDA. How do we deal with these parts in TensorRT?
- There are many operations in the samples, and I am not sure which are needed, such as storing weight data and so on. We need to know clearly which operations are required for plugging in custom layers.
One more thing: could you please reply faster, with the key messages?
Hi,
1. There are no Caffe blobs. TensorRT doesn’t support any Caffe API apart from the model parser.
Please implement the plugin with TensorRT directly.
The TensorRT tensors are passed at run time through this function:
virtual int enqueue(int batchSize, const void* const* inputs, void** outputs, void* workspace, cudaStream_t stream) override
2. You will need to copy the tensor values back to the CPU; see the sketch after this list.
This will cause some performance degradation.
3. Please check our document:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/c_api/classnvinfer1_1_1_i_plugin.html
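As a concrete illustration of points 1 and 2, here is a minimal sketch of an enqueue() that copies an input tensor to the host, runs the CPU-only (e.g. OpenCV) step there, and copies the result back. It belongs inside your IPluginExt subclass; the element counts mInputCount/mOutputCount and the helper doCpuWork() are placeholders you would define in your plugin:

// Sketch: running a CPU-only step inside enqueue().
// Needs <vector> and <cuda_runtime_api.h>.
// inputs[i] / outputs[i] are device pointers and play the role of Caffe's
// bottom[i]->gpu_data() and top[i]->mutable_gpu_data().
int enqueue(int batchSize, const void* const* inputs, void** outputs,
            void* workspace, cudaStream_t stream) override
{
    std::vector<float> hostIn(mInputCount);    // mInputCount: placeholder element count
    std::vector<float> hostOut(mOutputCount);  // mOutputCount: placeholder element count

    // Device -> host copy of the first input ("bottom[0]").
    cudaMemcpyAsync(hostIn.data(), inputs[0], mInputCount * sizeof(float),
                    cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);  // make sure the data has arrived on the CPU

    doCpuWork(hostIn, hostOut);     // placeholder for the OpenCV / CPU-only processing

    // Host -> device copy of the result ("top[0]").
    cudaMemcpyAsync(outputs[0], hostOut.data(), mOutputCount * sizeof(float),
                    cudaMemcpyHostToDevice, stream);
    return 0;
}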
Thanks.
Hi AastaLLL,
Thanks for your quick reply
Hi,
I have finished writing the custom layer and am now debugging it, but it encounters the error below:
"
4280:18: Message type "ditcaffe.LayerParameter" has no field named "proposal_param".
CaffeParser: Could not parse deploy file
Segmentation fault
"
The layer information is as below:
"
layer {
  name: "proposals"
  type: "Proposal"
  bottom: "cls"
  bottom: "bbox"
  bottom: "info"
  top: "proposals"
  proposal_param
  {
    feat_stride:
    basesize:
    scale:
    scale:
    ratio:
    ratio:
    boxminsize:
    pre_nms_topn:
    post_nms_topn:
    nms_thresh:
  }
}
"
Please help me check it.
Hi,
I don’t know how to deal with the "proposal_param" in this custom layer using TensorRT. Please provide your support ASAP, thanks very much.
Hi,
The simplest way is to define the parameters inside your plugin layer.
A hardcoded definition or an extra file reader should work well.
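For reference, a hardcoded version could look like the sketch below. The field names mirror the proposal_param entries from your prototxt; all values shown are placeholders, so fill in the ones you trained with:

// Sketch: hardcoding the proposal_param values inside the plugin,
// since the stock Caffe parser does not know this custom message.
// All values below are placeholders; use the ones from your prototxt.
#include <vector>

struct ProposalParam
{
    int                featStride;
    int                baseSize;
    std::vector<float> scales;
    std::vector<float> ratios;
    int                boxMinSize;
    int                preNmsTopN;
    int                postNmsTopN;
    float              nmsThresh;
};

static const ProposalParam kProposalParam{
    16,                  // feat_stride (placeholder)
    16,                  // basesize (placeholder)
    {8.f, 16.f, 32.f},   // scale (placeholder)
    {0.5f, 1.f, 2.f},    // ratio (placeholder)
    16,                  // boxminsize (placeholder)
    6000,                // pre_nms_topn (placeholder)
    300,                 // post_nms_topn (placeholder)
    0.7f                 // nms_thresh (placeholder)
};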
Thanks.
Hi AastaLLL,
I have defined the parameters inside my plugin layer, but maybe they are defined in the wrong way.
Could you give me a sample of how to define this "proposal_param"?
Thanks
Hi,
Sorry, my statement may not have been clear enough.
We don’t support custom parameters, so you will need to add the information on your own.
A standard approach is to update our Caffe parser and rebuild the library:
https://github.com/NVIDIA/TensorRT
The simplest alternative is to hardcode the parameters on your own.
It works like this:
1. Remove the proposal_param block from the prototxt so the file can be converted by the parser.
2. Add the missing information in your code, either from an extra file or defined directly in your source code (see the sketch below).
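If you go the extra-file route, a minimal reader could look like this. The one-key-one-value text format and the file name are assumptions, not a TensorRT convention:

// Sketch: loading the removed proposal_param values from a side file.
// Assumes a plain "key value" text format, e.g. a line like "feat_stride 16".
// Repeated keys such as scale/ratio would need a vector per key; kept simple here.
#include <fstream>
#include <map>
#include <string>

std::map<std::string, float> loadProposalParams(const std::string& path)
{
    std::map<std::string, float> params;
    std::ifstream file(path);
    std::string key;
    float value;
    while (file >> key >> value)
        params[key] = value;   // e.g. params["nms_thresh"] == 0.7f
    return params;
}

// Usage (example file name): auto params = loadProposalParams("proposal_param.txt");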
Thanks.
Hi,
Thanks, I will try it and then give you feedback.
Hi,
We also encountered another problem: the N (batch) value in the NCHW attributes of a reshape layer in the prototxt file is greater than 1, and then it reports the error below:
"TensorRT does not support reshape in N (batch) dimension"
This issue is urgent; please help to check it.