A problem about the objectDetector_FasterRCNN

bool isPluginV2(const char* name) override 
{ 
  // From the objectDetector_FasterRCNN sample: the fused RPN/ROI-pooling
  // layer is matched by its layer name in the prototxt.
  return !strcmp(name, "RPROIFused"); 
}

In a Caffe prototxt, can the name parameter only be the layer name? How do I create a plugin for a layer type?

layer {
  bottom: "res_5_block0_conv_sep_batchnorm"
  top: "res_5_block0_conv_sep_relu"
  name: "res_5_block0_conv_sep_relu"
  type: "PReLU"
}

I want to create a plugin that handles every layer whose type is "PReLU".
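For reference, isPluginV2() belongs to nvcaffeparser1::IPluginFactoryV2 together with createPlugin(); here is a minimal sketch of the factory I am writing (signatures as in the TensorRT 5.x NvCaffeParser.h; the createPlugin body is omitted):

#include "NvCaffeParser.h"
#include <cstring>

class PReLUPluginFactory : public nvcaffeparser1::IPluginFactoryV2
{
public:
    // The parser asks this for each layer; returning true routes the layer
    // to createPlugin() below. I want this to match by type, not by name.
    bool isPluginV2(const char* name) override
    {
        return !std::strcmp(name, "PReLU");
    }

    nvinfer1::IPluginV2* createPlugin(const char* layerName,
                                      const nvinfer1::Weights* weights,
                                      int nbWeights,
                                      const char* libNamespace) override
    {
        // Build and return the PReLU plugin for this layer (omitted here).
        return nullptr;
    }
};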

In a Caffe prototxt, can the name parameter only be the layer name? How do I create a plugin for a layer type?
>>I think the parameter name is the layer type. What issue did you run into?

Hi bcao,
I tested it, and it actually printed the name of the layer instead of the type of the layer.
It printed "res_5_block0_conv_sep_relu".

bool isPluginV2(const char* name) override 
{ 
  return !strcmp(name, "PReLU"); 
}

TENSORRTAPI nvinfer1::IPluginV2* createLReLUPlugin(float negSlope);

PReLU has one slope weight per channel, but this function can only pass in a single scalar.
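To make that concrete, here is a plain CPU reference of what per-channel PReLU computes (illustrative only, not the TensorRT plugin itself):

#include <cstddef>

// out[c, i] = in[c, i]             if in[c, i] > 0
//           = slope[c] * in[c, i]  otherwise
// "spatial" is H*W for a CHW tensor. slope holds one value per channel,
// which is why a single negSlope float cannot carry PReLU's weights.
void preluReference(const float* in, float* out, const float* slope,
                    std::size_t channels, std::size_t spatial)
{
    for (std::size_t c = 0; c < channels; ++c)
        for (std::size_t i = 0; i < spatial; ++i)
        {
            const std::size_t idx = c * spatial + i;
            out[idx] = in[idx] > 0.0f ? in[idx] : slope[c] * in[idx];
        }
}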

My mistake, please ignore comment #2.
There are 2 options for fixing your problem:
1. You can modify the Caffe parser in TensorRT; refer to https://github.com/NVIDIA/TensorRT/blob/release/5.1/parsers/caffe/caffeParser/caffeParser.cpp#L508 (a hypothetical sketch of this change follows the code below).
2. As a simple WAR (workaround), you can add every layer name whose type is PReLU to the isPluginV2() method, like:

bool isPluginV2(const char* name) override 
{ 
  return !strcmp(name, "PReLU1")||!strcmp(name, "PReLU2"); 
}
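For option 1, the change would sit around the linked line, where the parser asks the factory about each layer. A hypothetical sketch (the actual variable names in caffeParser.cpp may differ): offer the layer type to the factory as well, so a type-based isPluginV2() works.

// Inside the parser's per-layer loop (hypothetical; adapt to the real code).
// layerMsg is the parsed Caffe LayerParameter, which carries name() and type().
if (mPluginFactoryV2
    && (mPluginFactoryV2->isPluginV2(layerMsg.name().c_str())
        || mPluginFactoryV2->isPluginV2(layerMsg.type().c_str())))
{
    // ... existing plugin-creation path ...
}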

Hi bcao,
I used the second option. It does not print "PReLU", "PReLU1", or "PReLU2".
It prints "conv_1_relu", the name of the layer.

layer {
  bottom: "conv_1_batchnorm"
  top: "conv_1_relu"
  name: "conv_1_relu"
  type: "PReLU"
}
bool PReLUPluginFactory::isPluginV2(const char* name)
{
	std::cout << "isPluginV2:::" << name << std::endl;
	return !strcmp(name, "PReLU1")||!strcmp(name, "PReLU2");
}

Then you need to modify it like this:

bool isPluginV2(const char* name) override 
{ 
  return !strcmp(name, "conv_1_relu")||!strcmp(name, "conv_2_relu"); 
}
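If, as in the two layers shown so far, every PReLU layer in your prototxt is named "*_relu" (an assumption about your model), a suffix check avoids listing each name by hand:

#include <cstring>

bool isPluginV2(const char* name) override
{
    // Assumption: all PReLU-typed layers in this prototxt end in "_relu".
    const char suffix[] = "_relu";
    const std::size_t len = std::strlen(name);
    const std::size_t slen = sizeof(suffix) - 1;
    return len >= slen && !std::strcmp(name + len - slen, suffix);
}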

Hi bcao,
It works like you said.

But when it comes to inference, an error like this occurs:

0:00:01.250098520 10261     0x32740cd0 WARN                 nvinfer gstnvinfer.cpp:523:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

**PERF: FPS 0 (Avg)	
**PERF: 0.00 (0.00)	
** INFO: <bus_callback:189>: Pipeline ready

Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
** INFO: <bus_callback:175>: Pipeline running

Creating LL OSD context new
0:00:05.287065975 10261     0x32336770 WARN                 nvinfer gstnvinfer.cpp:1157:convert_batch_and_push_to_input_thread:<primary_gie_classifier> error: NvBufSurfTransform failed with error -2 while converting buffer
ERROR from primary_gie_classifier: NvBufSurfTransform failed with error -2 while converting buffer
Debug info: gstnvinfer.cpp(1157): convert_batch_and_push_to_input_thread (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier
ERROR from qtdemux0: Internal data stream error.
Debug info: qtdemux.c(6073): gst_qtdemux_loop (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstURIDecodeBin:src_elem/GstDecodeBin:decodebin0/GstQTDemux:qtdemux0:
streaming stopped, reason error (-5)
Quitting
App run failed

I located where it goes wrong; it occurs in the function "convert_batch_and_push_to_input_thread" of gstnvinfer.cpp:

  /* Convert/scale the batched input surface into the buffer the network
   * consumes; anything but Success aborts the buffer. */
  err = NvBufSurfTransform (&nvinfer->tmp_surf, mem->surf,
      &nvinfer->transform_params);

  nvtxDomainRangePop (nvinfer->nvtx_domain);

  if (err != NvBufSurfTransformError_Success) {
    GST_ELEMENT_ERROR (nvinfer, STREAM, FAILED,
        ("NvBufSurfTransform failed with error %d while converting buffer", err),
        (NULL));
    return FALSE;
  }

The return value err of NvBufSurfTransform is not NvBufSurfTransformError_Success; it is NvBufSurfTransformError_Execution_Error (the -2 printed in the log).

Details: https://devtalk.nvidia.com/default/topic/1066434/deepstream-sdk/a-problem-about-the-nvbufsurftransform/post/5400763/#5400763

So we will track the new issue in the other topic.

BTW, maybe you should check the color format of the network input; it can be configured in the config file. You can refer to the following:

Other optional properties:

net-scale-factor (default=1), network-mode (default=0, i.e. FP32), model-color-format (default=0, i.e. RGB), model-engine-file, labelfile-path, mean-file, gie-unique-id (default=0), offsets, process-mode (default=1, i.e. primary), custom-lib-path

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=../../models/Primary_Detector/resnet10.caffemodel
proto-file=../../models/Primary_Detector/resnet10.prototxt
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b30_int8.engine
labelfile-path=../../models/Primary_Detector/labels.txt
int8-calib-file=../../models/Primary_Detector/cal_trt.bin
batch-size=30
process-mode=1
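## model-color-format: 0=RGB, 1=BGR, 2=GRAY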
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#parse-bbox-func-name=NvDsInferParseCustomResnet
#custom-lib-path=/path/to/libnvdsparsebbox.so
#enable-dbscan=1