access output from deepstream

Hi,
I’m trying to run a Caffe model that outputs 4 tensors.
I need the output tensors to calculate the detection probabilities.
I’m accessing the output tensors like this:
float* output_data1 = static_cast<float*>(outputLayersInfo[0].buffer);
float* output_data2 = static_cast<float*>(outputLayersInfo[1].buffer);
float* output_data3 = static_cast<float*>(outputLayersInfo[2].buffer);
float* output_data4 = static_cast<float*>(outputLayersInfo[3].buffer);
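For context, these casts live inside my custom bounding-box parse function, roughly like the sketch below (from memory; the typedefs come from nvdsinfer_custom_impl.h and the exact struct/field names may differ between DeepStream versions):

// Rough sketch only - types from nvdsinfer_custom_impl.h; exact signatures
// and struct names may differ between DeepStream versions.
#include <cstring>
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomFasterRCNN(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferParseObjectInfo> &objectList)
{
    // The order of outputLayersInfo may not match the order listed in
    // output-blob-names, so indexing by position can mix up the tensors;
    // looking layers up by layerName is safer.
    const float *cls1 = nullptr;
    for (const auto &layer : outputLayersInfo) {
        if (layer.layerName && std::strcmp(layer.layerName, "cls1") == 0)
            cls1 = static_cast<const float *>(layer.buffer);
    }
    // ... same for cls2/loc1/loc2, then fill objectList ...
    return true;
}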

However, the results are very different from the results I get by running the same picture directly in Caffe.

1. What could be the problem?
2. Is this the right way to access the output tensors?

thanks

Hi,

1. Please check whether all the parameters in the config file are updated first.
You can find some information in our documentation:
>> Application Customization

2. You can check it in our source code here.
{deepstream_sdk_on_jetson}/sources

Thanks.

I have used the example, and the results are still far from what I expect.
One thing I noticed is that when I change from BGR -> RGB, the output doesn’t change.

[property]
gpu-id=0
net-scale-factor=1
offsets=0;0;0;0
#0=RGB, 1=BGR
model-color-format=1
model-engine-file=V6.1.0_face_det.caffemodel_b1_fp32.engine
model-file=V6.1.0_face_det.caffemodel
proto-file=V6.1.0_face_det.prototxt
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=21
interval=0
gie-unique-id=1
parse-func=0
is-classifier=0
output-blob-names=cls1;cls2;loc1;loc2
parse-bbox-func-name=NvDsInferParseCustomFasterRCNN
custom-lib-path=nvdsinfer_custom_impl_fasterRCNN/libnvdsinfer_custom_impl_fasterRCNN.so

[class-attrs-all]
threshold=0.2
eps=0.1
group-threshold=2
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

## Per class configuration
# Prevent background detection
[class-attrs-0]
threshold=1.1
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1
gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1920
height=1080

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
num-sources=1
uri=file://test10x2min_1.mp4

[streammux]
batch-size=1
batched-push-timeout=-1
## Set muxer output width and height
width=1920
height=1080

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0

[osd]
enable=1
border-width=3
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0

[primary-gie]
enable=1
batch-size=1
gie-unique-id=1
interval=0
labelfile-path=labels.txt
model-engine-file=V6.1.0_face_det.caffemodel_b1_fp32.engine
config-file=config_infer_primary_fasterRCNN.txt
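As far as I understand, nvinfer preprocesses each pixel as y = net-scale-factor * (x - offset), per channel, after converting to the color format set by model-color-format, so with net-scale-factor=1 and zero offsets the network sees raw pixel values. A small illustration of that math (just my reading of the [property] keys above, not the plugin’s actual code):

// Illustration only: my understanding of the per-channel preprocessing
// implied by the [property] section above (not the nvinfer source).
#include <cstdio>

int main() {
    const float netScaleFactor = 1.0f;               // net-scale-factor=1
    const float offsets[3] = {0.f, 0.f, 0.f};        // offsets from the config (all zero)
    const unsigned char pixel[3] = {104, 117, 123};  // example BGR pixel

    for (int c = 0; c < 3; ++c) {
        // y = net-scale-factor * (x - offset), applied per channel
        float y = netScaleFactor * (pixel[c] - offsets[c]);
        std::printf("channel %d: %.1f\n", c, y);
    }
    // If the Caffe model was trained with mean subtraction, the training
    // means would normally go into offsets; with zeros the input is unscaled.
    return 0;
}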

How can I know whether it even runs my network?
I use the deepstream-app with my custom plugin.

Hi,

You can check the output log of DeepStream.
By the way, we have a plugin sample for FasterRCNN. You can check it for reference first:
{deepstream_sdk_on_jetson}/sources/objectDetector_FasterRCNN
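As a quick sanity check, you can also add a temporary print at the top of your custom parse function to confirm it is actually called and to see which layers arrive. A rough sketch (field names such as layerName and dims may differ with the SDK version):

// Temporary debug print inside the custom parse function (sketch only;
// exact NvDsInferLayerInfo fields depend on the DeepStream version).
#include <cstdio>
#include <vector>
#include "nvdsinfer.h"

static void dumpLayers(std::vector<NvDsInferLayerInfo> const &outputLayersInfo)
{
    std::printf("parse function called with %zu output layers\n",
                outputLayersInfo.size());
    for (const auto &layer : outputLayersInfo) {
        std::printf("  layer '%s', %u elements\n",
                    layer.layerName ? layer.layerName : "(null)",
                    layer.dims.numElements);
    }
}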

Thanks.