Can't get the results of the primary infer in a probe in DeepStream

• Hardware Platform: GPU 1080 Ti
• deepstream-app version 5.1.0
• DeepStreamSDK 5.1.0
• CUDA Driver Version: 11.4
• CUDA Runtime Version: 11.1
• TensorRT Version: 7.2
• cuDNN Version: 8.0
• libNVWarp360 Version: 2.0.1d3
• Issue Type: question

Hi everyone, I am using the deepstream-app sample to get the results of the primary infer and draw them on each frame, but in gie_primary_processing_done_buf_prob I can't get any results.

run command

deepstream-app -c face.txt

face.txt

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1920
height=1080
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file://../../streams/sample_1080p_h264.mp4
num-sources=1
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0


[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
#iframeinterval=10
bitrate=400000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=/home/liulf/NN_face_det/build/same.engine
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt

[tracker]
enable=1
# For the NvDCF tracker, tracker-width and tracker-height must each be a multiple of 32
tracker-width=640
tracker-height=384
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_iou.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=0
#enable-batch-process and enable-past-frame applicable to DCF only
enable-batch-process=1
enable-past-frame=0
display-tracking-id=1

[tests]
file-loop=1

config_infer_primary.txt

[property]
gpu-id=0
net-scale-factor=1.0
offsets=104.0;117.0;123.0
model-engine-file=./same.engine
labelfile-path=../../models/Primary_Detector/labels.txt
batch-size=1
process-mode=1
model-color-format=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=2
interval=0
force-implicit-batch-dim=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/nvdsinfer_custom_impl_Yolo/NN_parsebbox.so
maintain-aspect-ratio=1

function NvDsInferParseCustomYoloV3 in NN_parsebbox.so

extern "C" bool NvDsInferParseCustomYoloV3(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferParseObjectInfo>& objectList)
{

    // inference results
    // fake data
    NvDsInferParseObjectInfo result;
    result.left =100;
    result.top = 200;
    result.width =300;
    result.height =400;
    result.detectionConfidence=0.9;
    result.classId=1;
    objectList.push_back(result);
    return true;
}

NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf); // GstBuffer *buf
NvDsMetaList *l_frame = NULL;
NvDsMetaList *l_obj = NULL;
for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) {
  NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame->data);
  for (l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next) {
     // Execution never reaches here, so I think the object list is empty.
  }
}
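
Once nvinfer actually attaches object metadata (see the fix below), the inner loop can read the parsed boxes from NvDsObjectMeta. This is only a minimal sketch of that access pattern, assuming the same probe context as the snippet above and printing the boxes instead of drawing them:

NvDsMetaList *l_frame2 = NULL;
NvDsMetaList *l_obj2 = NULL;
for (l_frame2 = batch_meta->frame_meta_list; l_frame2 != NULL; l_frame2 = l_frame2->next) {
  NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame2->data);
  for (l_obj2 = frame_meta->obj_meta_list; l_obj2 != NULL; l_obj2 = l_obj2->next) {
    NvDsObjectMeta *obj_meta = (NvDsObjectMeta *)(l_obj2->data);
    // rect_params holds the bounding box attached by nvinfer for this object.
    g_print("frame %d: class %d conf %.2f bbox (%.0f, %.0f, %.0f x %.0f)\n",
            frame_meta->frame_num, obj_meta->class_id, obj_meta->confidence,
            obj_meta->rect_params.left, obj_meta->rect_params.top,
            obj_meta->rect_params.width, obj_meta->rect_params.height);
  }
}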

Is there a problem with the configuration?
Looking forward to your reply.

Sorry for the late response. Is this still an issue that needs support? Thanks

Please set “network-type=0” in your config file
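
For reference, network-type=0 tells Gst-nvinfer to use the detector post-processing path, which is what calls the custom bbox parser and attaches NvDsObjectMeta for the probe to read. A sketch of the relevant lines of config_infer_primary.txt with that setting added (the rest of the file stays as posted above):

[property]
...
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
## 0=Detector - run detector post-processing (bbox parsing) and attach object metadata
network-type=0
num-detected-classes=2
...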
