Why is only the first frame in a batch inferred in the sample deepstream_app?

I modified the function process_meta in the sample deepstream_app. Only one classification model is used, as the primary GIE. The batch size is set to 30. I wanted to print the frame index, the object index, the object class, and the label info, so the print line is


g_print ("frame_source_id: %d; frame_batch_id: %u; frame_num_obj: %d; object_class_id: %d; label_result_class_id, %u; label_result_prob: %f*********************\n", frame_meta->source_id, frame_meta->batch_id, frame_meta->num_obj_meta, obj->class_id, label->result_class_id, label->result_prob);*


But the printed output is


frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 8; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.897929*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 8; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.897929*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 8; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 8; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 8; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 8; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 8; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 8; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 8; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 8; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 8; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 8; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 8; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 8; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 8; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.822102*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 8; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.822102*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.822102*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.822102*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.703907*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.703907*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.822102*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.822102*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.648912*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.822102*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 12; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.822102*********************

frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 64; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.961249*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 64; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.961249*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 64; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.961249*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 64; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.961249*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 64; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.961249*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 64; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.961249*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 64; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.961249*********************
frame_source_id: 0; frame_batch_id: 0; frame_num_obj: 64; object_class_id: -1; label_result_class_id, 1; label_result_prob: 0.913482*********************


I would like to know:
(1) Why is frame_source_id always 0? Is only the first frame in every batch inferred? How can I get all the frames in a batch?
(2) Why are the frame_meta->num_obj_meta values all different? The output shows 8, 12, 42, 52, and so on.
(3) Why are all the object_class_id values -1?


• Hardware Platform (Jetson / GPU): 1080Ti
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): No
• TensorRT Version: 7.2.1
• NVIDIA GPU Driver Version (valid for GPU only): 460.91.03
• Issue Type( questions, new requirements, bugs): questions
• How to reproduce the issue? 1. Change the model; 2. Add a print line in the function process_meta of deepstream_app
• Requirement details: please tell me why and how


Thanks. Any help is welcome.

Please post your PGIE nvinfer config file too. What is your model? What is your deepstream-app config?

My nvinfer config is


gpu-id=0
net-scale-factor=1
model-engine-file=resnet_fp32.engine
## mean-file=mean.ppm
labelfile-path=labels.txt
force-implicit-batch-dim=1
batch-size=64
model-color-format=1
process-mode=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
is-classifier=1
output-blob-names=predictions/Softmax
classifier-async-mode=0
classifier-threshold=0.51
## input-object-min-width=128
## input-object-min-height=128
## operate-on-gie-id=1
## operate-on-class-ids=0
## scaling-filter=0
## scaling-compute-hw=0


My model is a classification engine file based on resnet.

My deepstream-app config is


[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=5
columns=6
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file:///opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_cam6.mp4
num-sources=1
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file:///opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_cam6.mp4
num-sources=1
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source2]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file:///opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_cam6.mp4
num-sources=1
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source3]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file:///opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_cam6.mp4
num-sources=1
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source4]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file:///opt/nvidia/deepstream/deepstream-5.1/samples/streams/yoga.mp4
num-sources=1
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
#iframeinterval=10
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
output-file=out.mp4
source-id=0

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[sink3]
enable=1
type=6
sync=0
msg-conv-config=msgconv_config.txt
msg-conv-payload-type=1
msg-conv-msg2p-lib=libnvds_msgconv.so
msg-conv-comp-id=21
msg-broker-proto-lib=libnvds_kafka_proto.so
msg-broker-conn-str=127.0.0.1;9092;test
msg-broker-config=cfg_kafka.txt
topic=test
msg-broker-comp-id=22

[osd]
enable=1
gpu-id=0
border-width=16
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
display-bbox=1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=30
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=resnet_fp32.engine
#Required to display the PGIE labels, should be added even when using config-file
#property
batch-size=64
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer.txt

[tests]
file-loop=0


What is wrong? Thanks a lot.

Could anyone list the possible reasons? I can try them one by one. I have no idea what is causing the current behavior.

Thanks.

Hi ls2008,
Could you try a standard deepstream_app config file first, instead of yours, such as source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt? In that config file, batch-size = 4.

I have modified the standard deepstream_app config files step by step; in fact, every example has been tested, so that is not the point. The point is that I replaced the detection model with a classification model as the primary model. So, can you test this modification, with the primary model being a classification model? I am wondering why only the first frame in every batch is inferred.

Hi ls2008,
I can reproduce this issue and will check internally. Thanks.

Any progress? New information?

I got the same issue, but I fixed it by changing my probe position from the nvosd sink pad to the nvdsanalytics src pad.
After that I could also get the metadata of the second and third frames.
Not sure why.
Hope it can help.
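
Roughly, the change amounts to the following in C terms (just a sketch; I actually use the Python bindings, and the element passed in stands for whatever nvdsanalytics element your pipeline has):

/* Sketch: attach the probe to the nvdsanalytics SRC pad instead of the
 * nvosd SINK pad, then walk every frame in the batch. */
#include <gst/gst.h>
#include "gstnvdsmeta.h"

static GstPadProbeReturn
batch_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  /* with the probe here, metadata for later frames in the batch showed up too */
  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    g_print ("source_id %u batch_id %u num_obj %u\n",
        frame_meta->source_id, frame_meta->batch_id,
        frame_meta->num_obj_meta);
  }
  return GST_PAD_PROBE_OK;
}

static void
attach_probe (GstElement *analytics)
{
  GstPad *src_pad = gst_element_get_static_pad (analytics, "src");
  gst_pad_add_probe (src_pad, GST_PAD_PROBE_TYPE_BUFFER,
      batch_probe_cb, NULL, NULL);
  gst_object_unref (src_pad);
}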

Environment:
Deepstream 6.0
Python binding.
Platform:
Jetson Xavier NX
