Please provide complete information as applicable to your setup.
**• Hardware Platform (Jetson / GPU)** Jetson Orin
**• DeepStream Version** 6.1.1
**• JetPack Version (valid for Jetson only)** 5.0.2
**• TensorRT Version** 8.4.1
**• Issue Type (questions, new requirements, bugs)** Question
I am trying to set up a multi-task classifier in DeepStream 6.1.1 and am following the instructions located at the following link:
The last step in the process states:

> Create label mapping and set in nvdsinfer_customclassifier_multi_task_tao.cpp

I cannot find where this file would be located. It does not appear to be in either the DeepStream SDK or the deepstream_tao_apps repository, but I may be missing something obvious.
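For reference, this is the kind of per-task label mapping I assumed that step was asking for. A sketch only: the layer names and class counts are taken from my engine's outputs (shown in the log below), and the label strings and map structure are placeholders, since I cannot find the actual file to check:

```cpp
// Sketch of the per-task label mapping I expected to fill in. Class counts
// match my engine's heads (action/Softmax 6x1x1, object/Softmax 2x1x1,
// pose/Softmax 5x1x1); the label strings are placeholders for my classes.
#include <map>
#include <string>
#include <vector>

static const std::map<std::string, std::vector<std::string>> kTaskLabels = {
    {"action/Softmax", {"action_0", "action_1", "action_2",
                        "action_3", "action_4", "action_5"}},
    {"object/Softmax", {"object_0", "object_1"}},
    {"pose/Softmax",   {"pose_0", "pose_1", "pose_2", "pose_3", "pose_4"}},
};
```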
As it stands, running my test configuration crashes as soon as the multi-task classifier is first invoked; the log below shows bounding-box parsing errors from the classifier GIE followed by a segmentation fault.
Test configuration:
Main config (testconfig.txt):
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl
[tiled-display]
enable=0
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file:/home/nvidia/Videos/example.mp4
num-sources=1
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0
[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 7=nv3dsink (Jetson only)
type=2
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0
[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
#iframeinterval=10
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
output-file=out.mp4
source-id=0
[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400
[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
buffer-pool-size=1
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1280
height=720
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1
# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=/home/nvidia/engines/PrimaryObjectDetect/yolov4_resnet18_epoch_004.etlt_b1_gpu0_fp32.engine
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt
[secondary-gie0]
enable=1
model-engine-file=/home/nvidia/engines/PeopleMultitask/mcls_export.etlt_b1_gpu0_fp32.engine
gpu-id=0
batch-size=1
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=3;
config-file=config_infer_secondary_MT_people.txt
[tests]
file-loop=0
Primary object detection config (config_infer_primary.txt):
[property]
gpu-id=0
net-scale-factor=1
offsets=103.939;116.779;123.68
model-color-format=1
output-tensor-meta=0
labelfile-path=/home/nvidia/engines/PrimaryObjectDetect/labels.txt
model-engine-file=/home/nvidia/engines/PrimaryObjectDetect/yolov4_resnet18_epoch_004.etlt_b1_gpu0_fp32.engine
tlt-encoded-model=/home/nvidia/engines/PrimaryObjectDetect/yolov4_resnet18_epoch_004.etlt
tlt-model-key=nvidia_tlt
infer-dims=3;736;1280
maintain-aspect-ratio=0
uff-input-order=0
uff-input-blob-name=Input
#is-classifier=0
batch-size=1
process-mode=1
#model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=7
interval=0
gie-unique-id=1
network-type=0
## 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode=3
output-blob-names=BatchedNMS
force-implicit-batch-dim=1
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/home/nvidia/deepstream-tao-apps/post_processor/libnvds_infercustomparser_tao.so
[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
Multi-task classifier secondary config (config_infer_secondary_MT_people.txt):
[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=/home/nvidia/engines/PeopleMultitask/labels.txt
tlt-encoded-model=/home/nvidia/engines/PeopleMultitask/mcls_export.etlt
tlt-model-key=nvidia_tlt
model-engine-file=/home/nvidia/engines/PeopleMultitask/mcls_export.etlt_b1_gpu0_fp32.engine
infer-dims=3;80;60
uff-input-blob-name=input_1
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
interval=0
gie-unique-id=1
network-type=0
scaling-filter=1
scaling-compute-hw=1
output-blob-names=action/Softmax;object/Softmax;pose/Softmax
classifier-threshold=0.5
maintain-aspect-ratio=0
output-tensor-meta=0
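Since I cannot find the .cpp the instructions reference, below is a rough sketch of what I assumed it would contain: an argmax over each softmax head, written against the standard NvDsInferClassiferParseCustomFunc interface from nvdsinfer_custom_impl.h. The function name is my own placeholder, and I reuse the layer name as the attribute label because I don't know the intended label-mapping structure.

```cpp
// Sketch of an argmax-per-head classifier parser for the three softmax
// outputs. Uses the NvDsInferClassiferParseCustomFunc interface from
// nvdsinfer_custom_impl.h; function name and labels are placeholders.
#include <cstring>
#include <string>
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferClassiferParseCustomMultiTask(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    float classifierThreshold,
    std::vector<NvDsInferAttribute> &attrList,
    std::string &descString)
{
    (void)networkInfo; // unused in this sketch

    // One attribute per task head (action/Softmax, object/Softmax, pose/Softmax).
    for (unsigned int l = 0; l < outputLayersInfo.size(); l++) {
        const NvDsInferLayerInfo &layer = outputLayersInfo[l];
        const float *probs = static_cast<const float *>(layer.buffer);
        unsigned int numClasses = layer.inferDims.d[0];

        // Argmax over this head's softmax scores.
        unsigned int best = 0;
        for (unsigned int c = 1; c < numClasses; c++)
            if (probs[c] > probs[best])
                best = c;

        if (probs[best] < classifierThreshold)
            continue;

        NvDsInferAttribute attr;
        attr.attributeIndex = l;                 // task index (one per head)
        attr.attributeValue = best;              // winning class within the task
        attr.attributeConfidence = probs[best];
        // Placeholder: a real implementation would look up the class name in
        // the per-task label mapping; here the layer name stands in for it.
        attr.attributeLabel = strdup(layer.layerName);
        attrList.push_back(attr);
        descString.append(layer.layerName).append(" ");
    }
    return true;
}

// Validates the function signature at compile time (macro provided by
// nvdsinfer_custom_impl.h).
CHECK_CUSTOM_CLASSIFIER_PARSE_FUNC_PROTOTYPE(NvDsInferClassiferParseCustomMultiTask);
```

If that is the right idea, I assume the secondary config would then point at it via parse-classifier-func-name and custom-lib-path, but without the actual file I can't confirm the expected function or mapping names.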
Log of the run:
$ deepstream-app -c testconfig.txt
Using winsys: x11
0:00:00.106779776 8135 0xaaaaca778490 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 2]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:03.629342560 8135 0xaaaaca778490 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 2]: deserialized trt engine from :/home/nvidia/engines/PeopleMultitask/mcls_export.etlt_b1_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input_1 3x80x60
1 OUTPUT kFLOAT pose/Softmax 5x1x1
2 OUTPUT kFLOAT object/Softmax 2x1x1
3 OUTPUT kFLOAT action/Softmax 6x1x1
0:00:03.790128416 8135 0xaaaaca778490 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 2]: Use deserialized engine model: /home/nvidia/engines/PeopleMultitask/mcls_export.etlt_b1_gpu0_fp32.engine
0:00:03.797837152 8135 0xaaaaca778490 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary_gie_0> [UID 2]: Load new model:/home/nvidia/deepstream-app/multitask/config_infer_secondary_MT_people.txt sucessfully
0:00:05.777702272 8135 0xaaaaca778490 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/home/nvidia/engines/PrimaryObjectDetect/yolov4_resnet18_epoch_004.etlt_b1_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 5
0 INPUT kFLOAT Input 3x736x1280
1 OUTPUT kINT32 BatchedNMS 1
2 OUTPUT kFLOAT BatchedNMS_1 200x4
3 OUTPUT kFLOAT BatchedNMS_2 200
4 OUTPUT kFLOAT BatchedNMS_3 200
0:00:05.943919040 8135 0xaaaaca778490 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /home/nvidia/engines/PrimaryObjectDetect/yolov4_resnet18_epoch_004.etlt_b1_gpu0_fp32.engine
0:00:05.959729984 8135 0xaaaaca778490 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/nvidia/deepstream-app/multitask/config_infer_primary.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:194>: Pipeline ready
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:180>: Pipeline running
**PERF: 33.12 (33.02)
**PERF: 30.02 (31.46)
**PERF: 29.97 (30.97)
**PERF: 30.01 (30.72)
**PERF: 30.01 (30.57)
0:00:31.233447712 8135 0xaaaaca297800 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 2]: Could not find output coverage layer for parsing objects
0:00:31.233489440 8135 0xaaaaca297800 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:735> [UID = 2]: Failed to parse bboxes
Segmentation fault (core dumped)
Thanks for the help!
Justas