How to use a classification network as primary-gie in deepstream-app

Hello everyone.

I am running some tests and now I'm trying to set a classification network as the primary-gie, but it seems that deepstream-app only accepts an object-detection net as primary.
The idea is to cover some simple and controlled scenarios, and the hope is to see how much performance improves by not using an object-detection net.
I found an example on trtis that does this, but I don't know how to translate its config file to a tlt one.

=========== Some info =================

I have successfully used this very same net as the secondary-gie.
The net is the tlt classification example net, running in FP32 on a Jetson Nano.
I get an error, but it occurs only after the pipeline starts.
I've deactivated the tracker and I'm not using a secondary-gie.
I haven't found any restriction about this in the documentation; I've checked here:

========= Questions and doubts =========

Do I have to make changes to deepstream-app.c to achieve this, or can it be done through the config files alone?
Where should I look to find out how to do this?

Any help will be appreciated; in the meantime I'll keep looking for a way to achieve this.

========= Error Log =========

deepstream-app -c /opt/nvidia/deepstream/deepstream/controlflow/config/controlflow7.txt

 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8555/ds-test ***

Opening in BLOCKING MODE 
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/controlflow/config/../models/Controlflow_tlt_classification/final_model.etlt.engine open error
0:00:01.389225837  7110      0x5e5d990 WARN                 nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1566> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/controlflow/config/../models/Controlflow_tlt_classification/final_model.etlt.engine failed
0:00:01.389310943  7110      0x5e5d990 WARN                 nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1673> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/controlflow/config/../models/Controlflow_tlt_classification/final_model.etlt.engine failed, try rebuild
0:00:01.389337818  7110      0x5e5d990 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 1 output network tensors.
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.0/controlflow/models/Controlflow_tlt_classification/final_model.etlt_b8_gpu0_fp32.engine opened error
0:00:27.736259285  7110      0x5e5d990 WARN                 nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1619> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.0/controlflow/models/Controlflow_tlt_classification/final_model.etlt_b8_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224       
1   OUTPUT kFLOAT predictions/Softmax 20x1x1          

0:00:27.864729627  7110      0x5e5d990 INFO                 nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/controlflow/config/config_infer_secondary_controlflow_as_primary.txt sucessfully

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF: FPS 0 (Avg)	
**PERF: 0.00 (0.00)	
** INFO: <bus_callback:181>: Pipeline ready

Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 279 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 279 
** INFO: <bus_callback:167>: Pipeline running

NvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
0:00:28.701403696  7110      0x5e02450 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:28.702324543  7110      0x5e02450 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:573> [UID = 1]: Failed to parse bboxes
0:00:28.702981220  7110      0x5e02450 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:28.703532895  7110      0x5e02450 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:573> [UID = 1]: Failed to parse bboxes
0:00:28.704066913  7110      0x5e02450 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:28.704688589  7110      0x5e02450 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:573> [UID = 1]: Failed to parse bboxes
0:00:28.705046303  7110      0x5e02450 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:28.705211826  7110      0x5e02450 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:573> [UID = 1]: Failed to parse bboxes
0:00:28.705338442  7110      0x5e02450 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:28.705458132  7110      0x5e02450 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:573> [UID = 1]: Failed to parse bboxes
0:00:28.705587561  7110      0x5e02450 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:28.705734750  7110      0x5e02450 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:573> [UID = 1]: Failed to parse bboxes
0:00:28.705893711  7110      0x5e02450 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:28.706058609  7110      0x5e02450 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:573> [UID = 1]: Failed to parse bboxes
double free or corruption (out)
Aborted (core dumped)

======== Config Files ========

App config file

# PRIMARY GIE WITH A CLASSIFICATION NET INSTEAD OF DETECTION
# SOURCES -> MUX -> OSD -> PR. GIE -> SINKS -> TILED

# Copyright (c) 2019-2020 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1240
height=980
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri= < my camera, this works >
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=1
bitrate=3000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
# set below properties in case of RTSPStreaming  
rtsp-port=< port >
udp-port=< port >

[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=1
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=8
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1
# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.

[primary-gie]
enable=1
gpu-id=0
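# per the error log above, this engine file could not be opened at startup,
# so nvinfer falls back to rebuilding the engine from the .etlt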
model-engine-file=../models/Controlflow_tlt_classification/final_model.etlt.engine
batch-size=8
#Number of consecutive batches to skip for inference
interval=10
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_secondary_controlflow_as_primary.txt

[tracker]
enable=0
# For the NvDCF tracker, tracker-width and tracker-height must each be a multiple of 32
tracker-width=480
tracker-height=288
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_iou.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_nvdcf.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
#ll-config-file required for IOU only
#ll-config-file=iou_config.txt
gpu-id=0

[tests]
file-loop=0

gie config file

# Copyright (c) 2020 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[property]
gpu-id=0
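# 1/255: scales 8-bit pixel values from [0,255] down to [0,1]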
net-scale-factor=0.0039215697906911373
tlt-model-key=<my secret key>
tlt-encoded-model=../models/Controlflow_tlt_classification/final_model.etlt
labelfile-path=../models/Controlflow_tlt_classification/labels.txt
#int8-calib-file=../models/Controlflow_tlt/dashcamnet_int8.txt
model-engine-file=../models/Controlflow_tlt_classification/final_model.etlt.engine
#input-dims=3;544;960;0
input-dims=3;224;224;0
uff-input-blob-name=input_1
#batch-size=8
#batch-size=3
batch-size=1
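# 1=operate on full frames (primary), 2=operate on objects from another gie (secondary)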
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
#num-detected-classes=7
interval=2
gie-unique-id=1
output-blob-names=predictions/Softmax
classifier-threshold=0.2

Hello everyone.
I found my answer: I had to add this line to the gie config file to get things working:

process-mode=1
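For reference, here is a minimal sketch of how the [property] section looks with that line in place. It keeps only keys and values already shown in the full gie config above, so treat it as illustrative rather than a complete file:

[property]
# 1=operate on full frames (primary), 2=operate on objects (secondary)
process-mode=1
tlt-encoded-model=../models/Controlflow_tlt_classification/final_model.etlt
labelfile-path=../models/Controlflow_tlt_classification/labels.txt
uff-input-blob-name=input_1
output-blob-names=predictions/Softmax
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
classifier-threshold=0.2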
