How to load an existing Caffe model (gender) as a secondary classifier in DeepStream

• Hardware Platform (Jetson / GPU)
Jetson TX2
• DeepStream Version
DS 5.1
• JetPack Version (valid for Jetson only)
4.5.1
• TensorRT Version
7.1

Hi,

I have already trained a face detection model using TLT, and it is tested and working. I am now trying to load a Caffe model (gender) as a secondary model in the deepstream-app that I used to run the face detection model from “deepstream_app_source_facedetectIR.txt”. My aim is to detect faces using the trained model (etlt or trt) as the primary model, and to use a Caffe model I already have as the secondary model for further classification. Can anyone help me with the config.txt and deepstream_app.txt files I have to create, by providing any reference for a secondary GIE? I am unclear on how to implement the model as a secondary one.

These are the model files that I already have: CAFFE

I think you can refer to deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt under $DS_TOP/samples/configs/tlt_pretrained_models/.

@bcao ,

Yes, that one does something similar, but I want to know the process of loading a Caffe model instead of a trt or etlt model. deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt loads the etlt and trt models.

This is the part where I’m confused. Can you share a sample config file for loading Caffe models as secondary?

Refer to samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
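For a Caffe model, the nvinfer config does not need an etlt or trt entry at all: you point model-file at the binary weights (.caffemodel) and proto-file at the network definition (deploy.prototxt), and nvinfer builds the engine itself on the first run. A rough sketch of what the secondary classifier config could look like for your case (all paths, the label file, and the output blob name are placeholders you would need to adapt to your own files):

```ini
# Hypothetical config_infer_gender_classifier.txt (sketch, not a tested file)
[property]
gpu-id=0
# Caffe model: binary weights + text network definition
model-file=../../models/Secondary_Gender/gender.caffemodel
proto-file=../../models/Secondary_Gender/deploy.prototxt
labelfile-path=../../models/Secondary_Gender/labels.txt
batch-size=16
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
is-classifier=1
# process-mode=2: operate on objects detected by the primary GIE
process-mode=2
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0
# name of the softmax output layer in your deploy.prototxt
output-blob-names=predictions/Softmax
classifier-threshold=0.51
```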

Hi @bcao,

Thanks for your reply on this post.

I referred to samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt and created a separate file for my scenario, i.e., running FaceDetectIR as the primary model to detect faces and my custom Caffe gender model as the secondary model for classifying gender on the detected faces. When I run the file it fails with the error below:

ERROR:

glueck@gluecktx2DS5:/opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models$ deepstream-app -c source_gender.txt
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.

Using winsys: x11
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models/../../models/Secondary_Gender/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:00:01.651294650 17508 0x2fee0ca0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 4]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models/../../models/Secondary_Gender/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:00:01.651448409 17508 0x2fee0ca0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 4]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models/../../models/Secondary_Gender/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:00:01.651488153 17508 0x2fee0ca0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 4]: Trying to create engine from model files
ERROR: [TRT]: CaffeParser: Could not parse binary model file
ERROR: [TRT]: CaffeParser: Could not parse model file
ERROR: Failed while parsing caffe network: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_Gender/deploy.prototxt
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.
0:00:01.652141750 17508 0x2fee0ca0 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 4]: build engine file failed
0:00:01.652190325 17508 0x2fee0ca0 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1822> [UID = 4]: build backend context failed
0:00:01.652225269 17508 0x2fee0ca0 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1149> [UID = 4]: generate backend failed, check config file settings
0:00:01.652286325 17508 0x2fee0ca0 WARN nvinfer gstnvinfer.cpp:812:gst_nvinfer_start:<secondary_gie_0> error: Failed to create NvDsInferContext instance
0:00:01.652314037 17508 0x2fee0ca0 WARN nvinfer gstnvinfer.cpp:812:gst_nvinfer_start:<secondary_gie_0> error: Config file path: /opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models/config_infer_gender_classifier.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: main:655: Failed to set pipeline to PAUSED
Quitting
ERROR from secondary_gie_0: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(812): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:secondary_gie_bin/GstNvInfer:secondary_gie_0:
Config file path: /opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models/config_infer_gender_classifier.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed

I have attached my config file and deepstream_app file along.

source_face_gender.txt

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1920  
height=1080
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
#uri=file://../../streams/sample_1080p_h264.mp4
uri=rtsp://root:Glueck321@10.0.1.36/axis-media/media.amp?streamprofile=H264
num-sources=1
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
#iframeinterval=10
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
output-file=out.mp4
source-id=0

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
batch-size=4
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_facedetectir.txt

[tracker]
enable=1
# For the case of NvDCF tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=640
tracker-height=384
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_iou.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=0
#enable-batch-process and enable-past-frame applicable to DCF only
enable-batch-process=1
enable-past-frame=0
display-tracking-id=1

[secondary-gie0]
enable=1
model-engine-file=../../models/Secondary_Gender/resnet18.caffemodel_b16_gpu0_int8.engine
gpu-id=0
batch-size=16
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_gender_classifier.txt

[tests]
file-loop=0

config_infer_sgie_gender.txt

[property]
gpu-id=0
net-scale-factor=1
model-file=../../models/Secondary_Gender/deploy.prototxt
proto-file=../../models/Secondary_Gender/deploy.prototxt
model-engine-file=../../models/Secondary_Gender/resnet18.caffemodel_b16_gpu0_int8.engine
int8-calib-file=../../models/Secondary_VehicleTypes/cal_trt.bin
mean-file=../../models/Secondary_Gender/x.ppm
labelfile-path=../../models/Secondary_Gender/labels.txt
force-implicit-batch-dim=1
batch-size=16
model-color-format=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
is-classifier=1
process-mode=2
output-blob-names=predictions/Softmax
classifier-async-mode=1
classifier-threshold=0.51
input-object-min-width=128
input-object-min-height=128
operate-on-gie-id=1
operate-on-class-ids=0
#scaling-filter=0
#scaling-compute-hw=0

The x.ppm file was created using the reference provided in this post → Mean file "mean.ppm" of deepstream - #5 by amycao
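Update: re-reading my [property] section above, I notice model-file points at deploy.prototxt rather than at the binary weights, which could explain the “CaffeParser: Could not parse binary model file” error. Judging from the engine file name, the weights file is presumably resnet18.caffemodel, so the corrected lines should be something like:

```ini
# model-file = binary weights, proto-file = network definition
model-file=../../models/Secondary_Gender/resnet18.caffemodel
proto-file=../../models/Secondary_Gender/deploy.prototxt
```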

Can you help me on this error?

I would suggest you run deepstream-test2-app first and make sure it works.

Hi @bcao ,

I tried deepstream-test2-app and it works fine on H264 videos but not on mp4 videos. I ran the stream video sample_720p.h264 and the output was shown as below:

When I tried running the stream video sample_720p.mp4, the output below was shown for a long time, as if it were stuck.

If deepstream-test2 works well, then you can refer to the config files used in deepstream-test2 to make your deepstream-app work.
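The relevant parts are the sgie config files shipped with that sample (e.g. dstest2_sgie1_config.txt under sources/apps/sample_apps/deepstream-test2/), which build their classifiers directly from Caffe files. As far as I recall, the Caffe-specific lines look roughly like this (paths from the car-color example; swap in your gender model files):

```ini
# From memory: how the deepstream-test2 secondary classifier configs
# reference the sample Caffe models.
[property]
model-file=../../../../samples/models/Secondary_CarColor/resnet18.caffemodel
proto-file=../../../../samples/models/Secondary_CarColor/resnet18.prototxt
labelfile-path=../../../../samples/models/Secondary_CarColor/labels.txt
is-classifier=1
process-mode=2
```

(As a side note, the mp4 behavior you saw with test2 is expected as far as I know: its pipeline parses an elementary H264 stream, not an mp4 container.)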

Hi @bcao,

I referred to and used the config files for the secondary model from deepstream-test2, and it works perfectly for my scenario.

Great work.

@bcao

But when I add another classification model as secondary-gie1, as shown below,

[secondary-gie1]
enable=1
model-engine-file=../../models/Secondary_FaceMask/caffe_model_1_iter_100000.caffemodel_b16_gpu0_fp16.engine
gpu-id=0
batch-size=16
gie-unique-id=3
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_mask_classifier.txt

It has two classes, which I listed in the labels.txt file just as I did for the previous gender model:

##labels.txt

mask;nomask

But only the first class, i.e. mask, is displayed while running the deepstream-app. How can I get this working correctly?

Can you create a new topic for your new issue?

Hi @bcao,

Thanks for your help.

I’ve created a new topic for the above-mentioned label issue → [Secondary GIE] Custom Classifier in sgie outputs only random entry in label.txt