How to deploy etlt and model engine to DeepStream 4.0

Hello, I have a question. I have finished running tlt-converter, so I have the .etlt model and the TensorRT engine, and I have already created config_infer_primary.txt and the labels file for DeepStream 4.0. How do I run and test my program on DeepStream 4.0 so it detects the object class I trained on?
I used DetectNet_v2 with FP16, and I want to test it in real time with a USB camera. Does anybody have any ideas?

Hi,

You can find some information in this topic:
https://devtalk.nvidia.com/default/topic/1065558/transfer-learning-toolkit/trt-engine-deployment/

Thanks.

Hello AastaLLL, I have actually already run tlt-converter and generated my own engine. Do you have specific steps for deploying it on DeepStream 4.0? Do I need to convert my TLT engine into a C++/Python application before deploying it on DeepStream? The PDF has no steps on how to create our own DeepStream program for personal use.
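From what I can tell, deepstream-app is launched against the application-level config, not against config_infer_primary.txt directly (the filename below is just a placeholder for whatever the app-level config is called):

```
# Run the reference app with the top-level config; the [primary-gie]
# section of that file points at config_infer_primary.txt.
deepstream-app -c deepstream_app_config.txt
```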

This is the config_infer_primary.txt I tried:

[property]
gpu-id=0
# preprocessing parameters.
net-scale-factor=0.0039215697906911373
model-color-format=0

# model paths.
labelfile-path=/home/deepstream/Desktop/'Bill Son 2'/labels.txt
tlt-model-key=dWhrajZsbWtobW8wZ2UycmhnaDdqZmw3cGg6MWNhZGU2NTYtNjA5Yy00ZWQ0LTgxZTktYzE4ZmZkOWI4NWI1
model-engine-file=/home/deepstream/Desktop/'Bill Son 2'/resnet18_detector_fp16.engine
input-dims=3;720;1280;0 # where c = number of channels, h = height of the model input, w = width of model input, 0: implies CHW format.
uff-input-blob-name=input_1
batch-size=4 
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=1
interval=0
gie-unique-id=1
is-classifier=0
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
#enable_dbscan=0

[class-attrs-all]
threshold=0.2
group-threshold=1
## Set eps=0.7 and minBoxes for enable-dbscan=1
eps=0.2
#minBoxes=3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

But I end up with this error:

(deepstream-app:7841): GStreamer-WARNING **: 20:53:44.866: Name 'src_cap_filter' is not unique in bin 'src_sub_bin0', not adding
Error: Could not parse labels file path
Failed to parse group property
** ERROR: <gst_nvinfer_parse_config_file:943>: failed
Creating LL OSD context new
0:00:00.299936459  7841     0x16ff02d0 WARN                 nvinfer gstnvinfer.cpp:658:gst_nvinfer_start:<primary_gie_classifier> error: Configuration file parsing failed
0:00:00.300007188  7841     0x16ff02d0 WARN                 nvinfer gstnvinfer.cpp:658:gst_nvinfer_start:<primary_gie_classifier> error: Config file path: /home/deepstream/Desktop/Bill Son 2/config_infer_primary.txt
** ERROR: <main:651>: Failed to set pipeline to PAUSED
Quitting

ERROR from primary_gie_classifier: Configuration file parsing failed
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(658): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier:
Config file path: /home/deepstream/Desktop/Bill Son 2/config_infer_primary.txt
App run failed
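To narrow down which path the parser is choking on, I wrote a small sanity-check script. Note the assumption baked into it: as far as I can tell, gst-nvinfer's config parser takes values literally, so the single quotes around 'Bill Son 2' would become part of the path rather than being stripped the way a shell would strip them.

```python
# Sanity-check file-path values in an nvinfer config.
# Assumption: values are taken literally, so shell-style quotes
# end up inside the path and the file is never found.
import configparser
import os

def check_infer_config(path):
    """Return a list of warnings about file-path values in [property]."""
    cp = configparser.ConfigParser(strict=False,
                                   interpolation=None,
                                   inline_comment_prefixes=('#',))
    cp.read(path)
    warnings = []
    for key in ('labelfile-path', 'model-engine-file', 'tlt-encoded-model'):
        if not cp.has_option('property', key):
            continue
        value = cp.get('property', key)
        if "'" in value or '"' in value:
            warnings.append(f"{key}: quotes are taken literally: {value}")
        if not os.path.exists(value):
            warnings.append(f"{key}: file not found: {value}")
    return warnings
```

Running it against my config flags both the literal quotes and the resulting missing file for labelfile-path and model-engine-file.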

And this is my deepstream-app config file:

# Copyright (c) 2019 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=1
#uri=file://../../streams/sample_1080p_h264.mp4
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0
#num-sources=8

#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=5
sync=0
display-id=0
offset-x=0
offset-y=0
width=0
height=0
overlay-id=1
source-id=0

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
codec=1
sync=0
bitrate=2000000
output-file=out.mp4
source-id=0

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
#nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1280
height=720
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=/home/deepstream/Desktop/'Bill Son 2'/resnet18_detector_fp16.engine
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt

[tests]
file-loop=0

Do you have any idea?
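One thing I suspect (my own guess, not confirmed): the single quotes around 'Bill Son 2' in labelfile-path and model-engine-file. The config file is not interpreted by a shell, so the quotes would become literal characters in the path, which would explain "Could not parse labels file path". Moving the files to a directory without spaces or quotes should sidestep the issue entirely, e.g.:

```
[property]
# hypothetical relocated directory with no spaces in the name
labelfile-path=/home/deepstream/Desktop/bill_son_2/labels.txt
model-engine-file=/home/deepstream/Desktop/bill_son_2/resnet18_detector_fp16.engine
```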

Hi,

Have you checked the sample shared in this GitHub repository:
https://github.com/NVIDIA-AI-IOT/deepstream_4.x_apps#deepstream-configuration-file

Thanks.