• Hardware Platform (Jetson / GPU): Jetson Xavier NX
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.5.0
• TensorRT Version: 8.0.1.6
• CUDA Version: 10.2
• cuDNN Version: 8.0
• OpenCV Version: 4.1.1
Hello,
I have trained YOLOv5n6 on a custom dataset and I would like to deploy it on a Jetson Xavier NX via DeepStream.
Following the guide in the marcoslucianops DeepStream-Yolo repository on GitHub, I successfully created the .cfg and .wts files, named yolov5n6_custom.cfg and yolov5n6_custom.wts, at my desired inference resolution of width 960 and height 540. I need to detect objects at very different distances from the camera (both far away and fairly close), which is why I need a high input resolution.
I modified config_infer_primary_yoloV5.txt as follows:
[property]
enable-dla=1
use-dla-core=0
net-scale-factor=0.0039215697906911373
labelfile-path=/path/to/my/labelfile.txt
custom-network-config=yolov5n6_custom.cfg
model-file=yolov5n6_custom.wts
model-engine-file=yolov5n6_custom_b1_dla0_int8.engine
int8-calib-file=calib.table
model-color-format=0
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
and deepstream_app_config.txt as shown below:
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl
[tiled-display]
enable=1
rows=1
columns=1
width=960
height=540
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=5
camera-csi-sensor-id=0
camera-width=960
camera-height=540
camera-fps-n=20
camera-fps-d=1
camera-v4l2-dev-node=0
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0
[sink0]
enable=1
# Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0
# [sink1]
# enable=0
# type=3
#1=mp4 2=mkv
# container=1
#1=h264 2=h265
# codec=1
#encoder type 0=Hardware 1=Software
# enc-type=0
# sync=0
#iframeinterval=10
# bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
# profile=0
# output-file=out.mp4
# source-id=0
# [sink2]
# enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
# type=4
#1=h264 2=h265
# codec=1
#encoder type 0=Hardware 1=Software
# enc-type=0
# sync=0
# bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
# profile=0
# set below properties in case of RTSPStreaming
# rtsp-port=8554
# udp-port=5400
[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
buffer-pool-size=4
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=960
height=540
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1
# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV5.txt
[tests]
file-loop=0
I also followed the Ultralytics docs to perform INT8 calibration, since the FPS I was getting was too low.
I compiled libnvdsinfer_custom_impl_Yolo with OpenCV support exactly as described in the documentation:
CUDA_VER=10.2 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo
I created the calibration text file and image folder inside the cloned DeepStream-Yolo repository (~/Repo/DeepStream-Yolo) with 500 images, as explained in step 5. I exported the environment variables as follows:
export INT8_CALIB_IMG_PATH=~/Repo/DeepStream-Yolo/calibration.txt
export INT8_CALIB_BATCH_SIZE=1
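For completeness, the calibration list can be generated with a short script like the following (a minimal sketch, not the exact step from the docs; the folder layout and extensions are assumptions, and the throwaway directory is only there to make the sketch self-contained):

```python
import os
import tempfile

def write_calibration_list(img_dir, list_path, exts=('.jpg', '.jpeg', '.png')):
    """Write the absolute path of every image under img_dir to list_path, one per line."""
    paths = sorted(
        os.path.join(img_dir, name)
        for name in os.listdir(img_dir)
        if name.lower().endswith(exts)
    )
    with open(list_path, 'w') as f:
        f.write('\n'.join(paths) + '\n')
    return paths

# Throwaway directory with stand-ins for real images, so the sketch runs anywhere
demo_dir = tempfile.mkdtemp()
for name in ('a.jpg', 'b.png', 'notes.txt'):
    open(os.path.join(demo_dir, name), 'w').close()
written = write_calibration_list(demo_dir, os.path.join(demo_dir, 'calibration.txt'))
print(len(written))  # 2 (notes.txt is skipped)
```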
When executing deepstream-app -c deepstream_app_config.txt, I encountered the following error:
File does not exist: calib.table
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.1.1) /home/nvidia/host/build_opencv/modules/imgproc/src/resize.cpp:3720: error: (-215:Assertion failed) !ssize.empty() in function 'resize'
Aborted (core dumped)
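In case it is relevant: as far as I understand, the !ssize.empty() assertion in cv::resize fires when an image came back empty after reading, so one of the 500 entries in calibration.txt might be missing or unreadable. A minimal sketch to flag such entries (it only checks existence and file size, not whether the file actually decodes; a full check would also load each image with OpenCV):

```python
import os
import tempfile

def find_bad_entries(list_path):
    """Return entries of a calibration list that are missing or zero bytes on disk."""
    bad = []
    with open(list_path) as f:
        for line in f:
            path = line.strip()
            if path and (not os.path.isfile(path) or os.path.getsize(path) == 0):
                bad.append(path)
    return bad

# Demo with one readable entry and one missing entry, so the sketch is self-contained
tmp = tempfile.mkdtemp()
ok_img = os.path.join(tmp, 'ok.jpg')
with open(ok_img, 'wb') as f:
    f.write(b'\xff\xd8\xff')  # a few bytes; the content is never decoded here
list_path = os.path.join(tmp, 'calibration.txt')
with open(list_path, 'w') as f:
    f.write(ok_img + '\n' + os.path.join(tmp, 'missing.jpg') + '\n')
print(find_bad_entries(list_path))  # only the missing file is reported
```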
Any help is appreciated,
Thank you