Importing a model exported from Azure Custom Vision Compact Domain (S1) into DeepStream 6 (SSD model)

Description

I am trying to run an SSD ONNX model from an Azure export, but I am getting a bunch of errors when building the engine… Is this a supported configuration? I am trying this because, with the upgrade to DS, our detection rate has dropped by an order of magnitude, and we are at the point where we must have a deployment for a customer… We are having to run the YOLO-exported ONNX at a threshold of 0.001 and are still not getting what we need out of it. (Strangely, it almost never gives over-detections, only under-detections.)

I am using the following configuration lines:

parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infercustomparser.so
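
One thing worth checking up front, though I have not confirmed it: DeepStream's sample SSD parse functions were written for the sample SSD models' output layout, so the Azure export's output tensors may simply not match what NvDsInferParseCustomSSD expects. A minimal sketch with the onnx Python package to list them (model filename taken from the logs below; adjust the path as needed):

import onnx

# list the output tensors of the Azure export; whichever
# parse-bbox-func-name is configured must consume exactly these
model = onnx.load("combined_ssd_iteration2.onnx")
for out in model.graph.output:
    dims = [d.dim_param or d.dim_value for d in out.type.tensor_type.shape.dim]
    print(out.name, dims)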

These are the error logs…

(deepstream-test5-app:1): GLib-CRITICAL **: 02:03:55.967: g_strrstr: assertion ‘haystack != NULL’ failed
nvds_msgapi_connect : connect success
Opening in BLOCKING MODE
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
ERROR: Deserialize engine failed because file path: /app/resources/custom_configs/../custom_models/combined_ssd_iteration2.onnx_b6_dla0_fp32.engine open error
0:00:01.922001820 1 0x3d341560 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/app/resources/custom_configs/../custom_models/combined_ssd_iteration2.onnx_b6_dla0_fp32.engine failed
0:00:01.922150491 1 0x3d341560 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/app/resources/custom_configs/../custom_models/combined_ssd_iteration2.onnx_b6_dla0_fp32.engine failed, try rebuild
0:00:01.922184123 1 0x3d341560 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
WARNING: DLA does not support FP32 precision type, using FP16 mode.
WARNING: [TRT]: Default DLA is enabled but layer mean_value is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 1) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox0_conf/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox1_conf/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox2_conf/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox3_conf/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox4_conf/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox0_conf/flatten is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox1_conf/flatten is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox2_conf/flatten is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox3_conf/flatten is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox4_conf/flatten is not supported on DLA, falling back to GPU.
WARNING: [TRT]: mbox_conf/concat: DLA only supports concatenation on the C dimension.
WARNING: [TRT]: Default DLA is enabled but layer mbox_conf/concat is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox_conf is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox_conf/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox0_loc/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox1_loc/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox2_loc/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox3_loc/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox4_loc/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox0_loc/reshape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox1_loc/reshape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox2_loc/reshape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox3_loc/reshape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox4_loc/reshape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: node_of_mbox_loc: DLA only supports concatenation on the C dimension.
WARNING: [TRT]: DLA Layer node_of_mbox_loc does not support dynamic shapes in any dimension.
WARNING: [TRT]: Default DLA is enabled but layer node_of_mbox_loc is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer split is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer split_0 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer prior_sizes is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 172) [Shuffle] is not supported on DLA, falling back to GPU.
ERROR: [TRT]: 2: [standardEngineBuilder.cpp::buildEngine::2302] Error Code 2: Internal Error (Builder failed while analyzing shapes.)
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:02.126178983 1 0x3d341560 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:02.128934149 1 0x3d341560 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:02.128999557 1 0x3d341560 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:02.129507972 1 0x3d341560 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:02.129541636 1 0x3d341560 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Config file path: /app/resources/custom_configs/config_infer_custom_vision.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
WARNING: [TRT]: Default DLA is enabled but layer multiply1_B is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 175) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer prior_centers is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 178) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer multiply2_B is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 181) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer unary is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer prior_sizes_1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 185) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: concat2: DLA only supports concatenation on the C dimension.
WARNING: [TRT]: Default DLA is enabled but layer concat2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer get_max_classes is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer get_max_scores is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer non_max_suppression is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer non_max_suppression_2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer slice_out_selected_box_indexes is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer selected_box_reshape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer get_selected_boxes is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer split_detected_cxcy_wh is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer split_detected_cxcy_wh_3 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer value_2f is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 198) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: half_wh: DLA cores do not support DIV ElementWise operation.
WARNING: [TRT]: Default DLA is enabled but layer half_wh is not supported on DLA, falling back to GPU.
WARNING: [TRT]: detected_boxes: DLA only supports concatenation on the C dimension.
WARNING: [TRT]: Default DLA is enabled but layer detected_boxes is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer get_detected_classes is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer get_detected_scores is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer squeeze_detected_classes is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer squeeze_detected_scores is not supported on DLA, falling back to GPU.
[NvMultiObjectTracker] De-initialized
** ERROR: main:1455: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /app/resources/custom_configs/config_infer_custom_vision.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Disconnecting Azure…
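
Two observations on this log, both hedged: nearly every layer of the SSD head falls back from DLA to GPU, so enable-dla=1 buys little for this model and enable-dla=0 is a reasonable first simplification; and the final "Builder failed while analyzing shapes" is a TensorRT build failure that may be reproducible outside DeepStream entirely. A sketch against the TensorRT 8.x Python API (JetPack 4.6 era; model path assumed, and trtexec --onnx=<model> from the shell is an equivalent test):

import tensorrt as trt

# try to build the engine directly, outside the DeepStream pipeline,
# to separate TensorRT problems from nvinfer configuration problems
logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("combined_ssd_iteration2.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GiB
serialized = builder.build_serialized_network(network, config)
print("engine build", "succeeded" if serialized else "failed")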

Environment

How do I gather these in a concise manner? (One way is sketched just after this list.)

TensorRT Version:
GPU Type: Jetson Xavier NX
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): Container
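
To answer the question above: on a Jetson the versions mostly live in dpkg and a few well-known files. A sketch assuming JetPack-default locations:

import subprocess

def sh(cmd):
    # run a shell one-liner and return its trimmed output, or "n/a" on failure
    try:
        return subprocess.check_output(cmd, shell=True, text=True).strip() or "n/a"
    except subprocess.CalledProcessError:
        return "n/a"

print("L4T release:", sh("head -n1 /etc/nv_tegra_release"))
print("CUDA       :", sh("cat /usr/local/cuda/version.txt 2>/dev/null"))
print("TensorRT   :", sh("dpkg -l | grep -m1 'libnvinfer[0-9]' | awk '{print $3}'"))
print("cuDNN      :", sh("dpkg -l | grep -m1 'libcudnn[0-9]' | awk '{print $3}'"))
print("DeepStream :", sh("cat /opt/nvidia/deepstream/deepstream/version 2>/dev/null"))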

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

GPU-mode log here:

(deepstream-test5-app:1): GLib-CRITICAL **: 02:21:18.893: g_strrstr: assertion ‘haystack != NULL’ failed
nvds_msgapi_connect : connect success
Opening in BLOCKING MODE
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
ERROR: Deserialize engine failed because file path: /app/resources/custom_configs/../custom_models/combined_ssd_iteration2.onnx_b6_gpu0_fp32.engine open error
0:00:01.753218779 1 0x28041560 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/app/resources/custom_configs/../custom_models/combined_ssd_iteration2.onnx_b6_gpu0_fp32.engine failed
0:00:01.753359387 1 0x28041560 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/app/resources/custom_configs/../custom_models/combined_ssd_iteration2.onnx_b6_gpu0_fp32.engine failed, try rebuild
0:00:01.753393595 1 0x28041560 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: 4: [shapeCompiler.cpp::evaluateShapeChecks::822] Error Code 4: Internal Error (kOPT values for profile 0 violate shape constraints: reshape would change volume. IShuffleLayer mbox4_conf/transpose: reshaping failed for tensor: mbox4_conf)
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:04.068056177 1 0x28041560 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:04.071001586 1 0x28041560 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:04.071094066 1 0x28041560 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:04.071686834 1 0x28041560 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:04.071723314 1 0x28041560 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Config file path: /app/resources/custom_configs/config_infer_custom_vision.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
WARNING: [TRT]: DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
[NvMultiObjectTracker] De-initialized
** ERROR: main:1455: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /app/resources/custom_configs/config_infer_custom_vision.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
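
The GPU-mode error is the more telling of the two: "kOPT values for profile 0 violate shape constraints: reshape would change volume" at mbox4_conf/transpose reads like the batch-6 optimization profile disagreeing with a reshape whose volume was fixed at export time. A cheap, unconfirmed check is whether the export pins the batch dimension at 1; if it does, batch-size=1 in both config files is the first experiment, or the model needs to be re-exported with a dynamic batch axis. A sketch with the onnx package (model path assumed):

import onnx
from onnx import shape_inference

# print the input shapes after shape inference; a literal leading 1
# (rather than a symbolic name like "N") means the batch size is baked in
model = shape_inference.infer_shapes(onnx.load("combined_ssd_iteration2.onnx"))
for inp in model.graph.input:
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print("input:", inp.name, dims)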

What can we provide to help here? We get pretty good performance from the YOLO model in Custom Vision, but terrible performance on the Jetson. This is why we are grasping at straws and trying to switch up the model…

I will include our 2 main files here…
DS CFG:
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=3
columns=2
width=1920
height=1080
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
camera-id=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=4
uri=rtsp://192.168.211.2:554/s0
num-sources=1
#drop-frame-interval=2
gpu-id=0
select-rtp-protocol=0
rtsp-reconnect-interval-sec=30

#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source1]
enable=1
camera-id=2
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=4
uri=rtsp://192.168.211.3:554/s0
num-sources=1
#drop-frame-interval=2
gpu-id=0
select-rtp-protocol=0
rtsp-reconnect-interval-sec=30

#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source2]
enable=1
camera-id=3
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=4
uri=rtsp://192.168.211.4:554/s0
num-sources=1
#drop-frame-interval=2
gpu-id=0
select-rtp-protocol=0
rtsp-reconnect-interval-sec=30

#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source3]
enable=1
camera-id=4
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=4
uri=rtsp://192.168.211.5:554/s0
num-sources=1
#drop-frame-interval=2
gpu-id=0
select-rtp-protocol=0
rtsp-reconnect-interval-sec=30

#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source4]
enable=1
camera-id=5
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=4
uri=rtsp://192.168.211.6:554/s0
num-sources=1
#drop-frame-interval=2
gpu-id=0
select-rtp-protocol=0
rtsp-reconnect-interval-sec=30

#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source5]
enable=1
camera-id=6
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=4
uri=rtsp://192.168.211.7:554/s0
num-sources=1
#drop-frame-interval=2
gpu-id=0
select-rtp-protocol=0
rtsp-reconnect-interval-sec=30

#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source6]
enable=0
camera-id=7
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=4
uri=rtsp://192.168.211.8:554/s0
num-sources=1
#drop-frame-interval=2
gpu-id=0
select-rtp-protocol=0
rtsp-reconnect-interval-sec=30

#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source7]
enable=0
camera-id=8
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=4
uri=rtsp://192.168.211.9:554/s0
num-sources=1
#drop-frame-interval=2
gpu-id=0
select-rtp-protocol=0
rtsp-reconnect-interval-sec=30

#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=msgconv_config_egg_counter.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM - Custom schema payload
msg-conv-payload-type=1
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_azure_edge_proto.so
topic=DetectionToCounting
#Optional:
#msg-broker-config=../../../../libs/azure_protocol_adaptor/module_client/cfg_azure.txt

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=4800000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0

##set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[sink2]
enable=0
#link-to-demux=1
#source-id=0
#type: 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming/UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=3
#1=mp4 2=mkv
container=1
#codec: 1=h264 2=h265
codec=1
#encoder type: 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=4800000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
output-file=mpg4_output.mp4

[osd]
enable=1
gpu-id=0
border-width=2
text-size=12
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
process-mode=2
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=6
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000

## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=1
nvbuf-memory-type=0

## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
attach-sys-ts-as-ntp=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.

[primary-gie]
enable=1
gpu-id=0
config-file=config_infer_custom_vision.txt
#config-file=../custom_configs/config_infer_custom_vision.txt
batch-size=6
interval=2
gie-unique-id=1
nvbuf-memory-type=0

[tracker]
enable=1

# For the case of NvDCF tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=960
tracker-height=544
ll-lib-file=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=../custom_configs/config_tracker_NvDCF_accuracy.yml
gpu-id=0
enable-past-frame=1
enable-batch-process=1
display-tracking-id=1

Infer CFG:

# Following properties are mandatory when engine files are not specified:
#   int8-calib-file(Only in INT8)
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
#   ONNX: onnx-file
#
# Mandatory properties for detectors:
#   num-detected-classes
#
# Optional properties for detectors:
#   enable-dbscan(Default=false), interval(Primary mode only, Default=0)
#   custom-lib-path,
#   parse-bbox-func-name
#
# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
#   operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
#   input-object-min-width, input-object-min-height, input-object-max-width,
#   input-object-max-height
#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, gie-mode (Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
workspace-size=2500
gpu-id=0
net-scale-factor=1
#0=RGB, 1=BGR, 2=GRAY
#model-color-format=1
model-color-format=0
#Model Engine Needs to be from the output location called by ds in the logs
onnx-file=../custom_models/combined_iteration1.onnx
model-engine-file=../custom_models/combined_iteration1.onnx_b6_dla0_fp32.engine
#onnx-file=../custom_models/combined_ssd_iteration2.onnx
#model-engine-file=../custom_models/combined_ssd_iteration2.onnx_b6_gpu0_fp32.engine

#onnx-file=../custom_models/merge2_iteration3.onnx
#model-engine-file=../custom_models/merge2_iteration3.onnx_b6_dla0_fp32.engine
#onnx-file=../custom_models/iteration08.onnx
#onnx-file=../custom_models/konosBrownFlipped_iteration1.onnx
#onnx-file=../custom_models/iteration10.onnx
#model-engine-file=iteration10.onnx_b1_gpu0_fp32.engine
#onnx-file=../custom_models/iteration72.onnx
#onnx-file=../custom_models/testMerge_iteration1.onnx
#model-engine-file=../custom_models/iteration08.onnx_b3_gpu0_fp32.engine

labelfile-path=../custom_models/labels.txt

## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=1
gie-unique-id=1
#nickchange 1 to 0
maintain-aspect-ratio=0
#nickchange comment out
parse-bbox-func-name=NvDsInferParseCustomYoloV2Tiny
custom-lib-path=../custom_models/libnvdsinfer_custom_impl_Yolo_Custom_Vision.so
#parse-bbox-func-name=NvDsInferParseCustomSSD
#custom-lib-path=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infercustomparser.so
#0=Detector
network-type=0
#0=OpenCV GroupRect,1=dbscan,2=NMS,3=hybrid,4=none
#cluster-mode=2
cluster-mode=2
enable-dla=1
#enable-dla=0
use-dla-core=0
batch-size=6
#scaling-filter=4
#scaling-compute-hw=0

[class-attrs-all]
#nick change
#threshold=0.01
#pre-cluster-threshold=0.20
#B4Sean pre-cluster-threshold=0.25
pre-cluster-threshold=0.001
post-cluster-threshold=0.01
nms-iou-threshold=0.45
#pre-cluster-threshold=0.50
#nms-iou-threshold=0.50
#roi-top-offset=10
#roi-bottom-offset=14
topk=100
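
Since the TRT log above names tensors like detected_boxes, get_detected_classes, and get_detected_scores, whichever parse-bbox-func-name ends up configured has to consume exactly the outputs this export produces. Running the model once in onnxruntime shows their names, shapes, and dtypes; a sketch assuming a float32 input and the model path from earlier:

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("combined_ssd_iteration2.onnx")
inp = sess.get_inputs()[0]
# substitute 1 for any symbolic/dynamic dimension in the declared input shape
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
outputs = sess.run(None, {inp.name: np.zeros(shape, dtype=np.float32)})
for meta, out in zip(sess.get_outputs(), outputs):
    print(meta.name, out.shape, out.dtype)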

Hi,

This issue looks like more related to Deepstream. We are moving this post to the Deepstream forum to get better help.

Thank you.

Thank you!!!

From the log, TensorRT cannot build an engine from your model. Where did you get the model? You need to check whether your nvinfer configuration is compatible with your model.

This is a Microsoft Azure Custom Vision “Compact Domain (S1)” (SSD) generated model. We have the “Compact Domain” (YOLO) model running, but we ran into detection-regression issues when we moved to DeepStream 6.0.1.

We have made progress on the YOLO issue (I will raise that as a separate topic), but we are wondering whether there is a way to configure DeepStream to import an Azure-generated SSD model… Mostly, because we are not yet experts at DeepStream and TensorRT, I wanted a skilled second look at the logs (and at any files I need to share) to tell me whether this is supported or not.

The performance issue is here:

You need to check whether the nvinfer configuration is compatible with your model. If it is compatible, then the log means DeepStream does not support such a model.
