Deepstream 6.0 x86 custom model

I have a custom ONNX model that I want to use for inference on DeepStream 6.0. There doesn’t seem to be any tutorial on how to do this (Deep-Stream-ONNX/FAQ.md at master · thatbrguy/Deep-Stream-ONNX · GitHub is for the Jetson Nano). I have an ONNX file called test.onnx for a network that classifies 4 classes. The network takes input of shape 1x3x20x224x224 (a sequence of 20 RGB images, each with height=224 and width=224). I am trying to modify /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-3d-action-recognition to get it working on the video /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_qHD.mp4. After I run “./deepstream-3d-action-recognition -c deepstream_action_recognition_config.txt”, I get a segmentation fault :( Please help; the three relevant files are below:
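To make sure the ONNX graph lines up with the configs below, here is a small sketch (assuming the onnx Python package is installed; the expected names and shapes are only what I believe they should be) for dumping the model's input and output tensors:

# Sketch: dump the input/output names and shapes of test.onnx.
# The input name and shape should line up with tensor-name and
# network-input-shape in config_preprocess_3d_custom.txt below.
import onnx

model = onnx.load("test.onnx")
for inp in model.graph.input:
    dims = [d.dim_value for d in inp.type.tensor_type.shape.dim]
    print("input :", inp.name, dims)
for out in model.graph.output:
    dims = [d.dim_value for d in out.type.tensor_type.shape.dim]
    print("output:", out.name, dims)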

deepstream_action_recognition_config.txt

# deepstream action recognition config settings.
# run:
# $ deepstream-3d-action-recognition -c deepstream_action_recognition_config.txt

[action-recognition]

# stream/file source list
uri-list=file:///opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_qHD.mp4

# eglglessink settings
display-sync=1


# <preprocess-config> is the config file path for nvdspreprocess plugin
# <infer-config> is the config file path for nvinfer plugin

# Enable 3D preprocess and inference
preprocess-config=config_preprocess_3d_custom.txt
infer-config=config_infer_primary_3d_action.txt

# nvstreammux settings
muxer-height=720
muxer-width=1280

# nvstreammux batched push timeout in usec
muxer-batch-timeout=40000


# nvmultistreamtiler settings
tiler-height=720
tiler-width=1280

# Log debug level. 0: disabled. 1: debug. 2: verbose.
debug=0

# Enable fps print on screen. 0: disable. 1: enable
enable-fps=1

config_preprocess_3d_custom.txt

# The values in the config file are overridden by values set through GObject
# properties.

[property]
enable=1
target-unique-ids=1

# network-input-shape: batch, channel, sequence, height, width
# 3D sequence of 20 images
network-input-shape=1;3;20;224;224

# 0=RGB, 1=BGR, 2=GRAY
network-color-format=0
# 0=NCHW, 1=NHWC, 2=CUSTOM
network-input-order=0
# 0=FP32, 1=UINT8, 2=INT8, 3=UINT32, 4=INT32, 5=FP16
tensor-data-type=0
tensor-name=input_rgb

processing-width=224
processing-height=224

# 0=NVBUF_MEM_DEFAULT 1=NVBUF_MEM_CUDA_PINNED 2=NVBUF_MEM_CUDA_DEVICE
# 3=NVBUF_MEM_CUDA_UNIFIED  4=NVBUF_MEM_SURFACE_ARRAY(Jetson)
scaling-pool-memory-type=0

# 0=NvBufSurfTransformCompute_Default 1=NvBufSurfTransformCompute_GPU
# 2=NvBufSurfTransformCompute_VIC(Jetson)
scaling-pool-compute-hw=0

# Scaling Interpolation method
# 0=NvBufSurfTransformInter_Nearest 1=NvBufSurfTransformInter_Bilinear 2=NvBufSurfTransformInter_Algo1
# 3=NvBufSurfTransformInter_Algo2 4=NvBufSurfTransformInter_Algo3 5=NvBufSurfTransformInter_Algo4
# 6=NvBufSurfTransformInter_Default
scaling-filter=0

# model input tensor pool size
tensor-buf-pool-size=8

custom-lib-path=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_custom_sequence_preprocess.so
custom-tensor-preparation-function=CustomSequenceTensorPreparation

# 3D conv custom params
[user-configs]
channel-scale-factors=0.007843137;0.007843137;0.007843137
channel-mean-offsets=127.5;127.5;127.5
stride=1
subsample=0
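For what it's worth, the [user-configs] values above amount to the usual (pixel - mean) * scale normalization (assuming the custom sequence preprocess library applies the mean and scale the same way nvinfer's offsets/net-scale-factor do): with mean 127.5 and scale 1/127.5, the [0, 255] pixel range maps to roughly [-1, 1].

# Sketch of the normalization implied by channel-scale-factors / channel-mean-offsets,
# assuming out = scale * (in - mean).
scale = 0.007843137   # ~ 1 / 127.5
mean = 127.5
for pixel in (0.0, 127.5, 255.0):
    print(pixel, "->", round(scale * (pixel - mean), 3))   # -1.0, 0.0, 1.0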

config_infer_primary_3d_action.txt

# Following properties are mandatory when engine files are not specified:
#   int8-calib-file(Only in INT8)
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
#   ONNX: onnx-file
#
# Mandatory properties for detectors:
#   num-detected-classes
#
# Optional properties for detectors:
#   cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
#   custom-lib-path,
#   parse-bbox-func-name
#
# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
#   operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
#   input-object-min-width, input-object-min-height, input-object-max-width,
#   input-object-max-height
#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
gpu-id=0

onnx-file=test.onnx
num-detected-classes=4
is-classifier=0
labelfile-path=labels.txt
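For completeness, a quick way to sanity-check the model outside DeepStream (a sketch assuming the onnxruntime and numpy Python packages are installed) is to load test.onnx and feed a dummy 1x3x20x224x224 input:

# Sketch: run test.onnx on a dummy input to confirm it loads and produces class scores.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("test.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
dummy = np.random.rand(1, 3, 20, 224, 224).astype(np.float32)
outputs = sess.run(None, {input_name: dummy})
print([o.shape for o in outputs])   # expecting something like [(1, 4)] for 4 classes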

Hi @user129617,
Could you refer to this sample: deepstream_tao_apps/apps/tao_classifier at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub

or

deepstream_reference_apps/README.md at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub for the config files?

Did you try gdb to check where it crashes?
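For example, a typical session starts the app under gdb, reproduces the crash with run, and then prints a backtrace with bt to see which plugin or library faults (these are the standard gdb commands; adjust the binary and config paths to yours):

$ gdb --args ./deepstream-3d-action-recognition -c deepstream_action_recognition_config.txt
(gdb) run
(gdb) bt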

I adjusted a few things and now it works. Consider this solved.
