• Hardware Platform: GPU
• DeepStream Version: 5.0.0
• TensorRT Version: 7.0.0.11
• NVIDIA GPU Driver Version (valid for GPU only): 460.32.03
Hi, I have the following pipeline:
filesrc → qtdemux → h264parse → nvv4l2decoder → nvstreammux → nvinfer (pgie) → queue → nvinfer (sgie) → queue → nvvideoconvert → queue → nvdsosd → nvvideoconvert → x264enc → filesink
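For context, the app builds this roughly as follows. This is a trimmed sketch: error handling, the tail of the pipeline, and most properties are omitted, and input.mp4 plus the config file names are placeholders.

#include <gst/gst.h>

/* qtdemux creates its pads dynamically, so h264parse gets linked from here. */
static void
on_pad_added (GstElement *demux, GstPad *new_pad, gpointer user_data)
{
  GstPad *sinkpad = gst_element_get_static_pad (GST_ELEMENT (user_data), "sink");
  if (!gst_pad_is_linked (sinkpad))
    gst_pad_link (new_pad, sinkpad);
  gst_object_unref (sinkpad);
}

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElement *pipeline = gst_pipeline_new ("pipeline");
  GstElement *source  = gst_element_factory_make ("filesrc", "source");
  GstElement *demux   = gst_element_factory_make ("qtdemux", "demux");
  GstElement *parser  = gst_element_factory_make ("h264parse", "parser");
  GstElement *decoder = gst_element_factory_make ("nvv4l2decoder", "decoder");
  GstElement *mux     = gst_element_factory_make ("nvstreammux", "mux");
  GstElement *pgie    = gst_element_factory_make ("nvinfer", "pgie");
  GstElement *sgie    = gst_element_factory_make ("nvinfer", "sgie");
  /* queues, nvvideoconvert, nvdsosd, x264enc, filesink omitted here */

  g_object_set (source, "location", "input.mp4", NULL);
  g_object_set (mux, "batch-size", 1, "width", 1280, "height", 720, NULL);
  g_object_set (pgie, "config-file-path", "pgie_config.txt", NULL);
  g_object_set (sgie, "config-file-path", "sgie_config.txt", NULL);

  gst_bin_add_many (GST_BIN (pipeline), source, demux, parser, decoder,
      mux, pgie, sgie, NULL);

  gst_element_link (source, demux);
  g_signal_connect (demux, "pad-added", G_CALLBACK (on_pad_added), parser);
  gst_element_link (parser, decoder);

  /* nvstreammux sink pads are request pads, so they are linked by hand. */
  GstPad *mux_sink = gst_element_get_request_pad (mux, "sink_0");
  GstPad *dec_src  = gst_element_get_static_pad (decoder, "src");
  gst_pad_link (dec_src, mux_sink);
  gst_object_unref (dec_src);
  gst_object_unref (mux_sink);

  gst_element_link_many (mux, pgie, sgie, NULL);
  /* ... remaining links, then the state change shown further below ... */
  return 0;
}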
with pgie config:
[property]
gpu-id=0
net-scale-factor=0.0078431372
offsets=119.8561;110.8077;104.1462
onnx-file=/opt/models/detection.onnx
labelfile-path=labels.txt
infer-dims=3;320;320
batch-size=1
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=2
interval=0
gie-unique-id=1
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_primary_parser/libnvdsinfer_primary_parser.so
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
pre-cluster-threshold=0.1
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

# Per class configuration
#[class-attrs-2]
#threshold=0.6
#roi-top-offset=20
#roi-bottom-offset=10
#detected-min-w=40
#detected-min-h=40
#detected-max-w=400
#detected-max-h=800
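A side note on the two preprocessing values above: nvinfer computes y = net-scale-factor * (x - offset) per channel, and 0.0078431372 is 1/127.5, so pixel values land roughly in [-1, 1]. Spelled out as a hypothetical helper, just to make the formula explicit:

/* nvinfer's documented per-pixel preprocessing: y = net-scale-factor * (x - offset).
 * With net-scale-factor = 1/127.5 and offsets near 105-120, the network input
 * ends up roughly in [-1, 1]. */
static float
preprocess_pixel (float x, float channel_offset)
{
  const float net_scale_factor = 0.0078431372f; /* 1/127.5 */
  return net_scale_factor * (x - channel_offset);
}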
and sgie conf:
[property]
gpu-id=0
net-scale-factor=1
onnx-file=/opt/models/lmd192.onnx
batch-size=1
# Integer 1=Primary 2=Secondary
process-mode=2
gie-unique-id=2
operate-on-gie-id=1
#nw mode 0: FP32, 1: INT8, 2: FP16
network-mode=0
# Integer 0: OpenCV groupRectangles() 1: DBSCAN 2: Non Maximum Suppression 3: DBSCAN + NMS Hybrid 4:No clustering
cluster-mode=2
# Binding dimensions to set on the image input layer.
# infer-dims=3;192;192
infer-dims=1;192;192
# parse-bbox-func-name=NvDsInferParseLandmarks
# custom-lib-path=nvdsinfer_landmark_parser/libnvdsinfer_landmark_parser.so
# Color format required by the model Integer 0: RGB 1: BGR 2: GRAY
model-color-format=2
num-detected-classes=1

#nw type Integer 0: Detector 1: Classifier 2: Segmentation 3: Instance Segmentation 100=Other
network-type=100
# Gst-nvinfer attaches raw tensor output as Gst Buffer metadata.
output-tensor-meta=1
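What I'm ultimately after is reading that raw output in a pad probe, along the lines of what deepstream-infer-tensor-meta-test does. A minimal sketch, assuming FP32 output layers and using the host buffers:

#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h"

/* Probe downstream of the sgie: walks each object's user meta and pulls out
 * the NvDsInferTensorMeta that output-tensor-meta=1 attaches. */
static GstPadProbeReturn
sgie_pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;

      /* A secondary gie attaches its raw output as user meta per object. */
      for (NvDsMetaList *l_user = obj_meta->obj_user_meta_list; l_user != NULL;
          l_user = l_user->next) {
        NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
        if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
          continue;

        NvDsInferTensorMeta *tmeta =
            (NvDsInferTensorMeta *) user_meta->user_meta_data;
        for (guint i = 0; i < tmeta->num_output_layers; i++) {
          NvDsInferLayerInfo *layer = &tmeta->output_layers_info[i];
          /* Assumes the model's output layers are FP32. */
          float *data = (float *) tmeta->out_buf_ptrs_host[i];
          g_print ("layer %s: first value %f\n", layer->layerName, data[0]);
        }
      }
    }
  }
  return GST_PAD_PROBE_OK;
}

I would attach it on the sgie's src pad (or the queue after it) with gst_pad_add_probe (sgie_src_pad, GST_PAD_PROBE_TYPE_BUFFER, sgie_pad_buffer_probe, NULL, NULL);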
When calling
gst_element_set_state (pipeline, GST_STATE_PLAYING);
gst_element_get_state (pipeline, NULL, NULL, GST_CLOCK_TIME_NONE);
the program never returns from the second call. However, when I set output-tensor-meta=0 in the sgie config, things work as expected, except that I don't get the tensor meta output, which I need. What is going wrong here? I altered the deepstream-infer-tensor-meta-test sample's pgie config to
network-type=1
output-tensor-meta=0
which works and, I suppose, gives me more or less the same setup as in my app, which doesn't. Any help is appreciated; I've been stuck on this for a while now.
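For completeness, here is the kind of timed wait I can swap in so the hang at least surfaces a bus error instead of blocking forever. A sketch; the 5-second timeout is arbitrary:

GstStateChangeReturn ret;

gst_element_set_state (pipeline, GST_STATE_PLAYING);
/* Wait at most 5 seconds instead of GST_CLOCK_TIME_NONE, then inspect the bus. */
ret = gst_element_get_state (pipeline, NULL, NULL, 5 * GST_SECOND);
if (ret != GST_STATE_CHANGE_SUCCESS) {
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_pop_filtered (bus, GST_MESSAGE_ERROR);
  if (msg != NULL) {
    GError *err = NULL;
    gchar *dbg = NULL;
    gst_message_parse_error (msg, &err, &dbg);
    g_printerr ("Error: %s (%s)\n", err->message, dbg ? dbg : "no details");
    g_error_free (err);
    g_free (dbg);
    gst_message_unref (msg);
  }
  gst_object_unref (bus);
}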