Trouble running own model with Deepstream Graph Composer

• Hardware Platform : GPU
• DeepStream Version : 7.1
• Graph Composer Version : 4.1.0

Hi, I’m working on creating a graph that runs a video through my own custom inference. For now, I’m testing things in Composer because I’m new to this.

Here is the graph:

My network just takes the video stream and returns it with changed colors. model.zip

I have trouble running it. I used this command:
/opt/nvidia/graph-composer/execute_graph.sh tests_graph.yaml tests_graph.parameters.yaml -d /opt/nvidia/graph-composer/config/target_x86_64.yaml

and got this error:

Running...
****** NvDsScheduler Runtime Keyboard controls:
p: Pause pipeline
r: Resume pipeline
q: Quit pipeline
2025-01-13 14:48:48.886 INFO  extensions/nvdsbase/nvds_scheduler.cpp@396: NvDsScheduler Pipeline ready

Failed to query video capabilities: Invalid argument
2025-01-13 14:48:49.021 INFO  extensions/nvdsbase/nvds_scheduler.cpp@381: NvDsScheduler Pipeline running

0:00:02.439865852 12561 0x728a0155a630 ERROR                nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger:<nvinfer_bin_nvinfer> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:60> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:02.439881248 12561 0x728a0155a630 ERROR                nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger:<nvinfer_bin_nvinfer> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:736> [UID = 1]: Failed to parse bboxes
====================================================================================================
|                            GXF terminated unexpectedly                                           |
====================================================================================================
#01 /opt/nvidia/graph-composer/gxe(+0x92fa) [0x61c3c20212fa]
#02 /opt/nvidia/graph-composer/gxe(+0x244da) [0x61c3c203c4da]
#03 /opt/nvidia/graph-composer/gxe(+0x247bc) [0x61c3c203c7bc]
#04 /lib/x86_64-linux-gnu/libc.so.6(+0x42520) [0x728bdd442520]
#05 attach_metadata_detector(_GstNvInfer*, _GstMiniObject*, GstNvInferFrame&, NvDsInferDetectionOutput&, float) /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so(_Z24attach_metadata_detectorP11_GstNvInferP14_GstMiniObjectR15GstNvInferFrameR24NvDsInferDetectionOutputf+0x87) [0x728bd734b2d7]
#06 /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so(+0x1c1d8) [0x728bd733a1d8]
#07 /lib/x86_64-linux-gnu/libglib-2.0.so.0(+0x8e491) [0x728bdd33a491]
#08 /lib/x86_64-linux-gnu/libc.so.6(+0x94ac3) [0x728bdd494ac3]
#09 /lib/x86_64-linux-gnu/libc.so.6(+0x126850) [0x728bdd526850]
====================================================================================================
Minidump written to: /tmp/5fa03c00-e1fd-49e4-388928a5-e7982b17.dmp
/opt/nvidia/graph-composer/execute_graph.sh: line 331: 12561 Segmentation fault      (core dumped) ${RUN_PREFIX} ${GXE_PATH} -app "${GRAPH_FILES}" -manifest "${MANIFEST_FILE}" ${RUN_POSTFIX}
*******************************************************************
End tests_graph.yaml
*******************************************************************

The stream runs fine without the Video Inference block, and given the error message, I assume that block is at fault.

And I think I correctly configured the infer file:

[property]
gpu-id=0
net-scale-factor=0.00392156862745098
onnx-file=/home/.../Documents/deepstream_test/sample/models/test_graph/model.onnx
batch-size=1
process-mode=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
interval=0
gie-unique-id=1
## 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode=4

Could someone help me with my issue?

Let me know if you need more information.

Thanks in advance.

Gst-nvinfer postprocessing failure. Gst-nvinfer — DeepStream documentation

Please check your nvinfer configuration. We don’t know anything about your model. For the nvinfer configuration, you can refer to DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

Hi,

Thanks for the reply, it helped me a lot to understand the config file’s parameters, though not everything, like the net-scale-factor and offsets parameters (I found out that the formula is y = net-scale-factor * (x - mean)). And I still have the same issue even after modifying things that could have resolved the problem.

Do I have to specify the offsets in my infer config file, or does net-scale-factor alone suffice?
I doubt it, because I never saw offsets in any config sample, so the Video Inference block probably handles it itself.
And what exactly is expected for these parameters? Is net-scale-factor 1/255 because of the pixel value range (or is it something else)? Can it be larger? Is 1/255 a good value?

I have another graph with the opposite issue: this one gives me a stream, but the inference doesn’t seem to work. It’s not the same model.onnx and video file, but everything else is the same and seems properly set up.

So maybe it’s related to my models or my video (.mp4). I don’t know; both are 1920x1080, and the model at least should be FP32.
Do you have any leads I should explore?

Sorry for asking so many questions; I hope you can help me as best you can.

Thanks in advance.

Infer config file (.txt):

[property]
gpu-id=0
net-scale-factor=0.00392156862745098
onnx-file=/home/baury/Documents/deepstream_test/sample/models/test_graph/model.onnx
model-engine-file=/home/baury/Documents/deepstream_test/sample/models/test_graph/model.onnx_b1_gpu0_fp32.engine
batch-size=1
process-mode=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
interval=0
gie-unique-id=1
## 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode=4

Parameter file (.parameters.yaml):

components:
- name: Single Source Input18
  parameters:
    uri: file:///home/baury/Documents/deepstream_test/sample/streams/sample_1080p_h264.mp4
name: Single Source Input
---
components:
- name: Stream Muxer33
  parameters:
    batch-size: 1
    height: 1080
    width: 1920
name: Stream Muxer
---
components:
- name: Video Inference38
  parameters:
    config-file-path: /home/baury/Documents/deepstream_test/sample/configs/deepstream-app/config_infer_primary_test.txt
    model-engine-file: /home/baury/Documents/deepstream_test/sample/models/test_graph/model.onnx_b1_gpu0_fp32.engine
name: Video Inference

Graph file (.yaml):

application:
  name: tests_graph
---
dependencies:
- extension: NvDsSourceExt
  uuid: a632d022-3425-4848-9074-e6483ef74366
  version: 1.6.0
- extension: NvDsBaseExt
  uuid: 56d7e3ec-62c6-4652-bcc8-4f1c3b00df03
  version: 1.6.0
- extension: NvDsVisualizationExt
  uuid: 25903cd4-fc5c-4139-987b-47bb27e8b424
  version: 1.6.0
- extension: NvDsOutputSinkExt
  uuid: 3fc9ad87-03e7-47a8-bbfc-8501c3f7ff2f
  version: 1.6.0
- extension: NvDsMuxDemuxExt
  uuid: 89b8398c-5820-4051-835c-a91f2d49766b
  version: 1.6.0
- extension: NvDsInferenceExt
  uuid: 0b02963e-c24e-4c13-ace0-c4cdf36c7c71
  version: 1.6.0
---
components:
- name: Single Source Input18
  parameters:
    audio-out-%u: Dynamic Data Output20
    uri: file:///home/baury/Documents/deepstream_test/sample/streams/sample_1080p_h264.mp4
    video-out-%u: Dynamic Data Output19
  type: nvidia::deepstream::NvDsSingleSrcInput
- name: Dynamic Data Output19
  type: nvidia::deepstream::NvDsDynamicOutput
- name: Dynamic Data Output20
  type: nvidia::deepstream::NvDsDynamicOutput
name: Single Source Input
ui_property:
  position:
    x: -592.23876953125
    y: -322.08154296875
---
components:
- name: On Screen Display25
  parameters:
    video-in: Static Data Input27
    video-out: Static Data Output26
  type: nvidia::deepstream::NvDsOSD
- name: Static Data Output26
  type: nvidia::deepstream::NvDsStaticOutput
- name: Static Data Input27
  type: nvidia::deepstream::NvDsStaticInput
name: On Screen Display
ui_property:
  position:
    x: 263.2415771484375
    y: -278.6819152832031
---
components:
- name: NVidia Video Renderer29
  parameters:
    video-in: Static Data Input30
  type: nvidia::deepstream::NvDsVideoRenderer
- name: Static Data Input30
  type: nvidia::deepstream::NvDsStaticInput
name: NVidia Video Renderer
ui_property:
  position:
    x: 521.6365356445312
    y: -254.86587524414062
---
components:
- name: Deepstream Data Connection31
  parameters:
    source: On Screen Display/Static Data Output26
    target: NVidia Video Renderer/Static Data Input30
  type: nvidia::deepstream::NvDsConnection
name: node11
---
components:
- name: Deepstream Scheduler32
  type: nvidia::deepstream::NvDsScheduler
name: scheduler
---
components:
- name: Stream Muxer33
  parameters:
    batch-size: 1
    height: 1080
    video-in-%u: On Request Data Input34
    video-out: Static Data Output35
    width: 1920
  type: nvidia::deepstream::NvDsStreamMux
- name: On Request Data Input34
  type: nvidia::deepstream::NvDsOnRequestInput
- name: Static Data Output35
  type: nvidia::deepstream::NvDsStaticOutput
name: Stream Muxer
ui_property:
  position:
    x: -291.83837890625
    y: -286.4609375
---
components:
- name: Deepstream Data Connection36
  parameters:
    source: Single Source Input/Dynamic Data Output19
    target: Stream Muxer/On Request Data Input34
  type: nvidia::deepstream::NvDsConnection
name: node12
---
components:
- name: Video Inference38
  parameters:
    config-file-path: /home/baury/Documents/deepstream_test/sample/configs/deepstream-app/config_infer_primary_test.txt
    model-engine-file: /home/baury/Documents/deepstream_test/sample/models/test_graph/model.onnx_b1_gpu0_fp32.engine
    video-in: Static Data Input40
    video-out: Static Data Output39
  type: nvidia::deepstream::NvDsInferVideo
- name: Static Data Output39
  type: nvidia::deepstream::NvDsStaticOutput
- name: Static Data Input40
  type: nvidia::deepstream::NvDsStaticInput
name: Video Inference
ui_property:
  position:
    x: -5.0347185134887695
    y: -317.95867919921875
---
components:
- name: Deepstream Data Connection41
  parameters:
    source: Stream Muxer/Static Data Output35
    target: Video Inference/Static Data Input40
  type: nvidia::deepstream::NvDsConnection
name: node14
---
components:
- name: Deepstream Data Connection42
  parameters:
    source: Video Inference/Static Data Output39
    target: On Screen Display/Static Data Input27
  type: nvidia::deepstream::NvDsConnection
name: node15

And same error message.

It depends on your model. The net-scale-factor and offsets parameters are for the normalization preprocessing algorithm; if the model uses normalization in its preprocessing, you need to set these parameters.

It depends on the model too. Please ask the person who gave you the model about the preprocessing algorithm used when training it.
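For reference, the per-pixel normalization gst-nvinfer applies is the formula mentioned above, y = net-scale-factor * (x - mean). A minimal numpy sketch of that arithmetic (the offsets values here are illustrative, not taken from any config in this thread):

```python
import numpy as np

# gst-nvinfer normalizes each input pixel as: y = net-scale-factor * (x - mean)
# With net-scale-factor = 1/255 and no offsets, [0, 255] maps to roughly [0.0, 1.0],
# which is why 0.00392156862745098 (= 1/255) shows up in so many configs.
net_scale_factor = 1.0 / 255.0
offsets = np.array([0.0, 0.0, 0.0])  # per-channel means; effectively 0 when 'offsets' is unset

frame = np.array([[[0.0], [127.5], [255.0]]])  # toy 1x3x1 "image", one value per channel
normalized = net_scale_factor * (frame - offsets.reshape(1, 3, 1))

print(normalized.ravel())  # pixel values scaled into the [0, 1] range
```

If the model was trained on inputs normalized differently (e.g. per-channel mean subtraction), the offsets must match the training preprocessing, which only the model's author can confirm.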

Hi,

Thanks for the reply, it helps a lot.

I’m thinking that maybe I’m approaching the problem the wrong way, and that’s why I’m struggling.
I want to do image restoration, for example by applying an AI mask to the image.

Is nvidia::deepstream::NvDsInferVideo the right block for this use case? I’ve only seen detection models using it on the forum, so maybe my problem is that it’s not the right block.

Is there a block dedicated to this type of model, or do I have to create one myself?
I think I could do it with Graph Composer, but even with the DeepStream packages installed, I don’t have access to this option.


How can I fix that? Or do I have to write the extension in Python?

Again, thanks in advance for your help.

The nvidia::deepstream::NvDsInferVideo extension can be configured to adapt to different models by setting corresponding nvinfer configuration file.

Your key problem is how to deploy your model with gst-nvinfer. The Graph Composer DeepStream extensions are all based on DeepStream components.

Thanks for the clarification, I know what to look for now. I now have a stream, but without the inference mask applied.

One last question, I think: do you have a skeleton of an infer config file for a segmentation-type model like mine that I should follow?
(I didn’t find a sample matching my case, but maybe I don’t need one.)

I have this:

[property]
gpu-id=0
net-scale-factor=0.00392156862745098
onnx-file=/home/baury/Documents/deepstream_test/sample/models/test_graph/model.onnx
model-engine-file=/home/baury/Documents/deepstream_test/sample/models/test_graph/model.onnx_b1_gpu0_fp32.engine
batch-size=1
process-mode=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
network-type=2
interval=0
gie-unique-id=15
segmentation-threshold=0.2
segmentation-output-order=1

Are there other parameters that are mandatory in this case, or any others that could help solve my problem? I don’t know why I get an output but without the inference applied.

Thanks !!

Please refer to deepstream_tao_apps sample of segmentation models.

Hi,

Thanks, do you have any image restoration/modification example? Something like applying a filter to an image?

I can’t find any segmentation config that fixes my problem…

Thanks.

Can you elaborate on the details of your model? What are the model’s input and output? What do you want to do with the model’s output?

Hi,

So I want this model to apply a green filter to the video. The model modifies the color values so the image has more green and yellow.

name: input
tensor: float32[1,3,1080,1920]

name: output
tensor: float32[1,3,1080,1920]

What I’m expecting is the video covered with this green/yellow filter, like your detection samples but without the detection; just an image modification / enhancement.

I want to know whether DeepStream and Graph Composer can handle this case. I used a simple model like this on purpose, so I can clearly see whether it works.
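As a sanity check outside DeepStream, the kind of transform described (a pure image-to-image color shift over float32[1, 3, 1080, 1920] tensors) can be mimicked in a few lines of numpy. The per-channel gains below are made up for illustration; the actual weights inside model.onnx are unknown:

```python
import numpy as np

# Toy stand-in for the model: input and output are both float32[1, 3, H, W],
# matching the ONNX tensor spec above. Hypothetical per-channel gains that
# push the image toward green/yellow (boost R and G, damp B).
gains = np.array([1.1, 1.3, 0.7], dtype=np.float32).reshape(1, 3, 1, 1)

def green_filter(x: np.ndarray) -> np.ndarray:
    """Apply the channel gains and keep values in the normalized [0, 1] range."""
    return np.clip(x * gains, 0.0, 1.0)

# Random normalized frame with the same layout as the model's input tensor.
x = np.random.default_rng(0).random((1, 3, 1080, 1920), dtype=np.float32)
y = green_filter(x)
# y has the same shape/dtype as x: the "model" only recolors, it detects nothing.
```

Running the real model.onnx through onnxruntime on one decoded frame and comparing against the DeepStream output would confirm whether the engine itself works and the problem is only in how the output tensor gets attached back to the video.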

Config file:

[property]
gpu-id=0
net-scale-factor=0.00392156862745098
onnx-file=/home/baury/Documents/deepstream_test/sample/models/test_graph/model.onnx
model-engine-file=/home/baury/Documents/deepstream_test/sample/models/test_graph/model.onnx_b1_gpu0_fp32.engine
model-color-format=0
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
network-type=2
interval=0
gie-unique-id=1
cluster-mode=4

Thanks!!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.