Dynamically assign componentId

Hardware Platform: GPU
DeepStream: 7.0
Docker Image: 7.0-triton-multiarch
GPU Type: A4000

I’m assigning componentId from stream_id inside the generate_event_msg_meta() function to separate messages into multiple topics:
meta->componentId = stream_id + 1;
I use use-nvmultiurisrcbin=1, and the user can add/remove streams (max 6 streams). I’m working on deepstream-test5-app.
Sink config:

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
type=6
msg-conv-payload-type=1
msg-conv-msg2p-new-api=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_amqp_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=rabbit.app_network;5672;guest;guest
topic=deepstream1
msg-broker-comp-id=1
msg-conv-comp-id=1
#Optional:
msg-broker-config=/opt/nvidia/deepstream/deepstream/sources/libs/amqp_protocol_adaptor/cfg_amqp.txt
msg-conv-msg2p-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_msgconv.so


[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
type=6
msg-conv-payload-type=1
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_amqp_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=rabbit.app_network;5672;guest;guest
topic=deepstream2
msg-broker-comp-id=2
msg-conv-comp-id=2
#Optional:
msg-broker-config=/opt/nvidia/deepstream/deepstream/sources/libs/amqp_protocol_adaptor/cfg_amqp.txt
msg-conv-msg2p-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_msgconv.so

[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
type=6
msg-conv-payload-type=1
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_amqp_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=rabbit.app_network;5672;guest;guest
topic=deepstream3
msg-broker-comp-id=3
msg-conv-comp-id=3
#Optional:
msg-broker-config=/opt/nvidia/deepstream/deepstream/sources/libs/amqp_protocol_adaptor/cfg_amqp.txt
msg-conv-msg2p-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_msgconv.so

[sink3]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
type=6
msg-conv-payload-type=1
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_amqp_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=rabbit.app_network;5672;guest;guest
topic=deepstream4
msg-broker-comp-id=4
msg-conv-comp-id=4
#Optional:
msg-broker-config=/opt/nvidia/deepstream/deepstream/sources/libs/amqp_protocol_adaptor/cfg_amqp.txt
msg-conv-msg2p-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_msgconv.so

[sink4]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
type=6
msg-conv-payload-type=1
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_amqp_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=rabbit.app_network;5672;guest;guest
topic=deepstream5
msg-broker-comp-id=5
msg-conv-comp-id=5
#Optional:
msg-broker-config=/opt/nvidia/deepstream/deepstream/sources/libs/amqp_protocol_adaptor/cfg_amqp.txt
msg-conv-msg2p-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_msgconv.so

[sink5]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
type=6
msg-conv-payload-type=1
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_amqp_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=rabbit.app_network;5672;guest;guest
topic=deepstream6
msg-broker-comp-id=6
msg-conv-comp-id=6
#Optional:
msg-broker-config=/opt/nvidia/deepstream/deepstream/sources/libs/amqp_protocol_adaptor/cfg_amqp.txt
msg-conv-msg2p-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_msgconv.so

My msg-broker-comp-id range is 1 to 6.
The problem: after adding 6 streams, stream_id reaches 5 (it starts from 0). When a stream is removed and a new stream is added, stream_id keeps incrementing, so with meta->componentId = stream_id + 1; the componentId becomes 7, which is not defined in the config file. That stream then doesn’t send any messages, and it keeps going like this.
After a stream is removed, how can I know which sinks are in use and which are not, so I can pick an unused msg-broker-comp-id at that moment? (A rough sketch of what I mean follows the example below.)

Example:
6 streams added:
stream_id=0 -> componentId = 1
stream_id=1 -> componentId = 2
stream_id=2 -> componentId = 3
stream_id=3 -> componentId = 4
stream_id=4 -> componentId = 5
stream_id=5 -> componentId = 6
The stream with stream_id=2 is removed. Now componentId=3 is unused.
A new stream is added. I want to assign componentId=3 to it:
stream_id=6 -> componentId = 3
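
What I have in mind is a small slot allocator like the sketch below. This is only a sketch, assuming I can hook acquire/release into wherever the app learns that a stream was added or removed; that hook is exactly the part I’m unsure about.

#include <glib.h>

#define MAX_COMP_SLOTS 6   /* matches the six sink groups above */

/* TRUE = comp-id (index + 1) is currently assigned to a live stream. */
static gboolean comp_slot_used[MAX_COMP_SLOTS];
/* Maps a source_id handed out by nvmultiurisrcbin to its comp-id. */
static GHashTable *source_to_slot;   /* guint source_id -> gint comp_id */

/* Call when a new source_id shows up; returns 1..6, or -1 if all slots are busy. */
static gint
acquire_comp_id (guint source_id)
{
  if (!source_to_slot)
    source_to_slot = g_hash_table_new (g_direct_hash, g_direct_equal);
  for (gint i = 0; i < MAX_COMP_SLOTS; i++) {
    if (!comp_slot_used[i]) {
      comp_slot_used[i] = TRUE;
      g_hash_table_insert (source_to_slot, GUINT_TO_POINTER (source_id),
          GINT_TO_POINTER (i + 1));
      return i + 1;
    }
  }
  return -1;                    /* no free msg-broker-comp-id */
}

/* Call when the stream with this source_id is removed. */
static void
release_comp_id (guint source_id)
{
  gpointer slot;
  if (source_to_slot && g_hash_table_lookup_extended (source_to_slot,
          GUINT_TO_POINTER (source_id), NULL, &slot)) {
    comp_slot_used[GPOINTER_TO_INT (slot) - 1] = FALSE;
    g_hash_table_remove (source_to_slot, GUINT_TO_POINTER (source_id));
  }
}

generate_event_msg_meta() would then look the slot up instead of using stream_id + 1, e.g. meta->componentId = GPOINTER_TO_INT (g_hash_table_lookup (source_to_slot, GUINT_TO_POINTER (stream_id)));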

Yes, the nvmultiurisrcbin is implemented this way.

You can check the source code /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test5/deepstream_test5_app_main.c. The stream_id is actually frame_meta->source_id, and that source id is the sink pad index (NVIDIA DeepStream SDK API Reference: _NvDsFrameMeta Struct Reference | NVIDIA Docs).

The componentId (NVIDIA DeepStream SDK API Reference: NvDsEventMsgMeta Struct Reference | NVIDIA Docs) is actually the inferencing component’s id, which comes from unique_component_id in NvDsObjectMeta (NVIDIA DeepStream SDK API Reference: _NvDsObjectMeta Struct Reference | NVIDIA Docs).
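
In other words, the default and the customization in this thread fill the field from different places. A minimal sketch of the distinction (not the exact deepstream-test5 code):

#include "nvdsmeta.h"          /* NvDsObjectMeta, NvDsFrameMeta */
#include "nvdsmeta_schema.h"   /* NvDsEventMsgMeta */

static void
fill_component_id (NvDsEventMsgMeta *meta, NvDsObjectMeta *obj_meta,
    NvDsFrameMeta *frame_meta, gboolean per_stream)
{
  if (!per_stream)
    /* Default meaning: id of the inferencing component (gie unique id). */
    meta->componentId = obj_meta->unique_component_id;
  else
    /* The customization in this thread: derive it from the stream. */
    meta->componentId = frame_meta->source_id + 1;
}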

Does the pipeline create new sinks after all the sinks defined in the config file are in use? I defined 6 sink groups in the config, but frame_meta->source_id goes past 6 after a bunch of stream adds/removes. If it works like this, how can I set the sinks’ msg-broker-comp-id and msg-conv-comp-id?

If you limit the number of sources with “max-batch-size” in the [source-list] group configuration, frame_meta->source_id will not exceed max-batch-size-1.
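
For reference, a minimal [source-list] sketch with only the keys already mentioned in this thread (the shipped sample configs contain additional keys; check the configs for your DeepStream version):

[source-list]
use-nvmultiurisrcbin=1
max-batch-size=6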

The msg-broker-comp-id and msg-conv-comp-id have nothing to do with the sources. What do you want to do with the msg-broker-comp-id and msg-conv-comp-id?

Theoretically, you only need to guarantee that the msg-broker-comp-id and msg-conv-comp-id values are unique in the pipeline. The values are used to set the “comp-id” property of nvmsgbroker and nvmsgconv.
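
For example, when a type=6 sink group is parsed, the two values end up as the comp-id property on the corresponding elements, roughly like this sketch (the element pointers are placeholders for whatever the app created for that sink group):

#include <gst/gst.h>

/* Sketch: apply a sink group's msg-conv-comp-id / msg-broker-comp-id. */
static void
apply_comp_id (GstElement *msgconv, GstElement *msgbroker, guint comp_id)
{
  g_object_set (G_OBJECT (msgconv), "comp-id", comp_id, NULL);    /* msg-conv-comp-id */
  g_object_set (G_OBJECT (msgbroker), "comp-id", comp_id, NULL);  /* msg-broker-comp-id */
}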

I’m using max-batch-size=6. If you only add streams it doesn’t exceed the limit, but once max-batch-size is reached, a stream ending or being removed lets you add new streams, and then frame_meta->source_id exceeds max-batch-size-1 in my tests.

If the pipeline creates new sinks after all the sinks defined in the config file are in use, I need to set their msg-broker-comp-id and msg-conv-comp-id so messages get sent. I’m trying to match source_id with componentId to separate messages into multiple topics. When source_id passes max-batch-size-1, componentId no longer matches any of the defined msg-broker-comp-id / msg-conv-comp-id values.
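
One idea I’m considering for finding out when a comp-id becomes free again: a downstream event probe that watches the custom per-stream events. This is only a sketch; it assumes the GST_NVEVENT_PAD_DELETED / GST_NVEVENT_STREAM_EOS events and the gst_nvevent_parse_* helpers from gst-nvevent.h behave the way I expect, and release_comp_id() is the placeholder from my earlier sketch.

#include <gst/gst.h>
#include "gst-nvevent.h"   /* GST_NVEVENT_*, gst_nvevent_parse_* */

static GstPadProbeReturn
stream_event_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstEvent *event = GST_PAD_PROBE_INFO_EVENT (info);
  guint source_id = 0;

  switch ((gint) GST_EVENT_TYPE (event)) {
    case GST_NVEVENT_PAD_DELETED:      /* stream removed from the batch */
      gst_nvevent_parse_pad_deleted (event, &source_id);
      release_comp_id (source_id);
      break;
    case GST_NVEVENT_STREAM_EOS:       /* stream ended on its own */
      gst_nvevent_parse_stream_eos (event, &source_id);
      release_comp_id (source_id);
      break;
    default:
      break;
  }
  return GST_PAD_PROBE_OK;
}

/* Attached with:
 * gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
 *     stream_event_probe, NULL, NULL);
 * e.g. on the streammux or pgie src pad. */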

I’ve tested deepstream-test5 with DeepStream 7.1. frame_meta->source_id does not exceed max-batch-size-1 even after the source add/remove count exceeds max-batch-size.

Can you try DeepStream 7.1?

Thanks, DeepStream 7.1 works.
