The function nvds_msg2p_generate is not being called. Which function is called instead?

In the deepstream-app sample, I added a "msgconv + msgbroker" sink to the pipeline. I believe I configured it correctly, but no messages arrive at the other end (Kafka). To check whether the msgconv library is working, I added a print statement inside the function nvds_msg2p_generate, but it outputs nothing, so this function is apparently never called. Which function must be called? How can I find out what is happening?

P.S. I do not get any error messages, only many broker warnings like the following:
WARN: <broker_queue_overrun:185>: nvmsgbroker queue overrun; Older Message Buffer Dropped; Network bandwidth might be insufficient
If I change the parameter from "msg-broker-conn-str=127.0.0.1;9092;test" to "msg-broker-conn-str=127.0.0.1;9092:test", an error is reported.

Thanks a lot for any suggestion.

For Kafka, the connection string is of the format: host;port
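
For reference, a rough sketch of the broker-related keys in a deepstream-app sink section. The key names follow the deepstream-test5-style sample configs; the paths and values are placeholders to adapt to your setup, not your exact file:

[sink1]
enable=1
# type 6 = message converter + message broker sink
type=6
# 0 = NVDS_PAYLOAD_DEEPSTREAM (full schema), 1 = NVDS_PAYLOAD_DEEPSTREAM_MINIMAL
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
# per the format above: host;port (topic specified separately)
msg-broker-conn-str=127.0.0.1;9092
topic=test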

I added a print statement inside the function nvds_msg2p_generate, but it outputs nothing.

How did you print?
Besides, you can also enable logging to get more logs:
sudo chmod +x sources/tools/nvds_logger/setup_nvds_logger.sh
sudo ./sources/tools/nvds_logger/setup_nvds_logger.sh
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvmsgbroker.html#nvds-logger-logging-framework
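
Once logging is enabled, the DSLOG messages go to the rsyslog destination configured by the script (in the sample script this is /tmp/nvds/ds.log, but check the $nvdslogfilepath value in setup_nvds_logger.sh on your system), so you can watch them while the pipeline runs, for example:

# assumes the default log path from the sample script
tail -f /tmp/nvds/ds.log | grep DSLOG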

The important thing is which function actually converts the frame metadata into the schema payload in the deepstream-app sample, if nvds_msg2p_generate is not the one. This matters because it is the last chance to inspect the message before it is handed to the broker.

Message conversion is invoked through msg2p_generate_multiple or msg2p_generate, depending on the multiplePayloads flag, in the nvmsgconv plugin code sources/gst-plugins/gst-nvmsgconv/gstnvmsgconv.c. These two functions in turn call generate_deepstream_message_minimal or generate_schema_message, depending on whether the payload type is NVDS_PAYLOAD_DEEPSTREAM_MINIMAL or NVDS_PAYLOAD_DEEPSTREAM.
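
To make that call chain concrete, here is a standalone, simplified sketch of the dispatch. It is not the real plugin/library code; all types and function bodies are stand-ins so it compiles on its own, and only the branching described above is shown:

/* sketch.c -- compile with: gcc sketch.c -o sketch */
#include <stdio.h>
#include <stdbool.h>

typedef enum {
  NVDS_PAYLOAD_DEEPSTREAM,          /* full schema payload    */
  NVDS_PAYLOAD_DEEPSTREAM_MINIMAL   /* minimal schema payload */
} PayloadType;

/* stand-ins for the library-side generators named above */
static void generate_schema_message (void)             { puts ("full schema payload"); }
static void generate_deepstream_message_minimal (void) { puts ("minimal payload"); }

/* stand-in for the msgconv-library entry point reached via msg2p_generate */
static void msg2p_generate (PayloadType t)
{
  if (t == NVDS_PAYLOAD_DEEPSTREAM_MINIMAL)
    generate_deepstream_message_minimal ();
  else
    generate_schema_message ();
}

/* stand-in for the multiple-payload entry point: same branching,
 * but may emit several payloads per frame in the real library */
static void msg2p_generate_multiple (PayloadType t)
{
  msg2p_generate (t);
}

/* stand-in for the plugin-side choice driven by the multiplePayloads flag */
static void convert_frame (bool multiple_payloads, PayloadType t)
{
  if (multiple_payloads)
    msg2p_generate_multiple (t);
  else
    msg2p_generate (t);
}

int main (void)
{
  convert_frame (false, NVDS_PAYLOAD_DEEPSTREAM);          /* -> generate_schema_message          */
  convert_frame (false, NVDS_PAYLOAD_DEEPSTREAM_MINIMAL);  /* -> generate_deepstream_message_minimal */
  return 0;
}

In the real plugin both entry points are resolved from the msgconv library at runtime, so adding a print to the multiple-payload entry point as well, or checking the plugin's multiplePayloads setting, should reveal which path is actually taken.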

Yes. But these functions are not being called, since there is no output from them. Is there any other way to check this?

Here is the log output. It seems that Kafka is connected, but the Kafka consumer does not receive any messages, and I have not set up any consumer in my app. The function that should work does not work, and the function that should not work does.


Aug 25 06:48:59 deepstream-app: DSLOG:NVDS_KAFKA_PROTO: kafka partition key field name = sensor.id#012
Aug 25 06:48:59 deepstream-app: DSLOG:NVDS_KAFKA_PROTO: Consumer group id not specified in cfg. Using default group id: test-consumer-group #012
Aug 25 06:48:59 deepstream-app: DSLOG:NVDS_KAFKA_PROTO: Kafka connection successful#012


Any suggestion is welcome. Thanks

Please change
echo "if ($msg contains 'DSLOG') and ($syslogseverity <= 6) then $nvdslogfilepath" >> 11-nvds.conf
to
echo "if ($msg contains 'DSLOG') and ($syslogseverity <= 7) then $nvdslogfilepath" >> 11-nvds.conf
in sources/tools/nvds_logger/setup_nvds_logger.sh to get debug-level messages.
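
After editing the script, the rsyslog configuration has to be regenerated and reloaded; re-running the setup script as before should take care of this (if the log still does not pick up debug messages, restarting rsyslog manually is worth a try):

sudo ./sources/tools/nvds_logger/setup_nvds_logger.sh
# only if the script does not reload rsyslog itself
sudo systemctl restart rsyslog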