Implementing Kafka in objectDetector_YOLO

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Ubuntu 18.04, RTX 2070
• DeepStream Version
Deepstream-6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
TensorRT 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only)
470.103
• Issue Type( questions, new requirements, bugs)
Questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I am trying to implement a simple message broker in the pre-built YOLO detector. I have successfully installed and tested Kafka; the Kafka consumer shows messages when test4 is run. However, I have been unable to get the same output with the YOLO application.

I have added the following lines to my config file for objectDetector_YOLO:

[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=dstest4_msgconv_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=localhost;9092;test
topic=test
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt
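
For reference, the connection string follows the host;port;topic pattern. To watch the topic while the app runs, a console consumer can be left running from the Kafka installation directory (assuming a Kafka release recent enough to support --bootstrap-server; adjust paths to your install):

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test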

When I run the code, the algorithm works fine, but I cannot receive any output on the Kafka consumer.

The algorithm output is shown below:

 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

Unknown or legacy key specified 'is-classifier' for group [property]
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Unknown group message-broker
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
0:00:00.213031450  5719 0x5608a1e9a060 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:661 INT8 calibration file not specified/accessible. INT8 calibration can be done through setDynamicRange API in 'NvDsInferCreateNetwork' implementation
Loading pre-trained weights...
Loading weights of yolov3 complete!
Total Number of weights read : 62001757
Loading pre-trained weights...
Loading weights of yolov3 complete!
Total Number of weights read : 62001757
Building Yolo network...
      layer               inp_size            out_size       weightPtr
(0)   conv-bn-leaky     3 x 608 x 608      32 x 608 x 608    992   
(1)   conv-bn-leaky    32 x 608 x 608      64 x 304 x 304    19680 
(2)   conv-bn-leaky    64 x 304 x 304      32 x 304 x 304    21856 
(3)   conv-bn-leaky    32 x 304 x 304      64 x 304 x 304    40544 
(4)   skip             64 x 304 x 304      64 x 304 x 304        - 
(5)   conv-bn-leaky    64 x 304 x 304     128 x 152 x 152    114784
-----------------output removed for conciseness---------------
(102) conv-bn-leaky   128 x  76 x  76     256 x  76 x  76    61607006
(103) conv-bn-leaky   256 x  76 x  76     128 x  76 x  76    61640286
(104) conv-bn-leaky   128 x  76 x  76     256 x  76 x  76    61936222
(105) conv-linear     256 x  76 x  76     255 x  76 x  76    62001757
(106) yolo            255 x  76 x  76     255 x  76 x  76    62001757
Output yolo blob names :
yolo_83
yolo_95
yolo_107
Total number of yolo layers: 257
Building yolo network complete!
Building the TensorRT Engine...
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
Building complete!
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1456 Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/model_b1_gpu0_int8.engine opened error
0:00:25.694885617  5719 0x5608a1e9a060 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1942> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/model_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT data            3x608x608       
1   OUTPUT kFLOAT yolo_83         255x19x19       
2   OUTPUT kFLOAT yolo_95         255x38x38       
3   OUTPUT kFLOAT yolo_107        255x76x76       

0:00:25.703185699  5719 0x5608a1e9a060 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/config_infer_primary_yoloV3.txt sucessfully

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF:  FPS 0 (Avg)	FPS 1 (Avg)	
**PERF:  0.00 (0.00)	0.00 (0.00)	
** INFO: <bus_callback:194>: Pipeline ready

** INFO: <bus_callback:180>: Pipeline running

**PERF:  13.62 (13.39)	13.18 (12.84)	
**PERF:  12.50 (12.98)	12.44 (12.72)	
**PERF:  12.50 (12.85)	12.49 (12.68)	
**PERF:  12.42 (12.68)	12.58 (12.60)	
**PERF:  12.50 (12.67)	12.50 (12.60)	
**PERF:  12.57 (12.65)	12.50 (12.60)	

If I shut down the Kafka server while the program is running, the following text appears in the output terminal:

**PERF:  12.50 (12.51)	12.50 (12.50)	
**PERF:  12.35 (12.51)	12.53 (12.50)	
**PERF:  12.59 (12.51)	12.56 (12.50)	
%3|1658780074.018|FAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/0: Receive failed: Disconnected
%3|1658780074.018|ERROR|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/0: Receive failed: Disconnected
%3|1658780074.018|ERROR|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: 1/1 brokers are down
**PERF:  12.50 (12.51)	12.50 (12.50)	
**PERF:  12.48 (12.51)	12.32 (12.50)	

I have also enabled nvds logging; the following is the log entry for the run:

Jul 26 01:04:33 msi deepstream-app: DSLOG:NVDS_KAFKA_PROTO: Kafka connection successful

I am using the default ‘dstest4_msgconv_config.txt’ file for the Kafka implementation in YOLO.

I have also tried porting YOLO to the test4 app, but that approach is not compatible with my use case; I need the flexibility of configuration files and therefore cannot go the test4 route.

I have read the README files for test4 and test5 as well. It seems I need another file for mapping data from the stream to the entries in ‘dstest4_msgconv_config.txt’. I would need help writing a sample file and integrating it with the main objectDetector_YOLO.
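
For reference, my understanding is that ‘dstest4_msgconv_config.txt’ is just a key file of [sensorN]/[placeN]/[analyticsN] groups that nvmsgconv uses to fill the static fields of the payload (sensor id, location, analytics module). Trimmed down, the stock file looks roughly like this (exact keys may differ between DeepStream releases):

[sensor0]
enable=1
type=Camera
id=CAMERA_ID
description=Aisle Camera

[place0]
enable=1
id=0
type=intersection/road
name=XYZ

[analytics0]
enable=1
id=XYZ_1
description=Vehicle Detection
version=1.0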

The full config file I’m using is as follows:

################################################################################
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=2
width=1920
height=1080
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file://../../samples/streams/sample_youtube1_720p_h264.mp4
num-sources=1
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file://../../samples/streams/sample_youtube2_720p_h264.mp4
num-sources=1
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=dstest4_msgconv_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=localhost;9092;test
topic=test
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
#model-engine-file=model_b1_gpu0_int8.engine
labelfile-path=labels.txt
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=2
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV3.txt

[tracker]
enable=1
# For NvDCF and DeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=../../samples/configs/deepstream-app/config_tracker_IOU.yml
ll-config-file=../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=../../samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=../../samples/configs/deepstream-app/config_tracker_DeepSORT.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1

[tests]
file-loop=0

Update 1:
Adding ‘msg-conv-comp-id=1’ and ‘msg-broker-comp-id=1’ to the config file also does not produce any results on the Kafka consumer. ‘gie-unique-id=1’ has also been set in ‘config_infer_primary_yoloV3.txt’.

Any help would be highly appreciated.

  1. Please capture network packets to check whether data was sent to the server.
  2. nvmsgbroker is open source; the path is /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvmsgbroker/gstnvmsgbroker.cpp. Please add logs to debug (see the example below for enabling the plugin's debug output).
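
For the logging part, a quick first step (a sketch; it assumes the nvmsgbroker/nvmsgconv plugins register GStreamer debug categories under their element names, which is the usual convention) is to raise their debug levels when launching the app:

GST_DEBUG=nvmsgbroker:6,nvmsgconv:6 deepstream-app -c <your_yolo_config.txt>   # category names assumed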

Thank you for the reply,
I have used the following command to capture packets:

sudo tshark -V -i lo -o 'kafka.tcp.port:9092' -d tcp.port=9092,kafka -f 'dst port 9092'

In the case of the test4 application, I received the fetch request from the consumers as well as the payload created by test4. The following output shows that packets are being transmitted to the consumer:

Frame 19: 1947 bytes on wire (15576 bits), 1947 bytes captured (15576 bits) on interface 0

I am also able to receive the messages on a Kafka consumer.

However, I am unable to capture any packets in the case of objectDetector_YOLO, which is my primary concern.

Secondly, I went through gstnvmsgbroker.cpp, but I am unable to apply it to my use case of objectDetector_YOLO. I would need assistance in that regard.

I’m attaching my config file for your reference.
I have also tried the msg-conv-msg2p-lib option, but I am unable to receive any output on the Kafka consumer.

#msg-conv-msg2p-lib=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_msgconv.so

Thank you for your patience
ds-conf-yolo3.txt (5.6 KB)

Can you please point out the changes I should make in my config file to receive bbox coordinates on the Kafka consumer?

That is all I really need. Thanks.

I have also tried updating my config file, using the option ‘disable-msgconv=1’ together with a separate [message-converter] group:

[message-converter]
enable=1
msg-conv-config=dstest4_msgconv_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
# Name of library having custom implementation.
msg-conv-msg2p-lib=libnvds_msgconv.so
# Id of component in case only selected message to parse.
#msg-conv-comp-id=<val>

Still no luck.

Please refer to Multiple messages to multiple topics - bboxes and analytics - #4 by Amycao

Thanks for the reply.

I have updated my config file as instructed in your link:

[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=dstest4_msgconv_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
msg-broker-comp-id=1 
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_kafka_proto.so
#msg-conv-msg2p-new-api=0
msg-conv-frame-interval=30
msg-conv-msg2p-lib=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_msgconv.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=localhost;9092;test
topic=test
##disable-msgconv=1
#Optional:
msg-broker-config=cfg_kafka.txt

The problem is still not fixed; my Kafka consumer is blank.

I just need something like this as output:

    "bbox" : {
      "topleftx" : 1032,
      "toplefty" : 484,
      "bottomrightx" : 1202,
      "bottomrighty" : 561
    },

Thank you for your patience.

In objectDetector_YOLO, you need to add generate_event_msg_meta; please refer to deepstream-test4’s osd_sink_pad_buffer_probe (a condensed sketch is shown below).
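
A condensed sketch of that probe follows; it is trimmed from deepstream_test4_app.c and is not a drop-in patch. meta_copy_func, meta_free_func and generate_event_msg_meta are the helpers defined in that same file.

#include "gstnvdsmeta.h"
#include "nvdsmeta_schema.h"

static GstPadProbeReturn
osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  NvDsMetaList *l_frame, *l_obj;

  for (l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    for (l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;

      /* One NvDsEventMsgMeta per detected object; nvmsgconv turns these into
         the JSON payload and nvmsgbroker publishes it to Kafka. */
      NvDsEventMsgMeta *msg_meta =
          (NvDsEventMsgMeta *) g_malloc0 (sizeof (NvDsEventMsgMeta));
      msg_meta->bbox.top = obj_meta->rect_params.top;
      msg_meta->bbox.left = obj_meta->rect_params.left;
      msg_meta->bbox.width = obj_meta->rect_params.width;
      msg_meta->bbox.height = obj_meta->rect_params.height;
      msg_meta->frameId = frame_meta->frame_num;
      msg_meta->trackingId = obj_meta->object_id;
      msg_meta->confidence = obj_meta->confidence;

      /* Helper from deepstream_test4_app.c: fills object type, timestamp, sensor info. */
      generate_event_msg_meta (msg_meta, obj_meta->class_id, obj_meta);

      NvDsUserMeta *user_event_meta = nvds_acquire_user_meta_from_pool (batch_meta);
      if (user_event_meta) {
        user_event_meta->user_meta_data = (void *) msg_meta;
        user_event_meta->base_meta.meta_type = NVDS_EVENT_MSG_META;
        user_event_meta->base_meta.copy_func = (NvDsMetaCopyFunc) meta_copy_func;
        user_event_meta->base_meta.release_func = (NvDsMetaReleaseFunc) meta_free_func;
        nvds_add_user_meta_to_frame (frame_meta, user_event_meta);
      }
    }
  }
  return GST_PAD_PROBE_OK;
}

Note that the stock test4 app only attaches this meta every 30th frame and only for a couple of classes. Since you are running the deepstream-app reference application rather than test4, the equivalent logic has to be added to the application sources and the app rebuilt.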

I understand your approach, thanks, but this can only provide the bbox. What about object IDs, object type, time information, stream number, etc.?

If possible, can you provide the configuration files for the test4 sample?

Something that can be run as ‘deepstream-app -c test4_config.txt’ and includes all of the original functionality of the test4 sample?

In simple terms, can the test4 sample application be run from a config file, as in the case of objectDetector_YOLO? If such a file is available, it would greatly simplify the problem and help my understanding.
Is it possible for you to write a file with minimal functionality, something that uses the resnet10 model with a single source and sink and provides output messages on Kafka?

Thank you for your help.

Can someone provide the config.txt file that correctly implements the test4 sample?

Something that implements test4 when run via deepstream-app…

I have been able to write one from scratch, but sending metadata via Kafka has been troublesome.

Thanks

As you know, nvmsgconv is open source; the paths are:
deepstream/deepstream/sources/libs/nvmsgconv/deepstream_schema/dsmeta_payload.cpp
deepstream/deepstream/sources/libs/kafka_protocol_adaptor/nvds_kafka_proto.cpp
You can modify them to add your own information. Please add logs to debug.
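
As a sketch of the kind of change meant here (the function and variable names below are assumptions; check your copy of dsmeta_payload.cpp, which builds the payload with json-glib), adding an extra field plus a debug print for each object could look like:

/* Hypothetical addition inside the per-object payload builder in dsmeta_payload.cpp;
   'jobject' (JsonObject *) and 'meta' (NvDsEventMsgMeta *) are assumed names taken
   from the surrounding code. */
json_object_set_int_member (jobject, "classId", meta->objClassId);
json_object_set_double_member (jobject, "confidence", meta->confidence);
g_print ("nvmsgconv: object %lu added to payload\n", (gulong) meta->trackingId);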

How do I add logs to debug? Thank you.

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

  1. You can add printf statements in the library code, then compile and copy the resulting .so to /opt/nvidia/deepstream/deepstream/lib/; back up the original .so first (see the example commands below).
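
For example, for nvmsgconv (a sketch; it assumes the stock Makefile that ships with the sources builds libnvds_msgconv.so in place and that the glib/json-glib dev packages are installed; adjust paths to your install):

cd /opt/nvidia/deepstream/deepstream-6.0/sources/libs/nvmsgconv
make                                  # rebuild with the added printf/g_print lines
sudo cp /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_msgconv.so /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_msgconv.so.bak   # back up the original first
sudo cp libnvds_msgconv.so /opt/nvidia/deepstream/deepstream-6.0/lib/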
