Adapting MQTT Configuration for Customized DeepStream Models: Need Assistance

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson
• DeepStream Version: 6.3
• JetPack Version: nvidia-jetpack 5.1.3-b29 (arm64)
• TensorRT Version: 8.5.2-1+cuda11.4
• Issue Type: question

I’ve successfully modified the primary model of the deepstream-test5-app to perform face detection, with age and gender as secondary models. However, the MQTT configuration file provided with the default app is tailored for vehicle detections, including sensor placement and analytics. As there are no specific instructions available regarding MQTT config files, I’m seeking assistance on how to adapt the MQTT file to accommodate our customized models and efficiently transmit analytics to Node-RED over MQTT. I’ve attached the source files and config files below. Any guidance or examples would be greatly appreciated.

MQTT Config File:
mqtt_config.txt (1.9 KB)

Source File:
source.txt (9.1 KB)

Please refer to this topic for MQTT configurations.

We’ve already tried NVIDIA’s solution from the msg-broker documentation and implemented it in the source file shared here, under the [sink0] group.

Now we’re thinking about shifting focus from vehicle data to face, age, and gender detection. Any thoughts or suggestions on how we can make this transition smoothly?

Attached are the files mentioned for your reference.
mqtt_config.txt (1.9 KB)
source.txt (9.1 KB)

If the age and gender models are classification models, deepstream-app already supports one detection model with multiple classification models; please refer to point 4 in this link. BTW, deepstream-app is open source, so you can modify the code to customize it.
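For reference, a minimal sketch of that configuration pattern (the config-file paths below are placeholders, not actual files): the detector goes in [primary-gie] with its own gie-unique-id, and each classifier gets a [secondary-gieN] group that points back at the detector via operate-on-gie-id:

[primary-gie]
enable=1
gie-unique-id=1
config-file=<face_detector_infer_config.txt>

[secondary-gie0]
enable=1
gie-unique-id=2
# run only on objects from the primary GIE; face class id 0 assumed
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=<gender_classifier_infer_config.txt>

This is essentially what the [secondary-gie0] and [secondary-gie1] groups in the source file posted later in this thread already do.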

Working on the code modification now, will update once we have tried it.

I have configured a webcam as the source in my config file, with a primary model for face detection and secondary models for age and gender classification. I need to send the analytics produced by these models through an MQTT message broker. While setting up the MQTT config file, I came across parameters for sensors, places, and analytics, and I’m unsure about their format. When I attach the MQTT config to my source file and send data, the payload still reports vehicle detection and license plate recognition analytics, not those from my face detection models.

  sensor: {
    id: 'HWY_20_AND_LOCUST__EBA__4_11_2018_4_59_59_508_AM_UTC-07_00',
    type: 'Camera',
    description: 'Aisle Camera',
    location: { lat: 45.293701447, lon: -75.8303914499, alt: 48.1557479338 },
    coordinate: { x: 5.2, y: 10.1, z: 11.2 }
  },
  analyticsModule: {
    id: 'XYZ_1',
    description: 'Vehicle Detection and License Plate Recognition',
    source: 'OpenALR',
    version: '1.0'
  },
  object: {
    id: '2332',
    speed: 0,
    direction: 0,
    orientation: 0,
    Car: {},
    bbox: {
      topleftx: 570,
      toplefty: 484,
      bottomrightx: 650,
      bottomrighty: 542
    },
    location: { lat: 0, lon: 0, alt: 0 },
    coordinate: { x: 0, y: 0, z: 0 },
    pose: {}
  },
  event: { id: '49175ebe-2b05-49b5-82bb-de2d6eebe31b', type: 'entry' },
  videoPath: ''

How can I configure the MQTT configuration file to properly reflect the analytics from my models, specifically face detection, age, and gender, using the DeepStream test5 app? I’m stuck and would appreciate guidance on configuring the MQTT settings correctly for my models.
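For what it’s worth, the [sensorN], [placeN], and [analyticsN] groups in the msgconv config only fill in the static sensor, place, and analyticsModule strings of the payload; they do not control the per-object fields. A minimal adaptation for a single webcam could look like the sketch below (every id, description, and coordinate value here is a placeholder):

[sensor0]
enable=1
type=Camera
id=FACE_CAM_0
description=Entrance Webcam
location=0;0;0
coordinate=0;0;0

[analytics0]
enable=1
id=FACE_ANALYTICS_1
description=Face Detection with Age and Gender Classification
source=DeepStream
version=1.0

The face, age, and gender content of the object itself has to come from the application code, which is what the rest of this thread covers.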

source file:

################################################################################
# Copyright (c) 2018-2022, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=1
#uri=file:/opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_1080p_h265.mp4
#uri=rtsp://foo.com/stream1.mp4
#uri=rtsp:/opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_1080p_h265.mp4
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0
#num-sources=1
#gpu-id=0
#nvbuf-memory-type=0
# smart record specific fields, valid only for source type=4
# 0 = disable, 1 = through cloud events, 2 = through cloud + local events
#smart-record=1
# 0 = mp4, 1 = mkv
#smart-rec-container=0
#smart-rec-file-prefix
#smart-rec-dir-path
# smart record cache size in seconds
#smart-rec-cache
# default duration of recording in seconds.
#smart-rec-default-duration
# duration of recording in seconds.
# this will override default value.
#smart-rec-duration
# seconds before the current time to start recording.
#smart-rec-start-time
# value in seconds to dump video stream.
#smart-rec-interval

[source1]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://foo.com/stream2.mp4
num-sources=1
gpu-id=0
nvbuf-memory-type=0
# smart record specific fields, valid only for source type=4
# 0 = disable, 1 = through cloud events, 2 = through cloud + local events
#smart-record=1
# 0 = mp4, 1 = mkv
#smart-rec-container=0
#smart-rec-file-prefix
#smart-rec-dir-path
# smart record cache size in seconds
#smart-rec-cache

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
type=6
#msg-conv-config=/home/glueck/Downloads/gce-deepstream-master@d60d0d171d5/gce-deepstream/configs/mqtt_config.txt
msg-conv-config=/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream-test5/configs/dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(2): PAYLOAD_DEEPSTREAM_PROTOBUF - Deepstream schema protobuf encoded payload
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
#(0): Create payload using NvdsEventMsgMeta
#(1): New Api to create payload using NvDsFrameMeta
msg-conv-msg2p-new-api=0
#Frame interval at which payload is generated
msg-conv-frame-interval=30
#msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_mqtt_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=10.0.1.81;1883;test
topic=test
#Optional:
#msg-broker-config=/opt/nvidia/deepstream/deepstream/sources/libs/kafka_protocol_adaptor/cfg_kafka.txt
#new-api=0
#(0) Use message adapter library api's
#(1) Use new msgbroker library api's

[sink2]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=1
bitrate=2000000
output-file=out.mp4
source-id=0

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1
[message-converter]
enable=0
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(2): PAYLOAD_DEEPSTREAM_PROTOBUF - Deepstream schema protobuf payload
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
# Name of library having custom implementation.
#msg-conv-msg2p-lib=<val>
# Id of component in case only selected message to parse.
#msg-conv-comp-id=<val>

# Configure this group to enable cloud message consumer.
[message-consumer0]
enable=0
proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
conn-str=<host>;<port>
config-file=/opt/nvidia/deepstream/deepstream/sources/libs/kafka_protocol_adaptor/cfg_kafka.txt
subscribe-topic-list=<topic1>;<topic2>;<topicN>
# Use this option if message has sensor name as id instead of index (0,1,2 etc.).
#sensor-list-file=dstest5_msgconv_sample_config.txt

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.

[primary-gie]
enable=1
gpu-id=0
#Required to display the PGIE labels, should be added even when using config-file
#property
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
nvbuf-memory-type=0
model-engine-file=/home/glueck/Downloads/gce-deepstream-master@d60d0d171d5/gce-deepstream/models/Primary_FaceDetector/resnet18_detector.etlt_b2_gpu0_fp16.engine
labelfile-path=/home/glueck/Downloads/gce-deepstream-master@d60d0d171d5/gce-deepstream/models/Primary_FaceDetector/labels.txt
config-file=/home/glueck/Downloads/gce-deepstream-master@d60d0d171d5/gce-deepstream/configs/tlt_pretrained_models/config_infer_primary_facedetectir.txt
#model-engine-file=/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
#labelfile-path=/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/labels.txt
#config-file=/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_infer_primary.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/


[tracker]
enable=1
# For NvDCF and NvDeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=960
tracker-height=544
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_IOU.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvSORT.yml
ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDeepSORT.yml
gpu-id=0
display-tracking-id=1

[secondary-gie0]
enable=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0;
batch-size=1
config-file=/home/glueck/Downloads/gce-deepstream-master@d60d0d171d5/gce-deepstream/configs/tlt_pretrained_models/config_infer_genderv11_classifier.txt
labelfile-path=/home/glueck/Downloads/gce-deepstream-master@d60d0d171d5/gce-deepstream/models/genderv11/labels.txt
model-engine-file=/home/glueck/Downloads/gce-deepstream-master@d60d0d171d5/gce-deepstream/models/genderv11/gender11.caffemodel_b1_gpu0_fp16.engine
#config-file=/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_infer_secondary_carcolor.txt
#labelfile-path=/opt/nvidia/deepstream/deepstream-6.3/samples/models/Secondary_CarColor/labels.txt
#model-engine-file=/opt/nvidia/deepstream/deepstream-6.3/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine


[secondary-gie1]
enable=1
gpu-id=0
gie-unique-id=3
operate-on-gie-id=1
operate-on-class-ids=0;
batch-size=1
config-file=/home/glueck/Downloads/gce-deepstream-master@d60d0d171d5/gce-deepstream/configs/tlt_pretrained_models/config_infer_age_classifier.txt
labelfile-path=/home/glueck/Downloads/gce-deepstream-master@d60d0d171d5/gce-deepstream/models/age_model/labels.txt
model-engine-file=/home/glueck/Downloads/gce-deepstream-master@d60d0d171d5/gce-deepstream/models/age_model/age.model_b1_gpu0_fp16.engine
#config-file=/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_infer_secondary_carmake.txt
#labelfile-path=/opt/nvidia/deepstream/deepstream-6.3/samples/models/Secondary_CarMake/labels.txt
#model-engine-file=/opt/nvidia/deepstream/deepstream-6.3/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine


#[secondary-gie2]
#enable=0
#gpu-id=0
#gie-unique-id=4
#operate-on-gie-id=1
#operate-on-class-ids=0;
#batch-size=1
#config-file=/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_infer_secondary_vehicletypes.txt
#labelfile-path=/opt/nvidia/deepstream/deepstream-6.3/samples/models/Secondary_VehicleTypes/labels.txt
#model-engine-file=/opt/nvidia/deepstream/deepstream-6.3/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine


[tests]
file-loop=1

MQTT config file:

################################################################################
# Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

[sensor0]
enable=1
type=Camera
id=HWY_20_AND_LOCUST__EBA__4_11_2018_4_59_59_508_AM_UTC-07_00
location=45.293701447;-75.8303914499;48.1557479338
description=Aisle Camera
coordinate=5.2;10.1;11.2

[sensor1]
enable=1
type=Camera
id=HWY_20_AND_LOCUST__WBA__4_11_2018_4_59_59_379_AM_UTC-07_00
location=45.293701447;-75.8303914499;48.1557479338
description=Aisle Camera
coordinate=5.2;10.1;11.2

[sensor2]
enable=1
type=Camera
id=HWY_20_AND_DEVON__WBA__4_11_2018_4_59_59_134_AM_UTC-07_00
location=45.293701447;-75.8303914499;48.1557479338
description=Aisle Camera
coordinate=5.2;10.1;11.2

[sensor3]
enable=1
type=Camera
id=HWY_20_AND_LOCUST__4_11_2018_4_59_59_320_AM_UTC-07_00
location=45.293701447;-75.8303914499;48.1557479338
description=Aisle Camera
coordinate=5.2;10.1;11.2

[place0]
enable=1
id=0
type=intersection/road
name=HWY_20_AND_LOCUST__EBA
location=30.32;-40.55;100.0
coordinate=1.0;2.0;3.0
place-sub-field1=C_127_158
place-sub-field2=Lane 1
place-sub-field3=P1

[place1]
enable=1
id=1
type=intersection/road
name=HWY_20_AND_LOCUST__WBA
location=30.32;-40.55;100.0
coordinate=1.0;2.0;3.0
place-sub-field1=C_127_158
place-sub-field2=Lane 1
place-sub-field3=P1

[place2]
enable=1
id=2
type=intersection/road
name=HWY_20_AND_DEVON__WBA
location=30.32;-40.55;100.0
coordinate=1.0;2.0;3.0
place-sub-field1=C_127_158
place-sub-field2=Lane 1
place-sub-field3=P1

[place3]
enable=1
id=3
type=intersection/road
name=HWY_20_AND_LOCUST
location=30.32;-40.55;100.0
coordinate=1.0;2.0;3.0
place-sub-field1=C_127_158
place-sub-field2=Lane 1
place-sub-field3=P1

[analytics0]
enable=1
id=XYZ_1
description=Vehicle Detection and License Plate Recognition
source=OpenALR
version=1.0

[analytics1]
enable=1
id=XYZ_2
description=Vehicle Detection and License Plate Recognition 1
source=OpenALR
version=1.0

[analytics2]
enable=1
id=XYZ_3
description=Vehicle Detection and License Plate Recognition 2
source=OpenALR
version=1.0

[analytics3]
enable=1
id=XYZ_4
description=Vehicle Detection and License Plate Recognition 4
source=OpenALR
version=1.0

Can you see the detected bboxes? There are two modes for payload generation; please refer to the doc. Since you are using the first mode (msg-conv-msg2p-new-api=0), you need to customize generate_event_msg_meta in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test5/deepstream_test5_app_main.c. You can use NvDsFaceObject.
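For reference, NvDsFaceObject is declared in sources/includes/nvdsmeta_schema.h roughly as follows (check the header shipped with your DeepStream version for the exact definition):

typedef struct NvDsFaceObject {
  gchar *gender;
  gchar *hair;
  gchar *cap;
  gchar *glasses;
  gchar *facialhair;
  gchar *name;
  gchar *eyecolor;
  guint age;
} NvDsFaceObject;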

Working on this now, will update if it succeeds.

I can see the detected boxes, but I’m not getting the expected analytics from them. As you suggested, here is the generate_event_msg_meta function I need to customize:

static void
generate_event_msg_meta (AppCtx * appCtx, gpointer data, gint class_id, gboolean useTs,
    GstClockTime ts, gchar * src_uri, gint stream_id, guint sensor_id,
    NvDsObjectMeta * obj_params, float scaleW, float scaleH,
    NvDsFrameMeta * frame_meta)
{
  NvDsEventMsgMeta *meta = (NvDsEventMsgMeta *) data;
  GstClockTime ts_generated = 0;

  meta->objType = NVDS_OBJECT_TYPE_UNKNOWN; /**< object unknown */
  /* The sensor_id is parsed from the source group name which has the format
   * [source<sensor-id>]. */
  meta->sensorId = sensor_id;
  meta->placeId = sensor_id;
  meta->moduleId = sensor_id;
  meta->frameId = frame_meta->frame_num;
  meta->ts = (gchar *) g_malloc0 (MAX_TIME_STAMP_LEN + 1);
  meta->objectId = (gchar *) g_malloc0 (MAX_LABEL_SIZE);

  strncpy (meta->objectId, obj_params->obj_label, MAX_LABEL_SIZE);

  /** INFO: This API is called once for every 30 frames (now) */
  if (useTs && src_uri) {
    ts_generated =
        generate_ts_rfc3339_from_ts (meta->ts, MAX_TIME_STAMP_LEN, ts, src_uri,
        stream_id);
  } else {
    generate_ts_rfc3339 (meta->ts, MAX_TIME_STAMP_LEN);
  }

  /**
   * Valid attributes in the metadata sent over nvmsgbroker:
   * a) Sensor ID (shall be configured in nvmsgconv config file)
   * b) bbox info (meta->bbox) <- obj_params->rect_params (attr_info have sgie info)
   * c) tracking ID (meta->trackingId) <- obj_params->object_id
   */

  /** bbox - resolution is scaled by nvinfer back to
   * the resolution provided by streammux
   * We have to scale it back to original stream resolution
    */

  meta->bbox.left = obj_params->rect_params.left * scaleW;
  meta->bbox.top = obj_params->rect_params.top * scaleH;
  meta->bbox.width = obj_params->rect_params.width * scaleW;
  meta->bbox.height = obj_params->rect_params.height * scaleH;

  /** tracking ID */
  meta->trackingId = obj_params->object_id;

  /** sensor ID when streams are added using nvmultiurisrcbin REST API */
  NvDsSensorInfo* sensorInfo = get_sensor_info(appCtx, stream_id);
  if(sensorInfo) {
    /** this stream was added using REST API; we have Sensor Info! */
    LOGD("this stream [%d:%s] was added using REST API; we have Sensor Info\n",
        sensorInfo->source_id, sensorInfo->sensor_id);
    meta->sensorStr = g_strdup (sensorInfo->sensor_id);
  }

  (void) ts_generated;

  /*
   * This demonstrates how to attach custom objects.
   * Any custom object as per requirement can be generated and attached
   * like NvDsVehicleObject / NvDsPersonObject. Then that object should
   * be handled in gst-nvmsgconv component accordingly.
   */
  if (model_used == APP_CONFIG_ANALYTICS_RESNET_PGIE_3SGIE_TYPE_COLOR_MAKE) {
    if (class_id == RESNET10_PGIE_3SGIE_TYPE_COLOR_MAKECLASS_ID_CAR) {
      meta->type = NVDS_EVENT_MOVING;
      meta->objType = NVDS_OBJECT_TYPE_VEHICLE;
      meta->objClassId = RESNET10_PGIE_3SGIE_TYPE_COLOR_MAKECLASS_ID_CAR;

      NvDsVehicleObject *obj =
          (NvDsVehicleObject *) g_malloc0 (sizeof (NvDsVehicleObject));
      schema_fill_sample_sgie_vehicle_metadata (obj_params, obj);

      meta->extMsg = obj;
      meta->extMsgSize = sizeof (NvDsVehicleObject);
    }
#ifdef GENERATE_DUMMY_META_EXT
    else if (class_id == RESNET10_PGIE_3SGIE_TYPE_COLOR_MAKECLASS_ID_PERSON) {
      meta->type = NVDS_EVENT_ENTRY;
      meta->objType = NVDS_OBJECT_TYPE_PERSON;
      meta->objClassId = RESNET10_PGIE_3SGIE_TYPE_COLOR_MAKECLASS_ID_PERSON;

      NvDsPersonObject *obj =
          (NvDsPersonObject *) g_malloc0 (sizeof (NvDsPersonObject));
      generate_person_meta (obj);

      meta->extMsg = obj;
      meta->extMsgSize = sizeof (NvDsPersonObject);
    }
#endif /**< GENERATE_DUMMY_META_EXT */
  }

}

Is it only here that I need to customize the code? I tried doing so, but I still have no luck getting the expected analytics from our models; I’m still getting the vehicle data. Is there example code I can refer to for customizing it?

Yes, you need to modify generate_event_msg_meta to customize the payload, and you need to rebuild test5 after modifying it. Currently there is no ready-made sample; please refer to the following code snippet, which shows the parts your application needs.
/* Inside generate_event_msg_meta (): */
meta->objType = NVDS_OBJECT_TYPE_FACE;

NvDsFaceObject *obj =
    (NvDsFaceObject *) g_malloc0 (sizeof (NvDsFaceObject));
/* Create a new function to add gender and age information. */
schema_fill_sample_sgie_face_metadata (obj_params, obj);

meta->extMsg = obj;
meta->extMsgSize = sizeof (NvDsFaceObject);
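For illustration, here is a minimal sketch of what such a helper could look like, modeled on schema_fill_sample_sgie_vehicle_metadata() in deepstream_test5_app_main.c. The unique_component_id values 2 (gender) and 3 (age) are assumptions taken from the gie-unique-id settings in the source file above, and the age parsing assumes the age classifier emits numeric labels; adjust both to your models.

static void
schema_fill_sample_sgie_face_metadata (NvDsObjectMeta * obj_params,
    NvDsFaceObject * obj)
{
  if (!obj_params || !obj)
    return;

  /* Initialize all string fields so nvmsgconv never serializes NULLs. */
  obj->gender = g_strdup ("");
  obj->hair = g_strdup ("");
  obj->cap = g_strdup ("");
  obj->glasses = g_strdup ("");
  obj->facialhair = g_strdup ("");
  obj->name = g_strdup ("");
  obj->eyecolor = g_strdup ("");
  obj->age = 0;

  /* Walk the classifier results attached to this object by the SGIEs. */
  for (GList * l = obj_params->classifier_meta_list; l != NULL; l = l->next) {
    NvDsClassifierMeta *classifier_meta = (NvDsClassifierMeta *) l->data;
    for (GList * ll = classifier_meta->label_info_list; ll != NULL;
        ll = ll->next) {
      NvDsLabelInfo *label_info = (NvDsLabelInfo *) ll->data;
      if (classifier_meta->unique_component_id == 2) {
        /* gender SGIE (gie-unique-id=2 in the source file above) */
        g_free (obj->gender);
        obj->gender = g_strdup (label_info->result_label);
      } else if (classifier_meta->unique_component_id == 3) {
        /* age SGIE (gie-unique-id=3); assumes a numeric label such as "25" */
        obj->age = (guint) g_ascii_strtoull (label_info->result_label, NULL, 10);
      }
    }
  }
}

With meta->extMsg and meta->extMsgSize set as in the snippet above, the generated payload should then carry face, gender, and age information for each detected object instead of the vehicle fields.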

OK, noted. Will try it and update as soon as possible.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
