Custom payload schema with nvmsgconv.cpp - weird output string instead of JSON

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson-AGX Xavier
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only) 5.1
• TensorRT Version 5.1
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

I’m using a modified test5-app with the nvcr.io/nvidia/deepstream-l4t:6.2-iot DeepStream SDK image to send the output to Azure IoT Hub.

Here are the steps that I took:

  1. used the default, unedited /opt/nvidia/deepstream/sources/libs/nvmsgconv/deepstream_schema/eventmsg_payload.cpp
  2. compiled the directory and got the libnvds_msgconv.so file
  3. copied the file to /opt/nvidia/deepstream/deepstream/lib
  4. changed sink1 in the config file to use the custom payload schema:
[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
msg-conv-msg2p-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_msgconv.so
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=257
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_azure_edge_proto.so
topic=mytopic

The app runs, but instead of a proper payload it only sends what looks like a generic placeholder.
Here are the events I see in the Azure IoT Hub logs:

{
    "event": {
        "origin": "jetson-iothub-name",
        "module": "NVIDIADeepStreamSDK",
        "interface": "",
        "component": "",
        "payload": "CUSTOM Schema\u0000"
    }
}

I expected a full JSON payload, but I only get the placeholder-like string “CUSTOM Schema”.
I guess it’s something in my app config?

You can see the full config below.

The inputs are a USB camera and a local video file.
The output is an RTSP stream.
The app runs with no errors, but I don’t get the proper payload in Azure IoT Hub.

Any ideas?

/* Copyright notice ... */

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=1
rows=1
columns=2
width=1280
height=480
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=2
camera-id=1
uri=file://../../../../../samples/streams/sample_1080p_h264.mp4
num-sources=1
gpu-id=0
nvbuf-memory-type=0

[sink0]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File
type=1
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
msg-conv-msg2p-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_msgconv.so
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=257
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_azure_edge_proto.so
topic=mytopic

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=3
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=416
height=416
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

[primary-gie]
enable=1
gpu-id=0
model-engine-file=../custom_models/eyra-model.onnx_b3_fp32.engine
config-file=config_infer_eyra.txt
batch-size=3
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=4
gie-unique-id=1

[tracker]
enable=1
# For NvDCF and NvDeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_IOU.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvSORT.yml
ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDeepSORT.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1

[tests]
file-loop=1

/** Specifies a custom payload. You must implement the nvds_msg2p_*
 * interface. */
NVDS_PAYLOAD_CUSTOM = 0x101,

As the code above shows, you need to implement the nvds_msg2p_* interface yourself if you use msg-conv-payload-type=257. Please try 1 or 0 instead. nvmsgconv is open source, so you can check the code if needed; the path is /opt/nvidia/deepstream/deepstream/sources/libs/nvmsgconv/nvmsgconv.cpp.
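For reference, the entry points a msg2p converter library is expected to export look roughly like this. This is a sketch based on nvmsgconv.h, so check the header shipped with your DeepStream version for the exact signatures:

NvDsMsg2pCtx* nvds_msg2p_ctx_create (const gchar *file, NvDsPayloadType type);

void nvds_msg2p_ctx_destroy (NvDsMsg2pCtx *ctx);

/* Generate a single payload from the event metadata. */
NvDsPayload* nvds_msg2p_generate (NvDsMsg2pCtx *ctx, NvDsEvent *events, guint size);

/* Generate multiple payloads; the stock implementation returns the
   "CUSTOM Schema" placeholder here for NVDS_PAYLOAD_CUSTOM. */
NvDsPayload** nvds_msg2p_generate_multiple (NvDsMsg2pCtx *ctx, NvDsEvent *events,
    guint size, guint *payloadCount);

void nvds_msg2p_release (NvDsMsg2pCtx *ctx, NvDsPayload *payload);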

That is right.

Switching to msg-conv-payload-type=0 or msg-conv-payload-type=1 results in proper output in the default format (schema).

Now I want to switch to msg-conv-payload-type=257 to use my own, custom format (schema).

So the steps I took:

  1. modified the /deepstream_schema/eventmsg_payload.cpp file and implemented my own generate_event_message_minimal() function (a simplified sketch is shown after this list)
  2. modified the default nvmsgconv.cpp file and replaced the code in all if (ctx->payloadType == NVDS_PAYLOAD_CUSTOM) clauses to use the generate_event_message_minimal() function instead of the static string "CUSTOM Schema".

For example:

BEFORE:

(...)
else if (ctx->payloadType == NVDS_PAYLOAD_CUSTOM) {
    payloads[*payloadCount] = (NvDsPayload *) g_malloc0 (sizeof (NvDsPayload));
    payloads[*payloadCount]->payload = (gpointer) g_strdup ("CUSTOM Schema");
    payloads[*payloadCount]->payloadSize = strlen ((char *)payloads[*payloadCount]->payload) + 1;
    ++(*payloadCount);
} 
(...)

AFTER:

(...)
else if (ctx->payloadType == NVDS_PAYLOAD_CUSTOM) {
    message = generate_event_message_minimal (ctx->privData, events, eventSize);
    if (message) {
      len = strlen (message);
      payloads[*payloadCount] = (NvDsPayload *) g_malloc0 (sizeof (NvDsPayload));
      // Remove '\0' character at the end of string and just copy the content.
      payloads[*payloadCount]->payload = g_memdup (message, len);
      payloads[*payloadCount]->payloadSize = len;
      ++(*payloadCount);
      g_free (message);
    }
}
(...)

I replaced all instances of the static string, in all if (ctx->payloadType == NVDS_PAYLOAD_CUSTOM) clauses, in all functions within the nvmsgconv.cpp file.

  3. compiled, copied the result to /opt/nvidia/deepstream/deepstream/lib, and restarted the app - to no effect.
    I can’t find any remaining instances of g_strdup ("CUSTOM Schema"); anywhere in the code, yet I still get that "CUSTOM Schema" string as output instead of my JSON implementation.
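For context, the generate_event_message_minimal() I wrote builds the JSON string with json-glib, roughly along the lines of the sketch below. The field names and values here are placeholders, not my actual schema:

#include <json-glib/json-glib.h>

/* Simplified sketch of a custom payload generator; "sensorId" and
   "objectCount" are placeholder members, not the real schema. */
static gchar *
generate_event_message_minimal (void *privData, NvDsEvent *events, guint size)
{
  (void) privData;
  (void) events;

  JsonObject *rootObj = json_object_new ();
  json_object_set_string_member (rootObj, "sensorId", "camera-0");
  json_object_set_int_member (rootObj, "objectCount", size);

  JsonNode *rootNode = json_node_new (JSON_NODE_OBJECT);
  json_node_set_object (rootNode, rootObj);

  /* Returns a newly allocated string; the caller frees it with g_free(). */
  gchar *message = json_to_string (rootNode, TRUE);

  json_node_free (rootNode);
  json_object_unref (rootObj);
  return message;
}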

What else should I do?

You might add logs to print that message. nvmsgconv and nvmsgbroker are open source, so you can add logs to check why “CUSTOM Schema” was sent to the broker.
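As an illustration of that suggestion, a single print inside the NVDS_PAYLOAD_CUSTOM branch (for example in nvds_msg2p_generate_multiple(), right after the payload fields are filled in) shows what actually gets queued for the broker. The line below is only an example, not part of the stock code:

/* Debug: dump the payload that is about to be handed to the broker. */
g_print ("nvmsgconv: custom payload (%u bytes): %.*s\n",
    payloads[*payloadCount]->payloadSize,
    (int) payloads[*payloadCount]->payloadSize,
    (char *) payloads[*payloadCount]->payload);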

Okay, I made some progress.

The config line msg-conv-msg2p-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_msgconv.so references the libnvds_msgconv.so inside the Docker container’s file system, not my custom libnvds_msgconv.so on the Jetson device!

I mounted my libnvds_msgconv.so as a Docker volume and pointed the config to that file.
But it failed - the app would not start (see the error at the bottom of this post).

I thought I had made some mistakes in the code before compiling, so I once again compiled the default, unmodified files in the /opt/nvidia/deepstream/deepstream/sources/libs/nvmsgconv/ directory. Mounted the file, and it failed again.

So I thought it might be a problem with mounting the file into the Docker container, but when I mounted the original libnvds_msgconv.so file (the one you can find in /opt/nvidia/deepstream/deepstream/lib), it worked.

So there must be an issue with my compilation - even when using the default files with no modifications.
BTW, the original libnvds_msgconv.so is ~1.4 MB, while my compiled files are ~2.1 MB.

Here’s the error I get when using the compiled libnvds_msgconv.so:

(deepstream-test5-app:1): GLib-CRITICAL **: 12:42:32.798: g_strrstr: assertion 'haystack != NULL' failed
nvds_msgapi_connect : connect success
Opening in BLOCKING MODE 
** ERROR: <main:1504>: Failed to set pipeline to PAUSED
Quitting
nvstreammux: Successfully handled EOS for source_id=0
nvstreammux: Successfully handled EOS for source_id=1
ERROR from sink_sub_bin_transform1: Could not initialize supporting library.
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvmsgconv/gstnvmsgconv.c(410): gst_nvmsgconv_start (): /GstPipeline:pipeline/GstBin:sink_sub_bin1/GstNvMsgConv:sink_sub_bin_transform1:
unable to open converter library
Disconnecting Azure..
App run failed

Not quite sure how to proceed now :(

Here is the reason. You can print self->msg2pLib in gst_nvmsgconv_start to check whether the file exists.
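A sketch of that debug print at the top of gst_nvmsgconv_start() in gstnvmsgconv.c; the surrounding code depends on the DeepStream version, and if the plugin loads the library with g_module_open(), printing g_module_error() after a failed open usually reveals the real cause (for example a missing dependency or an undefined symbol):

static gboolean
gst_nvmsgconv_start (GstBaseTransform * trans)
{
  GstNvMsgConv *self = GST_NVMSGCONV (trans);

  /* Added debug line (not part of the stock code): show which converter
     library path the plugin is actually trying to load. */
  g_print ("gst-nvmsgconv: msg2p-lib = %s\n",
      self->msg2pLib ? self->msg2pLib : "(null)");
  (...)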

The file 100% exists.

The only thing I change is the docker volume bind.
When I bind the original file - everything works.
When I bind a file compiled by me (custom or default) - I get the error as above.

I added lines to print logs in gst_nvmsgconv_start, but when I try sudo make to compile /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvmsgconv, I get another error:
/usr/bin/ld: cannot find -lnvds_msgconv

gst-nvmsgconv links against the nvds_msgconv library; from the error, the linker can’t find /opt/nvidia/deepstream/deepstream/lib/libnvds_msgconv.so.

Okay, I solved this.

The libnvds_msgconv.so compiled with DeepStream 6.2 was actually corrupt (see my other thread about issues with compilation here).

I downloaded DeepStream 6.1, made my custom changes, compiled, and then mounted the resulting file into the Docker container running DeepStream 6.2.

And it worked like a charm :-)

