Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 7.1
• TensorRT Version: 10.3 (in DS 7.1 Docker container)
• NVIDIA GPU Driver Version (valid for GPU only): 565.57.01
• Issue Type (questions, new requirements, bugs): Question
I’m attempting to add custom metadata upstream of a stream muxer, using the deepstream-gst-metadata-test sample app as an example. The calls to gst_buffer_add_nvds_meta appear to succeed and I’m able to assign the necessary values to the various attributes. I’m doing this using the incoming GstBuffer in the transform_ip function and not in a probe.
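For concreteness, here’s a trimmed-down sketch of my attach side, modeled on the sample app. The AudioLevelMeta struct, the NVDS_GST_META_AUDIO_LEVEL value, and the audio_level_* names are placeholders of mine, not DeepStream API; the callback conventions follow deepstream-gst-metadata-test:

```c
#include <string.h>
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "nvdsmeta.h"

/* Any value beyond NVDS_GST_CUSTOM_META can serve as a user-defined type. */
#define NVDS_GST_META_AUDIO_LEVEL (NVDS_GST_CUSTOM_META + 1)

typedef struct
{
  gdouble level_db;             /* payload to carry downstream */
} AudioLevelMeta;

/* Copy/release callbacks receive the payload pointer, following the
 * callback pattern in deepstream-gst-metadata-test. */
static gpointer
audio_level_copy_func (gpointer data, gpointer user_data)
{
  AudioLevelMeta *src = (AudioLevelMeta *) data;
  AudioLevelMeta *dst = (AudioLevelMeta *) g_malloc0 (sizeof (AudioLevelMeta));
  memcpy (dst, src, sizeof (AudioLevelMeta));
  return (gpointer) dst;
}

static void
audio_level_release_func (gpointer data, gpointer user_data)
{
  g_free (data);
}

/* Transform callback: converts the Gst meta into frame-level NvDsUserMeta
 * when the muxer carries it across. Receives the NvDsMeta wrapper, as in
 * the sample app. */
static gpointer
audio_level_gst_to_nvds_transform_func (gpointer data, gpointer user_data)
{
  NvDsMeta *gst_meta = (NvDsMeta *) data;
  return audio_level_copy_func (gst_meta->meta_data, user_data);
}

static void
audio_level_gst_nvds_release_func (gpointer data, gpointer user_data)
{
  NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
  g_free (user_meta->user_meta_data);
  user_meta->user_meta_data = NULL;
}

/* Called from the filter's transform_ip with the in-place buffer. */
static void
attach_audio_level_meta (GstBuffer * buf, gdouble level_db)
{
  AudioLevelMeta *payload = g_new0 (AudioLevelMeta, 1);
  payload->level_db = level_db;

  NvDsMeta *meta = gst_buffer_add_nvds_meta (buf, payload, NULL,
      audio_level_copy_func, audio_level_release_func);
  if (!meta) {
    g_free (payload);
    return;
  }

  meta->meta_type = (GstNvDsMetaType) NVDS_GST_META_AUDIO_LEVEL;
  /* Without these two, the muxer has no way to transform the Gst meta into
   * NvDsUserMeta attached to the frame meta. */
  meta->gst_to_nvds_meta_transform_func = audio_level_gst_to_nvds_transform_func;
  meta->gst_to_nvds_meta_release_func = audio_level_gst_nvds_release_func;
}
```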
I’m having problems getting this to work in two cases:
1. If I place a “metadata reader” downstream of the muxer and also downstream of an NvInferAudio plugin (i.e., an audio classifier), the first frame in the batch metadata appears to have an invalid address for its user meta list. If I try to access it, I get a segfault, and when I debug in gdb, gdb can’t access that memory either.
2. If I remove the NvInferAudio plugin, the “metadata reader” no longer segfaults; however, the frame user meta list is now NULL.
The above leads me to believe that NvInferAudio may be doing something to the metadata, but even more confusing is why the custom metadata isn’t appearing downstream of the muxer in either case.
Nothing else seems to be failing, and I’ve checked and rechecked my code against the gst metadata example.
The metadata reader is pulling from a valid GstBuffer that should be coming from a connected sink pad in the chain. (I’m actually doing this in the gst-nvdsaudiotemplate element from the DeepStream GST plugins library, reading the buffer passed in via submit_input_buffer, i.e. not from a probe.)
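Roughly, the read side looks like the following sketch (same placeholder names as in the attach sketch above; it assumes the audio frame meta layout from nvds_audio_meta.h, and the NULL checks are what I rely on to avoid crashing):

```c
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "nvds_audio_meta.h"    /* NvDsAudioFrameMeta */

#define NVDS_GST_META_AUDIO_LEVEL (NVDS_GST_CUSTOM_META + 1)

typedef struct
{
  gdouble level_db;
} AudioLevelMeta;

/* Called with the buffer handed to submit_input_buffer in the
 * nvdsaudiotemplate element, i.e. downstream of the muxer. */
static void
read_audio_level_meta (GstBuffer * buf)
{
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return;

  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsAudioFrameMeta *frame_meta = (NvDsAudioFrameMeta *) l_frame->data;

    /* This is the list where the transformed user meta would show up; in my
     * pipeline it is NULL (or an invalid address with NvInferAudio present). */
    for (NvDsMetaList * l_user = frame_meta->frame_user_meta_list;
        l_user != NULL; l_user = l_user->next) {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;

      if (user_meta->base_meta.meta_type ==
          (NvDsMetaType) NVDS_GST_META_AUDIO_LEVEL) {
        AudioLevelMeta *payload = (AudioLevelMeta *) user_meta->user_meta_data;
        g_print ("audio level: %.1f dB\n", payload->level_db);
      }
    }
  }
}
```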
I’m a bit stumped, so if anyone can help, it’d be appreciated.
I can also upload images of the pipelines I’m working with if that helps.
I’ve attached a PNG of the pipeline with the audio classifier included. The GstLevelfilter element is where the code that attaches the custom metadata lives (as per my original message), and the GstNvDsAudioTemplate element is where the code that tries to extract it lives.
Are you trying to transfer NvDsMeta through the audio pipeline as in deepstream-gst-metadata-test? An audio pipeline can only use the new nvstreammux, and the new nvstreammux does not support transferring NvDsMeta (or any other metadata); it only generates the new batch meta.
What kind of information do you want to transfer through the new nvstreammux?
I also thought that the new muxer had to be enabled via an environment variable (USE_NEW_NVSTREAMMUX=yes), and I’m not setting any such variable in my own project. So I’m a bit confused, and I appreciate you helping me get this straight.
As for the kind of information I’m looking to pass through: in this case, it’s the current audio level in dB. I might also want other information about the audio stream that I can’t seem to find in any DeepStream plugin.
What is the purpose of transferring the “audio level” through the pipeline? Do you want to control the sound card by passing such information to the audio renderer? Could you transfer such information within the application itself?
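For reference, if the level only needs to reach the application, the standard GStreamer level element already posts per-channel RMS/peak values (in dB) as element messages on the bus. A minimal sketch of reading them, following the level element’s documented message structure:

```c
#include <gst/gst.h>

/* Bus watch callback: picks out the "level" element message and prints the
 * per-channel RMS value in dB. Install with gst_bus_add_watch(). */
static gboolean
bus_message_cb (GstBus * bus, GstMessage * message, gpointer user_data)
{
  if (GST_MESSAGE_TYPE (message) == GST_MESSAGE_ELEMENT) {
    const GstStructure *s = gst_message_get_structure (message);

    if (gst_structure_has_name (s, "level")) {
      const GValue *array_val = gst_structure_get_value (s, "rms");
      GValueArray *rms_arr = (GValueArray *) g_value_get_boxed (array_val);

      for (guint ch = 0; ch < rms_arr->n_values; ch++) {
        gdouble rms_db =
            g_value_get_double (g_value_array_get_nth (rms_arr, ch));
        g_print ("channel %u: RMS %.1f dB\n", ch, rms_db);
      }
    }
  }
  return TRUE;
}
```

The element’s interval property controls how often this message is posted.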
The purpose is to analyze the audio as it comes through, similar to an IVA. I’m starting with audio levels so I can do some math on them and make some determinations, and this is meant to (eventually) be done alongside or in conjunction with inferencing.
That is unfortunate. Are there any plans to support this workflow? If so, when might it be released?
Would a workaround be to include the audio in a video container, e.g. MP4, and use it as a video source instead of an audio source? Is there a way to extract audio buffers downstream from a streammux’d video?