I am running DeepStream 6.0 on a Jetson Nano.
The JetPack version is 4.6 and TensorRT is version 8.0.1.
I want to implement a pipeline with nvinferaudio in Python.
I tried to build a pipeline as described in the Sample Audio Pipeline Graph here: DeepStream Reference Application - deepstream-audio app — DeepStream 6.0 Release documentation
The problem is that the graph shows nvinferaudio, but the example pipeline does not include it.
I tried to add it at two different positions, as shown in the graph, but neither works.
When I add an nvinferaudio between nvstreammux and nvstreamdemux, I get: could not link audioconvert1 to nvinferaudio1.
When I try it between audioresample and alsasink, I get: could not link audioresample1 to nvinferaudio0.
What is the correct usage here? A Python example would be much appreciated.
Thanks in advance
The Gst-nvinferaudio plugin does inferencing on input data using NVIDIA® TensorRT™. The plugin accepts batched audio buffers from upstream. For how this plugin is used in a sample, you can check the create_audio_classifier_bin code in sources/apps/apps-common/src/deepstream_audio_classifier_bin.c
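Since the plugin accepts batched buffers, nvinferaudio has to sit directly downstream of nvstreammux, before the batch is split again by nvstreamdemux; that is why linking audioconvert or audioresample straight into it fails. A minimal sketch of that ordering, assembled as a gst-launch-style description in plain Python (element properties such as the config-file path are placeholders, not values from the sample):

```python
# Sketch: nvinferaudio must sit between nvstreammux and nvstreamdemux,
# because it consumes the batched buffers the muxer emits.
elements = [
    "nvstreammux name=mux batch-size=1",          # batches the input buffers
    "nvinferaudio config-file-path=config.txt",   # hypothetical config path
    "nvstreamdemux name=demux",                   # splits the batch again
]
pipeline_description = " ! ".join(elements)
print(pipeline_description)
# In a real app this string would be handed to Gst.parse_launch().
```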
Hi, thanks for the reply. I am not very familiar with C. I still do not know whether the models used with nvinferaudio are trained directly on sound samples, or whether the sound is transformed into some kind of image, i.e. a spectrogram, which the model then recognizes.
What exactly does the audio-transform parameter do?
Why is there so little documentation available for nvinferaudio compared to nvinfer?
Any help (Python examples) would be very much appreciated.
Audio models are trained on audio data such as mp4, wav, etc.
And audio-transform performs the audio signal transform processing, e.g. computing a mel spectrogram. You can search online for more background on these concepts.
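To make the mel-spectrogram idea concrete: a mel spectrogram is an ordinary short-time spectrogram whose frequency axis is warped onto the mel scale, which spaces bins roughly the way human hearing does. A small sketch of the standard HTK-style mel conversion (this is generic DSP math, not code taken from the plugin):

```python
import math

def hz_to_mel(f_hz: float) -> float:
    """Convert a frequency in Hz to the mel scale (HTK formula)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m: float) -> float:
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# The warping compresses high frequencies: equal mel steps cover
# ever-wider Hz ranges as frequency grows, so a mel spectrogram
# devotes more resolution to the low end, where hearing is finer.
print(hz_to_mel(1000.0))  # close to 1000 mel by construction
```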
We do not have Python samples for nvinferaudio; you can translate the deepstream-audio pipeline into a Python version.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.