Nvinferaudio - still no info

Environment: Jetson Nano, DeepStream SDK 6.0, JetPack 4.6, TensorRT 8.0.1.6, CUDA 10.2.300
I really need more information about nvinferaudio.
How are the models it uses trained? Is the training based on spectrograms generated from sounds, or on the raw audio directly?
If on spectrograms, how can I generate these spectrograms from audio files? Does nvinferaudio allow exporting these spectrograms, or are they only generated internally by nvinferaudio and fed into the classifier model? Is there any chance the documentation could be updated for a better understanding, and/or a Python test example added to the GitHub repository?
Can anybody help?

Yes, they are generated internally by nvinferaudio and fed into the classifier model.
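
If you need the spectrograms offline for debugging or training, one option is to approximate the transform yourself outside of DeepStream. Below is a minimal sketch using librosa to compute a log-mel spectrogram; the transform parameters (sample rate, FFT size, hop size, number of mel bands) are assumptions for illustration and would need to be matched to whatever your nvinferaudio config actually uses.

```python
# Hypothetical offline approximation of a log-mel spectrogram transform.
# All parameter values below are assumptions -- align them with your
# nvinferaudio configuration before comparing against pipeline results.
import librosa
import numpy as np

def audio_to_log_mel(path, sr=44100, n_fft=2560, hop_length=692, n_mels=128):
    """Load an audio file and return a log-scaled mel spectrogram."""
    y, _ = librosa.load(path, sr=sr, mono=True)  # resample to target rate
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)  # power -> dB scale

spec = audio_to_log_mel("sample.wav")
print(spec.shape)  # (n_mels, n_frames)
```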

Thanks, is there any possibility to export those spectrograms?
Is the model trained on spectrograms then (i.e. the sonyc_audio_classify.onnx model)?

Hi, is your last response meant for my topic? It seems to be the wrong answer.

Sorry, it was for another topic. Just delete it to avoid confusion.

Exporting these spectrograms is not supported.
The input to the sonyc_audio_classify.onnx model is a spectrogram.
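
If you want to confirm what the model expects, you can inspect it and run a dummy input through it with onnxruntime. This is just a sanity-check sketch: the input name and shape are read from the model itself rather than assumed, and symbolic dimensions are filled with 1 for the dummy run.

```python
# Sketch: inspect sonyc_audio_classify.onnx and feed it a dummy
# spectrogram via onnxruntime to verify the expected input layout.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("sonyc_audio_classify.onnx")
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape, inp.type)

# Replace dynamic/symbolic dimensions with 1 for the dummy tensor.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)
scores = sess.run(None, {inp.name: dummy})[0]
print("output shape:", scores.shape)
```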
