Audio Classification inference on Jetson Nano

I have an ONNX audio classification model that was exported from the MATLAB Audio Toolbox. I want to run inference on a Jetson Nano with TensorRT. The input audio is a *.wav file, i.e. just an array of samples. There are directions for image classification and object detection, but not for audio classification. Please point me to a solution.

Hi @ashmadan01, you would need to implement a program (presumably with either the TensorRT Python or C++ API) that loads your ONNX model, performs any pre-processing of the audio, runs the TensorRT engine on it, and reads off the maximum-likelihood output class. Some of this is specific to the kind of model that MATLAB exported and the input/output layers it expects. The image classification and object detection examples had to implement something similar underneath to support models like ResNet, SSD, YOLO, etc.
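To make the steps above concrete, here is a minimal sketch of such a pipeline in Python. The WAV loading and softmax/argmax post-processing are generic; the `infer()` function is an assumption-laden outline of the TensorRT part (it presumes you have already converted the ONNX model to a serialized engine, e.g. with `trtexec --onnx=model.onnx --saveEngine=model.engine`, and that the model takes a single fixed-size input and produces one score per class). The exact input shape, any feature extraction (e.g. mel spectrograms), and class labels depend on what your MATLAB model expects, so inspect the ONNX file first (for example with Netron).

```python
import wave

import numpy as np


def load_wav(path):
    """Read a 16-bit PCM WAV file and return float32 samples scaled to [-1, 1]."""
    with wave.open(path, "rb") as wf:
        assert wf.getsampwidth() == 2, "expected 16-bit PCM audio"
        frames = wf.readframes(wf.getnframes())
    samples = np.frombuffer(frames, dtype=np.int16).astype(np.float32)
    return samples / 32768.0


def top_class(scores):
    """Return (index, softmax probability) of the maximum-likelihood class."""
    scores = np.asarray(scores, dtype=np.float32)
    exp = np.exp(scores - scores.max())  # shift for numerical stability
    probs = exp / exp.sum()
    idx = int(np.argmax(probs))
    return idx, float(probs[idx])


def infer(engine_path, input_array):
    """Run one inference with TensorRT. Only runs on a device with TensorRT
    and PyCUDA installed (e.g. a Jetson with JetPack); imported lazily here.
    The single-input/single-output binding layout is an assumption."""
    import tensorrt as trt
    import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
    import pycuda.driver as cuda

    logger = trt.Logger(trt.Logger.WARNING)
    with open(engine_path, "rb") as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()

    output = np.empty(engine.get_binding_shape(1), dtype=np.float32)
    d_input = cuda.mem_alloc(input_array.nbytes)
    d_output = cuda.mem_alloc(output.nbytes)
    cuda.memcpy_htod(d_input, np.ascontiguousarray(input_array))
    context.execute_v2([int(d_input), int(d_output)])
    cuda.memcpy_dtoh(output, d_output)
    return output
```

Usage would then look roughly like `scores = infer("model.engine", load_wav("clip.wav"))` followed by `top_class(scores)`, with whatever feature extraction your model's first layer requires inserted between the two. If your MATLAB model expects a spectrogram rather than raw samples, that transform has to be reproduced here before calling the engine.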