Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 7.1
• JetPack Version (valid for Jetson only):
• TensorRT Version: 10.3.0.26
• NVIDIA GPU Driver Version (valid for GPU only): 575.51.03
• Issue Type (questions, new requirements, bugs): question
• How to reproduce the issue? (This is for bugs. Including which sample app is used, the configuration files content, the command line used and other details for reproducing)
• Requirement details (This is for new requirements. Including the module name — for which plugin or for which sample application — and the function description)
Dear all,
I am trying to run inference with my own audio classifier based on MobileNetV3. However, the inference never completes because (I suspect) streammux is not correctly passing data downstream. I have attached my .py script and my nvinferaudio config file: config.txt (666 Bytes).
I am trying to perform inference on a 5-second .wav file whose characteristics match the parameters set on the capsfilter. The output I am getting looks like this:
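One quick sanity check for this kind of problem is to compute the raw payload size the caps imply and compare it with the buffer sizes each probe reports; if a probe downstream reports far fewer bytes, either data was dropped or the buffer is wrapping a descriptor rather than raw samples. A minimal sketch, with placeholder audio parameters (the actual rate/format/channels should come from the capsfilter, which is not shown here):

```python
# Expected raw payload size for a 5-second clip.
# rate, channels and bytes_per_sample are ASSUMED values for
# illustration -- substitute whatever the capsfilter actually sets.
rate = 16000          # samples per second (assumed)
channels = 1          # mono (assumed)
bytes_per_sample = 2  # S16LE -> 2 bytes per sample (assumed)
duration_s = 5

expected_bytes = rate * channels * bytes_per_sample * duration_s
print(expected_bytes)  # 160000 for these assumed parameters
```

If the probe before streammux reports roughly this figure and the probe after it reports something tiny, the small number is worth investigating before concluding that data is lost.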
As you can see from bInferDone=0, the inference is not being performed; moreover, there is a sudden drop in the buffer size right after the caps module. Am I wrong to assume that data is being lost in streammux?
After more (a lot of) debugging, I have come to understand that there was no problem with streammux. The 24 bytes streammux has been sending correspond to the pointer (correct me if I'm wrong) to the 4096 bytes of actual data. However, during my debugging process I found out that
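To picture why the buffer measures only 24 bytes even though 4096 bytes of audio exist, here is a small illustration (not DeepStream code, and the struct layout is entirely made up): downstream of the muxer, the buffer can carry a small batch descriptor whose pointer field refers to the real sample data, so measuring the buffer gives you the descriptor's size, not the payload's.

```python
import ctypes

class FakeBatchDescriptor(ctypes.Structure):
    """Hypothetical descriptor, only to show the size relationship:
    a few small fields plus a pointer to the real audio samples."""
    _fields_ = [
        ("num_frames", ctypes.c_uint32),   # assumed field
        ("pad", ctypes.c_uint32),          # assumed field
        ("data", ctypes.c_void_p),         # -> the 4096 bytes of samples
        ("reserved", ctypes.c_void_p),     # assumed field
    ]

payload = (ctypes.c_ubyte * 4096)()        # the actual audio data
desc = FakeBatchDescriptor(
    1, 0, ctypes.cast(payload, ctypes.c_void_p), None)

print(ctypes.sizeof(desc))     # 24 on a 64-bit platform
print(ctypes.sizeof(payload))  # 4096
```

So a 24-byte buffer after the muxer is consistent with "small struct pointing at the real data" rather than with lost audio.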