Running the custom YOLO app. Could you point me to the code entry point where the actual inference takes place?
Is it the call to cudaYoloLayerV3 inside enqueue() in yoloPlugins.cpp?
I was able to locate the NMS and parsing code; however, I could not figure out where the inference code is.
The inference using the network takes place inside the NvInfer plugin. This plugin (gst-nvinfer) has been open-sourced with the SDK. The cudaYoloLayerV3 you are referring to implements the "yolo" layer of the network. This layer is not a native TensorRT layer and has therefore been implemented as a plugin.