I’m experimenting with the new gst-nvdspreprocess plugin: I’d like to apply yolov4-tiny to a set of ROIs in a video with a batch size greater than 1 for performance.
When I use the demo pipeline below, YOLO inference doesn’t happen: the pipeline runs and I can see the video and the ROIs, but no detection boxes from YOLO.
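For reference, the pipeline has roughly this shape; file names, resolutions, and batch sizes here are placeholders for my actual values, the relevant part being nvdspreprocess feeding nvinfer with input-tensor-meta=1:

```
gst-launch-1.0 filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1280 height=720 batched-push-timeout=40000 ! \
  nvdspreprocess config-file=config_preprocess.txt ! \
  nvinfer config-file-path=config_infer_yolov4_tiny.txt input-tensor-meta=1 batch-size=4 ! \
  nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink
```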
The pipeline output shows this warning (see the attached output.log for details):
nvinfer gstnvinfer.cpp:1903:gst_nvinfer_process_tensor_input:<nvinfer0> warning: nvinfer could not find input layer with name = input_1
My YOLO TensorRT model was generated by converting darknet → ONNX → TensorRT.
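The final ONNX → TensorRT step was done with trtexec, roughly along these lines (from memory, so the exact flags may differ from the tutorial I followed):

```
/usr/src/tensorrt/bin/trtexec --onnx=yolov4-tiny.onnx \
    --saveEngine=yolov4-tiny.engine --fp16 --workspace=1024
```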
I’ve uploaded my model and configuration files here for review: YOLO - Google Drive
Thank you for all the help!!
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: 8.0.1.6
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs):
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used and other details for reproducing.)
• Requirement details (This is for a new requirement. Include the module name, i.e. for which plugin or for which sample application, and the function description.)
As pointed out in the log, nvinfer cannot find the input layer “input_1”, which is the name specified by “tensor-name=input_1” in /opt/nvidia/deepstream/deepstream-6.0/sources/gst-plugins/gst-nvdspreprocess/config_preprocess.txt
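For reference, that key sits under the [property] group of the preprocess config and has to match one of the actual input binding names of the engine loaded by nvinfer (the values below are just the sample ones):

```
[property]
enable=1
# must match an input layer name of the model loaded by nvinfer
tensor-name=input_1
# batch;channels;height;width of the tensor prepared for nvinfer
network-input-shape=8;3;544;960
```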
Thanks, but I am not sure how to resolve that. “input_1” is defined in the NVIDIA sample configuration and may not exist in my YOLO model (following the tutorial steps I never came across any input layer names). Any suggestion on how to resolve the issue? If not, is there any tutorial or doc to follow to generate a multi-batch input and feed it into YOLO with gst-nvdspreprocess?
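One thing I can try is dumping the engine’s bindings with the TensorRT Python API to see what the actual input layer name is; a minimal sketch (the engine path is a placeholder for my file):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine and list its bindings (TensorRT 8.x API)
with open("yolov4-tiny.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_bindings):
    kind = "input" if engine.binding_is_input(i) else "output"
    print(i, kind, engine.get_binding_name(i), engine.get_binding_shape(i))
```

If I understand correctly, whatever name is reported for the input binding would then replace input_1 in tensor-name in config_preprocess.txt.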