Running YOLO with the jetson-inference library would require adding support for the pre- and post-processing that the YOLO models expect. Pre-processing means getting the input image into the right tensor format (typically NCHW), with the right color layout (e.g. BGR vs. RGB), and with any mean-pixel subtraction and/or normalization applied. Post-processing means interpreting the model outputs (i.e. the bounding-box coordinates and confidences, plus the clustering of overlapping detections). Typically this pre/post-processing is done the same way as it would be done in the training framework (DarkNet, in the case of YOLO).
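As a rough sketch of what the pre-processing step involves, here is a minimal NumPy version, assuming YOLOv5's default input format (640x640 RGB, NCHW, float32 scaled to [0,1], no mean subtraction); the resize/letterbox step is omitted for brevity, and a real pipeline would perform it first:

```python
import numpy as np

def preprocess(bgr_image: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 BGR image (already sized to 640x640)
    into the 1x3x640x640 float32 RGB tensor YOLOv5 expects."""
    rgb = bgr_image[:, :, ::-1]                            # BGR -> RGB channel order
    chw = np.transpose(rgb, (2, 0, 1))                     # HWC -> CHW layout
    tensor = chw[np.newaxis].astype(np.float32) / 255.0    # add batch dim, scale to [0,1]
    return tensor
```

In detectNet.cpp this same work is done in CUDA for performance, but the layout and normalization it has to produce are the same.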
Right now detectNet from jetson-inference is set up for the SSD-Mobilenet detection model in its ONNX path:
- jetson-inference/c/detectNet.cpp at 67f9db32f7e587e0e84ed701219173602431efba · dusty-nv/jetson-inference · GitHub
At this time I don’t personally intend to add support for YOLO, since the SSD path is working well. You could attempt to modify the pre/post-processing in detectNet.cpp to match what yolov5s.pt expects. Alternatively, there is a YOLO ONNX sample included with TensorRT (/usr/src/tensorrt/samples/python/yolov3_onnx/).
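For the post-processing side, the "clustering" of detections is typically greedy non-maximum suppression (NMS). A minimal NumPy sketch (not the detectNet or TensorRT-sample implementation) over [x1, y1, x2, y2] boxes might look like:

```python
import numpy as np

def iou(box: np.ndarray, boxes: np.ndarray) -> np.ndarray:
    """Intersection-over-union of one [x1,y1,x2,y2] box against many."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes: np.ndarray, scores: np.ndarray, iou_threshold: float = 0.45) -> list:
    """Keep the highest-scoring box, drop boxes that overlap it beyond
    the threshold, and repeat with the remainder."""
    order = np.argsort(scores)[::-1]   # indices sorted by descending confidence
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_threshold]
    return keep
```

This runs after the confidence threshold has filtered the raw network outputs; decoding those raw outputs into boxes in the first place depends on the specific YOLO head and is the part that has to match DarkNet's conventions.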