Tiny YOLO v2 on Nano?

Has anyone been able to load the Tiny YOLOv2 ONNX model into detectNet successfully? I get an error stating the network must have at least one output, but I don't know what parameters I should be using.
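For context, the only parameters I've found documented for custom ONNX models are the SSD-style ones from the jetson-inference examples, along the lines of the sketch below (the model and label paths are placeholders). Tiny YOLOv2 doesn't expose SSD-style `scores`/`boxes` outputs, which may be why the import fails:

```python
import jetson.inference

# SSD-style ONNX parameters per the jetson-inference examples; the
# model and label paths are placeholders. Tiny YOLOv2 has no output
# layers named "scores"/"boxes", so presumably this fails to import.
net = jetson.inference.detectNet(argv=[
    "--model=tiny-yolov2.onnx",
    "--labels=labels.txt",
    "--input-blob=input_0",
    "--output-cvg=scores",
    "--output-bbox=boxes",
])
```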

Hi @brking, there isn't support for YOLO models in jetson-inference; it would require custom pre-processing and post-processing.
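To give a sense of what that post-processing involves, here is a rough NumPy sketch of decoding the Tiny YOLOv2 output. It assumes the VOC-trained model (13×13×125 output: 5 anchors × (5 box params + 20 classes)) and its standard anchor priors from the darknet cfg; non-maximum suppression would still be needed afterwards:

```python
import numpy as np

# Anchor priors for Tiny YOLOv2 trained on VOC (from tiny-yolo-voc.cfg).
ANCHORS = [(1.08, 1.19), (3.42, 4.41), (6.63, 11.38),
           (9.42, 5.11), (16.62, 10.52)]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_tiny_yolov2(output, conf_thresh=0.3):
    """Decode a (125, 13, 13) Tiny YOLOv2 output into candidate boxes.

    Returns (x, y, w, h, score, class_id) tuples in grid-cell units;
    multiply x/y/w/h by 32 (the network stride) for 416x416 pixels.
    """
    boxes = []
    grid = output.reshape(len(ANCHORS), 25, 13, 13)
    for a, (pw, ph) in enumerate(ANCHORS):
        for cy in range(13):
            for cx in range(13):
                tx, ty, tw, th, to = grid[a, :5, cy, cx]
                objectness = sigmoid(to)
                # Softmax over the 20 class scores.
                cls = grid[a, 5:, cy, cx]
                cls = np.exp(cls - cls.max())
                cls /= cls.sum()
                cls_id = int(cls.argmax())
                score = objectness * cls[cls_id]
                if score < conf_thresh:
                    continue
                x = cx + sigmoid(tx)          # box center, grid units
                y = cy + sigmoid(ty)
                w = pw * np.exp(tw)           # box size, grid units
                h = ph * np.exp(th)
                boxes.append((x, y, w, h, float(score), cls_id))
    return boxes  # apply NMS on these before drawing
```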

However, there is a YOLO sample included in the TensorRT SDK under /usr/src/tensorrt/samples/python/yolov3_onnx, in addition to resources from the community about it.

A colleague pointed me to this sample using OpenCV (cv2). I didn't realize cv2 could consume from a GStreamer source. Any reason this shouldn't work for an RTSP source? What I really want is: a) support for more custom models than jetson-inference offers, and b) something that works in Python. Concretely, the pattern I'm considering looks like the sketch below.
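This is a minimal sketch assuming a typical H.264 RTSP camera and an OpenCV build with GStreamer support (the JetPack-provided one); the URL and decoder element are placeholders for my setup:

```python
import cv2

# Placeholder RTSP URL; replace with the camera's actual stream address.
# The pipeline hands decoded BGR frames to OpenCV via appsink; on Jetson,
# nvv4l2decoder uses the hardware decoder (older JetPacks used omxh264dec).
pipeline = (
    "rtspsrc location=rtsp://192.168.1.10:554/stream latency=200 ! "
    "rtph264depay ! h264parse ! nvv4l2decoder ! "
    "nvvidconv ! video/x-raw,format=BGRx ! "
    "videoconvert ! video/x-raw,format=BGR ! "
    "appsink drop=true max-buffers=1"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("failed to open RTSP stream "
                       "(is OpenCV built with GStreamer support?)")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # run inference on `frame` here

cap.release()
```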
