Converting YOLOv8 model to caffe

Hello

I am having issues converting the YOLOv8 model to Caffe. I tried to convert it from ONNX to Caffe, but I ran into problems with the Split layer. I also tried converting the PyTorch model to Caffe directly, but I faced issues with some libraries. What is the best way of converting the YOLOv8 model to Caffe?

Hi,
Here are some suggestions for the common issues:

1. Performance

Please run the commands below before benchmarking a deep learning use case:

$ sudo nvpmodel -m 0
$ sudo jetson_clocks

2. Installation

Installation guides for deep learning frameworks on Jetson:

3. Tutorial

Startup deep learning tutorial:

4. Report issue

If these suggestions don’t help and you want to report an issue to us, please share the model, the command/steps, and the customized app (if any) with us so we can reproduce it locally.

Thanks!

Thank you for responding. I have listed my specific issues below:

I used the ONNXToCaffe library (GitHub - xxradon/ONNXToCaffe) and encountered errors indicating that the library cannot convert YOLOv8, because the model contains Split layers that this library does not support. The same issue occurs even if I simplify the model before converting.

As an alternative, I tried using Microsoft's MMdnn library to do the conversion, but I hit an unpickling error when converting the PyTorch model to Caffe. I tried converting from ONNX to Caffe as well and encountered import errors with onnx_pb2. I later found that MMdnn does not support converting YOLO models to Caffe.

Hi,

Have you checked this with the Caffe team?

Thanks.

Not yet. I wasn’t sure whether the problem was with the conversion libraries I used or with version conflicts with other libraries. I was able to successfully install and import Caffe, so I don’t think Caffe itself is the issue.

Hi,

It looks like you are using a custom converter to translate ONNX to Caffe.
Maybe you can check with the converter's owner to see if they have any ideas.

Thanks.

Will do. I wanted to see if there was any other method I could use, which is why I reached out here. Thank you for your help.

Hi,

We recommend converting the ONNX model to TensorRT instead,
as TensorRT optimizes deep learning use cases on Jetson for both performance and memory usage.
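For reference, a typical way to build a TensorRT engine on Jetson is the `trtexec` tool that ships with TensorRT. The file names below are placeholders, and this assumes the model has already been exported to ONNX:

```shell
# Export YOLOv8 to ONNX first (e.g. with the Ultralytics exporter), then build
# a serialized TensorRT engine from it. File names are placeholders.
/usr/src/tensorrt/bin/trtexec --onnx=yolov8n.onnx --saveEngine=yolov8n.engine --fp16
```

The `--fp16` flag is optional; it usually improves throughput on Jetson at a small accuracy cost.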

Thanks.

Thank you. I’ll follow this.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.