Issue with Setting Resolution to 512 in Jetson Inference for ONNX Export

Hello,
I’m a student working on a project to get a car to drive autonomously around a course, and I’m trying to use Jetson Inference for object detection. I’ve been following the forums and documentation on how to set the model resolution to 512, but I’m running into an issue when exporting to ONNX.

I cloned the repository using this command:
git clone -b res-512 --recursive https://github.com/dusty-nv/pytorch-ssd.git

After that, I followed the usual steps from the object detection tutorials. However, when I run the ONNX export, the resolution defaults to 300x300 instead of 512x512.
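In case it helps to show exactly what I’m seeing, here is roughly how the input size of the exported .onnx can be checked (the file name is just an example from my setup, and this little script is my own, not part of the official tooling):

import onnx

# Load the exported model and print the shape of each graph input.
# "ssd-mobilenet.onnx" is just an example name; use whatever your export produced.
model = onnx.load("ssd-mobilenet.onnx")
for inp in model.graph.input:
    dims = [d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)  # e.g. input_0 [1, 3, 300, 300] instead of [1, 3, 512, 512]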

What am I doing wrong, and how can I get the correct 512 resolution during the export?

Thank you for any help or guidance!

Hi,
Here are some suggestions for the common issues:

1. Performance

Please run the commands below before benchmarking a deep learning use case:

$ sudo nvpmodel -m 0
$ sudo jetson_clocks

2. Installation

Installation guide for deep learning frameworks on Jetson:

3. Tutorial

Getting-started deep learning tutorial:

4. Report issue

If these suggestions don’t help and you want to report an issue to us, please share the model, the commands/steps, and any customized app so we can reproduce the problem locally.

Thanks!

Hi,

Does the resolution refer to the image width/height?
The -b option in git clone checks out a specific branch (for example, one matching a particular JetPack version); it does not change the input size by itself.

In general, we keep the model input size fixed and rescale the training data instead; a rough sketch of that idea is below.
Does this work for you?
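To be clear, this is only an illustration of the idea, not the exact pytorch-ssd code: the network input stays at a fixed size, and every training image is resized to that size before being fed in.

import cv2
import numpy as np

IMAGE_SIZE = 512  # fixed network input size (the default SSD config uses 300)

def prepare(image):
    """Resize an arbitrarily sized training image to the fixed SSD input size."""
    resized = cv2.resize(image, (IMAGE_SIZE, IMAGE_SIZE))
    # scale to [0, 1] float32 and reorder to CHW as PyTorch expects
    return resized.astype(np.float32).transpose(2, 0, 1) / 255.0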

Thanks.

Happy New Year, everyone! I checked out the branch with git clone -b res-512 --recursive https://github.com/dusty-nv/pytorch-ssd.git, which I found on this forum: NVIDIA Forum Link. From what I gathered, switching my SSD code to this branch should give me better accuracy at a distance. However, I’m having trouble getting it to work. My understanding is that I just replace my files with the ones from this branch and then follow NVIDIA’s Jetson Inference tutorial on YouTube. Am I misunderstanding something here?

Here’s a picture of the ONNX export defaulting to 300, even though the branch I pulled is the 512 one:


Hi,

Sorry for missing that.

The special branch does increase the image size from 300 to 512.
It should allow you to run the model on a 512x512 image without downsampling.

Is that what you see in your experiment?

Thanks.

Sorry for responding a bit late, but I got it fixed. After a few random tries, it turned out I just had to specify 512 for the ONNX export when exporting.
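For anyone who finds this later: the exact option name depends on which version of onnx_export.py you have (check its --help output), but as far as I understand, specifying 512 just means the export gets traced with a 512x512 dummy input instead of the default 300x300 one. A minimal sketch of that idea, using a stand-in model rather than the real SSD network:

import torch
import torch.nn as nn

# Stand-in module only -- in practice this would be the trained SSD model
# loaded from the pytorch-ssd checkpoint before exporting.
net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
net.eval()

# The exported resolution is simply the spatial size of the dummy input that
# the export is traced with: 512x512 here instead of 300x300.
dummy_input = torch.randn(1, 3, 512, 512)
torch.onnx.export(net, dummy_input, "model-512.onnx", input_names=["input_0"])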

Hi,

Good to know it works.
Thanks for the feedback.