Hello,
I’m a student working on a project to get a car to drive a course autonomously, and I’m trying to use Jetson Inference for object detection. I’ve been following the forums and documentation on how to set the model’s input resolution to 512, but I’m running into an issue when exporting to ONNX.
I cloned the repository using this command: git clone -b res-512 --recursive https://github.com/dusty-nv/pytorch-ssd.git
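(As a quick sanity check, something like the following can confirm which branch actually got checked out; it assumes the repo was cloned into a pytorch-ssd directory next to where you run it.)

```python
# Print the branch currently checked out in the pytorch-ssd clone.
import subprocess

branch = subprocess.run(
    ["git", "-C", "pytorch-ssd", "rev-parse", "--abbrev-ref", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(branch)  # should print "res-512" if the -b flag took effect
```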
After that, I followed the usual steps from the object detection tutorials. However, when I run the ONNX export, the resolution defaults to 300x300 instead of 512x512.
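(One way to see which resolution the export actually produced is to read the input shape straight from the ONNX file; the file name below is the default from the jetson-inference tutorial and may differ in your setup.)

```python
# Load the exported model and print its input tensor shape.
import onnx

model = onnx.load("ssd-mobilenet.onnx")  # adjust the path if yours differs
dims = model.graph.input[0].type.tensor_type.shape.dim
print([d.dim_value for d in dims])  # e.g. [1, 3, 300, 300] vs. [1, 3, 512, 512]
```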
What am I doing wrong, and how can I get the correct 512 resolution during the export?
Does the resolution indicate the image width/height?
The -b option only checks out that branch (the branches generally correspond to a particular JetPack version); it does not change the model’s input size by itself.
In general, we keep the model’s input size fixed and rescale the training data to match.
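(A minimal sketch of that idea, with illustrative names rather than the exact pytorch-ssd API: the network input size is a fixed constant, and every training image is resized to it.)

```python
# Illustrative only: fixed network input size, training images rescaled to fit.
import cv2
import numpy as np

IMAGE_SIZE = 512  # the model's fixed input resolution (300 on the default branch)

def prepare(image: np.ndarray) -> np.ndarray:
    """Resize an arbitrarily sized training image to the fixed input size."""
    resized = cv2.resize(image, (IMAGE_SIZE, IMAGE_SIZE))
    return resized.astype(np.float32) / 255.0  # scale to [0, 1] (illustrative)
```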
Does this work for you?

If these suggestions don’t help and you want to report an issue to us, please attach the model, the command/steps, and the customized app (if any) so we can reproduce it locally.
Happy New Year, everyone! I cloned the res-512 branch (git clone -b res-512 --recursive https://github.com/dusty-nv/pytorch-ssd.git) that I found on this forum: NVIDIA Forum Link. From what I gathered, switching my SSD files to this branch should give better accuracy at a distance. However, I’m having trouble getting it to work. From my understanding, I just replace my files with the ones from this branch and then follow the Jetson Inference tutorial on YouTube by NVIDIA. Am I misunderstanding something here?