How would I load a custom model in python?
I tried:
net = jetson.inference.detectNet(argv="--model=/path/to/model.onnx")
But that still loads the default model and says there are 91 classes even though mine was trained on just 5.
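For reference, the jetson-inference examples pass `argv` as a *list* of flag strings rather than a single string, and for an ONNX model exported from `train_ssd` they also spell out the labels file and the input/output layer names so the default 91-class network is not loaded. A sketch of building that argument list (the paths are placeholders, and the actual `detectNet` call is commented out since it needs a Jetson device):

```python
# Placeholder paths -- substitute the real locations of your exported model
# and its labels file.
model_path = "/path/to/model.onnx"
labels_path = "/path/to/labels.txt"

# argv must be a list of '--flag=value' strings; the blob names below are
# the ones used by the ONNX SSD-Mobilenet export in the jetson-inference
# re-training tutorial.
argv = [
    f"--model={model_path}",
    f"--labels={labels_path}",
    "--input-blob=input_0",    # input layer name
    "--output-cvg=scores",     # confidence/coverage output layer
    "--output-bbox=boxes",     # bounding-box output layer
]

# On the device itself:
# import jetson.inference
# net = jetson.inference.detectNet(argv=argv)
```

With a single string, the flag may not be parsed at all (and forum formatting can silently turn `--` into an en-dash), which would explain falling back to the default 91-class model.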