Jetson-inference python file calling only the custom dataset model

Hi, I’m using Jetson-Inference. I followed the cat_dog guide on how to train a custom model - the results are great! Now I’m writing the Python script, but I can’t load my specific custom model: the --network flag only selects the built-in base networks, not my re-trained ResNet-18. How do I set this up so it uses only my custom-trained model?

Jetson-inference: GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
cat_dog: jetson-inference/pytorch-cat-dog.md at master · dusty-nv/jetson-inference · GitHub
python file: jetson-inference/imagenet-example-python-2.md at master · dusty-nv/jetson-inference · GitHub

Hi,

Sorry for the late reply.

Have you found a solution for your use case?
Do you still need help on this issue?

Thanks.

Hi @user20249, there are a few options:

  1. You can run imagenet.py with the extended command-line options for your custom model, as shown here: jetson-inference/pytorch-cat-dog.md at master · dusty-nv/jetson-inference · GitHub

  2. You can pass the command-line arguments to the imageNet object when you construct it like in imagenet.py: https://github.com/dusty-nv/jetson-inference/blob/01a395892ecc8acdbec4d8e9d6e8ac676416a507/python/examples/imagenet.py#L55

  3. You can hard-code the paths to your custom model and labels.txt in your Python script, like this:

```python
net = jetson.inference.imageNet(argv=['--model=model_path/resnet18.onnx', '--labels=model_path/labels.txt', '--input_blob=input_0', '--output_blob=output_0'])
```

(Note that --model should point to your ONNX file and --labels to your labels.txt.)
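Putting option 3 together, a minimal standalone script might look like the sketch below. The `build_imagenet_argv` helper is my own addition for readability, not part of jetson-inference; `model_path/` and the image filename are placeholders you would replace with your own paths, and the `input_0`/`output_0` blob names match the ONNX models exported by the cat_dog training guide.

```python
def build_imagenet_argv(model, labels, input_blob="input_0", output_blob="output_0"):
    """Assemble the extended command-line arguments that imageNet expects
    for a custom ONNX model. (Helper name is mine, not from jetson-inference.)"""
    return [
        "--model=" + model,
        "--labels=" + labels,
        "--input_blob=" + input_blob,
        "--output_blob=" + output_blob,
    ]

argv = build_imagenet_argv("model_path/resnet18.onnx", "model_path/labels.txt")

# On the Jetson itself you would then construct the network and classify:
#
#   import jetson.inference, jetson.utils
#   net = jetson.inference.imageNet(argv=argv)
#   img = jetson.utils.loadImage("my_image.jpg")
#   class_id, confidence = net.Classify(img)
#   print(net.GetClassDesc(class_id), confidence)
```

Because the paths are baked into argv, the script always loads your custom model and ignores the --network selection entirely.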

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.