Cannot load custom-trained or pre-trained models from x86 jetson-inference Docker container

Hi, I mostly do development on a Jetson Nano 4GB, but recently I wanted to push my work remotely and get the jetson-inference Docker container running on my laptop. I got the container running and tried to integrate code that loads a custom-trained classification model via imageNet. I get the error below:

jetson.inference -- imageNet loading build-in network 'hands-classification-resnet18'
jetson.inference -- imageNet invalid built-in network was requested ('hands-classification-resnet18')

Then I tried loading a pre-trained posenet model via poseNet, and got the error below:
jetson.inference -- poseNet loading build-in network 'resnet18_hand'

poseNet -- loading pose estimation model from:
-- model       networks/Pose-ResNet18-Hand/pose_resnet18_hand.onnx
-- topology    networks/Pose-ResNet18-Hand/hand_pose.json
-- colors      networks/Pose-ResNet18-Hand/colors.txt
-- input_blob  'input'
-- output_cmap 'cmap'
-- output_paf  'paf'
-- threshold   0.300000
-- batch_size  1

[TRT] poseNet -- failed to find topology file networks/Pose-ResNet18-Hand/hand_pose.json
[TRT] poseNet -- failed to load topology json from 'networks/Pose-ResNet18-Hand/hand_pose.json'
jetson.inference -- poseNet failed to load network.

What could be the possible issue?

Hi,

You will need a topology file for the pose estimation network:
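As a quick sanity check, the topology file is a small JSON describing the keypoints and skeleton links. Here is a minimal sketch for inspecting one; the `keypoints`/`skeleton` key names are assumptions based on the trt_pose-style format, not something confirmed in this thread:

```python
import json

def summarize_topology(path):
    """Print a quick summary of a pose topology JSON (trt_pose-style keys assumed)."""
    with open(path) as f:
        topo = json.load(f)
    keypoints = topo.get("keypoints", [])   # e.g. names of the hand keypoints
    skeleton = topo.get("skeleton", [])     # e.g. pairs of keypoint indices to connect
    print(f"{len(keypoints)} keypoints, {len(skeleton)} skeleton links")
    return keypoints, skeleton
```

If the file is present and parses, the keypoint count should match the `cmap` channel count reported by TensorRT.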

Thanks.

Hi, thank you so much for your response. resnet18-hand is not a folder I already have downloaded in my directory. Whenever I load it via posenet on my Jetson Nano it downloads automatically, but this time it failed. I already have the models.json set up like this.

In addition, what can I do about loading my custom-trained classification model? It was trained using dusty's classification training repository.

I downloaded the resnet18-hands model manually and it loaded fine. I'm still having problems with the custom model I created, where I adjusted models.json to import it easily.

I adjusted models.json as follows:

"classification": {
    "hands-classification-resnet18": {
        "alias": "hands-classification-resnet18",
        "dir": "custom/hands-classify",
        "model": "resnet18.onnx",
        "input": "input_0",
        "output": "output_0",
        "labels": "labels.txt",
        "description": "Classification Model based on hand gestures"
    },

So far the paths are right as well.
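One way to rule out path and formatting problems is to validate the models.json entry with plain Python before involving TensorRT. A hedged sketch; the networks directory location and key names here are assumptions based on the snippet above:

```python
import json
import os

def check_model_entry(models_json, category, name, networks_root):
    """Verify that a models.json entry parses as valid JSON and that every
    file it references actually exists under the networks directory."""
    with open(models_json) as f:
        models = json.load(f)   # raises JSONDecodeError on curly quotes, etc.
    entry = models[category][name]
    model_dir = os.path.join(networks_root, entry.get("dir", ""))
    missing = [os.path.join(model_dir, entry[k])
               for k in ("model", "labels")
               if k in entry and not os.path.isfile(os.path.join(model_dir, entry[k]))]
    return missing              # empty list means all referenced files exist

# e.g. (the paths here are assumptions, adjust to your install):
# check_model_entry("data/networks/models.json", "classification",
#                   "hands-classification-resnet18", "data/networks")
```

If `json.load` throws, the edit broke the file's syntax; if `missing` is non-empty, the `dir`/`model`/`labels` paths don't line up with where the files actually are.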

In my Python script, I'm calling the model as below:

hand_model = jetson_inference.imageNet('hands-classification-resnet18')

The error is as mentioned in the topic.

Hi @davex64, let’s take a quick step back - presuming your model is in fact an image classifier (not pose estimation), are you able to run it via imagenet/imagenet.py from the command-line first with a syntax similar to here:

https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-cat-dog.md#processing-images-with-tensorrt

Then to load fine-tuned resnet ONNX from code, see here:

Hi, thank you for your response @dusty_nv. I want to reiterate that I tested a pre-trained pose estimation hand model and a classification model that I trained myself, and I made adjustments in models.json to import the model from the list of options itself.

I get the error below when running the imagenet.py code as follows:

/jetson-inference/python/examples# python3 imagenet.py --network=hands-classification-resnet18 --input=/dev/video0
[TRT] imageNet -- failed to initialize.
jetson.inference -- imageNet failed to load built-in network 'hands-classification-resnet18'

Then I also tried the pose estimation model, the resnet18-hands model (which I had to download manually), and got the error below:

[TRT]
[TRT] CUDA engine context initialized on device GPU:
[TRT] -- layers 37
[TRT] -- maxBatchSize 1
[TRT] -- deviceMemory 9895936
[TRT] -- bindings 3
[TRT] binding 0
-- index 0
-- name 'input'
-- type FP32
-- in/out INPUT
-- # dims 4
-- dim #0 1
-- dim #1 3
-- dim #2 224
-- dim #3 224
[TRT] binding 1
-- index 1
-- name 'cmap'
-- type FP32
-- in/out OUTPUT
-- # dims 4
-- dim #0 1
-- dim #1 21
-- dim #2 56
-- dim #3 56
[TRT] binding 2
-- index 2
-- name 'paf'
-- type FP32
-- in/out OUTPUT
-- # dims 4
-- dim #0 1
-- dim #1 40
-- dim #2 56
-- dim #3 56
[TRT]
[TRT] binding to input 0 input binding index: 0
[TRT] binding to input 0 input dims (b=1 c=3 h=224 w=224) size=602112
[TRT] binding to output 0 cmap binding index: 1
[TRT] binding to output 0 cmap dims (b=1 c=21 h=56 w=56) size=263424
[TRT] binding to output 1 paf binding index: 2
[TRT] binding to output 1 paf dims (b=1 c=40 h=56 w=56) size=501760
[TRT]
[TRT] device GPU, /usr/local/bin/networks/Pose-ResNet18-Hand/pose_resnet18_hand.onnx initialized.
[gstreamer] initialized gstreamer, version 1.16.3.0
[gstreamer] gstCamera -- attempting to create device csi://0
[gstreamer] MIPI CSI camera isn't available on x86 - please use /dev/video (V4L2) instead
[gstreamer] gstCamera failed to build pipeline string
[gstreamer] gstCamera -- failed to create device csi://0
Traceback (most recent call last):
  File "/usr/local/bin/posenet.py", line 52, in <module>
    input = videoSource(opt.input_URI, argv=sys.argv)
Exception: jetson.utils -- failed to create videoSource device

Sorry if the thread was confusing to understand. To summarize: at first I loaded a custom classification model and a pose estimation model, both from my own code; in the latest reply I tried imagenet.py to load my classification model separately and posenet.py to load the pose model separately. The above are the errors I'm getting for each.

@davex64 imagenet/imagenet.py is only for image classification models, not pose models.

It is not finding your addition to models.json - try it using --model=hands-classification-resnet18 instead. Was there any console output in addition to this? You can also make your own copy of imagenet.py and use syntax like this:

https://github.com/dusty-nv/jetson-inference/blob/e8361ae7f5f3651c4ff46295b193291a93d52735/python/examples/imagenet.py#L51

net = imageNet(model="model/resnet18.onnx", labels="model/labels.txt",
               input_blob="input_0", output_blob="output_0")

Or you can check that this type of command-line syntax works:

imagenet.py --model=model/resnet18.onnx --input_blob=input_0 --output_blob=output_0 --labels=model/labels.txt input.jpg output.jpg

It sounds like you had each of your models working previously, which is good, so now it is just a matter of figuring out the paths. Don't hesitate to start adding debug print/printf statements to the code if you aren't sure why certain arguments aren't being picked up.

This one appears related to it wanting to use the positional argument for the video source instead of --input. Can you try it like posenet.py --model=xyz /dev/video0 or posenet.py --model=xyz input.jpg instead?
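To illustrate why the positional form matters, here is a hedged sketch of argument parsing in the style of posenet.py; the argument names and defaults are assumptions chosen to mirror the log above, not the script's exact code:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--model", type=str, default="resnet18-body")
# The video source is a *positional* argument; an unrecognized flag like
# --input=/dev/video0 would not fill it in, so the parser falls back to the
# default camera (hence the csi://0 attempt in the log above).
parser.add_argument("input_URI", type=str, nargs="?", default="csi://0")

args = parser.parse_args(["--model=xyz", "/dev/video0"])
print(args.input_URI)  # prints: /dev/video0
```

Passing the device as a bare positional argument is what lets `videoSource` receive `/dev/video0` instead of the CSI default.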

Hey @dusty_nv, I'm sorry I couldn't respond in time. Yes, I used posenet.py for a pose estimation model and imagenet.py for a classification model separately. I will check on this in a couple of days and let you know. I'm sorry if my question seems a bit complicated to understand.

