Hi Dustin,
I have installed JP441 and jetson-inference on my Nano.
The tutorial and your videos are very good.
Thank you!
I would like to retrain a network with meteor scatter spectrograms for semantic segmentation. The Jetson Nano is connected to a PC via an HDMI-to-USB3 capture link, so that segnet-camera, for example, can fetch the incoming echoes.
Now I have retrained the cat/dog example. I trained for 10 epochs, and everything went without problems.
However, the retrained ResNet-18 classifies everything (even a full-screen cat from the internet) as 67% dog.
Using the segnet camera program, everything is red. I would expect that only one class is red.
This was the command line:
willi@willis:~/jetson-inference/python/training/classification$ imagenet --model=models/cat_dog/resnet18.onnx --labels=data/cat_dog/labels.txt --input_blob=input_0 --output_blob=output_0 /dev/video1
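The model itself was produced with the re-training steps from the classification tutorial, roughly along these lines (I am writing the commands from memory, so the exact flags may differ):

python3 train.py --model-dir=models/cat_dog --epochs=10 data/cat_dog
python3 onnx_export.py --model-dir=models/cat_dog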
I tested imagenet and segnet; see the attached images.
It would be very nice if you could help me.
Best regards,
Wilhelm
Hi,
Sorry, I'm not familiar with spectrogram images.
Does this kind of data contain color information, like RGB, YCbCr, etc.?
If it mainly captures luminance, you will need a very different setup/model.
Thanks.
Hi AastaLLL,
thank you for the information. Currently it does not even work with the color images of cats and dogs.
But yes, the spectrograms mainly contain one or two colors; see the attached image.
What kind of model would you suggest?
Best regards,
Wilhelm
Hi @WiSi-Testpilot, the segmentation networks (FCN-ResNet18) are different from the classification networks (ResNet18), so loading a classification model with the segnet program will not work.
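For comparison, the pre-trained segmentation models are run through the segnet program with one of the FCN-ResNet18 networks, along these lines (the network name and camera device here are only examples and depend on your setup):

segnet --network=fcn-resnet18-voc /dev/video1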
Re-training the segmentation models with PyTorch isn’t yet part of the Hello AI World tutorial, but you can find some resources about it here:
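To give a rough idea of what that re-training involves, below is a minimal PyTorch sketch of fine-tuning a pre-trained torchvision segmentation model for two classes. It uses torchvision's fcn_resnet50 purely for illustration (the FCN-ResNet18 models above are trained separately, so treat the details here as an assumption), and the dummy batch only stands in for real images and per-pixel class-index masks:

import torch
import torch.nn as nn
from torchvision import models

num_classes = 2  # e.g. background + meteor echo (illustrative)

# Start from a pre-trained FCN and swap the final classifier layer
# so it predicts the new number of classes.
model = models.segmentation.fcn_resnet50(pretrained=True)
model.classifier[4] = nn.Conv2d(512, num_classes, kernel_size=1)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Placeholder data: one batch of random images and masks, standing in for a real DataLoader.
train_loader = [(torch.randn(2, 3, 320, 320),
                 torch.randint(0, num_classes, (2, 320, 320)))]

model.train()
for images, masks in train_loader:
    optimizer.zero_grad()
    logits = model(images)['out']   # per-pixel class scores, shape (N, num_classes, H, W)
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()

The fine-tuned model can then, in principle, be exported with torch.onnx.export and loaded by segnet, similar to the classification example.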
Hello Dustin,
thank you for the very interesting information.
Would it be possible for you to add it to the Hello AI World tutorial?
Best regards,
Wilhelm
The datasets that I used for training the segmentation models in Hello AI World were larger, so I didn't do that training onboard Jetson (rather on my PC + GPU card). At some point I will have to do some experimentation to see whether training simpler segmentation models onboard Jetson is stable on smaller, user-created datasets.
Dustin, my suggestion is to make only two classes with the cat and dog images. Then we can learn the procedure.
Tomorrow I’ll get a Xavier NX.
Best regards,
Wilhelm
An Update:
I got the Xavier NX. The cat/dog retraining is four times faster than on my Nano.
Very nice.
Best regards,
Wilhelm
Hello Dustin,
do you have any news about semantic segmentation? I think it's OK if you make a small dataset with only two classes; the important thing is that we learn the procedure. I would like to detect meteor echoes in spectrograms (see images) and distinguish them from disturbances. This is currently programmed conventionally.
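What I have in mind is to load such a re-trained model with segnet afterwards, roughly like this (the file names are only placeholders, and I am quoting the flags from memory):

segnet --model=models/meteor/fcn_resnet18.onnx --labels=data/meteor/classes.txt --colors=data/meteor/colors.txt --input_blob=input_0 --output_blob=output_0 /dev/video1

with classes.txt containing one class name per line (e.g. background, meteor, disturbance) and colors.txt one RGB triple per line.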
Thanks in advance,
best regards,
Wilhelm
Hi @WiSi-Testpilot, unfortunately I haven't had the time yet, sorry about that. I would continue referencing the tutorial I linked to above, which shows how to train segmentation models.
Hello Dustin,
thank you for your quick reply.
I'm afraid the instructions won't work for me on the Nano, so I would prefer to wait for your step-by-step instructions.
Have a nice day,
best regards,
Wilhelm