GoogLeNet network's location

Hello, when I use the method from this page (jetson-inference/pytorch-collect.md at master · dusty-nv/jetson-inference · GitHub) to train GoogLeNet, the network's location is shown in the picture. Can I find the network at this location?

That is the base googlenet model that PyTorch is downloading. The base model was trained on the 1000-class Imagenet ILSVRC dataset. The base model is used to initialize the weights for transfer learning on your own dataset, so the model needn’t be trained entirely from scratch (that would require a lot more data and take a lot longer).

The model that you trained should be output under your ~/jetson-inference/python/training/classification/googlenet/ directory in your case. It will be output after at least one training epoch is run.

The directory "/home/bmw/.cache/~" is shown in the picture. Where can I find it on the Jetson Nano?


You should be able to navigate there from the terminal. If you are trying to find it from your file browser, you need to enable Show Hidden Files, because .cache is a hidden directory.

Can train.py be used to train with googlenet.py? Or is there a training method that can swap in PyTorch's googlenet as the network?

Or is there another method for training GoogLeNet on the Jetson Nano?

Hi @andy8902, you can use the --arch=googlenet argument to train.py when launching it from the command line (run python3 train.py --help for more info). Note that it is tested with resnet18 (the default arch), which is newer than googlenet and which I would recommend using.

Hello, I found the .pth file shown in the picture. Which program or file sets it up?

You need to re-train it first by launching train.py with the --arch=googlenet flag.

That cached file is just the initial checkpoint that torchvision uses when creating the model before training.
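If you want to find where those cached checkpoints live programmatically (the exact location varies by torch version; this just queries it):

```python
import torch.hub

# Downloaded pretrained checkpoints are stored under the hub directory,
# typically <hub_dir>/checkpoints (e.g. ~/.cache/torch/hub/checkpoints).
print(torch.hub.get_dir())
```
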

In addition, are there any pytorch training program settings that can adjust the Googlenet architecture?

train.py will automatically reshape the network's output layer to have the correct number of outputs to match the number of classes in your dataset. You can view other parameters such as learning rate and momentum by running train.py --help.

If you mean modifying the googlenet architecture's network topology for experimentation (i.e. by adding/removing layers, changing layer configurations, etc.), you would need to manually edit that inside torchvision. Unfortunately, you would then probably be unable to use the pre-trained checkpoint that torchvision downloaded to your .cache directory, because the network has been changed - so training it would take a lot longer.

If I want to use a modified neural network architecture, do I need to use the googlenet.py shown above, or some other program? And if I want to train it, can you tell me the training steps?

That googlenet.py is where the network definition of the googlenet architecture lives. It is not the training program - train.py is an example of that.

The train.py is for classification training. Since you generally seem interested in PyTorch, I recommend trying the PyTorch tutorials; here is the one for classification: https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py
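The core of any such classification training program is a loop like the following (a minimal, self-contained sketch in the spirit of that tutorial; the tiny linear model here is a stand-in, not the tutorial's network):

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Stand-in classifier for 32x32 RGB images with 10 classes
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# One training step on a random batch (a real loop iterates a DataLoader)
inputs = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))

optimizer.zero_grad()
loss = criterion(model(inputs), labels)
loss.backward()
optimizer.step()
```
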

I have a different question about the "Collecting your own Classification Datasets" tutorial. I want to use pre-existing photos instead of using the camera. After using the camera-capture tool to set up the template directory structure and a few test images from the camera, I added my JPGs to the appropriate directories. I reformatted the JPG files so they matched the resolution of the camera images (1289x720). It worked (more or less) using 100 test images in each of 4 classes. Reformatting the images was pretty tedious. Was it necessary? I think I will need a much larger number of images to get the accuracy I want. So far, my model is stuck at predicting (~50%) that the heron test picture is a bobcat.
As an alternative to reformatting the JPGs, I used the camera-capture tool to take pictures of images on my laptop. This was easy enough, but I worried that it might cause other issues.

Hi @rleyden, no, reformatting the images is not necessary. PyTorch will automatically downsample them to the resolution that the network expects when training (which is typically 224x224 for classification networks, except Inception I think is 299x299). So you don’t need to reformat them yourself.

That is probably correct - adding more images will improve the accuracy, and also varying the background, camera viewpoint, lighting conditions, and orientation of the images will make it more robust.

Can I avoid using googlenet.py? Or is there an easier method than this one ( https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py )?

I'm sorry, I'm not following your question. If you want to re-train googlenet on custom data, you can use train.py --arch=googlenet. If you just want to run the pre-trained googlenet model, you can run imagenet --network=googlenet (which is actually already the default network).

When I used the terminal line python train.py --arch=googlenet (where the googlenet is googlenet.py)...

When you run train.py --arch=googlenet (plus the other arguments like your dataset location) it will save the re-trained model after an epoch. You have to wait at least one epoch before it gets saved.
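Saving and reloading that per-epoch model boils down to something like this (a sketch; the actual filename and dictionary keys used by train.py may differ):

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 4)  # stand-in for the re-trained network

# Save a checkpoint at the end of an epoch
torch.save({"epoch": 0, "state_dict": model.state_dict()},
           "checkpoint.pth.tar")

# Reload it later (e.g. to resume training or export the model)
checkpoint = torch.load("checkpoint.pth.tar")
model.load_state_dict(checkpoint["state_dict"])
```
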

If you just want to use the existing 1000-class googlenet in TensorRT, it already comes with the jetson-inference project when you build it.

Sorry, I want to ask: when I use the terminal line python train.py --arch=googlenet to train the network, is the googlenet here googlenet.py?