Extending a pre-trained network to identify new objects using TX2 and JetPack 3.2.1

Hi there,

I have a Jetson TX2 module and I’m using JetPack 3.2.1.
Is it possible to extend the pre-trained networks available in the Hello AI World guide to identify new objects?
I would like the pre-trained network to keep identifying all the objects it already recognizes, and to extend it to identify new objects as well.

Regards,
Farough

Hi,

You can add new objects by retraining.
Please check this tutorial for details:

Thanks.

Thanks for the helpful suggestions.

I installed JetPack 4.4 on the TX2 developer kit. When I run the script for video image recognition, it just displays the name of a single object in the top-left corner.

How can I identify multiple objects at the same time, draw a rectangle around each object, and display the object’s name beside it, similar to this video?
https://www.youtube.com/watch?v=4eIBisqx9_g

Can you give me an idea of how to log the output of OpenCV when it detects a specific object?
For example, if it detects a cup, log it with the time of detection.

Thanks for your support

Hi,

Please note that a classification model generates a class label but no object location.
If you want object bounding boxes, please use a detection model.
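For example, with the jetson-inference Python API you can log a specific class with a timestamp (a minimal sketch; the “csi://0” source, the model name, and the “cup” label are placeholders to adapt):

import datetime
import jetson.inference
import jetson.utils

# load a detection model and open a camera stream (placeholders to adapt)
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)  # also overlays the boxes on img via CUDA
    for d in detections:
        if net.GetClassDesc(d.ClassID) == "cup":
            # log the class with the time of detection
            print("{} cup conf={:.2f} box=({:.0f},{:.0f},{:.0f},{:.0f})".format(
                datetime.datetime.now().isoformat(), d.Confidence,
                d.Left, d.Top, d.Right, d.Bottom))
    display.Render(img)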

In jetson-inference, the bounding boxes are drawn via a CUDA overlay rather than OpenCV.
You can find the implementation here:

Thanks.

Hello,

I installed JetPack 4.4 on the NVIDIA Jetson TX2.

I tried to retrain ssd-mobilenet following the NVIDIA tutorial at:
https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-ssd.md

When I run the following script, it gives me an error:
pip3 install -v -r requirements.txt

Also, when I run the following command to download the fruit dataset, it does not work:
python3 open_images_downloader.py --class-names "Apple,Orange,Banana,Strawberry,Grape,Pear,Pineapple,Watermelon"

There is no open_images_downloader.py on my computer. There is also no train_ssd.py script.

It seems that something is missing from the NVIDIA documentation, or I have missed something. I have a train.py script for classification and segmentation, but nothing for detection was downloaded to my computer.

How should I solve this problem?
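One likely cause, assuming the repository was cloned without its submodules: the SSD training code is a git submodule of jetson-inference, so a plain clone leaves python/training/detection/ssd empty. Fetching the submodules should restore train_ssd.py and open_images_downloader.py:

git clone --recursive https://github.com/dusty-nv/jetson-inference
# or, inside an existing clone:
git submodule update --init
cd jetson-inference/python/training/detection/ssd
pip3 install -v -r requirements.txt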

Regards,
Farough

Hi, I am using PyTorch to train a network on my own images on an NVIDIA Jetson TX2 with JetPack 4.4.
Why do I get NaN during training? Attached is a screenshot.

2020-09-08 16:55:03 - Epoch: 0, Step: 330/349, Avg Loss: nan, Avg Regression Loss nan, Avg Classification Loss: nan

Thanks.

Hi,

A common issue is that the learning rate is too large for the training task.
Here are some discussions for your reference:

Would you mind setting a smaller learning rate and trying again?
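For example, in a plain PyTorch loop (a generic sketch with stand-in model and data, not the actual train_ssd.py code):

import torch
import torch.nn as nn

# hypothetical stand-ins so the sketch runs; substitute your model and loader
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
batches = [(torch.randn(4, 10), torch.randint(0, 2, (4,))) for _ in range(5)]

# the key change: a smaller learning rate, e.g. 0.001 instead of 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

for inputs, targets in batches:
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    # optional: clip gradients so one bad batch cannot push the loss to NaN
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
    optimizer.step()

If you are using train_ssd.py, it also exposes a learning-rate option (see python3 train_ssd.py --help).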

Thanks.

Hello,

I need to use the NVIDIA TX2 module on a dev kit to extend a network.
For example, I want to take a pre-trained neural network that was trained on 20 objects and add 2 or more objects to it, making it 22 objects. I also want object detection, with boxes drawn around each identified object, not just object classification.
I don’t want to use the DIGITS procedure; I just want to use the TX2 on the dev kit. Is it possible to do the network extension on the TX2 module itself, or do I need to use DIGITS training?

Please give me instructions for extending the network, since it is critical to what I am doing. I need to put the TX2 on a drone for a demonstration and my time is very limited.

Thanks very much for your help.

Regards,
Farough

Hi,

Adding a new object requires retraining the model.
A simple retraining can be done directly on the Jetson without using DIGITS.

The following tutorial (the same one you shared) is a good guide for this:
https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-ssd.md
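Roughly, the workflow on that page looks like this (the flags here are abbreviated from the tutorial; please check the page for the exact options):

cd jetson-inference/python/training/detection/ssd
python3 open_images_downloader.py --class-names "Apple,Orange,Banana,Strawberry,Grape,Pear,Pineapple,Watermelon" --data=data/fruit
python3 train_ssd.py --data=data/fruit --model-dir=models/fruit --batch-size=4 --epochs=30
python3 onnx_export.py --model-dir=models/fruit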

Based on the log you shared on Sep. 9, it seems that some of the data annotations are incorrect.
Would you mind checking that first? Are you creating the data with this doc?

Thanks.

Hi there,

In order to retrain the network and extend it to more objects, I have to download the images and annotation data for the previously trained objects. When I capture new images using the link you provided, I don’t know how to merge those images and annotations from the camera with the images and annotation data I downloaded for the pre-trained network. How do I merge the new image/annotation data with the previous ones?
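One rough approach, assuming both sets are in the same Pascal VOC-style layout that train_ssd.py accepts with --dataset-type=voc (an assumption; the downloaded Open Images data would first need converting or re-capturing in that layout): merging is mostly copying files and concatenating the image-ID lists. A sketch:

import shutil
from pathlib import Path

def merge_voc(src, dst):
    """Copy one Pascal VOC-style dataset into another (assumes unique file names)."""
    src, dst = Path(src), Path(dst)
    for sub in ("Annotations", "JPEGImages"):
        (dst / sub).mkdir(parents=True, exist_ok=True)
        for f in (src / sub).glob("*"):
            shutil.copy2(f, dst / sub / f.name)
    # append the image-ID lists so training sees both sets
    for split in ("train.txt", "val.txt", "trainval.txt", "test.txt"):
        s = src / "ImageSets" / "Main" / split
        d = dst / "ImageSets" / "Main" / split
        if s.exists():
            d.parent.mkdir(parents=True, exist_ok=True)
            with open(d, "a") as out:
                out.write(s.read_text())

merge_voc("data/my_new_object", "data/merged")  # hypothetical paths

The labels file used for training must also list the union of the old and new class names.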

Regards,
Farough

Hello,

I want to use detection networks like YOLOv3 with the Transfer Learning Toolkit.
I need to add a new object to the pre-trained network. How can I make label files for the new object?
Is there a tool I can use to make images and labels for YOLOv3?
I went through the augmentation section of the TLT documentation. It does not explain how to make images and corresponding labels for new objects.
I cannot create coordinate labels for new objects by hand.

Regards,
Farough

Hello,
How do I generate annotations for images of a new object to augment a pre-trained detection network like YOLOv3 in the Transfer Learning Toolkit?

Thanks

Hi,

Sorry for the late update.

You will need to write a parser for data from other sources.
The format and paths are defined on the webpage (see its “Preparing data” section); please check it for the information.

For TLT training, you can find some information in this tutorial:
https://developer.nvidia.com/blog/training-custom-pretrained-models-using-tlt/

Thanks.

In the KITTI format for TLT detection, the label file requires the coordinates of the bounding box. When I have a set of new images, how do I obtain the coordinates of the bounding boxes? I don’t know how to generate a bounding box and get its coordinates for new images.
Thanks

Hi,

Collecting the training data means that you will need to add the labels for the expected output yourself, typically with an annotation tool.
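In practice, people usually draw the boxes with an annotation tool (LabelImg is a common choice) and convert the results to KITTI label files. A minimal sketch of writing one KITTI label line, assuming you already have the pixel coordinates (the class name and numbers here are hypothetical):

import os

def kitti_line(class_name, xmin, ymin, xmax, ymax):
    # KITTI detection labels have 15 space-separated fields per object;
    # for TLT only the class name and bbox need real values, the rest can be 0
    return ("{} 0.00 0 0.00 {:.2f} {:.2f} {:.2f} {:.2f} "
            "0.00 0.00 0.00 0.00 0.00 0.00 0.00").format(
            class_name, xmin, ymin, xmax, ymax)

os.makedirs("labels", exist_ok=True)
with open("labels/000001.txt", "w") as f:
    f.write(kitti_line("cup", 120, 80, 260, 240) + "\n")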

Thanks.

Thanks so much for replying to my questions so far. Your assistance is greatly appreciated.

In the Transfer Learning Toolkit (TLT), I am trying to use tlt-converter to convert a .etlt file to a .trt engine file. Below are the command and the error I get. How do I solve this problem? Please answer quickly, since our project is blocked by it (a picture is attached):

sudo ./tlt-converter /opt/nvidia/deepstream/deepstream-5.0/samples/export_to_tx2/final_model.etlt -k NTI3ZTQ1azE0Yjc0bWFmcW81cHRtaXA1OXE6ZDdjNDlkOWYtZjgxMS00ZTI2LTkxMWYtMTAzYmI5ODljYzNj -c /opt/nvidia/deepstream/deepstream-5.0/samples/export_to_tx2/final_model_int8_cache.bin -o predictions/Softmax -d 3,224,224 -i nchw -m 64 -t int8 -e /opt/nvidia/deepstream/deepstream-5.0/samples/export_to_tx2/out_classification.trt -b 4

./tlt-converter: error while loading shared libraries: libnvinfer.so.5: cannot open shared object file: No such file or directory
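For reference, libnvinfer.so.5 belongs to TensorRT 5.x, while JetPack 4.4 ships TensorRT 7.x, so this error usually means the tlt-converter build does not match the installed JetPack; NVIDIA publishes separate tlt-converter builds per JetPack/TensorRT version. A quick check of what is installed:

ls /usr/lib/aarch64-linux-gnu/libnvinfer.so*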

Another question:
I tried to use a .etlt file exported from a classification model and to integrate it into DeepStream using option 1. I don’t know how to make configuration files for a .etlt classification model with option 1. Can you please send me sample configuration files for a classification model (not detection) for integration into DeepStream using option 1?

I am confused about how to address the required files in the configuration files.

Best Regards,
Farough

Hello,

Your timely reply is greatly appreciated.
I am using the latest JetPack 4.4 and have installed DeepStream on a Jetson TX2. I am using the Transfer Learning Toolkit.
I trained a classification model (not detection) on a computer with an NVIDIA GPU and obtained xxxx.etlt and xxxxx.bin files.
Now I need to integrate these files with deepstream to run them on TX2.
I tried to modify the configuration files for classification using option 1 integration. The camera shows up, but nothing is classified.
My neural network worked fine on the computer with the NVIDIA GPU, since I could get classified images there, but no classification happens on the TX2.

I guess there is a problem with the configuration files. Attached are my configuration files. I don’t know what to write for the model-engine-file=… line in the main and secondary configuration files. I am not using the primary file, since I assume that file is for detection (I don’t know if this assumption is right or wrong).

Since I am using option 1 integration with DeepStream, what should I write for this option?
Can you check and modify my configuration files?
Can you please send me working configuration files for a classification model for integration into DeepStream using option 1?

Regards,
Farough

aa_config_infer_secondary_vehiclemakenet.txt (2.08 KB)

aa_deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt (4.15 KB)

Is it possible to run only classification, without detection, with the Transfer Learning Toolkit? If yes, please send me a sample configuration file that works.
Thanks
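For reference, a classification-only setup is possible by running the model as the primary GIE with network-type=1 (classifier) and process-mode=1 (full frame), so no detector is involved. A minimal sketch of the nvinfer [property] section (not an official sample; the key names follow the DeepStream 5.0 nvinfer format, and every path, key, and blob name below is a placeholder to replace):

[property]
gpu-id=0
# preprocessing values depend on how the model was trained
net-scale-factor=1.0
# placeholders: replace with your own files and NGC key
tlt-encoded-model=final_model.etlt
tlt-model-key=<your key>
labelfile-path=labels.txt
int8-calib-file=final_model_int8_cache.bin
# blob names as used when the model was exported (assumptions)
uff-input-blob-name=input_1
output-blob-names=predictions/Softmax
infer-dims=3;224;224
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=1
# classifier, run on the full frame
network-type=1
process-mode=1
classifier-threshold=0.2

model-engine-file is optional here: if it points to an existing serialized engine, nvinfer loads it directly; otherwise nvinfer builds the engine from the .etlt on the first run and logs where it saved it, so you can fill the line in afterwards.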