Training UNET with TLT

Hello, I want to train UNet with TLT.
Some configuration options do not work for me.
I would appreciate your help.

  1. use_pooling - does not work at all
  2. regularizer - the default value is L2, but training only worked for us with L1
  3. augmentation_config - the error is:
    Message type "DatasetConfig" has no field named "augmentation_config"
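(For context, the regularizer is selected in the training_config section of the spec file. The fragment below is only a sketch from memory; the field names and especially the weight value should be verified against the UNet sample spec that ships with the notebooks:)

training_config {
  regularizer {
    type: L1
    weight: 2e-5  # illustrative value only - check the sample spec
  }
}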

Besides that, is it possible to train with multiple image sizes, or with multiple types of images?

• Hardware: GeForce RTX 2080
• Network Type: Unet
• TLT Version: tlt-streamanalytics:v3.0-dp-py3

Thanks, Daphna

Please update your TLT version to tlt-streamanalytics:v3.0-py3.

See https://docs.nvidia.com/tlt/tlt-user-guide/text/tlt_quick_start_guide.html#installing-tlt

The nvidia-tlt package is hosted on nvidia-pyindex, which has to be installed as a prerequisite for installing nvidia-tlt.

If you installed an older version of the nvidia-tlt launcher, you may upgrade to the latest version by running the following command:

pip3 install --upgrade nvidia-tlt

  1. The error I get when using use_pooling: true is:

ValueError: A Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 256, 18, 10), (None, 128, 240, 136)]
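For what it's worth, this kind of Concatenate shape mismatch is a general UNet property rather than something TLT-specific: 2x2 pooling floors odd spatial dimensions, so after upsampling, the decoder feature maps no longer line up with the encoder skip connections unless every intermediate size stays even. A minimal sketch of the arithmetic (the helper names are ours, not TLT's):

```python
def encoder_sizes(h, w, depth):
    """Spatial size of each encoder level under 2x2 max pooling (floor division)."""
    sizes = [(h, w)]
    for _ in range(depth):
        h, w = h // 2, w // 2
        sizes.append((h, w))
    return sizes

def skips_align(h, w, depth):
    """Skip connections line up only if 2x upsampling exactly inverts each
    pooling step, i.e. every intermediate size is even before pooling."""
    for _ in range(depth):
        if h % 2 or w % 2:
            return False
        h, w = h // 2, w // 2
    return True

# Sides divisible by 2**depth are safe:
print(skips_align(256, 256, 4))   # True
# An odd intermediate size breaks the decoder/encoder match:
print(skips_align(572, 572, 4))   # 572 -> 286 -> 143 (odd): False
```

If this is the cause, padding or resizing the input so that height and width are divisible by 2**depth (e.g. multiples of 16 for a four-level encoder) usually avoids the error.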

  1. Thanks. Your solution helped me.
  2. I tried the tlt-streamanalytics:v3.0-py3 version and received the same error message:

google.protobuf.text_format.ParseError: 45:5 : Message type "DatasetConfig" has no field named "augmentation_config".

Please refer to the NVIDIA TAO documentation,
or download the Jupyter notebook and find the sample spec for UNet.

wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/tlt_cv_samples/versions/v1.1.0/zip -O tlt_cv_samples_v1.1.0.zip
unzip -u tlt_cv_samples_v1.1.0.zip -d ./tlt_cv_samples_v1.1.0 && rm -rf tlt_cv_samples_v1.1.0.zip && cd ./tlt_cv_samples_v1.1.0

Hello again,
We are now facing a new problem, possibly related.
When working with the TLT version you proposed (tlt-streamanalytics:v3.0-py3), we cannot see the examples folder. When working with the previous version, the examples folder exists.
About a month ago, working with the previous version, we used TLT for UNet training. It was possible to prune the model, and now we cannot find that option. In addition, the deployment used to produce an .etlt file, and now the export produces an engine file. Can you help us understand and resolve these difficulties?
If not, where can we find a solution?
Thanks,
Daphna

Please download the Jupyter notebook and find the sample spec for UNet.
TLT Quick Start Guide — Transfer Learning Toolkit 3.0 documentation

wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/tlt_cv_samples/versions/v1.1.0/zip -O tlt_cv_samples_v1.1.0.zip
unzip -u tlt_cv_samples_v1.1.0.zip -d ./tlt_cv_samples_v1.1.0 && rm -rf tlt_cv_samples_v1.1.0.zip && cd ./tlt_cv_samples_v1.1.0

Please refer to UNET — Transfer Learning Toolkit 3.0 documentation

OK … Is it possible to train UNet on a dataset other than ISBI?
This option was available in the previous version of tlt-streamanalytics…

Thanks for the help. We read the TLT documentation but could not find a solution to the questions we asked you. Can you refer us to a specific section that can help?

Thank you again,
Daphna

Yes, UNet can be trained on other datasets.
Also, regarding your previous questions:

  1. For UNet pruning, please see the NVIDIA TAO documentation.

  2. For exporting to an engine, please see the NVIDIA TAO documentation.
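(A pruning invocation with the v3.0 launcher looks roughly like the fragment below; the exact arguments should be verified against the UNet pruning section of the documentation, and the paths and key here are placeholders:)

tlt unet prune -e unet_train_spec.txt \
               -m unet_trained.tlt \
               -o unet_pruned.tlt \
               -k $YOUR_KEY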

As mentioned above, see the NVIDIA TAO documentation. Please download the Jupyter notebook, launch it, and find the UNet sample:
# Download and extract the TLT CV sample notebooks
wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/tlt_cv_samples/versions/v1.1.0/zip -O tlt_cv_samples_v1.1.0.zip
unzip -u tlt_cv_samples_v1.1.0.zip -d ./tlt_cv_samples_v1.1.0 && rm -rf tlt_cv_samples_v1.1.0.zip && cd ./tlt_cv_samples_v1.1.0
# Install and launch Jupyter, then open the UNet notebook
pip3 install jupyter
jupyter notebook --ip 0.0.0.0 --allow-root --port 8888

Please follow its steps.