Implementing custom augmentations online in tao model train

I would like to add custom augmentations from albumentations, such as RandomResizedCrop, to my TAO training pipeline.

  1. Is this possible?
  2. If it is, can you point me to the source code where I could implement this, or point me towards a similar post?

Thanks!

I’ve looked into the source code for model training here.

A few things:

  1. I receive an error in my docker container when I run dataloader.get_dataset_tensors:
NotFoundError: in converted code:

    /workspace/tao-tf1/third_party/keras/tensorflow_backend.py:365 _map_func_set_random_wrapper  *
        return map_func(*args, **kwargs)
    /workspace/tao-tf1/nvidia_tao_tf1/cv/detectnet_v2/dataloader/drivenet_dataloader.py:174 __call__  *
        labels = self._extract_bbox_labels(example)
    /workspace/tao-tf1/nvidia_tao_tf1/cv/detectnet_v2/dataloader/drivenet_dataloader.py:265 _extract_bbox_labels  *
        sparse_coordinates = \
    /workspace/tao-tf1/nvidia_tao_tf1/blocks/multi_source_loader/types/tensor_transforms.py:171 sparsify_dense_coordinates  *
        regular_sparse_tensor = values_and_count_to_sparse_tensor(
    /workspace/tao-tf1/nvidia_tao_tf1/core/processors/processors.py:328 values_and_count_to_sparse_tensor  *
        op = load_custom_tf_op("op_values_and_count_to_sparse_tensor.so")
    /workspace/tao-tf1/nvidia_tao_tf1/core/processors/processors.py:201 load_custom_tf_op
        return tf.load_op_library(abs_path)
    /usr/local/lib/python3.8/dist-packages/tensorflow_core/python/framework/load_library.py:61 load_op_library
        lib_handle = py_tf.TF_LoadLibrary(library_filename)

    NotFoundError: /workspace/tao-tf1/nvidia_tao_tf1/core/processors/../lib/op_values_and_count_to_sparse_tensor.so: cannot open shared object file: No such file or directory

I’m in a Docker container launched with:

(launcher) ubuntu@ip-172-31-6-30:~/tao_tensorflow1_backend$ tao_tf --gpus all --port 8888:8888

I’ve met all of the software requirements (including having nvidia-container-toolkit installed). It looks like there is an issue with the TensorFlow image, but I’m not sure how to resolve it.

  2. This is the point where we receive images/labels for model training. I’ve explored the dataloader class but I’m getting a bit lost. Would it make sense to apply albumentations somewhere in the dataloader, or after we receive images/labels from this class? A rough sketch of what I have in mind is below.
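For context, here is a rough sketch of the kind of thing I have in mind. It is not working TAO code: it just wraps an albumentations RandomResizedCrop in tf.py_func so it can run inside the TF 1.15 graph. The bbox format (pascal_voc), the 544x960 target resolution, and the tensor shapes/dtypes are assumptions on my part, and depending on the albumentations version RandomResizedCrop may take size=(h, w) instead of height/width.

    import albumentations as A
    import numpy as np
    import tensorflow as tf

    # Assumed target resolution; newer albumentations versions take
    # size=(height, width) instead of separate height=/width= arguments.
    _transform = A.Compose(
        [A.RandomResizedCrop(height=544, width=960, scale=(0.8, 1.0), p=1.0)],
        bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_ids"]),
    )

    def _augment_np(image, bboxes, class_ids):
        """Run albumentations on numpy arrays (invoked via tf.py_func)."""
        out = _transform(image=image,
                         bboxes=bboxes.tolist(),
                         class_ids=class_ids.tolist())
        aug_boxes = np.asarray(out["bboxes"], dtype=np.float32).reshape(-1, 4)
        aug_ids = np.asarray(out["class_ids"], dtype=np.int64)
        return out["image"].astype(np.float32), aug_boxes, aug_ids

    def augment(image, bboxes, class_ids):
        """Graph-mode wrapper; image is HWC float32, bboxes are [N, 4] pascal_voc."""
        aug_image, aug_boxes, aug_ids = tf.py_func(
            _augment_np, [image, bboxes, class_ids],
            [tf.float32, tf.float32, tf.int64])
        aug_image.set_shape([544, 960, 3])  # assumed output shape
        aug_boxes.set_shape([None, 4])
        return aug_image, aug_boxes, aug_ids

The idea would be to call augment() either inside the dataloader or on the image/label tensors returned by get_dataset_tensors, which is exactly what I’m unsure about.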

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Please develop your custom augmentations inside the TAO Docker container. After that, you can use docker commit to save your changes (see the example after the docker run command below).

$ docker run --runtime=nvidia -it --rm -v /home/morganh:/home/morganh nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash
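For example, to save the modified container as a new image afterwards (the container ID and tag below are placeholders):

$ docker commit <container_id> nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5-custom-aug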

The default augmentation is implemented in nvidia_tao_tf1/cv/detectnet_v2/dataloader/drivenet_dataloader.py in the NVIDIA/tao_tensorflow1_backend repository on GitHub (main branch).

BTW, the lib is available in the installed package, as shown below.
root@c038ddb07924:/usr/local/lib/python3.8/dist-packages# ls ./nvidia_tao_tf1/core/lib/op_values_and_count_to_sparse_tensor.so
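If you are running against a source checkout at /workspace/tao-tf1 that shadows the installed wheel, one possible workaround (an assumption on my side, not official guidance) is to copy the prebuilt op from the installed package into the checkout so that load_custom_tf_op can find it:

$ mkdir -p /workspace/tao-tf1/nvidia_tao_tf1/core/lib
$ cp /usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/core/lib/op_values_and_count_to_sparse_tensor.so \
     /workspace/tao-tf1/nvidia_tao_tf1/core/lib/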

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.