How do I create an AI model from annotation and image data?

I have annotations in YOLO format and the corresponding image data.
I am supposed to use TAO or TensorRT.
The data look like this.

annotation

1 0.510417 0.605208 0.494444 0.032292
2 0.430208 0.648177 0.204861 0.051562
0 0.376042 0.559375 0.189583 0.029167

class

barcode
name
price
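
(For reference, the standard YOLO convention, which the thread does not spell out, is: class_id x_center y_center width height, with the four box values normalized to [0, 1] by the image size, and class IDs indexing the class list in order, so 0 = barcode, 1 = name, 2 = price. Assuming, say, a 1920x1080 image, the first annotation line above would denormalize to a box centered near (980, 654) and roughly 949 x 35 pixels.)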

Is the following process the right way to create a model from annotation and image data?

  1. Convert the YOLO format to COCO or KITTI format (see the sketch after this list)
  2. Write spec.yaml
  3. tao train
    $ ngc registry model download-version nvidia/tao/pretrained_classification_tf2:efficientnet_b0
    $ tao classification_tf2 train -e /path/to/spec.yaml
    
  4. tao evaluate
    $ tao classification_tf2 evaluate -e /path/to/spec.yaml
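
For step 1, here is a minimal conversion sketch (my own illustration, not an official TAO tool; the class list, the image lookup via Pillow, and the file layout are assumptions):

# Minimal sketch: convert one YOLO label file to a KITTI label file.
# Assumption: class IDs index CLASSES in order, and the matching image
# is available so the normalized boxes can be converted to pixels.
from PIL import Image

CLASSES = ["barcode", "name", "price"]  # order must match the YOLO class IDs

def yolo_to_kitti(yolo_txt, image_path, kitti_txt):
    img_w, img_h = Image.open(image_path).size
    with open(yolo_txt) as fin, open(kitti_txt, "w") as fout:
        for line in fin:
            cls_id, xc, yc, w, h = line.split()
            xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
            # YOLO stores a normalized center/size; KITTI wants pixel corners.
            x1 = (xc - w / 2.0) * img_w
            y1 = (yc - h / 2.0) * img_h
            x2 = (xc + w / 2.0) * img_w
            y2 = (yc + h / 2.0) * img_h
            # KITTI fields: type truncated occluded alpha bbox(4) dims(3) loc(3) rot_y.
            # For 2D detection only the class name and bbox matter; the rest is zero.
            fout.write(f"{CLASSES[int(cls_id)]} 0.00 0 0.00 "
                       f"{x1:.2f} {y1:.2f} {x2:.2f} {y2:.2f} "
                       f"0.00 0.00 0.00 0.00 0.00 0.00 0.00\n")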
    

And I do not know how to write spec.yaml for this annotation data.

Any advice is appreciated.

Ubuntu: 22.04
TAO Toolkit: 5.3

You are mentioning a classification network instead of a detection network.
For a classification network, please refer to Data Annotation Format - NVIDIA Docs.

To get started, you can find the classification notebooks under tao_tutorials/notebooks/tao_launcher_starter_kit at main · NVIDIA/tao_tutorials · GitHub.

@Morganh
Thank you for your reply and advice.
I am sorry, I made a mistake: it was not classification_tf2 but detectnet_v2.
So here is what I did and the error I encountered.

I followed the detectnet_v2.ipynb notebook and ran several cells.

# I only changed this one.
os.environ["LOCAL_PROJECT_DIR"] = "/home/ym7/tao-jupyter/getting_started_v5.3.0/notebooks/tao_launcher_starter_kit/detectnet_v2_test"

But when I ran the following, a permission error occurred.

!tao model detectnet_v2 dataset_convert -d $SPECS_DIR/spec_tfrecords_kitti.txt \
                                        -o $LOCAL_DATA_DIR/tfrecords_aikata

2024-06-01 23:35:09,257 [TAO Toolkit] [INFO] root 160: Registry: ['nvcr.io']
2024-06-01 23:35:09,318 [TAO Toolkit] [INFO] nvidia_tao_cli.components.instance_handler.local_instance 360: Running command in container: nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
2024-06-01 23:35:09,549 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 301: Printing tty value True
2024-06-01 14:35:10.459957: I tensorflow/stream_executor/platform/default/dso_loader.cc:50] Successfully opened dynamic library libcudart.so.12
2024-06-01 14:35:10,495 [TAO Toolkit] [WARNING] tensorflow 40: Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
2024-06-01 14:35:11,562 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
2024-06-01 14:35:11,591 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
2024-06-01 14:35:11,595 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
2024-06-01 14:35:12,635 [TAO Toolkit] [WARNING] matplotlib 500: Matplotlib created a temporary config/cache directory at /tmp/matplotlib-fbqv3xf1 because the default path (/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
2024-06-01 14:35:12,807 [TAO Toolkit] [INFO] matplotlib.font_manager 1633: generated new fontManager
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
WARNING:tensorflow:TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
2024-06-01 14:35:14,027 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
2024-06-01 14:35:14,053 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
2024-06-01 14:35:14,056 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
2024-06-01 14:35:14,415 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.build_converter 87: Instantiating a kitti converter
2024-06-01 14:35:14,416 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 71: Creating output directory /home/ym7/tao-jupyter/getting_started_v5.3.0/notebooks/tao_launcher_starter_kit/detectnet_v2_test/data
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/dataset_convert.py", line 168, in <module>
    raise e
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/dataset_convert.py", line 137, in <module>
    main()
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/dataset_convert.py", line 131, in main
    converter = build_converter(dataset_export_config, args.output_filename, args.validation_fold)
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/dataio/build_converter.py", line 91, in build_converter
    converter = KITTIConverter(**constructor_kwargs)
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/dataio/kitti_converter_lib.py", line 87, in __init__
    super(KITTIConverter, self).__init__(
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/dataio/dataset_converter_lib.py", line 72, in __init__
    os.makedirs(output_dir)
  File "/usr/lib/python3.8/os.py", line 213, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/usr/lib/python3.8/os.py", line 213, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/usr/lib/python3.8/os.py", line 213, in makedirs
    makedirs(head, exist_ok=exist_ok)
  [Previous line repeated 3 more times]
  File "/usr/lib/python3.8/os.py", line 223, in makedirs
    mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/home/ym7'
Execution status: FAIL
2024-06-01 23:35:18,806 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 363: Stopping container.

How can I solve this Permission Error?
I think I followed the DetectNet_v2 documentation, but did I do something wrong?

[FYI]
Terminal

$ pwd
/home/ym7/tao-jupyter/getting_started_v5.3.0/notebooks/tao_launcher_starter_kit/detectnet_v2_test
$ ls -lh
drwxrwxrwx  9 ym7 ym7 4.0K Jun  1 22:34 data
-rw-rw-r--  1 ym7 ym7  68M May 29 00:03 detectnet_v2_MyTest.ipynb
drwxrwxr-x  3 ym7 ym7 4.0K Jun  1 22:42 detectnet_v2_test
drwxrwxr-x  2 ym7 ym7 4.0K May 31 15:22 specs
...
$ ls -lh data
drwxrwxrwx 6 ym7 ym7 108K May 30 17:49 images_aikata
drwxrwxrwx 5 ym7 ym7  88K May 30 17:48 kitti_labels_aikata
drwxrwxrwx 2 ym7 ym7 4.0K Jun  1 22:34 tfrecords_aikata
...
$ ls -lh specs
-rw-rw-r-- 1 ym7 ym7  670 May 31 12:18  spec_tfrecords_kitti.txt
-rw-rw-r-- 1 ym7 ym7 9.7K May 31 15:22  spec_train_kitti.txt
...
$ jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root

spec_tfrecords_kitti.txt

kitti_config {
 root_directory_path: "/workspace/tao-experiments/data/training"
 image_dir_name: "images_aikata"
 label_dir_name: "labels_aikata"
 image_extension: ".jpg"
 partition_mode: "random"
 num_partitions: 2
 val_split: 20
 num_shards: 10
}
image_directory_path: "/workspace/tao-experiments/data/training"
target_class_mapping {
 key: "barcode"
 value: "barcode"
}
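
(Side note: per the converter's reminder later in this thread, the training spec's dataset_config needs one target_class_mapping block per class to be trained, using the labels as they appear in the tfrecords; for this dataset that would look like, e.g.:)

target_class_mapping {
 key: "name"
 value: "name"
}
target_class_mapping {
 key: "price"
 value: "price"
}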

It is related to the path. The path should be a path inside the docker container. Please check the ~/.tao_mounts.json file to see how the paths are mapped.
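
(To spell out why the original command fails, as I understand it: in a notebook cell, $LOCAL_DATA_DIR expands to the host path before the tao launcher runs, so the container is asked to create /home/ym7/... from scratch. That path is not among the mounts, and creating a directory under /home is denied for the non-root 1000:1000 user, hence PermissionError: [Errno 13]. The container-side equivalent, given the mounts shown below, is /workspace/tao-experiments/data/tfrecords_aikata.)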

@Morganh
Thank you.

Here is the code.

▪ Cell 1

# Setting up env variables for cleaner command line commands.
import os

%env NUM_GPUS=1
%env USER_EXPERIMENT_DIR=/workspace/tao-experiments/detectnet_v2_test
%env DATA_DOWNLOAD_DIR=/workspace/tao-experiments/data

# Set this path if you don't run the notebook from the samples directory.
# %env NOTEBOOK_ROOT=~/tao-samples/detectnet_v2

# Please define this local project directory that needs to be mapped to the TAO docker session.
# The dataset is expected to be present in $LOCAL_PROJECT_DIR/data, while the results for the steps
# in this notebook will be stored at $LOCAL_PROJECT_DIR/detectnet_v2.
# !PLEASE MAKE SURE TO UPDATE THIS PATH!

os.environ["LOCAL_PROJECT_DIR"] = "/home/ym7/tao-jupyter/getting_started_v5.3.0/notebooks/tao_launcher_starter_kit/detectnet_v2_test"

os.environ["LOCAL_DATA_DIR"] = os.path.join(
    os.getenv("LOCAL_PROJECT_DIR", os.getcwd()),
    "data"
)
os.environ["LOCAL_EXPERIMENT_DIR"] = os.path.join(
    os.getenv("LOCAL_PROJECT_DIR", os.getcwd()),
    "detectnet_v2_test"
)

# Make the experiment directory 
! mkdir -p $LOCAL_EXPERIMENT_DIR

# The sample spec files are present in the same path as the downloaded samples.
os.environ["LOCAL_SPECS_DIR"] = os.path.join(
    os.getenv("NOTEBOOK_ROOT", os.getcwd()),
    "specs"
)
%env SPECS_DIR=/workspace/tao-experiments/detectnet_v2_test/specs
CLEARML_LOGGED_IN = False
WANDB_LOGGED_IN = False

# Showing list of specification files.
!ls -rlt $LOCAL_SPECS_DIR
env: NUM_GPUS=1
env: USER_EXPERIMENT_DIR=/workspace/tao-experiments/detectnet_v2_test
env: DATA_DOWNLOAD_DIR=/workspace/tao-experiments/data
env: SPECS_DIR=/workspace/tao-experiments/detectnet_v2_test/specs
total 64
-rw-rw-r-- 1 ym7 ym7 6360 Apr  5 23:58  detectnet_v2_train_resnet18_kitti.txt
-rw-rw-r-- 1 ym7 ym7  310 Apr  5 23:58  detectnet_v2_tfrecords_kitti_trainval.txt
-rw-rw-r-- 1 ym7 ym7 6474 Apr  5 23:58  detectnet_v2_retrain_resnet18_kitti_qat.txt
-rw-rw-r-- 1 ym7 ym7 2395 Apr  5 23:58  detectnet_v2_inference_kitti_tlt.txt
-rw-rw-r-- 1 ym7 ym7 2424 Apr  5 23:58  detectnet_v2_inference_kitti_etlt.txt
-rw-rw-r-- 1 ym7 ym7 2433 Apr  5 23:58  detectnet_v2_inference_kitti_etlt_qat.txt
-rw-rw-r-- 1 ym7 ym7 6375 May 19 12:54  detectnet_v2_retrain_resnet18_kitti.txt
-rw-rw-r-- 1 ym7 ym7  670 May 31 12:18  spec_tfrecords_kitti.txt
-rw-rw-r-- 1 ym7 ym7 9889 May 31 15:22  spec_train_kitti.txt

▪ Cell 2

# Mapping up the local directories to the TAO docker.
import json
mounts_file = os.path.expanduser("~/.tao_mounts.json")

# Define the dictionary with the mapped drives
drive_map = {
    "Mounts": [
        # Mapping the data directory
        {
            "source": os.environ["LOCAL_PROJECT_DIR"],
            "destination": "/workspace/tao-experiments"
        },
        # Mapping the specs directory.
        {
            "source": os.environ["LOCAL_SPECS_DIR"],
            "destination": os.environ["SPECS_DIR"]
        },
    ],
    "DockerOptions":{
        "user": f"{os.getuid()}:{os.getgid()}"
    }
}

if CLEARML_LOGGED_IN:
    if "Envs" not in drive_map.keys():
        drive_map["Envs"] = []
    drive_map["Envs"].extend([
        {
            "variable": "CLEARML_WEB_HOST",
            "value": os.getenv("CLEARML_WEB_HOST")
        },
        {
            "variable": "CLEARML_API_HOST",
            "value": os.getenv("CLEARML_API_HOST")
        },
        {
            "variable": "CLEARML_FILES_HOST",
            "value": os.getenv("CLEARML_FILES_HOST")
        },
        {
            "variable": "CLEARML_API_ACCESS_KEY",
            "value": os.getenv("CLEARML_API_ACCESS_KEY")
        },
        {
            "variable": "CLEARML_API_SECRET_KEY",
            "value": os.getenv("CLEARML_API_SECRET_KEY")
        },
    ])

if WANDB_LOGGED_IN:
    if "Envs" not in drive_map.keys():
        drive_map["Envs"] = []
    # Weights & Biases currently requires access to the /.config
    # directory inside the docker container, so the container must be
    # instantiated as the root user. The lines below therefore delete
    # the DockerOptions entry that sets the user.
    if "user" in drive_map["DockerOptions"].keys():
        del(drive_map["DockerOptions"]["user"])
    drive_map["Envs"].extend([
        {
            "variable": "WANDB_API_KEY",
            "value": os.getenv("WANDB_API_KEY")
        }
    ])

# Writing the mounts file.
with open(mounts_file, "w") as mfile:
    json.dump(drive_map, mfile, indent=4)

▪ Cell 3 - I checked ~/.tao_mounts.json here.

!cat ~/.tao_mounts.json
{
    "Mounts": [
        {
            "source": "/home/ym7/tao-jupyter/getting_started_v5.3.0/notebooks/tao_launcher_starter_kit/detectnet_v2_test",
            "destination": "/workspace/tao-experiments"
        },
        {
            "source": "/home/ym7/tao-jupyter/getting_started_v5.3.0/notebooks/tao_launcher_starter_kit/detectnet_v2_test/specs",
            "destination": "/workspace/tao-experiments/detectnet_v2_test/specs"
        }
    ],
    "DockerOptions": {
        "user": "1000:1000"
    }
}

▪ Terminal

$ ls /home/ym7/tao-jupyter/getting_started_v5.3.0/notebooks/tao_launcher_starter_kit/detectnet_v2_test
data                       detectnet_v2_MyTest.ipynb  specs
detectnet_v2               detectnet_v2_test          TFRecords.ipynb
detectnet_v2_AIKATA.ipynb  __init__.py
detectnet_v2.ipynb         ngccli

$ ls /home/ym7/tao-jupyter/getting_started_v5.3.0/notebooks/tao_launcher_starter_kit/detectnet_v2_test/specs
 detectnet_v2_inference_kitti_etlt_qat.txt
 detectnet_v2_inference_kitti_etlt.txt
 detectnet_v2_inference_kitti_tlt.txt
 detectnet_v2_retrain_resnet18_kitti_qat.txt
 detectnet_v2_retrain_resnet18_kitti.txt
 detectnet_v2_tfrecords_kitti_trainval.txt
 detectnet_v2_train_resnet18_kitti.txt
 spec_tfrecords_kitti.txt
 spec_train_kitti.txt

$ ls -lh /home
drwxr-x--- 71 ym7 ym7 4.0K Jun  3 13:43 ym7

$ id -u && id -g
1000
1000

I think the paths in ~/.tao_mounts.json are mapped correctly.
What am I missing that causes this Permission Error?

Thank you.

Ah, perhaps I should set the arguments of tao model detectnet_v2 dataset_convert not like this

!tao model detectnet_v2 dataset_convert -d $SPECS_DIR/spec_tfrecords_kitti.txt \
                                        -o $LOCAL_DATA_DIR/tfrecords_aikata

but like this?

!tao model detectnet_v2 dataset_convert -d /workspace/tao-experiments/detectnet_v2/specs/spec_tfrecords_kitti.txt \
                                        -o /workspace/tao-experiments/data/tfrecords_aikata

Correct.
You can confirm the path by using:
! tao model detectnet_v2 run ls /workspace/tao-experiments/detectnet_v2/specs/spec_tfrecords_kitti.txt
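
(More generally, the run subcommand can be used to inspect any mounted path from inside the container before launching a job; for example, assuming the same mounts:

! tao model detectnet_v2 run ls /workspace/tao-experiments/data

If the file or directory does not list, the mount or the container-side path is wrong.)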

@Morganh

Thanks.

!tao model detectnet_v2 dataset_convert -d /workspace/tao-experiments/detectnet_v2/specs/spec_tfrecords_kitti.txt \
                                        -o /workspace/tao-experiments/data/tfrecords_aikata

2024-06-03 14:36:05,241 [TAO Toolkit] [INFO] root 160: Registry: ['nvcr.io']
2024-06-03 14:36:05,309 [TAO Toolkit] [INFO] nvidia_tao_cli.components.instance_handler.local_instance 360: Running command in container: nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
2024-06-03 14:36:05,545 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 301: Printing tty value True
2024-06-03 05:36:06.559040: I tensorflow/stream_executor/platform/default/dso_loader.cc:50] Successfully opened dynamic library libcudart.so.12
2024-06-03 05:36:06,599 [TAO Toolkit] [WARNING] tensorflow 40: Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
2024-06-03 05:36:07,666 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
2024-06-03 05:36:07,695 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
2024-06-03 05:36:07,698 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
2024-06-03 05:36:08,777 [TAO Toolkit] [WARNING] matplotlib 500: Matplotlib created a temporary config/cache directory at /tmp/matplotlib-2ezznsjb because the default path (/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
2024-06-03 05:36:08,958 [TAO Toolkit] [INFO] matplotlib.font_manager 1633: generated new fontManager
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
WARNING:tensorflow:TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
2024-06-03 05:36:10,221 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable  TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
2024-06-03 05:36:10,248 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable  TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
2024-06-03 05:36:10,251 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable  TF_ALLOW_IOLIBS=1.
2024-06-03 05:36:10,613 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.build_converter 87: Instantiating a kitti converter
2024-06-03 05:36:10,615 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.kitti_converter_lib 176: Num images in
Train: 1036	Val: 258
2024-06-03 05:36:10,616 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.kitti_converter_lib 197: Validation data in partition 0. Hence, while choosing the validationset during training choose validation_fold 0.
2024-06-03 05:36:10,616 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 0, shard 0
WARNING:tensorflow:From /usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/dataio/dataset_converter_lib.py:181: The name tf.python_io.TFRecordWriter is deprecated. Please use tf.io.TFRecordWriter instead.

2024-06-03 05:36:10,616 [TAO Toolkit] [WARNING] tensorflow 137: From /usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/dataio/dataset_converter_lib.py:181: The name tf.python_io.TFRecordWriter is deprecated. Please use tf.io.TFRecordWriter instead.

2024-06-03 05:36:10,626 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 0, shard 1
2024-06-03 05:36:10,634 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 0, shard 2
2024-06-03 05:36:10,642 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 0, shard 3
2024-06-03 05:36:10,650 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 0, shard 4
2024-06-03 05:36:10,658 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 0, shard 5
2024-06-03 05:36:10,666 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 0, shard 6
2024-06-03 05:36:10,674 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 0, shard 7
2024-06-03 05:36:10,683 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 0, shard 8
2024-06-03 05:36:10,690 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 0, shard 9
2024-06-03 05:36:10,701 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 250: 
Wrote the following numbers of objects:
b'card': 126
b'name': 153
b'barcode': 162
b'price': 156
b'card7p': 31

2024-06-03 05:36:10,701 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 1, shard 0
2024-06-03 05:36:10,737 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 1, shard 1
2024-06-03 05:36:10,771 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 1, shard 2
2024-06-03 05:36:10,805 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 1, shard 3
2024-06-03 05:36:10,839 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 1, shard 4
2024-06-03 05:36:10,873 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 1, shard 5
2024-06-03 05:36:10,907 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 1, shard 6
2024-06-03 05:36:10,944 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 1, shard 7
2024-06-03 05:36:10,979 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 1, shard 8
2024-06-03 05:36:11,013 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 166: Writing partition 1, shard 9
2024-06-03 05:36:11,050 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 250: 
Wrote the following numbers of objects:
b'card': 1335
b'name': 1500
b'price': 1486
b'barcode': 1509
b'card7p': 196

2024-06-03 05:36:11,050 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 89: Cumulative object statistics
2024-06-03 05:36:11,050 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 250: 
Wrote the following numbers of objects:
b'card': 1461
b'name': 1653
b'barcode': 1671
b'price': 1642
b'card7p': 227

2024-06-03 05:36:11,050 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 105: Class map. 
Label in GT: Label in tfrecords file 
b'card': b'card'
b'name': b'name'
b'barcode': b'barcode'
b'price': b'price'
b'card7p': b'card7p'
For the dataset_config in the experiment_spec, please use labels in the tfrecords file, while writing the classmap.

2024-06-03 05:36:11,050 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 114: Tfrecords generation complete.
Execution status: PASS
2024-06-03 14:36:15,658 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 363: Stopping container.

It seems I was finally able to convert the KITTI files to TFRecords.
Thank you for your help.
