TAO Toolkit DetectNet_v2 training KITTI format error

Please provide the following information when requesting support.

• Hardware (Tesla P4)
• Network Type (DetectNet_v2)
• TLT Version (Configuration of the TAO Toolkit Instance
task_group: ['model', 'dataset', 'deploy']
format_version: 3.0
toolkit_version: 5.1.0
published_date: 10/10/2023)
• Training spec file (-rw-rw-r-- 1 glueck glueck 3384 Dec 21 2022 detectnet_v2_train_resnet18_kitti.txt
-rw-rw-r-- 1 glueck glueck 303 Dec 21 2022 detectnet_v2_tfrecords_kitti_trainval.txt
-rw-rw-r-- 1 glueck glueck 3378 Dec 21 2022 detectnet_v2_retrain_resnet18_kitti.txt
-rw-rw-r-- 1 glueck glueck 3372 Dec 21 2022 detectnet_v2_retrain_resnet18_kitti_qat.txt
-rw-rw-r-- 1 glueck glueck 1456 Dec 21 2022 detectnet_v2_inference_kitti_tlt.txt
-rw-rw-r-- 1 glueck glueck 1485 Dec 21 2022 detectnet_v2_inference_kitti_etlt.txt
-rw-rw-r-- 1 glueck glueck 1498 Dec 21 2022 detectnet_v2_inference_kitti_etlt_qat.txt)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)

Creating a new directory for the output tfrecords dump.

print("Converting Tfrecords for kitti trainval dataset")
!mkdir -p $LOCAL_DATA_DIR/tfrecords && rm -rf $LOCAL_DATA_DIR/tfrecords/*
!tao model detectnet_v2 dataset_convert \
    -d $SPECS_DIR/detectnet_v2_tfrecords_kitti_trainval.txt \
    -o $DATA_DOWNLOAD_DIR/tfrecords/kitti_trainval/kitti_trainval \
    -r $USER_EXPERIMENT_DIR/

Converting Tfrecords for kitti trainval dataset
2023-12-07 16:39:57,697 [TAO Toolkit] [INFO] root 160: Registry: [‘nvcr.io’]
2023-12-07 16:39:57,829 [TAO Toolkit] [INFO] nvidia_tao_cli.components.instance_handler.local_instance 360: Running command in container: nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
2023-12-07 16:39:57,870 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 275: Printing tty value True
2023-12-07 08:39:58.722931: I tensorflow/stream_executor/platform/default/dso_loader.cc:50] Successfully opened dynamic library libcudart.so.12
2023-12-07 08:39:58,774 [TAO Toolkit] [WARNING] tensorflow 40: Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
2023-12-07 08:40:00,255 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-07 08:40:00,291 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-07 08:40:00,295 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-07 08:40:01,659 [TAO Toolkit] [WARNING] matplotlib 500: Matplotlib created a temporary config/cache directory at /tmp/matplotlib-na53nkbb because the default path (/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
2023-12-07 08:40:01,876 [TAO Toolkit] [INFO] matplotlib.font_manager 1633: generated new fontManager
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
WARNING:tensorflow:TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-07 08:40:03,490 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-07 08:40:03,526 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-07 08:40:03,532 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-07 08:40:04,010 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.common.logging.logging 197: Log file already exists at /workspace/tao-experiments/detectnet_v2/status.json
2023-12-07 08:40:04,010 [TAO Toolkit] [INFO] root 2102: Starting Object Detection Dataset Convert.
2023-12-07 08:40:04,011 [TAO Toolkit] [INFO] root 2102: [Errno 2] No such file or directory: '/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port/specs/detectnet_v2_tfrecords_kitti_trainval.txt'
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/dataset_convert.py", line 168, in <module>
raise e
File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/dataset_convert.py", line 137, in <module>
main()
File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/dataset_convert.py", line 128, in main
with open(expand_path(args.dataset_export_spec), "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port/specs/detectnet_v2_tfrecords_kitti_trainval.txt'
Execution status: FAIL
2023-12-07 16:40:11,061 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 337: Stopping container.

But I do have the spec file:
ls -l /home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port/specs/

total 28
-rw-rw-r-- 1 glueck glueck 1498 Dec 21 2022 detectnet_v2_inference_kitti_etlt_qat.txt
-rw-rw-r-- 1 glueck glueck 1485 Dec 21 2022 detectnet_v2_inference_kitti_etlt.txt
-rw-rw-r-- 1 glueck glueck 1456 Dec 21 2022 detectnet_v2_inference_kitti_tlt.txt
-rw-rw-r-- 1 glueck glueck 3372 Dec 21 2022 detectnet_v2_retrain_resnet18_kitti_qat.txt
-rw-rw-r-- 1 glueck glueck 3378 Dec 21 2022 detectnet_v2_retrain_resnet18_kitti.txt
-rw-rw-r-- 1 glueck glueck 303 Dec 21 2022 detectnet_v2_tfrecords_kitti_trainval.txt
-rw-rw-r-- 1 glueck glueck 3384 Dec 21 2022 detectnet_v2_train_resnet18_kitti.txt

cat /home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port/specs/detectnet_v2_tfrecords_kitti_trainval.txt

kitti_config {
  root_directory_path: "/workspace/tao-experiments/Data/Train"
  image_dir_name: "Images"
  label_dir_name: "Labels"
  image_extension: ".jpeg"
  partition_mode: "random"
  num_partitions: 2
  val_split: 20
  num_shards: 10
}
image_directory_path: "/workspace/tao-experiments/Data/Train"

Please note that the path $SPECS_DIR/detectnet_v2_tfrecords_kitti_trainval.txt in the command line should be the path inside the docker container. It is defined in your ~/.tao_mounts.json; please check it.
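As a quick way to see how a host path maps into the container, a small helper like this (hypothetical, not part of the TAO CLI) can translate a host path through the `Mounts` list of `~/.tao_mounts.json`:

```python
import json
import os


def host_to_container(host_path, mounts_file=os.path.expanduser("~/.tao_mounts.json")):
    """Translate a host path to its in-container path via the Mounts list.

    Returns None if the path falls under no mount, i.e. it will not be
    visible inside the TAO container.
    """
    with open(mounts_file) as f:
        mounts = json.load(f)["Mounts"]
    for m in mounts:
        if host_path.startswith(m["source"]):
            return m["destination"] + host_path[len(m["source"]):]
    return None
```

If this returns `None` for your spec file, the container cannot see it, which matches the `FileNotFoundError` above.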

{
    "Mounts": [
        {
            "source": "/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port",
            "destination": "/workspace/tao-experiments"
        },
        {
            "source": "/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port/specs",
            "destination": "$LOCAL_PROJECT_DIR/specs"
        }
    ],
    "DockerOptions": {
        "user": "1000:1000"
    }
}

I suggest setting it as below; it is more convenient.

{
    "Mounts": [
        {
            "source": "/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port",
            "destination": "/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port"
        }
    ],
    "DockerOptions": {
        "user": "1000:1000"
    }
}
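After editing this file, it is worth confirming that it is still valid JSON (a trailing comma or smart quotes will break parsing) and that no destination still contains an unexpanded `$` variable, since the launcher does not appear to expand shell variables inside this file. A minimal sketch:

```python
import json
import os


def check_mounts(path=os.path.expanduser("~/.tao_mounts.json")):
    """Parse the mounts file and return destinations that look unexpanded.

    json.load raises a JSONDecodeError on trailing commas or smart quotes,
    which catches copy-paste damage early.
    """
    with open(path) as f:
        cfg = json.load(f)
    return [m["destination"] for m in cfg.get("Mounts", []) if "$" in m["destination"]]
```

An empty return value means every destination is a literal path.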

import os

# Replace the placeholder below with your actual data directory path.
DATA_DIR = "/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port/data"
os.environ["LOCAL_DATA_DIR"] = DATA_DIR

train_image_dir = os.path.join(DATA_DIR, "train/image")
train_label_dir = os.path.join(DATA_DIR, "train/label")
test_image_dir = os.path.join(DATA_DIR, "Test/Images")

num_training_images = len(os.listdir(train_image_dir))
num_training_labels = len(os.listdir(train_label_dir))

# Check whether the test directory exists.
if os.path.exists(test_image_dir):
    num_testing_images = len(os.listdir(test_image_dir))
    print("Number of images in the test set: {}".format(num_testing_images))
else:
    print("No test directory found.")

print("Number of images in the train set: {}".format(num_training_images))
print("Number of labels in the train set: {}".format(num_training_labels))

Number of images in the test set: 981
Number of images in the train set: 9807
Number of labels in the train set: 9807

print("TFrecords conversion spec file for kitti training")
!cat $LOCAL_SPECS_DIR/detectnet_v2_tfrecords_kitti_trainval.txt

TFrecords conversion spec file for kitti training
kitti_config {
  root_directory_path: "/workspace/tao-experiments/Data/Train"
  image_dir_name: "Images"
  label_dir_name: "Labels"
  image_extension: ".jpeg"
  partition_mode: "random"
  num_partitions: 2
  val_split: 20
  num_shards: 10
}
image_directory_path: "/workspace/tao-experiments/Data/Train"

Creating a new directory for the output tfrecords dump.

print("Converting Tfrecords for kitti trainval dataset")
!mkdir -p $LOCAL_DATA_DIR/tfrecords && rm -rf $LOCAL_DATA_DIR/tfrecords/*
!tao model detectnet_v2 dataset_convert \
    -d $SPECS_DIR/detectnet_v2_tfrecords_kitti_trainval.txt \
    -o $DATA_DOWNLOAD_DIR/tfrecords/kitti_trainval/kitti_trainval \
    -r $USER_EXPERIMENT_DIR/

Converting Tfrecords for kitti trainval dataset
2023-12-08 15:09:55,656 [TAO Toolkit] [INFO] root 160: Registry: [‘nvcr.io’]
2023-12-08 15:09:55,796 [TAO Toolkit] [INFO] nvidia_tao_cli.components.instance_handler.local_instance 360: Running command in container: nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
2023-12-08 15:09:55,848 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 275: Printing tty value True
2023-12-08 07:09:56.649071: I tensorflow/stream_executor/platform/default/dso_loader.cc:50] Successfully opened dynamic library libcudart.so.12
2023-12-08 07:09:56,710 [TAO Toolkit] [WARNING] tensorflow 40: Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
2023-12-08 07:09:58,150 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-08 07:09:58,185 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-08 07:09:58,190 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-08 07:09:59,568 [TAO Toolkit] [WARNING] matplotlib 500: Matplotlib created a temporary config/cache directory at /tmp/matplotlib-rbpx2e3j because the default path (/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
2023-12-08 07:09:59,811 [TAO Toolkit] [INFO] matplotlib.font_manager 1633: generated new fontManager
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
WARNING:tensorflow:TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-08 07:10:01,454 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-08 07:10:01,495 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-08 07:10:01,499 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-08 07:10:01,977 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.common.logging.logging 197: Log file already exists at /workspace/tao-experiments/detectnet_v2/status.json
2023-12-08 07:10:01,977 [TAO Toolkit] [INFO] root 2102: Starting Object Detection Dataset Convert.
2023-12-08 07:10:01,977 [TAO Toolkit] [INFO] root 2102: [Errno 2] No such file or directory: '/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port/specs/detectnet_v2_tfrecords_kitti_trainval.txt'
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/dataset_convert.py", line 168, in <module>
raise e
File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/dataset_convert.py", line 137, in <module>
main()
File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/dataset_convert.py", line 128, in main
with open(expand_path(args.dataset_export_spec), "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port/specs/detectnet_v2_tfrecords_kitti_trainval.txt'
Telemetry data couldn’t be sent, but the command ran successfully.
[WARNING]:
Execution status: FAIL
2023-12-08 15:10:13,614 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 337: Stopping container.

I changed the mount configuration; this is my current flow:

Please set it as below and retry.

-d /home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port/specs/detectnet_v2_tfrecords_kitti_trainval.txt

/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port/data/Train/Images
/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port/data/Train/Labels

Creating a new directory for the output tfrecords dump.

print("Converting Tfrecords for kitti trainval dataset")
!mkdir -p $LOCAL_DATA_DIR/tfrecords && rm -rf $LOCAL_DATA_DIR/tfrecords/*
!tao model detectnet_v2 dataset_convert \
    -d /home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port/specs/detectnet_v2_tfrecords_kitti_trainval.txt \
    -o $DATA_DOWNLOAD_DIR/tfrecords/kitti_trainval/kitti_trainval \
    -r $USER_EXPERIMENT_DIR/

Converting Tfrecords for kitti trainval dataset
2023-12-08 17:01:05,047 [TAO Toolkit] [INFO] root 160: Registry: [‘nvcr.io’]
2023-12-08 17:01:05,165 [TAO Toolkit] [INFO] nvidia_tao_cli.components.instance_handler.local_instance 360: Running command in container: nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
2023-12-08 17:01:05,196 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 275: Printing tty value True
2023-12-08 09:01:05.982075: I tensorflow/stream_executor/platform/default/dso_loader.cc:50] Successfully opened dynamic library libcudart.so.12
2023-12-08 09:01:06,036 [TAO Toolkit] [WARNING] tensorflow 40: Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
2023-12-08 09:01:07,449 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-08 09:01:07,484 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-08 09:01:07,489 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-08 09:01:08,878 [TAO Toolkit] [WARNING] matplotlib 500: Matplotlib created a temporary config/cache directory at /tmp/matplotlib-jt0tvnfk because the default path (/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
2023-12-08 09:01:09,107 [TAO Toolkit] [INFO] matplotlib.font_manager 1633: generated new fontManager
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
WARNING:tensorflow:TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-08 09:01:10,725 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use sklearn by default. This improves performance in some cases. To enable sklearn export the environment variable TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-08 09:01:10,766 [TAO Toolkit] [WARNING] tensorflow 42: TensorFlow will not use Dask by default. This improves performance in some cases. To enable Dask export the environment variable TF_ALLOW_IOLIBS=1.
WARNING:tensorflow:TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-08 09:01:10,770 [TAO Toolkit] [WARNING] tensorflow 43: TensorFlow will not use Pandas by default. This improves performance in some cases. To enable Pandas export the environment variable TF_ALLOW_IOLIBS=1.
2023-12-08 09:01:11,307 [TAO Toolkit] [INFO] root 2102: Starting Object Detection Dataset Convert.
2023-12-08 09:01:11,308 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.build_converter 87: Instantiating a kitti converter
2023-12-08 09:01:11,308 [TAO Toolkit] [INFO] root 2102: Instantiating a kitti converter
2023-12-08 09:01:11,308 [TAO Toolkit] [INFO] nvidia_tao_tf1.cv.detectnet_v2.dataio.dataset_converter_lib 71: Creating output directory /workspace/tao-experiments/Data/tfrecords/kitti_trainval
2023-12-08 09:01:11,308 [TAO Toolkit] [INFO] root 2102: Generating partitions
2023-12-08 09:01:11,308 [TAO Toolkit] [INFO] root 2102: [Errno 2] No such file or directory: '/workspace/tao-experiments/Data/Train/Images'
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/dataset_convert.py", line 168, in <module>
raise e
File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/dataset_convert.py", line 137, in <module>
main()
File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/scripts/dataset_convert.py", line 132, in main
converter.convert()
File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/dataio/dataset_converter_lib.py", line 82, in convert
partitions = self._partition()
File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/detectnet_v2/dataio/kitti_converter_lib.py", line 167, in _partition
sorted(os.listdir(images_root)) if
FileNotFoundError: [Errno 2] No such file or directory: '/workspace/tao-experiments/Data/Train/Images'
Execution status: FAIL
2023-12-08 17:01:27,184 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 337: Stopping container.

OK, this is a new error, but the fix is the same kind as above. You may not have set $DATA_DOWNLOAD_DIR and $USER_EXPERIMENT_DIR correctly; you can set them explicitly.

Also, change any paths inside the spec file, since your ~/.tao_mounts.json file has already changed.
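One way to do that from the notebook, as a rough sketch with the old and new roots taken from earlier in this thread, is a simple search-and-replace over the spec file:

```python
def rewrite_spec(spec_path, old_root, new_root):
    """Replace every occurrence of old_root with new_root in a spec file."""
    with open(spec_path) as f:
        text = f.read()
    with open(spec_path, "w") as f:
        f.write(text.replace(old_root, new_root))


# Paths as they appear earlier in this thread:
# rewrite_spec(
#     "/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port/specs/detectnet_v2_tfrecords_kitti_trainval.txt",
#     "/workspace/tao-experiments/Data/Train",
#     "/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port/data/Train",
# )
```

Re-check the file with `cat` afterwards before re-running `dataset_convert`.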

Setting up env variables for cleaner command line commands.

import os

%env KEY=tlt_encode
%env NUM_GPUS=1
%env USER_EXPERIMENT_DIR=/workspace/tao-experiments/detectnet_v2
%env DATA_DOWNLOAD_DIR=/workspace/tao-experiments/Data

# Set this path if you don't run the notebook from the samples directory.
%env NOTEBOOK_ROOT=/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port

# Please define this local project directory that needs to be mapped to the TAO docker session.
# The dataset is expected to be present in $LOCAL_PROJECT_DIR/data, while the results for the steps
# in this notebook will be stored at $LOCAL_PROJECT_DIR/detectnet_v2.
# !PLEASE MAKE SURE TO UPDATE THIS PATH!
os.environ["LOCAL_PROJECT_DIR"] = "/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port"

os.environ["LOCAL_DATA_DIR"] = os.path.join(
    os.getenv("LOCAL_PROJECT_DIR", os.getcwd()),
    "data"
)
os.environ["LOCAL_EXPERIMENT_DIR"] = os.path.join(
    os.getenv("LOCAL_PROJECT_DIR", os.getcwd()),
    "detectnet_v2"
)

# Make the experiment directory.
!mkdir -p $LOCAL_EXPERIMENT_DIR

# The sample spec files are present in the same path as the downloaded samples.
os.environ["LOCAL_SPECS_DIR"] = os.path.join(
    os.getenv("NOTEBOOK_ROOT", os.getcwd()),
    "specs"
)
%env SPECS_DIR=$LOCAL_PROJECT_DIR/specs

Showing list of specification files.

!ls -rlt $LOCAL_SPECS_DIR

Mapping the local directories into the TAO docker.

import os
import json

mounts_file = os.path.expanduser("~/.tao_mounts.json")

# Define the dictionary with the mapped drives.
drive_map = {
    "Mounts": [
        {
            "source": "/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port",
            "destination": "/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port"
        }
    ],
    "DockerOptions": {
        "user": "1000:1000"
    }
}

# Write the mounts file.
with open(mounts_file, "w") as mfile:
    json.dump(drive_map, mfile, indent=4)

TFrecords conversion spec file for kitti training
kitti_config {
  root_directory_path: "/workspace/tao-experiments/Data/Train"
  image_dir_name: "Images"
  label_dir_name: "Labels"
  image_extension: ".jpeg"
  partition_mode: "random"
  num_partitions: 2
  val_split: 20
  num_shards: 10
}
image_directory_path: "/workspace/tao-experiments/Data/Train"

These are my training dataset paths:
/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port/data/Train
/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port/data/Train/Images
/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port/data/Train/Labels
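With the identity mount above, the in-container paths equal the host paths, so the spec would need to point at the actual data directory (note the lowercase `data`, versus `Data` in the current spec). Something like the following should match the directories listed above:

```
kitti_config {
  root_directory_path: "/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port/data/Train"
  image_dir_name: "Images"
  label_dir_name: "Labels"
  image_extension: ".jpeg"
  partition_mode: "random"
  num_partitions: 2
  val_split: 20
  num_shards: 10
}
image_directory_path: "/home/glueck/getting_started_v5.0.0/notebooks/tao_launcher_starter_kit/penang_port/data/Train"
```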

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

The paths above need to be changed.
