How to convert my own dataset for Image Segmentation with TensorFlow

Hi, I am new to segmentation, so this may be a basic question…
val_images.tfrecords (14.6 MB)
I am doing the DLI self-paced course “Getting Started with Image Segmentation”
Courses – NVIDIA
I am still learning about segmentation labeling. Which software is commonly used, labelme? Also, how do I convert the dataset into a TFRecord file?

Also, how can I import the ONNX file into dusty-nv inference?

Thx

Hi,
I have attached a working project example. I tried solution.ipynb, and it works with its own tfrecords files.

Right now, I use “labelme” to do the segmentation annotation.
Then I convert the data to COCO format using labelme2coco.py.
But how do I convert it to a tfrecords file?
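For reference, here is a minimal sketch of how a segmentation image/mask pair can be serialized into a TFRecord with `tf.train.Example`. The feature names (`height`, `width`, `depth`, `image_raw`, `label`) are assumptions, chosen to match what a parser expecting a `depth` feature would look for; they must match whatever keys solution.ipynb's `tf.io.parse_single_example` feature description uses.

```python
import numpy as np
import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def image_to_example(image, mask):
    """Serialize one (image, mask) pair into a tf.train.Example.

    image: uint8 array of shape (H, W, C); mask: uint8 array of shape (H, W).
    NOTE: the feature keys below are assumptions and must match the
    feature description used by the notebook's parsing function.
    """
    h, w, c = image.shape
    feature = {
        "height": _int64_feature(h),
        "width": _int64_feature(w),
        "depth": _int64_feature(c),
        "image_raw": _bytes_feature(image.tobytes()),
        "label": _bytes_feature(mask.tobytes()),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

# Dummy (image, mask) pairs for demonstration; replace with your own data.
dataset_pairs = [
    (np.zeros((8, 8, 3), dtype=np.uint8), np.zeros((8, 8), dtype=np.uint8))
    for _ in range(2)
]

with tf.io.TFRecordWriter("train_images.tfrecords") as writer:
    for image, mask in dataset_pairs:
        writer.write(image_to_example(image, mask).SerializeToString())
```

If the notebook stores PNG/JPEG bytes instead of raw pixels (e.g. via `tf.io.encode_png`), the `image_raw` and `label` features would hold the encoded bytes instead of `image.tobytes()`.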

I did try to generate the tfrecords by following this link

python create_coco_tf_record.py --logtostderr --train_image_dir=images/train --test_image_dir=images/test --train_annotations_file=images/train.json --test_annotations_file=images/test.json --output_dir=./

But when I run solution.ipynb, I get an error about the tfrecords content… Is there a tutorial I should follow? Thx
I need to do segmentation of MRI pictures. Is there any example of segmentation on MRI pictures?
Thx

print(len(list(parsed_training_dataset)))
print(len(list(parsed_val_dataset)))
2022-08-30 16:51:31.609942: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at example_parsing_ops.cc:94 : INVALID_ARGUMENT: Feature: depth (data type: int64) is required but could not be found.
2022-08-30 16:51:31.609995: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at example_parsing_ops.cc:94 : INVALID_ARGUMENT: Feature: depth (data type: int64) is required but could not be found.
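The error says the parser requires a `depth` feature that the records do not contain, which suggests the features written by create_coco_tf_record.py use different keys than the ones solution.ipynb expects. A quick way to diagnose this is to list the feature keys actually stored in the file (a small sketch; the helper name is my own):

```python
import tensorflow as tf

def list_feature_keys(path):
    """Return the sorted feature keys of the first example in a TFRecord file."""
    for raw_record in tf.data.TFRecordDataset(path).take(1):
        example = tf.train.Example()
        example.ParseFromString(raw_record.numpy())
        return sorted(example.features.feature.keys())

# e.g. print(list_feature_keys("val_images.tfrecords"))
```

Comparing this output against the feature description the notebook passes to `tf.io.parse_single_example` should show exactly which keys are mismatched.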

Solution.ipynb (83.6 KB)
NGC.zip (30.0 MB)