Building Image Segmentation Faster Using Jupyter Notebooks from NGC

Originally published at: Building Image Segmentation Faster Using Jupyter Notebooks from NGC | NVIDIA Developer Blog

The NVIDIA NGC team is hosting a webinar with live Q&A to dive into this Jupyter notebook available from the NGC catalog. Learn how to use these resources to kickstart your AI journey. Register now: NVIDIA NGC Jupyter Notebook Day: Image Segmentation. Image segmentation is the process of partitioning a digital image into multiple segments…

Thank you for providing today's webinar!

When attempting to train with the suggested command

./UNet_1GPU.sh /results /data 1

the script crashes due to the following error:

Not found: /data/raw_images/private/Class1/train_list.csv; No such file or directory
	 [[{{node IteratorGetNext}}]]

I downloaded the dataset from two different sources (*) and neither includes the CSV file.
Could you elaborate on how to generate the CSV file from the provided dataset?

Thanks!

(*) data sources

Just found that the order of some commands should be different from the one suggested. Rather than executing

chmod +x ./download_and_preprocess_dagm2007.sh

./download_and_preprocess_dagm2007.sh /data

# download dataset from https://hci.iwr.uni-heidelberg.de/content/weakly-supervised-learning-industrial-optical-inspection

docker cp /home/<your file path> <container_id>:/data/raw_images/private

unzip /folder/path/<file>.zip

one should first download the dataset from Weakly Supervised Learning for Industrial Optical Inspection | Heidelberg Collaboratory for Image Processing (HCI) and then execute:

docker cp /home/<your file path> <container_id>:/data/raw_images/private

chmod +x ./download_and_preprocess_dagm2007.sh

./download_and_preprocess_dagm2007.sh /data
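
After these steps, a quick sanity check (the path is taken from the error message above) confirms the private data landed where the training script expects it:

ls /data/raw_images/private/Class1/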

Thank you for your comment. These two lines download the public part of the data:

chmod +x ./download_and_preprocess_dagm2007.sh

./download_and_preprocess_dagm2007.sh /data

After that, we can download the private part of the dataset manually from the link provided by the script and put it inside the container using these two commands:

docker cp /home/<your file path> <container_id>:/data/raw_images/private

unzip /folder/path/<file>.zip
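
If you are not sure which container ID to put in the docker cp command, docker ps lists the running containers:

docker ps

# use the CONTAINER ID shown by docker ps as <container_id> in the docker cp command above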

The order is correct.

For the demo, I used the link that the script provided to create the account. Once you have the account, you can send a request for the private data, and they will send you a link to the correct dataset; access is granted for a limited time, so download it promptly. I didn't generate the CSV file myself; the link I received after requesting the private data has everything.
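
In case someone does need to rebuild train_list.csv by hand, here is a minimal sketch. It assumes the standard DAGM 2007 layout, where defect masks live under Train/Label/<name>_label.PNG, and it guesses a simple two-column schema; check what the notebook's data loader actually expects before relying on it.

# Hypothetical sketch: rebuild train_list.csv for Class1.
# The CSV column layout is an assumption; verify it against the data loader.
CLASS_DIR=/data/raw_images/private/Class1
echo "image_filepath,label_filepath" > "${CLASS_DIR}/train_list.csv"
for img in "${CLASS_DIR}"/Train/*.PNG; do
    name=$(basename "${img}" .PNG)
    # DAGM 2007 stores masks only for defective samples;
    # defect-free images get an empty second column here.
    mask="${CLASS_DIR}/Train/Label/${name}_label.PNG"
    if [ -f "${mask}" ]; then
        echo "${img},${mask}" >> "${CLASS_DIR}/train_list.csv"
    else
        echo "${img}," >> "${CLASS_DIR}/train_list.csv"
    fi
done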