Exploring the SpaceNet Dataset Using DIGITS

Originally published at: https://developer.nvidia.com/blog/exploring-spacenet-dataset-using-digits/

DigitalGlobe, CosmiQ Works and NVIDIA recently announced the launch of the SpaceNet online satellite imagery repository. This public dataset of high-resolution satellite imagery contains a wealth of geospatial information relevant to many downstream use cases such as infrastructure mapping, land usage classification and human geography estimation. The SpaceNet release is unprecedented: it’s the first public…

Do you know the acquisition date of these images? Thanks.

Question: do you have to (or do you) input the building rotation into the training/validation label file, presumably in the last column?

Awaiting your future post on this subject...

The KITTI format (described here: https://github.com/NVIDIA/D... ) has a field for rotation. This is not currently used within DetectNet; all bounding boxes produced will have edges parallel to the input image axes. It would be straightforward to modify DetectNet to estimate rotation, but the DIGITS interface does not currently support drawing rotated bounding boxes.
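For reference, each line of a KITTI label file describes one object with 15 space-separated fields, the last of which is the rotation (rotation_y). An illustrative line for a building footprint follows (the numeric values are made up; DetectNet reads only the class name and the four bounding-box fields):

```
# class  trunc occl alpha left  top   right bottom h   w   l   x   y   z   rot_y
building 0.0   0    0.0   387.6 181.5 423.8 203.1  0.0 0.0 0.0 0.0 0.0 0.0 0.0
```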

Hi, in reference to 'We modified the default DetectNet network by changing both network architecture and training parameters'.

I've read that you used different input image dimensions, random cropping, and made some parameter changes (min. coverage value, min. box height...), but I compared DetectNet and SpaceNet and I don't see any difference in the architecture. What are the network architecture modifications? Thanks!

Hi there, are there any tools or open-source code available to create the signed distance label images from vector-format labels such as GeoJSON? Thanks

SpaceNet is not a neural network architecture; it is the name of the dataset described in the blog post. The object detection neural network applied to the dataset is DetectNet, without modification except for the parameters you mentioned.

Here's an example of how to apply a Euclidean distance transform to a bitmap image: http://www.logarithmic.net/...
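As a minimal sketch of that idea, assuming SciPy and a binary building mask already rasterized from the GeoJSON footprints (the function name and the negative-inside sign convention are assumptions, not the authors' confirmed method):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed Euclidean distance to the nearest building edge.

    mask: 2-D array, nonzero inside building footprints.
    Returns distances that are positive outside buildings and
    negative inside them (the sign convention is an assumption).
    """
    outside = distance_transform_edt(mask == 0)  # distance to nearest building pixel
    inside = distance_transform_edt(mask != 0)   # distance to nearest background pixel
    return outside - inside
```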

The SpaceNet Challenge Utilities repo provides a number of utility functions for converting the GeoJSON files to other formats: https://github.com/SpaceNet...

Hi,

In the article: "We binned the signed distance function values into 128 bins ranging in value from 64 to 192". How do you do it? Is 0 (the building edge) set to 128, with the negative values stretched between 64 and 128 and the positive values between 129 and 192?
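(One plausible reading of that sentence, sketched below: clip the signed distance to roughly 64 pixels on either side of the edge and shift it so the edge lands at 128. The exact clip range and offset are assumptions rather than anything confirmed in the post.)

```python
import numpy as np

def bin_signed_distance(sdf):
    """Quantize signed distances into 128 integer bins, 64..191.

    Clipping at [-64, 63] and adding 128 puts the building edge (0)
    at bin 128, negatives in 64..127 and positives in 129..191;
    this mapping is assumed, not confirmed by the post.
    """
    return (np.clip(np.round(sdf), -64, 63) + 128).astype(np.uint8)
```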

Hi,
Why do we need to normalize the values to the range 64 to 192?

Hello, and thanks for the interesting material!
Since performance is a crucial issue for such tasks, how long does it take to produce a semantic segmentation of a test image with your version of the SharpMask CNN (image resolution and inference time)?

Hi, thanks for the explanation of the building detection method. I have calculated the distance map labels (label image values from -64 to 64) based on Yuan's approach; the next step is how to set up and train on the images. Initially, I would like to follow Yuan's approach. Could someone please help me with this? It would be greatly appreciated.

Thank you

I am still not clear on how you preprocessed the images to convert them from TIFF to another format. If I understand correctly, DIGITS only accepts PNG, JPEG, and similar formats, so the original GeoTIFF files had to be converted to one of the approved formats. Second, the original files are about 650x650 pixels, so to resize them to 1280x1280 you had to transform both the original image and the coordinates in the labels. Did you simply scale the box coordinates by the resize factor (1280/650 ≈ 1.97), or was there a more precise method?

Please see my responses on the GitHub issues:

https://github.com/NVIDIA/D...
https://github.com/NVIDIA/D...
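Since those links are truncated here, a rough sketch of one way to handle the conversion and label scaling, assuming GDAL's Python bindings (the helper names are hypothetical, and this is not necessarily the authors' exact pipeline):

```python
from osgeo import gdal

def geotiff_to_png(src_tif, dst_png, size=1280):
    """Resample a GeoTIFF tile to size x size and write an 8-bit PNG.

    scaleParams=[[]] asks GDAL to auto-stretch each band to 0-255,
    which 16-bit satellite imagery typically needs.
    """
    gdal.Translate(dst_png, src_tif, format="PNG",
                   width=size, height=size,
                   outputType=gdal.GDT_Byte, scaleParams=[[]])

def scale_box(box, src_size=650, dst_size=1280):
    """Scale KITTI (left, top, right, bottom) pixel coords by the resize factor."""
    s = dst_size / src_size
    return tuple(round(c * s, 1) for c in box)
```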

Hello,

"[...]we changed the minimum allowable coverage map value for a bounding box candidate to be 0.06[...]and the minimum number of bounding boxes that must be clustered to produce a final output bounding box to 4"

Where should I specify these settings?
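In case it helps: in the standard DIGITS DetectNet model description, these post-processing thresholds appear in the param_str of the "cluster" Python layer of the model prototxt. A sketch follows, with the field meanings and order assumed from the DIGITS DetectNet example (they may differ between versions); the 0.06 and 4 quoted from the article would replace the coverage-threshold and minimum-rectangles fields:

```
layer {
  name: "cluster"
  type: "Python"
  bottom: "coverage"
  bottom: "bboxes"
  top: "bbox-list"
  python_param {
    module: "caffe.layers.detectnet.clustering"
    layer: "ClusterDetections"
    # assumed field order: img_w, img_h, stride, coverage threshold,
    # min rectangles per cluster, clustering eps, min box height
    param_str: "1280, 1280, 16, 0.06, 4, 0.02, 22"
  }
}
```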