Running the classification_chest_xray model on Clara Train SDK

After installing the Clara Train SDK, I ran the classification_chest_xray model (train.sh) and found that its input was hard-wired to the PLCO dataset:

FileNotFoundError: [Errno 2] No such file or directory: '/workspace/data/CXR/PLCO/PLCO_256_original/AJ00534042806155119_v2.png'

So if we want to use our own dataset, how do we do that? Is there any how-to documentation for this part?

Thanks in advance for any help, pai.

Hi

Thanks for trying Clara Train.

You should modify config/environment.json and set DATA_ROOT and DATASET_JSON to point to your data. Full documentation for the other parameters is available online: https://docs.nvidia.com/clara/tlt-mi/tlt-mi-getting-started/index.html#mmar_configuration
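For example, a minimal environment.json could look like this (the paths below are placeholders; point them at your own data folder and data list, and leave any other keys in your MMAR's environment.json as they are):

```json
{
    "DATA_ROOT": "/workspace/data/my_dataset",
    "DATASET_JSON": "/workspace/data/my_dataset/datalist.json"
}
```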

Please let me know if you have other questions

Thanks a lot for your help. The documents are very useful, so we will train on our own dataset and see how it comes out.
pai.

I am glad your issue got resolved.

Please make sure to attend our Clara AI webinar, which will cover Clara Train and Clara Deploy:
https://info.nvidia.com/accelerate-discoveries-with-the-nvidia-clara-ai-toolkit-reg-page.html

Thanks

Now the training program runs all the way through on our own dataset. Before starting training, we looked at the image-input part (the pre_transforms section in config_train.json). We can pretty much understand the random translation and rotation parts, but we have two questions about the others:

  1. Gray-scale images are duplicated to three channels (256x256x3). Our question is: why do the images need to be duplicated? Why not just 256x256?
  2. Images are scaled by fixed values, as (pixel_value - 2876.37) / 883, in the CenterData routine, so we are not sure whether the output for our dataset is scaled correctly or not. In our dataset the min-max of the data is 0 to 65359, which gets scaled to -3.25 to 68.49. Are these scaled values appropriate? (A sketch of the transform entry we mean is shown after this post.)
    Thanks for your help, pai.
    ps: I have a diagram of the input part in a PDF, but I don't know how to upload it.
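For reference, here is roughly what the CenterData entry in pre_transforms looks like (the argument names below are our reading of the config and may differ in other SDK versions; the constants are the ones from config_train.json):

```json
{
    "name": "CenterData",
    "args": {
        "fields": "image",
        "subtrahend": 2876.37,
        "divisor": 883
    }
}
```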

Hi
Please keep in mind that we provide these sample models as examples for different problems/tasks. With the new Clara Train you can use your own model and your own transformations.

I am not sure which config file you are referring to, but I think I can answer your questions:

  1. It sounds like the architecture you are looking at uses pretrained weights from ImageNet (trained on natural images), so the input is 3-channel RGB; that is why the gray-scale medical images are duplicated. Please feel free to train from scratch with one channel if you have enough labeled data.
  2. It is usually a good idea to normalize your data, especially to remove any outlier values. These can show up as negative values in MRI images; for CT the range should be -1024 to 1024, but some images use -3000 for air outside the scanner. You can use the transformations to scale this to [-1, 1] or [0, 1] and clip values outside your range (see the sketch below). I am not familiar with the X-ray range.
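As a rough sketch, a scale-and-clip entry in pre_transforms could look something like the following. The transform name ScaleIntensityRange and its arguments are an assumption on my side (a_min/a_max being your raw intensity window, b_min/b_max the target range); please check the transforms reference in the documentation for the exact names in your SDK version:

```json
{
    "name": "ScaleIntensityRange",
    "args": {
        "fields": "image",
        "a_min": 0,
        "a_max": 65359,
        "b_min": 0.0,
        "b_max": 1.0,
        "clip": true
    }
}
```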

Thanks for your reply.

OK, so for training with one channel we should start from scratch then (hopefully the model uses less memory, so we can increase the input image resolution).
By the way, if we still would like to use pre-trained weights (from DenseNet in this case), are there any requirements on the image input that need to be followed besides the 256x256 resolution and 16-bit depth (for example, does the input chest x-ray need to be scaled according to the scaling factor in the DICOM header before being converted to JPEG)? Because if we use a different scaling factor than the pre-trained dataset, our training will be off, right?

Hello, I'm trying out inference with the classification chest x-ray model of Clara Train (model.trt.pb in the models folder) on a set of CXRs. I have transformed the images to 256x256 16-bit gray-scale PNG images. I created the datalist.json file that indicates the path to each image under the "validation" key, as specified in the instruction webpage. I had to forcefully set a dummy label on every entry (as [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]), since it could not run inference if no label was specified, even though I used infer.sh, not validate.sh. (A sketch of my datalist.json is below.)
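For reference, my datalist.json looks roughly like this (the image paths are placeholders for my actual files; every entry carries the same 15-element dummy label):

```json
{
    "validation": [
        {
            "image": "images/case_0001.png",
            "label": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        },
        {
            "image": "images/case_0002.png",
            "label": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        }
    ]
}
```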

I modified the environment paths to correctly pick up the images and the JSON. It runs and generates the preds_model.csv. However, the predictions are strange numbers (not binary; they are all negative float values). I suspected it could be due to image I/O, but I double-checked that the images are in the correct format, as mentioned above.

Do you have any idea what could be wrong during inference?
Thank you