I have a CT scan dataset of skulls consisting of lesion and normal cases; the CT scans are in DICOM format. I want to do multi-class classification, but not every DICOM series has the same depth: some series have 250 slices, some have 30. My dataset is really small, so I don't want to drop any 2D slice. At the end, I will convert the DICOM series into NIfTI for a 3D CNN.
I am also considering padding every series up to a fixed depth of 256, filling with empty slices when a series has fewer than 256 slices.
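For what it's worth, here is a minimal sketch of that zero-padding idea, assuming the series is already stacked into a NumPy array of shape (depth, height, width); the function name `pad_depth` is just illustrative:

```python
import numpy as np

def pad_depth(volume: np.ndarray, target_depth: int = 256) -> np.ndarray:
    """Zero-pad (or truncate) a (depth, H, W) volume along the depth axis."""
    depth = volume.shape[0]
    if depth >= target_depth:
        # Truncate the rare series that is deeper than the target.
        return volume[:target_depth]
    pad = target_depth - depth
    # Pad symmetrically so the anatomy stays roughly centered in depth.
    before = pad // 2
    after = pad - before
    return np.pad(volume, ((before, after), (0, 0), (0, 0)), mode="constant")
```

One caveat with this approach: padding every scan to 256 slices when most have far fewer makes the 3D volumes quite large in memory, which matters for a 3D CNN.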
Is there a better option, both while training the model and during inference?
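Since the plan above is to convert each DICOM series to NIfTI for the 3D CNN, here is a minimal sketch of that conversion using SimpleITK; the paths and function name are illustrative:

```python
import SimpleITK as sitk

def dicom_series_to_nifti(dicom_dir: str, out_path: str) -> None:
    """Read a DICOM series from a directory and write it out as NIfTI."""
    reader = sitk.ImageSeriesReader()
    # Sort the slice files into anatomical order using their DICOM tags.
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(dicom_dir))
    image = reader.Execute()
    sitk.WriteImage(image, out_path)  # e.g. "scan_0001.nii.gz"
```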
The number of slices varies with the length of the scanned body part and also with the slice thickness. In the Kaggle datasets we looked at, it ranges from 50 to 200 slices.
The 3D model has to process however many slices a scan contains, so how can we train a model when the number of slices (DICOM files) varies across CT scans?
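One common way to handle this is to resample every volume to a fixed depth by interpolation, so no slice is dropped outright, just blended into its neighbours. A minimal sketch using scipy, again assuming a (depth, height, width) NumPy array; the target depth of 64 is only an example:

```python
import numpy as np
from scipy import ndimage

def resize_depth(volume: np.ndarray, target_depth: int = 64) -> np.ndarray:
    """Resample a (depth, H, W) volume to a fixed depth by linear interpolation."""
    factor = target_depth / volume.shape[0]
    # Interpolate along the depth axis only; height and width are untouched.
    return ndimage.zoom(volume, (factor, 1.0, 1.0), order=1)
```

The same resizing has to be applied at inference time, so training and test volumes land in the same input space.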
That's interesting, but it seems to be mainly for reducing 2D file size, whereas we are looking for a 3D approach.
I feel this is a generic problem that everyone processing CT or MRI scan data has to handle. Could you please check how others handle the case where the number of frames or slices differs across scans?
This approach is used in 3D CNNs for 3D medical image classification. Of course, there might be other ways to do the preprocessing (for example, FSL + FLIRT); I would recommend taking a look at the latest research papers for other approaches. Thanks.
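For completeness, registering every scan to a common reference template with FLIRT also normalizes the voxel grid, since the output is resampled into the reference space. A minimal sketch, assuming FSL is installed and `flirt` is on the PATH; the file paths are placeholders, and the choice of reference template depends on your data:

```python
import subprocess

def register_to_template(in_path: str, ref_path: str, out_path: str) -> None:
    """Affine-register a NIfTI scan to a reference template with FSL FLIRT."""
    subprocess.run(
        ["flirt", "-in", in_path, "-ref", ref_path, "-out", out_path],
        check=True,  # raise if flirt exits with a non-zero status
    )
```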