3D model with varying slice counts

I have a CT scan dataset of skulls consisting of lesion and normal cases; the CT scans are in DICOM format. I want to do multi-class classification, but not every DICOM image series has the same depth: some series have 250 slices, some have 30. My dataset is really small, so I don't want to drop any 2D slices. At the end, I will convert the DICOM series into NIfTI for a 3D CNN.
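For the conversion step, something like the following sketch could work. SimpleITK and the helper name `dicom_series_to_nifti` are just one option I picked, not anything fixed by this thread; any DICOM-to-NIfTI tool would do.

```python
import SimpleITK as sitk

def dicom_series_to_nifti(dicom_dir: str, out_path: str) -> None:
    """Read one DICOM series from a folder and write it out as a single NIfTI volume."""
    reader = sitk.ImageSeriesReader()
    series_files = reader.GetGDCMSeriesFileNames(dicom_dir)  # slice files, sorted
    reader.SetFileNames(series_files)
    volume = reader.Execute()          # 3D image, spacing/origin preserved
    sitk.WriteImage(volume, out_path)  # e.g. "case_001.nii.gz"
```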

I am also thinking of padding every series up to a maximum depth of 256, filling with empty slices whenever a series has fewer than 256 slices.
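A minimal padding sketch, assuming the volumes are numpy arrays of shape (depth, height, width); the fill value of -1000 HU for "empty" (air) slices is my assumption and could just as well be 0 after intensity normalisation:

```python
import numpy as np

def pad_depth(volume: np.ndarray, target_depth: int = 256,
              fill_value: float = -1000.0) -> np.ndarray:
    """Pad a (depth, height, width) CT volume with 'empty' slices up to target_depth."""
    depth = volume.shape[0]
    if depth >= target_depth:
        return volume[:target_depth]  # truncate (or resample) the rare deeper series
    pad_after = target_depth - depth
    # append constant-valued slices at the end of the stack
    return np.pad(volume, ((0, pad_after), (0, 0), (0, 0)),
                  mode="constant", constant_values=fill_value)
```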

Is there a better option, both for training the model and during inference?

Could you share the reason why you have a varying number of slices per sample? What are the differences? That will determine the best way to do the pre-processing.

Thanks @sluo for the response.

The number of slices varies with the length of the body part that was scanned and with the slice thickness. What we observed from Kaggle datasets is that it ranges from roughly 50 to 200 slices.

Based on the number of slices, we need to build a 3D model that can process that many slices. In this case, how can we train the model if the number of slices (DICOM files) varies across CT scans?

It is possible to use scipy.ndimage.zoom to unify the number of slices, for example to 64.
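For example, a minimal sketch assuming a numpy volume shaped (depth, height, width) and a target of 64 slices:

```python
import numpy as np
from scipy import ndimage

def resize_depth(volume: np.ndarray, target_depth: int = 64) -> np.ndarray:
    """Interpolate along the slice axis so the volume ends up with ~target_depth slices."""
    depth_factor = target_depth / volume.shape[0]
    # order=1 (linear) is usually enough; in-plane resolution is left untouched.
    # Note: rounding inside zoom can occasionally give target_depth +/- 1 slices.
    return ndimage.zoom(volume, (depth_factor, 1.0, 1.0), order=1)
```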

Best regards,
Sheng

It's interesting, but that seems mainly useful for reducing 2D file size, whereas we are looking for a 3D approach.

I feel this is a generic problem that everyone needs to handle when processing CT or MRI scan data. Could you please check how others handle it when the number of frames or slices differs for each scan?

It is used in 3D CNNs for 3D medical image classification. Of course, there might be other ways to do the preprocessing (for example, FSL + FLIRT); I would recommend taking a look at the latest research papers for other approaches. Thanks.

Hi,
I assume all images cover the full skull but at different resolutions. You could use some out-of-the-box transformations such as the following (a short sketch follows the list):

  1. A Spacingd transformation to resample the volumes to 2x2x2 mm by setting pixdim=(2, 2, 2)
  2. Since you have CT images with HU intensities, you should clip them to a certain window using ScaleIntensityRanged
  3. A random crop using one of RandSpatialCropd, RandCropByPosNegLabeld, or RandCropByLabelClassesd, depending on how your labels are set up
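For concreteness, here is a minimal sketch of such a pipeline using MONAI dictionary transforms (the transform names above are from MONAI); the HU window of [-1000, 1000] and the 96x96x96 crop size are placeholders you would tune for skull CT:

```python
from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd, Spacingd,
    ScaleIntensityRanged, RandSpatialCropd,
)

train_transforms = Compose([
    LoadImaged(keys=["image"]),                        # read the NIfTI volume
    EnsureChannelFirstd(keys=["image"]),               # add the channel dimension
    Spacingd(keys=["image"], pixdim=(2.0, 2.0, 2.0),   # resample to 2x2x2 mm
             mode="bilinear"),
    ScaleIntensityRanged(keys=["image"],               # clip + rescale an HU window
                         a_min=-1000, a_max=1000,
                         b_min=0.0, b_max=1.0, clip=True),
    RandSpatialCropd(keys=["image"],                   # fixed-size random 3D patch
                     roi_size=(96, 96, 96), random_size=False),
])

# usage: sample = train_transforms({"image": "case_001.nii.gz"})
```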

Hope that helps
