Problem running NVIDIA DIGITS on images from the electronics domain

I have successfully trained NVIDIA DIGITS on the KITTI dataset and on artificial images with 4 black rectangles on a white background.
However, it does not work for images from the electronics domain (pictures of integrated circuits and X-ray photos of rollers). These pictures have a different size than the KITTI pictures, and they also come from a different domain. What is the reason, and what should be modified to fix the problem? Should the labeled objects in all pictures have approximately the same size?

For DIGITS image classification, all input images should usually have the same size, and the network model normally has an input layer whose dimensions match that image size.
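For example, with the Caffe backend the fixed input size is declared in the network's prototxt. A deploy file might begin like this (the 384x1248 height/width here are only illustrative KITTI-style values, not dimensions taken from this thread):

```
input: "data"
input_shape {
  dim: 1     # batch size
  dim: 3     # channels
  dim: 384   # height
  dim: 1248  # width
}
```

If the images in the database do not match these dimensions, the data will not line up with what the network expects.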

The database creation step in DIGITS should allow you to resize all incoming pictures to the same size as they are placed in the database.
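To make the idea concrete, here is a minimal, dependency-free sketch of the kind of size normalisation that happens at database-creation time. DIGITS itself uses proper image interpolation; the nearest-neighbour helper and the toy 4x4 target size below are purely illustrative.

```python
def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of a 2-D list of pixel values.

    Illustrates the principle: whatever size an image arrives at,
    it leaves at the fixed size the network's input layer expects.
    """
    in_h, in_w = len(img), len(img[0])
    return [[img[(y * in_h) // out_h][(x * in_w) // out_w]
             for x in range(out_w)]
            for y in range(out_h)]

# Two "images" of different sizes...
a = [[1, 2],
     [3, 4]]                                            # 2x2
b = [[i * 5 + j for j in range(5)] for i in range(5)]   # 5x5

# ...both become 4x4 before they would be stored in the database.
a4 = resize_nearest(a, 4, 4)
b4 = resize_nearest(b, 4, 4)
```

Note that plain resizing changes the aspect ratio when source and target shapes differ, which distorts objects; padding or cropping strategies avoid that at the cost of extra preprocessing.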

Thanks for the fast answer. I had DIGITS resize all my photos to 1224x370. Now mAP is about 30%. It seems that small objects are especially hard to find. Another problem is that for big objects the algorithm "thinks" they are 2 objects and splits each big object into two small ones. Would training separately for each object size give a better result?