Trained model classifies the image as negative


I created a new classification model using DIGITS and trained it to detect human eyes using cropped 32x16 images sourced from GitHub. The model correctly predicts "eye" when given a cropped eye from an image taken from the web. However, it predicts "no-eye" for a cropped eye taken from an IP-camera source.

I suspect this is due to the difference in resolution between the training images (higher resolution) and the test image (lower resolution).
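One thing worth checking is whether the IP-camera crops are being preprocessed the same way as the training crops before inference. A minimal sketch (assuming Pillow/NumPy and that the training crops were grayscale, 32x16, scaled to [0, 1] — adjust to match your actual DIGITS preprocessing):

```python
import numpy as np
from PIL import Image

def preprocess_for_lenet(crop, size=(32, 16)):
    """Bring a cropped eye region to the network's input format.

    `crop` is an HxW or HxWx3 uint8 array; `size` is (width, height),
    matching the 32x16 training crops. `preprocess_for_lenet` is a
    hypothetical helper, not a DIGITS API.
    """
    # LeNet-style models typically take a single grayscale channel
    img = Image.fromarray(crop).convert("L")
    # Resample the low-resolution camera crop to the training size
    img = img.resize(size, Image.BILINEAR)
    # Scale to [0, 1] the same way the training data was scaled
    return np.asarray(img, dtype=np.float32) / 255.0

# Simulate an odd-sized, color IP-camera crop
cam_crop = (np.random.rand(24, 40, 3) * 255).astype(np.uint8)
inp = preprocess_for_lenet(cam_crop)
print(inp.shape)  # (16, 32): height 16, width 32
```

If the camera crops were being fed in at a different size, channel order, or scale than the training data, the model would see inputs far outside its training distribution, which can produce exactly this kind of consistent misclassification.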

I used the LeNet model.

My questions:
a) Does the resolution of the training and test images matter for a classification model?
b) Does variation in light intensity matter for classification?
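Regarding (b): lighting variation does matter if the training set did not cover it. A common mitigation is to normalize contrast before inference (and/or augment the training data the same way). A minimal sketch using Pillow's histogram equalization — an illustrative choice, not necessarily what your pipeline uses:

```python
import numpy as np
from PIL import Image, ImageOps

def equalize_crop(crop):
    """Histogram-equalize a crop to reduce lighting differences.

    `crop` is a uint8 array (grayscale or color); output is a
    grayscale uint8 array whose intensities span a wider range.
    """
    img = Image.fromarray(crop).convert("L")
    return np.asarray(ImageOps.equalize(img))

# A dim, low-contrast crop: intensities only in [100, 150]
dim = np.tile(np.linspace(100, 150, 32).astype(np.uint8), (16, 1))
eq = equalize_crop(dim)
print(dim.max() - dim.min(), eq.max() - eq.min())  # equalized range is wider
```

Applying the same normalization to both training and test images makes the model less sensitive to camera exposure and scene lighting.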