Possible to train faster rcnn in batch?

All my training images are the same size. Is it possible to train Faster R-CNN in batches?
I can't find an option for batch training in the configuration file.

Hi batu,
What do you mean by "train in batch"?
There is actually a "batch_size_per_gpu" field in the training spec. You can set it.
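For reference, the field would sit in the training section of the spec roughly like this. This is only a sketch: the exact block name and surrounding fields depend on the network and TLT version, so check the sample spec shipped with your docker.

```
training_config {
  # Hypothetical example value: number of images processed per GPU
  # in one forward/backward pass.
  batch_size_per_gpu: 4
}
```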

I mean training on more than one image per forward/backward pass. SSD can be trained in batches. Faster R-CNN accepts different image sizes during training, so the original training algorithm processes a single image per iteration. Since all my images are the same size, does TLT's FasterRCNN allow training on multiple images in one iteration?

For the 1.0.1 docker, according to https://docs.nvidia.com/metropolis/TLT/tlt-release-notes/index.html, FasterRCNN currently supports only single-GPU training with a batch size of 1.

Thank you.