My question is: why does the same model use more GPU memory when deployed as a TensorRT 3 engine than when deployed with Caffe?
(My network is ResNet-101.)
GPU memory used by the TensorRT engine: about 1100 MB (tested on a TX2)
GPU memory used by the Caffe deployment: about 400 MB (tested on a Titan and a 1080)
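For context, a back-of-envelope sketch of the weight storage alone (assuming the commonly cited figure of roughly 44.5M parameters for ResNet-101, stored as FP32) suggests the raw weights account for well under 200 MB, so most of both numbers above is runtime overhead (CUDA context, cuDNN/TensorRT workspace, activation buffers) rather than the model itself:

```python
# Rough estimate of ResNet-101 weight storage in FP32.
# The parameter count (~44.5M) is an approximate, commonly cited figure.
params = 44.5e6          # approximate parameter count of ResNet-101
bytes_per_param = 4      # FP32 weights
weights_mib = params * bytes_per_param / (1024 ** 2)
print(f"Approx. weight memory: {weights_mib:.0f} MiB")
```

This is only an estimate; the remaining memory in each deployment comes from the framework's allocations, which the two stacks size very differently.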