We recommend using TensorRT for deep models on Jetson.
TensorRT provides an fp16 mode which can cut the required memory roughly in half.
This allows users to run a deeper model on a memory-limited device such as the TX2.
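The halving of weight memory with fp16 is easy to check for raw tensor storage; a minimal NumPy sketch (independent of TensorRT itself, with an arbitrarily chosen layer shape) illustrates the saving:

```python
import numpy as np

# A dummy weight tensor, roughly the size of a large 3x3 conv layer.
w32 = np.random.randn(512, 512, 3, 3).astype(np.float32)
w16 = w32.astype(np.float16)  # same tensor, half-precision storage

print(w32.nbytes)  # 9437184 bytes
print(w16.nbytes)  # 4718592 bytes -- exactly half
```

In TensorRT itself, fp16 is selected at engine-build time on the builder, and it also needs hardware support for half-precision arithmetic, which the TX2's Pascal GPU provides.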
When I looked into TensorRT, these were my observations:
1. TensorRT supports the SSD architecture.
2. TensorRT provides a C++ sample of Faster R-CNN built from Caffe.
Can you please share a link or references showing how to use TensorRT to run Faster R-CNN + Inception v2 models?