Jetson TX2 for production

We are looking for an AI board to run inference for our DL models (mainly MobileNet and Mask R-CNN).
We are investigating whether the Jetson TX2 could be a good option, but we would like some confirmation that it can run those models 24/7 in a production environment, at a minimum of 12 FPS for MobileNet.
Also, could it support more than one GigE camera (with a PCIe card)?

Please feel free to add any documentation that I might have missed during my research.

Thanks

Hi evabeacons,

Sorry for the late reply. The TX2 should be capable of running those use cases in a 24/7 environment; please refer to the performance results for our DeepStream SDK at https://developer.nvidia.com/deepstream-sdk
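To check the 12 FPS requirement against your own models on the actual device, a simple timing harness can help. This is a minimal sketch: `infer` here is a hypothetical placeholder for whatever inference call you end up using (TensorRT engine execution, a DeepStream pipeline probe, etc.), not a real NVIDIA API.

```python
import time

def measure_fps(infer, frames=100, warmup=10):
    """Return average FPS of `infer` over `frames` runs, after `warmup` runs."""
    for _ in range(warmup):          # warm-up iterations, excluded from timing
        infer()
    start = time.perf_counter()
    for _ in range(frames):
        infer()
    elapsed = time.perf_counter() - start
    return frames / elapsed

# Placeholder standing in for a real inference call on your model.
def infer():
    time.sleep(0.005)  # simulate ~5 ms per frame

print(f"{measure_fps(infer):.1f} FPS")
```

Run it with your real inference call substituted in; sustained results well above 12 FPS under thermal load would give you the confidence you are after.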

Regarding the cameras, please see the list of partner-supported cameras at https://developer.nvidia.com/embedded/jetson-partner-supported-cameras