Depth sensing is a critical function for robotic tasks such as localization, mapping, and obstacle detection. There has been significant and growing interest in depth estimation from a single RGB image, due to the relatively low cost and small size of monocular cameras. This project explores learning-based monocular depth estimation, targeting real-time inference on embedded systems. We propose an efficient and lightweight encoder-decoder network architecture and apply network pruning to further reduce computational complexity and latency. We deploy our proposed network, FastDepth, on the NVIDIA Jetson TX2 platform, where it runs at 178 fps on the GPU and at 27 fps on the CPU, with active power consumption under 10 W. FastDepth achieves close to state-of-the-art accuracy on the NYU Depth v2 dataset.
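As a rough illustration of why lightweight encoder-decoder designs can reach real-time rates on embedded hardware, the sketch below compares the weight count of a standard convolution against a depthwise separable convolution, the building block of MobileNet-style encoders commonly used for efficient depth estimation. The layer sizes are illustrative assumptions, not FastDepth's actual configuration.

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: one k x k filter per (input, output) channel pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise step: one k x k filter per input channel,
    # followed by a 1x1 pointwise convolution that mixes channels.
    return c_in * k * k + c_in * c_out

# Hypothetical layer: 256 -> 256 channels, 3x3 kernel.
std = conv_params(256, 256, 3)                  # 589,824 weights
sep = depthwise_separable_params(256, 256, 3)   # 67,840 weights
print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For this layer the separable factorization uses roughly 8.7x fewer weights (and proportionally fewer multiply-accumulates), which is the kind of saving that, combined with pruning, makes CPU-rate inference feasible.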