I have downloaded the latest version of the SDK (1.5), and its manual PDF states that NVIDIA recommends running DeepStream on a hardware platform with an NVIDIA Tesla® P4 or P40 graphics card. Is this just a recommendation or a hard system requirement? i.e., can we deploy SDK 1.5 on a Jetson TX1?
On the GitHub page of Jetson Inference ( https://github.com/dusty-nv/jetson-inference ) there is source code, and building it from source is explained in detail. Can we use that source code instead of downloading DeepStream SDK 1.0 (which is currently unavailable for download) from NVIDIA's website?
Hi burakmandira, that recommendation refers to "DeepStream for Tesla"; there is also a version in development, "DeepStream for Jetson". Both share the same API for portability across platforms; however, the binaries for the Tesla version won't run on Jetson.
You can certainly follow the jetson-inference repo to get started training and deploying your own deep learning networks; the tutorial highlights the DIGITS -> TensorRT workflow. jetson-inference is intended primarily for deployment on Jetson, although some folks on GitHub have it building and running on x86 with a discrete GPU (unofficially).
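For reference, here's a minimal sketch of what deployment with jetson-inference looks like, loosely based on the repo's imagenet-console classification sample. Exact function signatures have changed between repo versions, so treat this as an outline of the flow (load image -> create TensorRT-optimized network -> classify) rather than copy-paste code:

```cpp
#include "imageNet.h"    // image recognition network (jetson-inference)
#include "loadImage.h"   // image loading helper

#include <cstdio>

int main(int argc, char** argv)
{
    if (argc < 2)
    {
        printf("usage: imagenet-sketch <image.jpg>\n");
        return 0;
    }

    // load the image into shared CPU/GPU (zero-copy) memory
    float4* imgCPU = NULL;
    float4* imgGPU = NULL;
    int width = 0, height = 0;

    if (!loadImageRGBA(argv[1], &imgCPU, &imgGPU, &width, &height))
    {
        printf("failed to load image '%s'\n", argv[1]);
        return 1;
    }

    // create the recognition network (loads GoogleNet by default,
    // optimized by TensorRT on first run)
    imageNet* net = imageNet::Create();
    if (!net)
        return 1;

    // classify the image and print the top result
    float confidence = 0.0f;
    const int cls = net->Classify((float*)imgGPU, width, height, &confidence);

    if (cls >= 0)
        printf("class %i (%s), confidence %.4f\n",
               cls, net->GetClassDesc(cls), confidence);

    delete net;
    return 0;
}
```

In the repo this kind of program is built via the project's CMake setup along with the other samples, so the easiest path is to follow the repo's build instructions and adapt one of the existing samples.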