Hi,
I know it’s probably too early for such a topic, but I would like to understand which technologies NVIDIA recommends for writing Xavier-accelerated robotics, computer vision, and deep learning applications.
I understand that JetPack will be available, so I expect a set of low-level libraries more or less in line with what was available for earlier Jetson modules.
Will OpenCV be available in JetPack, with Xavier CPU/GPU optimizations enabled? Is there any information on the performance of the Halide or OpenCL backends on Xavier?
Will the DeepStream SDK still be available and maintained for Xavier? Can I expect DeepStream applications developed on Jetson TX2 to run out of the box on Xavier with few or no modifications (and with higher performance)?
Is the Isaac SDK the recommended SDK for new applications? If so, it would be nice to have a bit more insight into it starting now. If I’m not wrong, it’s still in early access at the moment.
Will TensorRT still be the recommended runtime for (accelerated) inference on Xavier?
In general, with Jetson I found good documentation for each individual framework/tool/SDK, but I found it very hard to decide upfront which would be the best tool for the job. It would be nice to have some pointers in that direction for Xavier.
Thanks
Hi benelgiac, yes, JetPack will be available for Jetson Xavier with libraries like CUDA, cuDNN, TensorRT, OpenCV, and VisionWorks.
OpenCV 3.3.0 will be provided with ARM CPU optimizations enabled; you can rebuild it with CUDA enabled if desired.
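If you do rebuild with CUDA, a quick sanity check such as the following will confirm the CUDA module was compiled in (a minimal sketch; both calls are part of the standard OpenCV 3.x API):

```cpp
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/core/cuda.hpp>

int main()
{
    // Prints the full build configuration, including the CUDA flags
    std::cout << cv::getBuildInformation() << std::endl;

    // Returns 0 if OpenCV was built without CUDA support
    int devices = cv::cuda::getCudaEnabledDeviceCount();
    std::cout << "CUDA-enabled devices visible to OpenCV: " << devices << std::endl;
    return 0;
}
```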
OpenCL hasn’t been supported on Jetson in the past, and it is not available for Jetson Xavier either.
The DeepStream SDK for Jetson Xavier will be available in an upcoming release; it should run Jetson TX2 applications upgraded from DeepStream v1.5.
Isaac SDK is nearing its early access release; please check back in a few weeks for more info.
Yes, that’s correct, and the Two Days to a Demo tutorial uses TensorRT for inferencing on Jetson.
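For anyone new to TensorRT, the basic inference flow in C++ looks roughly like the sketch below. It assumes an already-serialized engine file (`model.engine` is a placeholder name), uses the TensorRT 4/5-era API, and omits error handling:

```cpp
#include <fstream>
#include <iostream>
#include <vector>
#include <NvInfer.h>
#include <cuda_runtime_api.h>

// Minimal logger required by the TensorRT runtime
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // Read a pre-built serialized engine from disk
    std::ifstream file("model.engine", std::ios::binary | std::ios::ate);
    const size_t size = file.tellg();
    file.seekg(0);
    std::vector<char> blob(size);
    file.read(blob.data(), size);

    // Deserialize the engine and create an execution context
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), size, nullptr);
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();

    // Allocate a device buffer for each binding (inputs and outputs)
    std::vector<void*> bindings(engine->getNbBindings());
    for (int i = 0; i < engine->getNbBindings(); i++)
    {
        nvinfer1::Dims dims = engine->getBindingDimensions(i);
        size_t count = 1;
        for (int d = 0; d < dims.nbDims; d++)
            count *= dims.d[d];
        cudaMalloc(&bindings[i], count * sizeof(float));
    }

    // Copy input data in with cudaMemcpy, then run inference (batch size 1)
    context->execute(1, bindings.data());
    return 0;
}
```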
Thanks Dusty, that sounds like good news and a smooth transition path from TX1/TX2 to Xavier. I’ll look forward to the software releases you mention.
Cheers
Giacomo
Dusty,
Will the jetson-inference repository be updated soon for the Xavier module? Also, will custom models created on the Jetson TX2 require any changes to run on the Xavier module? Thanks again.
I recently updated the jetson-inference code to build and run on the Xavier GPU (see commits e2d7d7 and 54c6aa).
I’ll also make another update that enables use of the DLAs and INT8 mode on the GPU.
Not that I know of; for the GPU it should be straightforward. If you want to run a custom model on the DLA and not all of its layers are supported there, you’ll need to enable GPU fallback (this will be an option in the API).
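To give an idea of what that involves under the hood, here is a minimal sketch of the TensorRT 5-style builder calls for targeting the DLA with GPU fallback (this is not the eventual jetson-inference API; the helper function name is mine):

```cpp
#include <NvInfer.h>

// Configure an existing TensorRT builder to place layers on a DLA core,
// letting unsupported layers fall back to the GPU.
void configureForDLA(nvinfer1::IBuilder* builder)
{
    // The DLA requires reduced precision (FP16 or INT8)
    builder->setFp16Mode(true);

    // Place layers on DLA core 0 by default
    builder->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);
    builder->setDLACore(0);

    // Any layer the DLA can't run is scheduled on the GPU instead
    builder->allowGPUFallback(true);

    // For INT8 on the GPU you would instead call builder->setInt8Mode(true)
    // and supply a calibrator.
}
```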
Thanks for all the great feedback (and tutorials on github) @dusty_nv!
I was wondering if you had an ETA for the DeepStream SDK release that would work on the new Xavier?
Thanks!
Sarah
We are targeting a new DeepStream SDK release in December, before the end of the year.