Build PyTorch v1.8.1 from source for the DRIVE AGX

Please provide the following info (check/uncheck the boxes after clicking “+ Create Topic”):
Software Version
DRIVE OS Linux 5.2.0
DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version
other

Target Operating System
Linux
QNX
other

Hardware Platform
NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)
other

SDK Manager Version
1.5.0.7774
other

Host Machine Version
native Ubuntu 18.04
other

Hey,

I am trying to build PyTorch v1.8.1 from source for the AGX, following this blog post: Build the pytorch from source for drive agx xavier

But I keep getting errors:

I am trying to build PyTorch to get a CUDA-compatible package, because I want to run YOLOv5 (GitHub - ultralytics/yolov5: YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite) as a Python ROS 2 node.

Hi yannick.otten.2010,

Could you please update your registration info to use your business email instead of your private one, and add your full name and company?

Thanks

Hi @kayccc ,
I updated my registration info.

Dear @yannick.otten,
I can see errors related to compiling .cu files. I would expect this on DRIVE OS 5.2.0, since nvcc is not available on the target, whereas DRIVE SW 10 does have nvcc on the target.

PyTorch is not officially supported or optimized for the DRIVE platform. We recommend converting your model to ONNX, then to TensorRT, and using the TensorRT engine for inference to get optimal performance. Please check Developer Guide :: NVIDIA Deep Learning TensorRT Documentation for more details.

Hey @SivaRamaKrishnaNV ,

I don’t understand why PyTorch is supported on the Jetson and not on the DRIVE platform. Converting models to ONNX and then to TensorRT is a really time-consuming workflow and shouldn’t be a “solution”.

The example you referenced uses the TensorRT Python module, but according to the documentation the DRIVE platform doesn’t support this module: TensorRT Support Matrix :: NVIDIA Deep Learning SDK Documentation

Dear @yannick.otten,
Note that the releases for Jetson and DRIVE are different. TensorRT is well optimized for the DRIVE platform, and we are optimizing it further and adding support for new ops/layers in each release. We provide parsers to integrate ONNX models into TensorRT, and we recommend converting your model to ONNX for optimal performance. You can prepare your model and convert it to ONNX on the host machine, then use the TRT APIs for the ONNX → TRT conversion on the target. Please check the sampleOnnxMNIST sample for converting ONNX → TRT.

As I said, we do not officially support build instructions for PyTorch, as the versions keep changing. You may seek help from the community.