How to run Python programs on the PX2

@SivaRamaKrishnaNV Hi, I have some problems: how can I run Python programs on the PX2, and how do I install numpy on the PX2? Thanks for your reply.

Dear @liubingxiang,
Python is already installed on DRIVE PX2. May I know why you are looking to install numpy? Did you check using pip? Do you plan to use DNN frameworks?
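
A minimal sketch of that check, run with the system Python on the PX2 (the interpreter and the pip invocation in the comment are the usual defaults, not confirmed for this particular image):

```python
# Check whether numpy is already available before trying to install it.
try:
    import numpy
    print("numpy", numpy.__version__, "is already installed")
except ImportError:
    print("numpy is missing; try installing it with pip, e.g. pip install --user numpy")
```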

@SivaRamaKrishnaNV There is a lane detection program that depends on numpy and other packages, but when I try to install numpy and the rest through pip, the installation fails.

@SivaRamaKrishnaNV Hi, how can I install PyTorch and opencv-python on the PX2?

@SivaRamaKrishnaNV Hi, I need your help: how can I install PyTorch on the PX2? I have tried several other approaches, but none of them work. Looking forward to your reply.

Dear @liubingxiang,
Note that DRIVE PX2 is an inference platform. You can train your PyTorch model on the host and use TensorRT on DRIVE PX2 to optimize the model further and perform inference.
We have not tested installing PyTorch or opencv-python on DRIVE PX2, as they are not officially supported.
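
The host-side half of that workflow is usually a PyTorch → ONNX export. A minimal sketch, where the stand-in network, file names, input shape, and opset are placeholders rather than anything from this thread:

```python
import torch
import torch.nn as nn

# Stand-in for a trained lane-detection network; replace with your own model
# and load its trained weights before exporting.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1),
)
model.eval()

dummy_input = torch.randn(1, 3, 288, 512)   # N, C, H, W
torch.onnx.export(
    model,
    dummy_input,
    "lanenet.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=9,   # older opsets are generally safer for older ONNX parsers
)
```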


@SivaRamaKrishnaNV Thanks for your reply, I appreciate it. Are there any examples showing how to test my PyTorch model on the host and use TensorRT on the PX2 to optimize it?

@SivaRamaKrishnaNV Are there any examples that explain how to take my PyTorch model on the host and use TensorRT on DRIVE PX2 to optimize the model further and perform inference?

Dear @liubingxiang,
Could you check the TensorRT Developer Guide :: Deep Learning SDK Documentation? If you can convert your PyTorch model to ONNX, the ONNX → TRT conversion can be done using the TRT C++ APIs; the same guide covers this. Any unsupported layers have to be implemented as custom plugins in TensorRT.
Note that DRIVE PX2 has TRT 4.x, which is quite old, and there are no further releases targeted for DRIVE PX2. Please consider upgrading to the DRIVE AGX platform to get the latest SW releases.
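
On DRIVE PX2 with TRT 4.x the documented route is the C++ ONNX parser API or the trtexec tool, as the reply says. For illustration only, the equivalent ONNX → TRT build steps look roughly like this in the TensorRT Python bindings (API names follow TRT 7.x on x86; these bindings are not assumed to exist on the PX2, and the file names are placeholders):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, workspace_bytes=1 << 28):
    # Create an explicit-batch network and parse the ONNX graph into it.
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("Failed to parse the ONNX model")
    # Build and return the optimized engine.
    config = builder.create_builder_config()
    config.max_workspace_size = workspace_bytes
    return builder.build_engine(network, config)

engine = build_engine("lanenet.onnx")
with open("lanenet.trt", "wb") as f:
    f.write(engine.serialize())
```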

Dear @SivaRamaKrishnaNV, I found information on the website Sample Support Guide :: NVIDIA Deep Learning TensorRT Documentation and took a screenshot.


I noticed section 5.13, “Hello World” For TensorRT Using PyTorch And Python. It is similar to my intended solution using Python and PyTorch, but I could not find the folder the screenshot refers to on the PX2 (the picture is below, for reference). Do you know a solution?

Dear @liubingxiang,
Note that you are referring to TRT 7.2 (the latest x86 release), but the last release for DRIVE PX2 ships TensorRT 4.x.

Hi @SivaRamaKrishnaNV, after flashing the PX2 I get CUDA 9.2, but I need CUDA 10. Do you know how to install CUDA 10 on the PX2?
I also found a document, “TensorRT-Developer-Guide.pdf (TensorRT 4.0 Release Candidate)”, which contains a sample about training a model in PyTorch. Do you know how to install PyTorch on the PX2? I took a screenshot, which you can see below.

Dear @liubingxiang,
Note that the only way to upgrade the SW stack on DRIVE PX2 is via SDK Manager. It seems you are already running the latest DRIVE PX2 release, and there are no more releases targeted for DRIVE PX2. Please consider upgrading to the DRIVE AGX platform to get DRIVE SW updates.
The DRIVE platform is not meant for training; it is expected to be used for inference. The section you pointed to only shows a code snippet for training a model, which should be done on the host.
The expected workflow with DRIVE PX2 is: train your model on the host and convert it to ONNX format, then convert ONNX → TRT on DRIVE PX2. I hope it is clear now.
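
Before copying the exported model to the PX2, it can save time to sanity-check the ONNX graph on the host. A small sketch (the file name is a placeholder):

```python
import onnx

# Load the exported model and run the ONNX checker before copying it to the
# PX2; this catches malformed graphs early, while you are still on the host.
model = onnx.load("lanenet.onnx")
onnx.checker.check_model(model)
print("ONNX graph looks structurally valid")
print("Inputs:", [i.name for i in model.graph.input])
print("Outputs:", [o.name for o in model.graph.output])
```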

@SivaRamaKrishnaNV Thanks for your reply, it's clear. If I train my network with PyTorch on the host and convert it to TRT format, do I still need to install PyTorch on the PX2 when I run inference with TensorRT on the PX2?

Dear @liubingxiang,
Once your model (trained in any framework) is ready on the host, you need to convert it to ONNX/UFF format on the host. On the DRIVE platform, the ONNX/UFF → TRT conversion can be done using the TRT C++ APIs or the trtexec tool. So you don't need to install any DNN framework on the DRIVE platform to perform inference; the TRT model alone is enough.
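
For completeness, this is roughly what inference from a serialized TRT engine looks like using the TensorRT Python bindings plus pycuda (TRT 7.x API names, shown for illustration; on DRIVE PX2 with TRT 4.x the same steps would go through the C++ runtime API, and the engine file name, binding order, and dtype below are assumptions):

```python
import numpy as np
import pycuda.autoinit        # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine built earlier (e.g. "lanenet.trt").
with open("lanenet.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host/device buffers sized from the engine bindings (assumes one
# input binding at index 0 and one output binding at index 1, both float32).
h_input = np.random.random(trt.volume(engine.get_binding_shape(0))).astype(np.float32)
h_output = np.empty(trt.volume(engine.get_binding_shape(1)), dtype=np.float32)
d_input = cuda.mem_alloc(h_input.nbytes)
d_output = cuda.mem_alloc(h_output.nbytes)

# Copy input to the device, run the engine, and copy the result back.
cuda.memcpy_htod(d_input, h_input)
context.execute_v2([int(d_input), int(d_output)])
cuda.memcpy_dtoh(h_output, d_output)
print("output binding shape:", engine.get_binding_shape(1))
```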