Cross-compiled application cannot use the DLA on the Xavier NX device

Hi all,

When sampleGoogleNet is cross-compiled, it cannot use the DLA, but the same code compiled directly on the NX device can run on the DLA.

More details are in this GitHub issue:
Cant use DLA in xavier nx · Issue #1328 · NVIDIA/TensorRT (github.com)

Hi,

TensorRT needs to evaluate the algorithm when creating the engine.
Since DLA is standalone hardware, it is not available on the host when cross-compiling.
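For reference, below is a minimal sketch of the DLA-related builder calls, assuming the TensorRT 7.x/8.x C++ API used by sampleGoogleNet (the helper name `buildDlaEngine` is hypothetical). The engine-build call at the end is the step that times kernels and validates layer support on the local GPU/DLA hardware, which is why it cannot run on an x86 host:

```cpp
#include "NvInfer.h"

// Hypothetical helper: the network is assumed to be already parsed/populated.
nvinfer1::ICudaEngine* buildDlaEngine(nvinfer1::IBuilder& builder,
                                      nvinfer1::INetworkDefinition& network)
{
    nvinfer1::IBuilderConfig* config = builder.createBuilderConfig();

    // DLA runs FP16 or INT8 only; enable FP16 here.
    config->setFlag(nvinfer1::BuilderFlag::kFP16);

    // Place layers on DLA core 0, falling back to the GPU for unsupported layers.
    config->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);
    config->setDLACore(0);
    config->setFlag(nvinfer1::BuilderFlag::kGPU_FALLBACK);

    // This call profiles kernels and checks DLA layer support on the *local* hardware,
    // so it only succeeds on a device that actually has a DLA (e.g. Xavier NX).
    return builder.buildEngineWithConfig(network, *config);
}
```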

Thanks.

Hi,

  • Is there another way to cross-compile code that uses the DLA, for example by providing a hardware simulator? Will there be any support for that in the future?

  • If the DLA is used, must the code be compiled on a device that has the DLA?

Hi,

Sorry for the confusion.

You can cross-compile an app that uses the DLA-related APIs.
For example, you can cross-compile the TensorRT plugin library on a desktop environment.

However, serializing a TensorRT engine file cross-platform is not supported, for both GPU and DLA.
That’s because the serialized engine is hardware-dependent, and needs to be created on the target directly.
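To illustrate that split (cross-compile the binary on the host, build the engine on the target), here is a hedged sketch of the on-target, run-once build-and-serialize step. The Logger class, helper name, and plan file name are placeholders, and the API calls assume TensorRT 7.x/8.x:

```cpp
#include <cstdio>
#include <fstream>
#include "NvInfer.h"

// Placeholder logger; any nvinfer1::ILogger implementation works.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING) std::printf("%s\n", msg);
    }
} gLogger;

// Run this once on the Xavier NX itself: build with DLA enabled and cache the engine.
void buildAndCacheOnTarget(nvinfer1::IBuilder& builder, nvinfer1::INetworkDefinition& network)
{
    // A DLA must actually be present; an environment without DLA reports 0 cores here.
    if (builder.getNbDLACores() == 0) return;

    nvinfer1::IBuilderConfig* config = builder.createBuilderConfig();
    config->setFlag(nvinfer1::BuilderFlag::kFP16);
    config->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);
    config->setDLACore(0);
    config->setFlag(nvinfer1::BuilderFlag::kGPU_FALLBACK);

    nvinfer1::ICudaEngine* engine = builder.buildEngineWithConfig(network, *config);
    nvinfer1::IHostMemory* plan = engine->serialize();

    // Cache the plan; it is only valid on this device and TensorRT version.
    std::ofstream out("googlenet_dla.plan", std::ios::binary);
    out.write(static_cast<const char*>(plan->data()), plan->size());
}
```

The cross-compiled app can then deserialize the cached plan with nvinfer1::createInferRuntime / deserializeCudaEngine on later runs, so only the first run on the Xavier NX pays the engine-build cost.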

Thanks.

Hi, AastaLLL

  • The latest information: if a Caffe model is used with the cross-compiled application, the DLA core cannot be used.

  • If an ONNX model is used with the cross-compiled application, the DLA core can be used.

  • Could you give me more information about the difference between using a Caffe model and using an ONNX model?
    Thanks for your time.

Hi,

Could you share more information about the DLA usage here?
For cross-compiling, do you generate an app and run it on the device?
Or do you also serialize a TensorRT engine?

Since a TensorRT engine is not portable, it cannot be generated in an environment other than the Xavier NX itself.

Also, we do support DLA placement for both ONNX and Caffe models.
Would you mind sharing detailed steps to reproduce the issue and error you meet?
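As a reference point, DLA placement is configured on the IBuilderConfig and does not depend on which parser produced the network, so a Caffe-parsed network takes the same DLA settings as an ONNX one. A rough sketch follows; the parser calls assume the standard nvcaffeparser1/nvonnxparser APIs, and the file names are placeholders (note the ONNX parser expects an explicit-batch network, while the Caffe parser uses an implicit-batch one):

```cpp
#include "NvInfer.h"
#include "NvCaffeParser.h"
#include "NvOnnxParser.h"

// Populate a network from either format; the DLA settings below do not change.
void parseModel(nvinfer1::INetworkDefinition& network, nvinfer1::ILogger& logger, bool useOnnx)
{
    if (useOnnx)
    {
        // Requires a network created with the kEXPLICIT_BATCH flag.
        auto* parser = nvonnxparser::createParser(network, logger);
        parser->parseFromFile("googlenet.onnx",
                              static_cast<int>(nvinfer1::ILogger::Severity::kWARNING));
    }
    else
    {
        // Requires an implicit-batch network.
        auto* parser = nvcaffeparser1::createCaffeParser();
        parser->parse("deploy.prototxt", "googlenet.caffemodel", network,
                      nvinfer1::DataType::kFLOAT);
    }
}

// Identical DLA placement for both parser paths.
void enableDla(nvinfer1::IBuilderConfig& config)
{
    config.setFlag(nvinfer1::BuilderFlag::kFP16);
    config.setDefaultDeviceType(nvinfer1::DeviceType::kDLA);
    config.setDLACore(0);
    config.setFlag(nvinfer1::BuilderFlag::kGPU_FALLBACK);
}
```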

Thanks.
