TensorRT needs to evaluate and time candidate algorithms on the actual hardware when creating the engine.
Since DLA is standalone hardware, it is not available on the host when cross-compiling.
You can cross-compile an app that uses the DLA-related APIs.
For example, you can cross-compile the TensorRT plugin library on a desktop environment.
However, serializing a TensorRT engine file cross-platform is not supported, for both GPU and DLA.
That’s because the serialized engine is hardware-dependent and needs to be created directly on the target.
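As a sketch, you can build and serialize the engine directly on the target with the trtexec tool shipped with TensorRT (the model file name here is a placeholder):

```shell
# Run this on the Xavier NX itself, not on the host.
# model.onnx is a placeholder for your own network.
trtexec --onnx=model.onnx \
        --saveEngine=model.engine  # the saved engine is only valid on this device
```

The resulting model.engine can then be deserialized by your cross-compiled app at runtime.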
Could you share more information about the DLA usage here?
For cross-compiling, do you generate an app and run it on the device?
Or do you also serialize a TensorRT engine?
Since a TensorRT engine is not portable, it cannot be generated in any environment other than the Xavier NX itself.
Also, we do support DLA placement for both ONNX and Caffe models.
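For example, trtexec can place either model format on a DLA core, with GPU fallback for unsupported layers (file names and the output blob name are placeholders for your own model):

```shell
# ONNX model on DLA core 0, falling back to the GPU for unsupported layers
trtexec --onnx=model.onnx --useDLACore=0 --allowGPUFallback

# Caffe model: prototxt plus caffemodel; --output is the model-specific blob name
trtexec --deploy=deploy.prototxt --model=net.caffemodel \
        --output=prob --useDLACore=0 --allowGPUFallback
```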
Would you mind sharing the detailed steps to reproduce the issue, along with the error you met?