Cross compiling (TensorRT serialization, OpenCV, etc.)


I’d like to know if anyone here has managed to cross-compile OpenCV (or any other library, like gRPC or similar) from a PC. Is there any documentation about the toolchain?
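For context, what I imagine is a CMake toolchain file along these lines (the compiler triplet and sysroot path are guesses on my part, not something I have working):

```cmake
# toolchain-aarch64.cmake — hypothetical toolchain file for cross-compiling
# OpenCV for a Jetson (aarch64) from an x86_64 PC.
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)

# Cross compilers from the distro package aarch64-linux-gnu-gcc (assumed installed)
set(CMAKE_C_COMPILER aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)

# Sysroot copied from the target board (placeholder path)
set(CMAKE_SYSROOT /opt/jetson/sysroot)
set(CMAKE_FIND_ROOT_PATH /opt/jetson/sysroot)

# Search libraries/headers only in the sysroot, programs only on the host
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

which would then be passed to the OpenCV build with `cmake -DCMAKE_TOOLCHAIN_FILE=toolchain-aarch64.cmake ..` — but I don’t know if this is the recommended approach.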

One important thing I’d like to know too is about TensorRT serialized engines. Is it possible to cross-generate them? So far I have been generating them on the target board, but it’s very slow…
Another question related to TensorRT serialized engines: what are their dependencies? I’d like to have a kind of model zoo of TensorRT serialized engines, but I’m not sure what each engine depends on (GPU? TensorRT version? cuDNN version? etc.).

I’d like to generate these things with Jenkins (or any CI), and I’d prefer not to need a Jetson of each type just to compile or serialize models; it would save a lot of time.



A TensorRT engine is not portable, so you will need to generate it directly on the target machine.
But you can serialize the engine to a file so you don’t need to rebuild it every time.

In general, please install everything from the same JetPack version.
The toolkit combination within a given JetPack release is verified and tested together.


We already serialize and keep the serialized engine; I just wanted to be sure there was no way to cross-generate them.
Thanks for the answer!

Hi @AastaLLL,

Just one question more about what I asked before.
I keep the TensorRT serialized engines on a server. Can you tell me exactly what a serialized engine depends on? I mean, does it depend on the CUDA version + TensorRT version + GPU compute capability?
Or does the CUDA version not matter?
Is it per GPU compute capability? For example, would an engine serialized on an RTX 2070 run on an RTX 2080?

For now I keep my serialized engines in directories like “gtx1060_cuda10.0_tensorrt6”.
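Concretely, I build those directory names with a small helper like the one below (including cuDNN and compute capability is my guess at what matters, which is exactly what I’d like confirmed):

```python
def engine_cache_key(gpu_name: str, compute_cap: str,
                     cuda: str, cudnn: str, tensorrt: str) -> str:
    """Build a directory name that pins every version an engine might depend on.

    Since engines are not portable, the key includes everything that could
    plausibly change the generated engine: GPU model (or compute capability),
    CUDA, cuDNN and TensorRT versions.
    """
    parts = [gpu_name.lower().replace(" ", ""),
             f"cc{compute_cap}",
             f"cuda{cuda}",
             f"cudnn{cudnn}",
             f"trt{tensorrt}"]
    return "_".join(parts)

# Extends the "gtx1060_cuda10.0_tensorrt6" scheme with cuDNN and compute capability:
print(engine_cache_key("GTX 1060", "6.1", "10.0", "7.6", "6.0"))
# → gtx1060_cc6.1_cuda10.0_cudnn7.6_trt6.0
```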

Thanks in advance.