I’ve followed this guide https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#deploy_embed to build a model and convert it from .pb to .uff to .plan. Now that I have the .plan file, I’d like to deploy it on my Drive PX2. Is there any way to build an executable that does this without external dependencies? TensorRT is not installed on the PX2.
The code I’ve written so far is in Python, but I’m not opposed to using C++ to load the .plan file and run inference.