Hi,
I would like to use a Jetson Nano on a robot to do object detection. If I use one of the pre-built networks with the detectNet program from the jetson-inference repo, it takes several minutes to build the CUDA engine the first time the program runs. I don't want to build the CUDA engine while the Jetson is running on the robot on battery power. I read in the TensorRT docs that the .engine file can be stored and reused later during inference, but I am not sure how this is handled in the jetson-inference repo. Is it possible to do this?
Thanks
Best Regards
Suraj
Hi,
Yes, TensorRT has a serialization/deserialization API.
You can save the compiled TensorRT engine directly.
See jetson-inference/tensorNet.cpp at master · dusty-nv/jetson-inference · GitHub
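For reference, here is a minimal sketch of what the serialize/deserialize round trip looks like with the raw TensorRT C++ API. It is not the exact code from tensorNet.cpp, and small details (destroy() vs. delete, the deserializeCudaEngine() overload) vary between TensorRT versions:

```cpp
// Minimal sketch of the TensorRT serialize/deserialize round trip.
// Not the exact code from tensorNet.cpp -- details such as destroy() vs.
// delete and the deserializeCudaEngine() overload vary between TensorRT
// versions.
#include <NvInfer.h>

#include <fstream>
#include <iostream>
#include <vector>

using namespace nvinfer1;

class Logger : public ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

// After building the engine once (the slow step), write the serialized
// blob to disk.
void saveEngine(ICudaEngine* engine, const char* path)
{
    IHostMemory* blob = engine->serialize();

    std::ofstream file(path, std::ios::binary);
    file.write(static_cast<const char*>(blob->data()), blob->size());

    blob->destroy();   // newer TensorRT releases prefer `delete blob;`
}

// On later runs, skip the build entirely and deserialize the cached engine.
ICudaEngine* loadEngine(const char* path)
{
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file)
        return nullptr;

    const size_t size = file.tellg();
    file.seekg(0);

    std::vector<char> blob(size);
    file.read(blob.data(), size);

    // The runtime must stay alive as long as the engine; it is not released
    // here only to keep the sketch short.
    IRuntime* runtime = createInferRuntime(gLogger);
    return runtime->deserializeCudaEngine(blob.data(), size);
}
```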
Thanks.
Note that jetson-inference automatically saves the serialized TensorRT engine to disk the first time you run a model, and loads it from disk on subsequent runs. So I recommend running the app first while connected to an AC power source; then, when you are on battery, the TensorRT engine will already have been created.
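If it helps, a minimal sketch of that pre-build step using the jetson-inference C++ API follows. The exact detectNet::Create() overload and the default network differ slightly between versions of the repo, so treat this as an illustration rather than the definitive code:

```cpp
// Minimal sketch: build and cache the TensorRT engine once while on AC power.
// Assumes jetson-inference is installed; the exact detectNet::Create()
// overload can differ slightly between versions of the repo.
#include <jetson-inference/detectNet.h>

#include <cstdio>

int main()
{
    // The first call compiles the network with TensorRT and writes the
    // serialized .engine file next to the model files; later runs (on the
    // robot, on battery) find that file and skip the long build step.
    detectNet* net = detectNet::Create();   // loads the default pre-trained network

    if (!net)
    {
        printf("failed to load detectNet model\n");
        return 1;
    }

    printf("TensorRT engine built and cached for later runs\n");

    delete net;
    return 0;
}
```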
@AastaLLL and @dusty_nv, thanks a lot for the answers. I know how it works now :)