I’m planning to buy a Jetson Nano, but I want to make sure the workflow is ironed out first.
I did some research on converting TensorFlow models to run on the Nano and found Accelerating Inference In TF-TRT User Guide :: NVIDIA Deep Learning Frameworks Documentation and How to run TensorFlow Object Detection model on Jetson Nano | DLology .
In both articles, a saved model / .pb file is used to create a TensorRT inference graph.
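For concreteness, here is the conversion step as I understood it from the articles. This is only a minimal sketch, assuming TensorFlow 1.14 with TensorRT support (older versions import from tf.contrib.tensorrt instead); the file name and output node names are placeholders for an SSD-style detection model:

```python
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Load the frozen graph (.pb) exported from the trained model.
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:  # placeholder path
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

# Convert supported subgraphs into TensorRT engines; unsupported ops
# are left in place and keep running through TensorFlow.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=['detection_boxes', 'detection_scores',
             'detection_classes', 'num_detections'],  # placeholder node names
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP16')  # FP16 seems like the right choice for the Nano's GPU

# Save the optimized graph for deployment on the Nano.
with tf.gfile.GFile('trt_graph.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())
```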
So, here are my questions:
- Is that all I need to do before loading the model on the Jetson Nano?
- If my model was successfully converted into a TensorRT inference graph, does that mean I should be able to run it on the Nano without any issues?
- To load and run inference on the Nano, do I just boot it, open a Jupyter notebook, start a session, load the graph, and run it (roughly as in the sketch after this list)?
- If my model has unsupported layers, at what point am I notified: when creating the inference graph, or only when running inference on the Nano?
- If I do have unsupported layers, is there a way to split execution between the NVIDIA GPU and the ARM CPU?
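For reference, this is roughly the load-and-infer flow I have in mind for the third question; again just a sketch assuming TF 1.x, with the tensor names being placeholders carried over from the conversion sketch above:

```python
import numpy as np
import tensorflow as tf

# Load the TF-TRT optimized graph saved during conversion.
with tf.gfile.GFile('trt_graph.pb', 'rb') as f:  # placeholder path
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=graph) as sess:
    # Dummy uint8 image batch; a real app would feed camera frames.
    image = np.zeros((1, 300, 300, 3), dtype=np.uint8)
    boxes, scores = sess.run(
        ['detection_boxes:0', 'detection_scores:0'],  # placeholder tensor names
        feed_dict={'image_tensor:0': image})          # placeholder input name
```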
Thanks for any help.