I’ve been working with CUDA for the last couple of years on desktops/servers.
I’ve been looking for quite some time for a TensorFlow-to-C++ demo/tutorial: taking a TensorFlow model, running inference on it in a C++ app, optimizing it, etc.
I’ve looked at the Hello AI demo, but as far as I can tell it doesn’t cover this.
Also, there’s something else I haven’t fully understood. Once I have a trained net in TF, must I convert it to UFF/ONNX and then somehow to NVIDIA’s TensorRT plan? Why so many error-prone steps? Isn’t there a simpler way to take a trained TF net and run inference with the TensorRT C++ API?
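For reference, the pipeline I’ve pieced together so far looks roughly like this (just a sketch of my understanding, not a verified recipe; `saved_model_dir` and the output file names are placeholders):

```shell
# Export the trained TF model as a SavedModel, then convert it to ONNX
# (tf2onnx is a separate pip package: pip install tf2onnx)
python -m tf2onnx.convert --saved-model saved_model_dir --output model.onnx

# Build a serialized TensorRT engine ("plan") from the ONNX file
# (trtexec ships with TensorRT)
trtexec --onnx=model.onnx --saveEngine=model.plan

# The .plan file would then be deserialized in the C++ app via the
# nvinfer1 runtime (createInferRuntime / deserializeCudaEngine)
```

So that’s at least three stages (SavedModel → ONNX → plan → C++ runtime), each of which can fail in its own way, which is what prompted the question above.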
Hope this makes sense :)