I read a topic about how to transfer a TensorFlow model to the PX2 and run inference with it.
The link is as follows:
According to the post, there are four steps:
- TensorFlow model -> UFF model (on the host)
- Copy the UFF file to the PX2
- UFF model -> TensorRT engine (on the DPX2; refer to the sampleUffMNIST sample, and see my sketch of this step right after this list)
- Load the TensorRT engine using the C++ API for inference (on the DPX2)
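For reference, here is my current understanding of step 3, pieced together from the sampleUffMNIST sample. This is only a sketch against the TensorRT 3.x C++ API (I believe the registerInput signature changed in later versions), and the tensor names "input"/"output" and the DimsCHW dimensions are placeholders for my own network:

```cpp
#include <iostream>
#include "NvInfer.h"
#include "NvUffParser.h"

using namespace nvinfer1;
using namespace nvuffparser;

// Minimal logger required by the TensorRT builder/runtime.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cerr << msg << std::endl;
    }
} gLogger;

ICudaEngine* buildEngineFromUff(const char* uffFile)
{
    // Parse the UFF file into a TensorRT network definition.
    IUffParser* parser = createUffParser();
    parser->registerInput("input", DimsCHW(3, 480, 960)); // placeholder name/dims
    parser->registerOutput("output");                     // placeholder name

    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    if (!parser->parse(uffFile, *network, DataType::kFLOAT))
        return nullptr;

    // Build the engine optimized for the DPX2 GPU.
    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20);
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    network->destroy();
    builder->destroy();
    parser->destroy();
    return engine;
}
```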
However, I still have two questions.
1. After I copy the UFF file to the PX2, how can I use the DriveWorks C++ API to run inference? (See the first sketch below for my understanding of the plain TensorRT side.)
2. In the DriveWorks object detection sample, I only see that NVIDIA loads a tensorrt_engine.bin to do inference. How can I produce such a .bin file from a UFF model? The TensorRT Optimization Tool only supports Caffe models. (See the second sketch below for my guess.)
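To make question 1 concrete: with the plain TensorRT C++ runtime, I assume inference from the serialized engine file would look roughly like the sketch below (the binding order, buffer sizes, and engine file name are placeholders). What I cannot find is the DriveWorks equivalent of this:

```cpp
#include <fstream>
#include <iostream>
#include <vector>
#include <cuda_runtime.h>
#include "NvInfer.h"

using namespace nvinfer1;

class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cerr << msg << std::endl;
    }
} gLogger;

void inferFromEngineFile(const char* engineFile, const float* inputHost,
                         float* outputHost, size_t inputBytes, size_t outputBytes)
{
    // Read the serialized engine (the tensorrt_engine.bin-style file) from disk.
    std::ifstream file(engineFile, std::ios::binary | std::ios::ate);
    size_t size = file.tellg();
    file.seekg(0);
    std::vector<char> blob(size);
    file.read(blob.data(), size);

    // Deserialize the engine and create an execution context.
    IRuntime* runtime = createInferRuntime(gLogger);
    ICudaEngine* engine = runtime->deserializeCudaEngine(blob.data(), size, nullptr);
    IExecutionContext* context = engine->createExecutionContext();

    // Device buffers ordered by binding index (assuming input = 0, output = 1).
    void* buffers[2];
    cudaMalloc(&buffers[0], inputBytes);
    cudaMalloc(&buffers[1], outputBytes);

    cudaMemcpy(buffers[0], inputHost, inputBytes, cudaMemcpyHostToDevice);
    context->execute(1, buffers); // batch size 1
    cudaMemcpy(outputHost, buffers[1], outputBytes, cudaMemcpyDeviceToHost);

    cudaFree(buffers[0]);
    cudaFree(buffers[1]);
    context->destroy();
    engine->destroy();
    runtime->destroy();
}
```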
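And for question 2, my guess is that the .bin file is just a serialized engine written to disk, i.e. something like the following after buildCudaEngine() succeeds. Is this actually how the sample's tensorrt_engine.bin is produced?

```cpp
#include <fstream>
#include "NvInfer.h"

// Serialize a built engine to a file that can later be deserialized for inference.
// Assumption on my part: the sample's tensorrt_engine.bin is produced this way.
void saveEngine(nvinfer1::ICudaEngine& engine, const char* path)
{
    nvinfer1::IHostMemory* serialized = engine.serialize();
    std::ofstream out(path, std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());
    serialized->destroy();
}
```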