TensorFlow model implementation on PX2 (round two)

I read the topic on how to transfer a TensorFlow model to the PX2 and run inference with it.
The link is as follows:
https://devtalk.nvidia.com/default/topic/1030068/driveworks/-solved-tensorrt3-tensorflow-implementation-in-px2/

According to that post, there are four steps:

  1. TensorFlow model → UFF model (on host)
  2. Copy the UFF file to the PX2
  3. UFF model → TensorRT engine (on DPX2; refer to the sampleUffMNIST sample and the sketch after this list)
  4. Load the TensorRT engine with the C++ API for inference (on DPX2)
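
To make steps 3 and 4 concrete, here is a minimal sketch of the UFF-to-engine conversion in the style of sampleUffMNIST, assuming the TensorRT 3.x C++ API shipped with DriveWorks at the time. The file name model.uff, the tensor names "input"/"output", and the 3x224x224 dimensions are placeholders for your own graph (TensorRT 4 and later also take an extra input-order argument in registerInput):

```cpp
#include <fstream>
#include <iostream>
#include "NvInfer.h"
#include "NvUffParser.h"

using namespace nvinfer1;
using namespace nvuffparser;

// Minimal logger required by the TensorRT builder/runtime.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // Parse the UFF file produced on the host. Tensor names and
    // dimensions are placeholders -- use the ones from your own graph
    // (the output node you froze the model with).
    IUffParser* parser = createUffParser();
    parser->registerInput("input", DimsCHW(3, 224, 224));
    parser->registerOutput("output");

    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    if (!parser->parse("model.uff", *network, DataType::kFLOAT))
    {
        std::cerr << "Failed to parse UFF file" << std::endl;
        return 1;
    }

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 24);
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    // Serialize the engine to the kind of .bin file the DriveWorks
    // samples load.
    IHostMemory* serialized = engine->serialize();
    std::ofstream out("tensorrt_engine.bin", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());

    serialized->destroy();
    engine->destroy();
    network->destroy();
    builder->destroy();
    parser->destroy();
    return 0;
}
```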

However, I still have two questions.

1. After I copy a UFF file to the PX2, how can I use the DriveWorks C++ API to run inference?

2. In the DriveWorks object detection sample, I only see NVIDIA load a tensorrt_engine.bin for inference. How can I get such a .bin file from a UFF model? The TensorRT Optimization Tool only supports Caffe models.
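
For reference, the serialize() call at the end of the sketch above is what produces such a .bin file. On the DPX2, DriveWorks loads serialized engines through its DNN module (dwDNN); the equivalent flow with the plain TensorRT C++ runtime looks roughly like the following minimal sketch. The binding order and buffer sizes are placeholders; a real application would query them with getBindingIndex and getBindingDimensions:

```cpp
#include <fstream>
#include <iostream>
#include <vector>
#include <cuda_runtime_api.h>
#include "NvInfer.h"

using namespace nvinfer1;

class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // Read the serialized engine produced in step 3.
    std::ifstream file("tensorrt_engine.bin", std::ios::binary | std::ios::ate);
    size_t size = file.tellg();
    file.seekg(0);
    std::vector<char> blob(size);
    file.read(blob.data(), size);

    // Deserialize it and create an execution context.
    IRuntime* runtime = createInferRuntime(gLogger);
    ICudaEngine* engine = runtime->deserializeCudaEngine(blob.data(), size, nullptr);
    IExecutionContext* context = engine->createExecutionContext();

    // Placeholder sizes for a 3x224x224 input and a 1000-class output;
    // this assumes binding 0 is the input and binding 1 the output.
    std::vector<float> input(3 * 224 * 224), output(1000);
    void* buffers[2];
    cudaMalloc(&buffers[0], input.size() * sizeof(float));
    cudaMalloc(&buffers[1], output.size() * sizeof(float));

    // Copy the input in, run one batch synchronously, copy the output back.
    cudaMemcpy(buffers[0], input.data(), input.size() * sizeof(float),
               cudaMemcpyHostToDevice);
    context->execute(1, buffers);
    cudaMemcpy(output.data(), buffers[1], output.size() * sizeof(float),
               cudaMemcpyDeviceToHost);

    cudaFree(buffers[0]);
    cudaFree(buffers[1]);
    context->destroy();
    engine->destroy();
    runtime->destroy();
    return 0;
}
```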

Dear patrick-wu,

Could you please refer to the resources below for your topic? Thanks.

TensorRT Python API Documentation

Webinar: Optimizing Self-Driving DNNs on NVIDIA DRIVE PX with TensorRT

Dear SteveNV,

Thank you for your prompt reply.

I want to know whether it is possible to use OpenCV on the PX2 to grab images from the cameras and then use the UFF model to run prediction on those images.
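
This works in principle for a USB/V4L camera via cv::VideoCapture; the GMSL cameras on the PX2 are normally reached through the DriveWorks sensor API, whose frames you would wrap in a cv::Mat first. A minimal sketch of the capture-and-preprocess side, with placeholder 224x224 input dimensions and a simple 1/255 scaling (substitute your own normalization):

```cpp
#include <cstring>
#include <vector>
#include <opencv2/opencv.hpp>

// Convert an OpenCV BGR frame into the planar CHW float buffer a
// TensorRT engine built from a UFF model typically expects.
// The 1/255 scaling is a placeholder for your own normalization.
std::vector<float> preprocess(const cv::Mat& frame, int width, int height)
{
    cv::Mat resized, floatImg;
    cv::resize(frame, resized, cv::Size(width, height));
    resized.convertTo(floatImg, CV_32FC3, 1.0 / 255.0);

    // HWC (OpenCV layout) -> CHW (TensorRT layout).
    std::vector<float> chw(3 * width * height);
    std::vector<cv::Mat> channels(3);
    cv::split(floatImg, channels);
    for (int c = 0; c < 3; ++c)
        std::memcpy(chw.data() + c * width * height,
                    channels[c].ptr<float>(), width * height * sizeof(float));
    return chw;
}

int main()
{
    // VideoCapture(0) opens a USB/V4L camera; GMSL cameras go through
    // the DriveWorks sensor API and would be converted to cv::Mat here.
    cv::VideoCapture cap(0);
    cv::Mat frame;
    while (cap.read(frame))
    {
        std::vector<float> input = preprocess(frame, 224, 224);
        // Feed `input` to the TensorRT execution context exactly as in
        // the inference sketch earlier in the thread.
        (void)input;
    }
    return 0;
}
```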

Dear patrick-wu,

Here is a link that explains how to install OpenCV on the DPX2.

https://devtalk.nvidia.com/default/topic/1032172/faq/drivepx2-opencv3-cuda9-patch/

Could you please check the link for your topic?
If you encounter any issues, please let us know. Thanks.