I read the topic about how to transfer a TensorFlow model to the PX2 and run inference with it.
The link is as follows:
https://devtalk.nvidia.com/default/topic/1030068/driveworks/-solved-tensorrt3-tensorflow-implementation-in-px2/
According to the post, there are four steps:
1. TensorFlow model → UFF model (on the host)
2. Copy the UFF file to the PX2
3. UFF model → TensorRT engine (on the DPX2; refer to the sampleUffMNIST sample)
4. Load the TensorRT engine using the C++ API for inference (on the DPX2)
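Steps 3 and 4 can be sketched with the TensorRT C++ API roughly as in sampleUffMNIST. This is only an illustration, assuming a TensorRT 3.x-era API as shipped on the PX2; the file names, tensor names ("input", "output"), and dimensions are placeholders that must match your own frozen graph:

```cpp
#include <fstream>
#include <iostream>

#include "NvInfer.h"      // TensorRT core API (nvinfer1)
#include "NvUffParser.h"  // UFF parser (nvuffparser)

using namespace nvinfer1;
using namespace nvuffparser;

// Minimal logger required by the TensorRT builder.
class Logger : public ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity != Severity::kINFO) std::cerr << msg << "\n";
    }
};

int main() {
    Logger logger;

    IBuilder* builder = createInferBuilder(logger);
    INetworkDefinition* network = builder->createNetwork();
    IUffParser* parser = createUffParser();

    // Placeholder tensor names/dims -- use the actual input and output
    // node names of your frozen TensorFlow graph.
    parser->registerInput("input", DimsCHW(3, 300, 300));
    parser->registerOutput("output");
    if (!parser->parse("model.uff", *network, DataType::kFLOAT)) {
        std::cerr << "failed to parse UFF model\n";
        return 1;
    }

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 28);  // 256 MiB scratch space

    ICudaEngine* engine = builder->buildCudaEngine(*network);

    // Serializing the engine produces the kind of binary blob that can
    // later be written to disk and reloaded for inference.
    IHostMemory* blob = engine->serialize();
    std::ofstream out("tensorrt_engine.bin", std::ios::binary);
    out.write(static_cast<const char*>(blob->data()), blob->size());

    blob->destroy();
    engine->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
```

Because the engine is optimized for the GPU it is built on, this build step has to run on the DPX2 itself, not on the host.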
However, I still have two questions.
1. After I copy a UFF file to the PX2, how can I use the DriveWorks C++ API to run inference?
2. In the DriveWorks object-detection sample, I only see that NVIDIA loads a tensorrt_engine.bin for inference. How can I generate such a .bin file from a UFF model? The TensorRT Optimization Tool only supports Caffe models.
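For the inference side, a serialized engine file can be loaded back with the TensorRT C++ runtime. The following is a hedged sketch only (file name, buffer sizes, and binding order are assumptions; binding indices should really be looked up with getBindingIndex rather than hard-coded):

```cpp
#include <cuda_runtime_api.h>
#include <fstream>
#include <vector>

#include "NvInfer.h"

using namespace nvinfer1;

// Silent logger; a real application should surface warnings/errors.
class Logger : public ILogger {
    void log(Severity severity, const char* msg) override {}
};

int main() {
    Logger logger;

    // Read the serialized engine produced by the build step.
    std::ifstream file("tensorrt_engine.bin", std::ios::binary | std::ios::ate);
    const size_t size = file.tellg();
    file.seekg(0);
    std::vector<char> blob(size);
    file.read(blob.data(), size);

    IRuntime* runtime = createInferRuntime(logger);
    ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), size, nullptr);
    IExecutionContext* context = engine->createExecutionContext();

    // Placeholder volumes -- match your network's input/output shapes.
    const int inputSize = 3 * 300 * 300;
    const int outputSize = 1000;
    void* buffers[2];  // index 0: input binding, index 1: output binding
    cudaMalloc(&buffers[0], inputSize * sizeof(float));
    cudaMalloc(&buffers[1], outputSize * sizeof(float));

    std::vector<float> input(inputSize), output(outputSize);
    // ... fill `input` with preprocessed CHW image data ...
    cudaMemcpy(buffers[0], input.data(), inputSize * sizeof(float),
               cudaMemcpyHostToDevice);
    context->execute(/*batchSize=*/1, buffers);
    cudaMemcpy(output.data(), buffers[1], outputSize * sizeof(float),
               cudaMemcpyDeviceToHost);

    cudaFree(buffers[0]);
    cudaFree(buffers[1]);
    context->destroy();
    engine->destroy();
    runtime->destroy();
    return 0;
}
```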
Dear patrick-wu,
Could you please refer to the resources below for your topic? Thanks.
TensorRT Python API Documentation
Webinar : Optimizing Self-Driving DNNs on NVIDIA DRIVE PX with TensorRT
Dear SteveNV,
Thank you for your prompt reply.
I would like to know whether it is possible to use OpenCV on the PX2 to capture images from the cameras and then run inference on them with the UFF model.
Dear patrick-wu,
Here is a link that explains how to install OpenCV on the DPX2:
https://devtalk.nvidia.com/default/topic/1032172/faq/drivepx2-opencv3-cuda9-patch/
Could you please check the link for your topic?
If you encounter any issues, please let us know. Thanks.
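One practical detail when wiring OpenCV capture to a TensorRT engine: cv::VideoCapture yields interleaved HWC uint8 frames (BGR by default), while the UFF path typically expects a planar CHW float tensor. A minimal sketch of that conversion, written in plain C++ so it does not depend on OpenCV itself (the layout assumption matches cv::Mat's default):

```cpp
#include <cstdint>
#include <vector>

// Convert an interleaved HWC uint8 image (the layout of a cv::Mat frame)
// into a planar CHW float tensor scaled to [0, 1], the usual input layout
// for a TensorRT engine built from UFF. Channel order is preserved; if the
// network expects RGB but the frame is BGR, swap the channel index here.
std::vector<float> hwcToChw(const std::vector<std::uint8_t>& src,
                            int height, int width, int channels) {
    std::vector<float> dst(
        static_cast<std::size_t>(height) * width * channels);
    for (int c = 0; c < channels; ++c)
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                dst[(c * height + y) * width + x] =
                    src[(y * width + x) * channels + c] / 255.0f;
    return dst;
}
```

The resulting vector can be copied straight into the engine's input binding before calling execute.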