What is the difference between dwDNN and TensorRT?

Please provide the following info (check/uncheck the boxes after creating this topic):
Software Version
DRIVE OS Linux 5.2.6
DRIVE OS Linux 5.2.0
[y] DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version
other

Target Operating System
[y] Linux
QNX
other

Hardware Platform
[y] NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)
other

SDK Manager Version
[y] 1.6.1.8175
1.6.0.8170
other

Host Machine Version
[y] native Ubuntu 18.04
other

I was reviewing sample_object_detector_tracker.cpp. I can see that a dwDNN instance is created to run model inference. Is this the same as the enqueueV2 API of TensorRT?

Do I need to use the DW APIs on Xavier instead of the TensorRT APIs?
Would using the DW APIs be more efficient and more helpful for constructing the pipeline?

About dwDNN: the model file needs to be generated by the tensorrt_optimization tool. Is this tool the same as the TensorRT tool, and is the model file the same as a TensorRT engine?

Dear @wang_chen2,
DW is a high-level library built on top of low-level libraries like CUDA, TensorRT, NvMedia, etc. The DW APIs make use of these low-level libraries to implement functionality efficiently on Xavier.
TensorRT has API calls which can build an optimized model and perform inference (a forward pass) on the network. If the network needs any preprocessing/postprocessing operations, you have to take care of them yourself. Using DW, on the other hand, gives you flexibility in performing preprocessing/postprocessing/inference. DW supports a couple of preprocessing operations which require only a few changes in the model JSON file to enable them.
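For reference, the raw TensorRT path looks roughly like the sketch below. This is a minimal sketch assuming the TensorRT version shipped with DRIVE OS 5.2 and a pre-built serialized engine; the file name "engine.bin" and the buffer sizes are placeholders, and all pre/postprocessing around enqueueV2 is entirely your responsibility:

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            printf("[TRT] %s\n", msg);
    }
};

int main()
{
    Logger logger;

    // Load a serialized engine (e.g. built with trtexec); "engine.bin" is a placeholder name.
    std::ifstream file("engine.bin", std::ios::binary);
    std::vector<char> engineData((std::istreambuf_iterator<char>(file)),
                                 std::istreambuf_iterator<char>());

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(engineData.data(), engineData.size(), nullptr);
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();

    // dInput/dOutput are CUDA device buffers you allocate and fill yourself;
    // any preprocessing (resize, normalization, ...) happens before this call.
    void* dInput  = nullptr;
    void* dOutput = nullptr;
    cudaMalloc(&dInput, 3 * 224 * 224 * sizeof(float)); // example input size
    cudaMalloc(&dOutput, 1000 * sizeof(float));         // example output size
    void* bindings[] = {dInput, dOutput};                // order per engine binding indices

    cudaStream_t stream;
    cudaStreamCreate(&stream);
    context->enqueueV2(bindings, stream, nullptr);       // asynchronous forward pass
    cudaStreamSynchronize(stream);
    // ... postprocessing of dOutput is also your responsibility ...
    // (cleanup of TensorRT objects and CUDA buffers omitted for brevity)
    return 0;
}
```

In other words, enqueueV2 and the dwDNN inference call play the same role (the forward pass), but DW also takes care of model loading and the optional preprocessing for you.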

The optimized models generated using trtexec cannot be directly loaded using the dwDNN APIs. If you want to integrate/use any model with DW, the optimized model has to be generated using the tensorrt_optimization tool in DW.
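To make the comparison concrete, the DriveWorks side looks roughly like the sketch below. It follows the pattern used in sample_object_detector_tracker.cpp, but the exact signatures of the dwDNN calls have changed between DriveWorks releases, so please treat it only as an illustration and check dw/dnn/DNN.h for DriveWorks 3.5:

```cpp
#include <dw/core/Context.h>
#include <dw/dnn/DNN.h>

// Sketch: load a model generated by the tensorrt_optimization tool and run one
// forward pass with dwDNN. `context`, `d_input` and `d_output` are assumed to be
// an already-initialized dwContextHandle_t and CUDA device buffers sized from
// dwDNN_getInputSize()/dwDNN_getOutputSize().
void runInference(dwContextHandle_t context, float32_t* d_input, float32_t* d_output)
{
    dwDNNHandle_t dnn = DW_NULL_HANDLE;
    dwDNN_initializeTensorRTFromFile(&dnn,
                                     "tensorRT_model.bin",  // output of tensorrt_optimization
                                     nullptr,               // no custom plugins
                                     DW_PROCESSOR_TYPE_GPU, // target processor
                                     context);

    const float32_t* inputs[1]  = {d_input};
    float32_t*       outputs[1] = {d_output};
    dwDNN_inferRaw(outputs, inputs, 1U /*batch size*/, dnn);

    dwDNN_release(dnn);
}
```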

OK, I get it.
Thanks for your reply.

Hi,
The optimized models generated using my own tools, like sampleINT8, will not be loadable with the dwDNN APIs.
However, neither trtexec nor tensorrt_optimization can generate INT8 models directly, because a calibration file is needed. Generating the calibration file needs calibration data (some images or similar), so I need to use sampleINT8 to get the calibration file.
So, first I need to use that tool to get the calibration file, and then use tensorrt_optimization to generate optimized models for dwDNN.
That is complex; maybe using the TensorRT APIs to run inference would be easier.
Or is there some other way to get the calibration file easily?

Dear @wang_chen2,
Yes. The TensorRT_Optimization tool requires a calibration file to generate an INT8 model, and DW does not have APIs to generate the calibration file. You need to use the TensorRT calibration APIs (as used in sampleINT8) to generate a calibration file.
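For the calibration step, the relevant piece of sampleINT8 is the calibrator class. Below is a trimmed-down sketch of such a calibrator; IInt8EntropyCalibrator2 is the TensorRT interface, while loadNextBatch() is a hypothetical helper you would implement to feed preprocessed calibration images. The file written by writeCalibrationCache() is the calibration cache you can then pass to the TensorRT_Optimization tool:

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Hypothetical helper: fills `dst` with one batch of preprocessed calibration images.
// Returns false when the calibration data set is exhausted.
bool loadNextBatch(float* dst, int batchSize);

class EntropyCalibrator : public nvinfer1::IInt8EntropyCalibrator2
{
public:
    EntropyCalibrator(int batchSize, size_t inputVolume, std::string cacheFile)
        : mBatchSize(batchSize), mInputVolume(inputVolume), mCacheFile(std::move(cacheFile))
    {
        cudaMalloc(&mDeviceInput, mBatchSize * mInputVolume * sizeof(float));
    }
    ~EntropyCalibrator() override { cudaFree(mDeviceInput); }

    int getBatchSize() const noexcept override { return mBatchSize; }

    bool getBatch(void* bindings[], const char* /*names*/[], int /*nbBindings*/) noexcept override
    {
        std::vector<float> batch(mBatchSize * mInputVolume);
        if (!loadNextBatch(batch.data(), mBatchSize))
            return false; // no more calibration data
        cudaMemcpy(mDeviceInput, batch.data(), batch.size() * sizeof(float),
                   cudaMemcpyHostToDevice);
        bindings[0] = mDeviceInput; // single input tensor assumed
        return true;
    }

    // TensorRT uses these to read/write the calibration cache; the written file is the
    // "calib file" that the TensorRT_Optimization tool expects.
    const void* readCalibrationCache(size_t& length) noexcept override
    {
        mCache.clear();
        std::ifstream in(mCacheFile, std::ios::binary);
        if (in)
            mCache.assign(std::istreambuf_iterator<char>(in), std::istreambuf_iterator<char>());
        length = mCache.size();
        return mCache.empty() ? nullptr : mCache.data();
    }

    void writeCalibrationCache(const void* cache, size_t length) noexcept override
    {
        std::ofstream out(mCacheFile, std::ios::binary);
        out.write(static_cast<const char*>(cache), length);
    }

private:
    int mBatchSize;
    size_t mInputVolume;
    std::string mCacheFile;
    void* mDeviceInput{nullptr};
    std::vector<char> mCache;
};
```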

I would recommend that you check https://developer.nvidia.com/video/integrating-dnn-inference-autonomous-vehicle-applications-nvidia-driveworks-sdk-0 (slide 34 onwards).

OK, thank you very much.