How to use a custom deep learning network on Drive Xavier?

Hi,
I am trying to run inference with my own trained CNN detection model (.uff or .onnx format) on DRIVE Xavier with Sekonix GMSL cameras.

The only related documents I can find are located at driveworks-2.0/samples/src/sensors/camera, driveworks-2.0/samples/src/dnn/sample_object_detector_tracker and tensorrt/samples/*.
My original idea is that the dwImage captured from the camera should be the input to my network model, and that TensorRT should be used to parse the .uff or .onnx file.

However, sample_object_detector_tracker.cpp uses the function dwDNN_initializeTensorRTFromFileNew to read the network model in a .bin format, which is different from the model files used in tensorrt/samples/* (mostly .caffemodel, .pb, .uff, .onnx, etc.). In other words, sample_object_detector_tracker.cpp does not parse a common network format, so it cannot be applied directly to my custom model.

Moreover, sample_object_detector_tracker.cpp calls dwImage_getCUDA and dwDataConditioner_prepareData to obtain the camera input as a dwImageCUDA, which is then fed into dwDNN_infer. The samples under tensorrt/samples/*, on the other hand, only run inference on image lists, not on camera input.

The core question is: how can the camera stream (dwImage type) be passed as an input to the TensorRT inference call?
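For reference, here is a rough standalone sketch of the TensorRT side as I understand it (assuming the TensorRT 5.x C++ API that ships with DriveWorks 2.0; the engine file name, FP32 input, and batch size of 1 are placeholders). It shows that TensorRT inference only sees raw CUDA device buffers, which is why I am unsure how the dwImage from the camera is supposed to end up in the input buffer:
[code]
// Sketch: running a deserialized TensorRT engine on raw CUDA device buffers.
// Assumes TensorRT 5.x (as shipped with DriveWorks 2.0); "yolov3.bin",
// FP32 bindings, and batch size 1 are placeholders.
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iostream>
#include <vector>

namespace {
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
} gLogger;

// Number of elements described by a binding's dimensions.
size_t volume(const nvinfer1::Dims& d)
{
    size_t v = 1;
    for (int i = 0; i < d.nbDims; ++i)
        v *= static_cast<size_t>(d.d[i]);
    return v;
}
} // namespace

int main()
{
    // 1) Load the serialized engine (e.g. produced by tensorRT_optimization).
    std::ifstream file("yolov3.bin", std::ios::binary | std::ios::ate);
    if (!file) { std::cerr << "cannot open engine file\n"; return 1; }
    std::vector<char> blob(static_cast<size_t>(file.tellg()));
    file.seekg(0);
    file.read(blob.data(), static_cast<std::streamsize>(blob.size()));

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
    if (!engine) { std::cerr << "deserialization failed\n"; return 1; }
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();

    // 2) Allocate one device buffer per binding. The input binding is where
    //    the preprocessed camera frame would have to end up; in DriveWorks,
    //    dwDataConditioner_prepareData fills such a buffer from a dwImageCUDA.
    std::vector<void*> bindings(engine->getNbBindings(), nullptr);
    for (int i = 0; i < engine->getNbBindings(); ++i)
    {
        const size_t bytes = volume(engine->getBindingDimensions(i)) * sizeof(float);
        cudaMalloc(&bindings[i], bytes);
        // A real application would copy/convert the camera image into the
        // input binding here (cudaMemcpy or a preprocessing kernel).
    }

    // 3) Run inference. TensorRT only sees device pointers; it does not care
    //    whether the data originally came from a camera or from a file.
    if (!context->execute(1 /*batch*/, bindings.data()))
        std::cerr << "inference failed\n";

    for (void* b : bindings) cudaFree(b);
    context->destroy();
    engine->destroy();
    runtime->destroy();
    return 0;
}
[/code]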

Thank you in advance.

Dear huchenyang,

The TensorRT optimization tool included in DriveWorks enables optimization of a given Caffe, UFF, or ONNX model using TensorRT.
Could you please refer to the README-tensorRT_optimization.md file in /usr/local/driveworks-2.0/tools/dnn on your host PC?

Could you also refer to the webinar session "NVIDIA Webinars — Optimizing DNN Inference Using CUDA and TensorRT on NVIDIA DRIVE AGX": [url]https://devtalk.nvidia.com/default/topic/1064456/general/nvidia-webinars-mdash-optimizing-dnn-inference-using-cuda-and-tensorrt-on-nvidia-drive-agx/[/url]
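Conceptually, the tool parses the given model, builds a TensorRT engine, and serializes it to the .bin file that dwDNN_initializeTensorRTFromFileNew later loads. The following is only a rough equivalent sketched with the plain TensorRT 5.x C++ API (file names are placeholders); in practice, please use the tensorRT_optimization tool as described in the README:
[code]
// Sketch of what the optimization step does conceptually: parse an ONNX
// model, build a TensorRT engine for the GPU the program runs on, and
// serialize it to disk. Assumes the TensorRT 5.x C++ API shipped with
// DriveWorks 2.0; "yolov3.onnx" and "yolov3.bin" are placeholders.
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <fstream>
#include <iostream>
#include <vector>

namespace {
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
} gLogger;
} // namespace

int main()
{
    // Read the ONNX model (e.g. produced by yolov3_to_onnx.py).
    std::ifstream in("yolov3.onnx", std::ios::binary | std::ios::ate);
    if (!in) { std::cerr << "cannot open yolov3.onnx\n"; return 1; }
    std::vector<char> onnx(static_cast<size_t>(in.tellg()));
    in.seekg(0);
    in.read(onnx.data(), static_cast<std::streamsize>(onnx.size()));

    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();
    nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, gLogger);

    // Translate the ONNX graph into a TensorRT network definition.
    if (!parser->parse(onnx.data(), onnx.size()))
    {
        std::cerr << "failed to parse ONNX model\n";
        return 1;
    }

    // Build an engine optimized for this specific GPU and TensorRT version.
    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 30); // 1 GiB of scratch space
    nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);
    if (!engine) { std::cerr << "engine build failed\n"; return 1; }

    // The serialized blob is what ends up in the .bin file that
    // dwDNN_initializeTensorRTFromFileNew later deserializes.
    nvinfer1::IHostMemory* serialized = engine->serialize();
    std::ofstream out("yolov3.bin", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()),
              static_cast<std::streamsize>(serialized->size()));

    serialized->destroy();
    engine->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
[/code]
Note that a serialized TensorRT engine is tied to the GPU and TensorRT version it was built with, so the .bin used on the target has to be generated accordingly rather than reused from a different setup.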

Thanks a lot! That’s exactly what I want.
However, when I used the yolov3.bin model as a substitute input in sample_object_detector_tracker.cpp, I got this error log:
"
sample_object_detector_tracker: engine.cpp:868: bool nvinfer1::rt::Engine::deserialize(const void*, std::size_t, nvinfer1::IGpuAllocator&, nvinfer1::IPluginFactory*): Assertion `size >= bsize && "Mismatch between allocated memory size and expected size of serialized engine."' failed.
"

yolov3.bin was converted from yolov3.onnx (generated with yolov3_to_onnx.py, provided at tensorrt/samples/python/yolov3_onnx) by driveworks-2.0/tools/dnn/tensorRT_optimization.

It seems that dwDNN_initializeTensorRTFromFileNew does not read the network correctly, or that the YOLOv3-to-ONNX conversion step went wrong. Can you tell me what the problem is?
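In case it helps narrow this down, the failing step can be reproduced outside DriveWorks with a small deserialization check (sketch only, using the TensorRT 5.x C++ API; the file path is a placeholder). If this also fails on the target, the problem is with the .bin itself rather than with the DriveWorks sample:
[code]
// Sketch: isolate the failing step by deserializing yolov3.bin directly
// with the TensorRT 5.x C++ API on the target. The file path is a placeholder.
#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <vector>

namespace {
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override { std::cerr << msg << std::endl; }
} gLogger;
} // namespace

int main()
{
    std::ifstream file("yolov3.bin", std::ios::binary | std::ios::ate);
    if (!file) { std::cerr << "cannot open yolov3.bin\n"; return 1; }
    std::vector<char> blob(static_cast<size_t>(file.tellg()));
    file.seekg(0);
    file.read(blob.data(), static_cast<std::streamsize>(blob.size()));
    std::cout << "engine file size: " << blob.size() << " bytes\n";

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
    std::cout << (engine ? "deserialization OK\n" : "deserialization FAILED\n");

    if (engine) engine->destroy();
    runtime->destroy();
    return 0;
}
[/code]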

Dear huchenyang,

Could you please make sure that the path used for the serialized model is valid? Thanks.
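For example, a quick standalone check (sketch; the path is a placeholder) that the file passed to the sample exists and has the same size as the .bin produced by tensorRT_optimization; an incomplete or wrongly located file could lead to this kind of size-mismatch assertion:
[code]
// Sketch: verify that the engine file handed to the sample exists and is
// complete. The path is a placeholder; compare the reported size with the
// size of the .bin as produced by tensorRT_optimization.
#include <fstream>
#include <iostream>

int main()
{
    const char* path = "yolov3.bin"; // placeholder: path given to the sample
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file)
    {
        std::cerr << "file not found or not readable: " << path << "\n";
        return 1;
    }
    std::cout << path << " is " << file.tellg() << " bytes\n";
    // A smaller size than the generated .bin points to an incomplete copy
    // or a wrong path.
    return 0;
}
[/code]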