How to send a dwImageCUDA to a TensorRT model?

By working through the samples I can get frames from the camera as dwImageCUDA or dwImageGL (RGBA format). But I found that the TensorRT samples simply read a .pgm picture from disk and feed it to the model as input. How can I deal with dwImage to bridge the two kinds of samples, so that I can send a dwImage to a TensorRT model? Thanks!

Dear laizheyuan,

Could you please refer to the object_detector_tracker sample in /usr/local/driveworks-2.0/samples/src/dnn/sample_object_detector_tracker on your host PC for this topic?
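
In that sample, the path from camera frame to network input looks roughly like the sketch below (condensed, error checking omitted; the argument lists are paraphrased and differ slightly between DriveWorks releases, so please check the sample source for the exact signatures; dnnInput, dnnOutputs, rgbaFrame, roi, stream and ctx are assumed to be set up elsewhere):

    // Load the TensorRT engine (.bin) produced by the optimization tool.
    dwDNNHandle_t dnn = DW_NULL_HANDLE;
    dwDNN_initializeTensorRTFromFileNew(&dnn, "tensorRT_model.bin",
                                        nullptr /*plugins*/, DW_PROCESSOR_TYPE_GPU, ctx);

    // Build a data conditioner matching the network input. It converts an RGBA
    // camera frame into the planar float blob the network expects (resize/crop,
    // mean subtraction, channel reordering), entirely on the GPU.
    dwBlobSize inputDims{};
    dwDNN_getInputSize(&inputDims, 0U, dnn);
    dwDNNMetaData meta{};
    dwDNN_getMetaData(&meta, dnn);
    dwDataConditionerHandle_t conditioner = DW_NULL_HANDLE;
    dwDataConditioner_initialize(&conditioner, &inputDims, 1U,
                                 &meta.dataConditionerParams, stream, ctx);

    // Per frame: dwImageCUDA in, device-side float blob out, then inference.
    dwDataConditioner_prepareDataRaw(dnnInput, &rgbaFrame, 1U, &roi,
                                     cudaAddressModeClamp, conditioner);
    dwDNN_inferRaw(dnnOutputs, &dnnInput, 1U, dnn);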

Thanks for your reply! Yes, I have referred to the object_detector_tracker sample. I see that it uses the function dwDNN_initializeTensorRTFromFileNew to initialize a *.bin TensorRT model.

My question is: if I want to use a UFF or ONNX model, I can't directly use dwDNN_initializeTensorRTFromFileNew. I can refer to the TensorRT documentation and samples instead. But then how can I send a dwImageCUDA in RGBA_UINT8 format from the camera to the TensorRT input? Does TensorRT support dwImage?
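
As far as I understand, TensorRT itself only consumes raw CUDA device pointers, so I imagine I would have to pull dptr/pitch out of the dwImageCUDA, convert the interleaved RGBA uint8 data to planar float with a small kernel, and bind that buffer to the execution context, along the lines of the sketch below (my own illustration, not DriveWorks API):

    #include <cstdint>
    #include <cuda_runtime.h>
    #include <NvInfer.h>
    #include <dw/image/Image.h>

    // Pitch-linear RGBA uint8 -> planar float32 RGB scaled to [0,1].
    // Adjust scaling, mean subtraction and channel order to the network.
    __global__ void rgbaToPlanarFloat(const uint8_t* rgba, size_t pitch,
                                      float* dst, int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height)
            return;
        const uint8_t* px = rgba + y * pitch + 4 * x; // 4 bytes per RGBA pixel
        int plane = width * height;
        dst[0 * plane + y * width + x] = px[0] / 255.f; // R
        dst[1 * plane + y * width + x] = px[1] / 255.f; // G
        dst[2 * plane + y * width + x] = px[2] / 255.f; // B
    }

    // Assumes the network input size equals the frame size; in practice one
    // also resizes/crops, which is exactly what dwDataConditioner does for you.
    void feedTensorRT(const dwImageCUDA& frame, float* dInput /*binding 0*/,
                      nvinfer1::IExecutionContext* trt, void** bindings,
                      cudaStream_t stream)
    {
        int w = static_cast<int>(frame.prop.width);
        int h = static_cast<int>(frame.prop.height);
        dim3 block(16, 16);
        dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
        rgbaToPlanarFloat<<<grid, block, 0, stream>>>(
            static_cast<const uint8_t*>(frame.dptr[0]), frame.pitch[0],
            dInput, w, h);
        trt->enqueue(1, bindings, stream, nullptr); // TensorRT 5.x-style call
    }

Is that the right direction?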

Dear laizheyuan,

Could you please check the TensorRT Optimizer Tool README file first?
The file is in /usr/local/driveworks-2.0/tools/dnn on your host PC.
Using the TensorRT optimization tool, you can create a .bin file from ONNX and UFF files.
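
For example, an ONNX invocation looks like the line below (the exact flags are listed in the README; --out naming the generated engine file is from memory, so please verify it against your local copy):

    ./tensorRT_optimization --modelType=onnx --onnxFile=yolov3.onnx --out=yolov3.bin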

Thanks, I found the TensorRT Optimizer Tool README file and tried to transform an ONNX model into a TensorRT .bin.
However, when I use my own model in place of the *.bin model in sample_object_detector_tracker, it doesn't work and throws the message: blobIndex is larger than output binding count.
By the way, my own *.bin model is YOLOv3. So does the API in sample_object_detector_tracker place limitations on the model? What happens if I use a different model that doesn't satisfy them?

Dear laizheyuan,

Can you tell us what arguments you used with the TensorRT optimization tool?
Would you like to try adding the following to your arguments? Thanks.
--outputLayers=coverage,bboxes
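
For background: the sample's detector is written against a network with exactly two output blobs, coverage and bboxes, and it looks up those names to index the engine's output bindings. A YOLOv3 engine exposes different output names and counts, which is why the blobIndex lookup runs past the binding table. You can inspect what your .bin actually exposes with something like the sketch below (DriveWorks DNN API names; argument order paraphrased, so check the headers):

    // List the output blobs of the loaded engine before hard-coding indices.
    uint32_t numOutputs = 0U;
    dwDNN_getOutputBlobCount(&numOutputs, dnn);
    for (uint32_t i = 0U; i < numOutputs; ++i)
    {
        dwBlobSize blob{};
        dwDNN_getOutputSize(&blob, i, dnn); // fails for an out-of-range index
        printf("output %u: %ux%ux%ux%u\n", i,
               blob.batchsize, blob.channels, blob.height, blob.width);
    }

    // The sample instead assumes these two names exist, which YOLOv3 does not provide:
    uint32_t coverageIdx = 0U;
    uint32_t bboxesIdx   = 0U;
    dwDNN_getOutputIndex(&coverageIdx, "coverage", dnn);
    dwDNN_getOutputIndex(&bboxesIdx, "bboxes", dnn);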

I used the following command: ./tensorRT_optimization --modelType=onnx --onnxFile=yolov3.onnx
(the README says it is not necessary to add --outputBlobs=bboxes,coverage when using ONNX)

Dear laizheyuan,

Could you please try re-running the tool with that argument, then merge your code and retry? Thanks.

Well, I have tried it, but it doesn't work. Thanks anyway! Another question: if I have a model built on a higher ONNX version, for example one with ir_version 0.0.5, while ./tensorRT_optimization only supports ONNX ir_version 0.0.3, do I have to use the old ONNX version, or is there a newer version of tensorRT_optimization?

Dear laizheyuan,
You need to use the ONNX IR version that tensorRT_optimization supports.