My understanding from my earlier thread is that the Object Detector classes in DriveWorks 1.2 cannot use TensorRT engines that contain custom plugin layers while taking GMSL camera input. However, it also seems that I should be able to implement this functionality myself using the TensorRT, CUDA, and NvMedia APIs.
I created a TensorRT runtime engine for Faster R-CNN using the code provided in the TensorRT 4 sample. Out of curiosity, I tried to load this engine with SampleObjectDetector and got an error about a “bad magic number”. I assume this means SampleObjectDetector only accepts engines produced by DriveWorks’ TensorRT Optimizer tool. Unfortunately, that tool does not appear to support creating TensorRT engines that require custom layer plugins.
After doing all this, I have a few questions:

1. Is the “bad magic number” error caused by the engine not having been created with the TensorRT Optimizer tool in DriveWorks 1.2?
2. Is the Optimizer tool the bottleneck that prevents me from using these TensorRT runtime engines with SampleObjectDetector?
3. Does the TensorRT Optimizer tool in DriveWorks 1.2 support converting UFF / Caffe models with custom plugin layers?
4. If the TensorRT Optimizer tool doesn’t support custom plugin layers, am I supposed to serialize the engine via the methods shown in TensorRT 4’s custom plugin layer samples?
5. Which exact version of TensorRT do I need on my workstation to create a usable TensorRT runtime engine for the Drive PX2 with NVIDIA DRIVE OS 5.0.10?
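For context on the serialization question, this is the round trip I have in mind, following the pattern in the TensorRT 4 custom plugin samples (a sketch only, not tested on the PX2; the plugin factory type is whatever implements the custom layers):

```cpp
#include "NvInfer.h"
#include <fstream>
#include <iostream>
#include <vector>

// Minimal logger, as in the TensorRT 4 samples.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity != Severity::kINFO) std::cerr << msg << std::endl;
    }
} gLogger;

// Write the serialized engine blob to disk.
void saveEngine(nvinfer1::ICudaEngine& engine, const char* path) {
    nvinfer1::IHostMemory* blob = engine.serialize();
    std::ofstream out(path, std::ios::binary);
    out.write(static_cast<const char*>(blob->data()), blob->size());
    blob->destroy();
}

// Reload the engine; the plugin factory recreates the custom layers
// during deserialization.
nvinfer1::ICudaEngine* loadEngine(const char* path,
                                  nvinfer1::IPluginFactory* factory) {
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    std::vector<char> blob(in.tellg());
    in.seekg(0);
    in.read(blob.data(), blob.size());
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    return runtime->deserializeCudaEngine(blob.data(), blob.size(), factory);
}
```

Is this the mechanism I should use on the PX2 side, or does SampleObjectDetector expect something else entirely?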
Thanks in advance for your help!