Hi,
I’ll get right to the point: can I use, in DriveWorks on the Drive PX2, a TensorRT model that was originally trained in Python with TensorFlow?
I have successfully converted my model to an engine on PX2 and can run inference with great accuracy using the TRT libraries. However, when I try to load the same binary via DW I get the following warning:
DNN: TensorRT model file has wrong magic number. Please ensure that the model has been created by TensorRT_optimization tool in DriveWorks. The model might be incompatible.
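Since the warning is specifically about a magic number, it sounds like DW expects its own header on optimized models. For what it’s worth, here is a minimal Python sketch I could use to compare the first bytes of the two binaries (file names are placeholders from my setup, and I don’t know the exact magic value DW expects):

```python
def file_magic(path, n=8):
    """Return the first n bytes of a file as a hex string.

    Useful for eyeballing whether two serialized models share a header.
    The actual magic value DriveWorks checks for isn't documented, so
    this only shows *that* the headers differ, not what they should be.
    """
    with open(path, "rb") as f:
        return f.read(n).hex()

# Example usage (placeholder file names):
# print(file_magic("model.engine"))  # engine serialized with the TRT C++ API
# print(file_magic("model.dnn"))     # model produced by TensorRT_optimization
```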
My code that works is an extension of the TRT sample sampleUffMNIST. After I convert the UFF, I serialize the engine to disk, load it from disk, and run inference. The predicted values match those of the original Python TF model on the PC.
My code that doesn’t work (and prints the warning above) is a repurposed version of the DW sample dnn/sample_object_detector. My goal is to recreate PilotNet, and I have found no other sample to use as a basis for that task.
The documentation implies that only models trained via Caffe are usable in DW.
So, is there any way to work around that?
According to the documentation, the TRT optimization tool needs the following inputs:
--prototxt: Absolute path to a Caffe deploy file.
--caffemodel: Absolute path to a Caffe model file that contains the weights.
--outputBlobs: Names of the output blobs, separated by commas.
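For reference, I would expect the invocation to look something like this (the paths, model names, and the output blob name "prob" are placeholders for my setup; the flags themselves are from the documentation quoted above, and the tool's exact name/location may differ on your install):

```shell
./TensorRT_optimization \
    --prototxt=/home/nvidia/models/pilotnet_deploy.prototxt \
    --caffemodel=/home/nvidia/models/pilotnet.caffemodel \
    --outputBlobs=prob
```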
The only artifact I’m certain about is the serialized TRT engine (created on the PX2 with the C++ API).
Is the prototxt related to any of the *.uff.pbtxt, *.uff, or *.pb files that are generated during the creation of the engine?
I also get a *-0.data-00000-of-00001 file; is that usable?
However, if the answer is negative and there is no way to use my current model, I’d like to request two things:
- Some tips on how to use the GMSL cameras with the TRT inference libraries that are already working for me. Can OpenCV work with the GMSL cameras?
- A link to a hands-on, in-depth tutorial on how to use Caffe. Ideally I’d like a resource that covers the whole process, from model design to inference on the PX2.