DriveWorks inference with TRT engine generated from UFF

Hi,

I’ll get right to the point: can I use, in DriveWorks on the Drive PX2, a TensorRT model that was originally trained in Python with TensorFlow?

I have successfully converted my model to an engine on the PX2 and can run inference with great accuracy using the TRT libraries. However, when I try to load the same binary via DW I get the following warning:

DNN: TensorRT model file has wrong magic number. Please ensure that the model has been created by TensorRT_optimization tool in DriveWorks. The model might be incompatible.

My code that works is an extension of the TRT sample sampleUffMNIST. After I convert the UFF, I serialize the engine to disk, load it from disk, and run inference. The predicted values are the same as with the original Python TF model on the PC.
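
For reference, the working (pure TRT) flow looks roughly like this; a simplified sketch, assuming engine is the ICudaEngine returned by the UFF builder and gLogger is an ILogger implementation as in the sample, with error handling omitted:

    #include <fstream>
    #include <iterator>
    #include <vector>
    #include "NvInfer.h"

    // Build the engine from the UFF as in sampleUffMNIST, then serialize it:
    nvinfer1::IHostMemory* serialized = engine->serialize();
    std::ofstream out("model.bin", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());
    out.close();

    // Later (possibly in another process), load it back and run inference:
    std::ifstream in("model.bin", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    nvinfer1::ICudaEngine* loadedEngine =
        runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
    nvinfer1::IExecutionContext* context = loadedEngine->createExecutionContext();
    // context->execute(...) then produces the same predictions as the TF model.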

My code that doesn’t work (and prints the warning above) is a re-purposed version of the DW sample dnn/sample_object_detector. My goal is to recreate PilotNet. I have found no other sample to use as a basis for that task.

The documentation implies that only models trained via Caffe are usable in DW.

So, is there any way to work around that?

According to the documentation I need the following input for the TRT optimization tool:

--prototxt: Absolute path to a Caffe deploy file.
--caffemodel: Absolute path to a Caffe model file that contains the weights.
--outputBlobs: Names of the output blobs, separated by commas.
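
Presumably the invocation would then look something like this (binary name, paths, and the output blob name are just my guesses):

    ./tensorRT_optimization --prototxt=/path/to/deploy.prototxt \
                            --caffemodel=/path/to/model.caffemodel \
                            --outputBlobs=prob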

I’m only certain of the blob, which in my case is a serialized TRT engine (created on the PX2 with the C++ API).
As for the prototxt, is it related to either the *.uff.pbtxt, *.uff, or *.pb files that are generated during the creation of the engine?
I also get a *-0.data-00000-of-00001 file; is that usable?

Nonetheless, if your reply is negative and there is no way to use my current model, I’d like to request two things:

  1. Some tips on how to use the GMSL cameras with the TRT inference libraries that are already working for me. Can OpenCV work with the GMSL cameras?
  2. A link to a hands-on, in-depth tutorial on how to use Caffe. Ideally I’d like a resource that outlines the whole process, from model design to inference on the PX2.

Dear georgios.varisteas,
DriveWorks 0.6 supports only Caffe models. We are including support for UFF models in the next Drive release.

Dear georgios.varisteas,

I am trying to implement something similar to your project right now. I would be really interested in the documentation you are referring to. Is it possible for you to give me access to this documentation (link, etc.)?
Thank you!

Hi rapha,

The documentation is part of the DriveWorks SDK and is not freely available; you need to have purchased a Drive PX2. If you have, just check the SDK installation on your host system for a PDF called NVIDIA_DriveWorks_DevGuide.pdf.

Hi georgios.varisteas,

thanks for the reply. I am also working with this documentation; I thought you might have some other reference :)

Dear SivaRamaKrishna, do you know the approximate time of the next release? Is it more likely to be released in the next weeks or in the next months?

Dear raphaDev,
The release schedule is not fixed yet. We will update you once it is finalized.

Ok, thank you!

Dear SivaRamaKrishna,

do you have news regarding the next release of DriveWorks?

Thanks!

Dear raphaDev,
The next release is scheduled for mid-September.

Dear SivaRamaKrishna,

thank you for your fast reply!

Dear raphaDev,
We have released a new PDK. Please check DevZone (https://developer.nvidia.com/nvidia-drive-downloads).

That is great news!

I tried the same as georgios.varisteas and got the same error “TensorRT model file has wrong magic number. Please ensure that the model has been created by TensorRT_optimization tool in DriveWorks. The model might be incompatible.”
I am currently using the latest DriveWorks version (1.2). Is TensorFlow still not compatible or is there something wrong with my code?

Dear stefan,
Have you created the model using the TensorRT_optimization tool provided by DW? Note that your network should not have any plugin layers in order to work in DW.

Dear SivaRamaKrishna,

no, I did it as explained here: https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/workflows/tf_to_tensorrt.html. However, I tried using the TensorRT_optimization tool you mentioned, but I get the error:

Parameter check failed at: ../builder/Network.cpp::addInput::377, condition: isValidDims(dims)
Segmentation fault (core dumped)

I tried networks with tensor dimensions “NCHW” and “NHWC”, but the error is the same. Do you have any idea what could be causing it?

What exactly do you mean by a plugin layer?

Thanks!

I was able to solve the segmentation fault and create an engine. DW is now able to read the engine!

Can you please tell me how you solved this problem?