Generate TensorRT from Caffemodel Error

Hi,
A problem occurred when I transferred my own Caffe model to a TensorRT model.
My caffemodel contains LSTM modules, and the conversion failed when I used the DriveWorks API to generate a TensorRT model from it.

The error is like:
Error parsing text-format ditcaffe.NetParameter: 15:19: Message type "ditcaffe.LayerParameter" has no field named "recurrent_param".
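
For context, the parser fails on a Caffe LSTM layer definition along these lines (a hypothetical fragment; the layer names, bottoms, and dimensions are illustrative, but `recurrent_param` is the field the error complains about):

```prototxt
layer {
  name: "lstm1"        # illustrative name
  type: "LSTM"
  bottom: "data"
  bottom: "clip"       # sequence-continuation indicator used by Caffe recurrent layers
  top: "lstm1"
  recurrent_param {    # this field is unknown to the ditcaffe parser
    num_output: 256
    weight_filler { type: "xavier" }
  }
}
```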

It seems that TensorRT cannot parse the LSTM layer in the Caffe model. However, the TensorRT developer guide states that the LSTM layer is supported in TensorRT 4.

The DriveWorks version on my Ubuntu 16.04 machine is 1.2.
Please let me know if you need any other info.

Thanks
Issac

Dear wh120355,
We are looking into the issue. We will update you.

Dear wh120355,
Unfortunately, we do not currently support the LSTM layer in Caffe models.

I also have the same problem parsing the LSTM layer of a Caffe model.
Is there any solution for this unsupported layer? Should I wait until you support it, or implement it myself?

From the TensorRT perspective, this can be done by implementing a custom layer. You can take a look at [url]https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#add_custom_layer[/url].

But for DriveWorks, we focus on the models we develop. Enabling a plugin layer would require compiling the custom code into a library, which is not supported in the current DriveWorks release.