Hi, while learning the TensorRT C++ API, I couldn't find any guides on writing custom plugins and CUDA device functions (.cu files) other than the plugin code in the TensorRT samples and this article: https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#extending.
In particular, I'm interested in how closely user-defined functions should mirror the TensorFlow ones. For example, to write my own version of
tensorflow::ops::TensorArrayRead(const ::tensorflow::Scope & scope, ::tensorflow::Input handle, ::tensorflow::Input index, ::tensorflow::Input flow_in, DataType dtype)
do I need to completely reimplement the tensorflow::Scope class, and which arguments should my function take?
Can anyone point me to other resources or information that might help?