Confused about .engine / plugin

I am confused about the following…

If I create a .engine / plan file programmatically using TensorRT, does that .engine file include everything needed to run? I ask because, if custom layers were used to create the .engine, is all of that encapsulated in the serialized .engine file?

If I load the .engine in deepstream, does it need access to the same custom routines?

The whole custom-lib-path thing confuses me. If I only want to use a serialized .engine without building it as part of deepstream application, is that custom library needed?

@kbass

Yes, custom layers are encapsulated in the serialized engine file, but the plugin implementations themselves still have to be available at runtime: TensorRT's nvinfer_plugin module (or a library you register yourself) must provide a matching plugin for each custom layer so that TensorRT can deserialize and run the engine.
DeepStream will use the same custom layers when it loads the engine, because DeepStream uses exactly the same TensorRT C++ APIs to load it.
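As a minimal sketch of what that loading path looks like with the TensorRT C++ API (the file name `model.engine` is a placeholder; this assumes TensorRT and its headers are installed):

```cpp
#include <fstream>
#include <iostream>
#include <iterator>
#include <memory>
#include <vector>

#include "NvInfer.h"
#include "NvInferPlugin.h"  // initLibNvInferPlugins

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    // Register TensorRT's built-in plugins BEFORE deserializing; otherwise
    // deserialization of an engine that uses those custom layers fails
    // because the runtime cannot find a plugin creator by name/version.
    initLibNvInferPlugins(&gLogger, "");

    // Read the serialized engine from disk.
    std::ifstream file("model.engine", std::ios::binary);
    std::vector<char> data((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    // Deserialize: the engine file carries the plugin names and parameters,
    // and the runtime resolves them against the registered plugin creators.
    std::unique_ptr<nvinfer1::IRuntime> runtime{
        nvinfer1::createInferRuntime(gLogger)};
    std::unique_ptr<nvinfer1::ICudaEngine> engine{
        runtime->deserializeCudaEngine(data.data(), data.size())};

    std::cout << (engine ? "engine loaded" : "failed to load engine")
              << std::endl;
    return 0;
}
```

Plugins provided in your own shared library work the same way: once the library is loaded and its plugin creators are registered, deserialization finds them by name and version.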

Here is the documentation describing how to add a custom layer to an engine using the C++ or Python APIs:
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#extending

You can refer to the TensorRT OSS repository here https://github.com/NVIDIA/TensorRT/tree/master/plugin for more examples of custom layer (plugin) implementations for TensorRT.
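On the custom-lib-path question: if the engine only uses plugins that are already registered by TensorRT's standard nvinfer_plugin library, no custom library should be needed just to run a serialized .engine. custom-lib-path is for the case where the engine (or the output parsing) depends on your own shared library. A hedched sketch of the relevant part of a Gst-nvinfer config file (file names and paths are placeholders):

```
[property]
model-engine-file=model.engine
# Only needed if the engine uses plugins, or you use custom parsing
# functions, that live in your own shared library rather than in
# TensorRT's nvinfer_plugin:
custom-lib-path=/path/to/libcustom_impl.so
```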