_engine = std::shared_ptr<nvinfer1::ICudaEngine>(builder->buildEngineWithConfig(*network, *config), InferDeleter());
Building the engine always takes quite some time (around 3 minutes), because of the optimization step.
Is there some config to skip the optimization part during development, or at least speed it up at the cost of some runtime performance, for when I just want to try things out? Currently it's quite annoying to test anything, because after every code change I have to wait 3 minutes to run it.
I already tried to add:
But it does not seem to change anything.
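As a workaround I'm considering caching the serialized engine to disk, so the 3-minute optimization only runs once and later runs just deserialize the plan. This is a sketch of what I mean, not tested code: the cache file name and the `loadOrBuildEngine` helper are my own, and `InferDeleter` is the same deleter from the snippet above.

```cpp
#include <fstream>
#include <iterator>
#include <memory>
#include <string>
#include <vector>
#include <NvInfer.h>

// Sketch: build the engine once, serialize it to disk, and on later runs
// skip the slow buildEngineWithConfig() call by deserializing the plan.
std::shared_ptr<nvinfer1::ICudaEngine> loadOrBuildEngine(
    nvinfer1::IBuilder* builder,
    nvinfer1::INetworkDefinition* network,
    nvinfer1::IBuilderConfig* config,
    nvinfer1::IRuntime* runtime,
    const std::string& cachePath = "engine.cache")  // assumed cache file name
{
    // Fast path: a cached plan from a previous run.
    std::ifstream in(cachePath, std::ios::binary);
    if (in) {
        std::vector<char> blob((std::istreambuf_iterator<char>(in)),
                               std::istreambuf_iterator<char>());
        return std::shared_ptr<nvinfer1::ICudaEngine>(
            runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr),
            InferDeleter());
    }

    // Cache miss: pay the ~3 minute optimization once, then persist the plan.
    auto engine = std::shared_ptr<nvinfer1::ICudaEngine>(
        builder->buildEngineWithConfig(*network, *config), InferDeleter());
    if (engine) {
        std::shared_ptr<nvinfer1::IHostMemory> plan(engine->serialize(),
                                                    InferDeleter());
        std::ofstream out(cachePath, std::ios::binary);
        out.write(static_cast<const char*>(plan->data()), plan->size());
    }
    return engine;
}
```

My understanding is that a serialized plan is tied to the GPU and TensorRT version it was built with, and of course it won't reflect network changes, so I'd have to delete the cache file whenever the model itself changes. Is this the recommended approach, or is there a builder flag I'm missing?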
Using TensorRT v7.0.0