Hello,
I have a network layer that can run on the DLA (validated with config->canRunOnDLA(layer)) and that has been INT8-quantized. However, when I assign it to the DLA with config->setDeviceType(layer, DeviceType::kDLA), building the engine still fails with a "DLA validation failed" error.
Here is my code for this part:
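(The original snippet was not captured in this thread; as context, here is a hedged sketch of the typical per-layer DLA setup being described. The `builder`, `config`, `network`, and `calibrator` objects are assumed to have been created elsewhere.)

```cpp
// Sketch of per-layer DLA placement with the TensorRT C++ API.
// Assumes builder/config/network/calibrator already exist.
config->setFlag(nvinfer1::BuilderFlag::kINT8);          // enable INT8 mode
config->setFlag(nvinfer1::BuilderFlag::kGPU_FALLBACK);  // unsupported layers fall back to GPU
config->setDLACore(0);                                  // select a DLA core
config->setInt8Calibrator(calibrator);                  // must be a DLA-compatible calibrator

for (int32_t i = 0; i < network->getNbLayers(); ++i) {
    nvinfer1::ILayer* layer = network->getLayer(i);
    if (config->canRunOnDLA(layer)) {
        config->setDeviceType(layer, nvinfer1::DeviceType::kDLA);
    }
}
```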
If these suggestions don’t help and you want to report an issue to us, please attach the model, the command/steps, and the customized app (if any) so we can reproduce it locally.
How can I get verbose log output when my source code fails to build the engine?
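For reference, verbose output from an API-based build can be obtained by passing a logger that does not filter out Severity::kVERBOSE messages. A minimal sketch (the class name VerboseLogger is invented here):

```cpp
#include <iostream>
#include "NvInfer.h"

// Logger that prints every message, including kVERBOSE, to stderr.
class VerboseLogger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        std::cerr << msg << std::endl;  // no severity filtering
    }
};

// Usage: pass the logger when creating the builder so build-time
// messages (including DLA validation details) are printed:
// VerboseLogger logger;
// auto* builder = nvinfer1::createInferBuilder(logger);
```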
The attachment is the verbose log output from trtexec rather than from my source code: trtexec_yolov5.log (315.8 KB)
But I want only some network layers to run on the DLA, not the whole model, so I can’t use trtexec; I need to write the corresponding code with the TensorRT API. That is where I run into the problem described above.
I finally found the cause: after I changed my header file so the calibrator inherits from the DLA-compatible calibrator class, the header was not listed in the target file’s dependencies. The target was therefore never recompiled after the header change, and it kept using the old calibrator, which is not applicable to DLA.
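For context: DLA requires the entropy calibrator (IInt8EntropyCalibrator2); the legacy calibrator is not supported there. A hedged sketch of what the changed header might look like (the class name DlaCalibrator and the stub bodies are invented for illustration):

```cpp
#include "NvInfer.h"

// Inheriting from IInt8EntropyCalibrator2 instead of the legacy
// calibrator is the kind of header change described above.
class DlaCalibrator : public nvinfer1::IInt8EntropyCalibrator2 {
public:
    int32_t getBatchSize() const noexcept override { return 1; }

    bool getBatch(void* bindings[], const char* names[],
                  int32_t nbBindings) noexcept override {
        // Fill bindings with one batch of calibration data here;
        // return false once the calibration set is exhausted.
        return false;
    }

    const void* readCalibrationCache(size_t& length) noexcept override {
        length = 0;
        return nullptr;  // no cache: run full calibration
    }

    void writeCalibrationCache(const void* cache, size_t length) noexcept override {
        // Optionally persist the calibration cache to disk.
    }
};
```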
My solution: add the header file to the makefile (or CMake) dependencies and recompile, so the build actually uses the Calibrator for DLA.
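A hedged illustration of that fix in a makefile — the file names (build_engine.cpp, calibrator.h) are placeholders; the point is that the header must appear in the target's prerequisite list so editing it triggers a recompile:

```makefile
# build_engine.o now depends on calibrator.h, so a header edit
# forces recompilation instead of reusing a stale object file.
build_engine.o: build_engine.cpp calibrator.h
	$(CXX) $(CXXFLAGS) -c build_engine.cpp -o build_engine.o
```

Alternatively, compiler-generated dependency files (e.g. gcc/clang's -MMD -MP flags) can track header dependencies automatically.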