I’m upgrading to JetPack 4.3, and as part of that I have to build a new TensorRT engine file for a custom Tiny YOLOv3 network. I’m using the CUDA, cuDNN, and TensorRT versions that ship with JetPack 4.3, together with the trt-yolo-app from https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps at the last commit before the app was removed (3a8957b2d985d7fc2498a0f070832eb145e809ca). I don’t need DeepStream itself, just optimized inference with TensorRT in C++. I keep getting the error below even though I have tried batch size = 1 and several different maximum workspace sizes. Building an engine for the stock Tiny YOLOv3 model fails in the same way. Any ideas?
ERROR: Internal error: could not find any implementation for node mm1_19, try increasing the workspace size with IBuilder::setMaxWorkspaceSize()
ERROR: ../builder/tacticOptimizer.cpp (1461) - OutOfMemory Error in computeCosts: 0
trt-yolo-app: /home/nvidia/src/deepstream_reference_apps/yolo/lib/yolo.cpp:460: void Yolo::createYOLOEngine(nvinfer1::DataType, Int8EntropyCalibrator*): Assertion `m_Engine != nullptr' failed.
Aborted (core dumped)
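For reference, this is the kind of change I’ve been experimenting with. It’s only a minimal sketch against the TensorRT 6 builder API from JetPack 4.3, not the actual code in yolo.cpp (the real engine creation happens inside Yolo::createYOLOEngine; the helper name and the 1 GiB figure here are just illustrative):

```cpp
#include "NvInfer.h"  // TensorRT headers from JetPack 4.3

// Illustrative helper showing the two builder settings I've been varying.
// buildCudaEngine() returns nullptr when no engine could be built, which is
// what trips the `m_Engine != nullptr` assertion in yolo.cpp.
nvinfer1::ICudaEngine* buildEngine(nvinfer1::IBuilder* builder,
                                   nvinfer1::INetworkDefinition* network)
{
    builder->setMaxBatchSize(1);               // tried batch size = 1
    builder->setMaxWorkspaceSize(1ULL << 30);  // tried sizes up to 1 GiB
    return builder->buildCudaEngine(*network); // nullptr on failure
}
```

Varying the value passed to setMaxWorkspaceSize() (and lowering the batch size) had no effect; the builder still fails at node mm1_19.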