Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.3
• TensorRT Version 188.8.131.52
• NVIDIA GPU Driver Version (valid for GPU only) 535.113.01
• Issue Type( questions, new requirements, bugs) bug
Converting a YOLOv8m ONNX file to an engine file consumes too much GPU memory (more than 11.8 GB) and eventually crashes. Why does it consume this much GPU memory?
It depends on the model itself. The more layers and the more complex the operations, the more GPU memory is consumed during engine building.
Do you mean that when you run the "trtexec" command to generate a TensorRT engine from the ONNX model, it consumes too much memory and crashes?
Is there any other way I can successfully create an engine file for YOLOv8(m) on a 12 GB GPU (NVIDIA GeForce GTX 1080 Ti)?
I run a DeepStream application that uses an inference plugin; I set the ONNX file path in the model configuration file, and the engine file is created under the hood.
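For reference, this is roughly what such a Gst-nvinfer configuration looks like. The file paths and batch size here are placeholders; `workspace-size` (in MB) is the Gst-nvinfer property that caps the TensorRT builder workspace during on-the-fly engine creation, which can help on a memory-constrained GPU:

```ini
# Sketch of a Gst-nvinfer model config (paths are placeholders)
[property]
gpu-id=0
onnx-file=yolov8m.onnx
# If this engine file does not exist, nvinfer builds it from the ONNX model
model-engine-file=yolov8m.onnx_b1_gpu0_fp16.engine
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# Limit the TensorRT builder workspace (MB) to reduce peak GPU memory
workspace-size=2048
```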
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
You can try the “trtexec” way.
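A minimal sketch of building the engine offline with trtexec, assuming your ONNX file is named `yolov8m.onnx`. Limiting the builder workspace can keep peak GPU memory within the 11 GB of a GTX 1080 Ti (on older TensorRT releases the flag is `--workspace=2048` instead of `--memPoolSize`):

```shell
# Build a FP16 engine with a capped builder workspace (TensorRT 8.4+ syntax)
trtexec --onnx=yolov8m.onnx \
        --saveEngine=yolov8m.engine \
        --fp16 \
        --memPoolSize=workspace:2048MiB
```

Once the engine file is built, point `model-engine-file` in your nvinfer config at it so DeepStream skips the in-process conversion entirely.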
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.