LLVM ERROR: out of memory

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU, RTX 4060
• DeepStream Version: 6.2 or 6.4
• JetPack Version (valid for Jetson only): N/A
• TensorRT Version: 8.5 or 8.6
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs): bugs
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name — for which plugin or for which sample application — and the function description.)

Reproduce

I am working with DeepStream and NVIDIA TAO.
On an RTX 3060, using code taken from the DeepStream Python apps, I created a pgie config file with a TAO .etlt model, and I successfully converted the .etlt model to a TensorRT engine.

However, on an RTX 4060 I get the error LLVM ERROR: out of memory (DS 6.4 and TensorRT 8.6). The detailed log is below:
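For context, this is roughly the shape of the pgie config I used with the .etlt model. It is a minimal sketch: the key, file paths, input scale, and class count below are placeholders, not my actual values, and must match your own exported model.

```
[property]
gpu-id=0
# Placeholder values below — set these to match your exported TAO model
net-scale-factor=0.0039215697906911373
tlt-model-key=nvidia_tlt
tlt-encoded-model=model.etlt
model-engine-file=model.etlt_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=3
```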

Creating Pipeline
Creating streammux
Creating source_bin 0
Creating uridecodebin for [file:///ws/threat_detection/version_007_14.mp4]
source-bin-00
Creating Pgie
Creating nv optical flow element
Creating tiler
Creating nvvidconv
Creating nvosd
Creating H264 Encoder
Creating H264 rtppay
Adding elements to Pipeline
Linking elements in the Pipeline
LLVM ERROR: out of memory
Aborted (core dumped)

Please help me check this.

How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)

Thanks Fiona, I can convert the .etlt model to a TensorRT engine by using tao-converter.
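For anyone hitting the same problem, here is a sketch of a typical tao-converter invocation. The key, input dimensions, and output blob names below are placeholders (the output names shown are the standard DetectNet_v2 ones) and must match how your model was exported:

```shell
# Sketch: convert an exported .etlt model to a TensorRT engine with tao-converter.
# Placeholders: the -k key, -d input dims (C,H,W), and -o output blob names
# depend on your TAO export; adjust them for your model.
./tao-converter model.etlt \
  -k nvidia_tlt \
  -d 3,544,960 \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -t fp16 \
  -m 1 \
  -e model.etlt_b1_gpu0_fp16.engine
```

Note that the engine must be built on the same GPU (or at least the same compute capability) it will run on, so the RTX 3060 engine cannot simply be copied to the RTX 4060.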

Good news.

But how to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)