Undefined cuDNN symbols when training

I’m trying to train the following model with GPU inference, but I’m getting undefined symbol errors. Full log attached:
error.txt (15.6 KB)

My setup:
Fedora 40
CUDA Toolkit 12.4 installed from Fedora 39 repository
cuDNN 8.9.7 installed from the tarball. I can’t use cuDNN 9 since onnxruntime-gpu doesn’t support it
onnxruntime-gpu installed with ‘pip install onnxruntime-gpu --extra-index-url’ pointing at the Azure DevOps package feed linked from the onnxruntime documentation
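
This is roughly how I’m checking which execution providers onnxruntime actually picks up (a minimal sketch; the model path is a placeholder):

```python
import onnxruntime as ort

# Providers compiled into this onnxruntime-gpu build
print(ort.get_available_providers())

# Request TensorRT/CUDA first, fall back to CPU; onnxruntime drops providers
# whose shared libraries (cuDNN, TensorRT, ...) it cannot load at runtime
sess = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=[
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)
print(sess.get_providers())  # providers that were actually enabled
```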

cuDNN itself seems to be working fine, since I’m able to run the fast-plate-ocr benchmark on the GPU.
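
In case it helps, the cuDNN 8 library loads directly from Python (a minimal check, assuming the tarball libraries are named libcudnn.so.8 and are on LD_LIBRARY_PATH):

```python
import ctypes

# Raises OSError if the dynamic linker cannot find or resolve cuDNN 8
lib = ctypes.CDLL("libcudnn.so.8")
print("loaded", lib)
```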

onnxruntime is also unable to find TensorRT for some reason, even though it was installed from a tarball as well (version 10.0.1.6) and added to LD_LIBRARY_PATH.
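
This is the kind of check I’m using to see whether the TensorRT libraries are visible to the Python process (a sketch; libnvinfer.so.10 is the library name I’d expect from the TensorRT 10.0.1.6 tarball):

```python
import ctypes
import os

# The search path the process actually sees
print(os.environ.get("LD_LIBRARY_PATH"))

# onnxruntime's TensorRT execution provider loads libnvinfer at runtime;
# if this fails, the provider gets skipped
ctypes.CDLL("libnvinfer.so.10")
```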

I’d appreciate any help

Update: the TensorFlow backend works fine.
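
For completeness, a minimal check along these lines confirms the TensorFlow backend sees the GPU (assuming TensorFlow 2.x):

```python
import tensorflow as tf

# Lists GPUs visible to TensorFlow; an empty list means it fell back to CPU
print(tf.config.list_physical_devices("GPU"))
```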