I’ve tried two ways of limiting TensorFlow’s GPU memory growth: via the session config and at the GPU device level.
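For reference, this is roughly what both attempts look like (a minimal sketch against the TF 2.x tf.config API, assuming TensorFlow is importable; my actual script differs, and the error below is raised before any of this runs, on `import cv2`):

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Attempt 1: let TF allocate GPU memory on demand instead of grabbing it all
    tf.config.experimental.set_memory_growth(gpus[0], True)
    # Attempt 2: hard-cap the logical device instead (512 MB is an example value)
    # tf.config.set_logical_device_configuration(
    #     gpus[0],
    #     [tf.config.LogicalDeviceConfiguration(memory_limit=512)])
```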
For both, I get this error:
ImportError: /usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block
From searching, I found people recommending adding this to ~/.bashrc: export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1:$LD_PRELOAD
Unfortunately, this did not fix the problem.
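In case the exact form matters, this is the line as it ends up in ~/.bashrc; the ${LD_PRELOAD:+...} guard is my own addition to avoid appending a dangling colon when the variable was previously unset:

```shell
# Preload libgomp before Python starts, so its static TLS slot is
# reserved early; keep any pre-existing LD_PRELOAD entries after it.
export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1${LD_PRELOAD:+:$LD_PRELOAD}
```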
This is my environment:
Python 3.6.9
Tensorflow 2.4.1 (also tried 2.5.0+nv21.8)
JetPack 4.6.2
Full stack trace:
2022-10-11 10:14:29.349921: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 449 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
1 Physical GPUs, 1 Logical GPUs
Traceback (most recent call last):
  File "/home/user/face.py", line 1, in <module>
    import cv2
  File "/usr/lib/python3.6/dist-packages/cv2/__init__.py", line 89, in <module>
    bootstrap()
  File "/usr/lib/python3.6/dist-packages/cv2/__init__.py", line 79, in bootstrap
    import cv2
ImportError: /usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block
Any ideas on how to fix this?