Jetson Nano: status: Internal: too many resources requested for launch

I'm just trying to run some code on the GPU of a 2GB Nano board, but I get the following error: "status: Internal: too many resources requested for launch".

The error occurs whenever I execute code like the following:
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
import tensorflow as tf
x = tf.signal.fftshift([0, 1, 2, 3, 4, -5, -4, -3, -2, -1])

2022-05-26 21:48:48.435047: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.10.2
2022-05-26 21:48:57.157756: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2022-05-26 21:48:57.173363: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1001] ARM64 does not support NUMA - returning NUMA node zero
2022-05-26 21:48:57.173711: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1734] Found device 0 with properties:
pciBusID: 0000:00:00.0 name: NVIDIA Tegra X1 computeCapability: 5.3
coreClock: 0.9216GHz coreCount: 1 deviceMemorySize: 1.93GiB deviceMemoryBandwidth: 194.55MiB/s
2022-05-26 21:48:57.173865: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.10.2
2022-05-26 21:48:57.180181: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.10
2022-05-26 21:48:57.180460: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.10
2022-05-26 21:48:57.185066: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10
2022-05-26 21:48:57.186268: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10
2022-05-26 21:48:57.192794: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.10
2022-05-26 21:48:57.198667: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.10
2022-05-26 21:48:57.199491: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8
2022-05-26 21:48:57.199890: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1001] ARM64 does not support NUMA - returning NUMA node zero
2022-05-26 21:48:57.200289: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1001] ARM64 does not support NUMA - returning NUMA node zero
2022-05-26 21:48:57.200496: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1872] Adding visible gpu devices: 0
2022-05-26 21:48:57.203585: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1001] ARM64 does not support NUMA - returning NUMA node zero
2022-05-26 21:48:57.203965: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1734] Found device 0 with properties:
pciBusID: 0000:00:00.0 name: NVIDIA Tegra X1 computeCapability: 5.3
coreClock: 0.9216GHz coreCount: 1 deviceMemorySize: 1.93GiB deviceMemoryBandwidth: 194.55MiB/s
2022-05-26 21:48:57.204262: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1001] ARM64 does not support NUMA - returning NUMA node zero
2022-05-26 21:48:57.204559: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1001] ARM64 does not support NUMA - returning NUMA node zero
2022-05-26 21:48:57.204677: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1872] Adding visible gpu devices: 0
2022-05-26 21:48:57.204815: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.10.2
2022-05-26 21:49:01.735889: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-05-26 21:49:01.736047: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264] 0
2022-05-26 21:49:01.736129: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0: N
2022-05-26 21:49:01.736725: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1001] ARM64 does not support NUMA - returning NUMA node zero
2022-05-26 21:49:01.737473: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1001] ARM64 does not support NUMA - returning NUMA node zero
2022-05-26 21:49:01.737906: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1001] ARM64 does not support NUMA - returning NUMA node zero
2022-05-26 21:49:01.738188: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 93 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
2022-05-26 21:49:01.825194: F tensorflow/core/kernels/roll_op_gpu.cu.cc:84] Non-OK-status: GpuLaunchKernel(RollKernel, cfg.block_count, cfg.thread_per_block, 0, d.stream(), cfg.virtual_thread_count, num_dims, input, output, reinterpret_cast<const int32*>(dim_buf), reinterpret_cast<const int32*>(thres_buf), reinterpret_cast<const int64*>(range_buf)) status: Internal: too many resources requested for launch

I would really appreciate some help getting my code running. Thanks.

Python 3.6.9
TensorFlow 2.5.0
CUDA 10.2.89

Hi,

This is a known issue when using TensorFlow on Nano.

The root cause is that the register/resource configuration used by TensorFlow is not appropriate for the Nano.
However, since TensorFlow is a third-party library, this is not easy for us to fix.

As a workaround, you can turn off the GPU so the op runs on the CPU instead.
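For example (a minimal sketch that hides the CUDA device before TensorFlow is imported, so the op falls back to the CPU):

import os

# Hide all CUDA devices before TensorFlow is imported so it falls back to the CPU.
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

import tensorflow as tf

# Alternatively, after importing, the GPU can be disabled at the framework level:
# tf.config.set_visible_devices([], 'GPU')

x = tf.signal.fftshift([0, 1, 2, 3, 4, -5, -4, -3, -2, -1])
print(x)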

Thanks.

Thanks very much for your reply. I know it will work on the CPU; I'm wondering whether I can configure the registers or resources to suit the Nano.
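For reference, the CPU fallback I mean is just pinning the op with tf.device (a minimal sketch), so only the failing kernel is kept off the GPU:

import tensorflow as tf

# Run only the fftshift (which the log shows hitting the roll GPU kernel) on the CPU;
# the rest of the graph can still use the GPU.
with tf.device('/CPU:0'):
    x = tf.signal.fftshift([0, 1, 2, 3, 4, -5, -4, -3, -2, -1])
print(x)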
