CUDA is installed and activated, but echo $CUDA_VISIBLE_DEVICES returns nothing and CUDA can't be used at all

Hi, I have some questions about CUDA.
I installed JetPack 5.0.2 on a Jetson Xavier NX developer kit, but CUDA is not detected by any library such as torch or OpenCV. (For example, torch.cuda.is_available() returns False and cv2.cuda.getCudaEnabledDeviceCount() returns 0. For two weeks I thought my OpenCV build method, or compatibility between PyTorch and OpenCV, was causing the error, but after more than ten rebuilds I am quite sure that CUDA itself has a problem.)

After searching, I found a topic with a similar problem: CUDA can't work on AGX Orin.
The only difference between that poster and me is the GPU status: my Jetson board can activate the GPU, which I can confirm in jtop.

→ This means the GPU is available and can be used by 5 processes (listed PIDs: 4 for seo and 1 for root), am I right?

In the post linked above, the jtop status screenshot shows the GPU is offline, so the recommended solution was to reflash the device, but I think that might not apply to me.

Can you tell me how to solve the problem?

  • When I run ./deviceQuery in /usr/local/cuda-11.4/samples/1_Utilities/deviceQuery, it shows
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

cudaGetDeviceCount returned 100
-> no CUDA-capable device is detected
Result = FAIL
  • When I run jetson_release -v, it shows
Software part of jetson-stats 4.2.3 - (c) 2023, Raffaello Bonghi
Model: NVIDIA Jetson Xavier NX Developer Kit - Jetpack 5.0.2 GA [L4T 35.1.0]
NV Power Mode[8]: MODE_20W_6CORE
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
 - 699-level Part Number: 699-13668-0001-301 F.0
 - P-Number: p3668-0001
 - Module: NVIDIA Jetson Xavier NX
 - SoC: tegra194
 - CUDA Arch BIN: 7.2
 - Codename: Jakku
Platform:
 - Machine: aarch64
 - System: Linux
 - Distribution: Ubuntu 20.04 focal
 - Release: 5.10.104-tegra
 - Python: 3.8.10
jtop:
 - Version: 4.2.3
 - Service: Active
Libraries:
 - CUDA: 11.4.239
 - cuDNN: 8.4.1.50
 - TensorRT: 5.0.2
 - VPI: 2.1.6
 - Vulkan: 1.3.203
 - OpenCV: 4.6.0 - with CUDA: YES
  • ls /usr/src | grep nvidia shows
nvidia
  • echo $CUDA_VISIBLE_DEVICES returns nothing (see the note after this list).

  • nvcc -V shows

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_May__4_00:02:26_PDT_2022
Cuda compilation tools, release 11.4, V11.4.239
Build cuda_11.4.r11.4/compiler.31294910_0
  • cat /etc/nv_tegra_release shows
# R35 (release), REVISION: 1.0, GCID: 31346300, BOARD: t186ref, EABI: aarch64, DATE: Thu Aug 25 18:41:45 UTC 2022
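
For what it's worth, an unset CUDA_VISIBLE_DEVICES is the normal default (all devices are visible); the variable only hides devices when it is explicitly set. One gotcha: setting it to an empty string hides every device and produces exactly the "no CUDA-capable device" error. A quick check to tell "unset" apart from "set but empty":

echo "${CUDA_VISIBLE_DEVICES-unset}"   # prints "unset" if the variable is not set at all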

I already did sudo apt install cuda, sudo apt dist-upgrade, and sudo apt install nvidia-jetpack.

Moving this topic to the Jetson forums for visibility.

My comment isn't really about the specific problem, but L4T R35.1.0 is rather old for what was the start of a completely new L4T major release. I'll suggest that you start by flashing with the most recent L4T/JetPack, in which case the issue might just go away. The earliest R35.x releases had a lot of issues that were later fixed. For a list of releases by L4T version or JetPack version, see:

@linuxdev Hi, thanks for your reply. I read your comment on another page: your comment

In my case, I am using JetPack 5.0.2, which is higher than 4.3.1, so can I reflash and upgrade JetPack up to 5.1.2 with an apt command? My Jetson board is attached to another board, so it is quite difficult to reflash it by pressing the power button located below the Jetson board.

BTW, even if it is possible to update over the air, for some reason I have to stick to JetPack 5.0.2 as much as possible, so I will try the reflashing method last if other methods don't work. Can you see anything else suspicious in my post besides the L4T version? Thanks.

If you use sudo before the deviceQuery, does that provide different output?

$ sudo ./deviceQuery

@Kangalow Hi, I just tried with sudo command but get the same result.

seo@ubuntu:~$ cd /usr/local/cuda-11.4/samples/1_Utilities/deviceQuery
seo@ubuntu:/usr/local/cuda-11.4/samples/1_Utilities/deviceQuery$ sudo ./deviceQuery 
[sudo] password for seo: 
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

cudaGetDeviceCount returned 100
-> no CUDA-capable device is detected
Result = FAIL

Earlier L4T 35.x (JetPack 5.x) should be able to migrate to newer L4T releases (I have not done so myself, I always just flash) using the existing OTA.
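
In rough outline, the OTA route is just repointing the L4T apt sources at the newer release and upgrading (a sketch, untested here; assumes the standard source file and r35.4 as the target):

sudo sed -i 's/r35.1/r35.4/g' /etc/apt/sources.list.d/nvidia-l4t-apt-source.list   # repoint the common and t194 repos
sudo apt update
sudo apt dist-upgrade   # pulls the newer nvidia-l4t-* and JetPack packages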

It is interesting that deviceQuery is not showing a GPU, in which case even the correct CUDA probably won't work. Does the GUI run ok? If so, what do you see from the following (you might need to "sudo apt-get install mesa-utils"):
glxinfo | egrep -i '(version|nvidia)'

Basically I'm curious if the NVIDIA drivers for the GUI are able to find and use the GPU.

Sorry for the late check. I got the following output from glxinfo | egrep -i '(version|nvidia)':

seo@ubuntu:~$ glxinfo | egrep -i '(version|nvidia)'
server glx vendor string: NVIDIA Corporation
server glx version string: 1.4
client glx vendor string: NVIDIA Corporation
client glx version string: 1.4
GLX version: 1.4
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: NVIDIA Tegra Xavier (nvgpu)/integrated
OpenGL core profile version string: 4.6.0 NVIDIA 35.1.0
OpenGL core profile shading language version string: 4.60 NVIDIA
OpenGL version string: 4.6.0 NVIDIA 35.1.0
OpenGL shading language version string: 4.60 NVIDIA
OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 35.1.0
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
    GL_EXT_shader_group_vote, GL_EXT_shader_implicit_conversions,

It ends with a ',' and no other output is printed. That seems quite odd; should something more be printed?

By the way, what is the GUI you mentioned? If it means the Jetson Power GUI, I will attach it below.

Well, the X driver is NVIDIA, which is correct. What do you see from "lsmod"? I suspect nvgpu is part of it. If not, that would be confusing.

Please show the jtop info page. It will show which version of CUDA is installed, and whether the system can find the version of OpenCV with CUDA you have built.

From lsmod, I can see nvgpu.

Module                  Size  Used by
fuse                  118784  5
xt_conntrack           16384  1
xt_MASQUERADE          16384  1
nf_conntrack_netlink    45056  0
nfnetlink              20480  2 nf_conntrack_netlink
iptable_nat            16384  1
nf_nat                 45056  2 iptable_nat,xt_MASQUERADE
nf_conntrack          131072  4 xt_conntrack,nf_nat,nf_conntrack_netlink,xt_MASQUERADE
nf_defrag_ipv6         24576  1 nf_conntrack
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  2 nf_conntrack,nf_nat
xt_addrtype            16384  2
iptable_filter         16384  1
br_netfilter           32768  0
lzo_rle                16384  36
lzo_compress           16384  1 lzo_rle
zram                   32768  6
overlay               114688  0
hid_logitech_hidpp     45056  0
snd_soc_tegra186_asrc    36864  1
snd_soc_tegra186_arad    24576  2 snd_soc_tegra186_asrc
snd_soc_tegra186_dspk    20480  2
snd_soc_tegra210_ope    32768  1
snd_soc_tegra210_iqc    16384  0
snd_soc_tegra210_mvc    20480  2
snd_soc_tegra210_afc    20480  6
snd_soc_tegra210_dmic    20480  4
snd_soc_tegra210_amx    32768  4
snd_soc_tegra210_adx    28672  4
snd_soc_tegra210_i2s    24576  6
snd_soc_tegra210_mixer    45056  1
snd_soc_tegra210_admaif   118784  1
snd_soc_tegra_pcm      16384  1 snd_soc_tegra210_admaif
snd_soc_tegra210_sfc    57344  4
ofpart                 16384  0
cmdlinepart            16384  0
qspi_mtd               28672  0
mtd                    69632  4 cmdlinepart,qspi_mtd,ofpart
aes_ce_blk             36864  0
crypto_simd            24576  1 aes_ce_blk
cryptd                 28672  1 crypto_simd
aes_ce_cipher          20480  1 aes_ce_blk
ghash_ce               28672  0
sha2_ce                20480  0
sha256_arm64           28672  1 sha2_ce
sha1_ce                20480  0
input_leds             16384  0
snd_soc_tegra_machine_driver    16384  0
snd_soc_spdif_tx       16384  0
leds_gpio              16384  0
hid_logitech_dj        28672  0
max77620_thermal       16384  0
snd_soc_tegra210_adsp   753664  1
snd_soc_tegra_utils    28672  3 snd_soc_tegra210_admaif,snd_soc_tegra_machine_driver,snd_soc_tegra210_adsp
snd_soc_simple_card_utils    24576  1 snd_soc_tegra_utils
tegra_bpmp_thermal     16384  0
nvadsp                110592  1 snd_soc_tegra210_adsp
nv_imx219              20480  0
userspace_alert        16384  0
snd_soc_tegra210_ahub  1228800  3 snd_soc_tegra210_ope,snd_soc_tegra210_sfc
tegra210_adma          28672  3 snd_soc_tegra210_admaif,snd_soc_tegra210_adsp
snd_hda_codec_hdmi     57344  4
snd_hda_tegra          20480  0
snd_hda_codec         118784  2 snd_hda_codec_hdmi,snd_hda_tegra
snd_hda_core           81920  3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_tegra
spi_tegra210_qspi      28672  0
realtek                24576  1
spi_tegra114           32768  0
loop                   36864  1
binfmt_misc            24576  1
ina3221                24576  0
pwm_fan                24576  0
nvgpu                2494464  20
nvmap                 192512  95 nvgpu
ramoops                28672  0
reed_solomon           20480  1 ramoops
ip_tables              36864  2 iptable_filter,iptable_nat
x_tables               49152  5 xt_conntrack,iptable_filter,xt_addrtype,ip_tables,xt_MASQUERADE

That means the Jetson is fully capable of working correctly. What @Kangalow mentions is how to get to the next debug step.

OpenCV with CUDA support shows as available in jtop.
I was trying to use CUDA for object detection (a YOLO model), so I rebuilt OpenCV, but using the GPU failed. Initially I thought my method of building OpenCV caused the problem, but after painful days I concluded that CUDA itself has the problem.
This link is my previous topic about OpenCV.

If you are relying on OpenCV support for CUDA access, then you have to use the version of OpenCV that you compiled. What is your environment variable for PYTHONPATH set to?
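
You can check both the variable and what Python actually searches from inside the interpreter; a quick sketch:

import os, sys
print(os.environ.get("PYTHONPATH"))   # None means the variable is not set
print(sys.path)                       # the directories Python really searches for modules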

It sounds both hopeful and hopeless… I can't figure out what to do.
Just in case, is CUDA sensitive to the Python path? I tried to build OpenCV several times in various environments (Python 2.7, 3.8, Anaconda, virtualenv), so I also have path problems…

Yesterday something went wrong again: when I import cv2 in Python 3.8, it shows version 4.5.4, which I didn't install, while jtop still shows version 4.6.0. (The onboard OpenCV was 4.5.4, but I deleted it; before I rebuilt 4.6.0, jtop showed OpenCV with CUDA support: MISSING.)

But I'm not sure, because CUDA support was unavailable even when the Python path was correct.

Is there any other way to get CUDA access without OpenCV? I am planning to use CUDA for both point cloud classification and image object detection, but when I searched for how to activate CUDA on Jetson, every guide video on YouTube led to rebuilding OpenCV.

BTW, as I mentioned above (in my reply to linuxdev), my Python path is messed up. My current .bashrc looks like this:

export CUDA_HOME="/usr/local/cuda-11.4"
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:$LD_LIBRARY_PATH"

export PYENV_ROOT="$HOME/seo/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"

alias python="/usr/bin/python"
#export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib/"
export LD_PRELOAD="/usr/lib/aarch64-linux-gnu/libgomp.so.1:$LD_PRELOAD"

It looks like you built OpenCV version 4.6.0 with CUDA support. If you run from the Terminal:

opencv_version -v

It will give the build information and show where that version of OpenCV is installed. If you did a straight build using Python 3.8 (no virtual environment), the results include:

Install to: /usr/local

and the Python 3 section shows:

install path: lib/python3.8/site-packages/cv2/python-3.8

Which means that the cv2 package is in:

/usr/local/lib/python3.8/site-packages

Check to make sure that the directory for cv2 exists there. If not, your installation is messed up.

That means you should set your PYTHONPATH environment variable (somewhere like in .bashrc):

export PYTHONPATH=/usr/local/lib/python3.8/site-packages:$PYTHONPATH

After you import cv2 in Python:

print(cv2.getBuildInformation())

it should match the opencv_version output from earlier. getBuildInformation() will tell you whether CUDA is enabled for the OpenCV build you are using in Python.
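
A quick way to confirm which cv2 Python actually picked up, and whether that build can see the GPU (a sketch; the printed path is whatever your install produced):

import cv2
print(cv2.__version__)                        # should match opencv_version
print(cv2.__file__)                           # shows which cv2 was actually imported
print(cv2.cuda.getCudaEnabledDeviceCount())   # > 0 once the CUDA build actually works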

Note that if you've been building and deleting, the installation may be in an unstable state, and things may not work smoothly.

When people mention PYTHONPATH, they mean the environment variable PYTHONPATH. Its content tells Python where to look for packages/modules in addition to the default locations.

CUDA access is through a library or a compiler. You need to include and link against it from your preferred programming language. You can write CUDA-specific code (it's very similar to C/C++, with extensions) and compile it using the nvcc compiler.

To be clear, OpenCV is a computer vision library. You can accelerate some functions in OpenCV using CUDA to take advantage of the GPU. However, that's just one library, and you are certainly not required to use it just to use CUDA. Many third-party libraries use OpenCV, which might lead to the confusion of needing OpenCV to access CUDA.
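
For example, you can poke the CUDA runtime directly from Python with ctypes, no OpenCV involved (a sketch, assuming the stock JetPack library path):

import ctypes

# Load the CUDA runtime that ships with JetPack (default install location)
cudart = ctypes.CDLL("/usr/local/cuda-11.4/lib64/libcudart.so")

count = ctypes.c_int()
err = cudart.cudaGetDeviceCount(ctypes.byref(count))
print("error code:", err)           # 0 = success; 100 = no CUDA-capable device, as deviceQuery reported
print("device count:", count.value)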


Thanks for the specific steps. opencv_version -v shows

General configuration for OpenCV 4.6.0 
=====================================
  Version control:               unknown

  Extra modules:
    Location (extra):            /home/seo/opencv_contrib-4.6.0/modules
    Version control (extra):     unknown

  Platform:
    Timestamp:                   2023-11-01T00:25:15Z
    Host:                        Linux 5.10.104-tegra aarch64
    CMake:                       3.16.3
    CMake generator:             Unix Makefiles
    CMake build tool:            /usr/bin/make
    Configuration:               RELEASE

  CPU/HW features:
    Baseline:                    NEON FP16

  C/C++:
    Built as dynamic libs?:      YES
    C++ standard:                11
    C++ Compiler:                /usr/bin/c++  (ver 9.4.0)
    C++ flags (Release):         -fsigned-char -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections    -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG  -DNDEBUG
    C++ flags (Debug):           -fsigned-char -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections    -fvisibility=hidden -fvisibility-inlines-hidden -g  -O0 -DDEBUG -D_DEBUG
    C Compiler:                  /usr/bin/cc
    C flags (Release):           -fsigned-char -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections    -fvisibility=hidden -O3 -DNDEBUG  -DNDEBUG
    C flags (Debug):             -fsigned-char -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections    -fvisibility=hidden -g  -O0 -DDEBUG -D_DEBUG
    Linker flags (Release):      -Wl,--gc-sections -Wl,--as-needed -Wl,--no-undefined  
    Linker flags (Debug):        -Wl,--gc-sections -Wl,--as-needed -Wl,--no-undefined  
    ccache:                      NO
    Precompiled headers:         NO
    Extra dependencies:          m pthread cudart_static -lpthread dl rt nppc nppial nppicc nppidei nppif nppig nppim nppist nppisu nppitc npps cublas cudnn cufft -L/usr/local/cuda-11.4/lib64 -L/usr/lib/aarch64-linux-gnu
    3rdparty dependencies:

  OpenCV modules:
    To be built:                 alphamat aruco barcode bgsegm bioinspired calib3d ccalib core cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev cvv datasets dnn dnn_objdetect dnn_superres dpm face features2d flann freetype fuzzy gapi hdf hfs highgui img_hash imgcodecs imgproc intensity_transform line_descriptor mcc ml objdetect optflow phase_unwrapping photo plot python2 python3 quality rapid reg rgbd saliency sfm shape stereo stitching structured_light superres surface_matching text tracking video videoio videostab viz wechat_qrcode xfeatures2d ximgproc xobjdetect xphoto
    Disabled:                    world
    Disabled by dependency:      -
    Unavailable:                 java julia matlab ovis ts
    Applications:                apps
    Documentation:               NO
    Non-free algorithms:         YES

  GUI:                           QT5
    QT:                          YES (ver 5.12.8 )
      QT OpenGL support:         YES (Qt5::OpenGL 5.12.8)
    OpenGL support:              YES (/usr/lib/aarch64-linux-gnu/libGL.so /usr/lib/aarch64-linux-gnu/libGLU.so)
    VTK support:                 YES (ver 6.3.0)

  Media I/O: 
    ZLib:                        /usr/lib/aarch64-linux-gnu/libz.so (ver 1.2.11)
    JPEG:                        /usr/lib/aarch64-linux-gnu/libjpeg.so (ver 80)
    WEBP:                        /usr/lib/aarch64-linux-gnu/libwebp.so (ver encoder: 0x020e)
    PNG:                         /usr/lib/aarch64-linux-gnu/libpng.so (ver 1.6.37)
    TIFF:                        /usr/lib/aarch64-linux-gnu/libtiff.so (ver 42 / 4.1.0)
    JPEG 2000:                   OpenJPEG (ver 2.3.1)
    OpenEXR:                     /usr/lib/aarch64-linux-gnu/libImath.so /usr/lib/aarch64-linux-gnu/libIlmImf.so /usr/lib/aarch64-linux-gnu/libIex.so /usr/lib/aarch64-linux-gnu/libHalf.so /usr/lib/aarch64-linux-gnu/libIlmThread.so (ver 2_3)
    HDR:                         YES
    SUNRASTER:                   YES
    PXM:                         YES
    PFM:                         YES

  Video I/O:
    FFMPEG:                      YES
      avcodec:                   YES (58.54.100)
      avformat:                  YES (58.29.100)
      avutil:                    YES (56.31.100)
      swscale:                   YES (5.5.100)
      avresample:                YES (4.0.0)
    GStreamer:                   YES (1.16.3)
    v4l/v4l2:                    YES (linux/videodev2.h)
    Xine:                        YES (ver 1.2.9)

  Parallel framework:            pthreads

  Trace:                         YES (with Intel ITT)

  Other third-party libraries:
    Lapack:                      YES (/usr/lib/aarch64-linux-gnu/liblapack.so /usr/lib/aarch64-linux-gnu/libcblas.so /usr/lib/aarch64-linux-gnu/libatlas.so)
    Eigen:                       YES (ver 3.3.7)
    Custom HAL:                  YES (carotene (ver 0.0.1))
    Protobuf:                    build (3.19.1)

  NVIDIA CUDA:                   YES (ver 11.4, CUFFT CUBLAS FAST_MATH)
    NVIDIA GPU arch:             72
    NVIDIA PTX archs:

  cuDNN:                         YES (ver 8.4.1)

  OpenCL:                        YES (no extra features)
    Include path:                /home/seo/opencv-4.6.0/3rdparty/include/opencl/1.2
    Link libraries:              Dynamic load

  Python 2:
    Interpreter:                 /usr/bin/python2.7 (ver 2.7.18)
    Libraries:                   /usr/lib/aarch64-linux-gnu/libpython2.7.so (ver 2.7.18)
    numpy:                       /usr/lib/python2.7/dist-packages/numpy/core/include (ver 1.16.5)
    install path:                lib/python2.7/dist-packages/cv2/python-2.7

  Python 3:
    Interpreter:                 /usr/bin/python3.8 (ver 3.8.10)
    Libraries:                   /usr/lib/aarch64-linux-gnu/libpython3.8.so (ver 3.8.10)
    numpy:                       /home/seo/.local/lib/python3.8/site-packages/numpy/core/include (ver 1.24.4)
    install path:                /home/seo/.pyenv/versions/3.8.10/lib/python3.8/site-packages/cv2/python-3.8

  Python (for build):            /usr/bin/python2.7

  Java:                          
    ant:                         NO
    JNI:                         NO
    Java wrappers:               NO
    Java tests:                  NO

  Install to:                    /usr/local
-----------------------------------------------------------------

It seems the Python 3 section is messed up. Actually, I can't see a .pyenv directory listed in /home/seo, but it exists. Is this a virtual environment? When I installed Anaconda and created an environment with conda create -n env_name, the environment's directory did appear.

seo@ubuntu:~$ cd /home/seo
seo@ubuntu:~$ ls
'conda install noticification'   jetsonUtilities        Public
'current state'                  Music                  seo
 DeepStream-Yolo                 opencv-4.6.0           Templates
 Desktop                         OpenCV-4-8-0.sh        ultralytics
 Documents                       opencv_contrib-4.6.0   Videos
 Downloads                       Pictures               yolov5
seo@ubuntu:~$ cd /home/seo/.pyenv
seo@ubuntu:~/.pyenv$ ls
bin           CONTRIBUTING.md  Makefile   shims                versions
CHANGELOG.md  Dockerfile       man        src
COMMANDS.md   libexec          plugins    terminal_output.png
completions   LICENSE          pyenv.d    test
CONDUCT.md    MAINTENANCE.md   README.md  version

I can find that a cv2 directory exists at /usr/lib/python3.8/dist-packages, not at /usr/local/lib/python3.8/site-packages as you said… and it also doesn't match the install path reported by the Python 3 section.

print(cv2.getBuildInformation()) shows a different result:

>>> print(cv2.getBuildInformation())
General configuration for OpenCV 4.5.4 =====================================
  Version control:               4.5.4-8-g3e4c170df4

  Platform:
    Timestamp:                   2022-01-18T10:01:01Z
    Host:                        Linux 5.10.65-tegra aarch64
    CMake:                       3.16.3
    CMake generator:             Unix Makefiles
    CMake build tool:            /usr/bin/make
    Configuration:               Release

  CPU/HW features:
    Baseline:                    NEON FP16

  C/C++:
    Built as dynamic libs?:      YES
    C++ standard:                11
    C++ Compiler:                /usr/bin/c++  (ver 9.3.0)
    C++ flags (Release):         -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections    -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG  -DNDEBUG
    C++ flags (Debug):           -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections    -fvisibility=hidden -fvisibility-inlines-hidden -g  -O0 -DDEBUG -D_DEBUG
    C Compiler:                  /usr/bin/cc
    C flags (Release):           -fsigned-char -W -Wall -Werror=return-type -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections    -fvisibility=hidden -O3 -DNDEBUG  -DNDEBUG
    C flags (Debug):             -fsigned-char -W -Wall -Werror=return-type -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections    -fvisibility=hidden -g  -O0 -DDEBUG -D_DEBUG
    Linker flags (Release):      -Wl,--gc-sections -Wl,--as-needed  
    Linker flags (Debug):        -Wl,--gc-sections -Wl,--as-needed  
    ccache:                      NO
    Precompiled headers:         NO
    Extra dependencies:          dl m pthread rt
    3rdparty dependencies:

  OpenCV modules:
    To be built:                 calib3d core dnn features2d flann gapi highgui imgcodecs imgproc ml objdetect photo python2 python3 stitching ts video videoio
    Disabled:                    world
    Disabled by dependency:      -
    Unavailable:                 java
    Applications:                tests perf_tests examples apps
    Documentation:               NO
    Non-free algorithms:         NO

  GUI:                           GTK2
    GTK+:                        YES (ver 2.24.32)
      GThread :                  YES (ver 2.64.6)
      GtkGlExt:                  NO

  Media I/O: 
    ZLib:                        /usr/lib/aarch64-linux-gnu/libz.so (ver 1.2.11)
    JPEG:                        /usr/lib/aarch64-linux-gnu/libjpeg.so (ver 80)
    WEBP:                        build (ver encoder: 0x020f)
    PNG:                         /usr/lib/aarch64-linux-gnu/libpng.so (ver 1.6.37)
    TIFF:                        /usr/lib/aarch64-linux-gnu/libtiff.so (ver 42 / 4.1.0)
    JPEG 2000:                   build (ver 2.4.0)
    HDR:                         YES
    SUNRASTER:                   YES
    PXM:                         YES
    PFM:                         YES

  Video I/O:
    FFMPEG:                      YES
      avcodec:                   YES (58.54.100)
      avformat:                  YES (58.29.100)
      avutil:                    YES (56.31.100)
      swscale:                   YES (5.5.100)
      avresample:                YES (4.0.0)
    GStreamer:                   YES (1.16.2)
    v4l/v4l2:                    YES (linux/videodev2.h)

  Parallel framework:            TBB (ver 2020.1 interface 11101)

  Trace:                         YES (with Intel ITT)

  Other third-party libraries:
    Lapack:                      NO
    Eigen:                       YES (ver 3.3.7)
    Custom HAL:                  YES (carotene (ver 0.0.1))
    Protobuf:                    build (3.5.1)

  Python 2:
    Interpreter:                 /usr/bin/python2.7 (ver 2.7.18)
    Libraries:                   /usr/lib/aarch64-linux-gnu/libpython2.7.so (ver 2.7.18)
    numpy:                       /usr/lib/python2.7/dist-packages/numpy/core/include (ver 1.16.5)
    install path:                lib/python2.7/dist-packages/cv2/python-2.7

  Python 3:
    Interpreter:                 /usr/bin/python3 (ver 3.8.10)
    Libraries:                   /usr/lib/aarch64-linux-gnu/libpython3.8.so (ver 3.8.10)
    numpy:                       /usr/lib/python3/dist-packages/numpy/core/include (ver 1.17.4)
    install path:                lib/python3.8/dist-packages/cv2/python-3.8

  Python (for build):            /usr/bin/python2.7

  Java:                          
    ant:                         NO
    JNI:                         NO
    Java wrappers:               NO
    Java tests:                  NO

  Install to:                    /usr
-----------------------------------------------------------------

but until yesterday it was the same as the opencv_version -v result.

So according to your advice, I have to take two steps: first, set PYTHONPATH in .bashrc and choose to use only ONE Python path; second, since my installation is messed up, delete everything and rebuild with the proper paths? I set the CMake configuration like this before (part of the config):

-D OPENCV_PYTHON3_INSTALL_PATH=$(python3 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")  \
-D PYTHON3_EXECUTABLE=/usr/bin/python3 \
-D PYTHON3_INCLUDE_DIR=$(python3 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
-D PYTHON3_PACKAGES_PATH=$(python3 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
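
One likely culprit: if cmake is run from a pyenv or conda shell, the python3 inside those $(...) substitutions resolves to the virtualenv's interpreter, which would explain the install path ending up under /home/seo/.pyenv. A sketch that pins every flag to the system interpreter (assuming /usr/bin/python3 is the one you want):

-D PYTHON3_EXECUTABLE=/usr/bin/python3 \
-D PYTHON3_INCLUDE_DIR=$(/usr/bin/python3 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
-D PYTHON3_PACKAGES_PATH=$(/usr/bin/python3 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
-D OPENCV_PYTHON3_INSTALL_PATH=$(/usr/bin/python3 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \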

@linuxdev

Also, I found "No NVIDIA GPU found" errors in dmesg.
Is this unrelated to the device's actual working state?

seo@ubuntu:~$ sudo dmesg | grep GPU
[   28.503802] NVRM: No NVIDIA GPU found.
[   28.593438] NVRM: No NVIDIA GPU found.
[   31.478824] NVRM: No NVIDIA GPU found.

And dpkg -l | grep cuda shows

seo@ubuntu:~$ dpkg -l | grep cuda
ii  cuda                                       11.4.14-1                             arm64        CUDA meta-package
ii  cuda-11-4                                  11.4.14-1                             arm64        CUDA 11.4 meta-package
ii  cuda-cccl-11-4                             11.4.222-1                            arm64        CUDA CCCL
ii  cuda-command-line-tools-11-4               11.4.14-1                             arm64        CUDA command-line tools
ii  cuda-compat-11-4                           11.4.30320414-1                       arm64        cuda-compat-11-4
ii  cuda-compiler-11-4                         11.4.14-1                             arm64        CUDA compiler
ii  cuda-cudart-11-4                           11.4.243-1                            arm64        CUDA Runtime native Libraries
ii  cuda-cudart-dev-11-4                       11.4.243-1                            arm64        CUDA Runtime native dev links, headers
ii  cuda-cuobjdump-11-4                        11.4.239-1                            arm64        CUDA cuobjdump
ii  cuda-cupti-11-4                            11.4.239-1                            arm64        CUDA profiling tools runtime libs.
ii  cuda-cupti-dev-11-4                        11.4.239-1                            arm64        CUDA profiling tools interface.
ii  cuda-cuxxfilt-11-4                         11.4.239-1                            arm64        CUDA cuxxfilt
ii  cuda-documentation-11-4                    11.4.239-1                            arm64        CUDA documentation
ii  cuda-driver-dev-11-4                       11.4.243-1                            arm64        CUDA Driver native dev stub library
ii  cuda-gdb-11-4                              11.4.247-1                            arm64        CUDA-GDB
ii  cuda-libraries-11-4                        11.4.14-1                             arm64        CUDA Libraries 11.4 meta-package
ii  cuda-libraries-dev-11-4                    11.4.14-1                             arm64        CUDA Libraries 11.4 development meta-package
ii  cuda-nvcc-11-4                             11.4.239-1                            arm64        CUDA nvcc
ii  cuda-nvdisasm-11-4                         11.4.239-1                            arm64        CUDA disassembler
ii  cuda-nvml-dev-11-4                         11.4.239-1                            arm64        NVML native dev links, headers
ii  cuda-nvprune-11-4                          11.4.239-1                            arm64        CUDA nvprune
ii  cuda-nvrtc-11-4                            11.4.239-1                            arm64        NVRTC native runtime libraries
ii  cuda-nvrtc-dev-11-4                        11.4.239-1                            arm64        NVRTC native dev links, headers
ii  cuda-nvtx-11-4                             11.4.239-1                            arm64        NVIDIA Tools Extension
ii  cuda-profiler-api-11-4                     11.4.239-1                            arm64        CUDA Profiler API
ii  cuda-runtime-11-4                          11.4.14-1                             arm64        CUDA Runtime 11.4 meta-package
ii  cuda-samples-11-4                          11.4.239-1                            arm64        CUDA example applications
ii  cuda-sanitizer-11-4                        11.4.239-1                            arm64        CUDA Sanitizer
ii  cuda-toolkit-11-4                          11.4.14-1                             arm64        CUDA Toolkit 11.4 meta-package
ii  cuda-toolkit-11-4-config-common            11.4.243-1                            all          Common config package for CUDA Toolkit 11.4.
ii  cuda-toolkit-11-config-common              11.4.243-1                            all          Common config package for CUDA Toolkit 11.
ii  cuda-toolkit-config-common                 11.4.243-1                            all          Common config package for CUDA Toolkit.
ii  cuda-tools-11-4                            11.4.14-1                             arm64        CUDA Tools meta-package
ii  cuda-visual-tools-11-4                     11.4.14-1                             arm64        CUDA visual tools
ii  graphsurgeon-tf                            8.4.1-1+cuda11.4                      arm64        GraphSurgeon for TensorRT package
ii  libcudnn8                                  8.4.1.50-1+cuda11.4                   arm64        cuDNN runtime libraries
ii  libcudnn8-dev                              8.4.1.50-1+cuda11.4                   arm64        cuDNN development libraries and headers
ii  libcudnn8-samples                          8.4.1.50-1+cuda11.4                   arm64        cuDNN samples
ii  libnvinfer-bin                             8.4.1-1+cuda11.4                      arm64        TensorRT binaries
ii  libnvinfer-dev                             8.4.1-1+cuda11.4                      arm64        TensorRT development libraries and headers
ii  libnvinfer-plugin-dev                      8.4.1-1+cuda11.4                      arm64        TensorRT plugin libraries
ii  libnvinfer-plugin8                         8.4.1-1+cuda11.4                      arm64        TensorRT plugin libraries
ii  libnvinfer-samples                         8.4.1-1+cuda11.4                      all          TensorRT samples
ii  libnvinfer8                                8.4.1-1+cuda11.4                      arm64        TensorRT runtime libraries
ii  libnvonnxparsers-dev                       8.4.1-1+cuda11.4                      arm64        TensorRT ONNX libraries
ii  libnvonnxparsers8                          8.4.1-1+cuda11.4                      arm64        TensorRT ONNX libraries
ii  libnvparsers-dev                           8.4.1-1+cuda11.4                      arm64        TensorRT parsers libraries
ii  libnvparsers8                              8.4.1-1+cuda11.4                      arm64        TensorRT parsers libraries
ii  nvidia-cuda                                5.0.2-b231                            arm64        NVIDIA CUDA Meta Package
ii  nvidia-cuda-dev                            5.0.2-b231                            arm64        NVIDIA CUDA dev Meta Package
ii  nvidia-l4t-cuda                            35.1.0-20220825113828                 arm64        NVIDIA CUDA Package
ii  python3-libnvinfer                         8.4.1-1+cuda11.4                      arm64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                     8.4.1-1+cuda11.4                      arm64        Python 3 development package for TensorRT
ii  tensorrt                                   8.4.1.5-1+cuda11.4                    arm64        Meta package for TensorRT
ii  uff-converter-tf                           8.4.1-1+cuda11.4                      arm64        UFF converter for TensorRT package