NVMe Driver not registered with nvidia-fs - GDS NVMe unsupported on Rocky 8.6

Hi Team,

I am trying to enable GDS with NVMe support on a Dell R750XA server.
Below is my OS configuration:

[root@node002 src]# cat /etc/centos-release
Rocky Linux release 8.6 (Green Obsidian)
[root@node002 src]#
[root@node002 src]# uname -r
4.18.0-477.27.1.el8_8.x86_64
[root@node002 src]# dnf list cuda-tools*
cuda-tools-12-2.x86_64                                                                                     12.2.1-1                                                                                        @cuda

[root@node002 src]# dnf list gds*
Installed Packages
gds-tools-12-2.x86_64                                                                                      1.7.1.12-1                                                                                      @cuda

Installed Packages
nvidia-fs.x86_64                                                                                         2.17.3-1                                                                                          @cuda

Check 1: Loaded kernel modules

[root@node002 src]# lsmod | grep nvidia_fs
nvidia_fs             253952  0
nvidia              56508416  3 nvidia_uvm,nvidia_fs,nvidia_modeset

[root@node002 src]# lsmod | grep nvme_core
nvme_core             139264  7 nvme,nvme_fc,nvme_fabrics
t10_pi                 16384  3 nvmet,sd_mod,nvme_core

IOMMU

[root@node002 src]# dmesg | grep -i iommu
[    0.000000] Command line: BOOT_IMAGE=node/vmlinuz.node002 initrd=node/initrd.node002  biosdevname=0 net.ifnames=0 nonm acpi=on nicdelay=0 rd.driver.blacklist=nouveau xdriver=vesa intel_iommu=off console=tty0 ip=192.168.61.92:192.168.61.88:192.168.61.2:255.255.255.0 BOOTIF=01-04-3f-72-dc-06-85
[    0.000000] Kernel command line: BOOT_IMAGE=node/vmlinuz.node002 initrd=node/initrd.node002  biosdevname=0 net.ifnames=0 nonm acpi=on nicdelay=0 rd.driver.blacklist=nouveau xdriver=vesa intel_iommu=off console=tty0 ip=192.168.61.92:192.168.61.88:192.168.61.2:255.255.255.0 BOOTIF=01-04-3f-72-dc-06-85
[    0.000000] DMAR: IOMMU disabled
[    1.551025] iommu: Default domain type: Passthrough

PCIe topology => GPU and NVMe devices on the same PCIe switch

[root@node002 ~]# lspci -tv | egrep -i "nvidia | Sams"
 |           \-02.0-[e3]----00.0  NVIDIA Corporation GA100 [A100 PCIe 80GB]
 |           \-02.0-[ca]----00.0  NVIDIA Corporation GA100 [A100 PCIe 80GB]
 |           \-02.0-[65]----00.0  NVIDIA Corporation GA100 [A100 PCIe 80GB]
 |           +-02.0-[31]----00.0  Samsung Electronics Co Ltd NVMe SSD Controller PM174X
 |           +-03.0-[32]----00.0  Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
 |           \-02.0-[17]----00.0  NVIDIA Corporation GA100 [A100 PCIe 80GB]
[root@node002 ~]#
[root@node002 ~]#
[root@node002 ~]#
[root@node002 ~]# lspci | grep -i sams
31:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM174X
32:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
[root@node002 ~]#
[root@node002 ~]#
[root@node002 ~]#
[root@node002 ~]# lspci -vv -s 32:00.0 | grep 'ACS' -A2
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt+ RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES+ TLP+ FCP+ CmpltTO+ CmpltAbrt+ UnxCmplt- RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
                CEMsk:  RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
--
                ARICap: MFVC- ACS-, Next Function: 0
                ARICtl: MFVC- ACS-, Function Group: 0
        Capabilities: [178 v1] Secondary PCI Express
                LnkCtl3: LnkEquIntrruptEn- PerformEqu-
[root@node002 ~]#
[root@node002 ~]#
[root@node002 ~]# lspci | grep -i nvidia
17:00.0 3D controller: NVIDIA Corporation GA100 [A100 PCIe 80GB] (rev a1)
65:00.0 3D controller: NVIDIA Corporation GA100 [A100 PCIe 80GB] (rev a1)
ca:00.0 3D controller: NVIDIA Corporation GA100 [A100 PCIe 80GB] (rev a1)
e3:00.0 3D controller: NVIDIA Corporation GA100 [A100 PCIe 80GB] (rev a1)
[root@node002 ~]#
[root@node002 ~]# lspci -vv -s 17:00.0 | grep 'ACS' -A2
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt+ RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES+ TLP+ FCP+ CmpltTO+ CmpltAbrt+ UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
                CEMsk:  RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
--
                ARICap: MFVC- ACS-, Next Function: 0
                ARICtl: MFVC- ACS-, Function Group: 0
        Capabilities: [c1c v1] Physical Layer 16.0 GT/s <?>
        Capabilities: [d00 v1] Lane Margining at the Receiver <?>
[root@node002 src]# cat /proc/driver/nvidia-fs/stats
GDS Version: 1.7.2.11
NVFS statistics(ver: 4.0)
NVFS Driver(version: 2.17.5)
Mellanox PeerDirect Supported: False
IO stats: Disabled, peer IO stats: Enabled
Logging level: info

Active Shadow-Buffer (MiB): 0
Active Process: 0
Reads                           : err=0 io_state_err=0
Sparse Reads                    : n=0 io=0 holes=0 pages=0
Writes                          : err=0 io_state_err=0 pg-cache=0 pg-cache-fail=0 pg-cache-eio=0
Mmap                            : n=0 ok=0 err=0 munmap=0
Bar1-map                        : n=0 ok=0 err=0 free=0 callbacks=0 active=0 delay-frees=0
Error                           : cpu-gpu-pages=0 sg-ext=0 dma-map=0 dma-ref=0
Ops                             : Read=0 Write=0 BatchIO=0
[root@node002 src]# cat /proc/driver/nvidia-fs/peer_affinity
GPU P2P DMA distribution based on pci-distance

(last column indicates p2p via root complex)
GPU :0000:ca:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
GPU :0000:65:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
GPU :0000:e3:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
GPU :0000:17:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[root@node002 src]#
[root@node002 src]#
[root@node002 src]# cat /proc/driver/nvidia-fs/peer_distance
gpu             peer            peerrank        p2pdist link    gen     numa    np2p    class
0000:ca:00.0    0000:98:00.1    0x00820070      0x0082  0x10    0x03    0x01    0       network
0000:ca:00.0    0000:98:00.0    0x00820070      0x0082  0x10    0x03    0x01    0       network
0000:ca:00.0    0000:33:00.0    0x01010088      0x0101  0x08    0x03    0x00    0       network
0000:ca:00.0    0000:31:00.0    0x01010090      0x0101  0x04    0x04    0x00    0       nvme
0000:ca:00.0    0000:04:00.1    0x0101009e      0x0101  0x01    0x02    0x00    0       network
0000:ca:00.0    0000:32:00.0    0x01010090      0x0101  0x04    0x04    0x00    0       nvme
0000:ca:00.0    0000:33:00.3    0x01010088      0x0101  0x08    0x03    0x00    0       network
0000:ca:00.0    0000:33:00.1    0x01010088      0x0101  0x08    0x03    0x00    0       network
0000:ca:00.0    0000:04:00.0    0x0101009e      0x0101  0x01    0x02    0x00    0       network
0000:ca:00.0    0000:33:00.2    0x01010088      0x0101  0x08    0x03    0x00    0       network
0000:65:00.0    0000:98:00.1    0x01010070      0x0101  0x10    0x03    0x01    0       network
0000:65:00.0    0000:98:00.0    0x01010070      0x0101  0x10    0x03    0x01    0       network
0000:65:00.0    0000:33:00.0    0x00820088      0x0082  0x08    0x03    0x00    0       network
0000:65:00.0    0000:31:00.0    0x00820090      0x0082  0x04    0x04    0x00    0       nvme
0000:65:00.0    0000:04:00.1    0x0082009e      0x0082  0x01    0x02    0x00    0       network
0000:65:00.0    0000:32:00.0    0x00820090      0x0082  0x04    0x04    0x00    0       nvme
0000:65:00.0    0000:33:00.3    0x00820088      0x0082  0x08    0x03    0x00    0       network
0000:65:00.0    0000:33:00.1    0x00820088      0x0082  0x08    0x03    0x00    0       network
0000:65:00.0    0000:04:00.0    0x0082009e      0x0082  0x01    0x02    0x00    0       network
0000:65:00.0    0000:33:00.2    0x00820088      0x0082  0x08    0x03    0x00    0       network
0000:e3:00.0    0000:98:00.1    0x00820070      0x0082  0x10    0x03    0x01    0       network
0000:e3:00.0    0000:98:00.0    0x00820070      0x0082  0x10    0x03    0x01    0       network
0000:e3:00.0    0000:33:00.0    0x01010088      0x0101  0x08    0x03    0x00    0       network
0000:e3:00.0    0000:31:00.0    0x01010090      0x0101  0x04    0x04    0x00    0       nvme
0000:e3:00.0    0000:04:00.1    0x0101009e      0x0101  0x01    0x02    0x00    0       network
0000:e3:00.0    0000:32:00.0    0x01010090      0x0101  0x04    0x04    0x00    0       nvme
0000:e3:00.0    0000:33:00.3    0x01010088      0x0101  0x08    0x03    0x00    0       network
0000:e3:00.0    0000:33:00.1    0x01010088      0x0101  0x08    0x03    0x00    0       network
0000:e3:00.0    0000:04:00.0    0x0101009e      0x0101  0x01    0x02    0x00    0       network
0000:e3:00.0    0000:33:00.2    0x01010088      0x0101  0x08    0x03    0x00    0       network
0000:17:00.0    0000:98:00.1    0x01010070      0x0101  0x10    0x03    0x01    0       network
0000:17:00.0    0000:98:00.0    0x01010070      0x0101  0x10    0x03    0x01    0       network
0000:17:00.0    0000:33:00.0    0x00820088      0x0082  0x08    0x03    0x00    0       network
0000:17:00.0    0000:31:00.0    0x00820090      0x0082  0x04    0x04    0x00    0       nvme
0000:17:00.0    0000:04:00.1    0x0082009e      0x0082  0x01    0x02    0x00    0       network
0000:17:00.0    0000:32:00.0    0x00820090      0x0082  0x04    0x04    0x00    0       nvme
0000:17:00.0    0000:33:00.3    0x00820088      0x0082  0x08    0x03    0x00    0       network
0000:17:00.0    0000:33:00.1    0x00820088      0x0082  0x08    0x03    0x00    0       network
0000:17:00.0    0000:04:00.0    0x0082009e      0x0082  0x01    0x02    0x00    0       network
0000:17:00.0    0000:33:00.2    0x00820088      0x0082  0x08    0x03    0x00    0       network

OFED info

[root@node002 src]# ofed_info
MLNX_OFED_LINUX-5.8-3.0.7.0 (OFED-5.8-3.0.7):
clusterkit:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-1.0.1/SRPMS/clusterkit-1.8.428-1.58101.src.rpm

dapl:
mlnx_ofed_dapl/dapl-2.1.10.1.mlnx-OFED.4.9.0.1.5.58033.src.rpm

dpcp:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-1.0.1/SRPMS/dpcp-1.1.37-1.58101.src.rpm

dump_pr:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-1.0.1/SRPMS/dump_pr-1.0-5.13.0.MLNX20221016.gac314ef.58101.src.rpm

hcoll:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-1.0.1/SRPMS/hcoll-4.8.3220-1.58101.src.rpm

ibdump:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-1.0.1/SRPMS/ibdump-6.0.0-1.58101.src.rpm

ibsim:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-1.0.1/SRPMS/ibsim-0.10-1.58101.src.rpm

ibutils2:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-1.0.1/SRPMS/ibutils2-2.1.1-0.156.MLNX20221016.g4aceb16.58101.src.rpm

iser:
mlnx_ofed/mlnx-ofa_kernel-4.0.git mlnx_ofed_5_8
commit 65e3aec417045faa2228224b4a9fb74c02742860
isert:
mlnx_ofed/mlnx-ofa_kernel-4.0.git mlnx_ofed_5_8
commit 65e3aec417045faa2228224b4a9fb74c02742860
kernel-mft:
mlnx_ofed_mft/kernel-mft-4.22.1-307.src.rpm

knem:
knem.git mellanox-master
commit a805e8ff50104ac77b20f8a5eb496a71cd7c384c
libvma:
vma/source_rpms/libvma-9.7.2-1.src.rpm

libxlio:
/sw/release/sw_acceleration/xlio/2.0.7/libxlio-2.0.7-1.src.rpm

mlnx-dpdk:
https://github.com/Mellanox/dpdk.org mlnx_dpdk_20.11_last_stable
commit dfeb0f20c5807139a5f250e2ef1d58e9ac0130ce
mlnx-en:
mlnx_ofed/mlnx-ofa_kernel-4.0.git mlnx_ofed_5_8
commit 65e3aec417045faa2228224b4a9fb74c02742860

mlnx-ethtool:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-1.0.1/SRPMS/mlnx-ethtool-5.18-1.58101.src.rpm

mlnx-iproute2:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-1.0.1/SRPMS/mlnx-iproute2-5.19.0-1.58101.src.rpm

mlnx-nfsrdma:
mlnx_ofed/mlnx-ofa_kernel-4.0.git mlnx_ofed_5_8
commit 65e3aec417045faa2228224b4a9fb74c02742860
mlnx-nvme:
mlnx_ofed/mlnx-ofa_kernel-4.0.git mlnx_ofed_5_8
commit 65e3aec417045faa2228224b4a9fb74c02742860
mlnx-ofa_kernel:
mlnx_ofed/mlnx-ofa_kernel-4.0.git mlnx_ofed_5_8
commit 65e3aec417045faa2228224b4a9fb74c02742860

mlnx-tools:
https://github.com/Mellanox/mlnx-tools mlnx_ofed_5_8
commit f7e5694e8371ef0c6a71273ea7755f7023c35517
mlx-steering-dump:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-1.0.1/SRPMS/mlx-steering-dump-1.0.0-0.58101.src.rpm

mpi-selector:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-1.0.1/SRPMS/mpi-selector-1.0.3-1.58101.src.rpm

mpitests:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-1.0.1/SRPMS/mpitests-3.2.20-de56b6b.58101.src.rpm

mstflint:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-1.0.1/SRPMS/mstflint-4.16.1-2.58101.src.rpm

multiperf:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-1.0.1/SRPMS/multiperf-3.0-3.0.58101.src.rpm

ofed-docs:
docs.git mlnx_ofed-4.0
commit 3d1b0afb7bc190ae5f362223043f76b2b45971cc

openmpi:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-1.0.1/SRPMS/openmpi-4.1.5a1-1.58101.src.rpm

opensm:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-1.0.1/SRPMS/opensm-5.13.0.MLNX20221016.10d3954-0.1.58101.src.rpm

openvswitch:
https://gitlab-master.nvidia.com/sdn/ovs mlnx_ofed_5_8_1
commit 0565b8676ac4a40be3a2e07a8ce27a37ac792915
perftest:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-1.0.1/SRPMS/perftest-4.5-0.18.gfcddfe0.58101.src.rpm

rdma-core:
mlnx_ofed/rdma-core.git mlnx_ofed_5_8
commit 6e6f497a3412148b1e05deda456b000865472dff
rshim:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-1.0.1/SRPMS/rshim-2.0.6-18.g955dbef.src.rpm

sharp:
mlnx_ofed_sharp/sharp-3.1.1.MLNX20221122.c93d7550.tar.gz

sockperf:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-1.0.1/SRPMS/sockperf-3.10-0.git5ebd327da983.58101.src.rpm

srp:
mlnx_ofed/mlnx-ofa_kernel-4.0.git mlnx_ofed_5_8
commit 65e3aec417045faa2228224b4a9fb74c02742860
ucx:
mlnx_ofed_ucx/ucx-1.14.0-1.src.rpm

xpmem:
/sw/release/mlnx_ofed/IBHPC/MLNX_OFED_LINUX-5.8-3.0.5/SRPMS/xpmem-2.6.4-1.58305.src.rpm


Installed Packages:
-------------------

librdmacm-utils
dapl-utils
dpcp
ucx-knem
mlnxofed-docs
mlnx-tools
knem-modules
libxpmem
libibverbs-utils
opensm-devel
sharp
openmpi
mlnx-ofa_kernel-source
isert
libibumad
mlnx-ofa_kernel
kernel-mft
opensm
dapl-devel
mstflint
dump_pr
ucx-devel
ucx-rdmacm
hcoll
mlnx-iproute2
xpmem-modules
mlnx-nfsrdma
infiniband-diags
ibacm
mlnx-ofa_kernel-modules
knem
opensm-libs
dapl
perftest
ibutils2
ucx
ucx-ib
ucx-xpmem
mlnx-ethtool
mpitests_openmpi
iser
mlnx-nvme
rdma-core
rdma-core-devel
opensm-static
srp_daemon
ucx-cma
hcoll-cuda
mlnx-ofa_kernel-devel
srp
librdmacm
dapl-devel-static
ibdump
ucx-cuda
rshim
libibverbs
xpmem
mpi-selector
ibsim

gdscheck.py

[root@node002 src]# /usr/local/cuda-12.2/gds/tools/gdscheck.py -p
 GDS release version: 1.7.1.12
 nvidia_fs version:  2.17 libcufile version: 2.12
 Platform: x86_64
 ============
 ENVIRONMENT:
 ============
 =====================
 DRIVER CONFIGURATION:
 =====================
 NVMe               : Unsupported
 NVMeOF             : Unsupported
 SCSI               : Unsupported
 ScaleFlux CSD      : Unsupported
 NVMesh             : Unsupported
 DDN EXAScaler      : Unsupported
 IBM Spectrum Scale : Unsupported
 NFS                : Unsupported
 WekaFS             : Unsupported
 Userspace RDMA     : Unsupported
 --Mellanox PeerDirect : Disabled
 --rdma library        : Not Loaded (libcufile_rdma.so)
 --rdma devices        : Not configured
 --rdma_device_status  : Up: 0 Down: 0
 =====================
 CUFILE CONFIGURATION:
 =====================
 properties.use_compat_mode : false
 properties.force_compat_mode : false
 properties.gds_rdma_write_support : true
 properties.use_poll_mode : false
 properties.poll_mode_max_size_kb : 4
 properties.max_batch_io_size : 128
 properties.max_batch_io_timeout_msecs : 5
 properties.max_direct_io_size_kb : 16384
 properties.max_device_cache_size_kb : 131072
 properties.max_device_pinned_mem_size_kb : 33554432
 properties.posix_pool_slab_size_kb : 4 1024 16384
 properties.posix_pool_slab_count : 128 64 32
 properties.rdma_peer_affinity_policy : RoundRobin
 properties.rdma_dynamic_routing : 0
 fs.generic.posix_unaligned_writes : false
 fs.lustre.posix_gds_min_kb: 0
 fs.weka.rdma_write_support: false
 fs.gpfs.gds_write_support: false
 profile.nvtx : false
 profile.cufile_stats : 0
 miscellaneous.api_check_aggressive : false
 execution.max_io_threads : 4
 execution.max_io_queue_depth : 128
 execution.parallel_io : true
 execution.min_io_threshold_size_kb : 8192
 execution.max_request_parallelism : 4
 properties.force_odirect_mode : false
 properties.prefer_iouring : false
 =========
 GPU INFO:
 =========
 GPU index 0 NVIDIA A100 80GB PCIe bar:1 bar size (MiB):131072 supports GDS, IOMMU State: Disabled
 GPU index 1 NVIDIA A100 80GB PCIe bar:1 bar size (MiB):131072 supports GDS, IOMMU State: Disabled
 GPU index 2 NVIDIA A100 80GB PCIe bar:1 bar size (MiB):131072 supports GDS, IOMMU State: Disabled
 GPU index 3 NVIDIA A100 80GB PCIe bar:1 bar size (MiB):131072 supports GDS, IOMMU State: Disabled
 ==============
 PLATFORM INFO:
 ==============
 IOMMU: disabled
 Platform verification succeeded

nvidia-smi

[root@node002 src]# nvidia-smi
Thu Nov  9 14:57:42 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.86.10              Driver Version: 535.86.10    CUDA Version: 12.2      |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A100 80GB PCIe          Off | 00000000:17:00.0 Off |                    0 |
| N/A   42C    P0              61W / 300W |      4MiB / 81920MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA A100 80GB PCIe          Off | 00000000:65:00.0 Off |                    0 |
| N/A   41C    P0              64W / 300W |      4MiB / 81920MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   2  NVIDIA A100 80GB PCIe          Off | 00000000:CA:00.0 Off |                    0 |
| N/A   54C    P0              78W / 300W |      4MiB / 81920MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   3  NVIDIA A100 80GB PCIe          Off | 00000000:E3:00.0 Off |                  Off |
| N/A   49C    P0              73W / 300W |      4MiB / 81920MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

cufile.log


 02-11-2023 11:42:01:926 [pid=3540949 tid=3540949] NOTICE  cufio-fs:408 dumping volume attributes: DEVNAME:/dev/nvme1n1,ID_FS_TYPE:ext4,ID_FS_USAGE:filesystem,UDEV_PCI_BRIDGE:0000:30:02.0,device/transport:pcie,fsid:b6aba42f597f4a560x,numa_node:0,queue/logical_block_size:4096,wwid:eui.36544d3052b001630025384700000001,
 02-11-2023 11:42:01:926 [pid=3540949 tid=3540949] NOTICE  cufio:1036 cuFileHandleRegister GDS not supported or disabled by config, using cuFile posix read/write with compat mode enabled
 05-11-2023 15:01:56:615 [pid=38031 tid=38031] ERROR  cufio:480 cuInit Failed, error CUDA_ERROR_NO_DEVICE
 05-11-2023 15:01:56:615 [pid=38031 tid=38031] ERROR  cufio:583 cuFile initialization failed
 06-11-2023 18:40:43:90 [pid=59193 tid=59193] NOTICE  cufio-drv:720 running in compatible mode
 06-11-2023 20:12:02:865 [pid=157254 tid=157254] ERROR  cufio-drv:716 nvidia-fs.ko driver not loaded
 06-11-2023 20:19:09:497 [pid=164679 tid=164679] ERROR  cufio-drv:716 nvidia-fs.ko driver not loaded
 07-11-2023 16:30:39:272 [pid=15680 tid=15680] ERROR  cufio-drv:716 nvidia-fs.ko driver not loaded
 07-11-2023 16:58:54:420 [pid=45874 tid=45874] ERROR  cufio-fs:199 NVMe Driver not registered with nvidia-fs!!!
 07-11-2023 16:58:54:421 [pid=45874 tid=45874] ERROR  cufio-fs:199 NVMe Driver not registered with nvidia-fs!!!
 07-11-2023 16:58:54:421 [pid=45874 tid=45874] NOTICE  cufio-fs:441 dumping volume attributes: DEVNAME:/dev/nvme1n1,ID_FS_TYPE:ext4,ID_FS_USAGE:filesystem,UDEV_PCI_BRIDGE:0000:30:03.0,device/transport:pcie,ext4_journal_mode:ordered,fsid:f0578b196d5913c20x,numa_node:0,queue/logical_block_size:4096,wwid:eui.36344830526001490025384500000001,
 07-11-2023 16:58:54:421 [pid=45874 tid=45874] ERROR  cufio:296 cuFileHandleRegister error, file checks failed
 07-11-2023 16:58:54:421 [pid=45874 tid=45874] ERROR  cufio:338 cuFileHandleRegister error: GPUDirect Storage not supported on current file
 07-11-2023 17:00:42:361 [pid=47786 tid=47786] ERROR  cufio-fs:199 NVMe Driver not registered with nvidia-fs!!!
 07-11-2023 17:00:42:361 [pid=47786 tid=47786] ERROR  cufio-fs:199 NVMe Driver not registered with nvidia-fs!!!
 07-11-2023 17:00:42:361 [pid=47786 tid=47786] NOTICE  cufio-fs:441 dumping volume attributes: DEVNAME:/dev/nvme1n1,ID_FS_TYPE:ext4,ID_FS_USAGE:filesystem,UDEV_PCI_BRIDGE:0000:30:03.0,device/transport:pcie,ext4_journal_mode:ordered,fsid:f0578b196d5913c20x,numa_node:0,queue/logical_block_size:4096,wwid:eui.36344830526001490025384500000001,
 07-11-2023 17:00:42:361 [pid=47786 tid=47786] ERROR  cufio:296 cuFileHandleRegister error, file checks failed
 07-11-2023 17:00:42:361 [pid=47786 tid=47786] ERROR  cufio:338 cuFileHandleRegister error: GPUDirect Storage not supported on current file

Please let me know if you need more information.

I also found another topic where people are struggling with the same issue:

cufio-fs:199 NVMe Driver not registered with nvidia-fs!!!

Please check whether the patched nvme driver is loaded correctly.

If your boot drive is NVMe, then the driver will be part of the initramfs.

1. Run update-initramfs -u -k $(uname -r) to update the initramfs with the patched version.
2. Reboot the node and check the output of the gdscheck command again to confirm NVMe is supported.

@kmodukuri

Please check this. I have used dracut for updating the initramfs, since there is no initramfs-tools on Rocky Linux (commands below). Please correct me if I am wrong.
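For reference, this is roughly the dracut sequence I would expect to be equivalent to update-initramfs on Rocky (a sketch assuming the default initramfs path under /boot; the lsinitrd check is only to confirm the nvme module landed in the image):

# rebuild the initramfs for the running kernel
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)
# list the rebuilt image and confirm the nvme module is included (lsinitrd ships with dracut)
lsinitrd /boot/initramfs-$(uname -r).img | grep -i nvme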

[root@node002 ~]# lsmod | grep nvme
nvmet_fc               40960  1 lpfc
nvmet                 114688  1 nvmet_fc
nvme_fc                53248  1 lpfc
nvme_fabrics           24576  1 nvme_fc
nvme                   45056  3
nvme_core             139264  7 nvme,nvme_fc,nvme_fabrics
t10_pi                 16384  3 nvmet,sd_mod,nvme_core

@kmodukuri
modinfo nvme

[root@node002 ~]# modinfo nvme

filename:       /lib/modules/4.18.0-477.27.1.el8_8.x86_64/extra/mlnx-nvme/host/nvme.ko
version:        1.0
license:        GPL
author:         Matthew Wilcox <willy@linux.intel.com>
rhelversion:    8.8
srcversion:     533BB7E5866E52F63B9ACCB
alias:          pci:v*d*sv*sd*bc01sc08i02*
alias:          pci:v0000106Bd00002005sv*sd*bc*sc*i*
alias:          pci:v0000106Bd00002003sv*sd*bc*sc*i*
alias:          pci:v0000106Bd00002001sv*sd*bc*sc*i*
alias:          pci:v00001D0Fd0000CD02sv*sd*bc*sc*i*
alias:          pci:v00001D0Fd0000CD01sv*sd*bc*sc*i*
alias:          pci:v00001D0Fd0000CD00sv*sd*bc*sc*i*
alias:          pci:v00001D0Fd00008061sv*sd*bc*sc*i*
alias:          pci:v00001D0Fd00000065sv*sd*bc*sc*i*
alias:          pci:v00001D0Fd00000061sv*sd*bc*sc*i*
alias:          pci:v00001E49d00000041sv*sd*bc*sc*i*
alias:          pci:v00001CC1d00005350sv*sd*bc*sc*i*
alias:          pci:v00001E4Bd00001202sv*sd*bc*sc*i*
alias:          pci:v00001E4Bd00001002sv*sd*bc*sc*i*
alias:          pci:v00002646d00002263sv*sd*bc*sc*i*
alias:          pci:v00002646d00002262sv*sd*bc*sc*i*
alias:          pci:v00001CC4d00006302sv*sd*bc*sc*i*
alias:          pci:v00001CC4d00006303sv*sd*bc*sc*i*
alias:          pci:v0000144Dd0000A809sv*sd*bc*sc*i*
alias:          pci:v0000144Dd0000A80Bsv*sd*bc*sc*i*
alias:          pci:v00001D97d00002263sv*sd*bc*sc*i*
alias:          pci:v000015B7d00002001sv*sd*bc*sc*i*
alias:          pci:v00001C5Cd0000174Asv*sd*bc*sc*i*
alias:          pci:v00001C5Cd00001504sv*sd*bc*sc*i*
alias:          pci:v00001CC1d00008201sv*sd*bc*sc*i*
alias:          pci:v000010ECd00005762sv*sd*bc*sc*i*
alias:          pci:v00001CC1d000033F8sv*sd*bc*sc*i*
alias:          pci:v00001B4Bd00001092sv*sd*bc*sc*i*
alias:          pci:v00001987d00005016sv*sd*bc*sc*i*
alias:          pci:v00001987d00005012sv*sd*bc*sc*i*
alias:          pci:v0000144Dd0000A822sv*sd*bc*sc*i*
alias:          pci:v0000144Dd0000A821sv*sd*bc*sc*i*
alias:          pci:v00001C5Fd00000540sv*sd*bc*sc*i*
alias:          pci:v00001C58d00000023sv*sd*bc*sc*i*
alias:          pci:v00001C58d00000003sv*sd*bc*sc*i*
alias:          pci:v00001BB1d00000100sv*sd*bc*sc*i*
alias:          pci:v0000126Fd00002263sv*sd*bc*sc*i*
alias:          pci:v00001B36d00000010sv*sd*bc*sc*i*
alias:          pci:v00008086d00005845sv*sd*bc*sc*i*
alias:          pci:v00008086d0000F1A6sv*sd*bc*sc*i*
alias:          pci:v00008086d0000F1A5sv*sd*bc*sc*i*
alias:          pci:v00008086d00000A55sv*sd*bc*sc*i*
alias:          pci:v00008086d00000A54sv*sd*bc*sc*i*
alias:          pci:v00008086d00000A53sv*sd*bc*sc*i*
alias:          pci:v00008086d00000953sv*sd*bc*sc*i*
depends:        nvme-core,mlx_compat
name:           nvme
vermagic:       4.18.0-477.27.1.el8_8.x86_64 SMP mod_unload modversions
parm:           use_threaded_interrupts:int
parm:           use_cmb_sqes:use controller's memory buffer for I/O SQes (bool)
parm:           max_host_mem_size_mb:Maximum Host Memory Buffer (HMB) size per controller (in MiB) (uint)
parm:           sgl_threshold:Use SGLs when average request segment size is larger or equal to this size. Use 0 to disable SGLs. (uint)
parm:           io_queue_depth:set io queue depth, should >= 2 and < 4096
parm:           write_queues:Number of queues to use for writes. If not set, reads and writes will share a queue set.
parm:           poll_queues:Number of queues to use for polled IO.
parm:           noacpi:disable acpi bios quirks (bool)
parm:           num_p2p_queues:number of I/O queues to create for peer-to-peer data transfer per pci function (Default: 0)

@karanveersingh5623 I have verified that the kernel and MOFED version are compatible with GDS.
https://docs.nvidia.com/networking/display/mlnxofedv583070lts/general+support

Can you run this command and share the output?
cat /proc/kallsyms | grep -i nvfs
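Also, in case it is useful, a couple of generic module checks (nothing GDS-specific, just to confirm the nvme.ko on disk matches the one that is actually loaded):

# module file modprobe would load from disk
modinfo -n nvme
# srcversion of the currently loaded module; compare with `modinfo nvme | grep srcversion`
cat /sys/module/nvme/srcversion
# nvidia-fs driver state as seen from procfs
cat /proc/driver/nvidia-fs/stats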

I am facing a similar issue and would love to see a resolution. My environment is different as I am using Ubuntu 22.04 and nvme is not the boot drive. I get cufio-fs:199 NVMe Driver not registered with nvidia-fs!!! when running cufile_sample_001.
cufile.log:

14-11-2023 20:20:25:517 [pid=27511 tid=27511] INFO   0:324 Lib being used for urcup concurrency : libcufile_ck
 14-11-2023 20:20:25:517 [pid=27511 tid=27511] INFO   cufio:601 Loaded successfully  libcufile_ck.so
 14-11-2023 20:20:25:518 [pid=27511 tid=27511] INFO   cufio:601 Loaded successfully  libmount.so
 14-11-2023 20:20:25:518 [pid=27511 tid=27511] INFO   cufio:601 Loaded successfully  libudev.so
 14-11-2023 20:20:25:518 [pid=27511 tid=27511] INFO   cufio:605 Using CKIT static library
 14-11-2023 20:20:25:518 [pid=27511 tid=27511] INFO   0:167 nvidia_fs driver open invoked
 14-11-2023 20:20:25:519 [pid=27511 tid=27511] INFO   cufio-drv:401 GDS release version: 1.5.1.14
 14-11-2023 20:20:25:519 [pid=27511 tid=27511] INFO   cufio-drv:404 nvidia_fs version:  2.17 libcufile version: 2.12
 14-11-2023 20:20:25:519 [pid=27511 tid=27511] INFO   cufio-drv:408 Platform: x86_64
 14-11-2023 20:20:25:519 [pid=27511 tid=27511] INFO   cufio-drv:329 WekaFS: driver support OK
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-drv:528 nvidia_fs driver version check ok
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-drv:329 WekaFS: driver support OK
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-drv:189 ============
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-drv:190 ENVIRONMENT:
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-drv:191 ============
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-drv:193 CUFILE_ENV_PATH_JSON : /home//cufile.json
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-drv:204 =====================
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-drv:205 DRIVER CONFIGURATION:
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-drv:206 =====================
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-drv:208 NVMe               : Unsupported
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-drv:209 NVMeOF             : Unsupported
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-drv:210 SCSI               : Unsupported
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-drv:211 ScaleFlux CSD      : Unsupported
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-drv:212 NVMesh             : Unsupported
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-drv:215 DDN EXAScaler      : Unsupported
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-drv:219 IBM Spectrum Scale : Unsupported
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-drv:223 NFS                : Unsupported
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-drv:226 BeeGFS             : Unsupported
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] DEBUG  cufio-rdma:145 No valid ip addresses specified for RDMA devices. Disabling GDS userspace RDMA access

 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-rdma:1127 WekaFS             : Unsupported
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-rdma:1129 Userspace RDMA     : Unsupported
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-rdma:1137 --Mellanox PeerDirect : Disabled
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-rdma:1145 --rdma library        : Not Loaded (libcufile_rdma.so)
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-rdma:1148 --rdma devices        : Not configured
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-rdma:1151 --rdma_device_status  : Up: 0 Down: 0
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio:877 =====================
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio:878 CUFILE CONFIGURATION:
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio:879 =====================
14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1263 properties.use_compat_mode : false
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1265 properties.force_compat_mode : false
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1267 properties.gds_rdma_write_support : true
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1269 properties.use_poll_mode : false
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1271 properties.poll_mode_max_size_kb : 4
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1273 properties.max_batch_io_size : 128
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1275 properties.max_batch_io_timeout_msecs : 5
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1277 properties.max_direct_io_size_kb : 16384
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1279 properties.max_device_cache_size_kb : 131072
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1281 properties.max_device_pinned_mem_size_kb : 33554432
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1283 properties.posix_pool_slab_size_kb : 4 1024 16384
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1285 properties.posix_pool_slab_count : 128 64 32
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1287 properties.rdma_peer_affinity_policy : RoundRobin
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1289 properties.rdma_dynamic_routing : 0
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1296 fs.generic.posix_unaligned_writes : false
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1299 fs.lustre.posix_gds_min_kb: 0
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1313 fs.beegfs.posix_gds_min_kb: 0
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1328 fs.weka.rdma_write_support: false
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1354 fs.gpfs.gds_write_support: false
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1367 profile.nvtx : false
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1369 profile.cufile_stats : 0
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1371 miscellaneous.api_check_aggressive : false
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1379 execution.max_io_threads : 0
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1380 execution.max_io_queue_depth : 128
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1381 execution.parallel_io : false
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1382 execution.min_io_threshold_size_kb : 8192
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   0:1383 execution.max_request_parallelism : 0
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-plat:790 =========
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-plat:791 GPU INFO:
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-plat:792 =========
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] TRACE  cufio-plat:306 gpu attribute read , cuDeviceGetAttribute GPU_DIRECT_RDMA_SUPPORTED value: 1
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] DEBUG  cufio-plat:379 GPU BDF: 0000:44:00.0
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] DEBUG  cufio-plat:349 Searching IOMMU entries in /sys/bus/pci/devices/0000:44:00.0/iommu
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] DEBUG  cufio-plat:388 cuda GPU device attributes:  gpu :0 model :Tesla P100-PCIE-16GB nvdirect :0 numa:-1 pcibridge: bar :1 barBase :4088808865804 barSize :17179869184 streamMemOps :0 dmaBufCapable:0 GDRBufCapable:1 bdf :0 : 68 : 0 : 0
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-plat:438 GPU index 0 Tesla P100-PCIE-16GB bar:1 bar size (MiB):16384 supports GDS, IOMMU State: Disabled
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-plat:452 Total GPUS supported on this platform 1
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-plat:803 ==============
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-plat:804 PLATFORM INFO:
 14-11-2023 20:20:25:520 [pid=27511 tid=27511] INFO   cufio-plat:805 ==============
 14-11-2023 20:20:25:521 [pid=27511 tid=27511] DEBUG  cufio-udev:147 device pci path string : 0000:44:00.0->0000:40:02.0
 14-11-2023 20:20:25:521 [pid=27511 tid=27511] DEBUG  cufio-plat:674 GPU Dev: 0 numa_node: 1 PCI Group 0000:40:02.0
 14-11-2023 20:20:25:521 [pid=27511 tid=27511] DEBUG  cufio-plat:158 acs enabled bridge 0000:89:08.0
 14-11-2023 20:20:25:521 [pid=27511 tid=27511] DEBUG  cufio-plat:158 acs enabled bridge 0000:89:10.0
 14-11-2023 20:20:25:521 [pid=27511 tid=27511] DEBUG  cufio-plat:158 acs enabled bridge 0000:82:08.0
 14-11-2023 20:20:25:521 [pid=27511 tid=27511] DEBUG  cufio-plat:158 acs enabled bridge 0000:82:10.0
 14-11-2023 20:20:25:521 [pid=27511 tid=27511] INFO   cufio-plat:570 ACS not enabled in GPU paths
 14-11-2023 20:20:25:521 [pid=27511 tid=27511] INFO   cufio-plat:723 cannot open scsi_mod path, skip scsi check
 14-11-2023 20:20:25:521 [pid=27511 tid=27511] INFO   cufio-plat:810 use_mq not detected in scsi configuration.cannot support SCSI disks!
 14-11-2023 20:20:25:521 [pid=27511 tid=27511] INFO   cufio-plat:695 IOMMU: disabled
 14-11-2023 20:20:25:521 [pid=27511 tid=27511] INFO   cufio-plat:846 Platform verification succeeded
.
.
 14-11-2023 20:20:25:535 [pid=27511 tid=27511] TRACE  cufio:470 Threadpool Initialize Obtained pgroup 0x557dcb651a60 from map
 14-11-2023 20:20:25:535 [pid=27511 tid=27511] TRACE  cufio:474 Threadpool Initialize Obtained pgroup : numa 0x557dcb651a60 1
 14-11-2023 20:20:25:536 [pid=27511 tid=27511] TRACE  0:71 numa_num_configured_nodes obtained numNumaNodes :  4
 14-11-2023 20:20:25:536 [pid=27511 tid=27511] TRACE  cufio:29 Threadpool detected mismatch between library call and procfs for numa node num
 14-11-2023 20:20:25:536 [pid=27511 tid=27511] TRACE  cufio:33 Discovered 1 numa nodes on this system
 14-11-2023 20:20:25:536 [pid=27511 tid=27511] TRACE  0:77 setting numa_set_bind_policy preferred policy
 14-11-2023 20:20:25:536 [pid=27511 tid=27511] TRACE  cufio:24 Threapool workqueue 0x557dcb4f8d40 for numa node 1
 14-11-2023 20:20:25:536 [pid=27511 tid=27511] TRACE  cufio:63 create workqueue 0 1 0x557dcb4f8d40
 14-11-2023 20:20:25:536 [pid=27511 tid=27511] TRACE  cufio:38 Started tid:  139628027699200
 14-11-2023 20:20:25:536 [pid=27511 tid=27511] TRACE  cufio:40 Creating a thread pool with 0 threads
 14-11-2023 20:20:25:536 [pid=27511 tid=27516] TRACE  cufio:78 Started Thread: 0x557dcb651860
 14-11-2023 20:20:25:537 [pid=27511 tid=27511] INFO   cufio:943 CUFile initialization complete
 14-11-2023 20:20:25:537 [pid=27511 tid=27511] TRACE  cufio:3252 cuFileDriverOpen success
 14-11-2023 20:20:25:537 [pid=27511 tid=27511] DEBUG  cufio:1452 cuFileHandleRegister invoked
 14-11-2023 20:20:25:540 [pid=27511 tid=27511] DEBUG  cufio-udev:94 sysfs attribute found wwid nvme0n1
 14-11-2023 20:20:25:540 [pid=27511 tid=27511] DEBUG  cufio-udev:94 sysfs attribute found device/transport nvme0n1
 14-11-2023 20:20:25:540 [pid=27511 tid=27511] DEBUG  cufio-udev:94 sysfs attribute found model nvme0n1
 14-11-2023 20:20:25:540 [pid=27511 tid=27511] DEBUG  cufio-udev:299 detected nvme model: SAMSUNG MZQL2960HCJR-00A07               wwid: eui.37323930545002950025384500000001 xport: pcie /sys/devices/pci0000:00/0000:00:03.0/0000:04:00.0/nvme/nvme0/nvme0n1
 14-11-2023 20:20:25:542 [pid=27511 tid=27511] DEBUG  cufio-udev:147 device pci path string : 0000:04:00.0->0000:00:03.0
 14-11-2023 20:20:25:542 [pid=27511 tid=27511] DEBUG  cufio-udev:94 sysfs attribute found integrity/device_is_integrity_capable nvme0n1
 14-11-2023 20:20:25:542 [pid=27511 tid=27511] DEBUG  cufio-fs:284 block device nvme0n1 drive integrity check capability not present. Ok
 14-11-2023 20:20:25:542 [pid=27511 tid=27511] INFO   cufio-fs:357 Block dev: /dev/nvme0n1 numa node: 0 pci bridge: 0000:00:03.0
 14-11-2023 20:20:25:542 [pid=27511 tid=27511] DEBUG  cufio-udev:94 sysfs attribute found device/transport nvme0n1
 14-11-2023 20:20:25:542 [pid=27511 tid=27511] DEBUG  cufio-udev:94 sysfs attribute found wwid nvme0n1
 14-11-2023 20:20:25:542 [pid=27511 tid=27511] DEBUG  cufio-udev:94 sysfs attribute found queue/logical_block_size nvme0n1
 14-11-2023 20:20:25:542 [pid=27511 tid=27511] DEBUG  cufio-fs:706 vol pciGroup : 0000:00:03.0
 14-11-2023 20:20:25:542 [pid=27511 tid=27511] ERROR  cufio-fs:199 NVMe Driver not registered with nvidia-fs!!!
 14-11-2023 20:20:25:542 [pid=27511 tid=27511] DEBUG  cufio-fs:736 added volume attributes for device: dev_no: 259:0
 14-11-2023 20:20:25:542 [pid=27511 tid=27511] ERROR  cufio-fs:199 NVMe Driver not registered with nvidia-fs!!!
 14-11-2023 20:20:25:542 [pid=27511 tid=27511] NOTICE  cufio-fs:419 dumping volume attributes: DEVNAME:/dev/nvme0n1,ID_FS_TYPE:ext4,ID_FS_USAGE:filesystem,UDEV_PCI_BRIDGE:0000:00:03.0,device/transport:pcie,ext4_journal_mode:ordered,fsid:ded15f492ff4fc0d0x,numa_node:0,queue/logical_block_size:4096,wwid:eui.37323930545002950025384500000001,
 14-11-2023 20:20:25:542 [pid=27511 tid=27511] DEBUG  cufio:1129 cuFile DIO status for file descriptor 49 DirectIO not supported
 14-11-2023 20:20:25:543 [pid=27511 tid=27511] ERROR  cufio:1542 cuFileHandleRegister error, file checks failed
 14-11-2023 20:20:25:543 [pid=27511 tid=27511] ERROR  cufio:1584 cuFileHandleRegister error: GPUDirect Storage not supported on current file
 14-11-2023 20:20:25:543 [pid=27511 tid=27511] TRACE  cufio:3286 cuFileDriver closing
 14-11-2023 20:20:25:543 [pid=27511 tid=27511] DEBUG  cufio:1015 cuFile clearing active batch operations
 14-11-2023 20:20:25:543 [pid=27511 tid=27511] DEBUG  cufio:1017 Destroying Batch Pool
 14-11-2023 20:20:25:543 [pid=27511 tid=27511] DEBUG  0:378 Batch Ctx state 1
 14-11-2023 20:20:25:543 [pid=27511 tid=27511] DEBUG  0:378 Batch Ctx state 1
.
.
14-11-2023 20:20:25:545 [pid=27511 tid=27511] DEBUG  0:378 Batch Ctx state 1
 14-11-2023 20:20:25:545 [pid=27511 tid=27511] DEBUG  0:378 Batch Ctx state 1
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio:1020 cuFile clearing buffer hashtable
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] TRACE  0:200 Bounce buffer io is not in-progress
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] TRACE  cufio-px-pool:99 Posix Bounce buffer io is not in-progress
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio:1049 cuFile destroying posix buffer pool
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] TRACE  cufio-px-pool:460 Releasing POSIX pool buffers
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio-px-pool:468 Releasing POSIX pool size: 4096 for GPU: 0
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  0:141 Tearing down pci-info with 1 GPUs
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio-px-pool:46 Tearing down POSIX pool slab for gpu 0 num objects: 128
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio-px-pool:65 Freed POSIX pool slab for gpu 0 num objects: 128
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio-px-pool:468 Releasing POSIX pool size: 1048576 for GPU: 0
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  0:141 Tearing down pci-info with 1 GPUs
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio-px-pool:46 Tearing down POSIX pool slab for gpu 0 num objects: 64
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio-px-pool:65 Freed POSIX pool slab for gpu 0 num objects: 64
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio-px-pool:468 Releasing POSIX pool size: 16777216 for GPU: 0
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  0:141 Tearing down pci-info with 1 GPUs
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio-px-pool:46 Tearing down POSIX pool slab for gpu 0 num objects: 32
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio-px-pool:65 Freed POSIX pool slab for gpu 0 num objects: 32
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] INFO   cufio-px-pool:479 POSIX pool buffer release complete
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio:1054 cuFile clearing file hashtable
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio:1057 cuFile clearing volumeAttributes
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio:1060 cuFile cleanring pciGroupMap
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio:1064 cuFile cleanring pci group number map
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio:1068 cuFile clearing Dynamic Routing info
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio:1071 cuFile clearing pci topology
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio:1074 cuFile clearing all gpu entries
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio:1077 cuFile closing Driver
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  0:175 Tearing down bounce buffers
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  0:141 Tearing down pci-info with 1 GPUs
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  0:103 Tearing down buffers from GPU 0
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  0:110 free buffers 128
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] INFO   0:140 nvidia_fs driver closed
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio:1086 cuFile clearing all hashtables
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] DEBUG  cufio:1089 cuFile shutting threadpool
 14-11-2023 20:20:25:546 [pid=27511 tid=27511] TRACE  cufio:109 killing pollworker 0x557dcb651860
 14-11-2023 20:20:25:547 [pid=27511 tid=27511] TRACE  cufio:45 marking thread exit 139628027699200 thread: 0x557dcb651860
 14-11-2023 20:20:25:547 [pid=27511 tid=27516] TRACE  cufio:80 Thread func exited 0x557dcb651860 tid: 139628027699200
 14-11-2023 20:20:25:547 [pid=27511 tid=27511] TRACE  cufio:48 Thread exited
14-11-2023 20:20:25:547 [pid=27511 tid=27511] TRACE  cufio:116 killing cuFileWaitQueue 0x557dcb641540
 14-11-2023 20:20:25:547 [pid=27511 tid=27511] TRACE  cufio:70 delete workqueue 0x557dcb4f8d40
 14-11-2023 20:20:25:547 [pid=27511 tid=27511] TRACE  cufio:33 Killing workQueue 0x557dcb4f8d40
 14-11-2023 20:20:25:547 [pid=27511 tid=27511] INFO   cufio:1098 cuFile shutdown
 14-11-2023 20:20:25:547 [pid=27511 tid=27511] INFO   cufio:1100 Logger Shutdown

@kmodukuri, please find the output below.

ffffffffc1705000 t nvfs_get_pci_dev_mapping     [nvidia_fs]
ffffffffc1705070 t nvfs_devnode [nvidia_fs]
ffffffffc1705090 t nvfs_unpin_gpu_pages [nvidia_fs]
ffffffffc170c982 t nvfs_unpin_gpu_pages.cold.25 [nvidia_fs]
ffffffffc17051e0 t nvfs_count_ops       [nvidia_fs]
0000000000036388 a nvfs_n_ops   [nvidia_fs]
ffffffffc1705230 t nvfs_open    [nvidia_fs]
ffffffffc1719c28 b nvfs_shutdown        [nvidia_fs]
ffffffffc170c9dd t nvfs_open.cold.26    [nvidia_fs]
ffffffffc17052a0 t nvfs_close   [nvidia_fs]
ffffffffc170ca0b t nvfs_close.cold.27   [nvidia_fs]
ffffffffc1719180 d nvfs_curr_devices    [nvidia_fs]
ffffffffc17053f0 t nvfs_get_pages_free_callback [nvidia_fs]
ffffffffc170ca2d t nvfs_get_pages_free_callback.cold.28 [nvidia_fs]
ffffffffc170cc48 t nvfs_get_p2p_dma_mapping.cold.29     [nvidia_fs]
ffffffffc170cdf5 t nvfs_get_dma.cold.30 [nvidia_fs]
ffffffffc170ce77 t nvfs_io_free.cold.31 [nvidia_fs]
ffffffffc1705d30 t nvfs_io_complete     [nvidia_fs]
ffffffffc170cfa5 t nvfs_io_complete.cold.32     [nvidia_fs]
ffffffffc170cfb8 t nvfs_free_gpu_info.cold.33   [nvidia_fs]
ffffffffc170d01f t nvfs_io_init.cold.34 [nvidia_fs]
ffffffffc170d516 t nvfs_io_start_op.cold.35     [nvidia_fs]
ffffffffc1706ee0 t nvfs_ioctl   [nvidia_fs]
ffffffffc170dbef t nvfs_ioctl.cold.36   [nvidia_fs]
ffffffffc1719c30 b nvfs_class   [nvidia_fs]
ffffffffc170f985 t nvfs_exit    [nvidia_fs]
ffffffffc170e498 t nvfs_get_gpu_sglist_rdma_info.cold.12        [nvidia_fs]
ffffffffc1707960 t nvfs_dma_unmap_sg    [nvidia_fs]
ffffffffc1707a00 t nvfs_dma_map_sg_attrs_internal.constprop.11  [nvidia_fs]
ffffffffc170e68d t nvfs_dma_map_sg_attrs_internal.constprop.11.cold.13  [nvidia_fs]
ffffffffc1707bf0 t nvfs_dma_map_sg_attrs_nvme   [nvidia_fs]
ffffffffc1707c00 t nvfs_dma_map_sg_attrs        [nvidia_fs]
ffffffffc1707c10 t nvfs_blk_rq_map_sg_internal  [nvidia_fs]
ffffffffc170e79a t nvfs_blk_rq_map_sg_internal.cold.14  [nvidia_fs]
ffffffffc1708130 t nvfs_blk_rq_map_sg   [nvidia_fs]
ffffffffc1708140 t nvfs_nvme_blk_rq_map_sg      [nvidia_fs]
ffffffffc170e85a t nvfs_pfn_mkwrite     [nvidia_fs]
ffffffffc170e87a t nvfs_page_mkwrite    [nvidia_fs]
ffffffffc170e89a t nvfs_vma_fault       [nvidia_fs]
ffffffffc170e8ba t nvfs_vma_mremap      [nvidia_fs]
ffffffffc170e8e1 t nvfs_vma_split       [nvidia_fs]
ffffffffc170e90b t nvfs_vma_open        [nvidia_fs]
ffffffffc1708180 t nvfs_mgroup_put_internal     [nvidia_fs]
ffffffffc170e938 t nvfs_mgroup_put_internal.cold.15     [nvidia_fs]
ffffffffc17083c0 t nvfs_vma_close       [nvidia_fs]
ffffffffc170e975 t nvfs_vma_close.cold.16       [nvidia_fs]
ffffffffc1708480 t __nvfs_mgroup_from_page.part.12      [nvidia_fs]
ffffffffc1719c60 b nvfs_io_mgroup_hash  [nvidia_fs]
ffffffffc170ea63 t __nvfs_mgroup_from_page.part.12.cold.17      [nvidia_fs]
ffffffffc170eaaf t nvfs_mgroup_get.cold.18      [nvidia_fs]
ffffffffc170eaf9 t nvfs_get_mgroup_from_vaddr.cold.19   [nvidia_fs]
ffffffffc1710420 r nvfs_mmap_ops        [nvidia_fs]
ffffffffc170ec04 t nvfs_mgroup_mmap.cold.20     [nvidia_fs]
ffffffffc170edc4 t nvfs_mgroup_check_and_set.cold.21    [nvidia_fs]
ffffffffc170ef46 t nvfs_mgroup_pin_shadow_pages.cold.22 [nvidia_fs]
ffffffffc170f131 t nvfs_mgroup_fill_mpages.cold.23      [nvidia_fs]
ffffffffc170f179 t nvfs_mgroup_from_page_range.cold.24  [nvidia_fs]
ffffffffc170f1b9 t nvfs_mgroup_metadata_set_dma_state.cold.25   [nvidia_fs]
ffffffffc170f239 t nvfs_mgroup_from_page.cold.26        [nvidia_fs]
ffffffffc170f26b t nvfs_gpu_index.cold.27       [nvidia_fs]
ffffffffc1709be0 t nvfs_pcie_acs_enabled        [nvidia_fs]
ffffffffc1709c70 t __nvfs_find_all_device_paths.constprop.6     [nvidia_fs]
ffffffffc170f2aa t __nvfs_find_all_device_paths.constprop.6.cold.10     [nvidia_fs]
ffffffffc170f429 t nvfs_create_gpu_hash_entry.cold.11   [nvidia_fs]
ffffffffc170f452 t nvfs_create_peer_hash_entry.cold.12  [nvidia_fs]
ffffffffc170f47b t nvfs_get_gpu_hash_index.cold.13      [nvidia_fs]
ffffffffc170f4a4 t nvfs_get_peer_hash_index.cold.14     [nvidia_fs]
ffffffffc170f4cd t nvfs_fill_gpu2peer_distance_table_once.cold.15       [nvidia_fs]
ffffffffc170f584 t nvfs_get_gpu2peer_distance.cold.16   [nvidia_fs]
ffffffffc170f5ca t nvfs_update_peer_usage.cold.17       [nvidia_fs]
ffffffffc170f5ed t nvfs_aggregate_peer_usage_by_distance.cold.18        [nvidia_fs]
ffffffffc170f602 t nvfs_aggregate_cross_peer_usage.cold.19      [nvidia_fs]
ffffffffc170af30 t nvfs_version_show    [nvidia_fs]
ffffffffc170afe0 t nvfs_modules_open    [nvidia_fs]
ffffffffc170b000 t nvfs_pci_distance_map_info_open      [nvidia_fs]
ffffffffc170b020 t nvfs_peer_affinity_info_open [nvidia_fs]
ffffffffc170b040 t nvfs_bridge_info_open        [nvidia_fs]
ffffffffc170b0a0 t nvfs_bridge_show     [nvidia_fs]
ffffffffc170b060 t nvfs_version_info_open       [nvidia_fs]
ffffffffc170b080 t nvfs_devices_info_open       [nvidia_fs]
ffffffffc170b110 t nvfs_devices_show    [nvidia_fs]
ffffffffc1710720 r nvfs_devices_ops     [nvidia_fs]
ffffffffc1710960 r nvfs_version_ops     [nvidia_fs]
ffffffffc1710840 r nvfs_bridge_ops      [nvidia_fs]
ffffffffc1710600 r nvfs_peer_affinity_ops       [nvidia_fs]
ffffffffc17104e0 r nvfs_pci_distance_map_ops    [nvidia_fs]
ffffffffc170f617 t nvfs_proc_init.cold.1        [nvidia_fs]
ffffffffc170f62d t nvfs_proc_cleanup.cold.2     [nvidia_fs]
ffffffffc170b4c0 t nvfs_stats_open      [nvidia_fs]
ffffffffc170b4e0 t nvfs_stats_show      [nvidia_fs]
ffffffffc173e260 b nvfs_gpu_stat_hash   [nvidia_fs]
ffffffffc170bb00 t nvfs_stats_clear     [nvidia_fs]
ffffffffc170f68e t nvfs_update_alloc_gpustat.cold.7     [nvidia_fs]
ffffffffc170f6a9 t nvfs_set_rdma_reg_info_to_mgroup.cold.0      [nvidia_fs]
ffffffffc170f76a t nvfs_get_rdma_reg_info_from_mgroup.cold.1    [nvidia_fs]
ffffffffc170f812 t nvfs_clear_rdma_reg_info_in_mgroup.cold.2    [nvidia_fs]
ffffffffc170f857 t nvfs_io_batch_init.cold.2    [nvidia_fs]
ffffffffc170f917 t nvfs_io_batch_submit.cold.3  [nvidia_fs]
ffffffffc17086b0 t nvfs_mgroup_put      [nvidia_fs]
ffffffffc173e158 b nvfs_n_write_bytes   [nvidia_fs]
ffffffffc1709af0 t nvfs_check_gpu_page_and_error        [nvidia_fs]
ffffffffc17076f0 t nvfs_get_gpu_sglist_rdma_info        [nvidia_fs]
ffffffffc173e080 b nvfs_n_pg_cache_eio  [nvidia_fs]
ffffffffc173e118 b nvfs_n_mmap_err      [nvidia_fs]
ffffffffc1708cb0 t nvfs_mgroup_init     [nvidia_fs]
ffffffffc173e190 b nvfs_n_reads_sparse_files    [nvidia_fs]
ffffffffc173e1a8 b nvfs_n_batches       [nvidia_fs]
ffffffffc170b460 t nvfs_check_access    [nvidia_fs]
ffffffffc173e1b0 b nvfs_batch_submit_avg_latency        [nvidia_fs]
ffffffffc173e100 b nvfs_n_maps_ok       [nvidia_fs]
ffffffffc17194c0 d nvfs_sfxv_dma_rw_ops [nvidia_fs]
ffffffffc170a3e0 t nvfs_get_next_acs_device     [nvidia_fs]
ffffffffc1719c00 b nvfs_peer_stats_enabled      [nvidia_fs]
ffffffffc170bd30 t nvfs_update_free_gpustat     [nvidia_fs]
ffffffffc1719060 d nvfs_dev_fops        [nvidia_fs]
ffffffffc173e108 b nvfs_n_maps  [nvidia_fs]
ffffffffc173e0c0 b nvfs_n_op_batches    [nvidia_fs]
ffffffffc170a0e0 t nvfs_create_gpu_hash_entry   [nvidia_fs]
ffffffffc170b490 t nvfs_extend_sg_markers       [nvidia_fs]
ffffffffc1708cf0 t nvfs_mgroup_check_and_set    [nvidia_fs]
ffffffffc17092d0 t nvfs_mgroup_pin_shadow_pages [nvidia_fs]
ffffffffc170a3c0 t nvfs_lookup_peer_hash_index_entry    [nvidia_fs]
ffffffffc173e148 b nvfs_write_bytes_per_sec     [nvidia_fs]
ffffffffc170ac70 t nvfs_peer_distance_show      [nvidia_fs]
ffffffffc170c340 t nvfs_update_write_throughput [nvidia_fs]
ffffffffc173e0cc b nvfs_n_op_writes     [nvidia_fs]
ffffffffc1705340 t nvfs_get_device_count        [nvidia_fs]
ffffffffc170bf20 t nvfs_update_read_throughput  [nvidia_fs]
ffffffffc1709930 t nvfs_mgroup_metadata_set_dma_state   [nvidia_fs]
ffffffffc173e1c8 b nvfs_read_latency_per_sec    [nvidia_fs]
ffffffffc170ac30 t nvfs_reset_peer_affinity_stats       [nvidia_fs]
ffffffffc1705eb0 t nvfs_rw_verify_area  [nvidia_fs]
ffffffffc173e110 b nvfs_n_munmap        [nvidia_fs]
ffffffffc17104d0 r nvfs_pcie_link_speed_table   [nvidia_fs]
ffffffffc173e1d0 b nvfs_read_ops_per_sec        [nvidia_fs]
ffffffffc173e140 b nvfs_write_ops_per_sec       [nvidia_fs]
ffffffffc173e168 b nvfs_n_writes_ok     [nvidia_fs]
ffffffffc173e1f8 b nvfs_n_reads_ok      [nvidia_fs]
ffffffffc17055d0 t nvfs_get_p2p_dma_mapping     [nvidia_fs]
ffffffffc173e180 b nvfs_n_reads_sparse_region   [nvidia_fs]
ffffffffc17086c0 t nvfs_mgroup_put_dma  [nvidia_fs]
ffffffffc170af60 t nvfs_modules_show    [nvidia_fs]
ffffffffc173e0c4 b nvfs_n_op_process    [nvidia_fs]
ffffffffc170aa90 t nvfs_update_peer_usage       [nvidia_fs]
ffffffffc173e120 b nvfs_n_mmap_ok       [nvidia_fs]
ffffffffc170c690 t nvfs_clear_rdma_reg_info_in_mgroup   [nvidia_fs]
ffffffffc173e198 b nvfs_n_batch_err     [nvidia_fs]
ffffffffc173e138 b nvfs_write_latency_per_sec   [nvidia_fs]
ffffffffc1705b80 t nvfs_io_free [nvidia_fs]
ffffffffc1719188 d nvfs_info_enabled    [nvidia_fs]
ffffffffc1706060 t nvfs_io_init [nvidia_fs]
ffffffffc1708170 t nvfs_blk_unregister_dma_ops  [nvidia_fs]
ffffffffc170c5e0 t nvfs_get_rdma_reg_info_from_mgroup   [nvidia_fs]
ffffffffc1709500 t nvfs_mgroup_fill_mpages      [nvidia_fs]
ffffffffc173e1c4 b nvfs_avg_read_latency        [nvidia_fs]
ffffffffc170a2f0 t nvfs_lookup_gpu_hash_index_entry     [nvidia_fs]
ffffffffc173e1d8 b nvfs_read_bytes_per_sec      [nvidia_fs]
ffffffffc170a420 t nvfs_fill_gpu2peer_distance_table_once       [nvidia_fs]
ffffffffc173e088 b nvfs_n_pg_cache      [nvidia_fs]
ffffffffc170b140 t nvfs_proc_init       [nvidia_fs]
ffffffffc173e0d8 b nvfs_n_active_shadow_buf_sz  [nvidia_fs]
ffffffffc170ae30 t nvfs_peer_affinity_show      [nvidia_fs]
ffffffffc173e1e8 b nvfs_n_read_bytes    [nvidia_fs]
ffffffffc1709b70 t nvfs_gpu_index       [nvidia_fs]
ffffffffc170c4f0 t nvfs_set_rdma_reg_info_to_mgroup     [nvidia_fs]
ffffffffc173e0b0 b nvfs_n_err_sg_err    [nvidia_fs]
ffffffffc1710be0 r nvfs_stats_fops      [nvidia_fs]
ffffffffc170bdd0 t nvfs_update_alloc_gpustat    [nvidia_fs]
ffffffffc1705b00 t nvfs_io_map_sparse_data      [nvidia_fs]
ffffffffc1719184 d nvfs_max_devices     [nvidia_fs]
ffffffffc1709770 t nvfs_mgroup_from_page_range  [nvidia_fs]
ffffffffc170c120 t nvfs_update_batch_latency    [nvidia_fs]
ffffffffc173e084 b nvfs_n_pg_cache_fail [nvidia_fs]
ffffffffc173e0e0 b nvfs_n_delayed_frees [nvidia_fs]
ffffffffc1709a20 t nvfs_mgroup_from_page        [nvidia_fs]
ffffffffc170c010 t nvfs_update_read_latency     [nvidia_fs]
ffffffffc170b270 t nvfs_proc_cleanup    [nvidia_fs]
ffffffffc17191a0 d nvfs_module_mutex    [nvidia_fs]
ffffffffc1709aa0 t nvfs_is_gpu_page     [nvidia_fs]
ffffffffc17058e0 t nvfs_get_dma [nvidia_fs]
ffffffffc170c230 t nvfs_update_write_latency    [nvidia_fs]
ffffffffc173e0f0 b nvfs_n_free  [nvidia_fs]
ffffffffc1708620 t nvfs_mgroup_put_ref  [nvidia_fs]
ffffffffc1709730 t nvfs_mgroup_get_gpu_physical_address [nvidia_fs]
ffffffffc1708610 t nvfs_mgroup_get_ref  [nvidia_fs]
ffffffffc173e128 b nvfs_n_mmap  [nvidia_fs]
ffffffffc17064e0 t nvfs_io_start_op     [nvidia_fs]
ffffffffc173e0ac b nvfs_n_err_dma_map   [nvidia_fs]
ffffffffc170c710 t nvfs_io_batch_init   [nvidia_fs]
ffffffffc1719540 d nvfs_dev_dma_rw_ops  [nvidia_fs]
ffffffffc173e0d0 b nvfs_n_op_reads      [nvidia_fs]
ffffffffc173e1f4 b nvfs_n_read_err      [nvidia_fs]
ffffffffc173e200 b nvfs_n_reads [nvidia_fs]
ffffffffc1719c04 b nvfs_rw_stats_enabled        [nvidia_fs]
ffffffffc1719500 d nvfs_nvme_dma_rw_ops [nvidia_fs]
ffffffffc17086d0 t nvfs_get_mgroup_from_vaddr   [nvidia_fs]
ffffffffc170a190 t nvfs_create_peer_hash_entry  [nvidia_fs]
ffffffffc1719440 d nvfs_ibm_scale_rdma_ops      [nvidia_fs]
ffffffffc170c470 t nvfs_stat_destroy    [nvidia_fs]
ffffffffc1709700 t nvfs_mgroup_get_gpu_index_and_off    [nvidia_fs]
ffffffffc170a240 t nvfs_get_gpu_hash_index      [nvidia_fs]
ffffffffc173e170 b nvfs_n_writes        [nvidia_fs]
ffffffffc1710a80 r nvfs_module_ops      [nvidia_fs]
ffffffffc173e0f8 b nvfs_n_map_err       [nvidia_fs]
ffffffffc173e0c8 b nvfs_n_op_maps       [nvidia_fs]
ffffffffc173e164 b nvfs_n_write_err     [nvidia_fs]
ffffffffc173e1b8 b nvfs_batch_submit_latency_per_sec    [nvidia_fs]
ffffffffc173e1e0 b nvfs_read_throughput [nvidia_fs]
ffffffffc173e150 b nvfs_write_throughput        [nvidia_fs]
ffffffffc173e0a8 b nvfs_n_err_dma_ref   [nvidia_fs]
ffffffffc170a980 t nvfs_get_gpu2peer_distance   [nvidia_fs]
ffffffffc173e160 b nvfs_n_write_iostate_err     [nvidia_fs]
ffffffffc173e188 b nvfs_n_reads_sparse_io       [nvidia_fs]
ffffffffc173e0b4 b nvfs_n_err_mix_cpu_gpu       [nvidia_fs]
ffffffffc17104c0 r nvfs_pcie_link_width_table   [nvidia_fs]
ffffffffc1708630 t nvfs_mgroup_get      [nvidia_fs]
ffffffffc1708850 t nvfs_mgroup_unpin_shadow_pages       [nvidia_fs]
ffffffffc1708860 t nvfs_mgroup_mmap     [nvidia_fs]
ffffffffc173e1a0 b nvfs_n_batches_ok    [nvidia_fs]
ffffffffc1719c08 b nvfs_dbg_enabled     [nvidia_fs]
ffffffffc170c8f0 t nvfs_io_batch_submit [nvidia_fs]
ffffffffc170a310 t nvfs_get_peer_hash_index     [nvidia_fs]
ffffffffc1719480 d nvfs_nvmesh_dma_rw_ops       [nvidia_fs]
ffffffffc1705350 t nvfs_io_terminate_requested  [nvidia_fs]
ffffffffc173e178 b nvfs_n_reads_sparse_pages    [nvidia_fs]
ffffffffc173e130 b nvfs_avg_write_latency       [nvidia_fs]
ffffffffc170c430 t nvfs_stat_init       [nvidia_fs]
ffffffffc1709bd0 t nvfs_device_priority [nvidia_fs]
ffffffffc173e1f0 b nvfs_n_read_iostate_err      [nvidia_fs]
ffffffffc1708160 t nvfs_blk_register_dma_ops    [nvidia_fs]
ffffffffc170abc0 t nvfs_aggregate_cross_peer_usage      [nvidia_fs]
ffffffffc173e1c0 b nvfs_batch_ops_per_sec       [nvidia_fs]
ffffffffc1705b60 t nvfs_io_unmap_sparse_data    [nvidia_fs]
ffffffffc170ab60 t nvfs_aggregate_peer_usage_by_distance        [nvidia_fs]
ffffffffc1705f30 t nvfs_free_gpu_info   [nvidia_fs]
ffffffffc173e0e8 b nvfs_n_callbacks     [nvidia_fs]

What kernel, cuda-tools, NVIDIA driver, and OFED driver versions are you using?
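
A hedged set of commands to collect those versions, reusing commands already shown in this thread (adjust the package queries for your distro):

uname -r
cat /proc/driver/nvidia/version                      # NVIDIA kernel driver build
nvcc --version                                       # CUDA toolkit
ofed_info -s                                         # MOFED
dpkg -l | grep -E 'cuda-tools|gds-tools|nvidia-fs'   # or: dnf list cuda-tools* gds* nvidia-fs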

@kmodukuri, I also tried with another Dell server using the latest Ubuntu; some CUDA errors are coming up.

root@hpc:~# echo $LD_LIBRARY_PATH
/usr/local/cuda-12.3/lib64
root@hpc:~# echo $PATH
/usr/local/cuda-12.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
root@hpc:~# find / -name gdscheck.py
/usr/local/cuda-12.3/gds/tools/gdscheck.py
root@hpc:~# /usr/local/cuda-12.3/gds/tools/gdscheck.py -p
 cuInit Failed, error CUDA_ERROR_UNKNOWN
 cuFile initialization failed
 Platform verification error :
CUDA Driver API error

root@hpc:~# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Sep__8_19:17:24_PDT_2023
Cuda compilation tools, release 12.3, V12.3.52
Build cuda_12.3.r12.3/compiler.33281558_0
root@hpc:~#
root@hpc:~#
root@hpc:~#
root@hpc:~# uname -r
5.15.0-78-generic
root@hpc:~#
root@hpc:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.3 LTS
Release:        22.04
Codename:       jammy

I installed the CUDA toolkit using the run file, and everything went fine:

root@hpc:~# ./cuda_12.3.0_545.23.06_linux.run
===========
= Summary =
===========

Driver:   Installed
Toolkit:  Installed in /usr/local/cuda-12.3/

Please make sure that
 -   PATH includes /usr/local/cuda-12.3/bin
 -   LD_LIBRARY_PATH includes /usr/local/cuda-12.3/lib64, or, add /usr/local/cuda-12.3/lib64 to /etc/ld.so.conf and run ldconfig as root

To uninstall the CUDA Toolkit, run cuda-uninstaller in /usr/local/cuda-12.3/bin
To uninstall the NVIDIA Driver, run nvidia-uninstall
Logfile is /var/log/cuda-installer.log
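
A hedged sketch of persisting the paths the installer summary asks for (the ld.so.conf.d file name below is just one common choice, not something the installer creates):

echo 'export PATH=/usr/local/cuda-12.3/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-12.3/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
# or, system-wide for the runtime linker:
echo /usr/local/cuda-12.3/lib64 | sudo tee /etc/ld.so.conf.d/cuda-12-3.conf
sudo ldconfig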

@kmodukuri, some more logs:

root@hpc:~# cat cufile.log
 15-11-2023 07:50:34:52 [pid=89337 tid=89337] ERROR  cufio_core:919 cuInit Failed, error CUDA_ERROR_UNKNOWN
 15-11-2023 07:50:34:52 [pid=89337 tid=89337] ERROR  cufio_core:1038 cuFile initialization failed
 15-11-2023 07:59:55:319 [pid=2866 tid=2866] ERROR  cufio_core:919 cuInit Failed, error CUDA_ERROR_UNKNOWN
 15-11-2023 07:59:55:319 [pid=2866 tid=2866] ERROR  cufio_core:1038 cuFile initialization failed
 15-11-2023 08:03:23:878 [pid=2928 tid=2928] ERROR  cufio_core:919 cuInit Failed, error CUDA_ERROR_UNKNOWN
 15-11-2023 08:03:23:878 [pid=2928 tid=2928] ERROR  cufio_core:1038 cuFile initialization failed
 15-11-2023 08:18:00:833 [pid=2835 tid=2835] ERROR  cufio_core:919 cuInit Failed, error CUDA_ERROR_UNKNOWN
 15-11-2023 08:18:00:833 [pid=2835 tid=2835] ERROR  cufio_core:1038 cuFile initialization failed
 15-11-2023 08:25:42:633 [pid=2849 tid=2849] ERROR  cufio_core:919 cuInit Failed, error CUDA_ERROR_UNKNOWN
 15-11-2023 08:25:42:633 [pid=2849 tid=2849] ERROR  cufio_core:1038 cuFile initialization failed
root@hpc:~# dmesg | grep -i NVRM
[    8.417415] NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
               NVRM: BAR0 is 0M @ 0x0 (PCI:0000:65:00.0)
[    8.426806] NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
               NVRM: BAR1 is 0M @ 0x0 (PCI:0000:ca:00.0)
[    8.428346] NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
               NVRM: BAR2 is 0M @ 0x0 (PCI:0000:ca:00.0)
[    8.430069] NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
               NVRM: BAR3 is 0M @ 0x0 (PCI:0000:ca:00.0)
[    8.431517] NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
               NVRM: BAR4 is 0M @ 0x0 (PCI:0000:ca:00.0)
[    8.433022] NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
               NVRM: BAR5 is 0M @ 0x0 (PCI:0000:ca:00.0)
[    8.477756] NVRM: The NVIDIA probe routine failed for 1 device(s).
[    8.479076] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  545.23.06  Sun Oct 15 17:43:11 UTC 2023
[  326.308327] NVRM: GPU 0000:ca:00.0: RmInitAdapter failed! (0x22:0x40:762)
[  326.309482] NVRM: GPU 0000:ca:00.0: rm_init_adapter failed, device minor number 0
[  326.317164] NVRM: GPU 0000:ca:00.0: RmInitAdapter failed! (0x22:0x40:762)
[  326.318175] NVRM: GPU 0000:ca:00.0: rm_init_adapter failed, device minor number 0

lspci is showing the devices:

root@hpc:~# lspci | grep -i nvidia
65:00.0 3D controller: NVIDIA Corporation GA100 [A100 PCIe 80GB] (rev a1)
ca:00.0 3D controller: NVIDIA Corporation GA100 [A100 PCIe 80GB] (rev a1)
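
For cross-checking the NVRM BAR messages above, a hedged sketch (substitute your own bus addresses):

# Show what address space, if any, the kernel assigned to each GPU BAR.
lspci -vv -s 65:00.0 | grep -i 'region'
lspci -vv -s ca:00.0 | grep -i 'region'
# Regions reported as unassigned or with size 0 correspond to the
# "BARx is 0M @ 0x0" NVRM errors above.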

@karanveersingh5623 From the output, it looks like there are no nvfs symbols in the nvme.ko that is inserted.

Possible scenarios:

The MOFED install was performed with the --with-nvmf option, but there were errors and the module was not patched. The MOFED install log should indicate errors or an option mismatch.

The kernel was upgraded after the install and nvme.ko was not re-patched by DKMS on the kernel update.

The module loaded at boot time is different from the patched version because the patched kernel module is not in the initramfs.

Look for all the nvme.ko files under /lib/modules for the relevant kernel and see which nvme.ko has symbols related to "nvfs_"; see the sketch below.
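
A hedged sketch of that check (module paths, compression suffixes, and initramfs tooling vary by distro, so treat this as a starting point, not an exact recipe):

# Count nvfs_ references in every nvme.ko built for the running kernel;
# a module patched by the MOFED --with-nvmf build should show non-zero hits.
KVER=$(uname -r)
find "/lib/modules/$KVER" -name 'nvme.ko*' | while read -r mod; do
    case "$mod" in
        *.xz)  hits=$(xz   -dc "$mod" | strings | grep -c nvfs_) ;;
        *.zst) hits=$(zstd -dc "$mod" | strings | grep -c nvfs_) ;;
        *)     hits=$(strings "$mod" | grep -c nvfs_) ;;
    esac
    echo "$mod : $hits nvfs_ references"
done

# Which nvme.ko would modprobe actually load?
modinfo -n nvme

# Were the out-of-tree modules rebuilt for this kernel?
dkms status

# Is a (possibly unpatched) nvme.ko also baked into the initramfs?
lsinitramfs /boot/initrd.img-$KVER | grep nvme.ko       # Ubuntu
# lsinitrd /boot/initramfs-$KVER.img | grep nvme.ko     # RHEL/Rocky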

The Dell server issue seems to be a problem with the NVIDIA driver not being able to initialize the device properly.

Here are my environment settings:

~$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.3 LTS"

~$ uname -r
5.15.0-88-generic

~$ ofed_info -s
MLNX_OFED_LINUX-5.8-1.1.2.1:

~$ dpkg -l | grep cuda-tools
ii  cuda-tools-12-0                                   12.0.1-1                                amd64        CUDA Tools meta-package

~$ dpkg -l | grep gds
ii  gds-tools-12-0                                    1.5.1.14-1                              amd64        Tools for GPU Direct Storage

~$ dpkg -l | grep nvidia-fs
ii  nvidia-fs                                         2.18.3-1                                amd64        NVIDIA filesystem for GPUDirect Storage
ii  nvidia-fs-dkms                                    2.18.3-1                                amd64        NVIDIA filesystem DKMS package

~$ lsmod | grep -E "nvme|nvidia"
nvme_rdma              40960  0
rdma_cm               122880  1 nvme_rdma
nvmet                 131072  0
nvidia_fs             262144  0
nvme_fabrics           24576  1 nvme_rdma
nvidia_uvm           1515520  2
ib_core               393216  7 rdma_cm,ib_ipoib,nvme_rdma,iw_cm,ib_uverbs,mlx5_ib,ib_cm
nvidia_drm             94208  0
nvidia_modeset       1327104  1 nvidia_drm
nvidia              56172544  13 nvidia_uvm,nvidia_modeset
drm_kms_helper        311296  4 mgag200,nvidia_drm
drm                   622592  5 drm_kms_helper,nvidia,mgag200,nvidia_drm
nvme                   49152  1
nvme_core             135168  5 nvmet,nvme,nvme_rdma,nvme_fabrics

:~$ sudo dmesg | grep nvidia
[   58.995682] nvidia: loading out-of-tree module taints kernel.
[   58.995739] nvidia: module license 'NVIDIA' taints kernel.
[   59.039595] nvidia: module verification failed: signature and/or required key missing - tainting kernel
[   59.059575] nvidia-nvlink: Nvlink Core is being initialized, major device number 234
               NVRM:  visit http://www.nvidia.com/object/unix.html for more
               NVRM:  visit http://www.nvidia.com/object/unix.html for more
               NVRM:  visit http://www.nvidia.com/object/unix.html for more
               NVRM:  visit http://www.nvidia.com/object/unix.html for more
[   59.076167] nvidia 0000:44:00.0: enabling device (0140 -> 0142)
[   59.196898] nvidia: probe of 0000:8a:00.0 failed with error -1
[   59.200882] nvidia: probe of 0000:8b:00.0 failed with error -1
[   59.204414] nvidia: probe of 0000:83:00.0 failed with error -1
[   59.207379] nvidia: probe of 0000:84:00.0 failed with error -1
[   59.236215] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms  545.23.06  Sun Oct 15 17:22:43 UTC 2023
[   59.242273] [drm] [nvidia-drm] [GPU ID 0x00004400] Loading driver
[   59.243407] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:44:00.0 on minor 1
[   64.914178] nvidia_fs: module using GPL-only symbols uses symbols from proprietary module nvidia.
[   64.920668] nvidia_fs: Unknown symbol nvidia_p2p_dma_unmap_pages (err -2)
[   64.920801] nvidia_fs: module using GPL-only symbols uses symbols from proprietary module nvidia.
[   64.927270] nvidia_fs: Unknown symbol nvidia_p2p_get_pages (err -2)
[   64.927341] nvidia_fs: module using GPL-only symbols uses symbols from proprietary module nvidia.
[   64.927343] nvidia_fs: Unknown symbol nvidia_p2p_put_pages (err -2)
[   64.927406] nvidia_fs: module using GPL-only symbols uses symbols from proprietary module nvidia.
[   64.940633] nvidia_fs: Unknown symbol nvidia_p2p_dma_map_pages (err -2)
[   64.940705] nvidia_fs: module using GPL-only symbols uses symbols from proprietary module nvidia.
[   64.947569] nvidia_fs: Unknown symbol nvidia_p2p_free_dma_mapping (err -2)
[   64.947578] nvidia_fs: module using GPL-only symbols uses symbols from proprietary module nvidia.
[   64.947580] nvidia_fs: Unknown symbol nvidia_p2p_free_page_table (err -2)
[   66.250758] nvidia_uvm: module uses symbols from proprietary module nvidia, inheriting taint.
[   66.262923] nvidia-uvm: Loaded the UVM driver, major device number 509.
[   68.087794] audit: type=1400 audit(1699990095.958:5): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe" pid=2170 comm="apparmor_parser"
[   68.087806] audit: type=1400 audit(1699990095.958:6): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe//kmod" pid=2170 comm="apparmor_parser"
[  832.310226] nvidia_fs: Initializing nvfs driver module
[  832.310240] nvidia_fs: registered correctly with major number 505

@faraz.ahmed, my second setup is almost the same as yours; the only difference is that I didn't use the apt repos for installing the CUDA toolkit. I downloaded the .run file and then installed the CUDA toolkit and drivers. Please check the output below and in the posts above for my installation and configuration. I am not able to run gdscheck.py because it shows CUDA errors.
Can you help figure out what the issue could be? Then we can fix the NVMe-supported issue with @kmodukuri.

root@hpc:~# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.3 LTS"
root@hpc:~#
root@hpc:~#
root@hpc:~# uname -r
5.15.0-78-generic
root@hpc:~#
root@hpc:~# ofed_info -s
MLNX_OFED_LINUX-5.8-3.0.7.0:
root@hpc:~#
root@hpc:~# dpkg -l | grep cuda-tools
root@hpc:~#
root@hpc:~# dpkg -l | grep gds
root@hpc:~#
root@hpc:~#
root@hpc:~# dpkg -l | grep nvidia-fs
rc  nvidia-fs-dkms                        2.18.3-1                                amd64        NVIDIA filesystem DKMS package
root@hpc:~#
root@hpc:~# lsmod | grep -E "nvme|nvidia"
nvidia_uvm           1519616  0
nvidia_drm             94208  0
nvidia_modeset       1327104  1 nvidia_drm
nvidia              56172544  2 nvidia_uvm,nvidia_modeset
drm_kms_helper        311296  5 mgag200,nvidia_drm,nouveau
nvme                   53248  0
nvme_core             135168  4 nvme
mlx_compat             69632  13 rdma_cm,ib_ipoib,mlxdevm,nvme,iw_cm,nvme_core,ib_umad,ib_core,rdma_ucm,ib_uverbs,mlx5_ib,ib_cm,mlx5_core
drm                   622592  8 drm_kms_helper,nvidia,mgag200,drm_ttm_helper,nvidia_drm,ttm,nouveau
root@hpc:~#
root@hpc:~#
root@hpc:~# dmesg | grep nvidia
[    8.375378] nvidia: module license 'NVIDIA' taints kernel.
[    8.413646] nvidia-nvlink: Nvlink Core is being initialized, major device number 510
[    8.425954] nvidia: probe of 0000:65:00.0 failed with error -1
[    8.525041] nvidia_fs: module using GPL-only symbols uses symbols from proprietary module nvidia.
[    8.526011] nvidia_fs: Unknown symbol nvidia_p2p_dma_unmap_pages (err -2)
[    8.527487] nvidia_fs: module using GPL-only symbols uses symbols from proprietary module nvidia.
[    8.527489] nvidia_fs: Unknown symbol nvidia_p2p_get_pages (err -2)
[    8.527500] nvidia_fs: module using GPL-only symbols uses symbols from proprietary module nvidia.
[    8.527501] nvidia_fs: Unknown symbol nvidia_p2p_put_pages (err -2)
[    8.527511] nvidia_fs: module using GPL-only symbols uses symbols from proprietary module nvidia.
[    8.527512] nvidia_fs: Unknown symbol nvidia_p2p_dma_map_pages (err -2)
[    8.527524] nvidia_fs: module using GPL-only symbols uses symbols from proprietary module nvidia.
[    8.527525] nvidia_fs: Unknown symbol nvidia_p2p_free_dma_mapping (err -2)
[    8.527526] nvidia_fs: module using GPL-only symbols uses symbols from proprietary module nvidia.
[    8.527526] nvidia_fs: Unknown symbol nvidia_p2p_free_page_table (err -2)
[    8.702833] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms  545.23.06  Sun Oct 15 17:22:43 UTC 2023
[    8.731472] [drm] [nvidia-drm] [GPU ID 0x0000ca00] Loading driver
[    8.731474] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:ca:00.0 on minor 1
[    9.349234] audit: type=1400 audit(1700036425.661:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe" pid=2008 comm="apparmor_parser"
[    9.349238] audit: type=1400 audit(1700036425.661:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe//kmod" pid=2008 comm="apparmor_parser"
[  326.280750] nvidia_uvm: module uses symbols from proprietary module nvidia, inheriting taint.
[  326.285386] nvidia-uvm: Loaded the UVM driver, major device number 503.
root@hpc:~#
root@hpc:~#
root@hpc:~# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Sep__8_19:17:24_PDT_2023
Cuda compilation tools, release 12.3, V12.3.52
Build cuda_12.3.r12.3/compiler.33281558_0
root@hpc:~#
root@hpc:~# echo $LD_LIBRARY_PATH
/usr/local/cuda-12.3/lib64
root@hpc:~#
root@hpc:~#
root@hpc:~# /usr/local/cuda-12.3/gds/tools/gdscheck.py -p
 cuInit Failed, error CUDA_ERROR_UNKNOWN
 cuFile initialization failed
 Platform verification error :
CUDA Driver API error

@kmodukuri @faraz.ahmed, when I install CUDA tools/drivers and GDS with apt packages or local run files on Ubuntu 22.04, the error trace below comes up in dmesg, and gdscheck.py also fails.

[ 3259.085380] NVRM nvAssertFailedNoLog: Assertion failed: pKernelBus->pciBars[BUS_BAR_1] != 0 @ kern_bus_gm107.c:3846
[ 3259.085422] NVRM nvCheckOkFailedNoLog: Check failed: Generic Error: Invalid state [NV_ERR_INVALID_STATE] (0x00000040) returned from kbusInitBarsBaseInfo_HAL(pKernelBus) @ kern_bus.c:77
[ 3259.087156] NVRM osInitNvMapping: *** Cannot attach gpu
[ 3259.087173] NVRM RmInitAdapter: osInitNvMapping failed, bailing out of RmInitAdapter
[ 3259.087972] NVRM: GPU 0000:ca:00.0: RmInitAdapter failed! (0x22:0x40:631)
[ 3259.089333] NVRM: GPU 0000:ca:00.0: rm_init_adapter failed, device minor number 0
[ 3259.096539] NVRM nvAssertFailedNoLog: Assertion failed: pKernelBus->pciBars[BUS_BAR_1] != 0 @ kern_bus_gm107.c:3846
[ 3259.096579] NVRM nvCheckOkFailedNoLog: Check failed: Generic Error: Invalid state [NV_ERR_INVALID_STATE] (0x00000040) returned from kbusInitBarsBaseInfo_HAL(pKernelBus) @ kern_bus.c:77

lspci output

root@hpc:~# lspci | grep -i nvidia
65:00.0 3D controller: NVIDIA Corporation GA100 [A100 PCIe 80GB] (rev a1)
ca:00.0 3D controller: NVIDIA Corporation GA100 [A100 PCIe 80GB] (rev a1)

For the gdscheck.py failure: if the cufile.log is the same as in your initial post, then you may need to fix the cuInit Failed / CUDA_ERROR_UNKNOWN error first.

This means that the GPU is not visible to the application. You may want to set CUDA_VISIBLE_DEVICES or reinstall the CUDA toolkit following the instructions here: CUDA Installation Guide for Linux.
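
A hedged sanity pass before re-running gdscheck, using the paths and device IDs from earlier posts:

export CUDA_VISIBLE_DEVICES=0,1        # make both GPUs visible to the process
nvidia-smi                             # driver-level check first; "No devices were found"
                                       # means the failure is below CUDA, in the kernel driver
/usr/local/cuda-12.3/gds/tools/gdscheck.py -p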

Here are the details. It says unknown device, which is very strange. I tried reinstalling, but the result is the same; I also tried the network install option. If you did anything different, can you share the steps? Moreover, when I boot into the system and check the dmesg logs, I can see an NVRM trace. Does this look more like a hardware issue or a CUDA issue?

root@hpc:~# echo $CUDA_VISIBLE_DEVICES
0,1
root@hpc:~# cat cufile.log
 15-11-2023 07:50:34:52 [pid=89337 tid=89337] ERROR  cufio_core:919 cuInit Failed, error CUDA_ERROR_UNKNOWN
 15-11-2023 07:50:34:52 [pid=89337 tid=89337] ERROR  cufio_core:1038 cuFile initialization failed
 15-11-2023 07:59:55:319 [pid=2866 tid=2866] ERROR  cufio_core:919 cuInit Failed, error CUDA_ERROR_UNKNOWN
 15-11-2023 07:59:55:319 [pid=2866 tid=2866] ERROR  cufio_core:1038 cuFile initialization failed
 15-11-2023 08:03:23:878 [pid=2928 tid=2928] ERROR  cufio_core:919 cuInit Failed, error CUDA_ERROR_UNKNOWN
 15-11-2023 08:03:23:878 [pid=2928 tid=2928] ERROR  cufio_core:1038 cuFile initialization failed
 15-11-2023 08:18:00:833 [pid=2835 tid=2835] ERROR  cufio_core:919 cuInit Failed, error CUDA_ERROR_UNKNOWN
 15-11-2023 08:18:00:833 [pid=2835 tid=2835] ERROR  cufio_core:1038 cuFile initialization failed
 15-11-2023 08:25:42:633 [pid=2849 tid=2849] ERROR  cufio_core:919 cuInit Failed, error CUDA_ERROR_UNKNOWN
 15-11-2023 08:25:42:633 [pid=2849 tid=2849] ERROR  cufio_core:1038 cuFile initialization failed
 16-11-2023 02:43:15:572 [pid=3701 tid=3701] ERROR  cufio_core:919 cuInit Failed, error CUDA_ERROR_UNKNOWN
 16-11-2023 02:43:15:572 [pid=3701 tid=3701] ERROR  cufio_core:1038 cuFile initialization failed
 16-11-2023 07:19:19:634 [pid=373707 tid=373707] ERROR  cufio_core:955 cuInit Failed, error CUDA_ERROR_UNKNOWN
 16-11-2023 07:19:19:634 [pid=373707 tid=373707] ERROR  cufio_core:1074 cuFile initialization failed
 17-11-2023 02:10:37:614 [pid=381970 tid=381970] ERROR  cufio_core:955 cuInit Failed, error CUDA_ERROR_UNKNOWN
 17-11-2023 02:10:37:614 [pid=381970 tid=381970] ERROR  cufio_core:1074 cuFile initialization failed
root@hpc:~# nvidia-smi
No devices were found

DMESG logs

[  130.863017] NVRM nvAssertFailedNoLog: Assertion failed: pKernelBus->pciBars[BUS_BAR_1] != 0 @ kern_bus_gm107.c:3846
[  130.863575] NVRM nvCheckOkFailedNoLog: Check failed: Generic Error: Invalid state [NV_ERR_INVALID_STATE] (0x00000040) returned from kbusInitBarsBaseInfo_HAL(pKernelBus) @ kern_bus.c:77
[  130.864241] NVRM osInitNvMapping: *** Cannot attach gpu
[  130.864244] NVRM RmInitAdapter: osInitNvMapping failed, bailing out of RmInitAdapter
[  130.864251] NVRM: GPU 0000:ca:00.0: RmInitAdapter failed! (0x22:0x40:631)
[  130.865303] NVRM: GPU 0000:ca:00.0: rm_init_adapter failed, device minor number 0
[  130.878398] NVRM nvAssertFailedNoLog: Assertion failed: pKernelBus->pciBars[BUS_BAR_1] != 0 @ kern_bus_gm107.c:3846
[  130.878404] NVRM nvCheckOkFailedNoLog: Check failed: Generic Error: Invalid state [NV_ERR_INVALID_STATE] (0x00000040) returned from kbusInitBarsBaseInfo_HAL(pKernelBus) @ kern_bus.c:77
[  130.878481] NVRM osInitNvMapping: *** Cannot attach gpu
[  130.878484] NVRM RmInitAdapter: osInitNvMapping failed, bailing out of RmInitAdapter
[  130.878490] NVRM: GPU 0000:ca:00.0: RmInitAdapter failed! (0x22:0x40:631)
[  130.879393] NVRM: GPU 0000:ca:00.0: rm_init_adapter failed, device minor number 0

@kmodukuri @faraz.ahmed, one thing I forgot to mention: I have only an NVMe backplane on the Dell R750XA server, so I have 2 NVMe drives in the system. One is used for the OS (Ubuntu) and the other will be used for GDS NVMe.

I followed the standard installation steps for the proprietary drivers. From the dmesg logs and a Google search, this looks like a driver/hardware issue; you may want to go through similar posts on this site. For an exact resolution, @kmodukuri may have better suggestions/pointers.

Yo… I'm really stuck and can't tell what's happening. On Ubuntu 22.04, with all the latest drivers and deb packages installed, I am not able to initialize the GPUs for the OS drivers… lol