[DOCA] Failed to register user memory. Got errno: Bad address

I am trying to run cuBB with OAI, but cuBB fails to start with the following error:

nv-cubb  | EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:ab:00.0 (socket 1)
nv-cubb  | [07:16:37:470290][46][DOCA][ERR][linux_mapped_user_memory.cpp:75][linux_mapped_user_memory] Failed to register user memory. Got errno: Bad address
nv-cubb  | [07:16:37:470340][46][DOCA][ERR][doca_mmap.cpp:167][priv_doca_mmap_dev_to_mkey_init_mkey] Failed to initialize mkey: failed to create memory region with exception:
nv-cubb  | [07:16:37:470356][46][DOCA][ERR][doca_mmap.cpp:167][priv_doca_mmap_dev_to_mkey_init_mkey] DOCA exception [DOCA_ERROR_DRIVER] with message Failed to register user memory
nv-cubb  | [07:16:37:470363][46][DOCA][ERR][doca_mmap.cpp:313][priv_doca_mmap_init_dev_to_mkey] Mmap 0x2a6a4b3a300: Failed to initialize device=0x2a69de0ed30. err=DOCA_ERROR_DRIVER
nv-cubb  | [07:16:37:470371][46][DOCA][ERR][doca_mmap.cpp:350][priv_doca_mmap_init_dev_to_mkeys] Mmap 0x2a6a4b3a300: Failed to initialize memory range. Failed to register MR for device with id: 1. err=DOCA_ERROR_DRIVER
nv-cubb  | 07:16:37.470391 ERR phy_init 0 [AERIAL_INVALID_PARAM_EVENT] [FH.DOCA] Failed to start mmap DOCA Driver call failure
nv-cubb  | terminate called after throwing an instance of 'pd_exc_h'
nv-cubb  |   what():  Invalid pointer: StaticConversion can't return nullptr
nv-cubb  | /opt/nvidia/cuBB/aerial_l1_entrypoint.sh: line 34:    44 Aborted                 sudo -E "$cuBB_Path"/build/cuPHY-CP/cuphycontroller/examples/cuphycontroller_scf "$argument"
nv-cubb exited with code 134
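For context on the first error line: "Bad address" is the standard message for EFAULT (errno 14), which the kernel returns when a system call is handed a pointer it cannot resolve. My guess (not confirmed by the logs) is that DOCA's user-memory registration is failing on a GPU address here, e.g. when GPUDirect support is missing. A minimal check of the errno mapping:

```python
import errno
import os

# "Bad address" is the strerror text for EFAULT (errno 14): the kernel
# rejected the pointer passed to a memory-registration call. Whether the
# failing pointer is a GPU address is my assumption, not shown in the log.
print(errno.EFAULT, os.strerror(errno.EFAULT))  # prints: 14 Bad address (on Linux/glibc)
```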

I found a thread with a similar problem (DOCA: GPU Packet Processing - Failed to start mmap DOCA Driver call failure, in DOCA / Getting Started & Resources on the NVIDIA Developer Forums), but it does not explain the root cause or give a solution.

After some searching, I suspect the problem may be related to BAR1 memory. This is the BAR1 memory usage on my machine:
[image: BAR1 memory usage]

Did I miss a configuration step? If not, do you have any ideas about this issue?
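For reference, the BAR1 numbers in the screenshot above come from `nvidia-smi -q -d MEMORY`. This is a small sketch of how I read the per-GPU totals out of that report; the sample text below is illustrative only, not the output from my machine:

```python
import re

def parse_bar1(report: str):
    """Pull (total_mib, used_mib) from each 'BAR1 Memory Usage' section
    of `nvidia-smi -q -d MEMORY` output."""
    results = []
    # Everything after each "BAR1 Memory Usage" header, one chunk per GPU.
    for section in report.split("BAR1 Memory Usage")[1:]:
        total = re.search(r"Total\s*:\s*(\d+)\s*MiB", section)
        used = re.search(r"Used\s*:\s*(\d+)\s*MiB", section)
        if total and used:
            results.append((int(total.group(1)), int(used.group(1))))
    return results

# Illustrative fragment only -- not the values from my system.
sample = """
    BAR1 Memory Usage
        Total                             : 65536 MiB
        Used                              : 3 MiB
        Free                              : 65533 MiB
"""
print(parse_bar1(sample))  # [(65536, 3)]
```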

This is my hardware configuration:



Detailed logs and configuration file:

➜  sa_gnb_aerial docker compose up
WARN[0000] Found orphan containers ([oai-gnb-aerial]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. 
[+] Running 1/0
 ✔ Container nv-cubb  Created                                                                                                                                                                                                                                    0.0s 
Attaching to nv-cubb
nv-cubb  | 
nv-cubb  | ==========
nv-cubb  | == CUDA ==
nv-cubb  | ==========
nv-cubb  | 
nv-cubb  | CUDA Version 12.2.2
nv-cubb  | 
nv-cubb  | Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
nv-cubb  | 
nv-cubb  | This container image and its contents are governed by the NVIDIA Deep Learning Container License.
nv-cubb  | By pulling and using the container, you accept the terms and conditions of this license:
nv-cubb  | https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
nv-cubb  | 
nv-cubb  | A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
nv-cubb  | 
nv-cubb  | Cannot find MPS control daemon process
nv-cubb  | Started cuphycontroller on CPU core 46
nv-cubb  | AERIAL_LOG_PATH unset
nv-cubb  | Using default log path
nv-cubb  | Log file set to /tmp/phy.log
nv-cubb  | Aerial metrics backend address: 127.0.0.1:8081
nv-cubb  | 07:16:33.571007 WRN phy_init 0 [CTL.SCF] Config file: /opt/nvidia/cuBB/cuPHY-CP/cuphycontroller/config/cuphycontroller_P5G_FXN.yaml
nv-cubb  | 07:16:33.571559 WRN phy_init 0 [CTL.SCF] low_priority_core=8
nv-cubb  | 07:16:33.571821 WRN phy_init 0 [NVLOG.CPP] Using /opt/nvidia/cuBB/cuPHY/nvlog/config/nvlog_config.yaml for nvlog configuration
nv-cubb  | YAML invalid key: ul_order_timeout_gpu_log_enable Using default value of 0 to YAML_PARAM_UL_ORDER_TIMEOUT_GPU_LOG_ENABLE
nv-cubb  | YAML invalid key: ue_mode Using default value of 0 to YAML_PARAM_UE_MODE
nv-cubb  | YAML invalid key: enable_l1_param_sanity_check Using default value of 0 to YAML_PARAM_ENABLE_L1_PARAM_SANITY_CHECK
nv-cubb  | YAML invalid key: disable_empw Using default value of 0 to YAML_PARAM_DISABLE_EMPW
nv-cubb  | YAML invalid key: ul_rx_pkt_tracing_level Using default value of 0 to YAML_PARAM_UL_RX_PKT_TRACING_LEVEL
nv-cubb  | YAML invalid key: enable_h2d_copy_thread Using default value of 0 to YAML_PARAM_ENABLE_H2D_COPY_THREAD
nv-cubb  | YAML invalid key: h2d_copy_thread_cpu_affinity Using default value of 29 to YAML_PARAM_H2D_COPY_THREAD_CPU_AFFINITY
nv-cubb  | YAML invalid key: h2d_copy_thread_sched_priority Using default value of 0 to YAML_PARAM_H2D_COPY_THREAD_SCHED_PRIORITY
nv-cubb  | YAML invalid key: aggr_obj_non_avail_th Using default value of 5 to YAML_PARAM_AGGR_OBJ_NON_AVAIL_TH
nv-cubb  | YAML invalid key: sendCPlane_timing_error_th_ns Using default value of 50 us to YAML_PARAM_SENDCPLANE_TIMING_ERROR_TH_NS
nv-cubb  | YAML invalid key: pusch_subSlotProcEn Using default value of 0 to PUSCH-SUBSLOTPROCEN
nv-cubb  | YAML invalid key: pusch_deviceGraphLaunchEn Using default value of 1 to PUSCH-DEVICEGRAPHLAUNCHEN
nv-cubb  | YAML invalid key: pusch_waitTimeOutPreEarlyHarqUs Using default value of 1100 to PUSCH-WAITTIMEOUTPREEHQUS
nv-cubb  | YAML invalid key: pusch_waitTimeOutPostEarlyHarqUs Using default value of 1500 to PUSCH-WAITTIMEOUTPOSTEHQUS
nv-cubb  | YAML invalid key: puxch_polarDcdrListSz Using default value of 1 to PUSCH_POLAR_DCDR_LIST_SZ
nv-cubb  | YAML invalid key: split_ul_cuda_streams Using default value of 0 to YAML_PARAM_SPLIT_UL_CUDA_STREAMS
nv-cubb  | YAML invalid key: serialize_pucch_pusch Using default value of 0 to YAML_PARAM_SERIALIZE_PUCCH_PUSCH
nv-cubb  | YAML invalid key: ul_order_max_rx_pkts Using default value of 0 to UL_ORDER_MAX_RX_PKTS
nv-cubb  | YAML invalid key: ul_order_timeout_log_interval_ns Using default value of 1s to YAML_PARAM_UL_ORDER_TIMEOUT_LOG_INTERVAL_NS
nv-cubb  | 07:16:33.584136 ERR phy_init 0 [AERIAL_CONFIG_EVENT] [CTL.YAML] cuphycontroller config. yaml does not have mps_sm_ul_order key; defaulting to 16.
nv-cubb  | 07:16:33.584143 ERR phy_init 0 [AERIAL_CONFIG_EVENT] [CTL.YAML] cuphycontroller config. yaml does not have mps_sm_gpu_comms key; defaulting to old count of 8 SMs.
nv-cubb  | 07:16:33.584433 WRN phy_init 0 [CTL.YAML] Exception YAML invalid key: dl_wait_th_ns DL Wait Thresholds will be set to default values
nv-cubb  | 07:16:33.584466 WRN phy_init 0 [CTL.YAML] cell_id 1 nic_index :0
nv-cubb  | 07:16:33.584559 WRN phy_init 0 [CTL.YAML] pusch_nMaxPrb not set in config file, using default of 273 PRB allocation
nv-cubb  | 07:16:33.584563 WRN phy_init 0 [CTL.YAML] pusch_nMaxRx not set in config file, using default value of 0
nv-cubb  | 07:16:33.584567 WRN phy_init 0 [CTL.YAML] ul_u_plane_tx_offset_ns not set in config file, using default of 280 us
nv-cubb  | 07:16:33.584581 WRN phy_init 0 [CTL.YAML] cell_id 2 nic_index :0
nv-cubb  | 07:16:33.584626 WRN phy_init 0 [CTL.YAML] pusch_nMaxPrb not set in config file, using default of 273 PRB allocation
nv-cubb  | 07:16:33.584630 WRN phy_init 0 [CTL.YAML] pusch_nMaxRx not set in config file, using default value of 0
nv-cubb  | 07:16:33.584634 WRN phy_init 0 [CTL.YAML] ul_u_plane_tx_offset_ns not set in config file, using default of 280 us
nv-cubb  | 07:16:33.584648 WRN phy_init 0 [CTL.YAML] cell_id 3 nic_index :0
nv-cubb  | 07:16:33.584693 WRN phy_init 0 [CTL.YAML] pusch_nMaxPrb not set in config file, using default of 273 PRB allocation
nv-cubb  | 07:16:33.584697 WRN phy_init 0 [CTL.YAML] pusch_nMaxRx not set in config file, using default value of 0
nv-cubb  | 07:16:33.584700 WRN phy_init 0 [CTL.YAML] ul_u_plane_tx_offset_ns not set in config file, using default of 280 us
nv-cubb  | 07:16:33.584713 WRN phy_init 0 [CTL.YAML] cell_id 4 nic_index :0
nv-cubb  | 07:16:33.584755 WRN phy_init 0 [CTL.YAML] pusch_nMaxPrb not set in config file, using default of 273 PRB allocation
nv-cubb  | 07:16:33.584759 WRN phy_init 0 [CTL.YAML] pusch_nMaxRx not set in config file, using default value of 0
nv-cubb  | 07:16:33.584763 WRN phy_init 0 [CTL.YAML] ul_u_plane_tx_offset_ns not set in config file, using default of 280 us
nv-cubb  | 07:16:33.584787 WRN phy_init 0 [CTL.YAML] Num Slots: 8
nv-cubb  | 07:16:33.584788 WRN phy_init 0 [CTL.YAML] Enable UL cuPHY Graphs: 1
nv-cubb  | 07:16:33.584789 WRN phy_init 0 [CTL.YAML] Enable DL cuPHY Graphs: 1
nv-cubb  | 07:16:33.584790 WRN phy_init 0 [CTL.YAML] Accurate TX scheduling clock resolution (ns): 500
nv-cubb  | 07:16:33.584791 WRN phy_init 0 [CTL.YAML] DPDK core: 8
nv-cubb  | 07:16:33.584791 WRN phy_init 0 [CTL.YAML] Prometheus core: -1
nv-cubb  | 07:16:33.584791 WRN phy_init 0 [CTL.YAML] UL cores: 
nv-cubb  | 07:16:33.584792 WRN phy_init 0 [CTL.YAML] 	- 2
nv-cubb  | 07:16:33.584792 WRN phy_init 0 [CTL.YAML] 	- 3
nv-cubb  | 07:16:33.584792 WRN phy_init 0 [CTL.YAML] DL cores: 
nv-cubb  | 07:16:33.584793 WRN phy_init 0 [CTL.YAML] 	- 4
nv-cubb  | 07:16:33.584793 WRN phy_init 0 [CTL.YAML] 	- 5
nv-cubb  | 07:16:33.584793 WRN phy_init 0 [CTL.YAML] 	- 6
nv-cubb  | 07:16:33.584794 WRN phy_init 0 [CTL.YAML] Debug worker: -1
nv-cubb  | 07:16:33.584794 WRN phy_init 0 [CTL.YAML] Data Lake core: -1
nv-cubb  | 07:16:33.584794 WRN phy_init 0 [CTL.YAML] SRS starting Section ID: 3072
nv-cubb  | 07:16:33.584795 WRN phy_init 0 [CTL.YAML] PRACH starting Section ID: 2048
nv-cubb  | 07:16:33.584795 WRN phy_init 0 [CTL.YAML] MPS SM PUSCH: 84
nv-cubb  | 07:16:33.584795 WRN phy_init 0 [CTL.YAML] MPS SM PUCCH: 16
nv-cubb  | 07:16:33.584795 WRN phy_init 0 [CTL.YAML] MPS SM PRACH: 16
nv-cubb  | 07:16:33.584796 WRN phy_init 0 [CTL.YAML] MPS SM UL ORDER: 16
nv-cubb  | 07:16:33.584796 WRN phy_init 0 [CTL.YAML] MPS SM PDSCH: 82
nv-cubb  | 07:16:33.584796 WRN phy_init 0 [CTL.YAML] MPS SM PDCCH: 28
nv-cubb  | 07:16:33.584797 WRN phy_init 0 [CTL.YAML] MPS SM PBCH: 14
nv-cubb  | 07:16:33.584797 WRN phy_init 0 [CTL.YAML] MPS SM GPU_COMMS: 8
nv-cubb  | 07:16:33.584797 WRN phy_init 0 [CTL.YAML] PDSCH fallback: 0
nv-cubb  | 07:16:33.584797 WRN phy_init 0 [CTL.YAML] Massive MIMO enable: 0
nv-cubb  | 07:16:33.584798 WRN phy_init 0 [CTL.YAML] Enable SRS : 0
nv-cubb  | 07:16:33.584798 WRN phy_init 0 [CTL.YAML] ul_order_timeout_gpu_log_enable: 0
nv-cubb  | 07:16:33.584800 WRN phy_init 0 [CTL.YAML] ue_mode: 0
nv-cubb  | 07:16:33.584800 WRN phy_init 0 [CTL.YAML] Aggr Obj Non-availability threshold: 5
nv-cubb  | 07:16:33.584800 WRN phy_init 0 [CTL.YAML] sendCPlane_timing_error_th_ns: 50000
nv-cubb  | 07:16:33.584800 WRN phy_init 0 [CTL.YAML] ul_order_timeout_gpu_log_enable: 0
nv-cubb  | 07:16:33.584801 WRN phy_init 0 [CTL.YAML] GPU-initiated comms DL: 1
nv-cubb  | 07:16:33.584801 WRN phy_init 0 [CTL.YAML] Cell group: 1
nv-cubb  | 07:16:33.584801 WRN phy_init 0 [CTL.YAML] Cell group num: 1
nv-cubb  | 07:16:33.584801 WRN phy_init 0 [CTL.YAML] puxchPolarDcdrListSz: 1
nv-cubb  | 07:16:33.584802 WRN phy_init 0 [CTL.YAML] split_ul_cuda_streams: 0
nv-cubb  | 07:16:33.584802 WRN phy_init 0 [CTL.YAML] serialize_pucch_pusch: 0
nv-cubb  | 07:16:33.584802 WRN phy_init 0 [CTL.YAML] Number of Cell Configs: 4
nv-cubb  | 07:16:33.584803 WRN phy_init 0 [CTL.YAML] L2Adapter config file: /opt/nvidia/cuBB/cuPHY-CP/cuphycontroller/config/l2_adapter_config_P5G.yaml
nv-cubb  | 07:16:33.584804 WRN phy_init 0 [CTL.YAML] Cell name: O-RU 0
nv-cubb  | 07:16:33.584804 WRN phy_init 0 [CTL.YAML] 	MU: 1
nv-cubb  | 07:16:33.584804 WRN phy_init 0 [CTL.YAML] 	ID: 1
nv-cubb  | 07:16:33.584805 WRN phy_init 0 [CTL.YAML] Cell name: O-RU 1
nv-cubb  | 07:16:33.584805 WRN phy_init 0 [CTL.YAML] 	MU: 1
nv-cubb  | 07:16:33.584805 WRN phy_init 0 [CTL.YAML] 	ID: 2
nv-cubb  | 07:16:33.584805 WRN phy_init 0 [CTL.YAML] Cell name: O-RU 2
nv-cubb  | 07:16:33.584806 WRN phy_init 0 [CTL.YAML] 	MU: 1
nv-cubb  | 07:16:33.584806 WRN phy_init 0 [CTL.YAML] 	ID: 3
nv-cubb  | 07:16:33.584806 WRN phy_init 0 [CTL.YAML] Cell name: O-RU 3
nv-cubb  | 07:16:33.584806 WRN phy_init 0 [CTL.YAML] 	MU: 1
nv-cubb  | 07:16:33.584807 WRN phy_init 0 [CTL.YAML] 	ID: 4
nv-cubb  | 07:16:33.584807 WRN phy_init 0 [CTL.YAML] Number of MPlane Configs: 4
nv-cubb  | 07:16:33.584807 WRN phy_init 0 [CTL.YAML] 	Mplane ID: 1
nv-cubb  | 07:16:33.584808 WRN phy_init 0 [CTL.YAML] 	VLAN ID: 2
nv-cubb  | 07:16:33.584808 WRN phy_init 0 [CTL.YAML] 	Source Eth Address: 00:00:00:00:00:00
nv-cubb  | 07:16:33.584809 WRN phy_init 0 [CTL.YAML] 	Destination Eth Address: 6c:ad:ad:00:04:6e
nv-cubb  | 07:16:33.584810 WRN phy_init 0 [CTL.YAML] 	NIC port: 0000:ab:00.0
nv-cubb  | 07:16:33.584810 WRN phy_init 0 [CTL.YAML] 	RU Type: 1
nv-cubb  | 07:16:33.584811 WRN phy_init 0 [CTL.YAML] 	U-plane TXQs: 1
nv-cubb  | 07:16:33.584811 WRN phy_init 0 [CTL.YAML] 	DL compression method: 1
nv-cubb  | 07:16:33.584811 WRN phy_init 0 [CTL.YAML] 	DL iq bit width: 9
nv-cubb  | 07:16:33.584811 WRN phy_init 0 [CTL.YAML] 	UL compression method: 1
nv-cubb  | 07:16:33.584812 WRN phy_init 0 [CTL.YAML] 	UL iq bit width: 9
nv-cubb  | 07:16:33.584812 WRN phy_init 0 [CTL.YAML] 
nv-cubb  | 07:16:33.584812 WRN phy_init 0 [CTL.YAML] 	Flow list SSB/PBCH: 
nv-cubb  | 07:16:33.584813 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584813 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584813 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584813 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584814 WRN phy_init 0 [CTL.YAML] 	Flow list PDCCH: 
nv-cubb  | 07:16:33.584814 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584815 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584815 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584815 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584815 WRN phy_init 0 [CTL.YAML] 	Flow list PDSCH: 
nv-cubb  | 07:16:33.584816 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584816 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584816 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584817 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584817 WRN phy_init 0 [CTL.YAML] 	Flow list CSIRS: 
nv-cubb  | 07:16:33.584817 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584817 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584818 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584818 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584818 WRN phy_init 0 [CTL.YAML] 	Flow list PUSCH: 
nv-cubb  | 07:16:33.584819 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584819 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584819 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584819 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584820 WRN phy_init 0 [CTL.YAML] 	Flow list PUCCH: 
nv-cubb  | 07:16:33.584820 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584820 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584820 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584821 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584821 WRN phy_init 0 [CTL.YAML] 	Flow list SRS: 
nv-cubb  | 07:16:33.584821 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584821 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584822 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584822 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584822 WRN phy_init 0 [CTL.YAML] 	Flow list PRACH: 
nv-cubb  | 07:16:33.584822 WRN phy_init 0 [CTL.YAML] 		4
nv-cubb  | 07:16:33.584823 WRN phy_init 0 [CTL.YAML] 		5
nv-cubb  | 07:16:33.584823 WRN phy_init 0 [CTL.YAML] 		6
nv-cubb  | 07:16:33.584823 WRN phy_init 0 [CTL.YAML] 		7
nv-cubb  | 07:16:33.584824 WRN phy_init 0 [CTL.YAML] 	PUSCH TV: /opt/nvidia/cuBB/testVectors/cuPhyChEstCoeffs.h5
nv-cubb  | 07:16:33.584824 WRN phy_init 0 [CTL.YAML] 	SRS TV: /opt/nvidia/cuBB/testVectors/cuPhyChEstCoeffs.h5
nv-cubb  | 07:16:33.584824 WRN phy_init 0 [CTL.YAML] 	Section_3 time offset: 58369
nv-cubb  | 07:16:33.584825 WRN phy_init 0 [CTL.YAML] 	nMaxRxAnt: 4
nv-cubb  | 07:16:33.584825 WRN phy_init 0 [CTL.YAML] 	PUSCH PRBs Stride: 273
nv-cubb  | 07:16:33.584825 WRN phy_init 0 [CTL.YAML] 	PRACH PRBs Stride: 12
nv-cubb  | 07:16:33.584825 WRN phy_init 0 [CTL.YAML] 	SRS PRBs Stride: 273
nv-cubb  | 07:16:33.584826 WRN phy_init 0 [CTL.YAML] 	PUSCH nMaxPrb: 273
nv-cubb  | 07:16:33.584826 WRN phy_init 0 [CTL.YAML] 	PUSCH nMaxRx: 0
nv-cubb  | 07:16:33.584826 WRN phy_init 0 [CTL.YAML] 	UL Gain Calibration: 48.68
nv-cubb  | 07:16:33.584827 WRN phy_init 0 [CTL.YAML] 	Lower guard bw: 845
nv-cubb  | 07:16:33.584827 WRN phy_init 0 [CTL.YAML] 	Mplane ID: 2
nv-cubb  | 07:16:33.584827 WRN phy_init 0 [CTL.YAML] 	VLAN ID: 2
nv-cubb  | 07:16:33.584827 WRN phy_init 0 [CTL.YAML] 	Source Eth Address: 00:00:00:00:00:00
nv-cubb  | 07:16:33.584828 WRN phy_init 0 [CTL.YAML] 	Destination Eth Address: 6c:ad:ad:00:04:68
nv-cubb  | 07:16:33.584828 WRN phy_init 0 [CTL.YAML] 	NIC port: 0000:ab:00.0
nv-cubb  | 07:16:33.584828 WRN phy_init 0 [CTL.YAML] 	RU Type: 1
nv-cubb  | 07:16:33.584829 WRN phy_init 0 [CTL.YAML] 	U-plane TXQs: 1
nv-cubb  | 07:16:33.584829 WRN phy_init 0 [CTL.YAML] 	DL compression method: 1
nv-cubb  | 07:16:33.584829 WRN phy_init 0 [CTL.YAML] 	DL iq bit width: 9
nv-cubb  | 07:16:33.584829 WRN phy_init 0 [CTL.YAML] 	UL compression method: 1
nv-cubb  | 07:16:33.584830 WRN phy_init 0 [CTL.YAML] 	UL iq bit width: 9
nv-cubb  | 07:16:33.584830 WRN phy_init 0 [CTL.YAML] 
nv-cubb  | 07:16:33.584830 WRN phy_init 0 [CTL.YAML] 	Flow list SSB/PBCH: 
nv-cubb  | 07:16:33.584830 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584831 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584831 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584831 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584831 WRN phy_init 0 [CTL.YAML] 	Flow list PDCCH: 
nv-cubb  | 07:16:33.584831 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584832 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584832 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584832 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584832 WRN phy_init 0 [CTL.YAML] 	Flow list PDSCH: 
nv-cubb  | 07:16:33.584832 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584833 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584833 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584833 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584833 WRN phy_init 0 [CTL.YAML] 	Flow list CSIRS: 
nv-cubb  | 07:16:33.584834 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584834 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584834 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584834 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584834 WRN phy_init 0 [CTL.YAML] 	Flow list PUSCH: 
nv-cubb  | 07:16:33.584835 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584835 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584835 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584835 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584835 WRN phy_init 0 [CTL.YAML] 	Flow list PUCCH: 
nv-cubb  | 07:16:33.584836 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584836 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584836 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584836 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584836 WRN phy_init 0 [CTL.YAML] 	Flow list SRS: 
nv-cubb  | 07:16:33.584837 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584837 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584837 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584837 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584837 WRN phy_init 0 [CTL.YAML] 	Flow list PRACH: 
nv-cubb  | 07:16:33.584838 WRN phy_init 0 [CTL.YAML] 		4
nv-cubb  | 07:16:33.584838 WRN phy_init 0 [CTL.YAML] 		5
nv-cubb  | 07:16:33.584838 WRN phy_init 0 [CTL.YAML] 		6
nv-cubb  | 07:16:33.584838 WRN phy_init 0 [CTL.YAML] 		7
nv-cubb  | 07:16:33.584838 WRN phy_init 0 [CTL.YAML] 	PUSCH TV: /opt/nvidia/cuBB/testVectors/cuPhyChEstCoeffs.h5
nv-cubb  | 07:16:33.584839 WRN phy_init 0 [CTL.YAML] 	SRS TV: /opt/nvidia/cuBB/testVectors/cuPhyChEstCoeffs.h5
nv-cubb  | 07:16:33.584839 WRN phy_init 0 [CTL.YAML] 	Section_3 time offset: 58369
nv-cubb  | 07:16:33.584839 WRN phy_init 0 [CTL.YAML] 	nMaxRxAnt: 4
nv-cubb  | 07:16:33.584839 WRN phy_init 0 [CTL.YAML] 	PUSCH PRBs Stride: 273
nv-cubb  | 07:16:33.584840 WRN phy_init 0 [CTL.YAML] 	PRACH PRBs Stride: 12
nv-cubb  | 07:16:33.584840 WRN phy_init 0 [CTL.YAML] 	SRS PRBs Stride: 273
nv-cubb  | 07:16:33.584840 WRN phy_init 0 [CTL.YAML] 	PUSCH nMaxPrb: 273
nv-cubb  | 07:16:33.584840 WRN phy_init 0 [CTL.YAML] 	PUSCH nMaxRx: 0
nv-cubb  | 07:16:33.584840 WRN phy_init 0 [CTL.YAML] 	UL Gain Calibration: 48.68
nv-cubb  | 07:16:33.584841 WRN phy_init 0 [CTL.YAML] 	Lower guard bw: 845
nv-cubb  | 07:16:33.584841 WRN phy_init 0 [CTL.YAML] 	Mplane ID: 3
nv-cubb  | 07:16:33.584841 WRN phy_init 0 [CTL.YAML] 	VLAN ID: 2
nv-cubb  | 07:16:33.584841 WRN phy_init 0 [CTL.YAML] 	Source Eth Address: 00:00:00:00:00:00
nv-cubb  | 07:16:33.584842 WRN phy_init 0 [CTL.YAML] 	Destination Eth Address: 6c:ad:ad:00:02:00
nv-cubb  | 07:16:33.584842 WRN phy_init 0 [CTL.YAML] 	NIC port: 0000:ab:00.0
nv-cubb  | 07:16:33.584842 WRN phy_init 0 [CTL.YAML] 	RU Type: 1
nv-cubb  | 07:16:33.584842 WRN phy_init 0 [CTL.YAML] 	U-plane TXQs: 1
nv-cubb  | 07:16:33.584843 WRN phy_init 0 [CTL.YAML] 	DL compression method: 1
nv-cubb  | 07:16:33.584843 WRN phy_init 0 [CTL.YAML] 	DL iq bit width: 9
nv-cubb  | 07:16:33.584843 WRN phy_init 0 [CTL.YAML] 	UL compression method: 1
nv-cubb  | 07:16:33.584843 WRN phy_init 0 [CTL.YAML] 	UL iq bit width: 9
nv-cubb  | 07:16:33.584843 WRN phy_init 0 [CTL.YAML] 
nv-cubb  | 07:16:33.584844 WRN phy_init 0 [CTL.YAML] 	Flow list SSB/PBCH: 
nv-cubb  | 07:16:33.584844 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584844 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584844 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584844 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584845 WRN phy_init 0 [CTL.YAML] 	Flow list PDCCH: 
nv-cubb  | 07:16:33.584845 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584845 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584845 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584845 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584846 WRN phy_init 0 [CTL.YAML] 	Flow list PDSCH: 
nv-cubb  | 07:16:33.584846 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584846 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584846 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584846 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584846 WRN phy_init 0 [CTL.YAML] 	Flow list CSIRS: 
nv-cubb  | 07:16:33.584847 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584847 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584847 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584847 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584847 WRN phy_init 0 [CTL.YAML] 	Flow list PUSCH: 
nv-cubb  | 07:16:33.584848 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584848 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584848 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584848 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584848 WRN phy_init 0 [CTL.YAML] 	Flow list PUCCH: 
nv-cubb  | 07:16:33.584848 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584849 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584849 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584849 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584849 WRN phy_init 0 [CTL.YAML] 	Flow list SRS: 
nv-cubb  | 07:16:33.584849 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584850 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584850 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584850 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584850 WRN phy_init 0 [CTL.YAML] 	Flow list PRACH: 
nv-cubb  | 07:16:33.584850 WRN phy_init 0 [CTL.YAML] 		4
nv-cubb  | 07:16:33.584851 WRN phy_init 0 [CTL.YAML] 		5
nv-cubb  | 07:16:33.584851 WRN phy_init 0 [CTL.YAML] 		6
nv-cubb  | 07:16:33.584851 WRN phy_init 0 [CTL.YAML] 		7
nv-cubb  | 07:16:33.584851 WRN phy_init 0 [CTL.YAML] 	PUSCH TV: /opt/nvidia/cuBB/testVectors/cuPhyChEstCoeffs.h5
nv-cubb  | 07:16:33.584851 WRN phy_init 0 [CTL.YAML] 	SRS TV: /opt/nvidia/cuBB/testVectors/cuPhyChEstCoeffs.h5
nv-cubb  | 07:16:33.584851 WRN phy_init 0 [CTL.YAML] 	Section_3 time offset: 58369
nv-cubb  | 07:16:33.584852 WRN phy_init 0 [CTL.YAML] 	nMaxRxAnt: 4
nv-cubb  | 07:16:33.584852 WRN phy_init 0 [CTL.YAML] 	PUSCH PRBs Stride: 273
nv-cubb  | 07:16:33.584852 WRN phy_init 0 [CTL.YAML] 	PRACH PRBs Stride: 12
nv-cubb  | 07:16:33.584852 WRN phy_init 0 [CTL.YAML] 	SRS PRBs Stride: 273
nv-cubb  | 07:16:33.584852 WRN phy_init 0 [CTL.YAML] 	PUSCH nMaxPrb: 273
nv-cubb  | 07:16:33.584852 WRN phy_init 0 [CTL.YAML] 	PUSCH nMaxRx: 0
nv-cubb  | 07:16:33.584853 WRN phy_init 0 [CTL.YAML] 	UL Gain Calibration: 48.68
nv-cubb  | 07:16:33.584853 WRN phy_init 0 [CTL.YAML] 	Lower guard bw: 845
nv-cubb  | 07:16:33.584853 WRN phy_init 0 [CTL.YAML] 	Mplane ID: 4
nv-cubb  | 07:16:33.584853 WRN phy_init 0 [CTL.YAML] 	VLAN ID: 2
nv-cubb  | 07:16:33.584853 WRN phy_init 0 [CTL.YAML] 	Source Eth Address: 00:00:00:00:00:00
nv-cubb  | 07:16:33.584854 WRN phy_init 0 [CTL.YAML] 	Destination Eth Address: 6c:ad:ad:00:04:70
nv-cubb  | 07:16:33.584854 WRN phy_init 0 [CTL.YAML] 	NIC port: 0000:ab:00.0
nv-cubb  | 07:16:33.584854 WRN phy_init 0 [CTL.YAML] 	RU Type: 1
nv-cubb  | 07:16:33.584854 WRN phy_init 0 [CTL.YAML] 	U-plane TXQs: 1
nv-cubb  | 07:16:33.584854 WRN phy_init 0 [CTL.YAML] 	DL compression method: 1
nv-cubb  | 07:16:33.584855 WRN phy_init 0 [CTL.YAML] 	DL iq bit width: 9
nv-cubb  | 07:16:33.584855 WRN phy_init 0 [CTL.YAML] 	UL compression method: 1
nv-cubb  | 07:16:33.584855 WRN phy_init 0 [CTL.YAML] 	UL iq bit width: 9
nv-cubb  | 07:16:33.584855 WRN phy_init 0 [CTL.YAML] 
nv-cubb  | 07:16:33.584855 WRN phy_init 0 [CTL.YAML] 	Flow list SSB/PBCH: 
nv-cubb  | 07:16:33.584856 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584856 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584856 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584856 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584856 WRN phy_init 0 [CTL.YAML] 	Flow list PDCCH: 
nv-cubb  | 07:16:33.584857 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584857 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584857 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584857 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584857 WRN phy_init 0 [CTL.YAML] 	Flow list PDSCH: 
nv-cubb  | 07:16:33.584857 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584858 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584858 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584858 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584858 WRN phy_init 0 [CTL.YAML] 	Flow list CSIRS: 
nv-cubb  | 07:16:33.584858 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584858 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584859 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584859 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584859 WRN phy_init 0 [CTL.YAML] 	Flow list PUSCH: 
nv-cubb  | 07:16:33.584859 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584859 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584860 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584860 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584860 WRN phy_init 0 [CTL.YAML] 	Flow list PUCCH: 
nv-cubb  | 07:16:33.584860 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584860 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584861 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584861 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584861 WRN phy_init 0 [CTL.YAML] 	Flow list SRS: 
nv-cubb  | 07:16:33.584861 WRN phy_init 0 [CTL.YAML] 		0
nv-cubb  | 07:16:33.584861 WRN phy_init 0 [CTL.YAML] 		1
nv-cubb  | 07:16:33.584861 WRN phy_init 0 [CTL.YAML] 		2
nv-cubb  | 07:16:33.584862 WRN phy_init 0 [CTL.YAML] 		3
nv-cubb  | 07:16:33.584862 WRN phy_init 0 [CTL.YAML] 	Flow list PRACH: 
nv-cubb  | 07:16:33.584862 WRN phy_init 0 [CTL.YAML] 		4
nv-cubb  | 07:16:33.584862 WRN phy_init 0 [CTL.YAML] 		5
nv-cubb  | 07:16:33.584862 WRN phy_init 0 [CTL.YAML] 		6
nv-cubb  | 07:16:33.584862 WRN phy_init 0 [CTL.YAML] 		7
nv-cubb  | 07:16:33.584863 WRN phy_init 0 [CTL.YAML] 	PUSCH TV: /opt/nvidia/cuBB/testVectors/cuPhyChEstCoeffs.h5
nv-cubb  | 07:16:33.584863 WRN phy_init 0 [CTL.YAML] 	SRS TV: /opt/nvidia/cuBB/testVectors/cuPhyChEstCoeffs.h5
nv-cubb  | 07:16:33.584863 WRN phy_init 0 [CTL.YAML] 	Section_3 time offset: 58369
nv-cubb  | 07:16:33.584863 WRN phy_init 0 [CTL.YAML] 	nMaxRxAnt: 4
nv-cubb  | 07:16:33.584863 WRN phy_init 0 [CTL.YAML] 	PUSCH PRBs Stride: 273
nv-cubb  | 07:16:33.584863 WRN phy_init 0 [CTL.YAML] 	PRACH PRBs Stride: 12
nv-cubb  | 07:16:33.584864 WRN phy_init 0 [CTL.YAML] 	SRS PRBs Stride: 273
nv-cubb  | 07:16:33.584864 WRN phy_init 0 [CTL.YAML] 	PUSCH nMaxPrb: 273
nv-cubb  | 07:16:33.584864 WRN phy_init 0 [CTL.YAML] 	PUSCH nMaxRx: 0
nv-cubb  | 07:16:33.584864 WRN phy_init 0 [CTL.YAML] 	UL Gain Calibration: 48.68
nv-cubb  | 07:16:33.584864 WRN phy_init 0 [CTL.YAML] 	Lower guard bw: 845
nv-cubb  | EAL: Detected CPU lcores: 48
nv-cubb  | EAL: Detected NUMA nodes: 2
nv-cubb  | EAL: Detected shared linkage of DPDK
nv-cubb  | EAL: Multi-process socket /var/run/dpdk/cuphycontroller/mp_socket
nv-cubb  | EAL: Selected IOVA mode 'PA'
nv-cubb  | EAL: VFIO support initialized
nv-cubb  | EAL: Probe PCI driver: gpu_cuda (10de:2235) device: 0000:99:00.0 (socket 1)
nv-cubb  | EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:ab:00.0 (socket 1)
nv-cubb  | [07:16:37:470290][46][DOCA][ERR][linux_mapped_user_memory.cpp:75][linux_mapped_user_memory] Failed to register user memory. Got errno: Bad address
nv-cubb  | [07:16:37:470340][46][DOCA][ERR][doca_mmap.cpp:167][priv_doca_mmap_dev_to_mkey_init_mkey] Failed to initialize mkey: failed to create memory region with exception:
nv-cubb  | [07:16:37:470356][46][DOCA][ERR][doca_mmap.cpp:167][priv_doca_mmap_dev_to_mkey_init_mkey] DOCA exception [DOCA_ERROR_DRIVER] with message Failed to register user memory
nv-cubb  | [07:16:37:470363][46][DOCA][ERR][doca_mmap.cpp:313][priv_doca_mmap_init_dev_to_mkey] Mmap 0x2a6a4b3a300: Failed to initialize device=0x2a69de0ed30. err=DOCA_ERROR_DRIVER
nv-cubb  | [07:16:37:470371][46][DOCA][ERR][doca_mmap.cpp:350][priv_doca_mmap_init_dev_to_mkeys] Mmap 0x2a6a4b3a300: Failed to initialize memory range. Failed to register MR for device with id: 1. err=DOCA_ERROR_DRIVER
nv-cubb  | 07:16:37.470391 ERR phy_init 0 [AERIAL_INVALID_PARAM_EVENT] [FH.DOCA] Failed to start mmap DOCA Driver call failure
nv-cubb  | terminate called after throwing an instance of 'pd_exc_h'
nv-cubb  |   what():  Invalid pointer: StaticConversion can't return nullptr
nv-cubb  | /opt/nvidia/cuBB/aerial_l1_entrypoint.sh: line 34:    44 Aborted                 sudo -E "$cuBB_Path"/build/cuPHY-CP/cuphycontroller/examples/cuphycontroller_scf "$argument"
nv-cubb exited with code 134
➜  sa_gnb_aerial

This is my cuphycontroller_P5G_FXN.yaml:

# Copyright (c) 2017-2024, NVIDIA CORPORATION.  All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification, are permitted
# provided that the following conditions are met:
#     * Redistributions of source code must retain the above copyright notice, this list of
#       conditions and the following disclaimer.
#     * Redistributions in binary form must reproduce the above copyright notice, this list of
#       conditions and the following disclaimer in the documentation and/or other materials
#       provided with the distribution.
#     * Neither the name of the NVIDIA CORPORATION nor the names of its contributors may be used
#       to endorse or promote products derived from this software without specific prior written
#       permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR
# IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
# FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
# OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
# STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
---
l2adapter_filename: l2_adapter_config_P5G.yaml
aerial_metrics_backend_address: 127.0.0.1:8081

# CPU core shared by all low-priority threads
low_priority_core: 8
nic_tput_alert_threshold_mbps: 85000

cuphydriver_config:
  standalone: 0
  validation: 0
  num_slots: 8
  profiler_sec: 0
  log_level: DBG
  dpdk_thread: 8
  dpdk_verbose_logs: 0
  accu_tx_sched_res_ns: 500
  accu_tx_sched_disable: 0
  fh_stats_dump_cpu_core: 8
  pdump_client_thread: -1
  mps_sm_pusch: 84
  mps_sm_pucch: 16
  mps_sm_prach: 16
  mps_sm_pdsch: 82
  mps_sm_pdcch: 28
  mps_sm_pbch: 14
  mps_sm_srs: 16
  pdsch_fallback: 0
  dpdk_file_prefix: cuphycontroller
  nics:
    - nic: 0000:ab:00.0
      mtu: 8192
      cpu_mbufs: 196608
      uplane_tx_handles: 64
      txq_count: 48
      rxq_count: 16
      txq_size: 8192
      rxq_size: 16384
      gpu: 1
  gpus:
    - 1
    # Set GPUID to the GPU sharing the PCIe switch as NIC
    # run nvidia-smi topo -m to find out which GPU
  workers_ul:
    - 2
    - 3
  workers_dl:
    - 4
    - 5
    - 6
  workers_sched_priority: 95
  prometheus_thread: -1
  start_section_id_srs: 3072
  start_section_id_prach: 2048
  enable_ul_cuphy_graphs: 1
  enable_dl_cuphy_graphs: 1
  # Both RF and eLSU eCPRI configs
  ul_order_timeout_cpu_ns: 4000000
  ul_order_timeout_gpu_ns: 4000000
  cplane_disable: 0
  gpu_init_comms_dl: 1
  cell_group: 1
  cell_group_num: 1
  pusch_sinr: 1
  pusch_rssi: 1
  pusch_tdi: 0
  pusch_cfo: 0
  pusch_dftsofdm: 0
  pusch_to:  0
  pusch_select_eqcoeffalgo: 1
  pusch_select_chestalgo: 1
  pusch_tbsizecheck: 1
  enable_cpu_task_tracing: 0
  enable_compression_tracing: 0
  enable_prepare_tracing: 0
  enable_dl_cqe_tracing: 0
  mMIMO_enable: 0
  pusch_forcedNumCsi2Bits: 0
  enable_srs: 0
  mCh_segment_proc_enable: 0
  enable_csip2_v3: 0
  cells:
    - name: O-RU 0
      cell_id: 1
      ru_type: 1
      # set to 00:00:00:00:00:00 to use the MAC address of the NIC port to use
      src_mac_addr: 00:00:00:00:00:00
      dst_mac_addr: 6C:AD:AD:00:04:6E # MAC address of Foxconn O-RU #1
      nic: 0000:ab:00.0
      vlan: 2
      pcp: 0
      txq_count_uplane: 1
      eAxC_id_ssb_pbch: [0, 1, 2, 3]
      eAxC_id_pdcch: [0, 1, 2, 3]
      eAxC_id_pdsch: [0, 1, 2, 3]
      eAxC_id_csirs: [0, 1, 2, 3]
      eAxC_id_pusch: [0, 1, 2, 3]
      eAxC_id_pucch: [0, 1, 2, 3]
      eAxC_id_srs: [0, 1, 2, 3]
      eAxC_id_prach: [4, 5, 6, 7]
      dl_iq_data_fmt: {comp_meth: 1, bit_width: 9}
      ul_iq_data_fmt: {comp_meth: 1, bit_width: 9}
      section_3_time_offset: 484
      fs_offset_dl: 15
      exponent_dl: 4
      ref_dl: 0
      fs_offset_ul: -5
      exponent_ul: 4
      max_amp_ul: 65504
      mu: 1
      T1a_max_up_ns: 280000
      T1a_max_cp_ul_ns: 405000
      Ta4_min_ns: 50000
      Ta4_max_ns: 331000
      Tcp_adv_dl_ns: 125000
      fh_len_range: 0
      pusch_prb_stride: 273
      prach_prb_stride: 12
      srs_prb_stride: 273
      pusch_ldpc_max_num_itr_algo_type: 1
      pusch_fixed_max_num_ldpc_itrs: 10
      pusch_ldpc_n_iterations: 10
      pusch_ldpc_early_termination: 0
      pusch_ldpc_algo_index: 0
      pusch_ldpc_flags: 2
      pusch_ldpc_use_half: 1
      ul_gain_calibration: 48.68
      lower_guard_bw: 845
      tv_pusch: cuPhyChEstCoeffs.h5
    - name: O-RU 1
      cell_id: 2
      ru_type: 1
      # set to 00:00:00:00:00:00 to use the MAC address of the NIC port to use
      src_mac_addr: 00:00:00:00:00:00
      dst_mac_addr: 6c:ad:ad:00:04:68 # MAC address of Foxconn O-RU #2
      nic: 0000:ab:00.0
      vlan: 2
      pcp: 0
      txq_count_uplane: 1
      eAxC_id_ssb_pbch: [0, 1, 2, 3]
      eAxC_id_pdcch: [0, 1, 2, 3]
      eAxC_id_pdsch: [0, 1, 2, 3]
      eAxC_id_csirs: [0, 1, 2, 3]
      eAxC_id_pusch: [0, 1, 2, 3]
      eAxC_id_pucch: [0, 1, 2, 3]
      eAxC_id_srs: [0, 1, 2, 3]
      eAxC_id_prach: [4, 5, 6, 7]
      dl_iq_data_fmt: {comp_meth: 1, bit_width: 9}
      ul_iq_data_fmt: {comp_meth: 1, bit_width: 9}
      section_3_time_offset: 484
      fs_offset_dl: 15
      exponent_dl: 4
      ref_dl: 0
      fs_offset_ul: -5
      exponent_ul: 4
      max_amp_ul: 65504
      mu: 1
      T1a_max_up_ns: 280000
      T1a_max_cp_ul_ns: 405000
      Ta4_min_ns: 50000
      Ta4_max_ns: 331000
      Tcp_adv_dl_ns: 125000
      fh_len_range: 0
      pusch_prb_stride: 273
      prach_prb_stride: 12
      srs_prb_stride: 273
      pusch_ldpc_max_num_itr_algo_type: 1
      pusch_fixed_max_num_ldpc_itrs: 10
      pusch_ldpc_n_iterations: 10
      pusch_ldpc_early_termination: 0
      pusch_ldpc_algo_index: 0
      pusch_ldpc_flags: 2
      pusch_ldpc_use_half: 1
      ul_gain_calibration: 48.68
      lower_guard_bw: 845
      tv_pusch: cuPhyChEstCoeffs.h5
    - name: O-RU 2
      cell_id: 3
      ru_type: 1
      # set to 00:00:00:00:00:00 to use the MAC address of the NIC port to use
      src_mac_addr: 00:00:00:00:00:00
      dst_mac_addr: 6c:ad:ad:00:02:00 # MAC address of Foxconn O-RU #3
      nic: 0000:ab:00.0
      vlan: 2
      pcp: 0
      txq_count_uplane: 1
      eAxC_id_ssb_pbch: [0, 1, 2, 3]
      eAxC_id_pdcch: [0, 1, 2, 3]
      eAxC_id_pdsch: [0, 1, 2, 3]
      eAxC_id_csirs: [0, 1, 2, 3]
      eAxC_id_pusch: [0, 1, 2, 3]
      eAxC_id_pucch: [0, 1, 2, 3]
      eAxC_id_srs: [0, 1, 2, 3]
      eAxC_id_prach: [4, 5, 6, 7]
      dl_iq_data_fmt: {comp_meth: 1, bit_width: 9}
      ul_iq_data_fmt: {comp_meth: 1, bit_width: 9}
      section_3_time_offset: 484
      fs_offset_dl: 15
      exponent_dl: 4
      ref_dl: 0
      fs_offset_ul: -5
      exponent_ul: 4
      max_amp_ul: 65504
      mu: 1
      T1a_max_up_ns: 280000
      T1a_max_cp_ul_ns: 405000
      Ta4_min_ns: 50000
      Ta4_max_ns: 331000
      Tcp_adv_dl_ns: 125000
      fh_len_range: 0
      pusch_prb_stride: 273
      prach_prb_stride: 12
      srs_prb_stride: 273
      pusch_ldpc_max_num_itr_algo_type: 1
      pusch_fixed_max_num_ldpc_itrs: 10
      pusch_ldpc_n_iterations: 10
      pusch_ldpc_early_termination: 0
      pusch_ldpc_algo_index: 0
      pusch_ldpc_flags: 2
      pusch_ldpc_use_half: 1
      ul_gain_calibration: 48.68
      lower_guard_bw: 845
      tv_pusch: cuPhyChEstCoeffs.h5
    - name: O-RU 3
      cell_id: 4
      ru_type: 1
      # set to 00:00:00:00:00:00 to use the MAC address of the NIC port to use
      src_mac_addr: 00:00:00:00:00:00
      dst_mac_addr: 6C:AD:AD:00:04:70 # MAC address of Foxconn O-RU #4
      nic: 0000:ab:00.0
      vlan: 2
      pcp: 0
      txq_count_uplane: 1
      eAxC_id_ssb_pbch: [0, 1, 2, 3]
      eAxC_id_pdcch: [0, 1, 2, 3]
      eAxC_id_pdsch: [0, 1, 2, 3]
      eAxC_id_csirs: [0, 1, 2, 3]
      eAxC_id_pusch: [0, 1, 2, 3]
      eAxC_id_pucch: [0, 1, 2, 3]
      eAxC_id_srs: [0, 1, 2, 3]
      eAxC_id_prach: [4, 5, 6, 7]
      dl_iq_data_fmt: {comp_meth: 1, bit_width: 9}
      ul_iq_data_fmt: {comp_meth: 1, bit_width: 9}
      section_3_time_offset: 484
      fs_offset_dl: 15
      exponent_dl: 4
      ref_dl: 0
      fs_offset_ul: -5
      exponent_ul: 4
      max_amp_ul: 65504
      mu: 1
      T1a_max_up_ns: 280000
      T1a_max_cp_ul_ns: 405000
      Ta4_min_ns: 50000
      Ta4_max_ns: 331000
      Tcp_adv_dl_ns: 125000
      fh_len_range: 0
      pusch_prb_stride: 273
      prach_prb_stride: 12
      srs_prb_stride: 273
      pusch_ldpc_max_num_itr_algo_type: 1
      pusch_fixed_max_num_ldpc_itrs: 10
      pusch_ldpc_n_iterations: 10
      pusch_ldpc_early_termination: 0
      pusch_ldpc_algo_index: 0
      pusch_ldpc_flags: 2
      pusch_ldpc_use_half: 1
      ul_gain_calibration: 48.68
      lower_guard_bw: 845
      tv_pusch: cuPhyChEstCoeffs.h5

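A note on the `gpus:` setting above: the inline comment says to run `nvidia-smi topo -m` and pick the GPU behind the same PCIe switch as the NIC. In that matrix, `PIX` marks two devices connected through a single PCIe switch. A small awk sketch over an illustrative matrix (not output from this host) shows how to read off the right GPU index:

```shell
# Illustrative nvidia-smi topo -m matrix (sample data, not from this machine).
# PIX = same PCIe switch; SYS = traversal over the CPU interconnect.
topo='      GPU0  GPU1  NIC0
GPU0  X     SYS   SYS
GPU1  SYS   X     PIX
NIC0  SYS   PIX   X'

# Print the GPU column(s) that reach NIC0 via PIX; that index goes in gpus:.
echo "$topo" | awk '/^NIC0/ { for (i = 2; i <= NF; i++) if ($i == "PIX") print "GPU" i-2 }'
# -> GPU1
```

In this sample, GPU1 shares the switch with the NIC, which matches the `gpu: 1` / `gpus: [1]` values in the config above.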
This is my l2_adapter_config_P5G.yaml:

# (Same NVIDIA BSD license header as above, omitted here.)

---
#gnb_module
msg_type: scf_5g_fapi
phy_class: scf_5g_fapi
slot_advance: 3

# tick_generator_mode: 0 - poll + sleep; 1 - sleep; 2 - timer_fd
tick_generator_mode: 1

# Allowed maximum latency of SLOT FAPI messages which send from L2 to L1. Unit: slot
allowed_fapi_latency: 0

# Allowed tick interval error. Unit: us
allowed_tick_error: 10

timer_thread_config:
  name: timer_thread
  cpu_affinity: 7
  sched_priority: 99
message_thread_config:
  name: msg_processing
  #core assignment
  cpu_affinity: 7
  # thread priority
  sched_priority: 95
# Lowest TTI for Ticking
mu_highest: 1
dl_tb_loc: 1
instances:
  # PHY 0
  -
    name: scf_gnb_configure_module_0_instance_0
    prach_ta_offset_usec: 2.5
  -
    name: scf_gnb_configure_module_0_instance_1
    prach_ta_offset_usec: 2.5
  -
    name: scf_gnb_configure_module_0_instance_2
    prach_ta_offset_usec: 2.5
  -
    name: scf_gnb_configure_module_0_instance_3
    prach_ta_offset_usec: 2.5
  -
    name: scf_gnb_configure_module_0_instance_4
    prach_ta_offset_usec: 2.5
  -
    name: scf_gnb_configure_module_0_instance_5
    prach_ta_offset_usec: 2.5
  -
    name: scf_gnb_configure_module_0_instance_6
    prach_ta_offset_usec: 2.5
  -
    name: scf_gnb_configure_module_0_instance_7
    prach_ta_offset_usec: 2.5

# Config dedicated yaml file for nvipc. Example: nvipc_multi_instances.yaml
nvipc_config_file: null

# Transport settings for nvIPC
transport:
  type: shm
  udp_config:
    local_port: 38556
    remort_port: 38555
  shm_config:
    primary: 1
    prefix: nvipc
    cuda_device_id: 0
    ring_len: 8192
    mempool_size:
      cpu_msg:
        buf_size: 8192
        pool_len: 4096
      cpu_data:
        buf_size: 576000
        pool_len: 1024
      cuda_data:
        buf_size: 307200
        pool_len: 0
      gpu_data:
        buf_size: 576000
        pool_len: 0
  dpdk_config:
    primary: 1
    prefix: nvipc
    local_nic_pci: 0000:ab:00.0
    peer_nic_mac: 00:00:00:00:00:00
    cuda_device_id: 0
    need_eal_init: 0
    lcore_id: 11
    mempool_size:
      cpu_msg:
        buf_size: 8192
        pool_len: 4096
      cpu_data:
        buf_size: 576000
        pool_len: 1024
      cuda_data:
        buf_size: 307200
        pool_len: 0
  app_config:
    grpc_forward: 0
    debug_timing: 0
    pcap_enable: 1
    pcap_cpu_core: 8 # CPU core of background pcap log save thread
    pcap_cache_size_bits: 29 # 2^29 = 512MB, size of /dev/shm/${prefix}_pcap
    pcap_file_size_bits: 31 # 2^31 = 2GB, max size of /var/log/aerial/${prefix}_pcap. Requires pcap_file_size_bits > pcap_cache_size_bits.
    pcap_max_data_size: 8000 # Max DL/UL FAPI data size to capture reduce pcap size.

cell_group: 1
prepone_h2d_copy: 1
pucch_dtx_thresholds: [-100.0, -100.0, -100.0, -100.0, -100.0]
ptp: {gps_alpha: 0, gps_beta: 0}
enableTickDynamicSfnSlot: 1
...

Hi @vantuan_ngo ,

Could you please check whether OpenRM, the open-source NVIDIA GPU kernel module, is installed rather than the proprietary NVIDIA driver?
You can check with the following command:

cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX Open Kernel Module for x86_64  550.54.15  Release Build  (dvs-builder@U16-A24-23-2)  Tue Mar  5 22:15:33 UTC 2024

If the proprietary driver is installed instead, the output looks like this:

NVRM version: NVIDIA UNIX x86_64 Kernel Module  550.54.15  Tue Mar  5 22:23:56 UTC 2024
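If you want to script this check, the distinguishing marker is the "Open Kernel Module" string in the banner. A small sketch (`classify_driver` is a hypothetical helper name, not an NVIDIA tool):

```shell
# classify_driver reads an NVRM version banner on stdin and reports which
# driver flavor produced it. The "Open Kernel Module" marker appears only
# in the open-source (OpenRM) driver's banner, as the two outputs above show.
classify_driver() {
  if grep -q "Open Kernel Module"; then
    echo "openrm"
  else
    echo "proprietary"
  fi
}

# On a live system: classify_driver < /proc/driver/nvidia/version
echo "NVRM version: NVIDIA UNIX Open Kernel Module for x86_64  550.54.15" | classify_driver
# -> openrm
```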

To install OpenRM, please refer to Installing Tools on Aerial Devkit—NVIDIA Docs.

Thank you.

Thank you very much. It turned out I had failed to remove the existing NVIDIA kernel modules, so the open-source driver was never actually installed, and I did not notice. After installing OpenRM, the cuBB container runs without this DOCA issue.
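For anyone hitting the same wall, a sketch of the cleanup-and-reinstall sequence, assuming Ubuntu with apt and the 550 driver branch; the package and service names here (`nvidia-driver-550-open`, `nvidia-persistenced`) are assumptions that may differ on your system, so treat the Aerial install docs linked above as authoritative:

```shell
# Sketch only -- adjust package names and driver versions to your Aerial release.
sudo systemctl stop nvidia-persistenced                        # release the GPU
sudo modprobe -r nvidia_uvm nvidia_drm nvidia_modeset nvidia   # unload old modules, in this order
sudo apt-get purge -y 'nvidia-driver-*' 'nvidia-dkms-*'        # remove the proprietary driver
sudo apt-get install -y nvidia-driver-550-open                 # open kernel module build
sudo reboot
# After reboot the banner should read "Open Kernel Module":
cat /proc/driver/nvidia/version
```

If `modprobe -r` reports the modules are in use, stop every process holding the GPU (containers included) before retrying; a leftover proprietary module is exactly what masks a failed OpenRM install.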

