Question about Aerial SDK ARC-OTA compatibility: we are using OpenAirInterface (OAI) RAN 2024.w21 + ARC 1.5 with Aerial 24-2.
We are seeing issues with the L1 being configured by the L2.
This setup uses the docker compose process with the two containers supported in the OAI build system. I exported/copied out nfapi-date.tar.gz to use in the OAI build, but I am seeing errors with the FAPI messages being exchanged, where the L1 (cubb) ends up erroring out.
I had seen similar issues with the Aerial 23-4 version and older versions of OAI, so I updated to the best-known versions from the document.
oai-ran | START MAIN THREADS
oai-ran | RC.nb_nr_L1_inst:0
oai-ran | wait_gNBs()
oai-ran | Waiting for gNB L1 instances to all get configured ... sleeping 50ms (nb_nr_sL1_inst 0)
oai-ran | gNB L1 are configured
oai-ran | About to Init RU threads RC.nb_RU:0
oai-ran | [C]: [nvipc][core 13 ] share event_fd thread info: sched_policy=0 sched_priority=0
oai-ran | [C]: [nvipc] nvipc unix socket client connected
oai-ran | [C]: [nvipc] Received peer event_fd: 103
oai-ran | [C]: Share event_fd succeed: efd_tx=103, efd_rx=101
oai-ran | [C]: [nvipc][core 13 ] nvipc unix socket exit
oai-ran | [C]: Start initialize nvipc client
nv-cubb | 18:14:10.961645 WRN 143 0 [NVIPC.EFD] [nvipc] nvipc unix socket server connected
nv-cubb | 18:14:10.961721 WRN 143 0 [NVIPC.EFD] [nvipc] Received peer event_fd: 458
oai-ran | [C]: shm_ipc_open: forward_enable=0 fw_max_msg_buf_count=0 fw_max_data_buf_count=0
oai-ran | [C]: create_shm_nv_ipc_interface: OK
oai-ran | [NFAPI_VNF] nvIPC_Init: create IPC interface successful
nv-cubb | 18:14:11.161810 WRN 143 0 [NVIPC.EFD] Share event_fd succeed: efd_tx=458, efd_rx=467
nv-cubb | 18:14:11.161814 WRN 143 0 [NVIPC.EFD] [nvipc][core 08 ] nvipc unix socket exit
oai-ran | [VNF] pnf connection indication idx:1
oai-ran | Try to send first CONFIG.request
oai-ran | 927342156 [I] 409573248: aerial_nr_send_config_request: [VNF] 1.1 pnf p7 (null):0 timing 30 10 0 10
oai-ran | 927342163 [I] 409573248: aerial_nr_send_config_request: [VNF] Send NFAPI_CONFIG_REQUEST
oai-ran | 927342170 [D] 409573248: pack_nr_tlv: TLV 0x1001 with padding of 2 bytes
oai-ran | 927342174 [D] 409573248: pack_nr_tlv: TLV 0x1002 with padding of 0 bytes
oai-ran | 927342178 [D] 409573248: pack_nr_tlv: TLV 0x1005 with padding of 2 bytes
oai-ran | 927342181 [D] 409573248: pack_nr_tlv: TLV 0x1006 with padding of 2 bytes
oai-ran | 927342186 [D] 409573248: pack_nr_tlv: TLV 0x1007 with padding of 0 bytes
oai-ran | 927342190 [D] 409573248: pack_nr_tlv: TLV 0x100a with padding of 2 bytes
oai-ran | 927342195 [D] 409573248: pack_nr_tlv: TLV 0x100c with padding of 2 bytes
oai-ran | 927342202 [D] 409573248: pack_nr_tlv: TLV 0x100d with padding of 3 bytes
oai-ran | 927342206 [D] 409573248: pack_nr_tlv: TLV 0x100e with padding of 0 bytes
oai-ran | 927342209 [D] 409573248: pack_nr_tlv: TLV 0x1010 with padding of 3 bytes
oai-ran | 927342213 [D] 409573248: pack_nr_tlv: TLV 0x1011 with padding of 3 bytes
oai-ran | 927342217 [D] 409573248: pack_nr_tlv: TLV 0x1012 with padding of 3 bytes
oai-ran | 927342220 [D] 409573248: pack_nr_tlv: TLV 0x1013 with padding of 3 bytes
oai-ran | 927342225 [D] 409573248: pack_nr_tlv: TLV 0x1014 with padding of 3 bytes
oai-ran | 927342229 [D] 409573248: pack_nr_tlv: TLV 0x1029 with padding of 3 bytes
oai-ran | 927342235 [D] 409573248: pack_nr_tlv: TLV 0x1015 with padding of 2 bytes
oai-ran | 927342238 [D] 409573248: pack_nr_tlv: TLV 0x1016 with padding of 3 bytes
oai-ran | 927342242 [D] 409573248: pack_nr_tlv: TLV 0x1017 with padding of 2 bytes
oai-ran | 927342246 [D] 409573248: pack_nr_tlv: TLV 0x1018 with padding of 3 bytes
oai-ran | 927342250 [D] 409573248: pack_nr_tlv: TLV 0x1019 with padding of 2 bytes
oai-ran | 927342256 [D] 409573248: pack_nr_tlv: TLV 0x101b with padding of 3 bytes
oai-ran | 927342260 [D] 409573248: pack_nr_tlv: TLV 0x101c with padding of 3 bytes
oai-ran | 927342262 [D] 409573248: pack_nr_tlv: TLV 0x101f with padding of 3 bytes
oai-ran | 927342266 [D] 409573248: pack_nr_tlv: TLV 0x1020 with padding of 3 bytes
oai-ran | 927342269 [D] 409573248: pack_nr_tlv: TLV 0x1021 with padding of 0 bytes
oai-ran | 927342273 [D] 409573248: pack_nr_tlv: TLV 0x1022 with padding of 0 bytes
oai-ran | 927342276 [D] 409573248: pack_nr_tlv: TLV 0x1022 with padding of 0 bytes
oai-ran | 927342280 [D] 409573248: pack_nr_tlv: TLV 0x1028 with padding of 3 bytes
oai-ran | Entering ITTI signals handler
oai-ran | TYPE <CTRL-C> TO TERMINATE
nv-cubb | 18:14:12.233732 WRN msg_processing 0 [SCF.PHY] on_config_request unknown config request message ID 0
nv-cubb |    (previous line repeated 30 more times)
nv-cubb | 18:14:12.233734 WRN msg_processing 0 [SCF.PHY] txPortTlvPresent absent. Setting nTxAnt to 0
nv-cubb | 18:14:12.233734 WRN msg_processing 0 [SCF.PHY] txPortTlvPresent absent. Setting nRxAnt to 0
nv-cubb | 18:14:12.233750 WRN msg_processing 0 [SCF.PHY] PHY Cell Id = 0, M-Plane Id= 1
nv-cubb | 18:14:12.259749 WRN msg_processing 0 [DRV.PUSCH] tvStatPrms: PUSCH enableDeviceGraphLaunch=1 enableCsiP2Fapiv3 = 0
nv-cubb | 18:14:12.259756 WRN msg_processing 0 [DRV.PUSCH] static_params.nMaxPrb: 273
nv-cubb | 18:14:12.259756 WRN msg_processing 0 [DRV.PUSCH] static_params.nMaxRx: 0
nv-cubb | 18:14:12.274281 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 684.864 MiB for cuPHY PUSCH channel object (0x423154b0000).
nv-cubb | 18:14:12.274297 WRN msg_processing 0 [CUPHY.PUSCH_RX] PuschRx: Running with eqCoeffAlgo 1
nv-cubb | 18:14:12.292296 WRN msg_processing 0 [DRV.PUSCH] tvStatPrms: PUSCH enableDeviceGraphLaunch=1 enableCsiP2Fapiv3 = 0
nv-cubb | 18:14:12.292297 WRN msg_processing 0 [DRV.PUSCH] static_params.nMaxPrb: 273
nv-cubb | 18:14:12.292297 WRN msg_processing 0 [DRV.PUSCH] static_params.nMaxRx: 0
nv-cubb | 18:14:12.305678 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 684.864 MiB for cuPHY PUSCH channel object (0x42315cd0000).
nv-cubb | 18:14:12.305679 WRN msg_processing 0 [CUPHY.PUSCH_RX] PuschRx: Running with eqCoeffAlgo 1
nv-cubb | 18:14:12.323583 WRN msg_processing 0 [DRV.PUSCH] tvStatPrms: PUSCH enableDeviceGraphLaunch=1 enableCsiP2Fapiv3 = 0
nv-cubb | 18:14:12.323584 WRN msg_processing 0 [DRV.PUSCH] static_params.nMaxPrb: 273
nv-cubb | 18:14:12.323584 WRN msg_processing 0 [DRV.PUSCH] static_params.nMaxRx: 0
nv-cubb | 18:14:12.337368 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 684.864 MiB for cuPHY PUSCH channel object (0x423160a0000).
nv-cubb | 18:14:12.337369 WRN msg_processing 0 [CUPHY.PUSCH_RX] PuschRx: Running with eqCoeffAlgo 1
nv-cubb | 18:14:12.342485 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 3.317 MiB for cuPHY PUCCH channel object (0x42315397e00).
nv-cubb | 18:14:12.345377 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 3.317 MiB for cuPHY PUCCH channel object (0x42315398c00).
nv-cubb | 18:14:12.349540 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 3.317 MiB for cuPHY PUCCH channel object (0x42315399a00).
nv-cubb | 18:14:12.352362 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 3.317 MiB for cuPHY PUCCH channel object (0x4231539a800).
nv-cubb | 18:14:12.354416 ERR msg_processing 0 [AERIAL_CUPHY_EVENT] [CUPHY] [/opt/nvidia/cuBB/cuPHY/src/cuphy/prach_receiver/prach_receiver.cu:858] CUDA runtime error invalid argument
nv-cubb | 18:14:12.354538 ERR msg_processing 0 [AERIAL_CUPHY_EVENT] [CUPHY] CUDA EXCEPTION: invalid argument
nv-cubb | 18:14:12.354599 ERR msg_processing 0 [AERIAL_CUPHYDRV_API_EVENT] [DRV.EXCP] /opt/nvidia/cuBB/cuPHY-CP/cuphydriver/src/common/cuphydriver_api.cpp l1_cell_create line 1620 exception: cuphyCreatePrachRx returned CUPHY_STATUS_INTERNAL_ERROR
nv-cubb | 18:14:12.354614 WRN msg_processing 0 [DRV.API] Update cell: mplane_id=1 dl_grid_sz=0
nv-cubb | 18:14:12.354616 WRN msg_processing 0 [DRV.API] Update cell: mplane_id=1 ul_grid_sz=0
Full log of starting L1/L2
oai-docker.log (58.7 KB)
L1 Configs:
cuphycontroller_P5G_FXN.yaml
l2_adapter_config_P5G.yaml
L2 Configs:
./ci-scripts/yaml_files/sa_gnb_aerial/gnb-vnf.sa.band78.273prb.aerial.conf
Are there any possibly mismatched versions of config options? I can attach the L1/L2 configs if needed.
Hi @eric.a.momper ,
Can you please attach the L1/L2 config files? Can you also share the nvipc.pcap log? You can obtain the nvipc pcap log with the following:
# Usage: sudo ./pcap_collect <prefix> [destination path]
sudo $cuBB_SDK/build/cuPHY-CP/gt_common_libs/nvIPC/tests/pcap/pcap_collect nvipc
# The nvipc.pcap is written to the current directory (by default) or to the given destination path.
Thank you.
Hi @bkecicioglu ,
attached zip with pcap and config files / scripts
nv-arc-l2.zip (9.1 KB)
Thanks
Hi @eric.a.momper ,
Aerial SDK currently supports PRACH preamble formats 0 and B4. The PRACH configuration index value of 98 is used in your CONFIG request message. This index corresponds to the preamble format A2 for FR1 (see Table 6.3.3.2-3 of 3GPP 38.211 v16.2.0).
Can you correct this setting and try again?
Can you also please share the phy.log from your initial test?
Thank you.
@bkecicioglu
Thanks for this info. I looked in the table and tried a few values: 0, 145, 158, 159.
L2 config/output:
gnb.conf: prach_ConfigurationIndex = 159; # testMAC 158; OAI gNB 98
OAI L2 debug output: pack_nr_tlv: TLV 0x1029 with padding of 3 bytes
I seem to be getting the same data in the RACH section of the pcap regardless of what prach_ConfigurationIndex is set to. Are there additional settings that correspond with this? Or, looking at the phy.log and the L1 FAPI receive code, could there be an issue with the message encoding/decoding?
FAPI tag of the PRACH config index (FAPI config enum in scf_5g_fapi.h):
CONFIG_TLV_PRACH_CONFIG_INDEX = 0x1029,
nv-cubb | 13:25:18.682651 WRN 145 0 [NVIPC.EFD] [nvipc] nvipc unix socket server connected
nv-cubb | 13:25:18.682730 WRN 145 0 [NVIPC.EFD] [nvipc] Received peer event_fd: 458
oai-ran | [C]: shm_ipc_open: forward_enable=0 fw_max_msg_buf_count=0 fw_max_data_buf_count=0
oai-ran | [C]: create_shm_nv_ipc_interface: OK
oai-ran | [NFAPI_VNF] nvIPC_Init: create IPC interface successful
nv-cubb | 13:25:18.882819 WRN 145 0 [NVIPC.EFD] Share event_fd succeed: efd_tx=458, efd_rx=467
nv-cubb | 13:25:18.882823 WRN 145 0 [NVIPC.EFD] [nvipc][core 08 ] nvipc unix socket exit
oai-ran | [VNF] pnf connection indication idx:1
oai-ran | Try to send first CONFIG.request
oai-ran | 69995063439 [I] 3138865024: aerial_nr_send_config_request: [VNF] 1.1 pnf p7 (null):0 timing 30 10 0 10
oai-ran | 69995063447 [I] 3138865024: aerial_nr_send_config_request: [VNF] Send NFAPI_CONFIG_REQUEST
...
oai-ran | 69995063510 [D] 3138865024: pack_nr_tlv: TLV 0x1029 with padding of 3 bytes
...
nv-cubb | 11:31:11.551832 WRN msg_processing 0 [SCF.PHY] on_config_request unknown config request message ID 0
Full Phy log:
phy.log (19.2 KB)
pcap with prach_cfg_idx of 145
root@5g-test-gpu:~# tcpdump -nexr /root/share/nvipc.pcap
reading from file /root/share/nvipc.pcap, link-type LINUX_SLL (Linux cooked v1), snapshot length 262144
13:52:21.148338 In 00:00:00:00:00:00 ethertype IPv4 (0x0800), length 340: 192.168.1.9.9000 > 192.168.1.8.9000: UDP, length 296
0x0000: 4500 0144 1000 4000 4011 9609 c0a8 0109
0x0010: c0a8 0108 2328 2328 0130 dbd6 0100 0200
0x0020: 2001 0000 2001 1002 0064 0000 0002 1004
0x0030: 00d0 6632 0003 100a 0000 0000 0000 0000
0x0040: 0000 0000 0004 100a 0000 0011 0100 0000
0x0050: 0000 0000 0005 1002 0002 0000 0006 1002
0x0060: 0064 0000 0007 1004 00d0 6632 0008 100a
0x0070: 0000 0000 0000 0000 0000 0000 0009 100a
0x0080: 0000 0011 0100 0000 0000 0000 000a 1002
0x0090: 0002 0000 000c 1002 0000 0000 000d 1001
0x00a0: 0001 0000 000e 1004 00e7 ffff ff10 1001
0x00b0: 0001 0000 0011 1001 0001 0000 0012 1001
0x00c0: 0001 0000 0013 1001 0000 0000 0014 1001
0x00d0: 0001 0000 0029 1001 0062 0000 0015 1002
0x00e0: 0001 0000 0016 1001 0010 0000 0017 1002
0x00f0: 0000 0000 0018 1001 000d 0000 0019 1002
0x0100: 0000 0000 001b 1001 0003 0000 001c 1001
0x0110: 0000 0000 001f 1001 0002 0000 0020 1001
0x0120: 0000 0000 0021 1004 0000 0000 0022 1004
0x0130: 0000 0000 8022 1004 0000 0000 0028 1001
0x0140: 0001 0000
13:52:21.249851 In 00:00:00:00:00:00 ethertype IPv4 (0x0800), length 57: 192.168.1.8.9000 > 192.168.1.9.9000: UDP, length 13
0x0000: 4500 0029 1001 4000 4011 9609 c0a8 0108
0x0010: c0a8 0109 2328 2328 0015 dbd6 0100 0300
0x0020: 0500 0000 0200 0000 00
I think the gnb.conf I'm using should match that one; the only difference I saw was the config index I set. I'll look into whether it's an issue with how I'm building/running the OAI L2. Is that Wireshark plugin/dissector available somewhere? It would make it easier to compare changed vs. expected values, since I was seeing the same data as in the original pcap I posted despite changing the config index.
Hi @eric.a.momper ,
Please see attached for the wireshark plugin. Once added, you can modify the options from preferences>protocols>NVIDIA_FAPI.
scffapi_nv.lua.zip (27.1 KB)
@eric.a.momper Do you use SCF FAPI version 10.02 or 10.04?
Thanks for the Wireshark plugin; the pcap dissected OK. I think I had an issue with how I was copying the pcap out of the container, so I'm double-checking that. I compiled Aerial 24-2 with: cmake .. -DCMAKE_TOOLCHAIN_FILE=cuPHY/cmake/toolchains/native -DSCF_FAPI_10_04=ON
Please make sure that the length field of the CONFIG request TLV is encoded as uint32_t. This is what we expect if SCF FAPI 10.04 is used. In your pcap, I can see that the length field is encoded as uint16_t. This will cause parsing errors in the CONFIG request message.
Thanks for the explanation; I was suspecting something like that. It looks like on this branch of OAI the L2 config encoding resolves to packing the length as uint16_t:
uint8_t pack_tl(nfapi_tl_t *tl, uint8_t **ppWritePackedMsg, uint8_t *end) {
  // Both tag and length are packed as 16-bit values (SCF FAPI 10.02 style)
  return (push16(tl->tag, ppWritePackedMsg, end) &&
          push16(tl->length, ppWritePackedMsg, end));
}
Would it be easier if I rebuild Aerial without the 10_04 FAPI? Or what is the recommended option for running ARC-OTA with a Foxconn radio?
@bkecicioglu
Thanks for this info. Rebuilding without -DSCF_FAPI_10_04=ON, I saw the L1 was able to get further and send a CONFIG.response:
nv-cubb | 20:44:10.482169 WRN 128 0 [NVIPC.EFD] Share event_fd succeed: efd_tx=458, efd_rx=467
nv-cubb | 20:44:10.482173 WRN 128 0 [NVIPC.EFD] [nvipc][core 08 ] nvipc unix socket exit
oai-ran | [VNF] pnf connection indication idx:1
oai-ran | Try to send first CONFIG.request
oai-ran | 612299638 [I] 2217166720: aerial_nr_send_config_request: [VNF] 1.1 pnf p7 (null):0 timing 30 10 0 10
oai-ran | 612299645 [I] 2217166720: aerial_nr_send_config_request: [VNF] Send NFAPI_CONFIG_REQUEST
oai-ran | 612299651 [D] 2217166720: pack_nr_tlv: TLV 0x1001 with padding of 2 bytes
oai-ran | 612299655 [D] 2217166720: pack_nr_tlv: TLV 0x1002 with padding of 0 bytes
oai-ran | 612299663 [D] 2217166720: pack_nr_tlv: TLV 0x1005 with padding of 2 bytes
oai-ran | 612299667 [D] 2217166720: pack_nr_tlv: TLV 0x1006 with padding of 2 bytes
oai-ran | 612299670 [D] 2217166720: pack_nr_tlv: TLV 0x1007 with padding of 0 bytes
oai-ran | 612299677 [D] 2217166720: pack_nr_tlv: TLV 0x100a with padding of 2 bytes
oai-ran | 612299683 [D] 2217166720: pack_nr_tlv: TLV 0x100c with padding of 2 bytes
oai-ran | 612299689 [D] 2217166720: pack_nr_tlv: TLV 0x100d with padding of 3 bytes
oai-ran | 612299691 [D] 2217166720: pack_nr_tlv: TLV 0x100e with padding of 0 bytes
oai-ran | 612299697 [D] 2217166720: pack_nr_tlv: TLV 0x1010 with padding of 3 bytes
oai-ran | 612299701 [D] 2217166720: pack_nr_tlv: TLV 0x1011 with padding of 3 bytes
oai-ran | 612299705 [D] 2217166720: pack_nr_tlv: TLV 0x1012 with padding of 3 bytes
oai-ran | 612299707 [D] 2217166720: pack_nr_tlv: TLV 0x1013 with padding of 3 bytes
oai-ran | 612299712 [D] 2217166720: pack_nr_tlv: TLV 0x1014 with padding of 3 bytes
oai-ran | 612299716 [D] 2217166720: pack_nr_tlv: TLV 0x1029 with padding of 3 bytes
oai-ran | 612299721 [D] 2217166720: pack_nr_tlv: TLV 0x1015 with padding of 2 bytes
oai-ran | 612299727 [D] 2217166720: pack_nr_tlv: TLV 0x1016 with padding of 3 bytes
oai-ran | 612299731 [D] 2217166720: pack_nr_tlv: TLV 0x1017 with padding of 2 bytes
oai-ran | 612299736 [D] 2217166720: pack_nr_tlv: TLV 0x1018 with padding of 3 bytes
oai-ran | 612299738 [D] 2217166720: pack_nr_tlv: TLV 0x1019 with padding of 2 bytes
oai-ran | 612299743 [D] 2217166720: pack_nr_tlv: TLV 0x101b with padding of 3 bytes
oai-ran | 612299745 [D] 2217166720: pack_nr_tlv: TLV 0x101c with padding of 3 bytes
oai-ran | 612299749 [D] 2217166720: pack_nr_tlv: TLV 0x101f with padding of 3 bytes
oai-ran | 612299754 [D] 2217166720: pack_nr_tlv: TLV 0x1020 with padding of 3 bytes
oai-ran | 612299757 [D] 2217166720: pack_nr_tlv: TLV 0x1021 with padding of 0 bytes
oai-ran | 612299761 [D] 2217166720: pack_nr_tlv: TLV 0x1022 with padding of 0 bytes
oai-ran | 612299763 [D] 2217166720: pack_nr_tlv: TLV 0x1022 with padding of 0 bytes
oai-ran | 612299769 [D] 2217166720: pack_nr_tlv: TLV 0x1028 with padding of 3 bytes
oai-ran | Entering ITTI signals handler
oai-ran | TYPE <CTRL-C> TO TERMINATE
nv-cubb | 20:44:11.554039 WRN msg_processing 0 [SCF.PHY] txPortTlvPresent absent. Setting nTxAnt to 4
nv-cubb | 20:44:11.554039 WRN msg_processing 0 [SCF.PHY] txPortTlvPresent absent. Setting nRxAnt to 2
nv-cubb | 20:44:11.554057 WRN msg_processing 0 [SCF.PHY] PHY Cell Id = 51, M-Plane Id= 1
nv-cubb | 20:44:11.580116 WRN msg_processing 0 [DRV.PUSCH] tvStatPrms: PUSCH enableDeviceGraphLaunch=1 enableCsiP2Fapiv3 = 0
nv-cubb | 20:44:11.580122 WRN msg_processing 0 [DRV.PUSCH] static_params.nMaxPrb: 273
nv-cubb | 20:44:11.580122 WRN msg_processing 0 [DRV.PUSCH] static_params.nMaxRx: 0
nv-cubb | 20:44:11.594782 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 684.864 MiB for cuPHY PUSCH channel object (0x2cbdd4c0000).
nv-cubb | 20:44:11.594796 WRN msg_processing 0 [CUPHY.PUSCH_RX] PuschRx: Running with eqCoeffAlgo 1
nv-cubb | 20:44:11.612364 WRN msg_processing 0 [DRV.PUSCH] tvStatPrms: PUSCH enableDeviceGraphLaunch=1 enableCsiP2Fapiv3 = 0
nv-cubb | 20:44:11.612365 WRN msg_processing 0 [DRV.PUSCH] static_params.nMaxPrb: 273
nv-cubb | 20:44:11.612365 WRN msg_processing 0 [DRV.PUSCH] static_params.nMaxRx: 0
nv-cubb | 20:44:11.625791 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 684.864 MiB for cuPHY PUSCH channel object (0x2cbddcd0000).
nv-cubb | 20:44:11.625792 WRN msg_processing 0 [CUPHY.PUSCH_RX] PuschRx: Running with eqCoeffAlgo 1
nv-cubb | 20:44:11.643327 WRN msg_processing 0 [DRV.PUSCH] tvStatPrms: PUSCH enableDeviceGraphLaunch=1 enableCsiP2Fapiv3 = 0
nv-cubb | 20:44:11.643328 WRN msg_processing 0 [DRV.PUSCH] static_params.nMaxPrb: 273
nv-cubb | 20:44:11.643328 WRN msg_processing 0 [DRV.PUSCH] static_params.nMaxRx: 0
nv-cubb | 20:44:11.657197 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 684.864 MiB for cuPHY PUSCH channel object (0x2cbde0a0000).
nv-cubb | 20:44:11.657198 WRN msg_processing 0 [CUPHY.PUSCH_RX] PuschRx: Running with eqCoeffAlgo 1
nv-cubb | 20:44:11.662359 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 3.404 MiB for cuPHY PUCCH channel object (0x2cbdd397e00).
nv-cubb | 20:44:11.665289 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 3.404 MiB for cuPHY PUCCH channel object (0x2cbdd398c00).
nv-cubb | 20:44:11.669444 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 3.404 MiB for cuPHY PUCCH channel object (0x2cbdd399a00).
nv-cubb | 20:44:11.672358 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 3.404 MiB for cuPHY PUCCH channel object (0x2cbdd39a800).
nv-cubb | 20:44:11.675436 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.060 MiB for cuPHY PRACH channel object (0x2cbdde2fc80).
nv-cubb | 20:44:11.676395 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.060 MiB for cuPHY PRACH channel object (0x2cbdde2fac0).
nv-cubb | 20:44:11.688808 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 98.130 MiB for cuPHY PDSCH channel object (0x2cbdd456000).
nv-cubb | 20:44:11.699027 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 98.130 MiB for cuPHY PDSCH channel object (0x2cbdd456c00).
nv-cubb | 20:44:11.709294 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 98.130 MiB for cuPHY PDSCH channel object (0x2cbdd459000).
nv-cubb | 20:44:11.720044 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 98.130 MiB for cuPHY PDSCH channel object (0x2cbdd45a800).
nv-cubb | 20:44:11.730498 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 98.130 MiB for cuPHY PDSCH channel object (0x2cbdd45c000).
nv-cubb | 20:44:11.741520 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 98.130 MiB for cuPHY PDSCH channel object (0x2cbdd45d800).
oai-ran | 612540956 [I] 1811936832: aerial_nr_config_resp_cb: [VNF] Received NFAPI_CONFIG_RESP idx:1 phy_id:0
oai-ran | [NFAPI_VNF] Received CONFIG.response, gNB is ready!
oai-ran | 612540978 [D] 1811936832: nfapi_vnf_pnf_list_find: config->pnf_list:0x55a5eaf9b380
oai-ran | 612540983 [E] 1811936832: nfapi_vnf_pnf_list_find: nfapi_vnf_pnf_list_find : curr->p5_idx:1 p5_idx:1
nv-cubb | 20:44:11.752138 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 98.130 MiB for cuPHY PDSCH channel object (0x2cbdd45f000).
nv-cubb | 20:44:11.763196 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 98.130 MiB for cuPHY PDSCH channel object (0x2cbdfaa0c00).
nv-cubb | 20:44:11.773442 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 98.130 MiB for cuPHY PDSCH channel object (0x2cbdfaa2400).
nv-cubb | 20:44:11.784153 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 98.130 MiB for cuPHY PDSCH channel object (0x2cbdfaa3c00).
nv-cubb | 20:44:11.786428 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.001 MiB for cuPHY SSB channel object (0x2cbd0442d80).
nv-cubb | 20:44:11.786755 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.001 MiB for cuPHY SSB channel object (0x2cbd0443080).
nv-cubb | 20:44:11.786991 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.001 MiB for cuPHY SSB channel object (0x2cbd0443380).
nv-cubb | 20:44:11.787216 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.001 MiB for cuPHY SSB channel object (0x2cbd0443680).
nv-cubb | 20:44:11.787429 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.001 MiB for cuPHY SSB channel object (0x2cbd0443980).
nv-cubb | 20:44:11.787639 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.001 MiB for cuPHY SSB channel object (0x2cbd0443c80).
nv-cubb | 20:44:11.787866 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.001 MiB for cuPHY SSB channel object (0x2cbd0443f80).
nv-cubb | 20:44:11.788077 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.001 MiB for cuPHY SSB channel object (0x2cbd0444280).
nv-cubb | 20:44:11.788283 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.001 MiB for cuPHY SSB channel object (0x2cbd0444580).
nv-cubb | 20:44:11.788489 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.001 MiB for cuPHY SSB channel object (0x2cbd0444880).
nv-cubb | 20:44:11.789279 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.482 MiB for cuPHY PDCCH channel object (0x2cbde179100).
nv-cubb | 20:44:11.789537 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.482 MiB for cuPHY PDCCH channel object (0x2cbde179600).
nv-cubb | 20:44:11.790243 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.482 MiB for cuPHY PDCCH channel object (0x2cbde179b00).
nv-cubb | 20:44:11.790513 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.482 MiB for cuPHY PDCCH channel object (0x2cbde17a000).
nv-cubb | 20:44:11.791276 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.482 MiB for cuPHY PDCCH channel object (0x2cbde17a500).
nv-cubb | 20:44:11.791506 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.482 MiB for cuPHY PDCCH channel object (0x2cbde17aa00).
nv-cubb | 20:44:11.791715 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.482 MiB for cuPHY PDCCH channel object (0x2cbde17af00).
nv-cubb | 20:44:11.791926 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.482 MiB for cuPHY PDCCH channel object (0x2cbde17b400).
nv-cubb | 20:44:11.792677 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.482 MiB for cuPHY PDCCH channel object (0x2cbde17b900).
nv-cubb | 20:44:11.792904 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.482 MiB for cuPHY PDCCH channel object (0x2cbde17be00).
nv-cubb | 20:44:11.793124 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.093 MiB for cuPHY CSIRS channel object (0x2cbd0449680).
nv-cubb | 20:44:11.793334 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.093 MiB for cuPHY CSIRS channel object (0x2cbd0449980).
nv-cubb | 20:44:11.793516 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.093 MiB for cuPHY CSIRS channel object (0x2cbd0449c80).
nv-cubb | 20:44:11.793692 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.093 MiB for cuPHY CSIRS channel object (0x2cbd0449f80).
nv-cubb | 20:44:11.793863 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.093 MiB for cuPHY CSIRS channel object (0x2cbd044a280).
nv-cubb | 20:44:11.794022 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.093 MiB for cuPHY CSIRS channel object (0x2cbd044a580).
nv-cubb | 20:44:11.794178 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.093 MiB for cuPHY CSIRS channel object (0x2cbd044a880).
nv-cubb | 20:44:11.794347 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.093 MiB for cuPHY CSIRS channel object (0x2cbd044ab80).
nv-cubb | 20:44:11.794497 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.093 MiB for cuPHY CSIRS channel object (0x2cbd044ae80).
nv-cubb | 20:44:11.794645 WRN msg_processing 0 [CUPHY.MEMFOOT] cuphyMemoryFootprint - GPU allocation: 0.093 MiB for cuPHY CSIRS channel object (0x2cbd044b180).
nv-cubb | 20:44:11.794661 WRN msg_processing 0 [DRV.API] Update cell: mplane_id=1 dl_grid_sz=273
nv-cubb | 20:44:11.794662 WRN msg_processing 0 [DRV.API] Update cell: mplane_id=1 ul_grid_sz=273
nv-cubb | 20:44:11.805791 WRN timer_thread 0 [L2A.TICK] Thread slot_indication_thread_sleep_method initialized fmtlog
nv-cubb | 20:44:11.805798 WRN timer_thread 0 [L2A.TICK] PTP Configs: gps_alpha: 0 gps_beta: 0
@eric.a.momper That is great!
Let us know if you are having any other issues.
Hi @eric.a.momper, the default cmake flags are configured to work with the OAI L2+ in the ARC 1.5 release as-is.
https://docs.nvidia.com/aerial/aerial-ran-colab-ota/current/text/installation_guide/validate_setup.html#start-aerial-cubb-on-the-gnb
The -DCMAKE_TOOLCHAIN_FILE=cuPHY/cmake/toolchains/native flag came with the 24-2 release of Aerial.
system
Closed September 12, 2024, 11:03am
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.