How do we proceed with the installation of the Aerial SDK? Even with an access key, we are not getting access to pull the SDK.


We are not able to proceed beyond this point.

Hi @dhyey.mitmpl2023

Welcome to our community!
Please use the command below to pull Aerial 24-3:
docker pull nvcr.io/qhrjhjrvlsbu/aerial-cuda-accelerated-ran:24-3-cubb

Please note: if you used “sudo docker login nvcr.io” to log in, please use “sudo docker pull nvcr.io/qhrjhjrvlsbu/aerial-cuda-accelerated-ran:24-3-cubb”.

Detailed information on getting the container can be found in NVIDIA NGC.
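For reference, a typical NGC login and pull sequence looks like the sketch below; the username is the literal string $oauthtoken, and <YOUR_NGC_API_KEY> is a placeholder for your own key:

sudo docker login nvcr.io
# Username: $oauthtoken
# Password: <YOUR_NGC_API_KEY>
sudo docker pull nvcr.io/qhrjhjrvlsbu/aerial-cuda-accelerated-ran:24-3-cubb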

Thanks


We are still getting an issue. I think it is an NGC NVIDIA key access-level problem. Can you please guide us on how to resolve this and what steps to follow?


@jixu I am also getting the same error as Dhyey. I followed the steps you explained, but I also have the NGC NVIDIA key access-level problem. Could you please guide me?

@sukhdeep.singh @dhyey.mitmpl2023
Please confirm whether you have access to the Aerial SDK. To confirm, you should be able to choose the registry “aerial-ran-and-ai-radio-framework”; an example screenshot is below.


Yes, we have had this access on the NGC NVIDIA site from the beginning, but the issue still persists. Please suggest the next steps, @jixu.


@jixu Yes, I also have access, but I am not able to install the Aerial SDK; there is a problem pulling it. Please help us, I am not able to proceed further.

Hi @sukhdeep.singh @dhyey.mitmpl2023

Would you please re-generate your API key after selecting the project “aerial-ran-and-ai-radio-framework” and try again?

Thanks!

@sukhdeep.singh @dhyey.mitmpl2023
Can you please log out of NGC, then follow the link from 6G Developer Program | NVIDIA Developer to get back in, and re-generate your API key?
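After re-generating the key, re-authenticating would look roughly like this (a sketch; <NEW_NGC_API_KEY> is a placeholder):

sudo docker logout nvcr.io
sudo docker login nvcr.io   # Username: $oauthtoken, Password: <NEW_NGC_API_KEY>
sudo docker pull nvcr.io/qhrjhjrvlsbu/aerial-cuda-accelerated-ran:24-3-cubb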

@jixu We have successfully created an image for cuBB, as shown in the snapshot, but the issue persists with pulling the Aerial SDK. The correct command to pull the Aerial SDK image isn’t available in the documentation.

Could you please provide us with the right command to pull the Aerial SDK image, as it seems the current approach isn’t working?
Thank you


Hi @dhyey.mitmpl2023
Using the first command, you have successfully pulled Aerial SDK version 24-3 (i.e., 24-3-cubb).

The second command is invalid.

You can also get the latest Aerial SDK 25-1 from the NVIDIA NGC catalog: NVIDIA NGC

We recommend getting Aerial SDK 25-1.
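For reference, pulling 25-1 from the NGC catalog would look like this (the tag below matches the 25-1-cubb image referenced later in this thread):

sudo docker pull nvcr.io/nvidia/aerial/aerial-cuda-accelerated-ran:25-1-cubb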

Dear @jixu,

I have followed your instructions and attempted to search for the containers within Aerial SDK 25-1. However, we are unable to view all the available containers in the SDK. Currently, we are only able to pull the Docker image for cuBB using the following command:

sudo docker pull nvcr.io/nvidia/aerial/aerial-cuda-accelerated-ran:25-1-cubb

This command successfully retrieves the cuBB image. However, we are unable to access other tags, as they appear to require an active subscription. Despite being active members, we are encountering this issue; the rest of Aerial SDK 25-1 appears to be missing.

Could you please assist us in resolving this matter?

The image 25-1-cubb should have all the containers for Aerial SDK 25-1. Would you please share the output of running “docker images” after you have finished pulling the container image?
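For example, a command along these lines lists the pulled Aerial image (a sketch):

sudo docker images | grep aerial-cuda-accelerated-ran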

Hi @jixu and @nhashimoto

on running this command:
sudo docker pull nvcr.io/nvidia/aerial/aerial-cuda-accelerated-ran:25-1-cubb

output is:Error response from daemon: Head “https://nvcr.io/v2/nvidia/aerial/aerial-cuda-accelerated-ran/manifests/25-1-cubb”: unknown: {“errors”: [{“code”: “DENIED”, “message”: “Payment Required”}]}

Is a paid subscription required to access the setup?

@dhyey.mitmpl2023 @sukhdeep.singh have you been able to pull the aerial cuBB 25-1 container image successfully?

@sukh.sandhu you should be able to pull the container image now. Please try it out and let us know the outcome.

Thanks!

Hi @jixu,
Thank you, I am able to pull the image for aerial-cuda-accelerated-ran:25-1-cubb. After installation, I followed the cuBB quickstart guide (Generating TV and Launch Pattern Files — Aerial CUDA-Accelerated RAN). While generating the test vectors using MATLAB (I installed MATLAB inside the cuBB container as per step 1), I get the following error:

[Aerial Python]aerial@ubuntu-S2600STB:/opt/nvidia/cuBB/5GModel/aerial_mcore/examples$ ../scripts/gen_e2e_ota_tvs.sh
/opt/nvidia/cuBB/5GModel/aerial_mcore/scripts/gen_e2e_ota_tvs.sh starting…
/opt/nvidia/cuBB/5GModel/aerial_mcore/examples /opt/nvidia/cuBB/5GModel/aerial_mcore
Traceback (most recent call last):
  File "/opt/nvidia/cuBB/5GModel/aerial_mcore/examples/genCuPhyChEstCoeffs.py", line 49, in <module>
    import aerial_mcore as NRSimulator
  File "/usr/local/lib/python3.10/dist-packages/aerial_mcore/__init__.py", line 290, in <module>
    _pir.get_paths_from_os()
  File "/usr/local/lib/python3.10/dist-packages/aerial_mcore/__init__.py", line 142, in get_paths_from_os
    raise RuntimeError(msg)
RuntimeError: Could not find an appropriate directory for MATLAB or the MATLAB runtime in LD_LIBRARY_PATH. Details: file not found: libmwmclmcrrt.so.9.14; LD_LIBRARY_PATH: :/usr/local/MATLAB/MATLAB_Runtime/R2023a/runtime/glnxa64:/usr/local/MATLAB/MATLAB_Runtime/R2023a/bin/glnxa64:/usr/local/MATLAB/MATLAB_Runtime/R2023a/sys/os/glnxa64:/usr/local/MATLAB/MATLAB_Runtime/R2023a/extern/bin/glnxa64:/usr/local/MATLAB/MATLAB_Runtime/R2023a/sys/opengl/lib/glnxa64

Could you please help me resolve this error?
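A quick diagnostic inside the container (a sketch, based on the R2023a paths shown in the error above) is to check whether the missing runtime library is actually present and which directories will be searched:

# Look for the MATLAB Compiler Runtime library the error complains about
find /usr/local/MATLAB -name 'libmwmclmcrrt.so*' 2>/dev/null
# Print the directories listed in LD_LIBRARY_PATH, one per line
echo "$LD_LIBRARY_PATH" | tr ':' '\n'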

Hi @jixu,

My CPU architecture is x86. I pulled the image from
nvcr.io/nvidia/aerial/aerial-cuda-accelerated-ran:25-1-cubb

I have installed MATLAB inside the aerial pod as given in the first part of step 1 in Generating TV and Launch Pattern Files — Aerial CUDA-Accelerated RAN

How can I generate the test vectors using the gen_e2e_ota_tvs.sh script? This will enable me to verify my installation.

Hi @jixu,

Reminder: could you please reply to my previous comment above about generating the test vectors with the gen_e2e_ota_tvs.sh script?

We are stuck; can anybody help, @jixu @eobiodu @nhashimoto? Please help us with the above query! Thanks!

@sukh.sandhu
Did you run the command below on the host (not in the container)?
docker build -t aerial-cuda-accelerated-ran:25-1-cubb-matlab-runtime-enabled .

Note: you need to edit the Dockerfile so that it uses the right base image:

FROM nvcr.io/nvidia/aerial/aerial-cuda-accelerated-ran:25-1-cubb
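After the image is built on the host, starting a container from it would look roughly like this (a sketch; the --gpus all flag is only illustrative of a typical GPU container launch):

sudo docker run --rm -it --gpus all aerial-cuda-accelerated-ran:25-1-cubb-matlab-runtime-enabled bash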

Please see the reply here:
Need access to Aerial SDK for research purpose - Accelerated Computing / Aerial Forum (private) - NVIDIA Developer Forums

@jixu I am trying to integrate the Aerial SDK with the OAI interface using the following Docker Compose file:

version: '3.8'

services:
  nv-cubb:
    container_name: nv-cubb
    image: cubb-build:24-3
    privileged: true
    ipc: host
    network_mode: host
    shm_size: 4096m
    stdin_open: true
    tty: true
    userns_mode: host
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities:
                - gpu
    volumes:
      - /lib/modules:/lib/modules
      - /dev/hugepages:/dev/hugepages
      - /usr/src:/usr/src
      - ./aerial_l1_entrypoint.sh:/opt/nvidia/cuBB/aerial_l1_entrypoint.sh
      - /var/log/aerial:/var/log/aerial
      - ../../../cmake_targets/share:/opt/cuBB/share
      - /tmp/nv_mps:/tmp/nv_mps  # ✅ CRITICAL FOR MPS
    environment:
      - cuBB_SDK=/opt/nvidia/cuBB
      - CUDA_MPS_PIPE_DIRECTORY=/tmp/nv_mps
      - CUDA_MPS_LOG_DIRECTORY=/tmp/nv_mps

    command: bash -c "rm -rf /tmp/phy.log && /opt/nvidia/cuBB/aerial_l1_entrypoint.sh"
    healthcheck:
      test: ["CMD-SHELL", 'grep -q "L1 is ready!" /tmp/phy.log && echo 0 || echo 1']
      interval: 20s
      timeout: 5s
      retries: 5

  oai-gnb-aerial:
    container_name: oai-gnb-aerial
    image: oai-gnb-aerial:latest
    depends_on:
      nv-cubb:
        condition: service_healthy
    cap_drop:
      - ALL
    cap_add:
      - SYS_NICE
      - IPC_LOCK
    ipc: host  # ✅ Use host IPC for nvipc to work reliably
    network_mode: host
    shm_size: 4096m
    stdin_open: true
    tty: true
    cpuset: "13-20"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities:
                - gpu
    volumes:
      - ../../conf_files/gnb-vnf.sa.band78.273prb.aerial.conf:/opt/oai-gnb/etc/gnb.conf
      - /tmp/nv_mps:/tmp/nv_mps  # ✅ Access to MPS shared memory
    environment:
      - USE_ADDITIONAL_OPTIONS=--log_config.global_log_options level,nocolor,time
    healthcheck:
      test: /bin/bash -c "ps aux | grep -v grep | grep -c softmodem"
      interval: 10s
      timeout: 5s
      retries: 5
      

and the following aerial_l1_entrypoint.sh

#!/bin/bash

# Check if cuBB_SDK is defined, if not, use default path
cuBB_Path="${cuBB_SDK:-/opt/nvidia/cuBB}"

cd "$cuBB_Path" || exit 1
# Add gdrcopy to LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/opt/mellanox/dpdk/lib/x86_64-linux-gnu:/opt/mellanox/doca/lib/x86_64-linux-gnu:/opt/nvidia/cuBB/cuPHY-CP/external/gdrcopy/build/x86_64/

# Restart MPS
# Export variables
export CUDA_DEVICE_MAX_CONNECTIONS=8
export CUDA_MPS_PIPE_DIRECTORY=/tmp/nv_mps
export CUDA_MPS_LOG_DIRECTORY=/tmp/nv_mps
# originally it was /var
# Stop existing MPS
sudo -E echo quit | sudo -E nvidia-cuda-mps-control
sudo -E echo "before starting mps"
# Start MPS
sudo -E nvidia-cuda-mps-control -d
sudo -E echo start_server -uid 0 | sudo -E nvidia-cuda-mps-control
sudo -E echo "started mps server "
# Start cuphycontroller_scf
# Check if an argument is provided
if [ $# -eq 0 ]; then
    # No argument provided, use default value
    argument="P5G_FXN_GH"
else
    # Argument provided, use it
    argument="$1"
fi
sudo -E echo "after arguments"
sudo -E "$cuBB_Path"/build/cuPHY-CP/cuphycontroller/examples/cuphycontroller_scf "$argument"
sudo -E ./build/cuPHY-CP/gt_common_libs/nvIPC/tests/pcap/pcap_collect

I am able to see the MPS client and server using the ps aux | grep mps command, both inside the container and on the host.
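For completeness, the MPS control daemon can also be queried directly to list running servers (a sketch, using the same pipe directory as the entrypoint above):

export CUDA_MPS_PIPE_DIRECTORY=/tmp/nv_mps
echo get_server_list | nvidia-cuda-mps-control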

The logs for nv-cubb are:

==========
== CUDA ==
==========

CUDA Version 12.8.0

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

It gets stuck here.

The logs for oai-gnb-aerial get stuck at:

[C]: [nvipc] Waiting for nvipc server to start ...
[C]: [nvipc] Waiting for nvipc server to start ...
[C]: [nvipc] Waiting for nvipc server to start ...
[C]: [nvipc] Waiting for nvipc server to start ...

Could you please help me solve the nvipc issue?
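For reference, these are the kinds of checks I can run to see whether the L1 (the nvipc server side) ever comes up (a sketch, based on the healthcheck above that greps /tmp/phy.log):

# Follow the cuPHY controller container output from the host
docker logs -f nv-cubb
# Or watch the PHY log that the healthcheck inspects, inside the container
docker exec -it nv-cubb bash -c 'tail -f /tmp/phy.log'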