Hi all,
I am currently investigating how to train and validate AI models for L1 Neural Receiver algorithms using frameworks such as Sionna, the Aerial Framework, AODT, and cuBB. The end-to-end development workflow is illustrated below.
This diagram outlines the closed-loop pipeline for AI-native L1 reception on the NVIDIA AI Aerial platform, comprising three interconnected stages:
① Develop / Train: Algorithm research and model training occur here. Sionna PHY generates differentiable synthetic training data using statistical channel models (e.g., CDL/UMa). The pyAerial notebook leverages this data with PyTorch to train neural networks — such as a neural channel estimator or LLRNet soft de-mapper — and outputs trained weights as saved_models/. The Aerial Framework then compiles JAX-based algorithms via MLIR-TensorRT into deployable .trtengine files.
② Simulate / Assess: Handled by the Aerial Omniverse Digital Twin (AODT). The saved_models/ are loaded into AODT Mode 3 Example 2, where inference runs over a physically accurate, site-specific ray-traced channel. Simulation outputs — including ground-truth CFRs, BLERs, throughputs, and scheduling metrics — are stored in ClickHouse and can be fed back to Stage ① to enrich or replace the statistical Sionna channel model.
③ Deploy: The production environment runs on Aerial CUDA-Accelerated RAN. cuBB/cuPHY executes the L1 pipeline in real time. The Aerial Data Lake captures live over-the-air (OTA) I/Q samples from O-RUs, which pyAerial processes into training datasets — specifically (equalized symbols, LLR) pairs — enabling continuous retraining of models such as LLRNet, thus closing the loop back to Stage ①.
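To make my understanding of the Stage ①/③ training step concrete, here is a minimal sketch of how I imagine an LLRNet-style soft de-mapper being trained in PyTorch on (equalized symbol, LLR) pairs. This is illustrative only and not taken from the documentation: the network shape, the random placeholder data, and the file name are my own assumptions standing in for the pairs that pyAerial / Sionna would actually produce.

```python
import torch
import torch.nn as nn

# Illustrative only: a tiny LLRNet-style soft de-mapper mapping an
# equalized QAM symbol (real, imag) to per-bit LLRs, trained with MSE
# against target LLRs. The random tensors below are placeholders for
# the (equalized symbols, LLR) pairs produced by the pyAerial pipeline.
BITS_PER_SYMBOL = 4  # e.g., 16-QAM

model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, BITS_PER_SYMBOL),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder dataset: (equalized symbol, LLR) pairs.
symbols = torch.randn(1024, 2)             # real/imag parts
llrs = torch.randn(1024, BITS_PER_SYMBOL)  # target LLRs

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(symbols), llrs)
    loss.backward()
    opt.step()

# The trained weights would then be exported as the saved_models/
# artifact consumed by Stage ②, e.g.:
# torch.save(model.state_dict(), "saved_models/llrnet.pt")
```

If this is roughly the intended shape of the workflow, my questions below about extending it to the channel estimator and equalizer should make more sense.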
- The dashed blue arrow represents the documented OTA feedback path, with complete code available in the llrnet_dataset_generation notebook.
- The dashed green arrow from AODT to Stage ① is conceptually documented but requires custom implementation.
- The dashed gray arrow from .trtengine to cuBB currently has no documented end-to-end path.
I have the following questions:
- Is the diagram accurate? Are there any conceptual inaccuracies or misrepresentations that should be corrected?
- AODT v1.1 included ML example notebooks (e.g., for neural channel estimation), but these are absent in v1.4.1. Does this imply that ML functionality has been deprecated, or is it simply undocumented in the latest release?
- The official documentation only shows examples for neural channel estimation (solid green arrow to AODT, gray dashed arrow to cuBB). If I wish to train AI models for all three blocks (channel estimation, equalization, and soft de-mapping), what is the recommended approach? Are there any missing documents, hidden examples, or unpublished workflows I should be aware of?
- Is it possible to train AI models using OTA data beyond the LLRNet case? The cuBB documentation confirms OTA-based training only for the LLRNet soft de-mapper; it remains unclear whether similar OTA-driven training is feasible for the channel estimator or equalizer, and if so, what pathway exists to use OTA I/Q data for these components.
- Can AODT directly output performance metrics such as Normalized Mean Square Error (NMSE), Symbol Error Rate (SER), or Error Vector Magnitude (EVM)? If not, what is the recommended method to compute them from the available data (e.g., CFRs, I/Q samples)?
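Regarding the last question, this is the kind of offline post-processing I currently have in mind, computed from AODT-style outputs. The array names and shapes are my own placeholders, not the actual ClickHouse schema, and the data is synthetic for illustration:

```python
import numpy as np

# Illustrative post-processing of AODT-style outputs (names are mine,
# not the ClickHouse schema). h_true/h_est: ground-truth vs. estimated
# CFRs; x_ref/x_eq: transmitted vs. equalized symbols.
rng = np.random.default_rng(0)
shape = (14, 3276)  # placeholder: OFDM symbols x subcarriers
h_true = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
h_est = h_true + 0.1 * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

# NMSE of the channel estimate (often reported in dB).
nmse = np.sum(np.abs(h_est - h_true) ** 2) / np.sum(np.abs(h_true) ** 2)
nmse_db = 10 * np.log10(nmse)

# EVM (RMS): symbol error power normalized by reference power.
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x_ref = qpsk[rng.integers(0, 4, 10000)]
x_eq = x_ref + 0.05 * (rng.standard_normal(10000) + 1j * rng.standard_normal(10000))
evm_rms = np.sqrt(np.mean(np.abs(x_eq - x_ref) ** 2) / np.mean(np.abs(x_ref) ** 2))

# SER via nearest-neighbour decisions against the constellation.
decisions = qpsk[np.argmin(np.abs(x_eq[:, None] - qpsk[None, :]), axis=1)]
ser = np.mean(decisions != x_ref)
```

If AODT cannot emit these metrics natively, confirmation that computing them this way from the stored CFRs and I/Q samples is the intended approach would already help a lot.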
Any insights, clarifications, or suggestions would be greatly appreciated.
Thanks!
