Can we use PyTorch code?

The official guidelines state that models developed on the Sionna platform can be embedded into AODT. However, we are not familiar with the Sionna platform. Can we embed our existing PyTorch-based models into AODT instead? If so, could anybody provide a more comprehensive example of how this can be implemented, thanks!

@dz288 Sionna cannot be embedded into AODT. Sionna is a link-level simulator. You can use Sionna to train your ML model, and then use the trained model in AODT. What do you mean by a PyTorch-based model? Do you wish to integrate PyTorch into AODT, or simply use a PyTorch-trained model in AODT?

Thank you very much for your response. We apologize for any inconvenience caused, as we are beginners in AODT and our understanding in this area is still limited. Currently, we are developing a DRL algorithm for resource offloading in 6G, using the PyTorch framework for implementation. Our intention is not to train directly on AODT but rather to deploy the trained model onto AODT to evaluate its inference performance. Based on our current understanding, there are insufficient examples to confirm whether we can achieve this goal. Could you kindly provide some guidelines or reference any existing studies/papers that have performed similar tasks (deploying models trained in PyTorch onto AODT)? Thanks!

@dz288 Are you able to elaborate on your specific use case? You can send a DM if you wish. If you have not already, please take a look at the section on ML in our documentation, where we have illustrated two use cases:
https://docs.nvidia.com/aerial/aerial-dt/text/ran_digital_twin.html#ml-examples-overview

Thank you for your response. We have previously reviewed this guideline, but the scenario design and many of the technical terms seem to differ significantly from those in our field. Our primary focus is on resource allocation and task offloading in edge computing. As our code is still under development, here is an existing study, with paper and code, that is particularly similar to our work. Our main goal is to determine whether such tasks can be implemented compatibly in AODT. Thank you very much!

Qin, Langtian, et al. “Towards Decentralized Task Offloading and Resource Allocation in User-Centric MEC.” IEEE Transactions on Mobile Computing (2024).

@kpasad1

I am also exploring whether a resource allocation application can be applied in an AODT environment. Could you list the resource variables considered in the relevant paper? It is necessary to compare these with AODT components to determine if optimization is possible based on AODT scenarios.

It’s a pleasure to see someone researching the same topic. The specific resource variables can be referenced in Table 1 on the second page of the paper I provided. My concern lies in the fact that I have hardly encountered these variables in AODT’s guideline, which makes me somewhat worried about whether AODT can meet my requirements.

@dz288 The paper cannot be implemented directly. AODT provides the ability to deploy multiple cells and multiple UEs in a realistic geometry. There is a 5G PHY/MAC pipeline available. From my cursory reading, it seems that you would need to modify the MAC to implement an alternate scheduling algorithm. Is that a correct understanding?

Thank you for your response. We have attempted to understand your reply, but we still have the following questions. You mentioned that we “need to modify the MAC to implement the alternative scheduling algorithm.” Is this modification to the MAC easily achievable within AODT?

Additionally, we have created a simple script to demonstrate task offloading behavior in edge computing, which is included in the attached DOCX file. (Due to forum restrictions, we regret that we are unable to upload the .py file directly. Kindly copy the content into your IDE for execution.) This script does not involve any machine learning algorithms and is solely intended to illustrate the task offloading logic.

Specifically, the logic in this script is as follows: First, the user device generates multiple computational tasks. For each task, we need to determine at each level whether the task should be offloaded to a higher tier for processing.

For Task A, we first evaluate whether the user device (local) has the capability to process it. If Task A’s computational resource requirements exceed the user device’s processing capability, the task will be offloaded to the edge server for evaluation. The offloading decision is made across three levels: user device (local), edge device, and cloud, in that order.

Additionally, there are bandwidth constraints between the user device, edge device, and cloud, which affect the data transmission speed for each task. Therefore, the latency for a single task can be divided into two components: transmission latency and computation latency. These latencies are determined by the available bandwidth and the computational capability of the device (user device, edge device, or cloud) executing the task, respectively.

After running the script, five computational tasks will be generated randomly. These tasks will be offloaded to different devices for execution. During this process, the latency for each task will be calculated, and the amount of computational resources occupied by each device will also be determined.
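Since the forum does not allow attaching the .py file, the offloading logic described above can be sketched as follows. This is a minimal illustration of the decision cascade (local, then edge, then cloud), not the attached script itself; the device capacities, bandwidths, and task ranges are placeholder values invented for the example.

```python
import random

# Hypothetical device capacities (compute units) -- illustrative values only.
DEVICES = {"local": 10.0, "edge": 50.0, "cloud": 500.0}
# Hypothetical uplink bandwidths (MB/s) between adjacent tiers.
BANDWIDTH = {("local", "edge"): 20.0, ("edge", "cloud"): 100.0}
TIERS = ["local", "edge", "cloud"]

def offload(task, used):
    """Walk up the tiers; run the task on the first device with spare capacity.
    Returns (chosen tier, total latency = transmission + computation)."""
    transmission = 0.0
    for i, tier in enumerate(TIERS):
        free = DEVICES[tier] - used[tier]
        if task["cost"] <= free or tier == "cloud":
            used[tier] += task["cost"]            # occupy compute resources
            computation = task["cost"] / DEVICES[tier]
            return tier, transmission + computation
        # Not enough capacity: offload one tier up, paying transmission
        # latency for the hop (data size / hop bandwidth).
        transmission += task["data_mb"] / BANDWIDTH[(tier, TIERS[i + 1])]

def main():
    random.seed(0)
    used = {t: 0.0 for t in TIERS}
    # Five randomly generated computational tasks, as in the script.
    for k in range(5):
        task = {"cost": random.uniform(1, 100),
                "data_mb": random.uniform(1, 16)}
        tier, latency = offload(task, used)
        print(f"task {k}: -> {tier}, latency {latency:.3f} s")
    print("occupancy:", {t: round(u, 1) for t, u in used.items()})

main()
```

Each task's latency is the sum of transmission latency over every hop it traverses and the computation latency on the device that finally executes it, matching the decomposition described above.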

Therefore, based on the logic of our script, we are eager to know whether such task offloading can be implemented on AODT. From our analysis, achieving this operation would require AODT to support the following functionalities:

  1. Deployment of Three Types of Devices: User devices, edge devices, and cloud servers. We understand that these three types of devices do not necessarily need to correspond explicitly to specific entities within AODT. Instead, they could be represented by configuring different parameters to emulate the behavior of each type of device.
  2. Generation of Computational Tasks: These tasks do not necessarily need to be concrete “computational tasks.” Instead, they can be represented similarly to the variables in our script, with simple attributes such as computation_cost = 8 and data = 8 MB, for example.
  3. Bandwidth Configuration Between Different Types of Devices: The ability to set specific bandwidth constraints between user devices, edge devices, and cloud servers.

We are eagerly looking forward to your further response. Your insights will greatly assist our research. Thank you very much for your support!

taskoffloading.docx (15.8 KB)

@dz288 Sorry for the delayed response. With AODT 1.1 and the upcoming version 1.2, unfortunately, your proposal cannot be applied directly. The MAC code is written in C++ with CUDA acceleration. We can help you identify the APIs. You can take a look at the code entry point at:
/aodt_bundle/backend_bundle/aodt_sim/src/asim_loop.cpp