The base station dictates the timing (frame/slot/symbol) and carrier frequencies; UEs are slaved to this timing and frequency framework. How is cuBB/cuPHY designed to maintain this timing? Traffic differs across cells and across users, and both vary over time; how is cuBB/cuPHY designed to allocate resources (GPU threads/storage) accordingly? Answers to these questions help estimate the workload needed to improve cuBB/cuPHY for real-product development.
- Is there a timing signal in cuBB/cuPHY that controls the pipeline processing? Where does the timing signal come from? How does each pipeline follow the timing signal with its local clock count? What happens to a pipeline when it reaches its deadline? How is the deadline set: one for all pipelines, or one for each?
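To make the deadline questions concrete, here is a minimal sketch (not cuBB/cuPHY code; all names are hypothetical) of one possible design: a single frame-timing reference shared by all pipelines, from which each slot's absolute deadline is derived using the 5G NR numerology, so a pipeline's local clock only needs to be compared against a common timebase.

```python
# Conceptual sketch, assuming a shared frame-timing reference t0_us that every
# pipeline consults instead of a free-running local clock.

SLOT_US = {0: 1000, 1: 500, 2: 250, 3: 125}  # NR slot duration per numerology mu

def slot_deadline_us(t0_us, mu, slot_index, budget_us):
    """Absolute deadline for processing `slot_index`: slot start (counted from
    the shared reference t0_us) plus the per-pipeline processing budget."""
    slot_start = t0_us + slot_index * SLOT_US[mu]
    return slot_start + budget_us

def check_pipeline(now_us, deadline_us):
    """'ok' if the pipeline finished in time, 'late' otherwise; a late
    pipeline's output would typically be dropped or its slot skipped."""
    return "ok" if now_us <= deadline_us else "late"
```

Under this sketch, "one deadline for all or one for each" reduces to whether `budget_us` is a global constant or a per-pipeline parameter.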
- Is there a priority assigned to each pipeline or task (varying from slot to slot and from user to user) that cuBB/cuPHY uses to allocate resources, schedule/dispatch pipelines/tasks, and monitor timing and adjust task allocation if needed? This matters especially because the complexity of a pipeline is data dependent.
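One way such a per-slot priority could work is sketched below (hypothetical policy, not the cuBB/cuPHY scheduler): each task's priority is its deadline slack after subtracting a data-dependent cost estimate, and the dispatcher always pops the most urgent task first.

```python
import heapq

# Conceptual sketch: per-slot task priorities derived from deadline slack
# minus a data-dependent cost estimate; smaller slack = more urgent.

def priority(deadline_us, est_cost_us):
    """Min-heap key: the smaller the slack, the more urgent the task."""
    return deadline_us - est_cost_us

def dispatch_order(tasks):
    """tasks: list of (name, deadline_us, est_cost_us).
    Returns task names in dispatch order, most urgent first."""
    heap = [(priority(d, c), name) for name, d, c in tasks]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

Because `est_cost_us` depends on the slot's actual data (RBs, layers, QAM order), the ordering can change every slot, which is one answer to how priorities would vary "from slot to slot, and user to user".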
- The number of users varies, and the processing complexity (e.g., number of RBs, number of layers, QAM order) of each user can differ. How is cuBB/cuPHY designed to allocate resources (GPU threads and storage) to each user/task (a heavy user may have multiple tasks)? Is the allocation fixed or dynamic? If dynamic, what are the allocation rules?
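A minimal sketch of what a dynamic rule could look like (a hypothetical sizing rule, not the actual cuPHY kernel-launch logic): derive a per-user GPU grid size from that user's slot parameters, so a user with more RBs or layers gets proportionally more threads.

```python
import math

# Conceptual sketch: one thread per resource element per layer, with
# 12 subcarriers x 14 OFDM symbols per RB per slot (normal cyclic prefix).
# qam_bits affects downstream bit-level work, not the per-RE count sized here.

def threads_for_user(n_rb, n_layers, threads_per_block=256):
    """Returns (num_blocks, threads_per_block) sized to cover one user's
    resource grid for the slot."""
    work_items = n_rb * 12 * 14 * n_layers
    blocks = math.ceil(work_items / threads_per_block)
    return blocks, threads_per_block
```

The open question in the bullet then becomes whether cuPHY recomputes such launch geometry per slot (dynamic) or provisions for a worst-case configuration up front (fixed).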
- The traffic of different sectors/cells/carriers (in carrier aggregation) is dynamic. How is cuBB/cuPHY designed to allocate resources among cells: a fixed per-cell allocation, or one optimized across the sectors/cells/carriers?
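To contrast the two alternatives in the last bullet, here is a sketch of the dynamic option (hypothetical policy, not cuBB/cuPHY behavior): split a fixed GPU budget across cells in proportion to each cell's scheduled load, recomputed every slot; a fixed allocation would simply skip this recomputation.

```python
# Conceptual sketch: proportional per-slot split of a GPU budget (e.g. a
# number of SMs) across cells, by each cell's scheduled RBs this slot.

def allocate_per_cell(total_sm, cell_load):
    """cell_load: dict cell_id -> scheduled RBs this slot.
    Returns an SM share per cell; an idle cell gets nothing this slot."""
    total = sum(cell_load.values())
    if total == 0:
        return {c: 0 for c in cell_load}
    shares = {c: (total_sm * load) // total for c, load in cell_load.items()}
    # Hand leftover SMs (from integer division) to the most loaded cell.
    shares[max(cell_load, key=cell_load.get)] += total_sm - sum(shares.values())
    return shares
```

The trade-off the bullet points at: this dynamic policy tracks bursty traffic but adds per-slot scheduling overhead, while a fixed split wastes capacity on idle cells but is simpler and more predictable.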