Running pyAerial on RTX 3090 with cuBB 25-2 – Compatibility with CUDA 12.5

Hello,

I’m currently trying to run pyAerial based on cuBB 25-2 and would like to ask a few questions regarding compatibility issues.

I plan to purchase a GH200 system later this year, but in the meantime, I’m attempting to run parts of the pyAerial simulation—specifically the PUSCH pipeline kernels—on an RTX 3090 for preliminary testing.

I understand that cuBB 25-2 uses CUDA Toolkit version 12.9. However, since the latest version of TensorFlow only supports up to CUDA 12.3, I plan to run Sionna on the CPU as a fallback.
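For reference, the CPU fallback I have in mind is just hiding the GPU from TensorFlow (which Sionna uses as its backend) before it is imported. This is a minimal sketch, not pyAerial-specific; the commented-out imports are illustrative:

```python
import os

# Hide all CUDA devices *before* TensorFlow is imported, so Sionna
# (which runs on TensorFlow) falls back to the CPU regardless of the
# CUDA toolkit version installed on the host.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# These imports must come after the environment variable is set:
# import tensorflow as tf
# import sionna
# tf.config.list_physical_devices("GPU")  # should then be empty
```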

At this point, I’ve successfully launched the Jupyter Notebook environment.
The problem is that my current server with the RTX 3090 has CUDA Toolkit version 12.5 installed.

If I upgrade the server’s CUDA Toolkit to 12.9, will I be able to run the PUSCH pipeline kernels on the RTX 3090?

Thank you in advance for your support.

Hi @jinwoomoon,

Welcome to the Aerial forum! pyAerial works with CUDA 12.9. The RTX 3090 has compute capability 8.6, which should also be fine for pyAerial. However, we have not verified pyAerial on an RTX 3090.
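As a quick sanity check, you can query the compute capability directly with `nvidia-smi`. This is a general sketch, not specific to pyAerial; the `compute_cap` query field requires a reasonably recent NVIDIA driver, so the snippet falls back to a message if the query is unavailable:

```shell
# Query the GPU's compute capability (expected "8.6" for an RTX 3090).
# Fall back to a message if nvidia-smi or the compute_cap field is
# unavailable on this machine.
cap=$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader 2>/dev/null \
      || echo "nvidia-smi not available")
echo "$cap"
```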

Can you confirm if you are trying to run from the Aerial container or from the host?

Please let us know if you have any issues.

Thank you.


I’m running it inside the Aerial container, and it’s working successfully.

Thank you.