Accelerating SE(3)-Transformers Training Using an NVIDIA Open-Source Model Implementation

SE(3)-Transformers are versatile graph neural networks unveiled at NeurIPS 2020. NVIDIA just released an open-source optimized implementation that uses 9x less memory and is up to 21x faster than the baseline official implementation. SE(3)-Transformers are well suited to problems with geometric symmetries, such as small-molecule processing, protein refinement, or point cloud applications. They can…


A lot of work went into accelerating these equivariant neural networks, and I hope this implementation will be useful to you. These networks show promise across a variety of physics- and biology-based problems, and performance should no longer be a barrier to adopting them!
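As a quick illustration of the equivariance property these networks are built around (not NVIDIA's implementation, just a toy NumPy sketch): a function f is SE(3)-equivariant when rotating and translating its input rotates and translates its output the same way, i.e. f(Rx + t) = R f(x) + t. The centroid of a point cloud is a minimal example of such a function.

```python
import numpy as np

def centroid(points):
    # A toy SE(3)-equivariant "layer": the centroid of a point cloud
    # transforms exactly like the points themselves.
    return points.mean(axis=0)

rng = np.random.default_rng(0)
points = rng.normal(size=(10, 3))

# Random proper rotation R (orthogonal, det = +1) and translation t
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q * np.sign(np.linalg.det(Q))  # flip sign if det(Q) = -1 (valid in odd dims)
t = rng.normal(size=3)

# Equivariance check: f(R x + t) == R f(x) + t
lhs = centroid(points @ R.T + t)
rhs = centroid(points) @ R.T + t
assert np.allclose(lhs, rhs)
```

SE(3)-Transformer layers enforce this same constraint for learned features, which is what makes them a natural fit for molecular and point cloud data.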

If you have any questions or comments, let us know.