We’re working on a larger platform for the analysis of microscope images. It’s focused on centralized computing in the cloud, but some modules would benefit from running on the edge. We successfully deployed an artifact detection module on a Jetson Nano 2GB.
Microscope slides can be defective in various ways: a mistake during the staining procedure performed to make specific tissues more visible, improper collection of the sample, an issue with the sample itself, or pen marks left on the slide by a pathologist.
Automated quality control right after scanning finishes helps prevent cluttering the database and gives the scanner operator a chance to redo the scan or take other appropriate action while it’s still possible.
In our project, we segment microscope slide scans with a U-Net-like, fully convolutional neural network, converted to TensorRT, to find pen marks. Since we use a sliding-window approach, we can run inference on arbitrarily large images even on the Jetson Nano 2GB. The largest image in the demo is ~3 MP; inference on a single 256x256 px window takes 140 ms on average.
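The sliding-window idea can be sketched roughly like this: tile the large scan into fixed-size windows, run the segmentation model on each tile, and stitch the per-tile masks back into a full-size mask. This is only a minimal illustration of the tiling logic, not our actual pipeline; the `infer` callable stands in for the TensorRT engine, and the names, padding strategy, and non-overlapping stride are all assumptions for the sketch.

```python
import numpy as np

def sliding_windows(image, win=256, stride=256):
    """Yield (y, x, tile) covering the image; edge tiles are zero-padded to win x win."""
    h, w = image.shape[:2]
    for y in range(0, h, stride):
        for x in range(0, w, stride):
            tile = image[y:y + win, x:x + win]
            if tile.shape[0] < win or tile.shape[1] < win:
                # Pad partial edge tiles so the network always sees win x win input.
                padded = np.zeros((win, win) + image.shape[2:], dtype=image.dtype)
                padded[:tile.shape[0], :tile.shape[1]] = tile
                tile = padded
            yield y, x, tile

def segment(image, infer, win=256, stride=256):
    """Run per-window inference (e.g. a TensorRT engine) and stitch a full-size mask.

    `infer` is a placeholder: it takes a (win, win, ...) tile and returns a
    (win, win) mask. In the real system this would wrap the TensorRT engine.
    """
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)
    for y, x, tile in sliding_windows(image, win, stride):
        pred = infer(tile)
        ph, pw = min(win, h - y), min(win, w - x)
        # Crop the prediction back to the unpadded region before stitching.
        mask[y:y + ph, x:x + pw] = pred[:ph, :pw]
    return mask
```

With non-overlapping 256 px windows, a ~3 MP scan is about 48 tiles, which at ~140 ms per window is why per-window latency dominates the total runtime on the Nano.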
Here’s the demo:
We didn’t have a scanner at hand, so we mocked one with a 3D-printed dummy and a contact switch. The Jetson wasn’t connected to a monitor; we used RDP to interact with the GUI.
We wrote a blog post about the topic in general: Using edge AI in healthcare - an example | Tooploox