I am developing a multi-process project on Xavier NX, which has processes for multi-sensor data collection, data fusion, and inference with ML models. Currently I use torch.multiprocessing.Queue()
to transmit data from the collection process to the processing process, but I found that the latency fluctuates, normally around 20ms. It becomes extremely high (more than 1000ms), especially when CPU resources are limited.
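To make the setup concrete, here is a minimal sketch of the pipeline (simplified; the tensor shape, timing, and latency measurement are placeholders, not my actual sensor code):

```python
# Minimal sketch of the current producer/consumer pipeline.
# Tensor shapes and rates are placeholders, not the real sensor code.
import time
import torch
import torch.multiprocessing as mp

def collector(queue):
    # Stand-in for the multi-sensor collection process.
    while True:
        frame = torch.rand(3, 480, 640)        # fake sensor frame
        queue.put((time.monotonic(), frame))   # timestamp to measure latency
        time.sleep(0.01)                       # ~100 Hz placeholder rate

def processor(queue):
    # Stand-in for the fusion / ML-inference process.
    while True:
        sent_at, frame = queue.get()
        latency_ms = (time.monotonic() - sent_at) * 1000
        print(f"queue latency: {latency_ms:.1f} ms")

if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)
    q = mp.Queue()
    procs = [mp.Process(target=collector, args=(q,)),
             mp.Process(target=processor, args=(q,))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```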
Therefore, I wonder if there is a better tool for multi-process communication, or a better framework for this kind of project? I previously tried threads with the threading
library, but the overall latency was worse, I think because of the GIL in Python. Another question: is there any approach to allocate CPU resources based on the current system status? For example, when the data collection latency is high, the system could automatically allocate more CPU to the data collection process (roughly what I have in mind is sketched below). I have noticed a lot of work on resource allocation in the mobile systems community, but most of it is not open source.
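To illustrate the kind of dynamic allocation I mean, here is a rough, untested sketch using psutil. The latency source, threshold, and core split are hypothetical placeholders; I am asking whether there is a proper tool or framework for doing this well.

```python
# Rough sketch of the dynamic CPU allocation I have in mind.
# Threshold, core split, and the latency source are hypothetical.
import time
import psutil

LATENCY_THRESHOLD_MS = 100  # arbitrary threshold for this sketch

def rebalance(collector_pid, processor_pid, latency_ms):
    """Give the collector more CPU when its latency is high."""
    all_cores = list(range(psutil.cpu_count()))
    collector = psutil.Process(collector_pid)
    processor = psutil.Process(processor_pid)
    if latency_ms > LATENCY_THRESHOLD_MS:
        # Let the collector use every core, shrink the processor's share,
        # and raise the collector's priority (needs sufficient privileges).
        collector.cpu_affinity(all_cores)
        processor.cpu_affinity(all_cores[: max(1, len(all_cores) // 2)])
        collector.nice(-5)
    else:
        # Restore the default: both processes may use every core.
        collector.cpu_affinity(all_cores)
        processor.cpu_affinity(all_cores)
        collector.nice(0)

def monitor(collector_pid, processor_pid, get_latency_ms):
    # get_latency_ms() would come from the pipeline's own measurements.
    while True:
        rebalance(collector_pid, processor_pid, get_latency_ms())
        time.sleep(1.0)
```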
Thank you so much!!