I am using pylibdmtx to decode Data Matrix codes on my Jetson device.
It works well but is very slow, since it runs on the Jetson's ARM CPU.
I would like to move the decoding to the GPU instead of the CPU to get faster results,
but I am not sure how to do so.
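For context, my current code is essentially the following (the image loading is simplified, and `timeout`/`max_count` are just the knobs I have tried so far):

```python
import cv2
from pylibdmtx.pylibdmtx import decode

# In my real code the frame comes from the camera pipeline
image = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# Full-frame decode; this is the call that is slow on the Jetson's CPU
results = decode(image, timeout=500, max_count=1)
for r in results:
    print(r.data, r.rect)
```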
An alternative idea: what if I used a deep learning model to first localize the Data Matrix code (i.e., find its pixel coordinates) and then decode only that region?
That would probably be faster, since I have tested pylibdmtx on small cropped images and it decodes them much more quickly.
If anybody knows of a pre-trained model, or a dataset I could use to train my own, that would be a great help.
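The pipeline I have in mind would look something like this sketch, where `detect_datamatrix` is a placeholder for whatever detection model I end up using (everything about it, including the box format, is hypothetical):

```python
import cv2
from pylibdmtx.pylibdmtx import decode

def detect_datamatrix(image):
    """Placeholder for the detector I'm looking for: should return a
    list of (x, y, w, h) bounding boxes of candidate Data Matrix regions."""
    raise NotImplementedError

image = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

for (x, y, w, h) in detect_datamatrix(image):
    # Pad the crop a little so the quiet zone around the code survives
    pad = 10
    crop = image[max(y - pad, 0):y + h + pad, max(x - pad, 0):x + w + pad]
    # Decoding a small crop is much faster than decoding the full frame
    for r in decode(crop, timeout=200):
        print(r.data)
```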