Data matrix code

I am using pylibdmtx to decode Data Matrix codes on my Jetson device.
It works well but is very slow, since it runs on the Jetson's ARM CPU.
I want to move the decoding to the GPU to get faster results, but I am not sure how to do so.

What if I used a deep learning model to first localize the Data Matrix code (find its pixel coordinates) and then decode only that region?
It would probably be faster, since I have tested that decoding runs faster on small images.
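To illustrate the idea, here is a minimal sketch of the localize-then-decode pipeline. The `crop_roi` helper and the bounding box `(x, y, w, h)` are hypothetical stand-ins for whatever a detector would return; the pylibdmtx call on the cropped region is shown commented out, since the point is only that the decoder sees a small crop instead of the full frame:

```python
import numpy as np

def crop_roi(image, box, margin=8):
    """Crop a detected region (hypothetical detector output: x, y, w, h),
    with a small margin so the quiet zone around the code is preserved."""
    x, y, w, h = box
    y0 = max(y - margin, 0)
    x0 = max(x - margin, 0)
    return image[y0:y + h + margin, x0:x + w + margin]

# A full-resolution grayscale frame, as if grabbed from the camera.
frame = np.zeros((1080, 1920), dtype=np.uint8)

# Pretend the detector found a code at (x=600, y=400) of size 120x120.
roi = crop_roi(frame, (600, 400, 120, 120))
print(roi.shape)  # → (136, 136)

# Decode only the small crop instead of the whole frame, e.g.:
# from pylibdmtx.pylibdmtx import decode
# results = decode(roi)  # far less work than decode(frame)
```

Since pylibdmtx's runtime grows with image area, decoding a ~136×136 crop should be far cheaper than scanning the full 1080×1920 frame.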

If anybody knows of a pre-trained model, or a dataset I could use to train my own, that would be a lot of help.


Deploying pylibdmtx on the GPU requires a GPU implementation from the library itself.
So please first check with the library provider whether a GPU-based implementation exists.

For DNN inference, we do have a library to accelerate inference on Jetson, called TensorRT.
It will give you GPU-accelerated performance.
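As a hedged sketch of that workflow: if you export a detector to ONNX, the `trtexec` tool bundled with JetPack (under `/usr/src/tensorrt/bin`) can build a TensorRT engine from it. The file names `detector.onnx` and `detector.engine` below are placeholders, not files from this thread:

```shell
# Hypothetical example: build a GPU inference engine from an ONNX detector.
# --fp16 enables half precision, which is usually much faster on Jetson GPUs.
/usr/src/tensorrt/bin/trtexec \
    --onnx=detector.onnx \
    --saveEngine=detector.engine \
    --fp16
```

The resulting engine can then be loaded for fast localization, with the decoding step still done by an exact decoder on the cropped region.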

However, you may want an exact decoder rather than a Data Matrix “predictor,” which can make mistakes.
So our suggestion is to look for a third-party decoder with GPU support instead.