start/end value for monotonically increasing integer sequences in array

I’m not sure whether this particular algorithm has a name; my search terms haven’t turned up what I’m looking for.

Say I have an array like so, where I have an increasing sequence of integers, some of which are missing:
[ 10 11 12 15 16 19 ]

What I would like is the start and end indices of each maximal run of consecutive integers (i.e., each stretch where adjacent elements differ by exactly 1).
For the array above, the output would be (assuming 0-based indexing):

det[0].start = 0, det[0].end = 2
det[1].start = 3, det[1].end = 4
det[2].start = 5, det[2].end = 5

In MATLAB, for example, I’d resort to using diff and splitting the sequence that way. I’m unsure whether this particular algorithm has a name, or whether a CUDA parallel implementation is already available.

It shouldn’t be difficult in thrust. You could start with adjacent_difference()

Then you’d have an array like:

[ 10 1 1 3 1 3 ]

(adjacent_difference copies the first element through unchanged; every element after that is the difference from its predecessor.)

A simple comparison kernel could then find the beginning and end of each run of 1s in that array.

Since adjacent difference is trivial to compute in a CUDA kernel, it might be easiest just to write a single CUDA kernel that does it all.

As for what to call it: This seems to be a hybrid of delta encoding followed by run-length encoding, and as such a fairly obvious concept, but if I search for “run-length delta encoding” I am not getting a whole lot of search engine matches. There may be something similar in graph theory, like clustering, but I am not very knowledgeable about that area.

[Later:] Searching for “delta run-length encoding” on the other hand gives a reasonable number of hits, including papers on sparse matrix methods.

Thanks for the input. Adjacent difference is the name I was after. It puts the data in a format similar to what I need, and yes, I just saw it’s trivial from the slides here:

I also saw there was a way to do the same thing with run-length encoding, but I didn’t look into it much.