Generally, yes, you can embed a thrust algorithm call in a functor. That thrust algorithm call could, in turn, make use of another user-defined functor.
An example of a thrust functor making a thrust algorithm call is here:
in particular, study the construction and usage of the sort_functor there. (It doesn’t use a custom operator, though. Instead it uses a built-in operator. But that could be changed.)
what you have shown will not work, possibly for several reasons. One of them is that this:
is invalid usage.
vec1Ptr is an ordinary pointer to int
In C++, when you do something like:
you are attempting to invoke a class method, for that particular object. In ordinary thrust usage,
would imply that thing is an object of some class type that has a class method named begin
This is not a true statement when thing is a bare pointer.
It would be a true statement if thing were, e.g., a thrust vector:
The problem would then arise that objects of type thrust::device_vector can’t be directly used in CUDA device code, as already pointed out to you here:
So you would need to craft your usage of thrust algorithms embedded in functors so as to invoke the device path for the algorithm (e.g. via an execution policy such as thrust::seq), and to use only bare device pointers as iterators. I refer you to the first example link.
To anticipate the next question: yes, unlike thrust::device_vector, thrust::device_ptr is usable directly in CUDA device code. Usage-wise, it is not much different from a bare pointer; like a bare pointer, it does not have begin() and end() methods.
another reason what you have shown will not work is that Op2 has no knowledge of what Op1 is. But that is solvable with some recrafting of Op2 (see my next comment in this thread for an example).