I usually suggest that performance questions provide a complete test case. You're welcome to do as you wish, of course. A few comments anyway:
Nothing obviously jumps out at me as “wrong”. If you have verified that you get correct results in each case and have also run your code under compute-sanitizer, you will reduce the likelihood that you are doing something “wrong”. I haven’t studied your code carefully, and wouldn’t do so without a full test case anyway (so that, for example, I could use tools such as compute-sanitizer). I’m not going to write my own test harness to wrap around someone else’s kernel code.
It’s not directly the question you asked, but the async memcpy operations should not be expected to perform better than an “ordinary” global->shared load if you issue a wait immediately after committing the work (and the fully asynchronous behavior is only available on cc8.0 and later). One of the principal benefits of the async version over the “ordinary” pattern that people have been using since day 1 of CUDA is that your kernel code can do other, independent work while the async operation proceeds (especially if that is compute-bound work).
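To illustrate that point, here is a minimal hypothetical sketch (not your kernel; the names and sizes are invented) of the pattern that actually benefits from the async copy: commit the global->shared copy, do unrelated work, and only then wait. If the wait immediately follows the commit, you have essentially reproduced an ordinary synchronous load.

```cuda
#include <cooperative_groups.h>
#include <cooperative_groups/memcpy_async.h>
namespace cg = cooperative_groups;

// Hypothetical sketch: assumes n is a multiple of blockDim.x and the
// kernel is launched with blockDim.x * sizeof(float) dynamic shared memory.
__global__ void stage_and_work(const float *gin, float *gout,
                               float *scratch, int n) {
    extern __shared__ float tile[];
    cg::thread_block block = cg::this_thread_block();
    int base = blockIdx.x * blockDim.x;

    // Commit the global->shared copy; on cc8.0+ this can proceed
    // asynchronously in hardware while the SM does other work.
    cg::memcpy_async(block, tile, gin + base, sizeof(float) * blockDim.x);

    // Independent (ideally compute-bound) work that does not touch `tile`.
    // This overlap is where the async version can pay off.
    float v = scratch[base + threadIdx.x];
    scratch[base + threadIdx.x] = v * v + 1.0f;

    // Wait only after the useful independent work, not right after commit.
    cg::wait(block);

    int gidx = base + threadIdx.x;
    if (gidx < n) gout[gidx] = tile[threadIdx.x];
}
```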
Depending on your RADIUS and other factors that are not deducible from what you have shown, I would guess it is entirely possible for the shared-optimized 1D stencil (whether async or not - see the async comment above) to have little benefit over the non-shared version. The async copy API is supported on cc7.x and higher (though, as already noted, the hardware-accelerated asynchronous behavior requires cc8.0+). Depending on which GPU you are running on, you may have as much as 40MB of L2 cache. If the L2 cache is “large relative to your dataset size”, or simply large enough to hold the working footprint of the threads that can be resident on your GPU, then shared memory may provide little additional benefit, because both shared memory and L2 will capture the data reuse. In fact, beginning with the Volta generation of GPUs, NVIDIA began pointing out (for example, see slide 10 here) that the usual shared-memory optimizations for data reuse might provide diminishing returns, due to the larger L1 and L2 cache structures in newer GPUs.