I couldn’t predict whether this would help. For more cores to matter the application must be threaded, and even then it depends on cache hit/miss behavior. If two threads work on the same data, the scheduler will often keep them on the same core, since that raises cache hits and reduces misses. The scheduler might spread the threads across the other cores, but the resulting cache misses could cancel out any gain. GPU use is unaffected by which core(s) the program runs on.
Often, if you were to set up your application to run only on the Denver cores (cores #1-2, the ones named by isolcpus), and your program were the only thing using those cores, you might see better performance simply because other apps would no longer be evicting your data from cache. It really is an experiment, and the outcome depends on both the code and the data. There is no way to know without a lot of experimentation (and if you want to do things in a truly scientific way, you’d need to run the profiler tools against each CPU/GPU combination you test).
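As a rough sketch of that experiment, you can pin a process to specific cores with `taskset` from util-linux. This assumes the Denver cores show up as CPUs 1 and 2 (matching an isolcpus=1-2 kernel boot argument); the `echo` is just a stand-in for your own program:

```shell
# Run a command pinned to the Denver cores (assumed here to be CPUs 1
# and 2, as reserved by an isolcpus=1-2 boot argument). Replace `echo`
# with your actual program:
taskset -c 1,2 echo "running pinned to CPUs 1-2"

# Show the CPU affinity mask of the current shell, to confirm which
# CPUs a process is actually allowed to run on:
taskset -cp $$
```

Note that `taskset` only sets the allowed CPU set; with isolcpus in effect, pinning like this is also the *only* way your program will ever land on those isolated cores, since the scheduler won’t place tasks there on its own.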
Do note that cores #1-2 tend to have higher latency than the other cores, but this is not necessarily a problem (it just depends on the nature of the program).