In the usual GAIL implementation, the generator step is carried out a few (2-3) times for every discriminator step. I cannot find this in the AMP-GAIL implementation (learning/amp_continuous.py): it looks like it executes a single generator step and a single discriminator step per training iteration.
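For reference, here is a minimal sketch of the update pattern I mean. All names (`policy_update`, `discriminator_update`, `G_STEPS_PER_D_STEP`) are hypothetical placeholders for illustration only and are not taken from learning/amp_continuous.py:

```python
# Hypothetical sketch of the update ratio in a generic GAIL-style loop.
# None of these names come from the AMP code in learning/amp_continuous.py.

G_STEPS_PER_D_STEP = 3  # usual GAIL: a few generator updates per discriminator update


def policy_update(batch):
    # Placeholder for a policy (generator) update on a minibatch of rollout data.
    pass


def discriminator_update(agent_batch, expert_batch):
    # Placeholder for a discriminator update on agent vs. expert transitions.
    pass


def train_epoch(rollouts, expert_data):
    # Usual GAIL pattern: several generator steps for each discriminator step.
    for batch in rollouts:
        for _ in range(G_STEPS_PER_D_STEP):
            policy_update(batch)
        discriminator_update(batch, expert_data)
    # By contrast, amp_continuous.py appears to interleave exactly one
    # generator update and one discriminator update per training step.
```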