Poor FPS Streaming from A10

I’m streaming frames encoded from my own Windows 3D application on AWS EC2, using a G5 instance with the A10. I’m encoding with h264_nvenc, via FFmpeg libraries I’ve compiled and linked into the application.
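For reference, the encoder is opened roughly like the sketch below (trimmed down, not my exact code; the pixel format, bitrate, preset and tune values here are placeholders for illustration):

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>
}
#include <stdexcept>

// Trimmed-down sketch of opening the NVENC encoder via libavcodec.
// Resolution, framerate, bitrate and preset are placeholders, not my real settings.
static AVCodecContext* open_nvenc_encoder(int width, int height, int fps)
{
    const AVCodec* codec = avcodec_find_encoder_by_name("h264_nvenc");
    if (!codec)
        throw std::runtime_error("h264_nvenc not found in this FFmpeg build");

    AVCodecContext* ctx = avcodec_alloc_context3(codec);
    ctx->width     = width;
    ctx->height    = height;
    ctx->time_base = AVRational{1, fps};
    ctx->framerate = AVRational{fps, 1};
    ctx->pix_fmt   = AV_PIX_FMT_YUV420P;   // frames are converted to this before encode
    ctx->bit_rate  = 8000000;

    // Low-latency-ish options; illustrative only.
    av_opt_set(ctx->priv_data, "preset", "p4", 0);
    av_opt_set(ctx->priv_data, "tune",   "ll", 0);

    if (avcodec_open2(ctx, codec, nullptr) < 0) {
        avcodec_free_context(&ctx);
        throw std::runtime_error("failed to open h264_nvenc");
    }
    return ctx;
}
```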
My application renders two streams, each from independent viewpoints (not stereo fwiw, just separate views of the same scene).
I’m seeing strange behaviour where, the first time I run my application on an instance, the streams are encoded at as low as 9 FPS.
If I then run the same app again on the same instance, I consistently get 30 FPS for each stream (which is the framerate I’m expecting).
I had thought that the drivers hadn’t spun up properly, so I wait until nvidia-smi returns a zero exit code before doing any encoding or starting my app.
I’m verifying the encoded framerate using nvidia-smi encodersessions.
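Concretely, those two checks look roughly like the sketch below (the timeout, the redirection and the hard-coded commands are placeholders rather than my exact code):

```cpp
#include <chrono>
#include <cstdlib>
#include <thread>

// Sketch of the "wait for the driver" step: keep invoking nvidia-smi until it
// exits with 0, or give up after a timeout. The 60 s timeout is arbitrary.
static bool wait_for_nvidia_smi(std::chrono::seconds timeout = std::chrono::seconds(60))
{
    const auto deadline = std::chrono::steady_clock::now() + timeout;
    while (std::chrono::steady_clock::now() < deadline) {
        if (std::system("nvidia-smi > NUL 2>&1") == 0)   // Windows; use /dev/null on Linux
            return true;
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    return false;
}

// Once the streams are up, I check the per-session framerate with:
//   nvidia-smi encodersessions
```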
I don’t see this behaviour with the same code on an instance that uses the T4.
I’ve tried “warming up” the encoder on the GPU by throwing some dummy frames (100–1000) at it, in all manner of combinations: before starting my application via a separate app, just before I run the application, and just before I make a connection and start streaming.
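The warm-up itself is nothing more sophisticated than this sketch, which reuses the open_nvenc_encoder helper from the earlier sketch on a throwaway context; the frame count, the black fill and the lack of error handling are all placeholders:

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>
}
#include <cstring>

AVCodecContext* open_nvenc_encoder(int width, int height, int fps); // from the sketch above

// Rough warm-up: open a throwaway h264_nvenc context, push N black frames
// through it, drain and discard the packets, then tear it down.
static void warm_up_nvenc(int width, int height, int fps, int num_frames = 300)
{
    AVCodecContext* ctx = open_nvenc_encoder(width, height, fps);
    AVFrame*  frame = av_frame_alloc();
    AVPacket* pkt   = av_packet_alloc();
    frame->format = ctx->pix_fmt;
    frame->width  = ctx->width;
    frame->height = ctx->height;
    av_frame_get_buffer(frame, 0);

    // Black YUV420P: Y = 16, U/V = 128.
    for (int y = 0; y < frame->height; ++y)
        std::memset(frame->data[0] + y * frame->linesize[0], 16, frame->width);
    for (int y = 0; y < frame->height / 2; ++y) {
        std::memset(frame->data[1] + y * frame->linesize[1], 128, frame->width / 2);
        std::memset(frame->data[2] + y * frame->linesize[2], 128, frame->width / 2);
    }

    for (int i = 0; i < num_frames; ++i) {
        frame->pts = i;
        avcodec_send_frame(ctx, frame);
        while (avcodec_receive_packet(ctx, pkt) == 0)
            av_packet_unref(pkt);          // warm-up output is thrown away
    }
    avcodec_send_frame(ctx, nullptr);      // flush
    while (avcodec_receive_packet(ctx, pkt) == 0)
        av_packet_unref(pkt);

    av_packet_free(&pkt);
    av_frame_free(&frame);
    avcodec_free_context(&ctx);
}
```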
I seemed to have some luck when I manually ran another application, which encoded a small sample video, before mine. The same approach within my own application only seemed to help when I’d logged in to the instance remotely first, and even then it doesn’t consistently give good frame rates.
This feels like a bug. Has anyone else seen anything similar, or got any suggestions on how I might work around it?
I can’t really make restarting the application part of the workflow, as it requires a user connection to start the stream in the first place. I’m on the A10 because some of my models need the VRAM.