CloudXR 3.2.2 has worse video quality

CloudXR 3.2.2 comes with much better bandwidth control.
However, the video shows a “water” effect when moving the head, even with the default 100 Mbps bandwidth.
This does not happen in 3.1.


We also see this issue: vertical lines creating a swimming motion. We have confirmed it doesn’t happen in 3.1.1 but does with 3.2.
We have reverted for now.

Hi folks! Thank you for trying out the CloudXR 3.2 SDK. I was wondering if you could provide some more details on which device you are using and what application you are running on the server. If you perform another run, would it be possible to capture a bitstream from the client using -ccb in CloudXRLaunchOptions.txt? More details are here: Command-Line Options — NVIDIA CloudXR SDK documentation
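
For reference, the capture option just goes into the client’s CloudXRLaunchOptions.txt alongside the existing connection options. A minimal sketch of that file (the server address is a placeholder, and -s is assumed here to be the usual server-address option from the documentation linked above):

-s 192.168.1.100 -ccb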

I am using the Oculus client.

I’m seeing the same. I’m using the OculusVR client running on a Quest 2 (tethered to a PC rather than running on WiFi). The server is an EC2 g4dn.4xlarge instance. Both are running CloudXR SDK 3.2 (May 2022).

It does seem intermittent. I get both horizontal and vertical lines, but sometimes it quickly settles down and disappears, and sometimes it stays until another session. I captured a bitstream but, annoyingly, on that occasion the effect was minimal. It happens not just when running an app but also in the SteamVR environment. (I’ll PM you the log and hevc files.)

Do you have any advice on how to improve app quality? I’m also seeing what feels like a sub-par frame rate (e.g. head tracking is slow and causes obvious jitter), even though the app on the server is running well (at 6 ms) and the captured bitstream seems fine too, of much higher quality than what is perceived in the HMD.

From bitstream analysis, I also did not notice anything strange.
The “wave” effect is similar to the distortion issue on a low-quality VR headset, just much more noticeable in 3.2.

Maybe I should start another thread about this? I’m finding great variability in the way the stream is decoded by the client app. The bitstream I sent was actually not too bad, but it does get much worse. The client (Quest 2) can sometimes play at 72 fps without issues, but at other times shows high CPU usage (CPU Level goes up to 4) and the frame rate drops to 50-60 fps, which is when I see the jitter I mentioned. These two results are from the same server, playing the same VR app, using the same client app build on the same headset, connected to the same network. I’ve not changed anything at my end.

Can you advise how to investigate this further? If it’s helpful I can send device logs and bitstream of each instance.

Bumping this up. I just wonder if anyone is getting different results on the Oculus Quest 2.


Any news on this topic?
We are facing the same issue with CloudXR 3.2 and the Vive Focus 3.
Strangely, the issue is only noticeable when looking inside the HMD; we do not see it in the bitstream captures, either from the server or from the HMD.
The issue was not visible in older versions, so it prevents us from upgrading.
Best regards,
Alexis

Hi, I’m getting the same “swimming issue” on Quest 2, v3.2. I’m streaming from a local computer through Steam, with both UE4 and UE5. I can try and send logs if needed.

Hi Keith – can you attach the client and server CloudXR and Streamer logs you have from where you’re seeing this? We’ve been unable to recreate this issue and are curious what your logs may show. Thanks!

Can you look at my log?
They are just from the OculusVR sample with audio enabled.
OculusVR Sample Log 2022-07-12 11.59.40.zip (90.2 KB)

This analysis may help you debug:
I can “reduce” the effect by changing these quality settings:

#ifdef DISP_RES_OCULUS_SUGGESTED
// This is using the suggested texture size for optimal perf per Oculus sdk design
desc.width = texW;
desc.height = texH;
#else
// TODO: This is trying to use display-native per eye w/h instead.
desc.width = 1.5 * dispW/2;
desc.height = 1.5 * dispH;
#endif

desc.maxResFactor = 1.0; // GOptions.mMaxResFactor;

Also, the issue is most critical when foveated rendering is enabled.

We could really use -ccb/-csb to see the raw bitstreams. The descriptions I’ve seen above don’t map to anything I have encountered before, but that could just be language semantics in the way. Seeing the bitstream will let us at least see what’s going on on the network. If you play that file and do NOT see the issue in the raw stream, we may need verbose logs plus a video recording on-device, which hopefully will show things. The two video files together give us something to at least continue the discussion.


I believe we have a bug in the foveation encode/decode.
With mFoveation = 100 (disable) or a low value like 30, you can see this issue.
At the recommended value (50) from the documentation, I almost cannot see the issue.

I’d want to verify that 100 actually disables it. It might not; the foveation system might still be active. 0 is the official disable value.
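
As a purely hypothetical illustration of that point (this helper is not part of the SDK), a client-side check could treat only 0 as “off” rather than assuming 100 behaves the same way:

#include <cstdint>

// Hypothetical helper, not from the CloudXR SDK: only 0 is the documented
// "foveation off" value; 100 may still leave the foveation path active.
inline bool FoveationEnabled(uint32_t foveationPercent)
{
    return foveationPercent != 0;
}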

Again, if you can capture server or client bitstream that clearly shows the issues you perceive, and point out specific time markers and things of note, that would help us investigate further on our end.

I’ve sent it to you via message.

We have found that if we force the use of the AIImageReaderDecoder at all refresh rates (including 72 Hz on Quest 1, but also Quest 2 at 72 Hz), as well as hardcoding the resolution to 1832 x 1920, then foveation looks good again and the shimmering / vertical wavy lines problem goes away. This is 100% repro.

We hope this can help solve the problem in the library, as we really need to be able to use any arbitrary resolution (or at least one within some specification, such as modulo 16 or 32 or whatever CloudXR expects with foveation enabled). That said, a width of 1832 doesn’t divide evenly by 16 or 32, but it does by 8 (8 x 229 = 1832). This may be a clue?
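
A quick arithmetic check of that divisibility guess, just to spell it out (the block sizes are only the common alignments mentioned above, not anything confirmed about CloudXR’s actual requirements):

#include <cstdio>

// Check which common block alignments divide the hardcoded 1832 x 1920
// per-eye resolution evenly.
int main()
{
    const int dims[]   = { 1832, 1920 };
    const int aligns[] = { 8, 16, 32 };
    for (int d : dims)
        for (int a : aligns)
            std::printf("%4d %% %2d = %d\n", d, a, d % a);
    // 1832 % 8 == 0, while 1832 % 16 and 1832 % 32 are non-zero;
    // 1920 divides evenly by all three.
    return 0;
}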

The ImageReader decoder will become the default for Quest 2 at 90 Hz. As we get more testing in the field and ensure there are no corner cases or kinks, it MAY become the default everywhere and replace the older MediaCodec decoder.

But I’m a bit concerned that changing the decoder affects foveation at all. Both decoders receive the same stream and output the same frames; what differs is what they do from that point, namely which APIs they use to access the frame data and host it in a texture for GLES composition. While it sounds like there may be some kind of crop-rect or sizing issue, again, the two should generate the same end result; ImageReader should just do it much faster (it shaves off ms per frame).
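
For context, the general Android path the ImageReader decoder alludes to (decoded frame -> AHardwareBuffer -> EGLImage -> external GLES texture) looks roughly like the sketch below. This is not CloudXR’s actual implementation, just a minimal illustration of the NDK/EGL calls involved; the function name and structure are assumptions, and error handling is kept minimal.

#include <media/NdkImageReader.h>     // AImageReader / AImage
#include <android/hardware_buffer.h>  // AHardwareBuffer
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

// Sketch only: take the most recently decoded frame from an AImageReader and
// bind it to an external-OES texture for GLES composition. Extension entry
// points are resolved through eglGetProcAddress.
void BindLatestDecodedFrame(AImageReader* reader, EGLDisplay display, GLuint texture)
{
    static const auto pGetNativeClientBuffer =
        (PFNEGLGETNATIVECLIENTBUFFERANDROIDPROC)eglGetProcAddress("eglGetNativeClientBufferANDROID");
    static const auto pCreateImage =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
    static const auto pDestroyImage =
        (PFNEGLDESTROYIMAGEKHRPROC)eglGetProcAddress("eglDestroyImageKHR");
    static const auto pImageTargetTexture =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)eglGetProcAddress("glEGLImageTargetTexture2DOES");

    AImage* image = nullptr;
    if (AImageReader_acquireLatestImage(reader, &image) != AMEDIA_OK || image == nullptr)
        return;

    AHardwareBuffer* hardwareBuffer = nullptr;
    AImage_getHardwareBuffer(image, &hardwareBuffer);

    // Wrap the decoded buffer in an EGLImage and attach it to the external texture.
    EGLClientBuffer clientBuffer = pGetNativeClientBuffer(hardwareBuffer);
    const EGLint attribs[] = { EGL_NONE };
    EGLImageKHR eglImage = pCreateImage(display, EGL_NO_CONTEXT,
                                        EGL_NATIVE_BUFFER_ANDROID, clientBuffer, attribs);

    glBindTexture(GL_TEXTURE_EXTERNAL_OES, texture);
    pImageTargetTexture(GL_TEXTURE_EXTERNAL_OES, eglImage);

    // In real code the AImage and EGLImage must stay alive until the GPU has
    // consumed the frame; releasing immediately keeps this sketch short.
    pDestroyImage(display, eglImage);
    AImage_delete(image);
}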