I am seeking assistance with the Movie Capture extension for my robotics application. My goal is to capture the robot's movements in real time, but I have encountered significant variations in the results. Specifically, the simulation appears to run faster in the RTX Real-Time video than in the Path Tracing version.
To provide more context, both videos were rendered from the same scene with the same physics parameters, for 50 frames at a frame rate of 24 fps.
What caught my attention is that the Path Tracing version takes approximately 1 minute and 30 seconds to render, while the RTX Real-Time version takes 7 minutes, both with 10,000 physics time steps per second (tsps). Could the Path Tracing version be skipping physics steps?
Furthermore, I noticed that the Frame rate attribute only affects the encoding. As an example, I captured the same scene as above with Path Tracing for 50 frames, but with a frame rate of 60 fps.
In this case, instead of seeing roughly half of the movement (as I would expect if the frame rate also controlled the simulation stepping), the motion simply appears sped up, since only the encoding, not the simulation itself, is affected by the change in frame rate. Is this intentional?
I am uncertain how to obtain an accurately timed render. Based on these results, it is difficult for me to determine the robot's real-time movement speed.
Hi @axel.goedrich - The difference in simulation speed between the RTX-Real-Time and Path Tracing versions is likely due to the difference in computational complexity between the two rendering methods. Path Tracing is a more computationally intensive method that simulates the physical behavior of light, which can result in more realistic images but at the cost of longer rendering times. On the other hand, RTX-Real-Time rendering is optimized for speed and can produce high-quality images much faster, but it may not be as accurate in terms of light simulation.
As for the Frame rate attribute, you’re correct that it only affects the encoding of the video, not the simulation itself. The simulation runs at its own pace, independent of the frame rate of the video. When you change the frame rate, you’re changing how many frames are captured and encoded per second of video, but you’re not changing how many simulation steps are performed per second. This is why the simulation appears to be sped up when you increase the frame rate: you’re seeing more frames of the simulation in the same amount of video time.
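To put numbers on that: the same 50 captured frames span a different amount of video time depending on the encoding frame rate, while the simulated time they contain stays the same. A quick back-of-the-envelope check in plain Python (no Omniverse APIs involved):

```python
# Same captured frames, different encoding frame rate:
frames = 50
for fps in (24, 60):
    video_seconds = frames / fps
    print(f"{frames} frames encoded at {fps} fps -> {video_seconds:.2f} s of video")

# 50 frames encoded at 24 fps -> 2.08 s of video
# 50 frames encoded at 60 fps -> 0.83 s of video
# The same simulated content is compressed into 60/24 = 2.5x less video time,
# so the motion looks about 2.5x faster.
```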
If you want to capture the real-time movements of the robot, you might need to synchronize the simulation time with the video time. This could involve adjusting the simulation step size or the simulation speed to match the frame rate of the video. However, this can be a complex task that requires a good understanding of both the simulation and the video encoding process.
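As a rough sketch of what that could look like in a script: the idea is to make the physics step rate an exact multiple of the intended frame rate, so each captured frame corresponds to a whole number of physics steps. This sketch assumes a PhysicsScene prim at /World/physicsScene (adjust the path and numbers to your own stage) and uses the PhysX scene's time-steps-per-second attribute:

```python
# Sketch only: align the physics step rate with the intended video frame rate.
# The prim path and the numbers are placeholders for your own setup.
import omni.usd
from pxr import PhysxSchema

stage = omni.usd.get_context().get_stage()
stage.SetTimeCodesPerSecond(25)  # stage playback / capture rate in frames per second

scene_prim = stage.GetPrimAtPath("/World/physicsScene")  # adjust to your scene
physx_scene = PhysxSchema.PhysxSceneAPI.Apply(scene_prim)
physx_scene.CreateTimeStepsPerSecondAttr(1000)  # 1000 / 25 = 40 physics steps per frame
```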
I get that for the viewport, but I was hoping that the Movie Capture extension would step the simulation so that all physics steps are executed and the output matches the defined frame rate.
For example, if I set the simulation time steps per second (tsps) attribute to 1000 and the frame rate to 25, I would expect the simulation to execute 40 physics steps, then one render update, another 40 physics steps, another render update, and so on. 25 frames should then equal one second of simulation time (to clarify: the time that passes in the simulation, not the time needed to compute it).
If this were the case, the choice between Path Tracing and RTX Real-Time would only affect the look and the rendering/computation time needed to generate the video, not the simulation speed.
Interestingly, contrary to your description and my expectations, the video rendering was actually faster with Path Tracing than with RTX Real-Time. Maybe this is just a glitch caused by the high number of tsps (10,000) and some skipping of physics steps.
We mainly want to use the simulation for reinforcement learning, with some camera-guided tasks as well.
So if I want all physics steps to be executed and the camera feed to record the correct simulation speed, would I have to step the simulation manually and trigger a frame/render update after every x physics steps?
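Something along these lines is what I have in mind, as a rough sketch only: it assumes Isaac Sim's omni.isaac.core SimulationContext is available in my setup, and the frame-writing step is just a placeholder rather than a specific Movie Capture API:

```python
# Rough sketch of manual stepping: N physics steps, then one render per captured frame.
from omni.isaac.core import SimulationContext

FPS = 25                        # target video frame rate
TSPS = 1000                     # physics time steps per second
STEPS_PER_FRAME = TSPS // FPS   # 40 physics steps between rendered frames
NUM_FRAMES = 50

sim = SimulationContext(physics_dt=1.0 / TSPS, rendering_dt=1.0 / FPS)
sim.play()

for frame in range(NUM_FRAMES):
    for _ in range(STEPS_PER_FRAME):
        sim.step(render=False)  # advance physics only, no render update
    sim.render()                # one render update per captured frame
    # write the rendered frame out here; 25 captured frames then equal 1 s of simulated time
```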
I conducted a test using the new USD Composer (2023.1.1) with the same scene.
Now, both rendering methods produce similar results in terms of simulation speed.
I’d like to provide a demonstration video, but it appears that video uploads and images are no longer supported in this forum. I’m receiving the following error message:
Sorry, the file you are trying to upload is not authorized (authorized extensions: woff2, woff, pdf, doc, docx, txt, gz, zip, log).
However, it’s worth noting that when utilizing a high tsps (time steps per second) value of 10,000, the 50 frames are still rendered faster with path-tracing in comparison to the real-time renderer:
Path-Tracing: 23 seconds
RTX-Realtime: 2 minutes and 3 seconds
Both methods are faster than before. However, I have a suspicion that some physics steps might be skipped in the process.
Have there been any recent changes made to the renderer or the movie capture extension that could be influencing these results?
I experimented with lower tsps values, specifically 60, and obtained the following results:
11 seconds for RTX-Real-Time
33 seconds for Path-Tracing
These results appear to be more in line with expectations. I initially used 10,000 tsps because accuracy in the physics simulation is crucial for our current project, so we will likely still need a high tsps value to achieve the desired results.
I’ve tested Composer 2023.2.0 today, which wasn’t accessible to me previously. With this latest version, everything appears to be functioning as anticipated:
RTX-Real-Time with 10,000 tsps: 1 minute and 24 seconds
Path-Tracing with 10,000 tsps: 1 minute and 47 seconds
Furthermore, when using a lower tsps setting:
RTX-Real-Time with 60 tsps: 11 seconds
Path-Tracing with 60 tsps: 32 seconds
I’m unsure of the specific changes made, but it seems that the issues have been resolved in this version.