I’m currently running a real-time setup on Jetson with three MOV layers:
- Static background
- Rotating logo
- Win animation that plays after the spin
Right now I’m using the CPU-based GStreamer compositor, and when the win animation starts, the playback stutters or freezes.
I want to:
- Load animations dynamically (show/hide at runtime)
- Keep playback smooth
- Use GPU acceleration if possible
Is there a way to split an RGBA MOV into RGB + alpha and composite it via nvcompositor or another GPU-based method, so I can have lightweight, seamless animation overlays without blocking the main stream?
You’re hitting a known limitation of the CPU-based compositor; stutter is expected once animated RGBA overlays start and stop at runtime.
On Jetson, the smoother approach is to stay fully on the GPU path. Instead of splitting the MOV manually into RGB + alpha, you can decode RGBA directly using nvv4l2decoder (when supported) and then composite with nvcompositor. This avoids CPU copies and keeps everything in NVMM.
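As a rough illustration of that fully-GPU layout, here is a minimal PyGObject sketch. The element names (nvv4l2decoder, nvcompositor, nv3dsink) are standard Jetson GStreamer elements, but the file names are placeholders, and it assumes your assets are in a codec the hardware decoder actually accepts (e.g. H.264), which may not match your ProRes sources:

```python
# Sketch: GPU-resident two-layer compositing on Jetson (untested here;
# requires JetPack's GStreamer plugins and real media files).
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Everything stays in NVMM: hardware decode -> nvcompositor -> nv3dsink.
PIPELINE = """
  nvcompositor name=comp
      sink_0::zorder=0 sink_1::zorder=1
    ! video/x-raw(memory:NVMM),format=RGBA ! nv3dsink
  filesrc location=background.mp4 ! qtdemux ! h264parse ! nvv4l2decoder
    ! comp.sink_0
  filesrc location=logo.mp4 ! qtdemux ! h264parse ! nvv4l2decoder
    ! comp.sink_1
"""

pipeline = Gst.parse_launch(PIPELINE)
pipeline.set_state(Gst.State.PLAYING)
```

Note that no videoconvert appears anywhere in the chain, which is the point: every buffer stays in NVMM from decode to display.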
A common pattern is:
- Preload all animation pipelines (paused)
- Use nvcompositor with z-ordering
- Toggle visibility dynamically using alpha or pad blocking instead of rebuilding the pipeline
For animations that must start/stop frequently, keeping the pipeline alive but hidden is usually smoother than creating/destroying it. Also make sure you’re avoiding videoconvert on the CPU path, as that alone can cause stutters.
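The "keep it alive but hidden" toggle can be as small as driving the compositor pad's per-layer alpha from application code. This sketch assumes the standard GStreamer compositor conventions (per-pad "alpha" property, "sink_%u" pad names), which nvcompositor also exposes; the pad name "sink_2" and the element name "comp" are placeholders:

```python
# Sketch: show/hide one overlay branch at runtime without rebuilding
# the pipeline (untested here; requires a running GStreamer pipeline).
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

def set_layer_visible(compositor: Gst.Element, pad_name: str, visible: bool) -> None:
    """Fade one compositor input fully in or out via its pad alpha."""
    pad = compositor.get_static_pad(pad_name)  # e.g. "sink_2"
    pad.set_property("alpha", 1.0 if visible else 0.0)

# Usage, assuming a playing pipeline containing an nvcompositor named "comp":
#   comp = pipeline.get_by_name("comp")
#   set_layer_visible(comp, "sink_2", True)    # reveal the win animation
#   set_layer_visible(comp, "sink_2", False)   # hide it again
```

Because the branch keeps consuming buffers while hidden, reappearing is instant, with none of the preroll cost of rebuilding the branch.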
If RGBA MOV decoding becomes a bottleneck, converting assets to NV12 + separate alpha stream ahead of time can help, but nvcompositor is generally the right direction for this use case.
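If you go the pre-split route, the conversion is a one-time offline step. A sketch using ffmpeg (the alphaextract filter is a real ffmpeg filter; file names and encoder settings are placeholder assumptions):

```python
# Sketch: split a ProRes 4444 asset into a hardware-decodable color
# stream plus a grayscale alpha stream (untested here; requires ffmpeg
# on PATH and a real input file).
import subprocess

SRC = "win_animation.mov"  # placeholder asset name

# Color plane: drop alpha, encode to 4:2:0 H.264 (NV12-friendly).
subprocess.run(
    ["ffmpeg", "-y", "-i", SRC, "-an",
     "-pix_fmt", "yuv420p", "-c:v", "libx264", "win_rgb.mp4"],
    check=True,
)

# Alpha plane: extract as grayscale, carried in the luma channel.
subprocess.run(
    ["ffmpeg", "-y", "-i", SRC, "-an",
     "-vf", "alphaextract", "-pix_fmt", "yuv420p",
     "-c:v", "libx264", "win_alpha.mp4"],
    check=True,
)
```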
Thanks for the reply, this helps clarify the direction.
Just to make sure I fully understand the Jetson capabilities, I’d like to double-check a few points:
My MOV assets are ProRes 4444 with per-pixel alpha, which as far as I know are decoded via avdec_prores (CPU). I don’t think nvv4l2decoder supports ProRes / RGBA hardware decoding on Jetson — please correct me if that’s wrong.
From earlier forum threads, it seems that nvcompositor does not support true per-pixel alpha blending on NVMM buffers, only a constant alpha per layer. Even when the input is RGBA, the alpha channel is ignored.
Given that, I’m trying to understand if anyone has successfully combined an RGB video stream and a separate alpha stream on the GPU (for example RGB H.264/H.265 + grayscale alpha video) using nvcompositor or another Jetson-supported HW path — without falling back to CPU compositing or custom CUDA/OpenGL code.
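To make the question concrete, this is the shape of pipeline I have in mind. The element names are real, but as far as I can tell nvcompositor would simply treat the second input as another opaque layer rather than as a per-pixel mask, which is exactly the gap I'm asking about:

```python
# Sketch of the intent only, not a working solution: both streams decode
# in hardware, but nothing here tells nvcompositor to use sink_1 as an
# alpha mask for sink_0 (untested; requires Jetson GStreamer plugins).
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch("""
  nvcompositor name=comp ! video/x-raw(memory:NVMM) ! nv3dsink
  filesrc location=win_rgb.mp4   ! qtdemux ! h264parse ! nvv4l2decoder ! comp.sink_0
  filesrc location=win_alpha.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! comp.sink_1
""")
```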
Have you or anyone else actually implemented this on Jetson, or is this currently a known limitation of the Jetson multimedia stack?