Clarification on TX2 Max HW Encode Resolution

Unfortunately we’re not able to use nvcamerasrc, because our image isn’t taken directly from a camera. It passes through a series of proprietary protocols, hardware, and software until raw buffers are made available to an app on the Jetson; after some additional manipulation, that app pushes each buffer into the pipeline through an appsrc. That pipeline is where we’re doing the encoding and where this 4k x 4k support would need to happen. I don’t think we can use nvcamerasrc for something that convoluted and complicated.

For your suggestion, do you mean going through the TMA with NvVideoEncoder, the way the 01_sample_encode example does? We have that implementation in our code base, but we’re not actively using it because we didn’t see any real improvement over our gstreamer pipeline using OMX. Should we expect to see an improvement using that over gstreamer?

Hi greg2,
For your case, you will need nvvidconv to copy the CPU buffers (video/x-raw) into DMA buffers (video/x-raw(memory:NVMM)).
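
A minimal sketch of what that pipeline string could look like when built in the application (appsrc cannot be driven from gst-launch directly); the element names nvvidconv and omxh264enc are the Jetson GStreamer plugins, but the caps values, sink, and 4096x4096 size are illustrative assumptions, not taken from this thread:

```
appsrc name=src caps=video/x-raw,format=I420,width=4096,height=4096,framerate=30/1
  ! nvvidconv
  ! video/x-raw(memory:NVMM),format=I420
  ! omxh264enc
  ! h264parse ! qtmux ! filesink location=out.mp4
```

The nvvidconv stage between appsrc and the encoder is the part doing the CPU-to-NVMM copy described above.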

It is the same for the MMAPIs. You can create DMA buffers with the NvBuffer APIs, but you still need to copy your data in.