Hi all.
How can I use decodeToFd in the JPEG decoder and convert the decoded NvBuffer to an OpenCV Mat?
DaneLLL
September 30, 2020, 4:47am
Hi,
Please refer to
Hi,
We don’t observe the issue after applying the following patch to 13_multi_camera:
diff --git a/multimedia_api/ll_samples/samples/13_multi_camera/main.cpp b/multimedia_api/ll_samples/samples/13_multi_camera/main.cpp
index 49a9ab8..0613f0b 100644
--- a/multimedia_api/ll_samples/samples/13_multi_camera/main.cpp
+++ b/multimedia_api/ll_samples/samples/13_multi_camera/main.cpp
@@ -39,6 +39,8 @@
#include <stdio.h>
#include <stdlib.h>
+#include <opencv2/opencv.hpp>
+
using namespace Argus;
using n…
You need to create an NvBuffer in NvBufferColorFormat_ABGR32 with NvBufferLayout_Pitch, convert the decoded YUV buffer into this NvBuffer through NvBufferTransform(), and then call NvBufferMemMap() to get the CPU data pointer.
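The steps above can be sketched roughly as below. This is a minimal, untested sketch against the Jetson Multimedia API (nvbuf_utils.h), assuming `decoded_fd`, `width`, and `height` come from a prior `decodeToFd()` call; the function name `decoded_fd_to_mat` and the choice of `NvBufferTransform_Filter_Smart` are my own, and error handling is omitted for brevity:

```cpp
#include <opencv2/opencv.hpp>
#include <nvbuf_utils.h>

// Hypothetical helper: wrap a decoded dmabuf fd (YUV) into a cv::Mat.
// decoded_fd/width/height are assumed to come from NvJPEGDecoder::decodeToFd().
cv::Mat decoded_fd_to_mat(int decoded_fd, int width, int height)
{
    // 1. Create a destination NvBuffer in ABGR32, pitch-linear layout.
    NvBufferCreateParams create_params = {0};
    create_params.width = width;
    create_params.height = height;
    create_params.layout = NvBufferLayout_Pitch;
    create_params.colorFormat = NvBufferColorFormat_ABGR32;
    create_params.payloadType = NvBufferPayload_SurfArray;
    create_params.nvbuf_tag = NvBufferTag_NONE;
    int rgba_fd = -1;
    NvBufferCreateEx(&rgba_fd, &create_params);

    // 2. Convert the decoded YUV buffer into the ABGR32 buffer (VIC hardware).
    NvBufferTransformParams transform_params = {0};
    transform_params.transform_flag = NVBUFFER_TRANSFORM_FILTER;
    transform_params.transform_filter = NvBufferTransform_Filter_Smart;
    NvBufferTransform(decoded_fd, rgba_fd, &transform_params);

    // 3. Map plane 0 for CPU access and sync caches before reading.
    void *pdata = NULL;
    NvBufferMemMap(rgba_fd, 0, NvBufferMem_Read, &pdata);
    NvBufferMemSyncForCpu(rgba_fd, 0, &pdata);

    // 4. Wrap the mapped pointer in a cv::Mat, honoring the plane pitch,
    //    then deep-copy so the NvBuffer can be unmapped and destroyed.
    NvBufferParams params;
    NvBufferGetParams(rgba_fd, &params);
    cv::Mat mapped(height, width, CV_8UC4, pdata, params.pitch[0]);
    cv::Mat bgr;
    cv::cvtColor(mapped, bgr, cv::COLOR_RGBA2BGR);

    NvBufferMemUnMap(rgba_fd, 0, &pdata);
    NvBufferDestroy(rgba_fd);
    return bgr;
}
```

Note the deep copy via cvtColor: once the buffer is unmapped, the mapped pointer is invalid, so the cv::Mat must own its own data before the NvBuffer is destroyed. Whether RGBA vs. BGRA channel order is right for your pipeline is worth verifying on your data, since NvBufferColorFormat_ABGR32 naming does not obviously match OpenCV's conversion-code naming.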
Hi, how can I use decodeToFd in the JPEG decoder to convert a greyscale image to an OpenCV Mat?