When I use DriveWorks 0.2.0, I want to fwrite one GMSL YUV frame, so I set the onData member of serializerParams and then pass it to the function interface dwSensorSerializer_initialize.
But the onData function I set never gets called.
So I want to ask: how do I get the GMSL sensor YUV data (using the original camera)?
$ cd /usr/local/driveworks-0.2.0/tool
$ ./recorder
This creates a "recorder-config.json" file.
Please change the camera parameters in recorder-config.json as below:
{
  "comments": [
    "To record raw camera use output-format=raw.",
    "To record camera at a different framerate use serial-framerate=."
  ],
  "path": ".",
  "camera": {
    "write-file-pattern": "video*.h264",
    "sensors": [
      {
        "protocol": "camera.gmsl",
        "params": "camera-type=c-ov10640-b1,csi-port=ef,camera-count=4,output-format=yuv"
      }
    ]
  }
}
Hello,
Thanks for your answer! But the way you suggested may not suit our problem, because we can't get the source code of the DriveWorks tools. We want to use the DriveWorks layer to port our own code. However, with driveworks-0.2.0/samples/src/sensors/camera_gmsl/ we can't get a correct PNG (the color is a strange green). We also used dwSensorSerializer_initialize and set serializerParams, but it doesn't work.
I have tried adding dwSensorSerializer_initialize, but it still doesn't work normally.
The log shows:
Program Arguments:
--camera-type=ov10635
--csi-port=ab
--offscreen=0
--serialize=false
--serialize-bitrate=8000000
--serialize-framerate=30
--serialize-type=h264
--slave=0
--write-file=test.h264
nvrm_gpu: Bug 1781434 workaround enabled.
nvrm_gpu: Bug 1778448 workaround enabled.
nvrm_gpu: Bug 1780185 workaround enabled.
Initialize DriveWorks SDK v0.2.0
Release build with GNU 4.9.2 from v0.2.0-rc8-0-gf0835b8 against PDK v4.1.2.0
SDK: no resource mounted
SDK: Create NvMediaDevice
SDK: use EGL display as provided
SAL: identified board as DrivePX2-TegraA
SensorFactory::createSensor() -> camera.gmsl, camera-type=ov10635,csi-port=ab,offscreen=0,serialize=false,serialize-bitrate=8000000,serialize-framerate=30,serialize-type=h264,slave=0,write-file=test.h264,output-format=yuv+data
CameraGMSL: required FPS = 30, resolution = 1280x800
***** aurix state *****
CAM_PWR_ON: 0x0
TEGRA_GMSL: TEGRA B
TEGRA_FPDL: TEGRA B
TEGRA_A_HDMI: MXM
TEGRA_B_HDMI: MXM
TEGRA_PCI: TEGRA B
FAN_STATE: UNKNOWN
***********************
***** aurix state *****
CAM_PWR_ON: 0x1
TEGRA_GMSL: UNKNOWN
TEGRA_FPDL: UNKNOWN
TEGRA_A_HDMI: UNKNOWN
TEGRA_B_HDMI: UNKNOWN
TEGRA_PCI: UNKNOWN
FAN_STATE: UNKNOWN
***********************
Camera image with 1280x800 at 30 FPS
I have checked the dwSensorSerializer_initialize result; it is zero, which means initialization succeeded.
I also tried your recommended method (taking a screenshot), but the PNG file saved by the original code is partially green (the capture shown on screen by the same code looks normal).
I grabbed the 2000th frame in order to get a stable camera capture, and in fact I do get a YUV file that is almost right (the Y plane of the file is correct, but the UV is not). Do you know the relationship between the Y field and the UV field, i.e. how the format is laid out in DDR memory?
First of all, thanks a lot for your help: we have successfully got the YUV data (one sensor containing the YUV data of four cameras).
So I want to ask another question:
Now I only need the data of one camera of the sensor, but this way I have to crop the combined YUV myself.
Do you know a way to get the YUV data of a single camera?
I have also tried the dwSensorCamera_getImageCPU function recently, but it doesn't work; the log is as below:
Driveworks exception thrown: DW_NOT_IMPLEMENTED: Camera::getImageNvMedia() - not implemented
Do you have any ideas about it?
Where can I find the real buffer data of struct dwImageNvMedia? (The only way I know is to use NvMediaImageLock to get an NvMediaImageSurfaceMap, but that usually maps the whole sensor's buffer, not a single camera's.)