Birdview perspective with cameras

Software Version
DRIVE OS 6.0.10.0
DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
Linux
QNX
other

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure its number)
other

Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

Issue Description
Hello everyone,

In the future, I plan to record four videos using the DW API. I attempted to use dwSensorCamera_readFrame() to capture frames, which I would then store in a Serializer to create a video. However, the documentation for this function states, “Available data is configured during sensor creation.” As I understand it, this means that only one frame is captured when a (Camera-)Sensor is created.

My questions are:

  1. How can I continuously capture new frames?
  2. Are there any stitching algorithms from NVIDIA that I can use to create a BirdView perspective from the videos?

Thanks in advance!

Please see DriveWorks SDK Reference: Image Transformation Interface; it should help with your case. You need two copy calls to stitch two frames. Also see the topic "dwImageTransformation empty lines and incorrect colours".

Thank you, but how do I fix the first problem? Stitching the videos was my plan in the first place.

Could you clarify the issue? Do you see any frame drop when trying to serialize the camera frames?

Note that, if the storage does not have enough write speed, you would expect to see frame drop messages.

Please check recording with the recorder tool first to get an idea of the limitations. You can record for 10 seconds and check the number of timestamps (frames) captured.

The problem is that every time (during the runtime of my program) I call dwSensorCamera_readFrame(), I get the same image. The image is only created once, during sensor creation. Serializing the images isn't necessary at the moment. My focus now is to write a program where I can, for example, press the key 1 on the keyboard and then take a new picture.

Could you check the return value of dwSensorCamera_readFrame()? Can you use the sample_camera source code as a reference for capturing and serializing frames?

The return value should be DW_SUCCESS, since I wrap every API call in CHECK_DW_ERROR() and I don't get a warning or anything like that.
Currently I don't need to serialize frames; I want to run a superloop that generates a new frame every time I press "1".

The DriveWorks SDK Reference: Camera Sensor (nvidia.com) mentions that data is only configured when a sensor is created.

My question is whether there is another way to generate frames during the runtime of my program, given that the sensors are only created at the beginning of my program.

#include "CCameraSample.h"

using namespace std;

void CCameraSample::initContext() {
    // Initialize the DW context
    CHECK_DW_ERROR(dwGetVersion(&m_sdkVersion));
    CHECK_DW_ERROR(dwInitialize(&m_context, m_sdkVersion, NULL));

    // Print start message
    cout << getCurrentTime().str() << "Initialize DriveWorks SDK v"
         << m_sdkVersion.major << "." << m_sdkVersion.minor << "."
         << m_sdkVersion.patch << endl << endl;
}

void CCameraSample::initCamera() {
    // Initialize the Sensor Abstraction Layer
    CHECK_DW_ERROR(dwSAL_initialize(&m_sal, m_context));

    // Specify the GMSL protocol for the first camera
    m_senParams[0].parameters =
        "camera-name=C2SIM623S2RU1195NB20,interface=csi-cd,CPHY-mode=1,"
        "link=0,output-format=processed,skip-eeprom=1,fifo-size=1";
    m_senParams[0].protocol = "camera.gmsl";

    // Specify the GMSL protocol for the second camera
    m_senParams[1].parameters =
        "camera-name=C2SIM623S2RU1195NB20,interface=csi-cd,CPHY-mode=1,"
        "link=1,output-format=processed,skip-eeprom=1";
    m_senParams[1].protocol = "camera.gmsl";

    // Specify the GMSL protocol for the third camera
    m_senParams[2].parameters =
        "camera-name=C2SIM623S2RU1195NB20,interface=csi-cd,CPHY-mode=1,"
        "link=2,output-format=processed,skip-eeprom=1";
    m_senParams[2].protocol = "camera.gmsl";

    // Specify the GMSL protocol for the fourth camera
    m_senParams[3].parameters =
        "camera-name=C2SIM623S2RU1195NB20,interface=csi-cd,CPHY-mode=1,"
        "link=3,output-format=processed,skip-eeprom=1,fifo-size=1";
    m_senParams[3].protocol = "camera.gmsl";

    // Create the sensors (here: cameras)
    for (int i = 0; i < MAX_CAMS; i++) {
        CHECK_DW_ERROR(dwSAL_createSensor(&m_camera[i], m_senParams[i], m_sal));
    }

    // Start the Sensor Abstraction Layer
    CHECK_DW_ERROR(dwSAL_start(m_sal));

    // Start the sensors (here: cameras)
    for (int i = 0; i < MAX_CAMS; i++) {
        CHECK_DW_ERROR(dwSensor_start(m_camera[i]));
    }
}

void CCameraSample::initImageStreamer() {
    // Get the CUDA image properties
    dwImageProperties cudaProp{};
    CHECK_DW_ERROR(
        dwSensorCamera_getImageProperties(&cudaProp, DW_CAMERA_OUTPUT_CUDA_RGBA_UINT8,
                                          m_camera[FIRST_CAMERA_IDX]));

    // Initialize the ImageStreamer module
    CHECK_DW_ERROR(
        dwImageStreamer_initialize(&m_streamerCUDA2CPU, &cudaProp,
                                   DW_IMAGE_CPU, m_context));
}

void CCameraSample::init() {
    initContext();
    initCamera();
    initImageStreamer();
}

CCameraSample::CCameraSample() {
    m_context = DW_NULL_HANDLE;
    for (int i = 0; i < MAX_CAMS; i++) {
        m_camera[i] = DW_NULL_HANDLE;
    }
    m_sal = DW_NULL_HANDLE;
    m_serializer = DW_NULL_HANDLE;
    m_streamerCUDA2CPU = DW_NULL_HANDLE;
    m_choice = WAIT;
    init();
    displayMenu();
}

void CCameraSample::exportImage(dwImageHandle_t frameCUDA, int idx) {
    dwTime_t timeout = 0;
    dwImageHandle_t imageHandle;
    dwImageCPU* imgCPU;
    cout << getCurrentTime().str() << "ImageStreamer (CUDA -> CPU)" << endl;

    // Stream the CUDA frame to the CPU domain
    CHECK_DW_ERROR(dwImageStreamer_producerSend(frameCUDA, m_streamerCUDA2CPU));

    // Receive the streamed CPU image as a handle
    CHECK_DW_ERROR(
        dwImageStreamer_consumerReceive(&imageHandle, timeout, m_streamerCUDA2CPU));

    // Get the CPU image from the handle
    CHECK_DW_ERROR(dwImage_getCPU(&imgCPU, imageHandle));

    // Write the image data to a PNG file
    char fname[128];
    snprintf(fname, sizeof(fname), "GMSL_IDX_framegrab_%d.png", idx);
    lodepng_encode32_file(fname, imgCPU->data[0], imgCPU->prop.width,
                          imgCPU->prop.height);
    cout << getCurrentTime().str() << "Image saved to: " << fname << "\n";

    // Return the consumed image
    CHECK_DW_ERROR(
        dwImageStreamer_consumerReturn(&imageHandle, m_streamerCUDA2CPU));

    // Notify the producer that the work is done
    CHECK_DW_ERROR(
        dwImageStreamer_producerReturn(&frameCUDA, timeout, m_streamerCUDA2CPU));
}

void CCameraSample::getFrame() {
    dwTime_t timeout = 3000000;

    // has to be called twice
    for (int i = 0; i < MAX_CAMS; i++) {
        dwCameraFrameHandle_t frameHandle = DW_NULL_HANDLE;
        CHECK_DW_ERROR(dwSensorCamera_readFrame(&frameHandle, timeout, m_camera[i]));

        // Create an image out of the captured frame
        dwImageHandle_t imageHandle = DW_NULL_HANDLE;
        CHECK_DW_ERROR(
            dwSensorCamera_getImage(&imageHandle, DW_CAMERA_OUTPUT_CUDA_RGBA_UINT8,
                                    frameHandle));

        // Pass the image to the image streamer
        exportImage(imageHandle, i);

        // Release the frame handle
        CHECK_DW_ERROR(dwSensorCamera_returnFrame(&frameHandle));
    }
}

void CCameraSample::run() {
    bool abortApp = false;
    while (!abortApp) {
        switch (m_choice) {
        case SINGLE:
            getFrame();
            displayMenu();
            m_choice = WAIT;
            break;

        case ONGOING:
            startVideo();
            displayMenu();
            m_choice = WAIT;
            break;

        case QUIT:
            releaseHandle();
            abortApp = true;
            break;

        case STOP:
            break;

        case WAIT:
            readInput();
            break;

        case INVALID:
            cerr << getCurrentTime().str() << "Invalid option. Try again.\n";
            displayMenu();
            m_choice = WAIT;
            break;
        }
    }
}

void CCameraSample::releaseHandle() {
    // Release the image streamer
    cout << endl;
    CHECK_DW_ERROR(dwImageStreamer_release(m_streamerCUDA2CPU));
    cout << getCurrentTime().str() << "Image streamer released" << endl;

    for (int i = 0; i < MAX_CAMS; i++) {
        // Stop the camera
        CHECK_DW_ERROR(dwSensor_stop(m_camera[i]));
        cout << getCurrentTime().str() << "CameraClient: Client stopped - Link "
             << i << endl;

        // Release the camera sensor
        CHECK_DW_ERROR(dwSAL_releaseSensor(m_camera[i]));
        cout << getCurrentTime().str() << "Camera master released - Link " << i
             << endl;
    }

    // Release the SAL module of the SDK
    CHECK_DW_ERROR(dwSAL_release(m_sal));
    cout << getCurrentTime().str() << "DriveWorks SDK released" << endl;
}

stringstream CCameraSample::getCurrentTime() {
    stringstream ss;

    // Get the current time
    time_t t = time(nullptr);
    tm* now = localtime(&t);

    // Save date and time in a stringstream and return it
    ss << '[' << setw(2) << setfill('0') << now->tm_mday << '-' << setw(2)
       << setfill('0') << now->tm_mon + 1 << '-' << now->tm_year + 1900
       << ' ' << setw(2) << setfill('0') << now->tm_hour << ':' << setw(2)
       << setfill('0') << now->tm_min << ':' << setw(2) << setfill('0')
       << now->tm_sec << "]: ";
    return ss;
}

void CCameraSample::displayMenu() {
    cout << "\nPlease choose an option:\n";
    cout << "<1> Take a picture\n";
    cout << "<2> Record a video\n";
    cout << "<3> Stop the recording\n";
    cout << "<4> Quit the application\n";
    cout << "> ";
}

void CCameraSample::readInput() {
    string input;
    cin >> input;

    if (input == "1") {
        m_choice = SINGLE;
    } else if (input == "2") {
        m_choice = ONGOING;
    } else if (input == "3") {
        m_choice = STOP;
    } else if (input == "4") {
        m_choice = QUIT;
    } else {
        m_choice = INVALID;
    }
}

This is my code; my main() function just calls the run() method.