DW_CALL_NOT_ALLOWED while using dwSensorCamera_readFrame

Software Version
DRIVE OS Linux 5.2.0
DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version
other

Target Operating System
Linux
QNX
other

Hardware Platform
NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)
other

SDK Manager Version
1.4.1.7402
other

Host Machine Version
native Ubuntu 18.04
other

Hi,
I am writing a ROS driver for multiple cameras on DRIVE OS 5.2 and DriveWorks 3.5.
I am using dwSensorCamera_readFrameNew() to read the frames. It works well with one camera, but I get a DW_CALL_NOT_ALLOWED error when I read frames from the other three cameras. The cameras initialize and start without any problem. Can someone tell me what I am doing wrong?

Dear @vimal.prasath,
Could you please share more details about your camera setup, the complete error logs, and a reproducible code snippet based on a DW sample?

Dear @vimal.prasath,
The error DW_CALL_NOT_ALLOWED indicates an invalid call. It looks like a limitation of the dwSensorCamera_readFrameNew() API. I will check with the core team and get back to you.

Hi,
We have connected the four cameras to the CSI-A port.
Here is the code I used to retrieve frames from the four cameras.

void SensorCamera::run_camera()
{
    dwCameraFrameHandle_t frame[16];
    ROS_INFO("inside run_camera");
    ROS_INFO("mcam %d", m_cam);

    while (m_cameraRun)
    {
        ImagePtr image(new Image);
        dwImageHandle_t cpuFrame;

        // Read one frame from every camera.
        for (uint32_t i = 0; i < m_cam; ++i)
        {
            ROS_INFO("run_camera %d", i);

            dwStatus status = DW_NOT_READY;
            uint32_t countFailure = 0;
            while ((status == DW_NOT_READY) || (status == DW_END_OF_STREAM))
            {
                status = dwSensorCamera_readFrameNew(&frame[i], 333333, m_camera[i]);
                countFailure++;
                if (countFailure == 1000000)
                {
                    std::cout << "Camera doesn't seem responsive, exiting loop and stopping the sample" << std::endl;
                    break;
                }

                // Reset the sensor to support loopback
                if (status == DW_END_OF_STREAM)
                {
                    dwSensor_reset(m_camera[i]);
                    std::cout << "Video reached end of stream" << std::endl;
                }
            }

            if (status == DW_TIME_OUT)
            {
                // std::to_string() is required here; "..." + i would be pointer arithmetic.
                throw std::runtime_error("onRender: Timeout waiting for camera " + std::to_string(i));
            }
            if (status != DW_SUCCESS)
            {
                ROS_ERROR("camera sensor readFrame failed. Error: %s", dwGetStatusName(status));
                continue;
            }
        }

        // Process and publish each frame.
        for (uint32_t i = 0; i < m_cam; ++i)
        {
            dwImageHandle_t img;
            dwCameraOutputType outputType = DW_CAMERA_OUTPUT_NATIVE_PROCESSED;
            dwStatus status = dwSensorCamera_getImage(&img, outputType, frame[i]);
            if (status != DW_SUCCESS)
            {
                ROS_ERROR("dwSensorCamera_getImage() failed. Error: %s", dwGetStatusName(status));
                break;
            }

            // Convert native (YUV420 planar NvMedia) to RGBA NvMedia.
            status = dwImage_copyConvert(m_rgbaFrame[i], img, m_sdk);
            if (status != DW_SUCCESS)
            {
                ROS_ERROR("dwImage_copyConvert() failed. Error: %s", dwGetStatusName(status));
                break;
            }

            // Stream that image to the CPU domain.
            status = dwImageStreamer_producerSend(m_rgbaFrame[i], m_streamerNvmediaToCpuProcessed[i]);
            if (status != DW_SUCCESS)
            {
                ROS_ERROR("dwImageStreamer_producerSend() failed. Error: %s", dwGetStatusName(status));
                break;
            }

            // Receive the streamed image as a handle.
            status = dwImageStreamer_consumerReceive(&cpuFrame, 33000, m_streamerNvmediaToCpuProcessed[i]);
            if (status != DW_SUCCESS)
            {
                ROS_ERROR("dwImageStreamer_consumerReceive() failed. Error: %s", dwGetStatusName(status));
                break;
            }

            dwImageProperties prop;
            status = dwImage_getProperties(&prop, cpuFrame);
            if (status != DW_SUCCESS)
            {
                ROS_ERROR("dwImage_getProperties() failed. Error: %s", dwGetStatusName(status));
                break;
            }

            dwImageCPU* imgCPU;
            status = dwImage_getCPU(&imgCPU, cpuFrame);
            if (status != DW_SUCCESS)
            {
                ROS_ERROR("dwImage_getCPU() failed. Error: %s", dwGetStatusName(status));
                break;
            }

            // Must persist across frames, otherwise header.seq is always 0.
            static unsigned int pair_id = 0;

            dwTime_t timestamp;
            dwImage_getTimestamp(&timestamp, img);
            image->header.stamp.sec  = (timestamp / 1000000L);
            image->header.stamp.nsec = (timestamp % 1000000L) * 1000;
            ROS_DEBUG("timestamp: %u.%u", image->header.stamp.sec, image->header.stamp.nsec);

            image->header.seq = pair_id;
            pair_id++;

            image->header.frame_id = "camera";

            // RGBA8 is a single interleaved plane, so always read plane 0 (data[i] was a bug).
            fillImage(*image, sensor_msgs::image_encodings::RGBA8, prop.height, prop.width, 4 * prop.width, imgCPU->data[0]);

            status = dwImageStreamer_consumerReturn(&cpuFrame, m_streamerNvmediaToCpuProcessed[i]);
            if (status != DW_SUCCESS)
            {
                ROS_ERROR("dwImageStreamer_consumerReturn() failed. Error: %s", dwGetStatusName(status));
                break;
            }
            status = dwImageStreamer_producerReturn(nullptr, 33000, m_streamerNvmediaToCpuProcessed[i]);
            if (status != DW_SUCCESS)
            {
                ROS_ERROR("dwImageStreamer_producerReturn() failed. Error: %s", dwGetStatusName(status));
                break;
            }
        }

        // Return the frames to their sensors.
        for (uint32_t i = 0; i < m_cam; ++i)
        {
            // This is where the error occurs: returning frames only succeeds for index 0.
            dwStatus status = dwSensorCamera_returnFrame(&frame[i]);
            if (status != DW_SUCCESS)
            {
                ROS_ERROR("dwSensorCamera_returnFrame failed for camera %d", i);
                break;
            }
            m_cameraPub[i].publish(image);
        }
    }
}
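As an aside, the microsecond-to-ROS-timestamp split used in the loop above can be checked in isolation. This standalone sketch has no DriveWorks or ROS dependency; Stamp and splitTimestamp are illustrative names, not SDK API:

```cpp
#include <cassert>
#include <cstdint>

// DriveWorks timestamps (dwTime_t) are microseconds.
// ROS headers want separate seconds and nanoseconds fields.
struct Stamp
{
    uint32_t sec;  // whole seconds
    uint32_t nsec; // remaining nanoseconds
};

// Split a microsecond timestamp into ROS-style sec/nsec,
// mirroring the arithmetic in run_camera().
Stamp splitTimestamp(int64_t usec)
{
    Stamp s;
    s.sec  = static_cast<uint32_t>(usec / 1000000L);
    s.nsec = static_cast<uint32_t>(usec % 1000000L) * 1000u;
    return s;
}
```

The nanosecond field stays below 10^9, so it fits a uint32_t without overflow.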

Error logs

Dear @vimal.prasath,
It seems you have already created a separate handle for each camera, which is a limitation of SIPL-based cameras. Could you check the sample_camera sample? It does the same thing as your requirement.

Hi,
I have gone through the sample_camera sample. Even this sample creates a separate handle for each camera, and I have done the same thing.
Below I have attached the sample_camera code where a separate handle is created for each camera.

    for (uint32_t i = 0; i < cnt; i++)
    {
        uint32_t cameraSensorIdx = 0;
        CHECK_DW_ERROR(dwRig_findSensorByTypeIndex(&cameraSensorIdx, DW_SENSOR_CAMERA, i, rigConfig));
        const char* protocol = nullptr;
        CHECK_DW_ERROR(dwRig_getSensorProtocol(&protocol, cameraSensorIdx, rigConfig));
        const char* params = nullptr;
        CHECK_DW_ERROR(dwRig_getSensorParameterUpdatedPath(&params, cameraSensorIdx, rigConfig));
        dwSensorParams paramsClient{};
        paramsClient.protocol   = protocol;
        paramsClient.parameters = params;

        std::cout << "onInitialize: creating camera.gmsl with params: " << params << std::endl;
        CHECK_DW_ERROR(dwSAL_createSensor(&m_camera[m_totalCameras], paramsClient, m_sal));
        m_totalCameras++;

        m_useRaw       = std::string::npos != std::string(params).find("raw");
        m_useProcessed = std::string::npos != std::string(params).find("processed");
        if (!m_useProcessed && !m_useRaw) // if neither output format is specified, assume processed output
            m_useProcessed = true;
        if (!m_useProcessed)
            std::cout << "Processed not selected as master output format parameter. No images will be previewed." << std::endl;
    }
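The raw/processed selection in the snippet above is just a substring search on the sensor parameter string. A standalone sketch of that logic, with no DriveWorks dependency (detectFormat and OutputFormat are illustrative names, not part of the sample):

```cpp
#include <cassert>
#include <string>

// Mirrors sample_camera's output-format detection: scan the sensor
// parameter string for "raw" / "processed"; if neither substring is
// present, assume processed output, as the sample does.
struct OutputFormat
{
    bool useRaw;
    bool useProcessed;
};

OutputFormat detectFormat(const std::string& params)
{
    OutputFormat f;
    f.useRaw       = params.find("raw") != std::string::npos;
    f.useProcessed = params.find("processed") != std::string::npos;
    if (!f.useRaw && !f.useProcessed)
        f.useProcessed = true; // default to processed output
    return f;
}
```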

@SivaRamaKrishnaNV
I went through the sample_camera code and implemented it the same way, but I am getting the same error. I was able to read the frame from one camera but not from the other three. Is there anything I need to consider before reading the frame? Please let us know.

@SivaRamaKrishnaNV We have solved the issue. The problem was in the initialization of the sensors.

Good to hear that. Please share what the mistake was so others in the community can learn from it.