dwImageCUDA: Initialize pixel planes manually -SOLVED-

Hello,

Could somebody tell me whether a dwImageCUDA's pixel data can be filled manually?
I mean initializing it with the three channels of an RGB frame:
plane0 = R0R1R2R3R4…
plane1 = G0G1G2G3G4…
plane2 = B0B1B2B3B4…

I have been trying for days to create an RGB dwImageCUDA frame with a planar pixel representation (not R0G0B0R1G1B1…), since the interleaved representation is rejected by the DriveNet detector.
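
To make it concrete, here is a sketch of what I am hoping is possible (my assumption about how a planar dwImageCPU would be filled: one data pointer and one pitch per plane, with a pitch of width bytes for a uint8 plane):

    // Sketch only: assumes dwImageCPU exposes one data pointer and one pitch
    // per plane, and that each uint8 plane has a pitch of width bytes.
    dwImageCPU img{};
    img.prop.width      = width;
    img.prop.height     = height;
    img.prop.planeCount = 3;
    img.data[0] = R.data();   // plane 0: R0 R1 R2 ...
    img.data[1] = G.data();   // plane 1: G0 G1 G2 ...
    img.data[2] = B.data();   // plane 2: B0 B1 B2 ...
    img.pitch[0] = img.pitch[1] = img.pitch[2] = width;   // 1 byte per sample

where R, G, and B are std::vector<uint8_t> buffers of width * height bytes each.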

What I already tried:

  1. Initialize a dwImageCPU with those RGB planes ==> Stream it to a dwImageCUDA:
    Driveworks exception thrown: DW_INVALID_ARGUMENT: calculatePlaneSizes: plane count and format combination is invalid

  2. Initialize a dwImageCPU with those RGB planes ==> Convert it to a YUV dwImageCPU frame (planeCount=3):
    Driveworks exception thrown: DW_FAILURE: FormatConverter: Invalid image type.
    because the converter only acts on dwImageCUDA.

This is quite frustrating, because every frame of the .h264 sample videos used by the DriveNet sample is converted to a planar RGB dwImageCUDA frame before processing, and I can't do the same. (Indeed, the SimpleCamera module directly fetches a planar YUV dwImageCUDA frame, which is then converted to a planar RGB frame with CUDA.)

How am I supposed to do this if I can't create a dwImageCUDA with planeCount=3?

Thank you for your help.

Dear bcollado-bougeard,
Could you please share a code snippet so we can look into the issue?

With pleasure, dear SivaRamaKrishna,

=================================== UPDATE ==================================
I finally managed to run the inference on the output frame!
The colors look a bit noisy for the moment.
Question: when it comes to processing the second frame, I get this error (while streaming the CPU_IMAGE to the CUDA_IMAGE):

terminate called after throwing an instance of ‘std::runtime_error’
what(): Failed to post the CPU_RGB image

Did I forget to release some resources, for the streamer to keep streaming?

I kept it simple this time: CPU_RGB_uint8 → CUDA_RGB_uint8 → CUDA_RGB_fp16 → dwImageGeneric.
Then return it to the DriveNet for processing.

  1. Retrieve the RGBQUAD (R0G0B0X0R1G1B1X1…) frame from the network into a uchar* array and convert it to a Frame struct:
uchar frame_data[frame_byte_size];
int bytes = Server->receiveData(frame_data, frame_byte_size);
copyConvert_matToFrame_planar(frame_data);

→ Frame struct:

struct Frame {
    Frame(uint32_t width_, uint32_t height_):
        width(width_),
        height(height_),
        data(width * height * 3) {
    }

    const uint32_t width;
    const uint32_t height;
    std::vector<uint8_t> data;
    std::vector<uint8_t> R;
    std::vector<uint8_t> G;
    std::vector<uint8_t> B;

    Frame& operator=(const Frame& rhs) = delete;
};
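
Note that since width and height are fixed, the planes could also be pre-sized once and written by index (a sketch; calling push_back across frames keeps growing the vectors and can move the buffers that the dwImageCPU data pointers refer to):

    Frame f(width, height);
    f.R.resize(width * height);   // size the planes once ...
    f.G.resize(width * height);
    f.B.resize(width * height);

    // ... then overwrite them in place for every new RGBX frame:
    for (uint32_t i = 0; i < width * height; ++i) {
        f.R[i] = frame_data[4 * i + 0];
        f.G[i] = frame_data[4 * i + 1];
        f.B[i] = frame_data[4 * i + 2];
    }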

copyConvert_matToFrame_planar(uchar* frame_data):

void NetworkStreamer::copyConvert_matToFrame_planar(uchar *frame_data){
    // Start from empty planes each frame; otherwise the vectors keep growing
    // and the data[] pointers handed to the dwImageCPU keep pointing at the
    // first frame's pixels.
    temp_frame.R.clear();
    temp_frame.G.clear();
    temp_frame.B.clear();

    for (uint32_t row = 0 ; row < height ; row++) {

        for (uint32_t col = 0 ; col < width ; col++) {
            // the source is RGBX: 4 bytes per pixel, channel selected inside the brackets
            const uint32_t index_s = 4*(row*width+col);
            temp_frame.R.push_back(frame_data[index_s + 0]);
            temp_frame.G.push_back(frame_data[index_s + 1]);
            temp_frame.B.push_back(frame_data[index_s + 2]);
        }
    }
}
  2. Perform the conversion and return the final frame to the DriveNet:
....
    dwImageCPU_create(&cpu_RGB_FRAME, &input_properties);
 
    dwContext_getCurrentTime(&cpu_RGB_FRAME.timestamp_us, app_context);
    cpu_RGB_FRAME.data[0] = &(temp_frame.R[0]);
    cpu_RGB_FRAME.data[1] = &(temp_frame.G[0]);
    cpu_RGB_FRAME.data[2] = &(temp_frame.B[0]);
    cpu_RGB_FRAME.pitch[0] = width * 3; 
    cpu_RGB_FRAME.pitch[0] = width * 3; 
    cpu_RGB_FRAME.pitch[0] = width * 3; 
    cpu_RGB_FRAME.prop.width = width;
    cpu_RGB_FRAME.prop.height = height;

    //----------- STREAMING:  RGB(cpu) => RGB(cuda)  ---------------------------
    // Post
    dwStatus result = dwImageStreamer_postCPU(&cpu_RGB_FRAME, CPU_to_CUDA_streamer);
    if(DW_SUCCESS != result) {
        throw std::runtime_error("Failed to post the CPU_RGB image");
    }
    cout<<">>> POST(cpu->cuda) = "<<dwGetStatusName(result)<<endl;
    // Receive converted
    result = dwImageStreamer_receiveCUDA(&cuda_RGB_FRAME, 5000, CPU_to_CUDA_streamer);
    if(DW_SUCCESS != result) {
        throw std::runtime_error("Failed to receive the CUDA_RGB image");
    }
    cout<<">>> cuda_RGB_FRAME filled: "<<dwGetStatusName(result)<<endl;

    //----------- CONVERSION:  RGB(cuda)_uint8 => RGB(cuda)_fp16  ---------------------------
    dwImageGeneric *imgInput = GenericImage::fromDW(&cuda_RGB_FRAME);
    dwImageGeneric *imgOutput = RGBuint8_to_RGBfp16_converter->convert( imgInput );
    frameId++;
    
    return imgOutput;

Image properties, streamer, and converter initializations:

void NetworkStreamer::init_images_properties(){
    //======<  Input frame properties  >======
    input_properties.width = width;
    input_properties.height = height;
    input_properties.planeCount = 3;
    input_properties.pxlType = DW_TYPE_UINT8;
    input_properties.pxlFormat = DW_IMAGE_RGB;
    input_properties.type = DW_IMAGE_CPU;

    //=====<  cuda_RGB_properties  >=============
    cuda_RGB_properties = input_properties;
    cuda_RGB_properties.type = DW_IMAGE_CUDA;
 
    //=====<  cuda_RGB2_properties: FP16 >======
    cuda_RGB2_properties = cuda_RGB_properties;
    cuda_RGB2_properties.pxlType = DW_TYPE_FLOAT16;
}

void NetworkStreamer::init_converters(){
    RGBuint8_to_RGBfp16_converter.reset( new GenericSimpleFormatConverter(cuda_RGB_properties, cuda_RGB2_properties, app_context) );
    cout<<"<<<  RGB_to_YUV converter initialized."<<endl;
}

void NetworkStreamer::init_streamers(){
    dwImageStreamer_initialize(&CPU_to_CUDA_streamer,
                               &input_properties,
                               DW_IMAGE_CUDA,
                               app_context);
}

Dear Ben,
I notice you have set cpu_RGB_FRAME.pitch[0] = width * 3 three times in Step 2. Is that a mistake made while sharing the code here?
I assume you are using the dwImageStreamer_returnReceivedCUDA and dwImageStreamer_waitPostedCPU functions in Step 2 after using the frame for inference. If not, please incorporate those API calls into your code, following the image_streamer_simple sample.
As I understand it, you are able to process the first frame but cannot send the second frame to the CUDA consumer. It is difficult to figure out the mistake from the snippet you have provided. Would you mind sharing a complete sample and its input so that I can incorporate it as a sample in DriveWorks and reproduce the issue on our side? If you cannot share it publicly, please file a bug with the code and share the bug ID here to follow up.
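
For reference, one produce/consume cycle with a CPU→CUDA ImageStreamer looks roughly like this (a sketch modeled on the image_streamer_simple sample, not copied from it):

    dwImageCPU cpuImage;               // filled by the producer
    dwImageCUDA* cudaImage = nullptr;

    // Producer: post the CPU image into the streamer.
    CHECK_DW_ERROR(dwImageStreamer_postCPU(&cpuImage, streamer));

    // Consumer: receive the CUDA view and use it (inference, conversion, ...).
    CHECK_DW_ERROR(dwImageStreamer_receiveCUDA(&cudaImage, 5000, streamer));

    // Consumer: give the CUDA image back once done with it.
    CHECK_DW_ERROR(dwImageStreamer_returnReceivedCUDA(cudaImage, streamer));

    // Producer: wait until the CPU image is handed back so it can be reused.
    dwImageCPU* returned = nullptr;
    CHECK_DW_ERROR(dwImageStreamer_waitPostedCPU(&returned, 5000, streamer));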

Dear SivaRamaKrishna,

Indeed you are right: the repeated pitch line was a mistake, now corrected.
Concerning the functions dwImageStreamer_returnReceivedCUDA and dwImageStreamer_waitPostedCPU: in fact I didn't use them, because I don't really know how to use them. Should I call them once my first frame has been inferenced, and before the second frame can be converted? I tried them in my source code (before the return statement). As a consequence, the program doesn't crash anymore after the first inference. Still, I have a problem: the displayed image seems duplicated three times and doesn't vary, when it should. I can't display images using the [img] bbcode, so I don't know how to show it to you.

I am sending you:

  • The complete source code of my NetworkStreamer class (frame retrieval and conversion)
  • The sample calling its methods from the DriveNetApp class (inference)
  • The source code of the C++ class in charge of taking screenshots of a window's desktop and transmitting them via socket to the NetworkStreamer class on a Linux desktop

If you want to reproduce it, you will need two computers: a Windows one for the screenshot streaming and a Linux one for DriveWorks and the screenshot reception. Or maybe you can simulate this behavior another way, by grabbing an interleaved RGB frame from a video file and then trying to convert it to a planar RGB CUDA frame (see the sketch below).
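
For example, with OpenCV the deinterleaving itself could look like this (a sketch; "video.mp4" is a placeholder file name):

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main()
    {
        cv::VideoCapture cap("video.mp4");   // placeholder input file
        cv::Mat frame;
        if (!cap.read(frame)) return 1;      // grab one interleaved frame

        std::vector<cv::Mat> planes;         // for a BGR Mat: planes[0]=B, [1]=G, [2]=R
        cv::split(frame, planes);

        // Each plane is now a contiguous width x height single-channel buffer
        // that could back one dwImageCPU plane pointer.
        return 0;
    }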

==================================== [WINDOWS] StreamingClient =============================
—>>> StreamingClient.hpp:

/////////////////////////////////////////////////////////////////////////////////////////
// This code contains Ben Collado brain juice. Please make a donation.
/////////////////////////////////////////////////////////////////////////////////////////
#pragma once

//===> STANDARD <===
#include <vector>
#include <string>
#include <iostream>

//===> SOCKET COMM <===
#include <winsock.h>
#pragma comment(lib,"Ws2_32.lib")

#include <stdlib.h>

//===> OPENCV <===
#include "opencv2/opencv.hpp"

using namespace std;
using namespace cv;

class SocketClient
{
public:
    //#################################  M E T H O D S  ##########################################
    SocketClient();
    SocketClient(string ip_address, int port);
    ~SocketClient();

    void set_ip_address(string ip_address);
    void set_server_port(int port);

    int connect_to_serv();

    int sendMessage(string message, const int BUFFERSIZE);
    int sendMessage(char* imgData, size_t size);

    void CheckErrorMessage();

    void close();

protected:
    //##############################  V A R I A B L E S  #########################################
    int server_port;

    int thisSocket;
    struct sockaddr_in destination;

    string ip_address;
};

—>>> StreamingClient.cpp:

// Streaming client.cpp : Defines the entry point for the console application.
#include "stdafx.h"
#include "opencv2/opencv.hpp"
#include <Windows.h>
#include "SocketClient.hpp"

using namespace std;

Mat hwnd2mat(HWND hwnd)
{
    HDC hwindowDC, hwindowCompatibleDC;

    int height, width, srcheight, srcwidth;
    HBITMAP hbwindow;
    Mat src;
    BITMAPINFOHEADER bi;

    hwindowDC = GetDC(hwnd);
    hwindowCompatibleDC = CreateCompatibleDC(hwindowDC);
    SetStretchBltMode(hwindowCompatibleDC, COLORONCOLOR);

    RECT windowsize;    // get the height and width of the screen
    GetClientRect(hwnd, &windowsize);

    srcheight = windowsize.bottom;
    srcwidth = windowsize.right;
    height = 1080;  // change this to whatever size you want to resize to
    width = 1920;

    src.create(height, width, CV_8UC4);

    // create a bitmap
    hbwindow = CreateCompatibleBitmap(hwindowDC, width, height);
    bi.biSize = sizeof(BITMAPINFOHEADER);    // http://msdn.microsoft.com/en-us/library/windows/window/dd183402%28v=vs.85%29.aspx
    bi.biWidth = width;
    bi.biHeight = -height;  // this is the line that makes it draw upside down or not
    bi.biPlanes = 1;
    bi.biBitCount = 32;
    bi.biCompression = BI_RGB;
    bi.biSizeImage = 0;
    bi.biXPelsPerMeter = 0;
    bi.biYPelsPerMeter = 0;
    bi.biClrUsed = 0;
    bi.biClrImportant = 0;

    // use the previously created device context with the bitmap
    SelectObject(hwindowCompatibleDC, hbwindow);
    // copy from the window device context to the bitmap device context
    StretchBlt(hwindowCompatibleDC, 0, 0, width, height, hwindowDC, 0, 0, srcwidth, srcheight, SRCCOPY); // change SRCCOPY to NOTSRCCOPY for wacky colors!
    GetDIBits(hwindowCompatibleDC, hbwindow, 0, height, src.data, (BITMAPINFO *)&bi, DIB_RGB_COLORS);  // copy from hwindowCompatibleDC to hbwindow
    // avoid memory leaks
    DeleteObject(hbwindow);
    DeleteDC(hwindowCompatibleDC);
    ReleaseDC(hwnd, hwindowDC);

    return src;
}

int main()
{
    //**********************<  SOCKET CLIENT INSTANTIATION  >***********************
    string linux_desktop = "192.168.6.74";
    int port = 27015;
    int bytes = 0;

    //======< Initialize the connection >======
    SocketClient client = SocketClient(linux_desktop, port);

    //======< Connect to the server >======
    if (!client.connect_to_serv()) {
        cerr << "CONNECTION TO THE SERVER FAILED !\n";
        client.CheckErrorMessage();
        client.close();
        return 0;
    }

    int width = 1920;
    int height = 1080;

    //**********************<  SCREEN CAPTURE STUFF  >***********************
    //======< Get the desktop handle >======
    HWND hwndDesktop = GetDesktopWindow();
    Mat src;

    namedWindow("output", WINDOW_NORMAL);
    int frameId = 1;
    int key = 0;

    bool micro_test = false;
    if (micro_test) {
        src = hwnd2mat(hwndDesktop);
        imshow("output", src);
        waitKey(0);
    }

    //======< Display & send the frames >======
    // Note: key is never updated inside the loop (imshow/waitKey are commented
    // out), so the loop runs until the process is killed.
    while (key != 27)
    {
        //======< Convert the window's handle to an OpenCV matrix >======
        src = hwnd2mat(hwndDesktop);
        /*cout << ">>> Frame infos:\n"
             << "- Depth of the matrix = " << src.depth() << endl
             << "- Num rows = " << src.rows << endl
             << "- Num cols = " << src.cols << endl
             << "- Num channels = " << src.channels() << endl
             << "- Is continuous = " << src.isContinuous() << endl
             << "- Step = " << src.step.buf << " + " << src.step << endl
             << "- DataStart = " << src.datastart << endl
             << "- DataEnd = " << src.dataend << endl
             << "- DataLimit = " << src.datalimit << endl
             << src.flags << endl; */
        //======< SEND THE FRAME >======
        bytes = client.sendMessage((char *)src.data, src.total() * src.elemSize());
        cout << "[" << frameId << "] Num bytes sent = " << bytes << endl;
        if (bytes < 0) {
            cerr << "ERROR WHEN SENDING FRAME -[" << frameId << "]-\n";
            client.CheckErrorMessage();
            client.close();
            return 0;
        }

        ++frameId;
        //imshow("output", src);
    }
    return 0;
}

===============================================================================================
====================================[LINUX] NetworkStreamer ===============================

---->> NetworkStreamer.hpp

//===> SOCKET COMM <===
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

// Driveworks
#include <dw/core/Context.h>
#include <dw/sensors/Sensors.h>
#include <dw/sensors/camera/Camera.h>
#include <dw/image/Image.h>
#include <dw/image/FormatConverter.h>
#include <dw/isp/SoftISP.h>

// C++ Std
#include <memory>
#include <vector>
#include <type_traits>
#include <chrono>
#include <thread>
#include <string>
#include <iostream>

// Common
#include <framework/Checks.hpp>
#include <framework/SimpleStreamer.hpp>
#include <framework/SimpleFormatConverter.hpp>
#include <framework/SimpleCamera.hpp>

// Frame reception
#include "SocketServer.hpp"

// Frame management
#include "opencv2/opencv.hpp"

using namespace cv;

using namespace std;

namespace dw_samples
{
namespace common
{

class NetworkStreamer
{
public:
    //#################################  M E T H O D S  ##########################################
    NetworkStreamer(int server_port, dwContextHandle_t app_context);
    ~NetworkStreamer();
    
    void close();
    
    //============  Initialization  ===================    
    string establish_connection();

    //============  I & O  =============================
    dwImageGeneric* readFrame();

//##############################  V A R I A B L E S  #########################################
    struct Frame {
        Frame(uint32_t width_, uint32_t height_):
            width(width_),
            height(height_),
            data(width * height * 3) {
        }

        const uint32_t width;
        const uint32_t height;
        std::vector<uint8_t> data;
        std::vector<uint8_t> R;
        std::vector<uint8_t> G;
        std::vector<uint8_t> B;

        Frame& operator=(const Frame& rhs) = delete;
    };

    int frameId = 0;

protected:
    //#################################  M E T H O D S  ##########################################
    // ===========  Init the struct  ===================
    void initializeFrame();   
    void init_images_properties(); 
    void init_streamers();
    void init_converters();
    int init_images();

    //============  Conversions  =======================
    void copyConvert_matToFrame_interleaved(Mat *cv_frame);
    void copyConvert_matToFrame_interleaved(uchar *frame_data);
    void copyConvert_matToFrame_planar(uchar *frame_data);

    //##############################  V A R I A B L E S  #########################################
    //=============== Global stuff ========================
    dwContextHandle_t app_context;
    
    //=============== General Stuff =======================
    uint32_t width = 1920;
    uint32_t height = 1080;
    int channels = 3;    

    // Is a new frame available
    bool frame_is_pending;
    // Temp frame
    Frame temp_frame = Frame(width, height);

    //=============== Network Stuff =======================
    int server_port;
    string client_address;

    // Server
    SocketServer *Server;

    //=============== OpenCV Stuff ========================
    Mat cv_frame;

    //======== DriveWorks Frame management  ===============
    int frameID;

    //*** Properties of the input, display, processed image
    dwImageProperties input_properties{};  	//CPU-RGB
    dwImageProperties cuda_RGB2_properties{};	//CUDA-RGB
    dwImageProperties cuda_RGB_properties{}; 	//CPU-RGB with planeCount =3 & pxlType = FLOAT16

    //*** Image containers
    dwImageCPU cpu_RGB_FRAME{};
    dwImageCUDA *cuda_RGB_FRAME;

    //*** Image converters
    std::unique_ptr<GenericSimpleFormatConverter> RGBuint8_to_RGBfp16_converter;

    //*** Image streamers (CPU->CUDA)
    dwImageStreamerHandle_t CPU_to_CUDA_streamer = DW_NULL_HANDLE;
 
};

}
}

---->> NetworkStreamer.cpp:

#include "NetworkStreamer.hpp"

namespace dw_samples
{
namespace common
{

//===================  CONSTRUCTORS & GETTERS  ===============
NetworkStreamer::NetworkStreamer(int server_port, dwContextHandle_t app_context){
    this->server_port = server_port;
    
    this->app_context = app_context;

    frameId = 0;
    
    cout<<"\n====================================\n";
    cout<<">>> WAITING FOR CLIENT CONNECTION !\n";
    cout<<"======================================\n";
    Server = new SocketServer();
    Server->create_SocketServer();

    initializeFrame();

    init_images_properties();

    init_converters();

    init_streamers();

    if (not init_images()) cout << "Image memory allocation failed !!!" << endl;
}

NetworkStreamer::~NetworkStreamer(){
    
}

void NetworkStreamer::close(){
    Server->closeServer();
    delete(Server);

    dwImageStreamer_release(&CPU_to_CUDA_streamer);

    dwImageCPU_destroy(&cpu_RGB_FRAME);
}

//******************************************************************
//==========================  P U B L I C  =========================
//******************************************************************

//#######  networking stuff: create socket , connect ###############
string NetworkStreamer::establish_connection(){

    // Connect with the streaming client
    if (not Server->waitForConnection()){
        throw std::runtime_error("[SocketServer] Error when establishing the connection !");
    }
    client_address = Server->get_client_address();

    return client_address;
}

//=*=*=*=*=*=*=*=*=*=*||  Catch a frame  ||*=*=*=*=*=*=*=*=*==*=*
dwImageGeneric* NetworkStreamer::readFrame()
{

    //=====<  Retrieve the RGBQUAD-encoded frame  >=====
    int frame_byte_size = width*height*4;  // the input frame is RGBX: 4 bytes per pixel

    cout<< ">>> Waiting for frame ["<<frameId<<"]\n";

    std::vector<uchar> frame_data(frame_byte_size);
    int bytes = Server->receiveData(frame_data.data(), frame_byte_size);
    if (bytes == 0) {
        std::cout << "Camera reached end of stream." << std::endl;
        return nullptr;
    }
    else if (bytes == -1) {
        cout<<"Error reading from stream ! (no frame)\n";
        return nullptr;
    }
    else if (bytes != frame_byte_size){
        cout<<"Error reading from camera ! (corrupted frame)"<<endl;
        return nullptr;
    }

    //=====<  First conversion phase  >=====
    try{
        copyConvert_matToFrame_planar(frame_data.data());
    }
    catch (...){
        cout<<"DAMN IT! Conversion failed.\n";
        return nullptr;
    }
    cout<<"frame_data converted to planar buffers"<<endl;

    //============================================================
    //=====<  Fill the dwImageCPU with the planar buffers  >=====
    //============================================================
    // cpu_RGB_FRAME was already created once in init_images(); here we only
    // timestamp it (using the global time base) and point it at the fresh data.
    dwContext_getCurrentTime(&cpu_RGB_FRAME.timestamp_us, app_context);
    cpu_RGB_FRAME.data[0] = &(temp_frame.R[0]);
    cpu_RGB_FRAME.data[1] = &(temp_frame.G[0]);
    cpu_RGB_FRAME.data[2] = &(temp_frame.B[0]);
    cpu_RGB_FRAME.pitch[0] = width;  // bytes per row of one uint8 plane
    cpu_RGB_FRAME.pitch[1] = width;  // (width * 3 would be an interleaved pitch)
    cpu_RGB_FRAME.pitch[2] = width;
    cpu_RGB_FRAME.prop.width = width;
    cpu_RGB_FRAME.prop.height = height;

    cout<<"CPU frame initialized"<<endl;
    //----------- STREAMING:  RGB(cpu) => RGB(cuda)  ---------------------------
    // Post
    dwStatus result = dwImageStreamer_postCPU(&cpu_RGB_FRAME, CPU_to_CUDA_streamer);
    if(DW_SUCCESS != result) {
        throw std::runtime_error("Failed to post the CPU_RGB image");
    }
    cout<<">>> POST(cpu->cuda) = "<<dwGetStatusName(result)<<endl;
    // Receive converted
    result = dwImageStreamer_receiveCUDA(&cuda_RGB_FRAME, 5000, CPU_to_CUDA_streamer);
    if(DW_SUCCESS != result) {
        throw std::runtime_error("Failed to receive the CUDA_RGB image");
    }
    cout<<">>> cuda_RGB_FRAME filled: "<<dwGetStatusName(result)<<endl;

    //----------- CONVERSION:  RGB(cuda)_uint8 => RGB(cuda)_fp16  ---------------------------
    dwImageGeneric *imgInput = GenericImage::fromDW(cuda_RGB_FRAME);
    dwImageGeneric *imgOutput = RGBuint8_to_RGBfp16_converter->convert( imgInput );
    frameId++;

    //======  Releases  ======
    // The UserSensor does not need its frame returned, but we must reclaim the
    // resources from the streamer: give the CUDA image back first, then wait
    // until the posted CPU image is handed back so it can be reused next frame
    // (the order used by the image_streamer_simple sample).
    dwImageStreamer_returnReceivedCUDA(cuda_RGB_FRAME, CPU_to_CUDA_streamer);
    dwImageCPU* returnedFrame;
    dwImageStreamer_waitPostedCPU(&returnedFrame, 32000, CPU_to_CUDA_streamer);
    //========================

    return imgOutput;
}

//******************************************************************
//==========================  P R I V A T E  =======================
//******************************************************************
void NetworkStreamer::initializeFrame(){

    for (uint32_t row = 0 ; row < height ; row++) {

        for (uint32_t col = 0 ; col < width ; ++col) {

            for(uint32_t ch = 0 ; ch < 3 ; ++ch) {
                temp_frame.data[3*(row*width+col) + ch] = static_cast<uint8_t>(static_cast<int32_t>(row));
            }
        }
    }

}

void NetworkStreamer::init_images_properties(){
    //======<  Input frame properties  >======
    input_properties.width = width;
    input_properties.height = height;
    input_properties.planeCount = 3;
    input_properties.pxlType = DW_TYPE_UINT8;
    input_properties.pxlFormat = DW_IMAGE_RGB;
    input_properties.type = DW_IMAGE_CPU;

    //=====<  cuda_RGB_properties  >=============
    cuda_RGB_properties = input_properties;
    cuda_RGB_properties.type = DW_IMAGE_CUDA;
 
    //=====<  cuda_RGB2_properties: FP16 >======
    cuda_RGB2_properties = cuda_RGB_properties;
    cuda_RGB2_properties.pxlType = DW_TYPE_FLOAT16;
}

void NetworkStreamer::init_converters(){
    RGBuint8_to_RGBfp16_converter.reset( new GenericSimpleFormatConverter(cuda_RGB_properties, cuda_RGB2_properties, app_context) );
    cout<<"<<<  RGB_to_YUV converter initialized."<<endl;
}

void NetworkStreamer::init_streamers(){
    dwImageStreamer_initialize(&CPU_to_CUDA_streamer,
                               &input_properties,
                               DW_IMAGE_CUDA,
                               app_context);
}

void NetworkStreamer::copyConvert_matToFrame_planar(uchar *frame_data){
    // Start from empty planes each frame; otherwise the vectors keep growing
    // and the data[] pointers handed to the dwImageCPU keep pointing at the
    // first frame's pixels.
    temp_frame.R.clear();
    temp_frame.G.clear();
    temp_frame.B.clear();

    for (uint32_t row = 0 ; row < height ; row++) {

        for (uint32_t col = 0 ; col < width ; col++) {
            // the source is RGBX: 4 bytes per pixel, channel selected inside the brackets
            const uint32_t index_s = 4*(row*width+col);
            temp_frame.R.push_back(frame_data[index_s + 0]);
            temp_frame.G.push_back(frame_data[index_s + 1]);
            temp_frame.B.push_back(frame_data[index_s + 2]);
        }
    }
}

void NetworkStreamer::copyConvert_matToFrame_interleaved(Mat *cv_frame){
    cout<<"Size of target vector = "<<temp_frame.data.size()<<"\n";
    cout<<"Size of source vector = "<<cv_frame->total()*cv_frame->elemSize()<<endl;
    uint32_t index_d, index_s;

    for (uint32_t row = 0 ; row < height ; row++) {

        for (uint32_t col = 0 ; col < width ; col++) {

            for(uint32_t ch = 0 ; ch < 3 ; ch++) {
                // source is RGBX (4 bytes per pixel), destination is RGB (3 bytes)
                index_s = 4*(row*width+col)+ch;
                index_d = 3*(row*width+col)+ch;

                temp_frame.data[index_d] = static_cast<uint8_t>( cv_frame->data[index_s] );
            }
        }
    }
}

int NetworkStreamer::init_images(){
    dwStatus success = dwImageCPU_create(&cpu_RGB_FRAME, &input_properties);
    //success = dwImageCUDA_create(&cuda_RGB_FRAME, &cuda_RGB_properties, DW_IMAGE_CUDA_PITCH);
    //success = dwImageCUDA_create(&cuda2_RGB_FRAME, &cuda2_RGB_properties, DW_IMAGE_CUDA_PITCH);
    //success = dwImageCUDA_create(&cuda_RGBA_FRAME, &cuda_RGBA_properties, DW_IMAGE_CUDA_PITCH);
    
    if (success != DW_SUCCESS) return 0;
    
    return 1;
}

}
}

=====================================================================================================
============================== DriveNetApp.cpp call ===========================================

void DriveNetApp::getNextFrame(dwImageCUDA** nextFrameCUDA, dwImageGL** nextFrameGL)
{ 
    *nextFrameCUDA = GenericImage::toDW<dwImageCUDA>(camera->readFrame());
    if (*nextFrameCUDA == nullptr) {
        camera->resetCamera();
    } else {
        *nextFrameGL = streamerCUDA2GL->post(GenericImage::toDW<dwImageCUDA>(converterToRGBA->convert(GenericImage::fromDW(*nextFrameCUDA)) ));
    }
}

/*
  Grab the frame from the network
*/
void DriveNetApp::getNextFrame_from_network(dwImageCUDA** nextFrameCUDA, dwImageGL** nextFrameGL)
{
    *nextFrameCUDA = GenericImage::toDW<dwImageCUDA>(networkStreamer->readFrame());
    if (*nextFrameCUDA == nullptr) {
        cout<<"===>> NEXTFRAMECUDA = NULLPTR !!  <<===\n";
    } else {
        *nextFrameGL = streamerCUDA2GL->post(GenericImage::toDW<dwImageCUDA>(converterToRGBA->convert(GenericImage::fromDW(*nextFrameCUDA)) ));
    }
}

Thank you :)

Dear Ben,
It seems the code you have shared is incomplete (for instance, I don't see getNextFrame_from_network or getNextFrame being used here, nor the CUDA->GL streamer used for display). This is an implementation issue specific to your use case. It would be great if you shared the video file along with a code snippet to read and process it, so that we are both on the same page.

Should I call them once my first frame has been inferenced, and before the second frame can be converted?

Yes, if you are implementing it as a single-threaded application with a CPU image producer and a CUDA consumer: the CUDA consumer receives a frame, runs inference on it, and releases the frame back to the CPU side.

You can check the frame data (received via the network in this case) before sending it to the CUDA side through the ImageStreamer, and check the same data on the CUDA receiver side, to confirm whether a new frame is actually received or the old frame is being replayed. I suspect the old frame is being replayed instead of a new one.
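
One way to check (a sketch; it assumes the dwImageCUDA exposes its planes through dptr[] and pitch[], and the checksum is only a debugging aid):

    #include <cuda_runtime.h>
    #include <cstdint>
    #include <vector>

    // Simple debug checksum over a byte buffer.
    static uint64_t checksum(const uint8_t* p, size_t n)
    {
        uint64_t sum = 0;
        for (size_t i = 0; i < n; ++i) sum = sum * 131 + p[i];
        return sum;
    }

    // CPU side, before dwImageStreamer_postCPU (plane 0 = R, width*height bytes):
    //     uint64_t cpuSum = checksum(cpu_RGB_FRAME.data[0], width * height);

    // CUDA side, after dwImageStreamer_receiveCUDA: copy plane 0 back and hash it.
    //     std::vector<uint8_t> host(width * height);
    //     cudaMemcpy2D(host.data(), width,
    //                  cuda_RGB_FRAME->dptr[0], cuda_RGB_FRAME->pitch[0],
    //                  width, height, cudaMemcpyDeviceToHost);
    //     uint64_t gpuSum = checksum(host.data(), host.size());
    //
    // If gpuSum never changes from frame to frame, the old frame is being replayed.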

Dear SivaRamaKrishna,

Apart from the following code snippet showing the call to getNextFrame_from_network(), I don't think anything is missing. All the processing and conversion is done inside the NetworkStreamer::readFrame() method. The application I am developing is based on the DriveNet sample.

—> getNextFrame_from_network() call in main.cpp:

void FreeLaneDriveApp::onProcess()
{
    std::cout<<"onProcess() ->> Read frame\n";
    /*
    if (frameCount==0){
        DriveNetApp::getNextFrame(&inputImage, &m_imgGl);
    }
    else{*/
    
    DriveNetApp::getNextFrame_from_network(&inputImage, &m_imgGl);
        

    std::this_thread::yield();
    while (inputImage == nullptr) {
        DriveNetApp::resetDetector();
        DriveNetApp::resetTracker();
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
        DriveNetApp::getNextFrame_from_network(&inputImage, &m_imgGl);
    }

    // detect objects, track and get the results
    cout<<">>> STARTING INFERENCE !!!!!"<<endl;
    DriveNetApp::inferDetectorAsync(inputImage);
    DriveNetApp::inferTrackerAsync(inputImage);

    // Process the results
    //std::cout<<"onProcess() ->> processResults()\n";
    //std::cin.get();
    DriveNetApp::processResults();

    frameCount++;

}

—>>> DriveNetApp::inferDetectorAsync() :

void DriveNetApp::inferDetectorAsync(const dwImageCUDA* rcbImage)
{
    // we feed two images to the DriveNet module, the first one will have full ROI
    // the second one, is the same image, however with an ROI cropped in the center
    const dwImageCUDA* rcbImagePtr[2] = {rcbImage, rcbImage};
    CHECK_DW_ERROR(dwObjectDetector_inferDeviceAsync(rcbImagePtr, 2U, driveNetDetector));
}

—>>> DriveNetApp::inferTrackerAsync() :

void DriveNetApp::inferTrackerAsync(const dwImageCUDA* rcbImage)
{
    // track feature points on the rcb image
    CHECK_DW_ERROR(dwObjectTracker_featureTrackDeviceAsync(rcbImage, objectTracker));
}

On the other hand, there is no video file involved, since the frame is retrieved from the network. Instead, a screenshot of the desktop of a Windows computer is converted to an RGB frame (see StreamingClient.cpp; the main.cpp performing the screenshot and the streaming via socket is there).
Finally, how can I transmit files and pictures to you easily? The [img] bbcode doesn't work for me.

Thank you.

Dear Ben,
Please file a bug with the source code and input file.
Please log in to https://developer.nvidia.com/drive with your credentials, then go to MyAccount -> MyBugs -> Submit a new bug. Please share the bug ID here.

Dear SivaRamaKrishna,

BugId: 2326547
I can't find how to attach a file to the bug thread…

If the file is big, you can upload it to Google Drive and share the link in the email.

Alright, I just noticed it as well.

Dear SivaRamaKrishna,

I shared the Drive folder containing information and source code about my problem with the email address provided in the bug report.

Dear Ben,
Can you confirm whether you have now sent the file to that email address? Earlier, in the bug, you said you had not sent it.