Exception: jetson.utils -- failed to create videoSource device

If you have your own 1D line buffer, then I would use that to fill the CUDA image, until the image is full and ready to be processed with imageNet.

May I ask, how do I fill the CUDA image with my line buffer?

You would keep track of which row of the image you are currently on, use a memcpy or loop to fill the pixels of that line, and then increment the row counter. When you have filled the last row, process the image with imageNet.Classify() and begin filling it again from the first row.

First, use cudaAllocMapped() to allocate your image buffer once at the beginning of the program, instead of on every iteration. Then do your memcpy(img_ptr, add, channels * length) each time new line data comes in. When lineNumber reaches 256, run the classification. You don’t need the cudaMemcpy(), and I also don’t think you need loadImage() anymore, because you are filling your own buffer.
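The accumulation loop described above can be sketched as follows. This is a host-memory simulation so the logic is easy to follow: in the real program the buffer would come from cudaAllocMapped() (jetson-utils) and the classify step would be imageNet::Classify(); the WIDTH/HEIGHT values, the uchar3_t stand-in struct, and runSimulation() are all illustrative assumptions, not part of the library.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <cstdlib>

// Stand-in for CUDA's uchar3 so this sketch compiles without CUDA headers.
struct uchar3_t { uint8_t x, y, z; };

const int WIDTH  = 256;   // assumed line length in pixels
const int HEIGHT = 256;   // assumed number of lines per frame

// Feed 'totalLines' incoming lines into the image buffer and return how
// many full frames were classified along the way.
int runSimulation(int totalLines)
{
    // allocate ONCE, before the capture loop (cudaAllocMapped() in the real code)
    uchar3_t* img = (uchar3_t*)malloc(WIDTH * HEIGHT * sizeof(uchar3_t));
    uchar3_t line[WIDTH] = {};        // incoming 1D line buffer from the sensor
    int lineNumber = 0, classified = 0;

    for (int n = 0; n < totalLines; n++)
    {
        // ...fill 'line' from your sensor here...
        // copy the line into the next row of the image
        memcpy(img + lineNumber * WIDTH, line, WIDTH * sizeof(uchar3_t));
        lineNumber++;

        if (lineNumber >= HEIGHT)     // image full -> classify, then restart
        {
            classified++;             // real code: net->Classify(img, WIDTH, HEIGHT)
            lineNumber = 0;           // begin filling again from the first row
        }
    }

    free(img);
    return classified;
}
```

Feeding in 512 lines with a 256-line frame height yields two completed frames, which is the behavior the loop above is meant to produce.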

Thank you sir for your response.
I want to save the line-buffer pointer img_ptr (of type uchar3) into an image so I can pass it to the classification part of the code. Is that even possible, sir?

Can the functions below be used?

bool saveImageRGBA( const char* filename, float4* cpu, int width, int height, float max_pixel )

or

bool saveImage( const char* filename, void* ptr, int width, int height, imageFormat format, int quality, const float2& pixel_range, bool sync )
https://github.com/dusty-nv/jetson-utils/blob/master/image/imageIO.cpp

or

// saveImageRGBA

bool saveImageRGBA( const char* filename, float4* ptr, int width, int height, float max_pixel, int quality )
{
	return saveImage(filename, ptr, width, height, IMAGE_RGBA32F, quality, make_float2(0, max_pixel));
}

Also, Sir, do we have sample code for detectnet.py in a C++ version?

Regards,
Karishma

Those functions save the image to disk. What it sounds like you want to do is accumulate your line buffer into the CUDA image, by doing a memcpy of the line buffer into the next line of the CUDA image.

The C++ sample is detectnet.cpp which you have already been using.

If you just run detectnet /home/sesotec-ai-2/tk-recognition/sesotec.bmp /home/sesotec-ai-2/tk-recognition/sesotec_output.jpg does it detect objects? That way you can first test whether it is working on that image.

If you don’t want to parse these flags from the command line, you can just do this:

const uint32_t overlayFlags = detectNet::OVERLAY_BOX | detectNet::OVERLAY_LABEL | detectNet::OVERLAY_CONFIDENCE;
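For clarity, the line above combines bit flags with bitwise OR, which is the usual pattern for detectNet overlay options. The sketch below mirrors that pattern with an illustrative enum (the values shown are assumptions, not the library's actual enum values) so you can see how each option is packed into, and later tested out of, a single uint32_t.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative mirror of the detectNet overlay-flag pattern; the names
// echo detectNet::OVERLAY_* but the numeric values here are assumed.
enum OverlayFlags : uint32_t
{
    OVERLAY_NONE       = 0,
    OVERLAY_BOX        = 1 << 0,   // draw bounding boxes
    OVERLAY_LABEL      = 1 << 1,   // draw class labels
    OVERLAY_CONFIDENCE = 1 << 2,   // draw confidence values
};

// Combine the desired overlays into one flags word, as on the line above.
uint32_t makeOverlayFlags()
{
    return OVERLAY_BOX | OVERLAY_LABEL | OVERLAY_CONFIDENCE;
}

// Check whether a particular overlay was requested (bitwise AND).
bool hasOverlay(uint32_t flags, OverlayFlags f)
{
    return (flags & f) != 0;
}
```

The combined word is then passed as the overlay argument when running detection, and the library checks each bit the same way hasOverlay() does.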

What is the sesotec.bmp image? It appears to be all white with a square? If that’s the case, I wouldn’t expect the model to detect anything in that.

Greetings Sir,
Sorry, sir, for the late response. Yes, Sir, it is an image of green and white boxes. I thought the model would detect the colors and draw bounding boxes around the detected colors.
Thank you very much. You have been a great mentor and very patient with my queries. Can’t thank you enough, Sir.

Best regards,
Karishma

