Hi, we are trying to build a product that reads streaming video from a webcam using OpenCV.
We are following the DeepStream SDK samples, in particular /deepstream/samples/nvDecInfer_detection. The problem is to implement the same detection pipeline for a live webcam stream rather than reading from a file.
- We are reading frames with the OpenCV VideoCapture API.
- As per the DeepStream docs, we need to push video packets into a packet cache from a user thread. This snippet is from the sample:
- A video frame is not the same as a video packet, so what is the right way to push frames captured with OpenCV? (Our rough idea is sketched after our capture code below.)
// what the users need to do is
// push video packets into a packet cache
std::vector<std::thread> vUserThreads;
for (int i = 0; i < g_nChannels; ++i) {
    vUserThreads.push_back(std::thread(userPushPacket, vpDataProviders[i], pDeviceWorker, i));
}
Reference :
Here is the sample code we are using to read frames from the webcam with OpenCV:
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int openWebCam() {
    // Create a VideoCapture object and open the input file.
    // If the input is the web camera, pass 0 instead of the video file name.
    VideoCapture cap(0);

    // Check if the camera opened successfully
    if (!cap.isOpened()) {
        cout << "Error opening video stream or file" << endl;
        return -1;
    }

    while (true) {
        Mat frame;
        // Capture frame-by-frame
        cap >> frame;

        // If the frame is empty, break immediately
        if (frame.empty())
            break;

        // Display the resulting frame
        imshow("Frame", frame);

        // Press ESC on the keyboard to exit
        char c = (char)waitKey(25);
        if (c == 27)
            break;
    }

    // When everything is done, release the video capture object
    cap.release();

    // Close all the frames
    destroyAllWindows();
    return 0;
}
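
Our current (unverified) idea is that, since the sample's decoder expects encoded packets while OpenCV hands back decoded frames, we would have to encode each captured frame back into a packet before pushing it into the packet cache. Below is a rough sketch of what we mean. encodeFrameToH264() is hypothetical (it is not an OpenCV or DeepStream function; something like libavcodec would be needed to actually produce H.264 packets), and PacketQueue is just a simple stand-in for the packet cache used by the sample's DataProvider.

#include <opencv2/opencv.hpp>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Hypothetical helper: encode one BGR frame into an H.264 elementary-stream packet.
std::vector<unsigned char> encodeFrameToH264(const cv::Mat& frame);

// Minimal thread-safe queue standing in for the sample's packet cache.
class PacketQueue {
public:
    void push(std::vector<unsigned char> pkt) {
        std::lock_guard<std::mutex> lock(m_);
        q_.push(std::move(pkt));
        cv_.notify_one();
    }
    std::vector<unsigned char> pop() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        std::vector<unsigned char> pkt = std::move(q_.front());
        q_.pop();
        return pkt;
    }
private:
    std::queue<std::vector<unsigned char>> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

// Producer thread: capture frames from the webcam, encode each one,
// and push the resulting packet into the cache (analogous to userPushPacket).
void userPushWebcamPackets(PacketQueue& cache) {
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return;
    cv::Mat frame;
    while (cap.read(frame)) {
        cache.push(encodeFrameToH264(frame));   // hypothetical encode step
    }
}

Is re-encoding like this the intended approach, or is there a way to bypass the decoder stage and feed raw frames from OpenCV directly into the inference part of the pipeline?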