Deploying a trained model for real-time object recognition from my laptop webcam

I trained a hand gesture model in NVIDIA DIGITS on an American Sign Language dataset. It also works fine on test images, but how can I use my laptop webcam to test images in real time instead of browsing for an image every single time?
I have seen inference tutorials for the NVIDIA Jetson but none for a laptop webcam. Kindly share your work in the comments below if someone has already done this.



You can still use jetson_inference for the deployment.
The sample can also be compiled in an x86 environment.

For the webcam, we assume you can access it through the V4L2 interface as a USB camera.

$ ./detectnet-camera --camera=/dev/video0     # using PedNet,  V4L2 camera /dev/video0 (1280x720)
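If you are not sure which device node your laptop webcam is exposed on, a quick shell check can tell you (a minimal sketch; device names vary by machine, and an external USB camera may appear as /dev/video1 or higher):

```shell
# Look for V4L2 device nodes; built-in laptop webcams usually
# show up as /dev/video0.
DEV=""
for d in /dev/video*; do
    if [ -e "$d" ]; then
        DEV="$d"
        break
    fi
done

if [ -n "$DEV" ]; then
    echo "Found V4L2 device: $DEV"
else
    echo "No V4L2 device found"
fi
```

Whatever device this reports is what you pass to the sample via `--camera=/dev/videoN`.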

It’s recommended to give it a try first.


Hey, thanks for the reply. But I am getting the CMake error below when building the Two Days to a Demo repo:

CMake Error at CMakeLists.txt:74 (find_package):
  Could not find a configuration file for package "OpenCV" that is compatible
  with requested version "3.0.0".

  The following configuration files were considered but not accepted:

    /usr/share/OpenCV/OpenCVConfig.cmake, version:

Could you help me with this?


The sample requires the OpenCV 3.x package.
Could you try installing it?


So it will not work with OpenCV 4.1.1? Do I need to change to the 3.x version to run the Two Days to a Demo tutorial?


You can give it a try.
It should work if you update the CMakeLists.txt file here:

OpenCV 3.x is the pre-installed version on the Jetson platform, so we pinned that version in the CMakeLists.txt.
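If you would rather keep OpenCV 4.1.1, one possible (untested) change is to drop the version pin at the find_package line reported in the error. The exact original line may differ from the comment below, and the sample's source is only verified against the 3.x API (some constants and headers were renamed in 4.x), so further source fixes may still be needed:

```cmake
# Around CMakeLists.txt:74 -- the original pins OpenCV 3.0.0,
# roughly like:
#   find_package(OpenCV 3.0.0 REQUIRED)
# Dropping the version requirement lets CMake accept the
# installed 4.1.1 instead:
find_package(OpenCV REQUIRED)
message(STATUS "Found OpenCV ${OpenCV_VERSION}")
```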