Greetings all,
After finishing the DeepStream on Nano course I felt pretty confident that I could get the local render/overlay working quickly. Unfortunately, the USB camera code included in the course ran very slowly, making it unsuitable for real-time use and nullifying any practical application for me. In the hope of saving others some time, here is code that currently runs on the DLI Nano image with DeepStream SDK 4.0.2, takes a single USB camera as input, and displays on the local machine. I found this example code elsewhere on the forum and only had to modify it slightly for my camera setup.
Before you start:
–Run v4l2-ctl -d /dev/video0 --list-formats-ext, replacing /dev/video0 with the appropriate camera device
–In the command's output, note the resolution, encoding, and framerate; you need to set those in the GStreamer pipeline
–In my case I am running 800x600 at 20 fps, so the caps call looks like:

caps_uyvy = gst_caps_new_simple ("video/x-raw",
                                 "format", G_TYPE_STRING, "YUY2",
                                 "width", G_TYPE_INT, 800,
                                 "height", G_TYPE_INT, 600,
                                 "framerate", GST_TYPE_FRACTION, 20, 1,
                                 NULL);
This should at least get you up and running with a USB camera and DeepStream with local display. I tested this on a Logitech C270. I copied the Tiny YOLO config from example 3 into the same directory (dirty, I know…), so for this to work you'll need to do the same (or update the path).
In this configuration, detection accuracy under Tiny YOLOv3 is very poor. I saw better accuracy running YOLO standalone than through DeepStream, so I'm wondering if I have something configured incorrectly.
deepstream_test1_app.c (10.6 KB)
dstest3_pgie_config_yolov3_tiny.txt (3.2 KB)