@pilotfdd Thank you very much!
It works up to the point of executing the step below:
deepstream-app -c deepstream_app_source1_facedetectir.txt
However, the two downloaded files are
resnet18_facedetectir_pruned.etlt and facedetectir_int8.txt,
which are the result of executing the README commands below:
mkdir -p ../../models/tlt_pretrained_models/facedetectir && \
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_facedetectir/versions/pruned_v1.0/files/resnet18_facedetectir_pruned.etlt \
-O ../../models/tlt_pretrained_models/facedetectir/resnet18_facedetectir_pruned.etlt && \
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_facedetectir/versions/pruned_v1.0/files/facedetectir_int8.txt \
-O ../../models/tlt_pretrained_models/facedetectir/facedetectir_int8.txt
so why should an engine file named
resnet18_facedetectir_pruned.etlt_b1_gpu0_fp16.engine
be created?
I can see there are only these files:
/opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/facedetectir/resnet18_facedetectir_pruned.etlt
/opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/facedetectir/resnet18_facedetectir_pruned.etlt_b1_gpu0_int8.engine
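As far as I understand, nvinfer composes the engine file name from the model file, the batch size, the GPU id, and the precision selected by network-mode in config_infer_primary_facedetectir.txt (0=FP32, 1=INT8, 2=FP16), and it builds that engine from the .etlt on the first run if it is not already present. On GPUs without INT8 support it falls back to FP16, which would explain why a ..._fp16.engine is expected even though only the ..._int8.engine exists. A minimal sketch of the config keys involved (the paths, key and values below are just what I would expect, please check against your local file):

[property]
# encoded TLT model downloaded from NGC
tlt-encoded-model=../../models/tlt_pretrained_models/facedetectir/resnet18_facedetectir_pruned.etlt
tlt-model-key=tlt_encode
int8-calib-file=../../models/tlt_pretrained_models/facedetectir/facedetectir_int8.txt
# engine nvinfer looks for / generates; the suffix must match the precision actually used
model-engine-file=../../models/tlt_pretrained_models/facedetectir/resnet18_facedetectir_pruned.etlt_b1_gpu0_int8.engine
batch-size=1
# 0=FP32, 1=INT8, 2=FP16 -- with FP16 the generated file ends in _fp16.engine
network-mode=1

If the model-engine-file name and the precision nvinfer ends up using disagree (e.g. an INT8 name on a GPU that forces FP16), nvinfer simply rebuilds the engine and saves it under the precision it actually used, so both file names can be "correct" depending on the platform.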
Trying the other pipeline with:
/opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models$ cp ../../streams/sample_1080p_h264.mp4 test.mp4
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/test.mp4 ! \
qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 \
nvstreammux name=m batch-size=1 width=1280 height=720 ! \
nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/config_infer_primary_facedetectir.txt batch-size=1 unique-id=1 ! \
nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so ! \
nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! \
nvvideoconvert ! nvdsosd ! nvoverlaysink
it works, but it doesn't seem to detect any faces in the played video, does it?
The original example also seems to come set up the same way, so detection doesn't appear to happen there either, right?
It does seem to draw bounding boxes around faces when running
deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/deepstream_app_source1_peoplenet.txt
but doesn't seem to show detections or draw boxes when running
deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/deepstream_app_source1_facedetectir.txt
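Two things I would check here (both guesses on my part): FaceDetectIR is trained on infrared imagery, so on the regular RGB sample clip it may score faces below the clustering threshold, and the faces in sample_1080p_h264.mp4 are fairly small relative to the 384x240 network input. A quick experiment is to lower the detection threshold in config_infer_primary_facedetectir.txt (the value below is only illustrative, not a recommended setting):

[class-attrs-all]
# lower the per-class threshold just to see whether any boxes appear at all
pre-cluster-threshold=0.05

If boxes show up with a very low threshold, the model and pipeline are working and the issue is the input domain (IR vs. visible-light video) rather than the configuration.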
Another complication is that it doesn't seem to support input from a V4L2 USB camera:
deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/deepstream_app_source1_peoplenet.txt
(deepstream-app:26591): GStreamer-CRITICAL **: 00:28:24.050: passed '0' as denominator for `GstFraction'
(deepstream-app:26591): GStreamer-CRITICAL **: 00:28:24.050: passed '0' as denominator for `GstFraction'
** ERROR: <create_camera_source_bin:160>: Failed to link 'src_elem' (image/jpeg; video/mpeg, mpegversion=(int)4, systemstream=(boolean)false; video/mpeg, mpegversion=(int)2; video/mpegts, systemstream=(boolean)true; video/x-bayer, format=(string){ bggr, gbrg, grbg, rggb }, width=(int)[ 1, 32768 ], height=(int)[ 1, 32768 ], framerate=(fraction)[ 0/1, 2147483647/1 ]; video/x-dv, systemstream=(boolean)true; video/x-h263, variant=(string)itu; video/x-h264, stream-format=(string){ byte-stream, avc }, alignment=(string)au; video/x-pwc1, width=(int)[ 1, 32768 ], height=(int)[ 1, 32768 ], framerate=(fraction)[ 0/1, 2147483647/1 ]; video/x-pwc2, width=(int)[ 1, 32768 ], height=(int)[ 1, 32768 ], framerate=(fraction)[ 0/1, 2147483647/1 ]; video/x-raw, format=(string){ RGB16, BGR, RGB, GRAY8, GRAY16_LE, GRAY16_BE, YVU9, YV12, YUY2, YVYU, UYVY, Y42B, Y41B, YUV9, NV12_64Z32, NV24, NV61, NV16, NV21, NV12, I420, BGRA, BGRx, ARGB, xRGB, BGR15, RGB15 }, width=(int)[ 1, 32768 ], height=(int)[ 1, 32768 ], framerate=(fraction)[ 0/1, 2147483647/1 ]; video/x-sonix, width=(int)[ 1, 32768 ], height=(int)[ 1, 32768 ], framerate=(fraction)[ 0/1, 2147483647/1 ]; video/x-vp8; video/x-vp9; video/x-wmv, wmvversion=(int)3, format=(string)WVC1) and 'src_cap_filter1' (video/x-raw, width=(int)0, height=(int)0)
** ERROR: <create_camera_source_bin:215>: create_camera_source_bin failed
** ERROR: <create_pipeline:1296>: create_pipeline failed
** ERROR: <main:636>: Failed to create pipeline
Quitting
App run failed
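The caps in the error (video/x-raw, width=(int)0, height=(int)0) suggest the [source0] group in the config is not carrying a valid camera resolution. A sketch of a V4L2 camera source group for deepstream-app is below; the resolution, frame rate and device node are assumptions, so set them to a mode your camera actually advertises (v4l2-ctl --list-formats-ext shows them):

[source0]
enable=1
# type 1 = CameraV4L2
type=1
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
# 0 means /dev/video0
camera-v4l2-dev-node=0

With width/height left at 0 the src_cap_filter cannot negotiate with the camera, which matches the "Failed to link 'src_elem'" message above.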
@pilotfdd However, if you could share the code to save/crop faces from the video, it would be helpful to try and to use as a reference.
Thank you very much
Any ideas on how we can use FaceDetectIR/PeopleNet with the test4/test5 apps that have the MQTT option, so that the detection data gets to AWS via MQTT? Also, any idea how to get it working with a USB camera? Perhaps it doesn't work with the camera or the default sample file due to the wrong resolution? Does FaceDetectIR require input files to be exactly 384x240? So far I have tried 1080p inputs and that did not work.
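For the broker part, the test5 app and deepstream-app send detection metadata out through a MsgConvBroker sink (type=6), and the protocol adapter is selected by msg-broker-proto-lib; as far as I know DeepStream 5.0 ships Kafka, AMQP and Azure adapters, so reaching AWS over MQTT would need an MQTT protocol adapter on top of this. A sketch of such a sink group, with Kafka shown purely as an illustration (the connection string, topic and msg-conv config file are placeholders):

[sink1]
enable=1
# 6 = message converter + message broker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
# 0 = full DeepStream schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_kafka_proto.so
msg-broker-conn-str=<broker-host>;<port>;<topic>
topic=<topic>

On the resolution question: as far as I know the streammux/nvinfer stages scale frames to the 384x240 network input automatically, so a 1080p file or camera should not need to match 384x240; the camera failure above looks more like the missing [source0] resolution than the model input size.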