Parallel recording of video during inference

Starting from this sample code: deepstream_python_apps/apps/deepstream-test1-usbcam at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub

Would there be a way to record the annotated video with detected bounding boxes to file?

OK, a bit more:

This is a pipeline that works from the console; it runs inference on one camera and displays the result:

gst-launch-1.0 v4l2src device=/dev/video0 ! "image/jpeg,width=640,height=480" ! jpegdec ! videoconvert ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! m.sink_0 nvstreammux name=m batch-size=1 width=640 height=480 ! nvinfer config-file-path=./config.txt ! nvdsosd ! nvegltransform bufapi-version=true ! nveglglessink qos=false async=false sync=false

This pipeline additionally saves every annotated frame to disk. In conjunction with nvoverlaysink there are a lot of complaints that the computer is too slow. But if I drop the nvoverlaysink, I get about 30 fps to disk:

gst-launch-1.0 v4l2src device=/dev/video0 ! "image/jpeg,width=640,height=480" ! jpegdec ! videoconvert ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! m.sink_0 nvstreammux name=m batch-size=1 width=640 height=480 ! nvinfer config-file-path=./config.txt ! nvdsosd ! tee name=t t. ! queue ! nvoverlaysink t. ! queue ! nvvideoconvert ! video/x-raw,format=RGBA ! videoconvert ! video/x-raw,format=BGR ! jpegenc ! multifilesink location=/tmp/dump_%03d.jpg

You might notice that up to nvdsosd both pipelines are identical. Now I’m wondering how I could express this in code and switch frame dumping on and off on demand, at runtime, without restarting the engine…
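
Something along these lines is what I have in mind: a minimal sketch (my own, untested) that puts a standard GStreamer valve element in front of the dump branch, so that flipping its drop property toggles dumping at runtime without tearing down the pipeline. The name dumpvalve and the helper set_dumping are just illustrative:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Same pipeline as above, but with a valve (drop=true) ahead of the JPEG branch.
pipeline = Gst.parse_launch(
    'v4l2src device=/dev/video0 ! image/jpeg,width=640,height=480 ! jpegdec '
    '! videoconvert ! nvvideoconvert ! video/x-raw(memory:NVMM) ! m.sink_0 '
    'nvstreammux name=m batch-size=1 width=640 height=480 '
    '! nvinfer config-file-path=./config.txt ! nvdsosd ! tee name=t '
    't. ! queue ! nvoverlaysink '
    't. ! queue ! valve name=dumpvalve drop=true ! nvvideoconvert '
    '! video/x-raw,format=RGBA ! videoconvert ! video/x-raw,format=BGR '
    '! jpegenc ! multifilesink location=/tmp/dump_%03d.jpg')

valve = pipeline.get_by_name('dumpvalve')

def set_dumping(enabled):
    # When drop is True the branch receives no buffers, so nothing is written.
    valve.set_property('drop', not enabled)

pipeline.set_state(Gst.State.PLAYING)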

EDIT: The content of config.txt:

#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

#
# DEPRECATED. FOR USE IN SCRIPTS ONLY
# Settings imported to config.yaml
#

[property]
workspace-size=600
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=./models/primary-detector-nano/resnet10.caffemodel
proto-file=./models/primary-detector-nano/resnet10.prototxt
labelfile-path=./models/primary-detector-nano/labels.txt
model-engine-file=./models/primary-detector-nano/resnet10.caffemodel_b1_gpu0_fp16.engine
force-implicit-batch-dim=1
batch-size=1
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#scaling-filter=0
#scaling-compute-hw=0
cluster-mode=3

[class-attrs-all]
pre-cluster-threshold=0.5
eps=0.2
group-threshold=1
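
(As the comment block at the top notes, values from the config file are overridden by GObject properties set on the element. In Python that looks roughly like this; pgie is just an illustrative name for the nvinfer instance:)

pgie = Gst.ElementFactory.make('nvinfer', 'primary-inference')
pgie.set_property('config-file-path', './config.txt')
# An explicit property takes precedence over the batch-size entry in config.txt:
pgie.set_property('batch-size', 1)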

Looks like there is a way: deepstream_python_apps/deepstream_imagedata-multistream.py at 971626b018501db5128d418da304a4c8d38d412b · NVIDIA-AI-IOT/deepstream_python_apps · GitHub
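
For reference, the heart of that sample is a buffer probe that maps each frame of the batched NVMM buffer into a NumPy array. Condensed, it looks roughly like this (simplified from the sample and untested here; it assumes an RGBA capsfilter upstream of the probe):

import numpy as np
import cv2
import pyds
from gi.repository import Gst

def tiler_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Map this slot of the batch into a NumPy array (RGBA).
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame_copy = np.array(n_frame, copy=True, order='C')
        frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)
        cv2.imwrite('/tmp/stream_%d/frame_%d.jpg'
                    % (frame_meta.pad_index, frame_meta.frame_num), frame_copy)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK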

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

It’s a Jetson Nano Developer Kit, DS 5.1. The image is taken from your pre-canned SD images; I suppose it is JetPack 4.5.1.

OK, the quoted sample code above works fine. With one exception…

The problem: I’m using 3 USB cams simultaneously. The nvdsosd overlay presents all three images in one overlay window (disregard the drawing on the right image; it is just an overlaid fence).

With this sample code here deepstream_python_apps/deepstream_imagedata-multistream.py at 971626b018501db5128d418da304a4c8d38d412b · NVIDIA-AI-IOT/deepstream_python_apps · GitHub I’m correctly getting three “stream_N” subdirectories, and only the third contains an image, because the inference found something on the third camera only. But the image is stretched to about three times its real width. How can I mitigate that? Do I have to scale down the obtained image in width, or is there an extra parameter?

TIA
Regards

OK, that was easy:

frame_image = cv2.resize(frame_image, (640, 480))

Applied after the box annotations. Just the font looks bad after the resize.
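
If the font matters, a possible refinement (my own sketch, untested) would be to resize the raw frame first, scale the box coordinates by the same factors, and draw the annotations afterwards. MUX_W/MUX_H below are assumed muxer dimensions, and save_annotated is a hypothetical helper:

import cv2

MUX_W, MUX_H = 1920, 480  # assumed nvstreammux resolution causing the stretch
OUT_W, OUT_H = 640, 480   # native camera resolution

def save_annotated(frame_image, obj_metas, path):
    # Resize the unannotated frame first, then draw, so the text stays crisp.
    sx, sy = OUT_W / MUX_W, OUT_H / MUX_H
    frame_image = cv2.resize(frame_image, (OUT_W, OUT_H))
    for obj in obj_metas:  # NvDsObjectMeta entries for this frame
        r = obj.rect_params
        left, top = int(r.left * sx), int(r.top * sy)
        right, bottom = int((r.left + r.width) * sx), int((r.top + r.height) * sy)
        cv2.rectangle(frame_image, (left, top), (right, bottom), (0, 0, 255), 2)
        cv2.putText(frame_image, obj.obj_label, (left, max(top - 5, 10)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 255), 1)
    cv2.imwrite(path, frame_image)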

What on earth is your issue?

Nothing anymore. But whoever is able to read can see the initial question and the self-answer. You obviously are not. Not an issue.

