Inference of Faster RCNN on Deepstream

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version nvcr.io/nvidia/deepstream-l4t:5.0-20.07-samples
• JetPack Version (valid for Jetson only) Jetpack 4.4 [L4T 32.4.3]
• TensorRT Version TensorRT: 7.1.3.0
• Issue Type( questions, new requirements, bugs) Question

Hello there, I can see config_infer_primary_fasterRCNN.txt and deepstream_app_config_fasterRCNN.txt config files. Now, I have two questions:

  1. How can I run inference for a few images which are saved in a folder?
  2. How can I save their detection outputs?

Thanks.

Hi,

1. You can use the -i flag with deepstream-app:

$ deepstream-app -c deepstream_app_config_fasterRCNN.txt -i /home/nvidia/my_image.jpg

2. Please use a sink group in the config file, for example:

[sink3]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out1.mp4

Thanks.

Thanks for your answer, @AastaLLL. Maybe my questions were not clear enough. I was asking:

  1. How can I process a set of images, not a single image?
  2. How can I save only detection output (only bounding box coordinates and scores), not images with detections?
  3. And most importantly, where is the documentation for these configurations? It seems I can't find it.

Looking forward to your reply. Thanks.

Found the documentation. It's here.

I can save detection results in .txt files with the following setting:

[application]
gie-kitti-output-dir=./output/

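To keep only the bounding-box coordinates and scores, note that these dumps use the KITTI label format, one line per detected object. A minimal sketch, assuming a field layout of class name, three placeholders, the four box corners (left, top, right, bottom), seven more placeholders, and the confidence last (the sample line and values below are hypothetical):

```shell
# Hypothetical KITTI-format line as dumped via gie-kitti-output-dir
cat > 000001.txt <<'EOF'
Car 0.0 0 0.0 100.0 120.0 300.0 260.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.87
EOF

# Keep only the class, the four bbox corners, and the confidence score
awk '{print $1, $5, $6, $7, $8, $NF}' 000001.txt
# -> Car 100.0 120.0 300.0 260.0 0.87
```

Check one of your own dump files first, as the exact number of fields (and whether confidence is present) may differ by DeepStream version.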
But I can't feed .jpg images in for inference. I tried the -i flag as below:

deepstream-app -c deepstream_app_config_fasterRCNN.txt -i ../../samples/test_jpg/%d.jpg

and with a source group as below:

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
num-sources=1
uri=file://../pocket-datasets-coco/test_jpg/%d.jpg
#uri=file://../../samples/streams/sample_720p.mp4
gpu-id=0
cudadec-memtype=0

Both of them failed. Can you please help?

Hi,

Sorry for the unclear statement.

For .jpg images, a JPEG decoder is required.
Please check our /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-image-decode-test sample.
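A rough build-and-run sketch for that sample on JetPack 4.4 (CUDA_VER and the command-line usage are assumptions here; please check the sample's README, as the exact flags may differ):

```shell
# Build the image-decode sample (JetPack 4.4 ships CUDA 10.2)
cd /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-image-decode-test
sudo CUDA_VER=10.2 make

# Run it on one or more JPEG files (usage assumed from the sample README)
./deepstream-image-decode-app /home/nvidia/test_jpg/1.jpg /home/nvidia/test_jpg/2.jpg
```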

Or you can use gst-launch-1.0 like this:

gst-launch-1.0 multifilesrc location='/home/nvidia/image/sample_%02d.jpg' caps="image/jpeg,framerate=1/1" ! \
  jpegparse ! \
  nvv4l2decoder ! \
  nvvideoconvert ! \
  'video/x-raw(memory:NVMM),format=(string)RGBA' ! \
  mux.sink_0 nvstreammux live-source=0 name=mux batch-size=1 width=224 height=224 ! \
  nvinfer config-file-path=config_infer_primary.txt batch-size=1 process-mode=1 ! \
  nvstreamdemux name=demux demux.src_0 ! \
  nvvideoconvert ! \
  'video/x-raw(memory:NVMM),format=(string)NV12' ! \
  nvvideoconvert ! \
  nvdsosd ! \
  nvvideoconvert ! \
  nvv4l2h265enc ! \
  h265parse ! \
  qtmux ! \
  filesink location=out.mp4
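Note that multifilesrc only picks up files whose names match the index pattern (sample_%02d.jpg above) with consecutive numbers, so a folder of arbitrarily named .jpg files may need renumbering first. A minimal sketch (the images/ folder and file names are placeholders):

```shell
# Stand-in folder with arbitrarily named images
mkdir -p images
touch images/cat.jpg images/dog.jpg images/zebra.jpg

# Renumber to the consecutive sample_%02d.jpg pattern multifilesrc expects
i=0
for f in images/*.jpg; do
  mv "$f" "$(printf 'images/sample_%02d.jpg' "$i")"
  i=$((i + 1))
done

ls images
```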

Thanks.