A question when running deepstream-infer-tensor-meta-test

Hi, we are running deepstream-infer-tensor-meta-test; however, when we run make, an error occurred. Details as follows:

g++ -c -o deepstream_infer_tensor_meta_test.o -fPIC -std=c++11 -I …/…/…/includes -I /usr/local/cuda-10.2/include `pkg-config --cflags gstreamer-1.0 opencv` deepstream_infer_tensor_meta_test.cpp
Package opencv was not found in the pkg-config search path.
Perhaps you should add the directory containing `opencv.pc’
to the PKG_CONFIG_PATH environment variable
No package ‘opencv’ found
deepstream_infer_tensor_meta_test.cpp:23:10: fatal error: gst/gst.h: No such file or directory
#include <gst/gst.h>
^~~~~~~~~~~
compilation terminated.
Makefile:69: recipe for target ‘deepstream_infer_tensor_meta_test.o’ failed
make: *** [deepstream_infer_tensor_meta_test.o] Error 1

Hi,
Please provide below information,

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

Also, please install libgstreamer1.0-dev; that will fix the "gst/gst.h: No such file or directory" error.
As for OpenCV, did you install it?
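The PKG_CONFIG_PATH hint in the error message can be checked in isolation with a throwaway .pc file; a minimal sketch (all paths here are illustrative, and it assumes pkg-config itself is installed):

```shell
# Create a dummy opencv.pc in a scratch directory.
mkdir -p /tmp/pc-demo
cat > /tmp/pc-demo/opencv.pc <<'EOF'
Name: opencv
Description: dummy pc file for demonstration
Version: 0.0
Cflags: -I/tmp/pc-demo/include
EOF
# With that directory on PKG_CONFIG_PATH, the lookup now succeeds:
PKG_CONFIG_PATH=/tmp/pc-demo pkg-config --cflags opencv
# prints: -I/tmp/pc-demo/include
```

For the real fix, installing the distribution's OpenCV development package (libopencv-dev on Ubuntu) typically provides a genuine opencv.pc; note that on newer distributions the pkg-config name is opencv4, in which case the Makefile's opencv reference has to be changed to match.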

Hi,

Thank you very much, it works now!

However, after the build succeeds, when I run ./deepstream-infer-tensor-meta-app -t inferserver /xxx/test.mp4, an error occurs. Details as follows:
(error screenshot)

I have changed the config file (dstensor_pgie_config.txt):

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-engine-file=…/…/…/…/samples/models/tlt_pretrained_models/centerface/centerface_1088_1920.onnx_b1_gpu0_fp32.engine
model-file=…/…/…/…/samples/models/tlt_pretrained_models/centerface/centerface.trt
onnx-file=…/…/…/…/samples/models/tlt_pretrained_models/centerface/centerface_1088_1920.onnx
#proto-file=…/…/…/…/samples/models/Primary_Detector/resnet10.prototxt
#int8-calib-file=…/…/…/…/samples/models/Primary_Detector/cal_trt.bin
#labelfile-path=labels_centerface.txt
force-implicit-batch-dim=1
batch-size=1
network-mode=1
process-mode=1
model-color-format=0
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid

Which element failed to be created? Can you upload more of the log?

Hi, Fiona,

• Hardware Platform (Jetson / GPU) GeForce GTX 1050
• DeepStream Version 5.0
• TensorRT Version 7.1
• NVIDIA GPU Driver Version (valid for GPU only) 440.100

I am not sure which element failed to be created.

When I run ./deepstream-infer-tensor-meta-app -t infer <h264_elementary_stream>, it shows me this:

And when I run ./deepstream-infer-tensor-meta-app -t inferserver <h264_elementary_stream>, it shows me this:

Thank you.

The sample deepstream-infer-tensor-meta-app can only handle H.264 elementary stream (ES) files; an MP4 file is not an ES stream. Can you try it with the sample file sample_720p.h264 instead?
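If only an MP4 file is available, the H.264 elementary stream can be extracted from the container without re-encoding, for example with ffmpeg (a sketch; it assumes the MP4's video track is actually H.264, and the file names are placeholders):

```shell
# Copy the H.264 track out of the MP4 container and convert it to an
# Annex B elementary stream, which is what the sample app expects.
ffmpeg -i test.mp4 -c:v copy -bsf:v h264_mp4toannexb -f h264 test.h264
```

The resulting test.h264 can then be passed directly as the <h264_elementary_stream> argument.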

Hi,

I use the original config file and it works now!

However, when I use my custom config file (I use an ONNX model and build an engine with TensorRT), an error occurs: it just shows a black screen. Details as follows:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-engine-file=…/…/…/…/samples/models/tlt_pretrained_models/centerface/centerface_1088_1920.onnx_b1_gpu0_fp32.engine
model-file=…/…/…/…/samples/models/tlt_pretrained_models/centerface/centerface.trt
onnx-file=…/…/…/…/samples/models/tlt_pretrained_models/centerface/centerface_1088_1920.onnx
#proto-file=…/…/…/…/samples/models/Primary_Detector/resnet10.prototxt
#int8-calib-file=…/…/…/…/samples/models/Primary_Detector/cal_trt.bin
#labelfile-path=labels_centerface.txt
force-implicit-batch-dim=1
batch-size=1
network-mode=1
process-mode=1
model-color-format=0
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid

This line may not work, because model-file is usually for Caffe models:
model-file=…/…/…/…/samples/models/tlt_pretrained_models/centerface/centerface.trt

Hi,

I have deleted this line, but it does not work. The error still occurs as before.

@yohoohhh

It seems the line output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid in the configuration is invalid.
The output-blob-names item is usually used to look up the outputs of a Caffe model; an ONNX model does not need this configuration.
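Putting the advice in this thread together, a trimmed config for an ONNX model might look like the sketch below (not verified against the poster's model; the paths and num-detected-classes value are carried over unchanged from the original post and must match the actual model):

```
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
onnx-file=…/…/…/…/samples/models/tlt_pretrained_models/centerface/centerface_1088_1920.onnx
model-engine-file=…/…/…/…/samples/models/tlt_pretrained_models/centerface/centerface_1088_1920.onnx_b1_gpu0_fp32.engine
batch-size=1
network-mode=1
process-mode=1
model-color-format=0
num-detected-classes=4
interval=0
gie-unique-id=1
# model-file and output-blob-names removed: both are Caffe-specific.
# force-implicit-batch-dim removed: ONNX models use an explicit batch dimension.
```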