I have built a shared object from the .cpp file mentioned above and included it via the config file shown below:
infer_config {
  unique_id: 1
  gpu_ids: 0
  max_batch_size: 1
  backend {
    inputs [
      {
        name: "INPUT0"
        dims: [3, 1080, 1920]
      },
      {
        name: "SOURCE_ID"
        dims: [1]
      }
    ]
    trt_is {
      model_name: "centerface"
      version: -1
      model_repo {
        root: "./centerface"
        log_level: 1
        tf_gpu_memory_fraction: 0.2
        tf_disable_soft_placement: 0
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    maintain_aspect_ratio: 0
    normalize {
      scale_factor: 1.0
      channel_offsets: [0, 0, 0]
    }
  }
  postprocess {
    labelfile_path: "./centerface/centerface/centerface_labels.txt"
    other {}
  }
  custom_lib {
    path: "./centerface/libnvdstriton_custom_impl_ensemble.so"
  }
  extra {
    copy_input_to_host_buffers: false
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
output_control {
  output_tensor_meta: true
}
As shown above, I have specified a second tensor, SOURCE_ID, to be passed to the model as an extra input.
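For context, here is a rough sketch of the kind of extra-input hook I understand a custom_lib needs to expose when the model takes more than one input tensor. This is a hypothetical illustration based on my reading of nvdsinferserver/infer_custom_process.h, not my actual .cpp; the class, the method signatures, and the factory symbol name are my own placeholders and may not match the header or what nvinferserver actually looks up:

// Hypothetical sketch, not my real implementation. Assumes the IInferCustomProcessor
// interface declared in nvdsinferserver/infer_custom_process.h (DeepStream
// sources/includes); all signatures should be checked against that header.
#include <vector>
#include "infer_custom_process.h"   // assumed include path under nvdsinferserver/

using namespace nvdsinferserver;

class SourceIdFeeder : public IInferCustomProcessor {
public:
    // Advertise that this processor reads/writes host (CPU) memory.
    void supportInputMemType(InferMemType& type) override { type = InferMemType::kCpu; }

    // Called before inference: the preprocessed image tensor(s) arrive in
    // primaryInputs, the remaining model inputs (here: SOURCE_ID) in extraInputs.
    NvDsInferStatus extraInputProcess(
        const std::vector<IBatchBuffer*>& primaryInputs,
        std::vector<IBatchBuffer*>& extraInputs,
        const IOptions* options) override
    {
        // TODO: write the per-frame source id into the SOURCE_ID buffer here
        // (buffer accessors omitted because I am not sure of the exact API).
        (void)primaryInputs; (void)extraInputs; (void)options;
        return NVDSINFER_SUCCESS;
    }

    // No custom post-inference handling in this sketch.
    NvDsInferStatus inferenceDone(const IBatchArray*, const IOptions*) override {
        return NVDSINFER_SUCCESS;
    }
    void notifyError(NvDsInferStatus) override {}
};

// Factory entry point exported from the shared object; the name is a placeholder.
extern "C" IInferCustomProcessor* CreateInferServerCustomProcess(
    const char* /*config*/, uint32_t /*configLen*/)
{
    return new SourceIdFeeder();
}

If the hook exported by my library does not look like this, that may be exactly what the warning in the logs below is complaining about, which is part of what I would like to confirm.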
The command used to run the pipeline is:
gst-launch-1.0 --gst-debug=rtpjitterbuffer:6,nvstreammux:6,nvstreamdemux:6 videotestsrc ! nvvideoconvert ! "video/x-raw(memory:NVMM), width=640, height=480" ! m.sink_0 nvstreammux name=m width=640 height=480 batch_size=1 ! nvinferserver config-file-path=config.txt ! nvstreamdemux name=d d.src_0 ! nvvideoconvert ! autovideosink sync=0
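For reference, the same pipeline can also be built programmatically so that the failure surfaces through the GStreamer bus together with its debug string. A minimal C++ sketch using the standard GStreamer API (the pipeline string simply mirrors the command above) would look like this:

// Minimal sketch: run the same pipeline via gst_parse_launch and print any bus error.
#include <gst/gst.h>

int main(int argc, char* argv[]) {
    gst_init(&argc, &argv);

    GError* err = nullptr;
    GstElement* pipeline = gst_parse_launch(
        "videotestsrc ! nvvideoconvert ! "
        "video/x-raw(memory:NVMM),width=640,height=480 ! m.sink_0 "
        "nvstreammux name=m width=640 height=480 batch_size=1 ! "
        "nvinferserver config-file-path=config.txt ! "
        "nvstreamdemux name=d d.src_0 ! nvvideoconvert ! autovideosink sync=0",
        &err);
    if (!pipeline) {
        g_printerr("Parse error: %s\n", err ? err->message : "unknown");
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Block until an error or EOS, then print the full error and debug strings.
    GstBus* bus = gst_element_get_bus(pipeline);
    GstMessage* msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
    if (msg && GST_MESSAGE_TYPE(msg) == GST_MESSAGE_ERROR) {
        GError* e = nullptr;
        gchar* dbg = nullptr;
        gst_message_parse_error(msg, &e, &dbg);
        g_printerr("Error from %s: %s\nDebug: %s\n",
                   GST_OBJECT_NAME(msg->src), e->message, dbg ? dbg : "none");
        g_error_free(e);
        g_free(dbg);
    }
    if (msg) gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}

This would be compiled against gstreamer-1.0 via pkg-config.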
These are the error logs I get when running the pipeline:
0:00:00.054489923 55981 0x5aa6f4f16210 DEBUG nvstreammux gstnvstreammux.cpp:1539:gst_nvstreammux_request_new_pad: Requesting new sink pad
0:00:00.054765830 55981 0x5aa6f4f16210 DEBUG nvstreamdemux gstnvstreamdemux.cpp:129:gst_nvstreamdemux_request_new_pad: Requesting new src pad
0:00:00.054779426 55981 0x5aa6f4f16210 DEBUG nvstreamdemux gstnvstreamdemux.cpp:153:gst_nvstreamdemux_request_new_pad: Requesting new src pad
Setting pipeline to PAUSED …
WARNING: infer_proto_utils.cpp:201 backend.trt_is is deprecated. updated it to backend.triton
W0902 04:16:25.504173 55981 metrics.cc:512] Unable to get power limit for GPU 0. Status:Success, value:0.000000
W0902 04:16:25.504253 55981 metrics.cc:530] Unable to get power usage for GPU 0. Status:Success, value:0.000000
W0902 04:16:25.504269 55981 metrics.cc:554] Unable to get energy consumption for GPU 0. Status:Success, value:0
Model initialized
Error connecting: [Errno 111] Connection refused
INFO: infer_trtis_backend.cpp:218 TrtISBackend id:1 initialized model: centerface
Pipeline is PREROLLING …
0:00:02.191259811 55981 0x5aa6f4f08240 DEBUG nvstreammux gstnvstreammux.cpp:1256:gst_nvstreammux_sink_event: parse video info from caps video/x-raw(memory:NVMM), width=(int)640, height=(int)480, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)NV12, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-cuda-device, gpu-id=(int)0
0:00:02.191289412 55981 0x5aa6f4f08240 DEBUG nvstreammux gstnvstreammux.cpp:1295:gst_nvstreammux_sink_event: peer caps video/x-raw(memory:NVMM), format=(string){ NV12, RGBA }, width=(int)[ 1, 2147483647 ], height=(int)[ 1, 2147483647 ], framerate=(fraction)[ 0/1, 2147483647/1 ]
0:00:02.191611653 55981 0x5aa6f4f08240 DEBUG nvstreamdemux gstnvstreamdemux.cpp:443:set_src_pad_caps: caps before = video/x-raw(memory:NVMM), width=(int)640, height=(int)480, framerate=(fraction)30/1, multiview-mode=(string)mono, format=(string)NV12, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-cuda-device, gpu-id=(int)0, batch-size=(int)1, num-surfaces-per-frame=(int)1; video/x-raw, width=(int)640, height=(int)480, framerate=(fraction)30/1, multiview-mode=(string)mono, format=(string)NV12, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-cuda-device, gpu-id=(int)0, batch-size=(int)1, num-surfaces-per-frame=(int)1
0:00:02.191633984 55981 0x5aa6f4f08240 DEBUG nvstreamdemux gstnvstreamdemux.cpp:453:set_src_pad_caps: caps after = video/x-raw(memory:NVMM), width=(int)640, height=(int)480, framerate=(fraction)30/1, multiview-mode=(string)mono, format=(string)NV12, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-cuda-device, gpu-id=(int)0, batch-size=(int)1, num-surfaces-per-frame=(int)1; video/x-raw, width=(int)640, height=(int)480, framerate=(fraction)30/1, multiview-mode=(string)mono, format=(string)NV12, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-cuda-device, gpu-id=(int)0, batch-size=(int)1, num-surfaces-per-frame=(int)1
0:00:02.195234493 55981 0x5aa6f4f08240 INFO nvstreammux gstnvstreammux.cpp:1467:gst_nvstreammux_sink_event: mux got segment from src 0 time segment start=0:00:00.000000000, offset=0:00:00.000000000, stop=99:99:99.999999999, rate=1.000000, applied_rate=1.000000, flags=0x00, time=0:00:00.000000000, base=0:00:00.000000000, position 0:00:00.000000000, duration 99:99:99.999999999
W0902 04:16:26.504844 55981 metrics.cc:512] Unable to get power limit for GPU 0. Status:Success, value:0.000000
W0902 04:16:26.504897 55981 metrics.cc:530] Unable to get power usage for GPU 0. Status:Success, value:0.000000
W0902 04:16:26.504910 55981 metrics.cc:554] Unable to get energy consumption for GPU 0. Status:Success, value:0
0:00:02.764540418 55981 0x5aa6f4f08240 DEBUG nvstreammux gstnvstreammux.cpp:521:gst_nvstreammux_chain: Got buffer 0x5aa6f54ccea0 from source 0 pts = 0:00:00.000000000
0:00:02.764706936 55981 0x5aa6f4f08580 DEBUG nvstreammux gstnvstreammux.cpp:2432:gst_nvstreammux_src_collect_buffers: Pad added event sent 0
0:00:02.764895466 55981 0x5aa6f4f08580 DEBUG nvstreammux gstnvstreammux.cpp:2933:gst_nvstreammux_src_push_loop: Pushing buffer 0x5aa6f54cc6c0, batch size 1, PTS 0:00:00.000000000
0:00:02.764909791 55981 0x5aa6f4f08580 DEBUG nvstreammux gstnvstreammux.cpp:2954:gst_nvstreammux_src_push_loop: STREAMMUX OUT BUFFER attached timestamp 0:00:00.000000000
0:00:02.765532390 55981 0x5aa6f4f08240 DEBUG nvstreammux gstnvstreammux.cpp:521:gst_nvstreammux_chain: Got buffer 0x718a8c014000 from source 0 pts = 0:00:00.033333333
0:00:02.765601157 55981 0x5aa6f4f08580 DEBUG nvstreammux gstnvstreammux.cpp:2933:gst_nvstreammux_src_push_loop: Pushing buffer 0x5aa6f54cc7e0, batch size 1, PTS 0:00:00.033333333
0:00:02.765621790 55981 0x5aa6f4f08580 DEBUG nvstreammux gstnvstreammux.cpp:2954:gst_nvstreammux_src_push_loop: STREAMMUX OUT BUFFER attached timestamp 0:00:00.033333333
0:00:02.767095384 55981 0x718a800022a0 WARN nvinferserver gstnvinferserver.cpp:412:gst_nvinfer_server_logger: nvinferserver[UID 1]: Warning from initFixedExtraInputLayers() <infer_cuda_context.cpp:361> [UID = 1]: More than one input layers but custom initialization function not implemented
ERROR: infer_cuda_context.cpp:327 Init fixed extra input tensors failed., nvinfer error:NVDSINFER_CUSTOM_LIB_FAILED
ERROR: infer_base_context.cpp:289 pre-inference on input tensors failed., nvinfer error:NVDSINFER_CUSTOM_LIB_FAILED
0:00:02.767257133 55981 0x718a800022a0 WARN nvinferserver gstnvinferserver.cpp:412:gst_nvinfer_server_logger: nvinferserver[UID 1]: Warning from initFixedExtraInputLayers() <infer_cuda_context.cpp:361> [UID = 1]: More than one input layers but custom initialization function not implemented
ERROR: infer_cuda_context.cpp:327 Init fixed extra input tensors failed., nvinfer error:NVDSINFER_CUSTOM_LIB_FAILED
ERROR: infer_base_context.cpp:289 pre-inference on input tensors failed., nvinfer error:NVDSINFER_CUSTOM_LIB_FAILED
Pipeline is PREROLLED …
Setting pipeline to PLAYING …
0:00:02.769432404 55981 0x5aa6f4f16210 DEBUG nvstreammux gstnvstreammux.cpp:1191:gst_nvstreammux_src_event: latency 0:00:00.000000000
0:00:02.769627474 55981 0x5aa6f4f08240 DEBUG nvstreammux gstnvstreammux.cpp:521:gst_nvstreammux_chain: Got buffer 0x718a8c014120 from source 0 pts = 0:00:00.066666666
0:00:02.769686536 55981 0x5aa6f4f08580 DEBUG nvstreammux gstnvstreammux.cpp:2933:gst_nvstreammux_src_push_loop: Pushing buffer 0x5aa6f54cc900, batch size 1, PTS 0:00:00.066666666
0:00:02.769713112 55981 0x5aa6f4f08580 DEBUG nvstreammux gstnvstreammux.cpp:2954:gst_nvstreammux_src_push_loop: STREAMMUX OUT BUFFER attached timestamp 0:00:00.066666666
New clock: GstSystemClock
0:00:02.770178096 55981 0x718ab00c7f50 WARN nvinferserver gstnvinferserver.cpp:581:gst_nvinfer_server_push_buffer: error: inference failed with unique-id:1
ERROR: from element /GstPipeline:pipeline0/GstNvInferServer:nvinferserver0: inference failed with unique-id:1
Additional debug info:
gstnvinferserver.cpp(581): gst_nvinfer_server_push_buffer (): /GstPipeline:pipeline0/GstNvInferServer:nvinferserver0
Execution ended after 0:00:00.000551316
Setting pipeline to NULL …
0:00:02.772811732 55981 0x718a800022a0 WARN nvinferserver gstnvinferserver.cpp:412:gst_nvinfer_server_logger: nvinferserver[UID 1]: Warning from initFixedExtraInputLayers() <infer_cuda_context.cpp:361> [UID = 1]: More than one input layers but custom initialization function not implemented
ERROR: infer_cuda_context.cpp:327 Init fixed extra input tensors failed., nvinfer error:NVDSINFER_CUSTOM_LIB_FAILED
0:00:02.772834926 55981 0x5aa6f4f08240 DEBUG nvstreammux gstnvstreammux.cpp:521:gst_nvstreammux_chain: Got buffer 0x718a8c014240 from source 0 pts = 0:00:00.100000000
ERROR: infer_base_context.cpp:289 pre-inference on input tensors failed., nvinfer error:NVDSINFER_CUSTOM_LIB_FAILED
0:00:02.772940873 55981 0x5aa6f4f08580 DEBUG nvstreammux gstnvstreammux.cpp:2933:gst_nvstreammux_src_push_loop: Pushing buffer 0x5aa6f54cca20, batch size 1, PTS 0:00:00.099999999
0:00:02.772965876 55981 0x5aa6f4f08580 DEBUG nvstreammux gstnvstreammux.cpp:2954:gst_nvstreammux_src_push_loop: STREAMMUX OUT BUFFER attached timestamp 0:00:00.099999999
0:00:02.775537661 55981 0x5aa6f4f08240 DEBUG nvstreammux gstnvstreammux.cpp:521:gst_nvstreammux_chain: Got buffer 0x5aa6f54ccea0 from source 0 pts = 0:00:00.133333333
0:00:02.775606317 55981 0x5aa6f4f08580 DEBUG nvstreammux gstnvstreammux.cpp:2933:gst_nvstreammux_src_push_loop: Pushing buffer 0x5aa6f54cc6c0, batch size 1, PTS 0:00:00.133333332
0:00:02.775631240 55981 0x5aa6f4f08580 DEBUG nvstreammux gstnvstreammux.cpp:2954:gst_nvstreammux_src_push_loop: STREAMMUX OUT BUFFER attached timestamp 0:00:00.133333332
Error connecting: [Errno 111] Connection refused
W0902 04:16:27.505894 55981 metrics.cc:512] Unable to get power limit for GPU 0. Status:Success, value:0.000000
W0902 04:16:27.505968 55981 metrics.cc:530] Unable to get power usage for GPU 0. Status:Success, value:0.000000
W0902 04:16:27.506001 55981 metrics.cc:554] Unable to get energy consumption for GPU 0. Status:Success, value:0
Cleaning up…
Error connecting: [Errno 111] Connection refused
Am I on the right track toward a solution? Could you please explain what these error logs indicate and how I can use them to fix the issue?