How did you change the open source code? And could you attach the log with GST_DEBUG=3?
I tried changing the open source code (deepstream_source_bin.cpp) the same way as done in this post: How to change source pixel format from YUYV to MJPG - #7 by jhchris713
However, I got the log below again when I run with GST_DEBUG=3. The camera only starts after about 1 minute, and nearly 3000 frames are lost between each new frame (please see the messages in the log). Is there a buffer problem?
I have the same problem when running all the TAO models with the two USB cameras I have (one is a Logitech Brio, the other a Microsoft Kinect). I am using the latest JetPack version on a Jetson AGX Orin.
Also note that this problem does not occur if I run deepstream-app directly with the USB cameras: deepstream-app -c source1_usb_dec_infer_resnet_int8.txt
Were there any patches to solve this issue?
GST_DEBUG=3 ./deepstream-gaze-app 3 ../../../configs/facial_tao/sample_faciallandmarks_config.txt v4l2:///dev/video0 ./gazenet
Request sink_0 pad from streammux
Now playing: v4l2:///dev/video0
Using winsys: x11
Inside Custom Lib : Setting Prop Key=config-file Value=../../../configs/gaze_tao/sample_gazenet_model_config.txt
0:00:02.127890021 648337 0xaaaabb1bd200 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<second-infer-engine1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/models/faciallandmark/faciallandmarks.etlt_b32_gpu0_int8.engine
INFO: [FullDims Engine Info]: layers num: 4
0 INPUT kFLOAT input_face_images 1x80x80 min: 1x1x80x80 opt: 32x1x80x80 Max: 32x1x80x80
1 OUTPUT kFLOAT conv_keypoints_m80 80x80x80 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT softargmax 80x2 min: 0 opt: 0 Max: 0
3 OUTPUT kFLOAT softargmax:1 80 min: 0 opt: 0 Max: 0
ERROR: [TRT]: 3: Cannot find binding of given name: softargmax,softargmax:1,conv_keypoints_m80
0:00:02.280710142 648337 0xaaaabb1bd200 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<second-infer-engine1> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1876> [UID = 2]: Could not find output layer 'softargmax,softargmax:1,conv_keypoints_m80' in engine
0:00:02.280762337 648337 0xaaaabb1bd200 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<second-infer-engine1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/models/faciallandmark/faciallandmarks.etlt_b32_gpu0_int8.engine
0:00:02.495921030 648337 0xaaaabb1bd200 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<second-infer-engine1> [UID 2]: Load new model:../../../configs/facial_tao/faciallandmark_sgie_config.txt sucessfully
0:00:02.496122580 648337 0xaaaabb1bd200 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:03.951602405 648337 0xaaaabb1bd200 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/models/faciallandmark/facenet.etlt_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x416x736
1 OUTPUT kFLOAT output_bbox/BiasAdd 4x26x46
2 OUTPUT kFLOAT output_cov/Sigmoid 1x26x46
0:00:04.107970860 648337 0xaaaabb1bd200 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/models/faciallandmark/facenet.etlt_b1_gpu0_int8.engine
0:00:04.111334697 648337 0xaaaabb1bd200 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-infer-engine1> [UID 1]: Load new model:../../../configs/facial_tao/config_infer_primary_facenet.txt sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
Running...
Decodebin child added: nvjpegdec0
0:00:04.668504186 648337 0xaaaad4b34a40 FIXME videodecoder gstvideodecoder.c:946:gst_video_decoder_drain_out:<nvjpegdec0> Sub-class should implement drain()
0:00:04.668636771 648337 0xaaaad4b34a40 FIXME videodecoder gstvideodecoder.c:946:gst_video_decoder_drain_out:<nvjpegdec0> Sub-class should implement drain()
0:00:04.688270444 648337 0xaaaad4b34a40 WARN v4l2bufferpool gstv4l2bufferpool.c:809:gst_v4l2_buffer_pool_start:<source:pool:src> Uncertain or not enough buffers, enabling copy threshold
In cb_newpad
###Decodebin pick nvidia decoder plugin.
Deserializing engine from: ./gazeinfer_impl/../../../../models/gazenet/gazenet_facegrid.etlt_b8_gpu0_fp16.engineThe logger passed into createInferRuntime differs from one already provided for an existing builder, runtime, or refitter. Uses of the global logger, returned by nvinfer1::getLogger(), will return the existing value.
Loaded engine size: 9 MiB
Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
Deserialization required 16830 microseconds.
[MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +9, now: CPU 0, GPU 153 (MiB)
Total per-runner device persistent memory is 56832
Total per-runner host persistent memory is 109056
Allocated activation device memory of size 22133248
[MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +21, now: CPU 0, GPU 174 (MiB)
0:01:49.222172741 648337 0xaaaad4b34a40 WARN v4l2src gstv4l2src.c:914:gst_v4l2src_create:<source> Timestamp does not correlate with any clock, ignoring driver timestamps
Frame Number = 0 Face Count = 0
Frame Number = 1 Face Count = 0
Frame Number = 2 Face Count = 0
0:06:46.434095088 648337 0xaaaad4b34a40 WARN v4l2src gstv4l2src.c:978:gst_v4l2src_create:<source> lost frames detected: count = 3013 - ts: 0:06:41.802925254
Frame Number = 3 Face Count = 0
0:08:28.993810703 648337 0xaaaad4b34a40 WARN v4l2src gstv4l2src.c:978:gst_v4l2src_create:<source> lost frames detected: count = 2960 - ts: 0:08:24.362643141
Frame Number = 4 Face Count = 0
0:10:09.927904656 648337 0xaaaad4b34a40 WARN v4l2src gstv4l2src.c:978:gst_v4l2src_create:<source> lost frames detected: count = 2906 - ts: 0:10:05.296740006
Frame Number = 5 Face Count = 0
0:11:49.180885204 648337 0xaaaad4b34a40 WARN v4l2src gstv4l2src.c:978:gst_v4l2src_create:<source> lost frames detected: count = 2875 - ts: 0:11:44.549713226
Frame Number = 6 Face Count = 0
0:13:27.290887861 648337 0xaaaad4b34a40 WARN v4l2src gstv4l2src.c:978:gst_v4l2src_create:<source> lost frames detected: count = 3029 - ts: 0:13:22.659718347
Frame Number = 7 Face Count = 0
0:15:10.057218118 648337 0xaaaad4b34a40 WARN v4l2src gstv4l2src.c:978:gst_v4l2src_create:<source> lost frames detected: count = 2988 - ts: 0:15:05.426038588
Frame Number = 8 Face Count = 0
0:16:51.594950636 648337 0xaaaad4b34a40 WARN v4l2src gstv4l2src.c:978:gst_v4l2src_create:<source> lost frames detected: count = 2940 - ts: 0:16:46.963780322
Frame Number = 9 Face Count = 0
0:18:30.829688863 648337 0xaaaad4b34a40 WARN v4l2src gstv4l2src.c:978:gst_v4l2src_create:<source> lost frames detected: count = 2909 - ts: 0:18:26.198518421
Frame Number = 10 Face Count = 0
It's only for deepstream-app, not for other apps like deepstream-gaze-app. You should add patches to the source code yourself, for example in deepstream_gaze_app.cpp.
2. Could you test the latency in your environment?
https://forums.developer.nvidia.com/t/deepstream-sdk-faq/80236/12
This latency test is for deepstream-app, and in my case I have no problem running deepstream-app. Both of my USB cameras work perfectly with deepstream-app and the latency is very low. See the log below:
************BATCH-NUM = 169**************
Comp name = nvstreammux-src_bin_muxer source_id = 0 pad_index = 0 frame_num = 0 in_system_timestamp = 1679142637834.819092 out_system_timestamp = 1679142637834.877930 component_latency = 0.058838
Comp name = primary_gie in_system_timestamp = 1679142637834.927002 out_system_timestamp = 1679142637838.049072 component latency= 3.122070
Comp name = tiled_display_tiler in_system_timestamp = 1679142637838.095947 out_system_timestamp = 1679142637844.960938 component latency= 6.864990
Comp name = osd_conv in_system_timestamp = 1679142637845.044922 out_system_timestamp = 1679142637847.385010 component latency= 2.340088
Comp name = nvosd0 in_system_timestamp = 1679142637847.434082 out_system_timestamp = 1679142637848.361084 component latency= 0.927002
Source id = 0 Frame_num = 0 Frame latency = 1679142637848.440918 (ms)
I tested all the deepstream_tao_apps with the two USB cameras and it's the same problem as before. How do I run the latency test with the TAO apps? Is there a way to replicate the problem on someone's board (Jetson AGX Orin + Logitech Brio with the latest JetPack)?
I don't know what the root cause of this problem is, so I don't know what to patch.
I tried the below pipeline as an example and it worked for my USB cameras. How do I incorporate a similar pipeline?
GST_DEBUG=3 gst-launch-1.0 v4l2src device=/dev/video0 ! image/jpeg,width=1920,height=1080,framerate=30/1 ! jpegparse ! jpegdec ! autovideosink sync=false
The second item from that link shows how to add latency measurement to a separate app.
2. If you are using other DeepStream sample apps such as deepstream-test3, you need to apply the following patch and set the environment variables.
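(For reference, "set the environment variables" there means exporting the latency measurement variables before launching the app; if the names differ in your DeepStream version, follow the FAQ entry:)
export NVDS_ENABLE_LATENCY_MEASUREMENT=1
export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1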
You can refer to our open source demo: sources\apps\sample_apps\deepstream-test1\deepstream_test1_app.c
You can customize it according to your needs, e.g. change the source to a v4l2 source:
Just for example:
source = gst_element_factory_make ("v4l2src", "file-source");
caps_filter = gst_element_factory_make ("capsfilter", NULL);
caps = gst_caps_new_simple ("image/jpeg",
    "width", G_TYPE_INT, 1920, "height", G_TYPE_INT, 1080,
    "framerate", GST_TYPE_FRACTION, 30, 1, NULL);
g_object_set (G_OBJECT (caps_filter), "caps", caps, NULL);
jpegparser = gst_element_factory_make ("jpegparse", "jpeg-parser");
decoder = gst_element_factory_make ("nvjpegdec", "jpeg-decoder");
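To then put these into the pipeline, something along these lines should work (just a sketch, assuming the pipeline and streammux variables from deepstream_test1_app.c; adjust names to your code):
/* Add the camera front end to the pipeline and link it:
 * v4l2src -> capsfilter (image/jpeg) -> jpegparse -> nvjpegdec */
gst_bin_add_many (GST_BIN (pipeline), source, caps_filter, jpegparser, decoder, NULL);
if (!gst_element_link_many (source, caps_filter, jpegparser, decoder, NULL))
  g_printerr ("Failed to link camera source chain\n");
/* Connect the decoder output to the nvstreammux sink_0 request pad */
GstPad *mux_sink = gst_element_get_request_pad (streammux, "sink_0");
GstPad *dec_src = gst_element_get_static_pad (decoder, "src");
if (gst_pad_link (dec_src, mux_sink) != GST_PAD_LINK_OK)
  g_printerr ("Failed to link decoder to streammux\n");
gst_object_unref (dec_src);
gst_object_unref (mux_sink);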
The latency patch worked with deepstream-test3 as shown below; however, it does not work for deepstream-gaze-app (I applied the same patch but it gave errors). There is obviously latency, per the previous logs.
************BATCH-NUM = 89**************
Comp name = nvv4l2decoder0 in_system_timestamp = 1679314790955.870117 out_system_timestamp = 1679314790989.819092 component latency= 33.948975
Comp name = nvstreammux-stream-muxer source_id = 0 pad_index = 0 frame_num = 89 in_system_timestamp = 1679314790989.862061 out_system_timestamp = 1679314791088.554932 component_latency = 98.692871
Comp name = primary-nvinference-engine in_system_timestamp = 1679314791088.610107 out_system_timestamp = 1679314791094.482910 component latency= 5.872803
Comp name = nvtiler in_system_timestamp = 1679314791212.158936 out_system_timestamp = 1679314791221.554932 component latency= 9.395996
Comp name = nvvideo-converter in_system_timestamp = 1679314791342.839111 out_system_timestamp = 1679314791345.239990 component latency= 2.400879
Comp name = nv-onscreendisplay in_system_timestamp = 1679314791345.354004 out_system_timestamp = 1679314791345.887939 component latency= 0.533936
Source id = 0 Frame_num = 89 Frame latency = 390.077881 (ms)
I did some customization of the caps as suggested in the previous post, and I got the log below. Again the display shows on the screen, but it is extremely slow, with ~3000 frames lost between each display frame update. There is a lot of latency. The log (with GST_DEBUG=3) is slightly different from the previous logs, but I still could not draw any conclusion.
I tested a Logitech 720p camera and it works well, but my application requires a high-resolution USB camera. Is there any plan to create a patch to solve this problem, since I believe other people will run into the same issue?
Decodebin child added: source
Decodebin child added: decodebin0
Running...
Decodebin child added: nvjpegdec0
0:00:04.995885101 16221 0xaaab2aa3ef00 FIXME videodecoder gstvideodecoder.c:946:gst_video_decoder_drain_out:<nvjpegdec0> Sub-class should implement drain()
0:00:04.995998319 16221 0xaaab2aa3ef00 FIXME videodecoder gstvideodecoder.c:946:gst_video_decoder_drain_out:<nvjpegdec0> Sub-class should implement drain()
0:00:05.016596539 16221 0xaaab2aa3ef00 WARN v4l2bufferpool gstv4l2bufferpool.c:809:gst_v4l2_buffer_pool_start:<source:pool:src> Uncertain or not enough buffers, enabling copy threshold
In cb_newpad
###Decodebin pick nvidia decoder plugin.
0:01:41.616084110 16221 0xaaab2aa3ef00 WARN v4l2src gstv4l2src.c:914:gst_v4l2src_create:<source> Timestamp does not correlate with any clock, ignoring driver timestamps
Deserializing engine from: ./gazeinfer_impl/../../../../models/gazenet/gazenet_facegrid.etlt_b8_gpu0_fp16.engineThe logger passed into createInferRuntime differs from one already provided for an existing builder, runtime, or refitter. Uses of the global logger, returned by nvinfer1::getLogger(), will return the existing value.
Loaded engine size: 9 MiB
Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
Keep the original bbox
Deserialization required 464917 microseconds.
[MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +9, now: CPU 0, GPU 153 (MiB)
Total per-runner device persistent memory is 56832
Total per-runner host persistent memory is 109056
Allocated activation device memory of size 22133248
[MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +21, now: CPU 0, GPU 174 (MiB)
Gaze: 258.006195 -143.412491 -74.006882 0.045197 -0.100402
Frame Number = 0 Face Count = 1
Keep the original bbox
Gaze: 271.582672 -125.255020 -79.581390 0.019716 -0.123271
Frame Number = 1 Face Count = 1
Keep the original bbox
Gaze: 281.816040 -115.070984 -172.622025 0.030466 -0.111039
Frame Number = 2 Face Count = 1
0:06:19.170011289 16221 0xaaab2aa3ef00 WARN v4l2src gstv4l2src.c:978:gst_v4l2src_create:<source> lost frames detected: count = 2793 - ts: 0:06:14.156461051
Keep the original bbox
Gaze: 278.370544 -124.234871 -180.674515 0.034349 -0.092809
Frame Number = 3 Face Count = 1
Could you just use a file source, like a 1080p video, to see the frame rate? We can first confirm whether the problem is with your camera.
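For example, something like the pipeline below can be used to check the decode frame rate from a 1080p H.264 file (the file path is just a placeholder; fpsdisplaysink prints the measured fps when run with -v):
gst-launch-1.0 -v filesrc location=/path/to/sample_1080p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! fpsdisplaysink video-sink=fakesink text-overlay=false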
@yuweiw I have revised the deepstream-test1 app per your recommendation; see the code below. I basically changed the source to v4l2src and commented out the unnecessary sections. However, this time I get the error message shown after the code.
/*
* SPDX-FileCopyrightText: Copyright (c) 2018-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <gst/gst.h>
#include <glib.h>
#include <stdio.h>
#include <cuda_runtime_api.h>
#include "gstnvdsmeta.h"
#include "nvds_yml_parser.h"
#define MAX_DISPLAY_LEN 64
#define PGIE_CLASS_ID_VEHICLE 0
#define PGIE_CLASS_ID_PERSON 2
/* The muxer output resolution must be set if the input streams will be of
* different resolution. The muxer will scale all the input frames to this
* resolution. */
#define MUXER_OUTPUT_WIDTH 1920
#define MUXER_OUTPUT_HEIGHT 1080
/* Muxer batch formation timeout, for e.g. 40 millisec. Should ideally be set
* based on the fastest source's framerate. */
#define MUXER_BATCH_TIMEOUT_USEC 40000
/* Check for parsing error. */
#define RETURN_ON_PARSER_ERROR(parse_expr) \
if (NVDS_YAML_PARSER_SUCCESS != parse_expr) { \
g_printerr("Error in parsing configuration file.\n"); \
return -1; \
}
gint frame_number = 0;
gchar pgie_classes_str[4][32] = { "Vehicle", "TwoWheeler", "Person",
"Roadsign"
};
/* osd_sink_pad_buffer_probe will extract metadata received on OSD sink pad
* and update params for drawing rectangle, object information etc. */
static GstPadProbeReturn
osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
gpointer u_data)
{
GstBuffer *buf = (GstBuffer *) info->data;
guint num_rects = 0;
NvDsObjectMeta *obj_meta = NULL;
guint vehicle_count = 0;
guint person_count = 0;
NvDsMetaList * l_frame = NULL;
NvDsMetaList * l_obj = NULL;
NvDsDisplayMeta *display_meta = NULL;
NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
l_frame = l_frame->next) {
NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
int offset = 0;
for (l_obj = frame_meta->obj_meta_list; l_obj != NULL;
l_obj = l_obj->next) {
obj_meta = (NvDsObjectMeta *) (l_obj->data);
if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE) {
vehicle_count++;
num_rects++;
}
if (obj_meta->class_id == PGIE_CLASS_ID_PERSON) {
person_count++;
num_rects++;
}
}
display_meta = nvds_acquire_display_meta_from_pool(batch_meta);
NvOSD_TextParams *txt_params = &display_meta->text_params[0];
display_meta->num_labels = 1;
txt_params->display_text = g_malloc0 (MAX_DISPLAY_LEN);
offset = snprintf(txt_params->display_text, MAX_DISPLAY_LEN, "Person = %d ", person_count);
offset = snprintf(txt_params->display_text + offset , MAX_DISPLAY_LEN, "Vehicle = %d ", vehicle_count);
/* Now set the offsets where the string should appear */
txt_params->x_offset = 10;
txt_params->y_offset = 12;
/* Font , font-color and font-size */
txt_params->font_params.font_name = "Serif";
txt_params->font_params.font_size = 10;
txt_params->font_params.font_color.red = 1.0;
txt_params->font_params.font_color.green = 1.0;
txt_params->font_params.font_color.blue = 1.0;
txt_params->font_params.font_color.alpha = 1.0;
/* Text background color */
txt_params->set_bg_clr = 1;
txt_params->text_bg_clr.red = 0.0;
txt_params->text_bg_clr.green = 0.0;
txt_params->text_bg_clr.blue = 0.0;
txt_params->text_bg_clr.alpha = 1.0;
nvds_add_display_meta_to_frame(frame_meta, display_meta);
}
g_print ("Frame Number = %d Number of objects = %d "
"Vehicle Count = %d Person Count = %d\n",
frame_number, num_rects, vehicle_count, person_count);
frame_number++;
return GST_PAD_PROBE_OK;
}
static gboolean
bus_call (GstBus * bus, GstMessage * msg, gpointer data)
{
GMainLoop *loop = (GMainLoop *) data;
switch (GST_MESSAGE_TYPE (msg)) {
case GST_MESSAGE_EOS:
g_print ("End of stream\n");
g_main_loop_quit (loop);
break;
case GST_MESSAGE_ERROR:{
gchar *debug;
GError *error;
gst_message_parse_error (msg, &error, &debug);
g_printerr ("ERROR from element %s: %s\n",
GST_OBJECT_NAME (msg->src), error->message);
if (debug)
g_printerr ("Error details: %s\n", debug);
g_free (debug);
g_error_free (error);
g_main_loop_quit (loop);
break;
}
default:
break;
}
return TRUE;
}
int
main (int argc, char *argv[])
{
GMainLoop *loop = NULL;
GstElement *pipeline = NULL, *source = NULL, *jpegparser = NULL,
*decoder = NULL, *streammux = NULL, *sink = NULL, *pgie = NULL, *nvvidconv = NULL,
*nvosd = NULL;
GstCaps *caps = NULL;
GstElement *capfilt = NULL;
GstBus *bus = NULL;
guint bus_watch_id;
GstPad *osd_sink_pad = NULL;
gboolean yaml_config = FALSE;
NvDsGieType pgie_type = NVDS_GIE_PLUGIN_INFER;
int current_device = -1;
cudaGetDevice(&current_device);
struct cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, current_device);
/* Check input arguments */
/*
if (argc != 2) {
g_printerr ("Usage: %s <yml file>\n", argv[0]);
g_printerr ("OR: %s <H264 filename>\n", argv[0]);
return -1;
}
*/
/* Standard GStreamer initialization */
gst_init (&argc, &argv);
loop = g_main_loop_new (NULL, FALSE);
/*
// Parse inference plugin type
yaml_config = (g_str_has_suffix (argv[1], ".yml") ||
g_str_has_suffix (argv[1], ".yaml"));
if (yaml_config) {
RETURN_ON_PARSER_ERROR(nvds_parse_gie_type(&pgie_type, argv[1],
"primary-gie"));
}
*/
/* Create gstreamer elements */
/* Create Pipeline element that will form a connection of other elements */
pipeline = gst_pipeline_new ("dstest1-pipeline");
/* Source element for reading from the file */
//source = gst_element_factory_make ("filesrc", "file-source");
source = gst_element_factory_make ("v4l2src", NULL);
capfilt = gst_element_factory_make ("capsfilter", "nvvideo-caps");
jpegparser = gst_element_factory_make ("jpegparse", "jpeg-parser");
decoder = gst_element_factory_make ("nvv4l2decoder", "nvv4l2-decoder");
g_object_set (G_OBJECT (decoder), "mjpeg", 1, NULL);
caps = gst_caps_new_simple ("image/jpeg",
"width", G_TYPE_INT, 1920, "height", G_TYPE_INT,
1080, "framerate", GST_TYPE_FRACTION,
30, 1, NULL);
g_object_set (G_OBJECT (capfilt), "caps", caps, NULL);
g_object_set (G_OBJECT (source), "device", argv[1], NULL);
g_printerr( "Value of argv[1]:%s \n",argv[1]);
/* Since the data format in the input file is elementary h264 stream,
* we need a h264parser */
//h264parser = gst_element_factory_make ("h264parse", "h264-parser");
/* Use nvdec_h264 for hardware accelerated decode on GPU */
//decoder = gst_element_factory_make ("nvv4l2decoder", "nvv4l2-decoder");
/* Create nvstreammux instance to form batches from one or more sources. */
streammux = gst_element_factory_make ("nvstreammux", "stream-muxer");
if (!pipeline || !streammux) {
g_printerr ("One element could not be created. Exiting.\n");
return -1;
}
/* Use nvinfer or nvinferserver to run inferencing on decoder's output,
* behaviour of inferencing is set through config file */
if (pgie_type == NVDS_GIE_PLUGIN_INFER_SERVER) {
pgie = gst_element_factory_make ("nvinferserver", "primary-nvinference-engine");
} else {
pgie = gst_element_factory_make ("nvinfer", "primary-nvinference-engine");
}
/* Use convertor to convert from NV12 to RGBA as required by nvosd */
nvvidconv = gst_element_factory_make ("nvvideoconvert", "nvvideo-converter");
/* Create OSD to draw on the converted RGBA buffer */
nvosd = gst_element_factory_make ("nvdsosd", "nv-onscreendisplay");
/* Finally render the osd output */
if(prop.integrated) {
sink = gst_element_factory_make("nv3dsink", "nv3d-sink");
//sink= gst_element_factory_make ("nv3dsink", "nvvideo-renderer");
} else {
sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");
}
if (!source || !jpegparser || !decoder || !pgie
|| !nvvidconv || !nvosd || !sink || !capfilt) {
g_printerr ("One element could not be created. Exiting.\n");
return -1;
}
g_object_set (G_OBJECT (streammux), "batch-size", 1, NULL);
g_object_set (G_OBJECT (streammux), "width", MUXER_OUTPUT_WIDTH, "height",
MUXER_OUTPUT_HEIGHT,
"batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC, NULL);
// Set all the necessary properties of the nvinfer element,
// the necessary ones are :
g_object_set (G_OBJECT (pgie),
"config-file-path", "dstest1_pgie_config.txt", NULL);
/* we add a message handler */
bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
bus_watch_id = gst_bus_add_watch (bus, bus_call, loop);
gst_object_unref (bus);
/* Set up the pipeline */
/* we add all elements into the pipeline */
gst_bin_add_many (GST_BIN (pipeline),
source, jpegparser, decoder, streammux, pgie,
nvvidconv, nvosd, sink, capfilt, NULL);
g_print ("Added elements to bin\n");
GstPad *sinkpad, *srcpad;
gchar pad_name_sink[16] = "sink_0";
gchar pad_name_src[16] = "src";
sinkpad = gst_element_get_request_pad (streammux, pad_name_sink);
if (!sinkpad) {
g_printerr ("Streammux request sink pad failed. Exiting.\n");
return -1;
}
srcpad = gst_element_get_static_pad (decoder, pad_name_src);
if (!srcpad) {
g_printerr ("Decoder request src pad failed. Exiting.\n");
return -1;
}
if (gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK) {
g_printerr ("Failed to link decoder to stream muxer. Exiting.\n");
return -1;
}
gst_object_unref (sinkpad);
gst_object_unref (srcpad);
if (!gst_element_link_many (source, capfilt, jpegparser, decoder, NULL)) {
g_printerr ("Elements could not be linked: 1. Exiting.\n");
return -1;
}
if (!gst_element_link_many (streammux, pgie,
nvvidconv, nvosd, sink, NULL)) {
g_printerr ("Elements could not be linked: 2. Exiting.\n");
return -1;
}
/* Lets add probe to get informed of the meta data generated, we add probe to
* the sink pad of the osd element, since by that time, the buffer would have
* had got all the metadata. */
osd_sink_pad = gst_element_get_static_pad (nvosd, "sink");
if (!osd_sink_pad)
g_print ("Unable to get sink pad\n");
else
gst_pad_add_probe (osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
osd_sink_pad_buffer_probe, NULL, NULL);
gst_object_unref (osd_sink_pad);
/* Set the pipeline to "playing" state */
g_print ("Using file: %s\n", argv[1]);
gst_element_set_state (pipeline, GST_STATE_PLAYING);
/* Wait till pipeline encounters an error or EOS */
g_print ("Running...\n");
g_main_loop_run (loop);
/* Out of the main loop, clean up nicely */
g_print ("Returned, stopping playback\n");
gst_element_set_state (pipeline, GST_STATE_NULL);
g_print ("Deleting pipeline\n");
gst_object_unref (GST_OBJECT (pipeline));
g_source_remove (bus_watch_id);
g_main_loop_unref (loop);
return 0;
}
Below is the error message I get when I run with /dev/video0:
/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-test1# ./deepstream-test1-app /dev/video0
Value of argv[1]:/dev/video0
Added elements to bin
Using file: /dev/video0
Opening in BLOCKING MODE
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:03.899201201 125687 0xaaaaded19840 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.2/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:04.071328337 125687 0xaaaaded19840 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:04.078154997 125687 0xaaaaded19840 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Running...
NvMMLiteOpen : Block : BlockType = 277
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 277
ERROR from element v4l2src0: Internal data stream error.
Error details: gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:dstest1-pipeline/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
Returned, stopping playback
nvstreammux: Successfully handled EOS for source_id=0
Deleting pipeline
You need to set the caps parameters based on your own device: width, height, format, fps, etc.
caps = gst_caps_new_simple ("image/jpeg",
"width", G_TYPE_INT, XXX, "height", G_TYPE_INT,
XXX, "framerate", GST_TYPE_FRACTION,
XXX, XXX, NULL);
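You can list the formats, resolutions, and frame rates your camera actually supports with the command below (assuming the v4l-utils package is installed), and then fill in the caps accordingly:
v4l2-ctl --list-formats-ext -d /dev/video0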
@yuweiw Yes, I did set them to my device's parameters. Is there anything else I am missing in the code?
By the way, you mentioned in post 25 that I need to use:
decoder = gst_element_factory_make ("nvjpegdec", "jpeg-decoder");
However, instead I used:
decoder = gst_element_factory_make ("nvv4l2decoder", "nvv4l2-decoder");
I tried both, but it did not make a difference.
From the attached log, the error is from the v4l2 plugin, so the most likely issue is the configuration of your camera.
1. You can refer to the link below to set up the v4l2 source.
https://github.com/NVIDIA-AI-IOT/redaction_with_deepstream/blob/8c51d49b084eb7bb04c38ffe843c95470916ce66/deepstream_redaction_app.c#L292
2. Please get the pipeline graphs of the deepstream-app demo and your own app with the following link. Then you can compare them yourself.
https://forums.developer.nvidia.com/t/deepstream-sdk-faq/80236/10
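For your own app, roughly speaking (just a sketch; the graph name is only an example), you set GST_DEBUG_DUMP_DOT_DIR before running the app, dump the graph from the code once the pipeline is playing, and render the .dot file with Graphviz:
/* run as: GST_DEBUG_DUMP_DOT_DIR=/tmp ./deepstream-test1-app /dev/video0 */
GST_DEBUG_BIN_TO_DOT_FILE (GST_BIN (pipeline), GST_DEBUG_GRAPH_SHOW_ALL, "my-pipeline");
/* then: dot -Tpng /tmp/my-pipeline.dot -o my-pipeline.png */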
I successfully incorporated the lines below into deepstream_image_decode_app and the cameras work now.
source = gst_element_factory_make ("v4l2src", NULL);
decoder = gst_element_factory_make ("nvv4l2decoder", "nvv4l2-decoder");
g_object_set (G_OBJECT (decoder), "mjpeg", 1, NULL);
Now my problem is how to revise deepstream_faciallandmark_app.cpp to work with my cameras in a similar manner. Should I add these under the "main" function or the "create_source_bin" function? The code structure is different there, so pipeline creation is a little more complex in deepstream_faciallandmark_app.cpp. Any direction is appreciated. Also, there is an additional "uridecodebin" element added in the "create_source_bin" function, which is different from the other pipelines.
OK, glad to hear that. You can refer to our open source code to write a create_camera_source_bin:
sources\apps\apps-common\src\deepstream_source_bin.c(create_camera_source_bin)
Or you can rewrite it directly in the main function without using the create_source_bin function. You can use the front part of the pipeline you are currently using (before streammux), then link the decoder to the streammux in deepstream_faciallandmark_app.cpp.
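Just as a rough example (element names and caps values are only illustrative; adjust them to your camera), the front part could look like:
GstElement *cam_src = gst_element_factory_make ("v4l2src", "usb-cam-source");
GstElement *cam_caps = gst_element_factory_make ("capsfilter", "usb-cam-caps");
GstElement *parser = gst_element_factory_make ("jpegparse", "jpeg-parser");
GstElement *dec = gst_element_factory_make ("nvv4l2decoder", "mjpeg-decoder");
g_object_set (G_OBJECT (cam_src), "device", "/dev/video0", NULL);
g_object_set (G_OBJECT (dec), "mjpeg", 1, NULL);
g_object_set (G_OBJECT (cam_caps), "caps",
    gst_caps_new_simple ("image/jpeg",
        "width", G_TYPE_INT, 1920, "height", G_TYPE_INT, 1080,
        "framerate", GST_TYPE_FRACTION, 30, 1, NULL), NULL);
gst_bin_add_many (GST_BIN (pipeline), cam_src, cam_caps, parser, dec, NULL);
gst_element_link_many (cam_src, cam_caps, parser, dec, NULL);
/* then request the streammux "sink_0" pad and gst_pad_link() the decoder's
 * "src" pad to it, the same way deepstream_test1_app.c does above */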
Also, if you have new problems, please open a new topic. We try to keep each topic focused on a single issue to provide a better reference for others. Thanks.
Thank you for the directions. I think I figured out how to revise all the apps after looking into the reference apps.