Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
Jetson AGX Orin
• DeepStream Version
6.2
• JetPack Version (valid for Jetson only)
5.1
• TensorRT Version
8.5
• NVIDIA GPU Driver Version (valid for GPU only)
CUDA 11.4
• Issue Type( questions, new requirements, bugs)
questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
This problem occurs during event detection. My image-saving and video-saving functions are triggered by events: when an event occurs, I need to save a snapshot image and a video clip. The event detection runs in a buffer probe attached to the detector's src pad. Below are my pipeline configuration and the code I use for saving images and videos; a short sketch of how the probe is attached follows the code.
application:
enable-perf-measurement: 1
perf-measurement-interval-sec: 5
##gie-kitti-output-dir=streamscl
tiled-display:
enable: 0
rows: 2
columns: 2
width: 1920
height: 1080
gpu-id: 0
nvbuf-memory-type: 0
source:
#csv-file-path: sources_4.csv
csv-file-path: sources_4_different_source_rtsp.csv
sink0:
enable: 0
#Type - 1=FakeSink 2=EglSink 3=File 7=nv3dsink (Jetson only)
type: 2
source-id: 0
gpu-id: 0
nvbuf-memory-type: 0
osd:
enable: 0
gpu-id: 0
border-width: 1
text-size: 15
#value changed
text-color: 1;1;1;1
text-bg-color: 0.3;0.3;0.3;1
font: Serif
show-clock: 0
clock-x-offset: 800
clock-y-offset: 820
clock-text-size: 12
clock-color: 1;0;0;0
nvbuf-memory-type: 0
streammux:
gpu-id: 0
##Boolean property to inform muxer that sources are live
live-source: 1
buffer-pool-size: 5
batch-size: 16
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
#batched-push-timeout: 400000
batched-push-timeout: 120000
## Set muxer output width and height
width: 1920
height: 1080
#enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding: 0
nvbuf-memory-type: 0
primary-gie0:
enable: 1
#(0): nvinfer; (1): nvinferserver
plugin-type: 0
gpu-id: 0
#input-tensor-meta: 1
batch-size: 2
interval: 5
gie-unique-id: 1
nvbuf-memory-type: 0
config-file: /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream_parallel_inference_app-master/tritonclient/sample/configs/models_configs/config_infer_primary_renjifei_yoloV5.txt
branch0:
## pgie's id
pgie-id: 1
src-ids: 0;1;2;3;4;5;6;7;8;9;10;11;12;13;14;15
tracker0:
enable: 1
cfg-file-path: tracker0.yml
primary-gie1:
enable: 1
#(0): nvinfer; (1): nvinferserver
plugin-type: 0
gpu-id: 0
#input-tensor-meta: 1
batch-size: 1
interval: 10
gie-unique-id: 2
nvbuf-memory-type: 0
config-file: /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream_parallel_inference_app-master/tritonclient/sample/configs/models_configs/config_infer_primary_road_covered_resnet18.txt
branch1:
pgie-id: 2
## select sources by source id
src-ids: 0;1;2;3;4;5;6;7;8;9;10;11;12;13;14;15
tracker1:
enable: 0
cfg-file-path: tracker0.yml
primary-gie2:
enable: 1
#(0): nvinfer; (1): nvinferserver
plugin-type: 0
gpu-id: 0
#input-tensor-meta: 1
batch-size: 1
interval: 10
gie-unique-id: 3
nvbuf-memory-type: 0
config-file: /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream_parallel_inference_app-master/tritonclient/sample/configs/models_configs/config_infer_primary_visibility_resnet18.txt
branch2:
## pgie's id
pgie-id: 3
## select sources by source id
src-ids: 0;1;2;3;4;5;6;7;8;9;10;11;12;13;14;15
tracker2:
enable: 0
cfg-file-path: tracker0.yml
primary-gie3:
enable: 1
#(0): nvinfer; (1): nvinferserver
plugin-type: 0
gpu-id: 0
#input-tensor-meta: 1
batch-size: 1
interval: 10
gie-unique-id: 4
nvbuf-memory-type: 0
config-file: /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream_parallel_inference_app-master/tritonclient/sample/configs/models_configs/config_infer_primary_fire_yoloV5.txt
branch3:
## pgie's id
pgie-id: 4
## select sources by source id
src-ids: 0;1;2;3;4;5;6;7;8;9;10;11;12;13;14;15
tracker3:
enable: 0
cfg-file-path: tracker0.yml
meta-mux:
enable: 1
config-file: …/metamux/config_metamux0.txt
tests:
file-loop: 0
guint gpu_id = 0;
NvDsObjEncCtxHandle obj_ctx_handle = nvds_obj_enc_create_context(gpu_id);
if (!obj_ctx_handle) {
g_print ("Unable to create context\n");
// return -1;
}
GstMapInfo inmap = GST_MAP_INFO_INIT;
if (!gst_buffer_map (buf, &inmap, GST_MAP_READ)) {
GST_ERROR ("input buffer mapinfo failed");
// return GST_PAD_PROBE_DROP;
}
// Get the NvBufSurface for the whole batch from the mapped buffer
NvBufSurface *ip_surf = (NvBufSurface *) inmap.data;
gst_buffer_unmap (buf, &inmap);
NvDsObjEncUsrArgs frameData = { 0 };
/* Preset */
frameData.isFrame = TRUE;
/* To be set by user */
frameData.saveImg = TRUE; //save_img
frameData.attachUsrMeta = FALSE;
/* Bound the copy by the size of fileNameImg to avoid overflowing it */
g_snprintf (frameData.fileNameImg, sizeof (frameData.fileNameImg), "%s", image_path.c_str ());
/* Quality */
frameData.quality = 80;
/* Main Function Call */
nvds_obj_enc_process (obj_ctx_handle, &frameData, ip_surf, NULL, frame_meta);
nvds_obj_enc_finish (obj_ctx_handle);
nvds_obj_enc_destroy_context (obj_ctx_handle);
// Save videos with smart record
NvDsSrcBin *src_bin = &appCtx->pipeline.multi_src_bin.sub_bins[source_id];
// Skip if the smart-record context is missing or the source is reconfiguring
if (!src_bin->recordCtx || src_bin->reconfiguring) return;
NvDsSRContext *ctx = (NvDsSRContext *) src_bin->recordCtx;
NvDsSRSessionId sessId = 0;
guint startTime = 7;   // example value: seconds of cached video before the event
guint duration = 8;    // example value: total clip length in seconds
if (!ctx->recordOn) {
  NvDsSRStart (ctx, &sessId, startTime, duration, NULL);
}
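For reference, the probe that runs this event logic is attached to the detector's src pad roughly as below. This is only a minimal sketch of the attachment, not my exact code; the element name "primary_gie" and the callback name event_detect_src_pad_probe are placeholders.
#include <gst/gst.h>
#include "gstnvdsmeta.h"

/* Minimal sketch: buffer probe on the pgie src pad that walks the batch and,
 * when an event fires, runs the image/video saving code shown above.
 * "primary_gie" and event_detect_src_pad_probe are placeholder names. */
static GstPadProbeReturn
event_detect_src_pad_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    /* per-frame event detection; on an event, call the image-saving
     * (nvds_obj_enc_*) and video-saving (NvDsSR*) code shown above */
  }
  return GST_PAD_PROBE_OK;
}

/* During pipeline construction */
GstElement *pgie = gst_bin_get_by_name (GST_BIN (pipeline), "primary_gie");
GstPad *pgie_src_pad = gst_element_get_static_pad (pgie, "src");
gst_pad_add_probe (pgie_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
    event_detect_src_pad_probe, NULL, NULL);
gst_object_unref (pgie_src_pad);
gst_object_unref (pgie);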
Has anyone encountered a similar problem?
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)