• Hardware Platform : GPU
• DeepStream Version : 6.0.1
• TensorRT Version : 8.0.1.6
• NVIDIA GPU Driver Version (valid for GPU only) : 470.129.06
Hi,
I am using a customized version of the deepstream-ssd-parser example from deepstream_python_apps. I am using the RetinaFace model from tensorrtx for face detection, and I was able to convert the model to a TensorRT engine inside the deepstream-6.0.1-triton Docker container.
Now, while running the pipeline, I am hitting two issues:
- The pipeline freezes/halts partway through the stream. For example, it sometimes stops at frame 100, sometimes at frame 250, and so on.
- If the pipeline does not freeze and reaches end-of-stream, it exits with a segmentation fault (core dumped), so the app never completes cleanly.
Is there a known solution to either of these bugs?
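To capture more detail on the freeze and the crash, this is a minimal debugging setup I can use, assuming gdb is available in the container (the script name below is a placeholder for my actual pipeline script):

```shell
# Allow a core dump to be written and turn on verbose GStreamer logging
# before reproducing the crash.
ulimit -c unlimited || true
export GST_DEBUG=3
export GST_DEBUG_FILE=/tmp/gst.log

# Run under gdb to get a backtrace at the segfault (placeholder script name):
# gdb -ex run -ex bt --args python3 deepstream_ssd_parser.py <uri>
```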
Config file:
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    inputs: [ {
      name: "data"
    }]
    outputs: [
      {name: "prob"}
    ]
    trt_is {
      model_name: "retinaface"
      version: -1
      model_repo {
        root: "./models/"
        log_level: 1
        strict_model_config: false
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_BGR
    tensor_order: TENSOR_ORDER_LINEAR
    tensor_name: "data"
    maintain_aspect_ratio: 0
    frame_scaling_hw: FRAME_SCALING_HW_GPU
    frame_scaling_filter: 1
    normalize {
      scale_factor: 1.0
      channel_offsets: [104.0, 117.0, 123.0]
    }
  }
  postprocess {
    labelfile_path: "labels_retina.txt"
    other {}
  }
  extra {
    copy_input_to_host_buffers: false
  }
  custom_lib {
    path: "/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser.so"
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
output_control {
  output_tensor_meta: true
}
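For completeness, my understanding of the normalize block above is that, with scale_factor 1.0, it only subtracts the per-channel BGR means (104, 117, 123), which is the usual RetinaFace preprocessing. A minimal sketch of that per-pixel transform, assuming the plugin computes out = scale_factor * (pixel - channel_offset) (this is an illustration, not the actual plugin code):

```python
# Sketch of the preprocessing applied to each BGR pixel, mirroring the
# normalize block in the config: out = scale_factor * (in - channel_offset).
scale_factor = 1.0
channel_offsets = [104.0, 117.0, 123.0]  # per-channel B, G, R means

def normalize_pixel(bgr):
    """Apply per-channel mean subtraction to one BGR pixel value triple."""
    return [scale_factor * (v - off) for v, off in zip(bgr, channel_offsets)]

print(normalize_pixel([104.0, 117.0, 123.0]))  # mean pixel maps to [0.0, 0.0, 0.0]
```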