Problems with YOLOv8 model + Jetson Xavier AGX

I need help understanding an issue I’m encountering with my system. I’m using a Jetson Xavier AGX hardware platform with the following software configurations:
• Hardware Platform: Jetson Xavier AGX
• DeepStream Version: 6.3
• JetPack Version: 5.1.2 GA
• TensorRT Version: 8.5.2.2
I’m running a YOLOv8 face-detection model as a secondary stage behind another YOLOv8 model for person detection. Initially, everything works fine for about an hour. After that, the pipeline stops and shows a warning. Surprisingly, the process doesn’t terminate, as confirmed with ‘ps aux | grep python3’, but it stops receiving frames.
Here’s a snippet of the logs:

Unknown or legacy key specified 'is-classifier' for group [property]
Unknown or legacy key specified 'is-classifier' for group [property]
Unknown or legacy key specified 'is-classifier' for group [property]
0:00:03.955841848 27756 0x3ac38b50 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary3-nvinference-engine> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 5]: deserialized trt engine from :test_model_age/age_googlenet.caffemodel_b2_gpu0_fp32.engine
0:00:04.005038422 27756 0x3ac38b50 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary3-nvinference-engine> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 5]: Use deserialized engine model:age_googlenet.caffemodel_b2_gpu0_fp32.engine
0:00:04.014347932 27756 0x3ac38b50 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary3-nvinference-engine> [UID 5]: Load new model:config_age_1.txt sucessfully
0:00:04.940455311 27756 0x3ac38b50 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 4]: deserialized trt engine from :model_classi_gendre/gender_net.caffemodel_b2_gpu0_fp32.engine
0:00:04.989419231 27756 0x3ac38b50 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 4]: Use deserialized engine model: model_classi_gendre/gender_net.caffemodel_b2_gpu0_fp32.engine
0:00:04.993044021 27756 0x3ac38b50 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary2-nvinference-engine> [UID 4]: Load new model:dstest2_sgie2_config.txt sucessfully
0:00:05.899151015 27756 0x3ac38b50 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 3]: deserialized trt engine from :gender_googlenet/gender_googlenet.caffemodel_b2_gpu0_fp32.engine
0:00:05.947119996 27756 0x3ac38b50 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 3]: Use deserialized engine model: gender_googlenet/gender_googlenet.caffemodel_b2_gpu0_fp32.engine
0:00:05.950555495 27756 0x3ac38b50 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary1-nvinference-engine> [UID 3]: Load new model:confi_genre_google.txt sucessfully
0:00:06.888169950 27756 0x3ac38b50 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference2> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 2]: deserialized trt engine from :model_face_detection/yolov8n-face.onnx_b2_gpu0_fp32.engine
0:00:06.936303037 27756 0x3ac38b50 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference2> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 2]: Use deserialized engine model: model_face_detection/yolov8n-face.onnx_b2_gpu0_fp32.engine
0:00:06.942079154 27756 0x3ac38b50 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference2> [UID 2]: Load new model:config_detect_face.txt sucessfully
0:00:08.201292784 27756 0x3ac38b50 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :model_b2_gpu0_fp32.engine
0:00:08.248198342 27756 0x3ac38b50 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: model_b2_gpu0_fp32.engine
0:00:08.252823927 27756 0x3ac38b50 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference1> [UID 1]: Load new model:config_person_2rtsp.txt sucessfully
sh: 1: modprobe: not found
NvMMLiteOpen : Block : BlockType = 261 
NvMMLiteBlockCreate : Block : BlockType = 261 
NvMMLiteOpen : Block : BlockType = 261 
NvMMLiteBlockCreate : Block : BlockType = 261  

Warning Message…

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:4575: => VIC Configuration failed image scale factor exceeds 16, use GPU for Transformation
0:07:10.377733093 27756 0x3ac431e0 WARN nvinfer gstnvinfer.cpp:1463:convert_batch_and_push_to_input_thread:<primary-inference2> error: NvBufSurfTransform failed with error -3 while converting buffer
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:4575: => VIC Configuration failed image scale factor exceeds 16, use GPU for Transformation
0:07:10.385980199 27756 0x3ac431e0 WARN nvinfer gstnvinfer.cpp:1463:convert_batch_and_push_to_input_thread:<primary-inference2> error: NvBufSurfTransform failed with error -3 while converting buffer

The warning indicates that the image scale factor exceeds the VIC’s limit of 16, so the VIC configuration fails and NvBufSurfTransform returns error -3 while converting buffers, which is presumably why the pipeline stalls. I hope someone can assist in resolving this issue.

Please refer to this FAQ.
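For anyone hitting the same warning: the usual guidance is to move nvinfer’s pre-inference scaling off the VIC and onto the GPU. Assuming the DeepStream 6.x config key names (check the Gst-nvinfer property table for your exact version), the change in the face detector’s config file (e.g. config_detect_face.txt) would look like:

```ini
[property]
# Do the scaling on the GPU instead of the VIC, which rejects
# scale factors above 16.
# 0 = platform default, 1 = GPU, 2 = VIC (Jetson only)
scaling-compute-hw=1
```

This trades some GPU time for avoiding the VIC limit; whether it also fixes the stall depends on why the pipeline stops consuming buffers after the error.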

Hello,
I integrated a solution into my face-detection configuration file. It worked at first, but then stopped detecting faces, and I’m unsure why. It ran for 3 hours and then stopped receiving frames. I use a thread to check for frame reception; if no frames arrive, it should kill the process. However, sometimes the pipeline stops receiving frames but the process isn’t killed. It stays in this state, and memory consumption doesn’t decrease. I need help; I’m using two RTSP links for my work.
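One way to make that “kill the process when frames stop” thread reliable is to keep the stall bookkeeping separate from GStreamer and drive it from a pad probe. A minimal sketch (the class name, timeout, and injectable clock are mine, not from any DeepStream sample):

```python
import time


class FrameWatchdog:
    """Tracks when the last frame arrived and reports a stall once no
    frame has been seen for timeout_sec. The clock is injectable so the
    logic can be tested without a live pipeline."""

    def __init__(self, timeout_sec=10.0, clock=time.monotonic):
        self.timeout_sec = timeout_sec
        self.clock = clock
        self.last_frame = clock()

    def on_frame(self):
        # Call this from a GStreamer pad probe on every buffer
        # (Gst.PadProbeType.BUFFER) to refresh the timestamp.
        self.last_frame = self.clock()

    def stalled(self):
        # True once no frame has arrived within the timeout window.
        return self.clock() - self.last_frame > self.timeout_sec
```

When `stalled()` turns true in a periodic check, prefer `os._exit(1)` over `sys.exit()`: `sys.exit()` only raises `SystemExit` in the calling thread, so if the pipeline thread is blocked inside a failing transform, the process can linger in exactly the half-dead state described above.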

There is no update from you for a period, assuming this is not an issue any more. Hence we are closing this topic. If need further support, please open a new one. Thanks.
Which sample are you referring to? What is the whole media pipeline? If you want the app to exit when it stops receiving frames, please refer to the deepstream-app open-source code. deepstream-app uses rtspsrc_monitor_probe_func to monitor the source’s data reception, and reconnects the RTSP source in watch_source_status when no data has been received for a specific time. You can instead make the app exit when it finds no incoming data for a specific time.
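The probe-plus-periodic-check pattern described above can be sketched in Python (deepstream-app itself is C; the class, parameter names, and reset limit here are illustrative): a pad probe refreshes a timestamp, a periodic timer checks it and resets the source, and after too many resets the timer stops so the app can exit.

```python
import time


class SourceMonitor:
    """Sketch of the deepstream-app idea: a buffer probe records data
    reception, a periodic check resets a stalled RTSP source, and the
    check gives up (returns False) after max_resets attempts."""

    def __init__(self, reset_fn, timeout_sec=10.0, max_resets=3,
                 clock=time.monotonic):
        self.reset_fn = reset_fn      # e.g. cycle the rtspsrc bin NULL -> PLAYING
        self.timeout_sec = timeout_sec
        self.max_resets = max_resets
        self.clock = clock
        self.last_buf = clock()
        self.resets = 0

    def on_buffer(self):
        # Attach via a pad probe, like rtspsrc_monitor_probe_func.
        self.last_buf = self.clock()

    def check(self):
        # Run periodically (e.g. a 1 s GLib timer), like watch_source_status.
        if self.clock() - self.last_buf > self.timeout_sec:
            if self.resets >= self.max_resets:
                return False          # stop the timer; caller exits the app
            self.reset_fn()
            self.resets += 1
            self.last_buf = self.clock()
        return True                   # keep the timer alive
```

Returning False from `check()` maps naturally onto GLib timeout callbacks, where a False return removes the timer; the caller can then quit the main loop or exit outright.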

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.