DeepStream stops working after several seconds when using a webcam

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Orin NX
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only) 5.1
• TensorRT Version 5.1.1
• NVIDIA GPU Driver Version (valid for GPU only) 11.4
• Issue Type( questions, new requirements, bugs) Bugs and questions
• How to reproduce the issue ? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing)

With the above configuration, I’m trying to deploy YOLO on the Jetson with a USB webcam attached. It worked fine for several seconds, then stopped with messages saying “nvdsinfer_cuda_error”, and something like “failed to dequeue output from inferencing”, “Failed to add cudaStream callback for returning input buffers”, “Error from primary_gie: Failed to queue input batch for inferencing”, and “Unable to set device in gst_nvvideoconvert_transform Line 3326”. Then it freezes, so I cannot copy any more information or type commands after that.

However, before the video panel shows up, there are error messages saying:
“Error code 1: serialization … Magic tag does not match”,
“Error code 4: internal error”,
“Warning: [TRT]: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.”

Can you give me some guidance on this situation?

• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
The command I run:
deepstream-app -c deepstream_app_config.txt

1. Can deepstream-test1 work normally?

2. Have you changed the code or configuration file? Can you post the log after running GST_DEBUG=3 deepstream-app -c deepstream_app_config.txt?

You can also refer to this FAQ about webcams.
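One more note on the “Magic tag does not match” error you mentioned: that serialization error usually means TensorRT was asked to deserialize a file that is not a valid engine for the installed TensorRT version (for example a weights/ONNX file, or an engine built by a different TensorRT release). DeepStream then falls back to rebuilding the engine, which matches the long build time before the video panel appears. As a sketch (the file names are examples, adjust them to your setup), the relevant lines in config_infer_primary_yoloV5.txt would look like:

```ini
# Hypothetical fragment of config_infer_primary_yoloV5.txt; paths are examples.
[property]
# The model the engine is built from (ONNX in the YOLOv5 guide)
onnx-file=yolov5s.onnx
# Must point at a serialized .engine file, not the .wts/.onnx weights;
# a wrong or stale path here triggers the "Magic tag does not match"
# error and forces a rebuild on every run.
model-engine-file=model_b1_gpu0_fp32.engine
```

If the engine was built by an older TensorRT version, deleting it and letting deepstream-app rebuild it is usually enough.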

Hello,

Thanks for your timely response.

I’m following the installation instructions here to use YOLOv5 on the Jetson: https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/docs/YOLOv5.md

As I checked, I don’t have deepstream-test1 in the repository. If I use the provided sample video, it runs without any issue; the problem only appears when I use the webcam and change the configuration file “deepstream_app_config.txt”.

Here is the output of the command you provided.

GST_DEBUG=3 deepstream-app -c deepstream_app_config.txt
ERROR: [TRT]: 1: [stdArchiveReader.cpp::StdArchiveReader::32] Error Code 1: Serialization (Serialization assertion magicTagRead == kMAGIC_TAG failed.Magic tag does not match)
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::65] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: Deserialize engine failed from file: /home/ift500G/DeepStream-Yolo/yolov5s.wts
0:00:03.381407108 12835 0xaaaad0e64290 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 1]: deserialize engine from file :/home/ift500G/DeepStream-Yolo/yolov5s.wts failed
0:00:03.590350652 12835 0xaaaad0e64290 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 1]: deserialize backend context from engine from file :/home/ift500G/DeepStream-Yolo/yolov5s.wts failed, try rebuild
0:00:03.590469664 12835 0xaaaad0e64290 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.

Building the TensorRT Engine

Then the video panel shows up after several minutes, with this info on screen:

Building complete

0:06:19.676560012 12835 0xaaaad0e64290 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2034> [UID = 1]: serialize cuda engine to file: /home/ift500G/DeepStream-Yolo/model_b1_gpu0_fp32.engine successfully
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT input           3x640x640       
1   OUTPUT kFLOAT boxes           25200x4         
2   OUTPUT kFLOAT scores          25200x1         
3   OUTPUT kFLOAT classes         25200x1         

0:06:19.964668658 12835 0xaaaad0e64290 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/ift500G/DeepStream-Yolo/config_infer_primary_yoloV5.txt sucessfully
0:06:19.966301829 12835 0xaaaad0e64290 WARN                 v4l2src gstv4l2src.c:695:gst_v4l2src_query:<src_elem> Can't give latency since framerate isn't fixated !

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF:  FPS 0 (Avg)	
**PERF:  0.00 (0.00)	
** INFO: <bus_callback:239>: Pipeline ready

** INFO: <bus_callback:225>: Pipeline running

0:06:20.047200292 12835 0xaaaad0ea2920 WARN          v4l2bufferpool gstv4l2bufferpool.c:809:gst_v4l2_buffer_pool_start:<src_elem:pool:src> Uncertain or not enough buffers, enabling copy threshold
**PERF:  15.33 (15.04)	
**PERF:  14.49 (14.76)	
0:06:33.512530606 12835 0xaaaad0ea2920 WARN                 v4l2src gstv4l2src.c:978:gst_v4l2src_create:<src_elem> lost frames detected: count = 1 - ts: 0:00:13.463749602

But after about 10 seconds, it stopped and threw the errors below:

0:06:50.028806352 12835 0xaaaad0ea2920 WARN                 v4l2src gstv4l2src.c:978:gst_v4l2src_create:<src_elem> lost frames detected: count = 1 - ts: 0:00:29.968252706
ERROR: Failed to make stream wait on event, cuda err_no:700, err_str:cudaErrorIllegalAddress
ERROR: Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
0:06:50.178165799 12835 0xaaaad05ba360 WARN                 nvinfer gstnvinfer.cpp:1404:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1404): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:335: => Failed in mem copy

Quitting
nvstreammux: Successfully handled EOS for source_id=0
0:06:51.951143932 12835 0xaaaad0ea2800 ERROR                nvinfer gstnvinfer.cpp:1251:get_converted_buffer:<primary_gie> cudaMemset2DAsync failed with error cudaErrorIllegalAddress while converting buffer
0:06:51.951171261 12835 0xaaaad0ea2800 WARN                 nvinfer gstnvinfer.cpp:1560:gst_nvinfer_process_full_frame:<primary_gie> error: Buffer conversion failed
0:06:51.951359555 12835 0xaaaad0ea2800 WARN                   queue gstqueue.c:1566:gst_queue_loop:<primary_gie_queue> error: Internal data stream error.
0:06:51.951371427 12835 0xaaaad0ea2800 WARN                   queue gstqueue.c:1566:gst_queue_loop:<primary_gie_queue> error: streaming stopped, reason error (-5)
ERROR from primary_gie: Buffer conversion failed
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1560): gst_nvinfer_process_full_frame (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
ERROR from primary_gie_queue: Internal data stream error.
Debug info: gstqueue.c(1566): gst_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstQueue:primary_gie_queue:
streaming stopped, reason error (-5)

The following is my deepstream_app_config.txt file:

[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV5.txt

[tests]
file-loop=0

Thank you!

If local file input works normally, then the problem may be with the V4L2 camera input.

I don’t see a [source] group in your config file.

Is this the config file you are actually using?

You can refer to the configuration of the V4L2 camera in the following path:

/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source1_usb_dec_infer_resnet_int8.txt

It is likely that the camera data did not enter the DeepStream pipeline correctly.
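As a sketch based on that sample config (the device node, resolution, and frame rate are assumptions, adjust them to your camera), a minimal [source0] group for a USB camera looks like:

```ini
# Hypothetical [source0] group for a V4L2 USB camera; values are examples.
[source0]
enable=1
# type=1 selects a camera (V4L2) source in deepstream-app
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
# 0 means /dev/video0
camera-v4l2-dev-node=0
```

For a live camera it is also common to set live-source=1 in the [streammux] group. You can first verify the camera outside DeepStream with something like gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! autovideosink.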

Hello,

Thanks for your help.

I’m not sure what the exact cause is, but I found that my JetPack version is 5.1.1 while the DeepStream version was 6.3. After I downgraded DeepStream to 6.2, the issue disappeared.

Thank you, and if possible, would you please highlight the version compatibility in your documentation, or provide an easier way to locate and install other DeepStream versions?

For others who may encounter the same issue, all previous DeepStream releases can be found on the NVIDIA Developer downloads page (login required).
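To check the installed versions before picking a DeepStream release (these are the commands I ran on the Jetson; the output will differ per device):

```shell
# JetPack / L4T release string
cat /etc/nv_tegra_release

# DeepStream, TensorRT, and CUDA versions as seen by deepstream-app
deepstream-app --version-all
```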

Thank you.


Thanks for sharing.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.