• Hardware Platform (Jetson / GPU): NVIDIA A10
• DeepStream Version: 6.3
• TensorRT Version: 8.5.3
• NVIDIA GPU Driver Version (valid for GPU only): 535.129.03
• Issue Type (questions, new requirements, bugs): bugs
I tested the default DeepStream example deepstream-image-meta-test, which encodes detected object crops to JPEG images, on an NVIDIA A10, but got the following error from the function nvds_obj_enc_process:
Cuda failure: status=801
Cuda failure: status=801
CUDA Runtime error cudaGetLastError() # operation not supported, code = cudaErrorNotSupported [ 801 ] in file cuosd.cpp:756
Does this mean that the NVIDIA A10 does not support JPEG encoding at all? For comparison, the RTX 3090 can encode JPEG images even though it doesn't have a JPEG HW encoder.
How can I tell whether a given NVIDIA GPU supports JPEG encoding? For example, I cannot see it here:
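As a starting point for checking what a given GPU reports, a minimal sketch (this only shows the model name and compute capability via nvidia-smi; it does not directly reveal whether a dedicated JPEG/NVJPG engine is present, which NVIDIA documents per-chip rather than exposing through this tool; the `compute_cap` query field assumes a reasonably recent driver):

```shell
# Print GPU name and compute capability if a driver is installed;
# fall back to a message otherwise so the script always succeeds.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,compute_cap --format=csv,noheader
else
  echo "nvidia-smi not found"
fi
```

From there you still need to cross-check the reported chip against NVIDIA's per-GPU support matrices to confirm hardware JPEG support.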
I commented out the code in the osd_sink_pad_buffer_probe function, but I still get the error:
# ./deepstream-image-meta-test 0 file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4
WARNING: Overriding infer-config batch-size (2) with number of sources (1)
Now playing...
0:00:00.125792173 70472 0x55e9c40a40c0 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1174> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:00.125942123 70472 0x55e9c40a40c0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
0:00:26.355819497 70472 0x55e9c40a40c0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2034> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:26.442559073 70472 0x55e9c40a40c0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:ds_image_meta_pgie_config.txt sucessfully
Running...
Cuda failure: status=801
Cuda failure: status=801
Frame Number = 0 Number of objects = 13 Vehicle Count = 9 Person Count = 4
CUDA Runtime error cudaGetLastError() # operation not supported, code = cudaErrorNotSupported [ 801 ] in file cuosd.cpp:756
0:00:27.043328374 70472 0x55e9c2c1c460 WARN nvinfer gstnvinfer.cpp:2397:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Internal data stream error.
0:00:27.043369007 70472 0x55e9c2c1c460 WARN nvinfer gstnvinfer.cpp:2397:gst_nvinfer_output_loop:<primary-nvinference-engine> error: streaming stopped, reason not-negotiated (-4)
ERROR from element primary-nvinference-engine: Internal data stream error.
Error details: gstnvinfer.cpp(2397): gst_nvinfer_output_loop (): /GstPipeline:ds-image-meta-test-pipeline/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason not-negotiated (-4)
Cuda failure: status=801
Cuda failure: status=801
Returned, stopping playback
Frame Number = 1 Number of objects = 11 Vehicle Count = 8 Person Count = 3
CUDA Runtime error cudaGetLastError() # operation not supported, code = cudaErrorNotSupported [ 801 ] in file cuosd.cpp:756
Cuda failure: status=801
Cuda failure: status=801
Cuda failure: status=801
Cuda failure: status=801
Cuda failure: status=801
Cuda failure: status=801
Cuda failure: status=801
Cuda failure: status=801
Deleting pipeline
Could you try to narrow down where the problem occurs first? The attached log alone does not show that the problem is in nvds_obj_enc_process.
You can comment out the two probes separately and check whether the pipeline runs properly.