"No kernel image is available for execution on the device" error using DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform: Tesla K80 (GPU)
• DeepStream Version: 5.0 (but also seen on DS 4.2 and DS 4.0)
• TensorRT Version: 7.0 (with CUDA 10.2)
• NVIDIA GPU Driver Version: (preinstalled on the Azure VM)

I have a VM in Azure that comes preinstalled with NVIDIA drivers and K80 GPUs. I installed CUDA 10.2, TensorRT 7, and DeepStream 5.0. When I run `deepstream-app -c config` I get the following error. It does not occur on V100s, only on K80s.

`ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: engine.cpp (418) - Cuda Error in enqueueInternal: 209 (no kernel image is available for execution on the device)`

I already modified the config_infer* file to force the engine to FP32, since the K80 does not support lower-precision inference.
I ran the trtexec test to isolate TensorRT, and it PASSED:
`/usr/src/tensorrt/bin/trtexec --deploy=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.prototxt --output=conv2d_bbox --output=conv2d_cov/Sigmoid`

Below is the entire error log:
```
deepstream-app -c samples/configs/deepstream-app/test1.txt

*** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
0:00:02.412796195 24796 0x5615b3697660 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_fp32.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1            3x368x640
1   OUTPUT kFLOAT conv2d_bbox        16x23x40
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:02.412893794 24796 0x5615b3697660 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_fp32.engine
0:00:02.416767248 24796 0x5615b3697660 INFO nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary_4j.txt sucessfully

Runtime commands:
        h: Print this help
        q: Quit
        p: Pause
        r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:181>: Pipeline ready
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: engine.cpp (418) - Cuda Error in enqueueInternal: 209 (no kernel image is available for execution on the device)
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: FAILED_EXECUTION: std::exception
ERROR: nvdsinfer_backend.cpp:290 Failed to enqueue inference batch
ERROR: nvdsinfer_context_impl.cpp:1408 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:02.661474277 24796 0x5615acdada80 WARN nvinfer gstnvinfer.cpp:1188:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: gstnvinfer.cpp(1188): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
Quitting
[the same enqueueInternal 209 / FAILED_EXECUTION / "Failed to enqueue inference batch" error group repeats several more times with later timestamps]
App run failed
```

Hi

`ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: engine.cpp (418) - Cuda Error in enqueueInternal: 209 (no kernel image is available for execution on the device)`

The Tesla K80's compute capability is 3.7, so the prebuilt library does not include a kernel image for it.
Could you please add `-arch=sm_37` to the nvcc compile command line in sources/libs/nvdsinfer/Makefile and rebuild?
Remember to copy the built library to /opt/nvidia/deepstream/deepstream-$VERSION if you installed DeepStream there, or to your installed DeepStream library path.
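For reference, the change amounts to something like the following sketch; the variable name `NVCCFLAGS` is an assumption, so add the flag to whichever variable your copy of sources/libs/nvdsinfer/Makefile actually passes to nvcc:

```makefile
# Assumed flag variable name; adjust to match the nvcc flags variable in
# sources/libs/nvdsinfer/Makefile. Compute capability 3.7 = Tesla K80.
NVCCFLAGS += -arch=sm_37
```

Then run `make` and `sudo make install` from sources/libs/nvdsinfer and verify the rebuilt library landed in your DeepStream install's lib directory.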

I did what you suggested: I added the flag `-arch=sm_37` in the Makefile, saved it, and ran `make` and `sudo make install`.
That seemed to move the problem to this:
jflo@jflo-datascience-ds5-low:/opt/nvidia/deepstream/deepstream$ GST_DEBUG=nvinfer:5 deepstream-app -c samples/configs/deepstream-app/test1.txt

 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

WARNING: [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
0:00:02.523303245 26305 0x556ccf9cd460 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640
1   OUTPUT kFLOAT conv2d_bbox     16x23x40
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:02.523376745 26305 0x556ccf9cd460 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_fp32.engine
0:00:02.526589839 26305 0x556ccf9cd460 INFO                 nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary_4j.txt sucessfully

Runtime commands:
        h: Print this help
        q: Quit

        p: Pause
        r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:181>: Pipeline ready

Bus error (core dumped)

1) Where is the dump file located?
2) For reference, this problem seems very similar to this other (unresolved) issue: Bus error while running deepstream reference app
3) I also set GST_DEBUG=nvinfer:5. Where is the debug output located?
Thanks!!

For how to dump the core file you can refer to this,

After you run the app with `GST_DEBUG=<category>:<level>`, the log goes to standard output; you can also save it to a file:
`GST_DEBUG=<category>:<level> app > logname 2>&1`

Also see this file for reference,

FYI, I turned on GST_DEBUG=3 and you can see a lot more errors: deepstream5_bus_error_gstdebug3.txt (168.8 KB)

Can you paste the config you used?

Hi Amycao, a couple of new developments. It was suggested to change the config to
codec=2 #H265 and enc-type=1 #software
That change effectively stops the application from dying with the bus error, but it is still not fully working.
On the surface the app appears to be working, BUT it does not produce any output in the RTSP stream or in the out.mp4 file; it produces a GREEN background screen. Perhaps it is a different issue, perhaps not. Let me know if you want me to open a different discussion. Thanks.
See the config files below: config_infer_primary_4j.txt (3.1 KB) test1.txt (3.9 KB)
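For anyone following along, the sink change described above would look roughly like this in the deepstream-app config (a sketch based on the DeepStream sample config format; the group name [sink1] is an assumption, and the port is taken from the RTSP URL in this thread's logs):

```
[sink1]
enable=1
# type=4 = RTSPStreaming in deepstream-app sample configs
type=4
# codec=2 = H265 (codec=1 = H264)
codec=2
# enc-type=1 = software encoder; enc-type=0 = hardware (NVENC)
enc-type=1
rtsp-port=8554
```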

See the current output:

See application running log:
**PERF: FPS 0 (Avg) FPS 1 (Avg) FPS 2 (Avg) FPS 3 (Avg) FPS 4 (Avg) FPS 5 (Avg) FPS 6 (Avg) FPS 7 (Avg) FPS 8 (Avg) FPS 9 (Avg) FPS 10 (Avg) FPS 11 (Avg) FPS 12 (Avg) FPS 13 (Avg) FPS 14 (Avg) FPS 15 (Avg) FPS 16 (Avg) FPS 17 (Avg) FPS 18 (Avg) FPS 19 (Avg) FPS 20 (Avg) FPS 21 (Avg) FPS 22 (Avg) FPS 23 (Avg) FPS 24 (Avg) FPS 25 (Avg) FPS 26 (Avg) FPS 27 (Avg) FPS 28 (Avg) FPS 29 (Avg)
**PERF: 3.50 (2.82) 4.57 (4.02) 3.63 (3.05) 4.10 (3.58) 3.47 (2.72) 3.47 (2.74) 4.65 (4.18) 4.07 (3.63) 3.62 (3.16) 3.45 (2.96) 3.79 (3.40) 3.49 (3.03) 3.49 (3.06) 3.49 (3.08) 3.90 (3.66) 3.49 (3.23) 4.44 (4.22) 3.45 (3.18) 3.89 (3.71) 3.42 (3.13) 3.42 (3.23) 4.37 (4.25) 3.64 (3.53) 3.48 (3.38) 3.42 (3.33) 3.64 (3.59) 3.46 (3.43) 4.01 (4.01) 4.44 (3.82)4.46 (3.84)

(deepstream-app:61312): GLib-GObject-WARNING **: 01:38:13.573: g_object_get_is_valid_property: object class 'GstUDPSrc' has no property named 'pt'
**PERF: 3.40 (3.27)     3.40 (3.59)     3.40 (3.31)     3.40 (3.45)     3.40 (3.28)     3.40 (3.28)     3.40 (3.63)     3.40 (3.46)     3.40 (3.34)     3.40 (3.31)     3.40 (3.40)     3.40 (3.32)     3.40 (3.33)     3.40 (3.33)     3.41 (3.47)        3.41 (3.36)     3.41 (3.64)     3.41 (3.35)     3.41 (3.48)     3.41 (3.36)     3.41 (3.37)     3.41 (3.65)     3.41 (3.43)     3.41 (3.39)     3.41 (3.39)     3.40 (3.45)     3.41 (3.41)     3.41 (3.57)     3.40 (3.66)3.40 (3.53)
**PERF: 3.37 (3.33)     3.37 (3.51)     3.37 (3.35)     3.37 (3.43)     3.37 (3.33)     3.37 (3.33)     3.37 (3.54)     3.37 (3.44)     3.37 (3.37)     3.37 (3.35)     3.37 (3.40)     3.37 (3.36)     3.36 (3.36)     3.37 (3.36)     3.36 (3.44)        3.36 (3.38)     3.36 (3.54)     3.36 (3.37)     3.36 (3.45)     3.36 (3.38)     3.36 (3.38)     3.36 (3.55)     3.36 (3.42)     3.37 (3.31)     3.37 (3.30)     3.37 (3.34)     3.37 (3.32)     3.36 (3.42)     3.36 (3.47)3.37 (3.48)
**PERF: 3.37 (3.35)     3.37 (3.48)     3.37 (3.37)     3.37 (3.42)     3.37 (3.35)     3.37 (3.35)     3.37 (3.50)     3.37 (3.43)     3.37 (3.38)     3.37 (3.36)     3.37 (3.40)     3.37 (3.37)     3.38 (3.37)     3.38 (3.37)     3.38 (3.43)        3.38 (3.38)     3.38 (3.50)     3.38 (3.38)     3.38 (3.43)     3.38 (3.38)     3.37 (3.32)     3.37 (3.45)     3.37 (3.35)     3.37 (3.34)     3.37 (3.33)     3.37 (3.36)     3.37 (3.34)     3.37 (3.41)     3.37 (3.45)3.37 (3.45)
**PERF: 3.30 (3.36)     3.30 (3.46)     3.30 (3.33)     3.30 (3.37)     3.30 (3.32)     3.30 (3.32)     3.30 (3.43)     3.30 (3.37)     3.30 (3.34)     3.30 (3.33)     3.30 (3.35)     3.30 (3.33)     3.29 (3.33)     3.30 (3.33)     3.29 (3.38)        3.29 (3.34)     3.29 (3.43)     3.29 (3.34)     3.29 (3.38)     3.29 (3.34)     3.30 (3.34)     3.30 (3.44)     3.30 (3.36)     3.30 (3.35)     3.30 (3.35)     3.30 (3.37)     3.30 (3.35)     3.30 (3.41)     3.29 (3.44)3.30 (3.44)
**PERF: 3.34 (3.33)     3.33 (3.41)     3.34 (3.34)     3.33 (3.38)     3.33 (3.33)     3.33 (3.33)     3.33 (3.42)     3.33 (3.38)     3.33 (3.35)     3.33 (3.34)     3.33 (3.36)     3.33 (3.34)     3.34 (3.34)     3.32 (3.35)     3.34 (3.38)        3.33 (3.35)     3.33 (3.43)     3.33 (3.35)     3.33 (3.38)     3.33 (3.35)     3.33 (3.35)     3.33 (3.43)     3.33 (3.37)     3.33 (3.36)     3.34 (3.32)     3.34 (3.34)     3.33 (3.33)     3.34 (3.37)     3.35 (3.39)3.34 (3.40)
**PERF: 3.26 (3.34)     3.27 (3.41)     3.27 (3.35)     3.27 (3.35)     3.27 (3.31)     3.27 (3.31)     3.27 (3.39)     3.27 (3.35)     3.27 (3.32)     3.26 (3.32)     3.27 (3.34)     3.26 (3.32)     3.27 (3.32)     3.27 (3.32)     3.27 (3.35)        3.27 (3.33)     3.27 (3.39)     3.26 (3.33)     3.27 (3.36)     3.26 (3.33)     3.26 (3.33)     3.26 (3.39)     3.26 (3.34)     3.26 (3.34)     3.26 (3.33)     3.26 (3.35)     3.27 (3.34)     3.26 (3.37)     3.26 (3.40)3.26 (3.40)
**PERF: 3.26 (3.32)     3.26 (3.38)     3.26 (3.33)     3.26 (3.35)     3.26 (3.32)     3.26 (3.32)     3.26 (3.39)     3.26 (3.36)     3.26 (3.33)     3.26 (3.33)     3.26 (3.35)     3.26 (3.33)     3.26 (3.33)     3.26 (3.33)     3.25 (3.33)        3.25 (3.31)     3.25 (3.37)     3.26 (3.31)     3.26 (3.33)     3.25 (3.31)     3.26 (3.31)     3.26 (3.37)     3.26 (3.32)     3.26 (3.32)     3.26 (3.31)     3.26 (3.33)     3.25 (3.32)     3.26 (3.35)     3.26 (3.37)3.26 (3.37)
**PERF: 3.30 (3.31)     3.30 (3.36)     3.30 (3.31)     3.30 (3.34)     3.30 (3.31)     3.30 (3.31)     3.30 (3.37)     3.29 (3.34)     3.30 (3.32)     3.30 (3.31)     3.30 (3.33)     3.30 (3.32)     3.30 (3.32)     3.30 (3.32)     3.30 (3.34)        3.30 (3.32)     3.30 (3.37)     3.30 (3.32)     3.30 (3.34)     3.31 (3.32)     3.31 (3.32)     3.31 (3.37)     3.31 (3.33)     3.31 (3.33)     3.31 (3.32)     3.29 (3.34)     3.30 (3.33)     3.30 (3.33)     3.30 (3.35)3.30 (3.35)
**PERF: 3.33 (3.32)     3.34 (3.37)     3.34 (3.32)     3.34 (3.34)     3.34 (3.32)     3.34 (3.32)     3.34 (3.37)     3.32 (3.35)     3.32 (3.33)     3.32 (3.32)     3.32 (3.34)     3.32 (3.32)     3.32 (3.33)     3.32 (3.33)     3.32 (3.35)        3.32 (3.33)     3.32 (3.37)     3.33 (3.31)     3.33 (3.33)     3.33 (3.31)     3.33 (3.31)     3.33 (3.35)     3.33 (3.32)     3.32 (3.31)     3.33 (3.31)     3.35 (3.32)     3.34 (3.31)     3.33 (3.34)     3.33 (3.35)3.33 (3.36)
**PERF: 3.28 (3.31)     3.27 (3.35)     3.28 (3.31)     3.28 (3.33)     3.28 (3.31)     3.27 (3.31)     3.27 (3.35)     3.28 (3.33)     3.28 (3.31)     3.28 (3.31)     3.28 (3.32)     3.28 (3.31)     3.28 (3.31)     3.28 (3.31)     3.28 (3.33)        3.29 (3.32)     3.29 (3.36)     3.28 (3.32)     3.28 (3.33)     3.28 (3.32)     3.28 (3.32)     3.28 (3.36)     3.27 (3.33)     3.29 (3.32)     3.28 (3.32)     3.28 (3.33)     3.28 (3.32)     3.28 (3.35)     3.28 (3.36)3.29 (3.34)
**PERF: 3.29 (3.31)     3.30 (3.35)     3.29 (3.32)     3.29 (3.34)     3.29 (3.31)     3.30 (3.32)     3.30 (3.36)     3.30 (3.34)     3.30 (3.32)     3.30 (3.32)     3.30 (3.33)     3.30 (3.32)     3.30 (3.32)     3.30 (3.32)     3.29 (3.32)        3.29 (3.31)     3.29 (3.34)     3.29 (3.31)     3.29 (3.32)     3.28 (3.31)     3.29 (3.31)     3.29 (3.34)     3.30 (3.32)     3.29 (3.31)     3.29 (3.31)     3.29 (3.32)     3.29 (3.31)     3.29 (3.33)     3.29 (3.34)3.29 (3.35)
**PERF: 3.29 (3.30)     3.29 (3.34)     3.29 (3.31)     3.29 (3.32)     3.29 (3.31)     3.29 (3.31)     3.29 (3.35)     3.29 (3.33)     3.28 (3.31)     3.29 (3.31)     3.28 (3.32)     3.28 (3.31)     3.28 (3.31)     3.28 (3.31)     3.29 (3.33)        3.29 (3.31)     3.29 (3.35)     3.29 (3.31)     3.29 (3.33)     3.30 (3.31)     3.29 (3.31)     3.29 (3.35)     3.29 (3.32)     3.29 (3.32)     3.28 (3.32)     3.29 (3.32)     3.29 (3.32)     3.29 (3.32)     3.29 (3.33)3.29 (3.33)
**PERF: 3.28 (3.31)     3.28 (3.35)     3.28 (3.32)     3.28 (3.33)     3.28 (3.31)     3.28 (3.31)     3.28 (3.35)     3.28 (3.33)     3.29 (3.32)     3.28 (3.32)     3.28 (3.31)     3.28 (3.30)     3.28 (3.30)     3.27 (3.30)     3.28 (3.32)        3.28 (3.31)     3.28 (3.34)     3.28 (3.30)     3.28 (3.32)     3.28 (3.30)     3.28 (3.31)     3.27 (3.34)     3.28 (3.31)     3.28 (3.31)     3.29 (3.31)     3.28 (3.31)     3.28 (3.31)     3.28 (3.33)     3.28 (3.34)3.28 (3.34)
^C** ERROR: <_intr_handler:140>: User Interrupted..

Quitting
App run successful

Hi,
Can you check without encoding, using just a fakesink, in case the display is not there?
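Concretely, that test could be done by pointing the sink group at a fakesink (a sketch; type=1 is FakeSink in the deepstream-app sample configs):

```
[sink0]
enable=1
# type=1 = FakeSink: buffers are consumed and discarded, so neither
# the encoder nor a display is involved
type=1
sync=0
```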

Hi,

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one. Thanks