Hardware Platform: Jetson Orin Nano Developer Kit (Super), 8 GB
DeepStream Version: 7.1
JetPack Version: 6.2
TensorRT Version: 10.3.0.30
CUDA: 12.6.68
Running DeepStream with the Python bindings; tested with a USB camera source and a CCTV camera over RTSP.
Results are as expected when a monitor is connected. The issue appears only when running headless, accessing the Orin over SSH or VNC.
Getting the following error:
Creating Pipeline
Creating Source
Creating Video Converter
Creating fakesink
Playing cam /dev/video0
Unknown or legacy key specified 'input-width' for group [property]
Unknown or legacy key specified 'input-height' for group [property]
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:00.370770320 12515 0xaaaae726bef0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/apps/deepstream-test1-usbcam/model.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0
ERROR: [TRT]: ICudaEngine::getTensorIOMode: Error Code 3: Internal Error (Given invalid tensor name: pred_bbox. Get valid tensor names with getIOTensorName())
0:00:00.371091993 12515 0xaaaae726bef0 WARN nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:2059> [UID = 1]: Could not find output layer 'pred_bbox' in engine
0:00:00.371112986 12515 0xaaaae726bef0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/apps/deepstream-test1-usbcam/model.engine
nvbufsurface: Failed to create EGLImage.
0:00:00.464117614 12515 0xaaaae726bef0 WARN nvinfer gstnvinfer.cpp:1010:gst_nvinfer_start:<primary-inference> error: Failed to set buffer pool to active
=== Detection Running ===
Press Ctrl+C or 'q' to exit gracefully
=====================================
Error: gst-resource-error-quark: Failed to set buffer pool to active (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1010): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference
[ERROR] Failed to send summary to API. Status code: 201, Response: {"status":true,"message":"Successfully"}
[INFO] Pipeline stopped successfully
[INFO] Cleanup completed
[INFO] Exiting program
"Failed to create EGLImage" is the main issue here.
Tried using fakesink so that no display output would be needed.
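For reference, this is roughly the shape of the headless pipeline I'm testing, ending in fakesink instead of a display sink. This is a minimal sketch: the device path, resolution, and config file name are placeholders, not necessarily the exact code from my app.

```python
# Sketch of a headless DeepStream pipeline description ending in fakesink.
# Intended for Gst.parse_launch(); device, resolution, and config path
# are assumed placeholders.
def build_pipeline_desc(device="/dev/video0", config="config_infer.txt"):
    return (
        f"v4l2src device={device} ! videoconvert ! "
        "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! "
        "mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! "
        f"nvinfer config-file-path={config} ! "
        "fakesink sync=false"
    )

if __name__ == "__main__":
    print(build_pipeline_desc())
```

Even with fakesink at the end, the error above still occurs, since the failure happens inside nvinfer's buffer pool setup, before any sink is involved.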
X11 forwarding is enabled for the SSH session, and the X11 forwarding parameters are set in the SSH config file on the Jetson.
Still getting the same error when running the code over SSH from my laptop.
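For completeness, this is what I've been checking in the headless SSH session. My understanding (which may be wrong) is that under X11 forwarding, DISPLAY points at the laptop's forwarded X server, which EGL on the Jetson cannot use for EGLImage creation; the `:0` display number below is an assumption about where a local X server would run.

```shell
# Check which X display (if any) this SSH session is pointing at.
echo "DISPLAY=${DISPLAY:-<unset>}"

# Under X11 forwarding, DISPLAY is typically something like localhost:10.0,
# i.e. the laptop's X server, not a display the Jetson's EGL stack can use.
# One thing I tried is pointing at the Jetson's own local display instead
# (assuming an X server is running there on :0):
#   export DISPLAY=:0
#   python3 my_deepstream_app.py /dev/video0
```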