Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:792 No cameras available

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Orin NX
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only): 6.0
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): bugs

One of our customers is using the Arducam 2.3 MP AR0234 Global Shutter Camera for the NVIDIA Jetson Nano/NX and Jetson Orin NX.

We can detect the camera:

nvidia@tegra-ubuntu:/opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app$ ls /dev/video*
/dev/video0

But when we run the deepstream-app test with the CSI camera sample config file, we get the following issue: No cameras available

nvidia@tegra-ubuntu:/opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app$ deepstream-app -c source1_csi_dec_infer_resnet_int8.txt
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:07.185011014  4512 0xaaaaf6e2f400 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app/../../models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60
2   OUTPUT kFLOAT output_cov/Sigmoid 4x34x60
0:00:07.626312655  4512 0xaaaaf6e2f400 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app/../../models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
0:00:07.643626449  4512 0xaaaaf6e2f400 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app/config_infer_primary.txt sucessfully
Runtime commands:
	h: Print this help
	q: Quit
	p: Pause
	r: Resume
NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.
**PERF:  FPS 0 (Avg)	
**PERF:  0.00 (0.00)	
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:792 No cameras available
** INFO: <bus_callback:291>: Pipeline ready
nvstreammux: Successfully handled EOS for source_id=0
** INFO: <bus_callback:277>: Pipeline running
** INFO: <bus_callback:314>: Received EOS. Exiting ...
Quitting
App run successful
nvidia@tegra-ubuntu:/opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app$

My primary suspicion is that GStreamer does not support the camera's video format.

nvidia@tegra-ubuntu:/opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app$ v4l2-ctl --device=/dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Type: Video Capture
	[0]: 'Y16 ' (16-bit Greyscale)
		Size: Discrete 1920x1200
		Size: Discrete 1920x1080
		Size: Discrete 1280x720
	[1]: 'BA10' (10-bit Bayer GRGR/BGBG)
		Size: Discrete 1920x1200
		Size: Discrete 1920x1080
		Size: Discrete 1280x720

Can anyone help us resolve this issue? Thanks.

What is the camera? USB camera, CSI camera, IP camera,…?

CSI camera

The default “CSI camera” configuration is for cameras which have an integrated ISP and can be handled by the nvarguscamerasrc plugin (Accelerated GStreamer — NVIDIA Jetson Linux Developer Guide documentation). It seems your camera does not have an integrated ISP. You need to use the v4l2 camera configuration and modify the deepstream-app source code to build a v4l2src -> capsfilter -> bayer2rgb -> nvvideoconvert -> capsfilter source in the create_camera_source_bin() function in /opt/nvidia/deepstream/deepstream/sources/apps/apps-common/src/deepstream_source_bin.c.
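On the config side, the [source0] group also needs to be switched from the CSI type to the V4L2 camera type. A rough sketch (modelled on the USB camera sample config; the width/height/fps values are placeholders and must match one of the modes reported by v4l2-ctl):

[source0]
enable=1
# 1 = Camera (V4L2); the CSI sample uses 5 (nvarguscamerasrc)
type=1
camera-width=1920
camera-height=1080
camera-fps-n=30
camera-fps-d=1
# /dev/video0
camera-v4l2-dev-node=0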

v4l2src (gstreamer.freedesktop.org)
bayer2rgb (gstreamer.freedesktop.org)
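Before touching deepstream-app, it can help to verify that such a chain negotiates with this sensor at all. Below is a minimal standalone sketch using the GStreamer C API; the device node, resolution, and the 8-bit “grbg” Bayer format in the caps are assumptions and have to be adjusted to what the driver really exposes (as discussed further down, a raw 10-bit “BA10” mode may not be negotiable at all).

/* Minimal standalone test of the suggested chain (a sketch, not the
 * deepstream-app change itself).  Assumptions: /dev/video0, 1920x1080@30,
 * and that the sensor mode can be negotiated as 8-bit "grbg" Bayer.
 * Build: gcc test_bayer_pipeline.c -o test_bayer_pipeline \
 *          $(pkg-config --cflags --libs gstreamer-1.0)
 */
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GError *err = NULL;

  gst_init (&argc, &argv);

  /* v4l2src -> capsfilter -> bayer2rgb -> nvvideoconvert -> capsfilter,
   * terminated with fakesink just for this test. */
  GstElement *pipeline = gst_parse_launch (
      "v4l2src device=/dev/video0 ! "
      "video/x-bayer,format=grbg,width=1920,height=1080,framerate=30/1 ! "
      "bayer2rgb ! nvvideoconvert ! "
      "video/x-raw(memory:NVMM),format=NV12 ! fakesink", &err);
  if (!pipeline) {
    g_printerr ("Pipeline creation failed: %s\n", err ? err->message : "?");
    g_clear_error (&err);
    return -1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Wait for an error or EOS so negotiation problems show up immediately. */
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  if (msg && GST_MESSAGE_TYPE (msg) == GST_MESSAGE_ERROR) {
    GError *gerr = NULL;
    gchar *dbg = NULL;
    gst_message_parse_error (msg, &gerr, &dbg);
    g_printerr ("Error: %s\n%s\n", gerr->message, dbg ? dbg : "");
    g_clear_error (&gerr);
    g_free (dbg);
  }
  if (msg)
    gst_message_unref (msg);
  gst_object_unref (bus);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}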

Thanks for your response. Here is my modified create_camera_source_bin():

static gboolean
create_camera_source_bin (NvDsSourceConfig * config, NvDsSrcBin * bin)
{
  GstCaps *caps = NULL, *caps1 = NULL, *convertCaps = NULL;
  gboolean ret = FALSE;

  switch (config->type) {
    case NV_DS_SOURCE_CAMERA_CSI:
      bin->src_elem =
          gst_element_factory_make (NVDS_ELEM_SRC_CAMERA_CSI, "src_elem");
      break;
    case NV_DS_SOURCE_CAMERA_V4L2:
      bin->src_elem =
          gst_element_factory_make (NVDS_ELEM_SRC_CAMERA_V4L2, "src_elem");
      bin->cap_filter1 =
          gst_element_factory_make (NVDS_ELEM_CAPS_FILTER, "src_cap_filter1");
      if (!bin->cap_filter1) {
        NVGSTDS_ERR_MSG_V ("Could not create 'src_cap_filter1'");
        goto done;
      }
      // Set caps for V4L2 source for Bayer format
      caps1 = gst_caps_new_simple ("video/x-bayer",
          "format", G_TYPE_STRING, "BA10",  // Adjust 'BA10' to match your Bayer format (e.g., BA12, BA16)
          "width", G_TYPE_INT, config->source_width, 
          "height", G_TYPE_INT, config->source_height, 
          "framerate", GST_TYPE_FRACTION, 
          config->source_fps_n, config->source_fps_d, NULL);
      break;
    default:
      NVGSTDS_ERR_MSG_V ("Unsupported source type");
      goto done;
  }

  if (!bin->src_elem) {
    NVGSTDS_ERR_MSG_V ("Could not create 'src_elem'");
    goto done;
  }

  bin->cap_filter =
      gst_element_factory_make (NVDS_ELEM_CAPS_FILTER, "src_cap_filter");
  if (!bin->cap_filter) {
    NVGSTDS_ERR_MSG_V ("Could not create 'src_cap_filter'");
    goto done;
  }

  if (config->video_format) {
    caps = gst_caps_new_simple ("video/x-raw", "format", G_TYPE_STRING,
            config->video_format, "width", G_TYPE_INT, config->source_width,
            "height", G_TYPE_INT, config->source_height, "framerate",
            GST_TYPE_FRACTION, config->source_fps_n, config->source_fps_d,
            NULL);
  } else {
    caps = gst_caps_new_simple ("video/x-raw", "format", G_TYPE_STRING, "NV12",
            "width", G_TYPE_INT, config->source_width, "height", G_TYPE_INT,
            config->source_height, "framerate", GST_TYPE_FRACTION,
            config->source_fps_n, config->source_fps_d, NULL);
  }

  if (config->type == NV_DS_SOURCE_CAMERA_CSI) {
    GstCapsFeatures *feature = NULL;
    feature = gst_caps_features_new ("memory:NVMM", NULL);
    gst_caps_set_features (caps, 0, feature);
  }

  struct cudaDeviceProp prop;
  cudaGetDeviceProperties (&prop, config->gpu_id);

  if (config->type == NV_DS_SOURCE_CAMERA_V4L2) {
    GstElement *nvvidconv2, *bayer2rgb;
    GstCapsFeatures *feature = NULL;
    GstElement *nvvidconv1 = NULL;
    
    // For dGPU, add a videoconvert stage; it must come after bayer2rgb,
    // since videoconvert cannot consume video/x-bayer
    if (!prop.integrated) {
      nvvidconv1 = gst_element_factory_make ("videoconvert", "nvvidconv1");
      if (!nvvidconv1) {
        NVGSTDS_ERR_MSG_V ("Failed to create 'nvvidconv1'");
        goto done;
      }
    }

    // Bayer to RGB conversion element
    bayer2rgb = gst_element_factory_make ("bayer2rgb", "bayer_convert");
    if (!bayer2rgb) {
      NVGSTDS_ERR_MSG_V ("Failed to create 'bayer2rgb'");
      goto done;
    }

    feature = gst_caps_features_new ("memory:NVMM", NULL);
    gst_caps_set_features (caps, 0, feature);
    g_object_set (G_OBJECT (bin->cap_filter), "caps", caps, NULL);
    g_object_set (G_OBJECT (bin->cap_filter1), "caps", caps1, NULL);

    nvvidconv2 = gst_element_factory_make (NVDS_ELEM_VIDEO_CONV, "nvvidconv2");
    if (!nvvidconv2) {
      NVGSTDS_ERR_MSG_V ("Failed to create 'nvvidconv2'");
      goto done;
    }

    g_object_set (G_OBJECT (nvvidconv2), "gpu-id", config->gpu_id,
        "nvbuf-memory-type", config->nvbuf_memory_type, NULL);

    if (!prop.integrated) {
      gst_bin_add_many (GST_BIN (bin->bin), bin->src_elem, bin->cap_filter1,
          nvvidconv1, bayer2rgb, nvvidconv2, bin->cap_filter, NULL);
    } else {
      gst_bin_add_many (GST_BIN (bin->bin), bin->src_elem, bin->cap_filter1,
          bayer2rgb, nvvidconv2, bin->cap_filter, NULL);
    }

    NVGSTDS_LINK_ELEMENT (bin->src_elem, bin->cap_filter1);
    NVGSTDS_LINK_ELEMENT (bin->cap_filter1, bayer2rgb);

    if (!prop.integrated) {
      NVGSTDS_LINK_ELEMENT (bayer2rgb, nvvidconv1);
      NVGSTDS_LINK_ELEMENT (nvvidconv1, nvvidconv2);
    } else {
      NVGSTDS_LINK_ELEMENT (bayer2rgb, nvvidconv2);
    }

    NVGSTDS_LINK_ELEMENT (nvvidconv2, bin->cap_filter);

    NVGSTDS_BIN_ADD_GHOST_PAD (bin->bin, bin->cap_filter, "src");

  } else {

    g_object_set (G_OBJECT (bin->cap_filter), "caps", caps, NULL);
    gst_bin_add_many (GST_BIN (bin->bin), bin->src_elem, bin->cap_filter, NULL);
    NVGSTDS_LINK_ELEMENT (bin->src_elem, bin->cap_filter);
    NVGSTDS_BIN_ADD_GHOST_PAD (bin->bin, bin->cap_filter, "src");
  }

  switch (config->type) {
    case NV_DS_SOURCE_CAMERA_CSI:
      if (!set_camera_csi_params (config, bin)) {
        NVGSTDS_ERR_MSG_V ("Could not set CSI camera properties");
        goto done;
      }
      break;
    case NV_DS_SOURCE_CAMERA_V4L2:
      if (!set_camera_v4l2_params (config, bin)) {
        NVGSTDS_ERR_MSG_V ("Could not set V4L2 camera properties");
        goto done;
      }
      break;
    default:
      NVGSTDS_ERR_MSG_V ("Unsupported source type");
      goto done;
  }

  ret = TRUE;

  GST_CAT_DEBUG (NVDS_APP, "Created camera source bin successfully");

done:
  if (caps)
    gst_caps_unref (caps);

  if (caps1)
    gst_caps_unref (caps1);

  if (convertCaps)
    gst_caps_unref (convertCaps);

  if (!ret) {
    NVGSTDS_ERR_MSG_V ("%s failed", __func__);
  }
  return ret;
}

Is this modification correct? Do let me know.

Please use the “gst-inspect-1.0 v4l2src” command to list all supported caps. I don’t think there is a “BA10” format in GStreamer. These are basic GStreamer programming skills; please google them yourself. The v4l2src and bayer2rgb plugins are from the GStreamer community; they have nothing to do with Jetson or DeepStream.
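For completeness, the same check can be done from code; this sketch just prints the v4l2src pad template caps (with the GStreamer 1.20 release shipped in JetPack 6.0, the video/x-bayer entry typically lists only the 8-bit bggr/gbrg/grbg/rggb formats, which is why a raw 10-bit mode such as BA10 will not negotiate):

/* Programmatic equivalent of "gst-inspect-1.0 v4l2src": dump the pad
 * template caps to see which formats the element can actually negotiate
 * (a sketch; the same approach works for "bayer2rgb").
 * Build: gcc list_caps.c -o list_caps $(pkg-config --cflags --libs gstreamer-1.0)
 */
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElementFactory *factory = gst_element_factory_find ("v4l2src");
  if (!factory) {
    g_printerr ("v4l2src factory not found\n");
    return -1;
  }

  const GList *l;
  for (l = gst_element_factory_get_static_pad_templates (factory);
       l != NULL; l = l->next) {
    GstStaticPadTemplate *tmpl = (GstStaticPadTemplate *) l->data;
    GstCaps *caps = gst_static_pad_template_get_caps (tmpl);
    gchar *s = gst_caps_to_string (caps);

    g_print ("%s pad template '%s':\n%s\n\n",
        tmpl->direction == GST_PAD_SRC ? "SRC" : "SINK",
        tmpl->name_template, s);

    g_free (s);
    gst_caps_unref (caps);
  }

  gst_object_unref (factory);
  return 0;
}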
