How to use Gst-nvmultiurisrcbin to replace the source in the deepstream-app example?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GeForce RTX 2060
• DeepStream Version
deepstream 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
the one bundled with the DeepStream 7.0 docker image
• NVIDIA GPU Driver Version (valid for GPU only)
535
• Issue Type( questions, new requirements, bugs)
questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

How to use Gst-nvmultiurisrcbin to replace the source in the deepstream-app example?

The deepstream-app code is well structured and flexible, and meets almost all of my needs, except that there is no mechanism to dynamically add and delete sources. I would therefore like to replace its source bin with Gst-nvmultiurisrcbin. The deepstream_reference_apps add_delete_source sample requires manual control in code, while nvmultiurisrcbin supports almost everything needed for source control.
How should I modify the deepstream-app example?
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

On DS 7.0, deepstream-test5, which is based on deepstream-app, already supports using nvmultiurisrcbin. Please refer to /opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/deepstream-test5/configs/test5_config_file_nvmultiurisrcbin_src_list_attr_all.txt
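For reference, the settings that enable nvmultiurisrcbin live in the [source-list] group of that file. An illustrative excerpt follows; the key names match the shipped config, but the values shown here are assumptions, so double-check against the actual file:

```
[source-list]
# Semicolon-separated initial URIs (values here are placeholders)
list=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4;file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
# Switch the app's source bin to nvmultiurisrcbin
use-nvmultiurisrcbin=1
# Upper bound on streams that can be attached at runtime
max-batch-size=10
# Address/port of the built-in REST server for stream add/remove
http-ip=localhost
http-port=9000
```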

@fanzh

So I tested test5, but I did not see the REST interface being used anywhere. Did I do something wrong?
My test results are as follows:



./deepstream-test5-app -c configs/test5_dec_infer-resnet_tracker_sgie_tiled_display_int8.yml
0:00:08.111230077 29623 0x5e378d3979e0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 5]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.0/samples/models/Secondary_VehicleMake/resnet18_vehiclemakenet.etlt_b16_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224       
1   OUTPUT kFLOAT predictions/Softmax 20x1x1          

0:00:08.283396402 29623 0x5e378d3979e0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 5]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.0/samples/models/Secondary_VehicleMake/resnet18_vehiclemakenet.etlt_b16_gpu0_int8.engine
0:00:08.289872059 29623 0x5e378d3979e0 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<secondary_gie_1> [UID 5]: Load new model:/opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app/config_infer_secondary_vehiclemake.yml sucessfully
0:00:17.360889886 29623 0x5e378d3979e0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 4]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.0/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt_b16_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224       
1   OUTPUT kFLOAT predictions/Softmax 6x1x1           

0:00:17.619269989 29623 0x5e378d3979e0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 4]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.0/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt_b16_gpu0_int8.engine
0:00:17.620855667 29623 0x5e378d3979e0 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<secondary_gie_0> [UID 4]: Load new model:/opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app/config_infer_secondary_vehicletypes.yml sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-7.0/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:27.604527387 29623 0x5e378d3979e0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.0/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b2_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 4x34x60         

0:00:27.850416036 29623 0x5e378d3979e0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.0/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b2_gpu0_int8.engine
0:00:27.853183732 29623 0x5e378d3979e0 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app/config_infer_primary.yml sucessfully

Runtime commands:
        h: Print this help
        q: Quit

        p: Pause
        r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.

Active sources : 0

**PERF:  FPS 0 (Avg)    FPS 1 (Avg)
Thu Jul 25 10:34:11 2024
**PERF:  0.00 (0.00)    0.00 (0.00)
** INFO: <bus_callback:291>: Pipeline ready

** INFO: <bus_callback:277>: Pipeline running

Please use test5_config_file_nvmultiurisrcbin_src_list_attr_all.txt, which includes "http-ip=localhost" and "http-port=9000" in the [source-list] group.

@fanzh

The REST server does work, but there is a bug in test5: after I add a new stream, clicking on its tile in the playback window has no effect (it should expand the tile to full screen), while clicking on the tiles of the two original streams still expands them normally.

curl -XPOST 'http://localhost:7000/api/v1/stream/add' -d '{
    "key": "sensor",
    "value": {
        "camera_id": "uniqueSensorID2",
        "camera_name": "front_door",
        "camera_url": "file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4",
        "change": "camera_add",
        "metadata": {
            "resolution": "1920 x1080",
            "codec": "h264",
            "framerate": 30
        }
    },
    "headers": {
        "source": "vst",
        "created_at": "2021-06-01T14:34:13.417Z"
    }
  }'

My service listens on port 7000 rather than the default 9000, hence the URL above.
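For completeness, the matching remove request can be sketched as follows. The /api/v1/stream/remove endpoint and the "camera_remove" change type are taken from the DeepStream REST API documentation as I recall it, so treat this as an assumption and verify against the docs; the curl line is commented out so the snippet runs without a live server:

```shell
#!/bin/sh
# Payload mirroring the add request above; only "change" differs, and
# camera_id must match the sensor that was added.
PAYLOAD='{
    "key": "sensor",
    "value": {
        "camera_id": "uniqueSensorID2",
        "camera_name": "front_door",
        "camera_url": "file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4",
        "change": "camera_remove"
    },
    "headers": {
        "source": "vst",
        "created_at": "2021-06-01T14:34:13.417Z"
    }
}'

# Sanity-check the JSON locally before posting it.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload ok"

# With the server from this thread listening on port 7000:
# curl -XPOST 'http://localhost:7000/api/v1/stream/remove' -d "$PAYLOAD"
```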

Thanks for sharing! We are checking and will get back to you.

Here is a fix for the newly added tile not switching to full screen.

  1. Add the lines marked "new code" in the bus_callback function of /opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/deepstream-app/deepstream_app.c:
         g_print("new stream added [%d:%s:%s]\n\n\n\n", sensorInfo.source_id, sensorInfo.sensor_id, sensorInfo.sensor_name);
         appCtx->config.num_source_sub_bins++;  // new code
     .....
         g_print("new stream removed [%d:%s]\n\n\n\n", sensorInfo.source_id, sensorInfo.sensor_id);
         appCtx->config.num_source_sub_bins--;  // new code
  2. Rebuild deepstream-test5.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.