NVIDIA Holoscan: Data Flow Tracking segfaults with the AJA input operator

Hello, I’m using the Data Flow Tracking utility to measure my pipeline’s end-to-end latency. When I use it with a video file input (the replayer operator), everything works fine, but when I apply the same method with the AJA capture card as the video source, Data Flow Tracking causes a segmentation fault. Do you have any idea how to fix this?
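For context, the app chooses its input operator from a command-line flag (I run it with -s aja, as shown in the AJA output below), and app.source holds that choice. A minimal sketch of the selection logic, with hypothetical names for the parser setup and app class:

import argparse

# Hypothetical sketch: the real script builds the full segmentation
# pipeline; only the source-selection flag matters here.
parser = argparse.ArgumentParser()
parser.add_argument(
    "-s", "--source",
    choices=["replayer", "aja"],
    default="replayer",
    help="input operator to use as the video source",
)
args = parser.parse_args()

app = ArthrosegmentationApp()  # hypothetical app class name
app.source = args.source       # read by the tracking code under "Source Code"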

Source Code

from holoscan.core import Tracker

if app.source == "aja":
    with Tracker(app, filename="AJA_arthrosegmentation.log") as tracker:
        app.config(config_file)
        app.run()

        # Number of paths between the root operators and the leaf operators
        num_paths = tracker.get_num_paths()

        # List of strings, one per path between the root and leaf operators
        path_strings = tracker.get_path_strings()

        # Metrics are queried per path; here I use the first path
        pathstring = path_strings[0]
        max_e2e_latency = tracker.get_metric(pathstring, Tracker.DataFlowMetric.kMaxE2ELatency)
        avg_e2e_latency = tracker.get_metric(pathstring, Tracker.DataFlowMetric.kAvgE2ELatency)
        min_e2e_latency = tracker.get_metric(pathstring, Tracker.DataFlowMetric.kMinE2ELatency)
        max_message_id = tracker.get_metric(pathstring, Tracker.DataFlowMetric.kMaxMessageID)
        min_message_id = tracker.get_metric(pathstring, Tracker.DataFlowMetric.kMinMessageID)

elif app.source == "replayer":
    with Tracker(app, filename="AJA_arthrosegmentation.log") as tracker:
        app.config(config_file)
        app.run()

        # Print a summary of the tracking results for every path
        tracker.print()

        # Same queries as in the AJA branch
        num_paths = tracker.get_num_paths()
        path_strings = tracker.get_path_strings()

        pathstring = path_strings[0]
        max_e2e_latency = tracker.get_metric(pathstring, Tracker.DataFlowMetric.kMaxE2ELatency)
        avg_e2e_latency = tracker.get_metric(pathstring, Tracker.DataFlowMetric.kAvgE2ELatency)
        min_e2e_latency = tracker.get_metric(pathstring, Tracker.DataFlowMetric.kMinE2ELatency)
        max_message_id = tracker.get_metric(pathstring, Tracker.DataFlowMetric.kMaxMessageID)
        min_message_id = tracker.get_metric(pathstring, Tracker.DataFlowMetric.kMinMessageID)
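When the run completes without crashing (the replayer case), I read the metrics for every tracked path rather than just the first one. A short sketch, reusing the same Tracker and DataFlowMetric names as in the snippet above:

# Sketch: query a few metrics for each path reported by the tracker
for pathstring in tracker.get_path_strings():
    print("Path:", pathstring)
    for metric in (
        Tracker.DataFlowMetric.kMinE2ELatency,
        Tracker.DataFlowMetric.kAvgE2ELatency,
        Tracker.DataFlowMetric.kMaxE2ELatency,
    ):
        print("  ", metric, tracker.get_metric(pathstring, metric))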

Replayer Source output

Data Flow Tracking Results:
Total paths: 2

Path 1: replayer,ImageProcessing,preprocessor,inference,postprocessor,PostImageProcessing,viz
Number of messages: 79
Min Latency Message No: 7
Min end-to-end Latency (ms): 51.934
Avg end-to-end Latency (ms): 55.978
Max Latency Message No: 55
Max end-to-end Latency (ms): 71.468

Path 2: replayer,viz
Number of messages: 79
Min Latency Message No: 7
Min end-to-end Latency (ms): 51.935
Avg end-to-end Latency (ms): 55.9781
Max Latency Message No: 55
Max end-to-end Latency (ms): 71.468

AJA Source output

root@ubuntu:/opt/nvidia/holoscan-sdk/examples/MyModel_laser_segmentation/python# sudo python3 AJA_arthrosegmentation_debugging.py -s aja
[info] [gxf_executor.cpp:210] Creating context
[info] [gxf_executor.cpp:1595] Loading extensions from configs...
[info] [gxf_executor.cpp:1741] Activating Graph...
[info] [resource_manager.cpp:79] ResourceManager cannot find Resource of type: nvidia::gxf::GPUDevice for entity [eid: 00002, name: __entity_2]
[info] [resource_manager.cpp:106] ResourceManager cannot find Resource of type: nvidia::gxf::GPUDevice for component [cid: 00003, name: cuda_stream]
[info] [resource.hpp:44] Resource [type: nvidia::gxf::GPUDevice] from component [cid: 3] cannot find its value from ResourceManager
[info] [gxf_executor.cpp:1771] Running Graph...
[info] [gxf_executor.cpp:1773] Waiting for completion...
[info] [gxf_executor.cpp:1774] Graph execution waiting. Fragment: AJA_arthrosegmentation
[info] [greedy_scheduler.cpp:190] Scheduling 9 entities
[info] [aja_source.cpp:371] AJA Source: Capturing from NTV2_CHANNEL1
[info] [aja_source.cpp:372] AJA Source: RDMA is disabled
[info] [aja_source.cpp:378] AJA Source: Overlay output is disabled
[info] [infer_utils.cpp:222] Input tensor names empty from Config. Creating from pre_processor map.
[info] [infer_utils.cpp:224] Input Tensor names: [source_video]
[info] [infer_utils.cpp:258] Output tensor names empty from Config. Creating from inference map.
[info] [infer_utils.cpp:260] Output Tensor names: [output]
[info] [inference.cpp:202] Inference Specifications created
[info] [core.cpp:46] TRT Inference: converting ONNX model at ../data/arthroscopic_segmentation/model/model_full_image_for_clahe_converted.onnx
[info] [utils.cpp:81] Cached engine found: ../data/arthroscopic_segmentation/model/model_full_image_for_clahe_converted.Orin.8.7.16.trt.8.2.3.0.engine.fp32
[info] [core.cpp:79] Loading Engine: ../data/arthroscopic_segmentation/model/model_full_image_for_clahe_converted.Orin.8.7.16.trt.8.2.3.0.engine.fp32
[info] [utils.hpp:44] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
[info] [core.cpp:122] Engine loaded: ../data/arthroscopic_segmentation/model/model_full_image_for_clahe_converted.Orin.8.7.16.trt.8.2.3.0.engine.fp32
[info] [infer_manager.cpp:343] HoloInfer buffer created for output
[info] [inference.cpp:213] Inference context setup complete
[info] [context.cpp:50] _______________
[info] [context.cpp:50] Vulkan Version:
[info] [context.cpp:50]  - available:  1.3.204
[info] [context.cpp:50]  - requesting: 1.2.0
[info] [context.cpp:50] ______________________
[info] [context.cpp:50] Used Instance Layers :
[info] [context.cpp:50] 
[info] [context.cpp:50] Used Instance Extensions :
[info] [context.cpp:50] VK_KHR_surface
[info] [context.cpp:50] VK_KHR_xcb_surface
[info] [context.cpp:50] VK_EXT_debug_utils
[info] [context.cpp:50] VK_KHR_external_memory_capabilities
[info] [context.cpp:50] ____________________
[info] [context.cpp:50] Compatible Devices :
[info] [context.cpp:50] 0: NVIDIA Tegra Orin (nvgpu)
[info] [context.cpp:50] Physical devices found : 
[info] [context.cpp:50] 1
[info] [context.cpp:50] ________________________
[info] [context.cpp:50] Used Device Extensions :
[info] [context.cpp:50] VK_KHR_swapchain
[info] [context.cpp:50] VK_KHR_external_memory
[info] [context.cpp:50] VK_KHR_external_memory_fd
[info] [context.cpp:50] VK_KHR_external_semaphore
[info] [context.cpp:50] VK_KHR_external_semaphore_fd
[info] [context.cpp:50] VK_KHR_push_descriptor
[info] [context.cpp:50] VK_EXT_line_rasterization
[info] [context.cpp:50] 
[info] [vulkan_app.cpp:777] Using device 0: NVIDIA Tegra Orin (nvgpu) (UUID b4ab2c49f44f5692b029c064fa653ac)
[warning] [operator.cpp:89] Not a root operator but still input input_message_labels is 0. Op: viz
[critical] [expected.hpp:829] Expected does not have a value. Check before accessing.
#01 /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libholoscan_core.so.0(+0x58d168) [0xffff7e5f8168]
#02 holoscan::AnnotatedDoubleBufferReceiver::receive_abi(long*) /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libholoscan_core.so.0(_ZN8holoscan29AnnotatedDoubleBufferReceiver11receive_abiEPl+0xc8) [0xffff7e5f8238]
#03 nvidia::gxf::Receiver::receive() /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libgxf_std.so(_ZN6nvidia3gxf8Receiver7receiveEv+0x24) [0xffff7df40ab4]
#04 holoscan::gxf::GXFInputContext::receive_impl(char const*, bool) /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libholoscan_core.so.0(_ZN8holoscan3gxf15GXFInputContext12receive_implEPKcb+0xb0) [0xffff7e5deac0]
#05 tl::expected<holoscan::gxf::Entity, holoscan::RuntimeError> holoscan::InputContext::receive<holoscan::gxf::Entity>(char const*) /usr/local/lib/python3.8/dist-packages/holoscan/operators/aja_source/../..//lib/libholoscan_op_aja.so.0(_ZN8holoscan12InputContext7receiveINS_3gxf6EntityEEEN2tl8expectedIT_NS_12RuntimeErrorEEEPKc+0x250) [0xffff9b5513b8]
#06 holoscan::ops::AJASourceOp::compute(holoscan::InputContext&, holoscan::OutputContext&, holoscan::ExecutionContext&) /usr/local/lib/python3.8/dist-packages/holoscan/operators/aja_source/../..//lib/libholoscan_op_aja.so.0(_ZN8holoscan3ops11AJASourceOp7computeERNS_12InputContextERNS_13OutputContextERNS_16ExecutionContextE+0x58) [0xffff9b53a920]
#07 holoscan::gxf::GXFWrapper::tick() /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libholoscan_core.so.0(_ZN8holoscan3gxf10GXFWrapper4tickEv+0x234) [0xffff7e5e4c44]
#08 nvidia::gxf::EntityExecutor::EntityItem::tickCodelet(nvidia::gxf::Handle<nvidia::gxf::Codelet> const&) /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libgxf_std.so(_ZN6nvidia3gxf14EntityExecutor10EntityItem11tickCodeletERKNS0_6HandleINS0_7CodeletEEE+0x498) [0xffff7df0b008]
#09 nvidia::gxf::EntityExecutor::EntityItem::tick(long, nvidia::gxf::Router*) /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libgxf_std.so(_ZN6nvidia3gxf14EntityExecutor10EntityItem4tickElPNS0_6RouterE+0x344) [0xffff7df0bf14]
#10 nvidia::gxf::EntityExecutor::EntityItem::execute(long, nvidia::gxf::Router*, long&) /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libgxf_std.so(_ZN6nvidia3gxf14EntityExecutor10EntityItem7executeElPNS0_6RouterERl+0x2c0) [0xffff7df0c750]
#11 nvidia::gxf::EntityExecutor::executeEntity(long, long) /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libgxf_std.so(_ZN6nvidia3gxf14EntityExecutor13executeEntityEll+0x364) [0xffff7df0cea4]
#12 /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libgxf_std.so(+0xee79c) [0xffff7de9179c]
#13 /lib/aarch64-linux-gnu/libstdc++.so.6(+0xccf9c) [0xffff800d5f9c]
#14 /lib/aarch64-linux-gnu/libpthread.so.0(+0x7624) [0xffffa42af624]
#15 /lib/aarch64-linux-gnu/libc.so.6(+0xd162c) [0xffffa43aa62c]
[ubuntu:1888522:0:1888663] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x8)
==== backtrace (tid:1888663) ====
 0  /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libucs.so.0(ucs_handle_error+0x2d4) [0xfffedf418ce4]
 1  /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libucs.so.0(+0x2ae74) [0xfffedf418e74]
 2  /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libucs.so.0(+0x2b21c) [0xfffedf41921c]
 3  linux-vdso.so.1(__kernel_rt_sigreturn+0) [0xffffa44a07c0]
 4  /lib/aarch64-linux-gnu/libnvinfer.so.8(+0xc89d10) [0xfffeb63d0d10]
 5  /lib/aarch64-linux-gnu/libnvinfer.so.8(+0x7cdbb8) [0xfffeb5f14bb8]
 6  /lib/aarch64-linux-gnu/libnvinfer.so.8(+0x121bdb8) [0xfffeb6962db8]
 7  /lib/aarch64-linux-gnu/libnvinfer.so.8(+0x6b6150) [0xfffeb5dfd150]
 8  /usr/local/lib/python3.8/dist-packages/holoscan/operators/inference/../..//lib/libholoscan_infer.so.0(_ZN8holoscan9inference8TrtInferD1Ev+0x358) [0xfffeec02b4d8]
 9  /usr/local/lib/python3.8/dist-packages/holoscan/operators/inference/../..//lib/libholoscan_infer.so.0(_ZN8holoscan9inference8TrtInferD0Ev+0x14) [0xfffeec02b534]
10  /usr/local/lib/python3.8/dist-packages/holoscan/operators/inference/../..//lib/libholoscan_infer.so.0(_ZN8holoscan9inference12ManagerInfer7cleanupEv+0x54) [0xfffeec05332c]
11  /usr/local/lib/python3.8/dist-packages/holoscan/operators/inference/../..//lib/libholoscan_infer.so.0(_ZN8holoscan9inference12ManagerInferD2Ev+0x20) [0xfffeec053478]
12  /usr/local/lib/python3.8/dist-packages/holoscan/operators/inference/../..//lib/libholoscan_infer.so.0(_ZNSt10unique_ptrIN8holoscan9inference12ManagerInferESt14default_deleteIS2_EED1Ev+0x1c) [0xfffeec05d99c]
13  /lib/aarch64-linux-gnu/libc.so.6(+0x3647c) [0xffffa430f47c]
14  /lib/aarch64-linux-gnu/libc.so.6(+0x3660c) [0xffffa430f60c]
15  /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libholoscan_core.so.0(_ZN8holoscan29AnnotatedDoubleBufferReceiver11receive_abiEPl+0) [0xffff7e5f8170]
16  /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libholoscan_core.so.0(_ZN8holoscan29AnnotatedDoubleBufferReceiver11receive_abiEPl+0xc8) [0xffff7e5f8238]
17  /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libgxf_std.so(_ZN6nvidia3gxf8Receiver7receiveEv+0x24) [0xffff7df40ab4]
18  /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libholoscan_core.so.0(_ZN8holoscan3gxf15GXFInputContext12receive_implEPKcb+0xb0) [0xffff7e5deac0]
19  /usr/local/lib/python3.8/dist-packages/holoscan/operators/aja_source/../..//lib/libholoscan_op_aja.so.0(_ZN8holoscan12InputContext7receiveINS_3gxf6EntityEEEN2tl8expectedIT_NS_12RuntimeErrorEEEPKc+0x250) [0xffff9b5513b8]
20  /usr/local/lib/python3.8/dist-packages/holoscan/operators/aja_source/../..//lib/libholoscan_op_aja.so.0(_ZN8holoscan3ops11AJASourceOp7computeERNS_12InputContextERNS_13OutputContextERNS_16ExecutionContextE+0x58) [0xffff9b53a920]
21  /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libholoscan_core.so.0(_ZN8holoscan3gxf10GXFWrapper4tickEv+0x234) [0xffff7e5e4c44]
22  /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libgxf_std.so(_ZN6nvidia3gxf14EntityExecutor10EntityItem11tickCodeletERKNS0_6HandleINS0_7CodeletEEE+0x498) [0xffff7df0b008]
23  /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libgxf_std.so(_ZN6nvidia3gxf14EntityExecutor10EntityItem4tickElPNS0_6RouterE+0x344) [0xffff7df0bf14]
24  /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libgxf_std.so(_ZN6nvidia3gxf14EntityExecutor10EntityItem7executeElPNS0_6RouterERl+0x2c0) [0xffff7df0c750]
25  /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libgxf_std.so(_ZN6nvidia3gxf14EntityExecutor13executeEntityEll+0x364) [0xffff7df0cea4]
26  /usr/local/lib/python3.8/dist-packages/holoscan/graphs/..//lib/libgxf_std.so(+0xee79c) [0xffff7de9179c]
27  /lib/aarch64-linux-gnu/libstdc++.so.6(+0xccf9c) [0xffff800d5f9c]
28  /lib/aarch64-linux-gnu/libpthread.so.0(+0x7624) [0xffffa42af624]
29  /lib/aarch64-linux-gnu/libc.so.6(+0xd162c) [0xffffa43aa62c]
=================================
Segmentation fault