How can I make a pipeline that includes a dewarper and pose estimation?

Dear NVIDIA Developers,
I want to construct a pipeline that receives a stream from a fisheye camera, dewarps it, and runs a pose estimation model.
To achieve this, I referred to these sample apps:

Based on them, I built the following pipeline:
source_bin->nvvidconvert1->capsfilter->nvdewarper->streammux->pgie->nvvideoconvert2->nvosd->transform->sink

Here, nvdewarper is configured as below:
g_object_set(G_OBJECT(nvdewarper), "config-file", "./config_dewarp.txt", NULL);
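For reference, a minimal single-surface config_dewarp.txt might look like the sketch below. The group and key names follow the config_dewarper.txt sample shipped with DeepStream, but every numeric value here is a placeholder that must be tuned for your lens:

```ini
# Hypothetical dewarper config sketch; all values are placeholders.
[property]
output-width=960
output-height=752
num-batch-buffers=1

[surface0]
# projection-type: 1=PushBroom, 2=VertRadCyl (per the DeepStream dewarper sample)
projection-type=1
surface-index=0
width=960
height=752
top-angle=30
bottom-angle=-30
pitch=90
yaw=0
roll=0
focal-length=437
```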

And pgie is configured as below:
g_object_set(G_OBJECT(pgie), "output-tensor-meta", TRUE, "config-file-path", "deepstream_pose_estimation_config.txt", NULL);

However, the keypoints and skeletons are not displayed, even though I confirmed that the pose information is present in the metadata by checking it in pgie_src_pad_buffer_probe().
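Note that in the referenced pose-estimation sample, the keypoints and skeletons only appear because the app itself converts the parsed tensor output into NvDsDisplayMeta circles and lines inside an OSD sink-pad probe; nvosd does not draw them automatically. A rough sketch of that drawing step, assuming a hypothetical `peaks` array already filled by your tensor-parsing code, could look like:

```c
/* Sketch only: requires the DeepStream SDK headers and an existing parse
 * step that fills `peaks` with keypoint (x, y) pairs. Attach this probe
 * to the sink pad of nvosd. */
#include "gstnvdsmeta.h"
#include "nvdsmeta.h"

/* Hypothetical keypoints parsed earlier from the pgie tensor meta. */
extern float peaks[][2];
extern int num_peaks;

static GstPadProbeReturn
osd_sink_pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  for (NvDsMetaList *l = batch_meta->frame_meta_list; l; l = l->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l->data;
    NvDsDisplayMeta *dmeta = nvds_acquire_display_meta_from_pool (batch_meta);

    /* Draw one circle per keypoint; a display meta holds at most
     * MAX_ELEMENTS_IN_DISPLAY_META elements. */
    for (int i = 0;
         i < num_peaks && dmeta->num_circles < MAX_ELEMENTS_IN_DISPLAY_META;
         i++) {
      NvOSD_CircleParams *cp = &dmeta->circle_params[dmeta->num_circles++];
      cp->xc = peaks[i][0];
      cp->yc = peaks[i][1];
      cp->radius = 8;
      cp->circle_color = (NvOSD_ColorParams){1.0, 0.0, 0.0, 1.0};
    }
    nvds_add_display_meta_to_frame (frame_meta, dmeta);
  }
  return GST_PAD_PROBE_OK;
}
```

Skeleton edges would go into dmeta->line_params / dmeta->num_lines in the same way; the sample app's own OSD probe is the authoritative version of this logic.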

My setup:
• Hardware Platform (Jetson / GPU): Jetson Xavier NX
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5.1
• TensorRT Version: 7.1.3.0
• Issue Type: question

Kind regards

Please refer to deepstream_pose_estimation/deepstream_pose_estimation_app.cpp at master · NVIDIA-AI-IOT/deepstream_pose_estimation · GitHub

Whether the keypoints and skeletons can be displayed should have nothing to do with the video source.