Deepstream_pose_estimation Error

Hi all,
I’m trying out deepstream_pose_estimation.

The batch-size parameter defaults to 1. Why can't batch-size be changed to 2?
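
For context, this is roughly the [property] section I'm using (a minimal sketch, assuming the standard nvinfer config layout; apart from batch-size and the engine path shown in the log below, the other keys are just my local values):

```
[property]
gpu-id=0
# Engine that was previously serialized with max batch size 1
model-engine-file=/home/alex/nvidia_deeplearning/poseCali/data/trtPose.engine
# Changing this from the default 1 to 2 triggers the rebuild attempt in the log below
batch-size=2
# FP16 inference
network-mode=2
gie-unique-id=1
```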

The error is as follows:
[host] NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1835> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
[host] NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2014> [UID = 1]: deserialized backend context :/home/alex/nvidia_deeplearning/poseCali/data/trtPose.engine failed to match config params, trying rebuild
[host] NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1916> [UID = 1]: Trying to create engine from model files
ERROR: …/src/nvdsinfer_model_builder.cpp:861 failed to build network since there is no model file matched.
ERROR: …/src/nvdsinfer_model_builder.cpp:799 failed to build network.