Hi,
I am running the Packet Pacing code example on two servers equipped with ConnectX-5 adapter cards connected back-to-back. The OFED version is v5.8-1.0.1.1 LTS.
I modify the QP rate limit and then use the ibv_query_qp function to read the modified rate limit back. Unfortunately, no matter what rate I set, I always get the same result: 124.
Why is that? What should I do to get the real QP rate limit?
The IBTA specification defines rate limiting per QP. You can use the Verbs APIs below to implement it:
#include <infiniband/verbs.h>

int ibv_modify_qp(struct ibv_qp *qp, struct ibv_qp_attr *attr,
                  int attr_mask);

int ibv_modify_qp_rate_limit(struct ibv_qp *qp,
                             struct ibv_qp_rate_limit_attr *attr);
https://man7.org/linux/man-pages/man3/ibv_modify_qp.3.html
https://man7.org/linux/man-pages/man3/ibv_modify_qp_rate_limit.3.html
I'm sorry, I didn't express my question clearly.
I successfully limited the rate of the QP with the ibv_modify_qp_rate_limit API that you pointed out. As the next step, I would like to read back the rate after the QP has been limited. I tried this, as follows:

Finally, the result I get is 0, even though I set rate_limit to 10000 Kbps. Whatever I set rate_limit to, the printed result is always 0. The other QP attributes that are printed are correct.
I wonder why that is, and how I can print the real rate_limit attribute of the QP.
The modify-QP and query-QP APIs define different qp_attr contents and masks. rate_limit is not supported among the query-QP attributes:
https://man7.org/linux/man-pages/man3/ibv_query_qp.3.html
struct ibv_qp_attr {
        enum ibv_qp_state  qp_state;            /* Current QP state */
        enum ibv_qp_state  cur_qp_state;        /* Current QP state - irrelevant for ibv_query_qp */
        enum ibv_mtu       path_mtu;            /* Path MTU (valid only for RC/UC QPs) */
        enum ibv_mig_state path_mig_state;      /* Path migration state (valid if HCA supports APM) */
        uint32_t           qkey;                /* Q_Key of the QP (valid only for UD QPs) */
        uint32_t           rq_psn;              /* PSN for receive queue (valid only for RC/UC QPs) */
        uint32_t           sq_psn;              /* PSN for send queue */
        uint32_t           dest_qp_num;         /* Destination QP number (valid only for RC/UC QPs) */
        unsigned int       qp_access_flags;     /* Mask of enabled remote access operations (valid only for RC/UC QPs) */
        struct ibv_qp_cap  cap;                 /* QP capabilities */
        struct ibv_ah_attr ah_attr;             /* Primary path address vector (valid only for RC/UC QPs) */
        struct ibv_ah_attr alt_ah_attr;         /* Alternate path address vector (valid only for RC/UC QPs) */
        uint16_t           pkey_index;          /* Primary P_Key index */
        uint16_t           alt_pkey_index;      /* Alternate P_Key index */
        uint8_t            en_sqd_async_notify; /* Enable SQD.drained async notification - irrelevant for ibv_query_qp */
        uint8_t            sq_draining;         /* Is the QP draining? (Valid only if qp_state is SQD) */
        uint8_t            max_rd_atomic;       /* Number of outstanding RDMA reads & atomic operations on the destination QP (valid only for RC QPs) */
        uint8_t            max_dest_rd_atomic;  /* Number of responder resources for handling incoming RDMA reads & atomic operations (valid only for RC QPs) */
        uint8_t            min_rnr_timer;       /* Minimum RNR NAK timer (valid only for RC QPs) */
        uint8_t            port_num;            /* Primary port number */
        uint8_t            timeout;             /* Local ack timeout for primary path (valid only for RC QPs) */
        uint8_t            retry_cnt;           /* Retry count (valid only for RC QPs) */
        uint8_t            rnr_retry;           /* RNR retry (valid only for RC QPs) */
        uint8_t            alt_port_num;        /* Alternate port number */
        uint8_t            alt_timeout;         /* Local ack timeout for alternate path (valid only for RC QPs) */
};
I think no API can "print the real rate_limit attribute of QP".
As an alternative, you can set the QP's SL, then read the per-SL transmit counters (XmtDataSLx) with a MAD command, and calculate the rate from them.
root@mtbc-r740-06:~# perfquery 7 1 -X
PortXmitDataSL counters: Lid 7 port 1
PortSelect:......................1
CounterSelect:...................0x0000
XmtDataSL0:......................528247
XmtDataSL1:......................0
XmtDataSL2:......................0
XmtDataSL3:......................0
XmtDataSL4:......................0
XmtDataSL5:......................0
XmtDataSL6:......................0
XmtDataSL7:......................0
XmtDataSL8:......................0
XmtDataSL9:......................0
XmtDataSL10:.....................0
XmtDataSL11:.....................0
XmtDataSL12:.....................0
XmtDataSL13:.....................0
XmtDataSL14:.....................0
XmtDataSL15:.....................0