From patchwork Thu Aug 20 14:28:29 2020
X-Patchwork-Id: 75861
From: Ophir Munk
To: dev@dpdk.org
Cc: Raslan Darawsheh, Ophir Munk, Matan Azrad
Date: Thu, 20 Aug 2020 14:28:29 +0000
Message-Id: <20200820142834.2984-8-ophirmu@mellanox.com>
In-Reply-To: <20200820142834.2984-1-ophirmu@mellanox.com>
Subject: [dpdk-dev] [PATCH v1 08/13] net/mlx5: call meter detach only if DR is supported

Flow metering is supported only with direct rules (DR). Currently the
meter-action create and modify APIs are compiled under
#ifdef HAVE_MLX5_DR_CREATE_ACTION_FLOW_METER, while detaching the meter
action is executed unconditionally. This commit adds the same ifdef to
mlx5_flow_meter_detach(), avoiding compilation failures on non-Linux
operating systems, which do not support DR.
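For reference, the resulting shape of the function is sketched below
(abbreviated from the diff that follows; the elided lines are the
existing DR resource release):

    void
    mlx5_flow_meter_detach(struct mlx5_flow_meter *fm)
    {
    #ifdef HAVE_MLX5_DR_CREATE_ACTION_FLOW_METER
    	MLX5_ASSERT(fm->ref_cnt);
    	if (--fm->ref_cnt)
    		return;
    	/* ... existing release of the meter flow table and flags ... */
    #else
    	(void)fm; /* Keeps the parameter "used" when DR is absent. */
    #endif
    }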
Signed-off-by: Ophir Munk
Acked-by: Matan Azrad
---
 drivers/net/mlx5/mlx5_flow_meter.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index bf34687..b36bc7b 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -1221,6 +1221,7 @@ mlx5_flow_meter_attach(struct mlx5_priv *priv, uint32_t meter_id,
 void
 mlx5_flow_meter_detach(struct mlx5_flow_meter *fm)
 {
+#ifdef HAVE_MLX5_DR_CREATE_ACTION_FLOW_METER
 	MLX5_ASSERT(fm->ref_cnt);
 	if (--fm->ref_cnt)
 		return;
@@ -1230,6 +1231,9 @@ mlx5_flow_meter_detach(struct mlx5_flow_meter *fm)
 	fm->ingress = 0;
 	fm->egress = 0;
 	fm->transfer = 0;
+#else
+	(void)fm;
+#endif
 }

 /**

From patchwork Thu Aug 20 14:28:31 2020
X-Patchwork-Id: 75863
From: Ophir Munk
To: dev@dpdk.org
Cc: Raslan Darawsheh, Ophir Munk, Matan Azrad
Date: Thu, 20 Aug 2020 14:28:31 +0000
Message-Id: <20200820142834.2984-10-ophirmu@mellanox.com>
In-Reply-To: <20200820142834.2984-1-ophirmu@mellanox.com>
Subject: [dpdk-dev] [PATCH v1 10/13] net/mlx5: remove more DV dependencies

Several DV-based structs of type 'struct mlx5dv_devx_XXX' are replaced
with 'void *' to enable compilation under non-Linux operating systems.
New getter functions are added to retrieve the specific fields that were
previously accessed directly (a short usage sketch follows the diff
below).
Replaced structs:
'struct mlx5dv_pp *'
'struct mlx5dv_devx_event_channel *'
'struct mlx5dv_devx_umem *'
'struct mlx5dv_devx_uar *'

Signed-off-by: Ophir Munk
Acked-by: Matan Azrad
---
 drivers/common/mlx5/linux/mlx5_common_os.h | 91 ++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5.c                    | 14 +++--
 drivers/net/mlx5/mlx5.h                    | 12 ++--
 drivers/net/mlx5/mlx5_rxtx.h               | 10 ++--
 drivers/net/mlx5/mlx5_txpp.c               | 38 +++++++------
 drivers/net/mlx5/mlx5_txq.c                | 17 +++---
 6 files changed, 144 insertions(+), 38 deletions(-)

diff --git a/drivers/common/mlx5/linux/mlx5_common_os.h b/drivers/common/mlx5/linux/mlx5_common_os.h
index 55c0902..8301d90 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.h
+++ b/drivers/common/mlx5/linux/mlx5_common_os.h
@@ -90,4 +90,95 @@ mlx5_os_get_umem_id(void *umem)
 		return 0;
 	return ((struct mlx5dv_devx_umem *)umem)->umem_id;
 }
+
+/**
+ * Get fd. Given a pointer to DevX channel object of type
+ * 'struct mlx5dv_devx_event_channel*' - return its fd.
+ *
+ * @param[in] channel
+ *   Pointer to channel object.
+ *
+ * @return
+ *   The fd if channel is valid, 0 otherwise.
+ */
+static inline int
+mlx5_os_get_devx_channel_fd(void *channel)
+{
+	if (!channel)
+		return 0;
+	return ((struct mlx5dv_devx_event_channel *)channel)->fd;
+}
+
+/**
+ * Get mmap offset. Given a pointer to an DevX UAR object of type
+ * 'struct mlx5dv_devx_uar *' - return its mmap offset.
+ *
+ * @param[in] uar
+ *   Pointer to UAR object.
+ *
+ * @return
+ *   The mmap offset if uar is valid, 0 otherwise.
+ */
+static inline off_t
+mlx5_os_get_devx_uar_mmap_offset(void *uar)
+{
+	if (!uar)
+		return 0;
+	return ((struct mlx5dv_devx_uar *)uar)->mmap_off;
+}
+
+/**
+ * Get base addr pointer. Given a pointer to an UAR object of type
+ * 'struct mlx5dv_devx_uar *' - return its base address.
+ *
+ * @param[in] uar
+ *   Pointer to an UAR object.
+ *
+ * @return
+ *   The base address if UAR is valid, 0 otherwise.
+ */
+static inline void *
+mlx5_os_get_devx_uar_base_addr(void *uar)
+{
+	if (!uar)
+		return 0;
+	return ((struct mlx5dv_devx_uar *)uar)->base_addr;
+}
+
+/**
+ * Get reg addr pointer. Given a pointer to an UAR object of type
+ * 'struct mlx5dv_devx_uar *' - return its reg address.
+ *
+ * @param[in] uar
+ *   Pointer to an UAR object.
+ *
+ * @return
+ *   The reg address if UAR is valid, 0 otherwise.
+ */
+static inline void *
+mlx5_os_get_devx_uar_reg_addr(void *uar)
+{
+	if (!uar)
+		return 0;
+	return ((struct mlx5dv_devx_uar *)uar)->reg_addr;
+}
+
+/**
+ * Get page id. Given a pointer to an UAR object of type
+ * 'struct mlx5dv_devx_uar *' - return its page id.
+ *
+ * @param[in] uar
+ *   Pointer to an UAR object.
+ *
+ * @return
+ *   The page id if UAR is valid, 0 otherwise.
+ */
+static inline uint32_t
+mlx5_os_get_devx_uar_page_id(void *uar)
+{
+	if (!uar)
+		return 0;
+	return ((struct mlx5dv_devx_uar *)uar)->page_id;
+}
+
 #endif /* RTE_PMD_MLX5_COMMON_OS_H_ */
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index fdda6ff..4a807fb 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -723,6 +723,7 @@ mlx5_alloc_rxtx_uars(struct mlx5_dev_ctx_shared *sh,
 {
 	uint32_t uar_mapping, retry;
 	int err = 0;
+	void *base_addr;

 	for (retry = 0; retry < MLX5_ALLOC_UAR_RETRY; ++retry) {
 #ifdef MLX5DV_UAR_ALLOC_TYPE_NC
@@ -781,7 +782,8 @@ mlx5_alloc_rxtx_uars(struct mlx5_dev_ctx_shared *sh,
 			err = ENOMEM;
 			goto exit;
 		}
-		if (sh->tx_uar->base_addr)
+		base_addr = mlx5_os_get_devx_uar_base_addr(sh->tx_uar);
+		if (base_addr)
 			break;
 		/*
 		 * The UARs are allocated by rdma_core within the
@@ -820,7 +822,8 @@ mlx5_alloc_rxtx_uars(struct mlx5_dev_ctx_shared *sh,
 			err = ENOMEM;
 			goto exit;
 		}
-		if (sh->devx_rx_uar->base_addr)
+		base_addr = mlx5_os_get_devx_uar_base_addr(sh->devx_rx_uar);
+		if (base_addr)
 			break;
 		/*
 		 * The UARs are allocated by rdma_core within the
@@ -943,8 +946,11 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn,
 		err = mlx5_alloc_rxtx_uars(sh, config);
 		if (err)
 			goto error;
-		MLX5_ASSERT(sh->tx_uar && sh->tx_uar->base_addr);
-		MLX5_ASSERT(sh->devx_rx_uar && sh->devx_rx_uar->base_addr);
+		MLX5_ASSERT(sh->tx_uar);
+		MLX5_ASSERT(mlx5_os_get_devx_uar_base_addr(sh->tx_uar));
+
+		MLX5_ASSERT(sh->devx_rx_uar);
+		MLX5_ASSERT(mlx5_os_get_devx_uar_base_addr(sh->devx_rx_uar));
 	}
 	sh->flow_id_pool = mlx5_flow_id_pool_alloc
 					((1 << HAIRPIN_FLOW_ID_BITS) - 1);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index a45bd0b..34d7a15 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -527,7 +527,7 @@ struct mlx5_flow_id_pool {
 struct mlx5_txpp_wq {
 	/* Completion Queue related data.*/
 	struct mlx5_devx_obj *cq;
-	struct mlx5dv_devx_umem *cq_umem;
+	void *cq_umem;
 	union {
 		volatile void *cq_buf;
 		volatile struct mlx5_cqe *cqes;
@@ -537,7 +537,7 @@ struct mlx5_txpp_wq {
 	uint32_t arm_sn:2;
 	/* Send Queue related data.*/
 	struct mlx5_devx_obj *sq;
-	struct mlx5dv_devx_umem *sq_umem;
+	void *sq_umem;
 	union {
 		volatile void *sq_buf;
 		volatile struct mlx5_wqe *wqes;
@@ -563,10 +563,10 @@ struct mlx5_dev_txpp {
 	int32_t skew; /* Scheduling skew. */
 	uint32_t eqn; /* Event Queue number. */
 	struct rte_intr_handle intr_handle; /* Periodic interrupt. */
-	struct mlx5dv_devx_event_channel *echan; /* Event Channel. */
+	void *echan; /* Event Channel. */
 	struct mlx5_txpp_wq clock_queue; /* Clock Queue. */
 	struct mlx5_txpp_wq rearm_queue; /* Clock Queue. */
-	struct mlx5dv_pp *pp; /* Packet pacing context. */
+	void *pp; /* Packet pacing context. */
 	uint16_t pp_id; /* Packet pacing context index. */
 	uint16_t ts_n; /* Number of captured timestamps. */
 	uint16_t ts_p; /* Pointer to statisticks timestamp. */
@@ -653,10 +653,10 @@ struct mlx5_dev_ctx_shared {
 	struct mlx5_devx_obj *tis; /* TIS object. */
 	struct mlx5_devx_obj *td; /* Transport domain. */
 	struct mlx5_flow_id_pool *flow_id_pool; /* Flow ID pool. */
-	struct mlx5dv_devx_uar *tx_uar; /* Tx/packer pacing shared UAR. */
+	void *tx_uar; /* Tx/packet pacing shared UAR. */
 	struct mlx5_flex_parser_profiles fp[MLX5_FLEX_PARSER_MAX];
 	/* Flex parser profiles information. */
-	struct mlx5dv_devx_uar *devx_rx_uar; /* DevX UAR for Rx. */
+	void *devx_rx_uar; /* DevX UAR for Rx. */
 	struct mlx5_dev_shared_port port[]; /* per device port data array. */
 };
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index c02a007..0fc7754 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -185,7 +185,7 @@ struct mlx5_rxq_obj {
 		struct {
 			struct mlx5_devx_obj *rq; /* DevX Rx Queue object. */
 			struct mlx5_devx_obj *devx_cq; /* DevX CQ object. */
-			struct mlx5dv_devx_event_channel *devx_channel;
+			void *devx_channel;
 		};
 	};
 };
@@ -212,8 +212,8 @@ struct mlx5_rxq_ctrl {
 	uint32_t cq_dbr_umem_id;
 	uint64_t cq_dbr_offset;
 	/* Storing CQ door-bell information, needed when freeing door-bell. */
-	struct mlx5dv_devx_umem *wq_umem; /* WQ buffer registration info. */
-	struct mlx5dv_devx_umem *cq_umem; /* CQ buffer registration info. */
+	void *wq_umem; /* WQ buffer registration info. */
+	void *cq_umem; /* CQ buffer registration info. */
 	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
 };
@@ -361,12 +361,12 @@ struct mlx5_txq_obj {
 		struct {
 			struct rte_eth_dev *dev;
 			struct mlx5_devx_obj *cq_devx;
-			struct mlx5dv_devx_umem *cq_umem;
+			void *cq_umem;
 			void *cq_buf;
 			int64_t cq_dbrec_offset;
 			struct mlx5_devx_dbr_page *cq_dbrec_page;
 			struct mlx5_devx_obj *sq_devx;
-			struct mlx5dv_devx_umem *sq_umem;
+			void *sq_umem;
 			void *sq_buf;
 			int64_t sq_dbrec_offset;
 			struct mlx5_devx_dbr_page *sq_dbrec_page;
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 14d4a66..5aa73dd 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -113,13 +113,13 @@ mlx5_txpp_alloc_pp_index(struct mlx5_dev_ctx_shared *sh)
 		rte_errno = errno;
 		return -errno;
 	}
-	if (!sh->txpp.pp->index) {
+	if (!(((struct mlx5dv_pp *)(sh->txpp.pp))->index)) {
 		DRV_LOG(ERR, "Zero packet pacing index allocated.");
 		mlx5_txpp_free_pp_index(sh);
 		rte_errno = ENOTSUP;
 		return -ENOTSUP;
 	}
-	sh->txpp.pp_id = sh->txpp.pp->index;
+	sh->txpp.pp_id = ((struct mlx5dv_pp *)(sh->txpp.pp))->index;
 	return 0;
 #else
 	RTE_SET_USED(sh);
@@ -175,6 +175,7 @@ mlx5_txpp_doorbell_rearm_queue(struct mlx5_dev_ctx_shared *sh, uint16_t ci)
 		uint32_t w32[2];
 		uint64_t w64;
 	} cs;
+	void *reg_addr;

 	wq->sq_ci = ci + 1;
 	cs.w32[0] = rte_cpu_to_be_32(rte_be_to_cpu_32
@@ -186,7 +187,8 @@ mlx5_txpp_doorbell_rearm_queue(struct mlx5_dev_ctx_shared *sh, uint16_t ci)
 	/* Make sure the doorbell record is updated. */
 	rte_wmb();
 	/* Write to doorbel register to start processing. */
-	__mlx5_uar_write64_relaxed(cs.w64, sh->tx_uar->reg_addr, NULL);
+	reg_addr = mlx5_os_get_devx_uar_reg_addr(sh->tx_uar);
+	__mlx5_uar_write64_relaxed(cs.w64, reg_addr, NULL);
 	rte_wmb();
 }
@@ -282,7 +284,7 @@ mlx5_txpp_create_rearm_queue(struct mlx5_dev_ctx_shared *sh)
 	/* Create completion queue object for Rearm Queue. */
 	cq_attr.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
			   MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
-	cq_attr.uar_page_id = sh->tx_uar->page_id;
+	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
 	cq_attr.eqn = sh->txpp.eqn;
 	cq_attr.q_umem_valid = 1;
 	cq_attr.q_umem_offset = 0;
@@ -335,7 +337,7 @@ mlx5_txpp_create_rearm_queue(struct mlx5_dev_ctx_shared *sh)
 	sq_attr.tis_num = sh->tis->id;
 	sq_attr.cqn = wq->cq->id;
 	sq_attr.cd_master = 1;
-	sq_attr.wq_attr.uar_page = sh->tx_uar->page_id;
+	sq_attr.wq_attr.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
 	sq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
 	sq_attr.wq_attr.pd = sh->pdn;
 	sq_attr.wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
@@ -522,14 +524,14 @@ mlx5_txpp_create_clock_queue(struct mlx5_dev_ctx_shared *sh)
			   MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
 	cq_attr.use_first_only = 1;
 	cq_attr.overrun_ignore = 1;
-	cq_attr.uar_page_id = sh->tx_uar->page_id;
+	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
 	cq_attr.eqn = sh->txpp.eqn;
 	cq_attr.q_umem_valid = 1;
 	cq_attr.q_umem_offset = 0;
-	cq_attr.q_umem_id = wq->cq_umem->umem_id;
+	cq_attr.q_umem_id = mlx5_os_get_umem_id(wq->cq_umem);
 	cq_attr.db_umem_valid = 1;
 	cq_attr.db_umem_offset = umem_dbrec;
-	cq_attr.db_umem_id = wq->cq_umem->umem_id;
+	cq_attr.db_umem_id = mlx5_os_get_umem_id(wq->cq_umem);
 	cq_attr.log_cq_size = rte_log2_u32(MLX5_TXPP_CLKQ_SIZE);
 	cq_attr.log_page_size = rte_log2_u32(page_size);
 	wq->cq = mlx5_devx_cmd_create_cq(sh->ctx, &cq_attr);
@@ -587,16 +589,16 @@ mlx5_txpp_create_clock_queue(struct mlx5_dev_ctx_shared *sh)
 	sq_attr.cqn = wq->cq->id;
 	sq_attr.packet_pacing_rate_limit_index = sh->txpp.pp_id;
 	sq_attr.wq_attr.cd_slave = 1;
-	sq_attr.wq_attr.uar_page = sh->tx_uar->page_id;
+	sq_attr.wq_attr.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
 	sq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
 	sq_attr.wq_attr.pd = sh->pdn;
 	sq_attr.wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
 	sq_attr.wq_attr.log_wq_sz = rte_log2_u32(wq->sq_size);
 	sq_attr.wq_attr.dbr_umem_valid = 1;
 	sq_attr.wq_attr.dbr_addr = umem_dbrec;
-	sq_attr.wq_attr.dbr_umem_id = wq->sq_umem->umem_id;
+	sq_attr.wq_attr.dbr_umem_id = mlx5_os_get_umem_id(wq->sq_umem);
 	sq_attr.wq_attr.wq_umem_valid = 1;
-	sq_attr.wq_attr.wq_umem_id = wq->sq_umem->umem_id;
+	sq_attr.wq_attr.wq_umem_id = mlx5_os_get_umem_id(wq->sq_umem);
 	/* umem_offset must be zero for static_sq_wq queue. */
 	sq_attr.wq_attr.wq_umem_offset = 0;
 	wq->sq = mlx5_devx_cmd_create_sq(sh->ctx, &sq_attr);
@@ -630,11 +632,14 @@ mlx5_txpp_create_clock_queue(struct mlx5_dev_ctx_shared *sh)
 static inline void
 mlx5_txpp_cq_arm(struct mlx5_dev_ctx_shared *sh)
 {
+	void *base_addr;
+
 	struct mlx5_txpp_wq *aq = &sh->txpp.rearm_queue;
 	uint32_t arm_sn = aq->arm_sn << MLX5_CQ_SQN_OFFSET;
 	uint32_t db_hi = arm_sn | MLX5_CQ_DBR_CMD_ALL | aq->cq_ci;
 	uint64_t db_be = rte_cpu_to_be_64(((uint64_t)db_hi << 32) | aq->cq->id);
-	uint32_t *addr = RTE_PTR_ADD(sh->tx_uar->base_addr, MLX5_CQ_DOORBELL);
+	base_addr = mlx5_os_get_devx_uar_base_addr(sh->tx_uar);
+	uint32_t *addr = RTE_PTR_ADD(base_addr, MLX5_CQ_DOORBELL);

 	rte_compiler_barrier();
 	aq->cq_dbrec[MLX5_CQ_ARM_DB] = rte_cpu_to_be_32(db_hi);
@@ -881,8 +886,8 @@ static int
 mlx5_txpp_start_service(struct mlx5_dev_ctx_shared *sh)
 {
 	uint16_t event_nums[1] = {0};
-	int flags;
 	int ret;
+	int fd;

 	rte_atomic32_set(&sh->txpp.err_miss_int, 0);
 	rte_atomic32_set(&sh->txpp.err_rearm_queue, 0);
@@ -890,15 +895,16 @@ mlx5_txpp_start_service(struct mlx5_dev_ctx_shared *sh)
 	rte_atomic32_set(&sh->txpp.err_ts_past, 0);
 	rte_atomic32_set(&sh->txpp.err_ts_future, 0);
 	/* Attach interrupt handler to process Rearm Queue completions. */
-	flags = fcntl(sh->txpp.echan->fd, F_GETFL);
-	ret = fcntl(sh->txpp.echan->fd, F_SETFL, flags | O_NONBLOCK);
+	fd = mlx5_os_get_devx_channel_fd(sh->txpp.echan);
+	ret = mlx5_os_set_nonblock_channel_fd(fd);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to change event channel FD.");
 		rte_errno = errno;
 		return -rte_errno;
 	}
 	memset(&sh->txpp.intr_handle, 0, sizeof(sh->txpp.intr_handle));
-	sh->txpp.intr_handle.fd = sh->txpp.echan->fd;
+	fd = mlx5_os_get_devx_channel_fd(sh->txpp.echan);
+	sh->txpp.intr_handle.fd = fd;
 	sh->txpp.intr_handle.type = RTE_INTR_HANDLE_EXT;
 	if (rte_intr_callback_register(&sh->txpp.intr_handle,
 				       mlx5_txpp_interrupt_handler, sh)) {
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 21fe16b..fed9d8a 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -907,6 +907,7 @@ mlx5_txq_obj_devx_new(struct rte_eth_dev *dev, uint16_t idx)
 	size_t page_size;
 	struct mlx5_cqe *cqe;
 	uint32_t i, nqe;
+	void *reg_addr;
 	size_t alignment = (size_t)-1;
 	int ret = 0;
@@ -991,11 +992,11 @@ mlx5_txq_obj_devx_new(struct rte_eth_dev *dev, uint16_t idx)
 	/* Create completion queue object with DevX. */
 	cq_attr.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
			   MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
-	cq_attr.uar_page_id = sh->tx_uar->page_id;
+	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
 	cq_attr.eqn = sh->txpp.eqn;
 	cq_attr.q_umem_valid = 1;
 	cq_attr.q_umem_offset = (uintptr_t)txq_obj->cq_buf % page_size;
-	cq_attr.q_umem_id = txq_obj->cq_umem->umem_id;
+	cq_attr.q_umem_id = mlx5_os_get_umem_id(txq_obj->cq_umem);
 	cq_attr.db_umem_valid = 1;
 	cq_attr.db_umem_offset = txq_obj->cq_dbrec_offset;
 	cq_attr.db_umem_id = mlx5_os_get_umem_id(txq_obj->cq_dbrec_page->umem);
@@ -1069,7 +1070,7 @@ mlx5_txq_obj_devx_new(struct rte_eth_dev *dev, uint16_t idx)
 	sq_attr.allow_multi_pkt_send_wqe = !!priv->config.mps;
 	sq_attr.allow_swp = !!priv->config.swp;
 	sq_attr.min_wqe_inline_mode = priv->config.hca_attr.vport_inline_mode;
-	sq_attr.wq_attr.uar_page = sh->tx_uar->page_id;
+	sq_attr.wq_attr.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
 	sq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
 	sq_attr.wq_attr.pd = sh->pdn;
 	sq_attr.wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
@@ -1079,7 +1080,7 @@ mlx5_txq_obj_devx_new(struct rte_eth_dev *dev, uint16_t idx)
 	sq_attr.wq_attr.dbr_umem_id =
			mlx5_os_get_umem_id(txq_obj->cq_dbrec_page->umem);
 	sq_attr.wq_attr.wq_umem_valid = 1;
-	sq_attr.wq_attr.wq_umem_id = txq_obj->sq_umem->umem_id;
+	sq_attr.wq_attr.wq_umem_id = mlx5_os_get_umem_id(txq_obj->sq_umem);
 	sq_attr.wq_attr.wq_umem_offset = (uintptr_t)txq_obj->sq_buf % page_size;
 	txq_obj->sq_devx = mlx5_devx_cmd_create_sq(sh->ctx, &sq_attr);
 	if (!txq_obj->sq_devx) {
@@ -1120,9 +1121,11 @@ mlx5_txq_obj_devx_new(struct rte_eth_dev *dev, uint16_t idx)
 	priv->sh->tdn = priv->sh->td->id;
 #endif
 	MLX5_ASSERT(sh->tx_uar);
-	MLX5_ASSERT(sh->tx_uar->reg_addr);
-	txq_ctrl->bf_reg = sh->tx_uar->reg_addr;
-	txq_ctrl->uar_mmap_offset = sh->tx_uar->mmap_off;
+	reg_addr = mlx5_os_get_devx_uar_reg_addr(sh->tx_uar);
+	MLX5_ASSERT(reg_addr);
+	txq_ctrl->bf_reg = reg_addr;
+	txq_ctrl->uar_mmap_offset =
+			mlx5_os_get_devx_uar_mmap_offset(sh->tx_uar);
 	rte_atomic32_set(&txq_obj->refcnt, 1);
 	txq_uar_init(txq_ctrl);
 	LIST_INSERT_HEAD(&priv->txqsobj, txq_obj, next);
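Taken together, the getters turn every DV structure dereference into a
call on an opaque 'void *' handle, so only the Linux-specific
mlx5_common_os.h needs the DV types. A minimal usage sketch under that
pattern (the caller below is hypothetical; the getter and the
'struct mlx5_devx_cq_attr' field are the ones used in the diff above):

    /* Hypothetical OS-agnostic caller: it holds the UAR only as
     * 'void *' and never touches 'struct mlx5dv_devx_uar' itself. */
    static void
    example_fill_cq_uar(void *tx_uar, struct mlx5_devx_cq_attr *cq_attr)
    {
    	cq_attr->uar_page_id = mlx5_os_get_devx_uar_page_id(tx_uar);
    }

A non-Linux port could then provide its own mlx5_common_os.h with the
same getter signatures without pulling in any mlx5dv definitions.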
From patchwork Thu Aug 20 14:28:33 2020
X-Patchwork-Id: 75862
From: Ophir Munk
To: dev@dpdk.org
Cc: Raslan Darawsheh, Ophir Munk, Matan Azrad
Date: Thu, 20 Aug 2020 14:28:33 +0000
Message-Id: <20200820142834.2984-12-ophirmu@mellanox.com>
In-Reply-To: <20200820142834.2984-1-ophirmu@mellanox.com>
Subject: [dpdk-dev] [PATCH v1 12/13] net/mlx5: separate vlan strip modification

When updating a queue's VLAN stripping offload, either the WQ is
modified (Verbs) or the RQ is modified (DevX). Add a VLAN stripping
modify callback to 'struct mlx5_obj_ops' and assign it the specific
Verbs and DevX implementations, 'rxq_obj_modify_wq_vlan_strip' and
'rxq_obj_modify_rq_vlan_strip' respectively (a short sketch of this
dispatch follows the diff below).

Signed-off-by: Ophir Munk
Acked-by: Matan Azrad
---
 drivers/net/mlx5/linux/mlx5_verbs.c | 28 ++++++++++++++++++++++
 drivers/net/mlx5/mlx5.h             |  6 +++++
 drivers/net/mlx5/mlx5_devx.c        | 48 +++++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_devx.h        | 12 ++++++++++
 drivers/net/mlx5/mlx5_vlan.c        | 27 ++++-----------------
 5 files changed, 98 insertions(+), 23 deletions(-)
 create mode 100644 drivers/net/mlx5/mlx5_devx.c
 create mode 100644 drivers/net/mlx5/mlx5_devx.h

diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index d41b0fe..6271f0f 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include
 #include
 /**
  * Register mr. Given protection domain pointer, pointer to addr and length
@@ -61,3 +62,30 @@ const struct mlx5_verbs_ops mlx5_verbs_ops = {
 	.reg_mr = mlx5_reg_mr,
 	.dereg_mr = mlx5_dereg_mr,
 };
+
+/**
+ * Modify Rx WQ vlan stripping offload
+ *
+ * @param rxq_obj
+ *   Rx queue object.
+ *
+ * @return 0 on success, non-0 otherwise
+ */
+static int
+mlx5_rxq_obj_modify_wq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on)
+{
+	uint16_t vlan_offloads =
+		(on ? IBV_WQ_FLAGS_CVLAN_STRIPPING : 0) |
+		0;
+	struct ibv_wq_attr mod;
+	mod = (struct ibv_wq_attr){
+		.attr_mask = IBV_WQ_ATTR_FLAGS,
+		.flags_mask = IBV_WQ_FLAGS_CVLAN_STRIPPING,
+		.flags = vlan_offloads,
+	};
+	return mlx5_glue->modify_wq(rxq_obj->wq, &mod);
+}
+
+struct mlx5_obj_ops ibv_obj_ops = {
+	.rxq_obj_modify_vlan_strip = mlx5_rxq_obj_modify_wq_vlan_strip,
+};
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 34d7a15..431f861 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -676,6 +676,11 @@ TAILQ_HEAD(mlx5_flow_meters, mlx5_flow_meter);
 #define MLX5_PROC_PRIV(port_id) \
 	((struct mlx5_proc_priv *)rte_eth_devices[port_id].process_private)

+/* HW objects operations structure. */
+struct mlx5_obj_ops {
+	int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_obj *rxq_obj, int on);
+};
+
 struct mlx5_priv {
 	struct rte_eth_dev_data *dev_data; /* Pointer to device data. */
 	struct mlx5_dev_ctx_shared *sh; /* Shared device context. */
@@ -719,6 +724,7 @@ struct mlx5_priv {
 	void *rss_desc; /* Intermediate rss description resources. */
 	int flow_idx; /* Intermediate device flow index. */
 	int flow_nested_idx; /* Intermediate device flow index, nested. */
+	struct mlx5_obj_ops *obj_ops; /* HW objects operations. */
 	LIST_HEAD(rxq, mlx5_rxq_ctrl) rxqsctrl; /* DPDK Rx queues. */
 	LIST_HEAD(rxqobj, mlx5_rxq_obj) rxqsobj; /* Verbs/DevX Rx queues. */
 	uint32_t hrxqs; /* Verbs Hash Rx queues. */
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
new file mode 100644
index 0000000..7340412
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+#include "mlx5.h"
+#include "mlx5_common_os.h"
+#include "mlx5_rxtx.h"
+#include "mlx5_utils.h"
+#include "mlx5_devx.h"
+
+/**
+ * Modify RQ vlan stripping offload
+ *
+ * @param rxq_obj
+ *   Rx queue object.
+ *
+ * @return 0 on success, non-0 otherwise
+ */
+static int
+mlx5_rxq_obj_modify_rq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on)
+{
+	struct mlx5_devx_modify_rq_attr rq_attr;
+
+	memset(&rq_attr, 0, sizeof(rq_attr));
+	rq_attr.rq_state = MLX5_RQC_STATE_RDY;
+	rq_attr.state = MLX5_RQC_STATE_RDY;
+	rq_attr.vsd = (on ? 0 : 1);
+	rq_attr.modify_bitmask = MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_VSD;
+	return mlx5_devx_cmd_modify_rq(rxq_obj->rq, &rq_attr);
+}
+
+struct mlx5_obj_ops devx_obj_ops = {
+	.rxq_obj_modify_vlan_strip = mlx5_rxq_obj_modify_rq_vlan_strip,
+};
diff --git a/drivers/net/mlx5/mlx5_devx.h b/drivers/net/mlx5/mlx5_devx.h
new file mode 100644
index 0000000..844985c
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_devx.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+
+#ifndef RTE_PMD_MLX5_DEVX_H_
+#define RTE_PMD_MLX5_DEVX_H_
+
+#include "mlx5.h"
+
+extern struct mlx5_obj_ops devx_obj_ops;
+
+#endif /* RTE_PMD_MLX5_DEVX_H_ */
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 89983a4..ea89599 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -22,6 +22,7 @@
 #include "mlx5_autoconf.h"
 #include "mlx5_rxtx.h"
 #include "mlx5_utils.h"
+#include "mlx5_devx.h"

 /**
  * DPDK callback to configure a VLAN filter.
@@ -97,10 +98,6 @@ mlx5_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 	struct mlx5_rxq_data *rxq = (*priv->rxqs)[queue];
 	struct mlx5_rxq_ctrl *rxq_ctrl =
		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
-	struct ibv_wq_attr mod;
-	uint16_t vlan_offloads =
-		(on ? IBV_WQ_FLAGS_CVLAN_STRIPPING : 0) |
-		0;
 	int ret = 0;

 	/* Validate hw support */
@@ -115,30 +112,14 @@ mlx5_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 			dev->data->port_id, queue);
 		return;
 	}
-	DRV_LOG(DEBUG, "port %u set VLAN offloads 0x%x for port %uqueue %d",
-		dev->data->port_id, vlan_offloads, rxq->port_id, queue);
+	DRV_LOG(DEBUG, "port %u set VLAN stripping offloads %d for port %uqueue %d",
+		dev->data->port_id, on, rxq->port_id, queue);
 	if (!rxq_ctrl->obj) {
 		/* Update related bits in RX queue. */
 		rxq->vlan_strip = !!on;
 		return;
 	}
-	if (rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_IBV) {
-		mod = (struct ibv_wq_attr){
-			.attr_mask = IBV_WQ_ATTR_FLAGS,
-			.flags_mask = IBV_WQ_FLAGS_CVLAN_STRIPPING,
-			.flags = vlan_offloads,
-		};
-		ret = mlx5_glue->modify_wq(rxq_ctrl->obj->wq, &mod);
-	} else if (rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ) {
-		struct mlx5_devx_modify_rq_attr rq_attr;
-
-		memset(&rq_attr, 0, sizeof(rq_attr));
-		rq_attr.rq_state = MLX5_RQC_STATE_RDY;
-		rq_attr.state = MLX5_RQC_STATE_RDY;
-		rq_attr.vsd = (on ? 0 : 1);
-		rq_attr.modify_bitmask = MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_VSD;
-		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
-	}
+	ret = priv->obj_ops->rxq_obj_modify_vlan_strip(rxq_ctrl->obj, on);
 	if (ret) {
 		DRV_LOG(ERR, "port %u failed to modify object %d stripping "
 			"mode: %s", dev->data->port_id,
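To make the dispatch concrete, it can be sketched as follows (the
selection point below is illustrative, not part of this patch;
'ibv_obj_ops' and 'devx_obj_ops' are the tables defined in the diff
above, and 'have_devx' is a hypothetical flag):

    /* At device spawn time the driver would install the ops table
     * matching the Rx queue object type (hypothetical selection). */
    priv->obj_ops = have_devx ? &devx_obj_ops : &ibv_obj_ops;

    /* Callers then stay backend-agnostic: */
    ret = priv->obj_ops->rxq_obj_modify_vlan_strip(rxq_ctrl->obj, on);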