From patchwork Fri Oct 9 15:01:18 2020
X-Patchwork-Submitter: Bing Zhao
X-Patchwork-Id: 80189
X-Patchwork-Delegate: rasland@nvidia.com
From: Bing Zhao
To: viacheslavo@mellanox.com, matan@mellanox.com
Cc: dev@dpdk.org, orika@nvidia.com, rasland@nvidia.com
Date: Fri, 9 Oct 2020 23:01:18 +0800
Message-Id: <1602255678-108560-1-git-send-email-bingz@nvidia.com>
Subject: [dpdk-dev] [PATCH] net/mlx5: add flow sync API

When creating a flow, the rule itself might not take effect immediately
once the function call returns with success. It takes some time for the
steering to be synchronized to the hardware.

If the application wants a packet sent afterwards to hit the flow it has
just created, this flow sync API can be used to flush the steering HW
cache and enforce that the next packet hits the latest rules.

For Tx, usually the NIC Tx domain and/or the FDB domain should be
synchronized, depending on the domain in which the flow was created. The
application could also synchronize the NIC Rx and/or the FDB domain for
ingress packets, but in practice it is hard to determine when a packet
will arrive at the NIC.
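For illustration only, not part of the patch: a minimal sketch of how an
application might call the proposed rte_pmd_mlx5_sync_flow() after creating
an egress rule, so that the next transmitted packet already hits it. The
helper name install_egress_rule_and_sync() is hypothetical, and the
attribute, pattern and action arrays are assumed to be prepared by the
caller; only the API itself and the MLX5DV_FLOW_TABLE_TYPE_* bit indexes
come from this patch and mlx5dv.h.

#include <errno.h>
#include <rte_errno.h>
#include <rte_flow.h>
#include <infiniband/mlx5dv.h>	/* MLX5DV_FLOW_TABLE_TYPE_* bit indexes */
#include <rte_pmd_mlx5.h>

/* Hypothetical helper: create the rule, then flush the steering HW cache. */
static int
install_egress_rule_and_sync(uint16_t port_id,
			     const struct rte_flow_attr *attr,
			     const struct rte_flow_item pattern[],
			     const struct rte_flow_action actions[])
{
	struct rte_flow_error error;
	struct rte_flow *flow;
	uint32_t domains;

	flow = rte_flow_create(port_id, attr, pattern, actions, &error);
	if (flow == NULL)
		return -rte_errno;
	/*
	 * The bit index equals the mlx5dv flow table type. Sync the NIC Tx
	 * domain, and also the FDB domain for transfer rules.
	 */
	domains = 1u << MLX5DV_FLOW_TABLE_TYPE_NIC_TX;
	if (attr->transfer)
		domains |= 1u << MLX5DV_FLOW_TABLE_TYPE_FDB;
	return rte_pmd_mlx5_sync_flow(port_id, domains);
}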
Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/common/mlx5/linux/mlx5_glue.c     | 14 ++++++++++++++
 drivers/common/mlx5/linux/mlx5_glue.h     |  1 +
 drivers/net/mlx5/mlx5_flow.c              | 22 ++++++++++++++++++++++
 drivers/net/mlx5/mlx5_flow.h              |  5 +++++
 drivers/net/mlx5/mlx5_flow_dv.c           | 25 +++++++++++++++++++++++++
 drivers/net/mlx5/rte_pmd_mlx5.h           | 19 +++++++++++++++++++
 drivers/net/mlx5/rte_pmd_mlx5_version.map |  2 ++
 7 files changed, 88 insertions(+)

diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c
index fcf03e8..86047b1 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.c
+++ b/drivers/common/mlx5/linux/mlx5_glue.c
@@ -494,6 +494,19 @@
 #endif
 }
 
+static int
+mlx5_glue_dr_sync_domain(void *domain, uint32_t flags)
+{
+#ifdef HAVE_MLX5DV_DR
+	return mlx5dv_dr_domain_sync(domain, flags);
+#else
+	(void)domain;
+	(void)flags;
+	errno = ENOTSUP;
+	return errno;
+#endif
+}
+
 static struct ibv_cq_ex *
 mlx5_glue_dv_create_cq(struct ibv_context *context,
 		       struct ibv_cq_init_attr_ex *cq_attr,
@@ -1298,6 +1311,7 @@
 	.dr_destroy_flow_tbl = mlx5_glue_dr_destroy_flow_tbl,
 	.dr_create_domain = mlx5_glue_dr_create_domain,
 	.dr_destroy_domain = mlx5_glue_dr_destroy_domain,
+	.dr_sync_domain = mlx5_glue_dr_sync_domain,
 	.dv_create_cq = mlx5_glue_dv_create_cq,
 	.dv_create_wq = mlx5_glue_dv_create_wq,
 	.dv_query_device = mlx5_glue_dv_query_device,
diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h
index 734ace2..d24a16e 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.h
+++ b/drivers/common/mlx5/linux/mlx5_glue.h
@@ -195,6 +195,7 @@ struct mlx5_glue {
 	void *(*dr_create_domain)(struct ibv_context *ctx,
 				  enum mlx5dv_dr_domain_type domain);
 	int (*dr_destroy_domain)(void *domain);
+	int (*dr_sync_domain)(void *domain, uint32_t flags);
 	struct ibv_cq_ex *(*dv_create_cq)
 		(struct ibv_context *context,
 		 struct ibv_cq_init_attr_ex *cq_attr,
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index a94f630..e25ec0c 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -29,6 +29,7 @@
 #include "mlx5_flow.h"
 #include "mlx5_flow_os.h"
 #include "mlx5_rxtx.h"
+#include "rte_pmd_mlx5.h"
 
 /** Device flow drivers. */
 extern const struct mlx5_flow_driver_ops mlx5_flow_verbs_drv_ops;
@@ -6310,3 +6311,24 @@ struct mlx5_meter_domains_infos *
 			dev->data->port_id);
 	return -ENOTSUP;
 }
+
+static int
+mlx5_flow_sync_memory(struct rte_eth_dev *dev, uint32_t domains, uint32_t flags)
+{
+	const struct mlx5_flow_driver_ops *fops;
+	struct rte_flow_attr attr = { .transfer = 0 };
+
+	if (flow_get_drv_type(dev, &attr) == MLX5_FLOW_TYPE_DV) {
+		fops = flow_get_drv_ops(MLX5_FLOW_TYPE_DV);
+		return fops->sync_memory(dev, domains, flags);
+	}
+	return -ENOTSUP;
+}
+
+int rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+
+	return mlx5_flow_sync_memory(dev, domains,
+				     MLX5DV_DR_DOMAIN_SYNC_FLAGS_HW);
+}
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 279daf2..ae0a508 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -910,6 +910,10 @@ typedef int (*mlx5_flow_get_aged_flows_t)
 					 void **context,
 					 uint32_t nb_contexts,
 					 struct rte_flow_error *error);
+typedef int (*mlx5_flow_sync_memory_t)
+					(struct rte_eth_dev *dev,
+					 uint32_t domains,
+					 uint32_t flags);
 struct mlx5_flow_driver_ops {
 	mlx5_flow_validate_t validate;
 	mlx5_flow_prepare_t prepare;
@@ -926,6 +930,7 @@ struct mlx5_flow_driver_ops {
 	mlx5_flow_counter_free_t counter_free;
 	mlx5_flow_counter_query_t counter_query;
 	mlx5_flow_get_aged_flows_t get_aged_flows;
+	mlx5_flow_sync_memory_t sync_memory;
 };
 
 /* mlx5_flow.c */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 79fdf34..b78ffc5 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10150,6 +10150,30 @@ struct field_modify_info modify_tcp[] = {
 	flow_dv_shared_unlock(dev);
 }
 
+static int
+flow_dv_sync_domain(struct rte_eth_dev *dev, uint32_t domains, uint32_t flags)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	int ret = 0;
+
+	if (domains & (1 << MLX5DV_FLOW_TABLE_TYPE_NIC_RX)) {
+		ret = mlx5_glue->dr_sync_domain(priv->sh->rx_domain, flags);
+		if (ret)
+			return ret;
+	}
+	if (domains & (1 << MLX5DV_FLOW_TABLE_TYPE_NIC_TX)) {
+		ret = mlx5_glue->dr_sync_domain(priv->sh->tx_domain, flags);
+		if (ret)
+			return ret;
+	}
+	if (domains & (1 << MLX5DV_FLOW_TABLE_TYPE_FDB)) {
+		ret = mlx5_glue->dr_sync_domain(priv->sh->fdb_domain, flags);
+		if (ret)
+			return ret;
+	}
+	return 0;
+}
+
 const struct mlx5_flow_driver_ops mlx5_flow_dv_drv_ops = {
 	.validate = flow_dv_validate,
 	.prepare = flow_dv_prepare,
@@ -10166,6 +10190,7 @@ struct field_modify_info modify_tcp[] = {
 	.counter_free = flow_dv_counter_free,
 	.counter_query = flow_dv_counter_query,
 	.get_aged_flows = flow_get_aged_flows,
+	.sync_memory = flow_dv_sync_domain,
 };
 
 #endif /* HAVE_IBV_FLOW_DV_SUPPORT */
diff --git a/drivers/net/mlx5/rte_pmd_mlx5.h b/drivers/net/mlx5/rte_pmd_mlx5.h
index 8c69228..636dd07 100644
--- a/drivers/net/mlx5/rte_pmd_mlx5.h
+++ b/drivers/net/mlx5/rte_pmd_mlx5.h
@@ -32,4 +32,23 @@
 __rte_experimental
 int rte_pmd_mlx5_get_dyn_flag_names(char *names[], unsigned int n);
 
+/**
+ * Synchronize the flows to make them take effect in hardware.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] domains
+ *   Bitmask of domains in which the synchronization will be done.
+ *   Refer to "/usr/include/infiniband/mlx5dv.h".
+ *   The index of each set bit represents the corresponding domain ID.
+ *
+ * @return
+ *   - (0) if successful.
+ *   - (-EINVAL) if bad parameter.
+ *   - (-ENOTSUP) if hardware doesn't support it.
+ *   - Other errors.
+ */
+__rte_experimental
+int rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains);
+
 #endif
diff --git a/drivers/net/mlx5/rte_pmd_mlx5_version.map b/drivers/net/mlx5/rte_pmd_mlx5_version.map
index bc1d3d0..82a32b5 100644
--- a/drivers/net/mlx5/rte_pmd_mlx5_version.map
+++ b/drivers/net/mlx5/rte_pmd_mlx5_version.map
@@ -7,4 +7,6 @@ EXPERIMENTAL {
 
 	# added in 20.02
 	rte_pmd_mlx5_get_dyn_flag_names;
+	# added in 20.11
+	rte_pmd_mlx5_sync_flow;
 };
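
For illustration only, not part of the patch: a hedged sketch of how an
application might build the domains bitmask for ingress traffic, following
the bit-index convention documented in rte_pmd_mlx5.h above. The wrapper
name sync_ingress_domains() is hypothetical, and treating -ENOTSUP as
"nothing to synchronize" is an assumption for builds without DV flow
support.

#include <errno.h>
#include <stdint.h>
#include <infiniband/mlx5dv.h>	/* MLX5DV_FLOW_TABLE_TYPE_* bit indexes */
#include <rte_pmd_mlx5.h>

/* Hypothetical wrapper: flush the steering caches for ingress traffic. */
static int
sync_ingress_domains(uint16_t port_id)
{
	uint32_t domains = (1u << MLX5DV_FLOW_TABLE_TYPE_NIC_RX) |
			   (1u << MLX5DV_FLOW_TABLE_TYPE_FDB);
	int ret = rte_pmd_mlx5_sync_flow(port_id, domains);

	if (ret == -ENOTSUP)
		return 0;
	return ret;
}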