From patchwork Wed Feb 8 07:33:56 2023
X-Patchwork-Submitter: "Liu, Mingxia"
X-Patchwork-Id: 123437
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Mingxia Liu
To: dev@dpdk.org
Cc: jingjing.wu@intel.com, beilei.xing@intel.com, Mingxia Liu
Subject: [PATCH v7 1/6] net/idpf: add hw statistics
Date: Wed, 8 Feb 2023 07:33:56 +0000
Message-Id: <20230208073401.2468579-2-mingxia.liu@intel.com>
In-Reply-To: <20230208073401.2468579-1-mingxia.liu@intel.com>
References: <20230207101650.2402452-1-mingxia.liu@intel.com> <20230208073401.2468579-1-mingxia.liu@intel.com>
List-Id: DPDK patches and discussions

This patch adds hardware packet/byte statistics.
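The counters returned by VIRTCHNL2_OP_GET_STATS are cumulative over the life of the vport, so the driver implements stats_reset by snapshotting the current values into vport->eth_stats_offset and subtracting that snapshot on every stats_get (see idpf_vport_stats_update() below). A minimal standalone sketch of that offset scheme, with illustrative types and names rather than the driver's actual ones:

#include <stdint.h>

/* Illustrative stand-ins for the cumulative HW counters. */
struct hw_stats {
	uint64_t rx_bytes;
	uint64_t tx_bytes;
};

static struct hw_stats offset; /* plays the role of vport->eth_stats_offset */

/* "Reset" records the current cumulative values as the new zero point. */
static void
stats_reset(const struct hw_stats *cur)
{
	offset = *cur;
}

/* Every read reports the delta since the last reset. */
static void
stats_get(const struct hw_stats *cur, struct hw_stats *out)
{
	out->rx_bytes = cur->rx_bytes - offset.rx_bytes;
	out->tx_bytes = cur->tx_bytes - offset.tx_bytes;
}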
Signed-off-by: Mingxia Liu --- drivers/common/idpf/idpf_common_device.c | 17 +++++ drivers/common/idpf/idpf_common_device.h | 4 + drivers/common/idpf/idpf_common_virtchnl.c | 27 +++++++ drivers/common/idpf/idpf_common_virtchnl.h | 3 + drivers/common/idpf/version.map | 2 + drivers/net/idpf/idpf_ethdev.c | 86 ++++++++++++++++++++++ 6 files changed, 139 insertions(+) diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c index 48b3e3c0dd..5475a3e52c 100644 --- a/drivers/common/idpf/idpf_common_device.c +++ b/drivers/common/idpf/idpf_common_device.c @@ -652,4 +652,21 @@ idpf_vport_info_init(struct idpf_vport *vport, return 0; } +void +idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes) +{ + nes->rx_bytes = nes->rx_bytes - oes->rx_bytes; + nes->rx_unicast = nes->rx_unicast - oes->rx_unicast; + nes->rx_multicast = nes->rx_multicast - oes->rx_multicast; + nes->rx_broadcast = nes->rx_broadcast - oes->rx_broadcast; + nes->rx_errors = nes->rx_errors - oes->rx_errors; + nes->rx_discards = nes->rx_discards - oes->rx_discards; + nes->tx_bytes = nes->tx_bytes - oes->tx_bytes; + nes->tx_unicast = nes->tx_unicast - oes->tx_unicast; + nes->tx_multicast = nes->tx_multicast - oes->tx_multicast; + nes->tx_broadcast = nes->tx_broadcast - oes->tx_broadcast; + nes->tx_errors = nes->tx_errors - oes->tx_errors; + nes->tx_discards = nes->tx_discards - oes->tx_discards; +} + RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE); diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h index 545117df79..1d8e7d405a 100644 --- a/drivers/common/idpf/idpf_common_device.h +++ b/drivers/common/idpf/idpf_common_device.h @@ -115,6 +115,8 @@ struct idpf_vport { bool tx_vec_allowed; bool rx_use_avx512; bool tx_use_avx512; + + struct virtchnl2_vport_stats eth_stats_offset; }; /* Message type read in virtual channel from PF */ @@ -191,5 +193,7 @@ int idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues) __rte_internal int idpf_vport_info_init(struct idpf_vport *vport, struct virtchnl2_create_vport *vport_info); +__rte_internal +void idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes); #endif /* _IDPF_COMMON_DEVICE_H_ */ diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c index 31fadefbd3..40cff34c09 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.c +++ b/drivers/common/idpf/idpf_common_virtchnl.c @@ -217,6 +217,7 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args) case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR: case VIRTCHNL2_OP_ALLOC_VECTORS: case VIRTCHNL2_OP_DEALLOC_VECTORS: + case VIRTCHNL2_OP_GET_STATS: /* for init virtchnl ops, need to poll the response */ err = idpf_vc_one_msg_read(adapter, args->ops, args->out_size, args->out_buffer); clear_cmd(adapter); @@ -806,6 +807,32 @@ idpf_vc_ptype_info_query(struct idpf_adapter *adapter) return err; } +int +idpf_vc_stats_query(struct idpf_vport *vport, + struct virtchnl2_vport_stats **pstats) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_vport_stats vport_stats; + struct idpf_cmd_info args; + int err; + + vport_stats.vport_id = vport->vport_id; + args.ops = VIRTCHNL2_OP_GET_STATS; + args.in_args = (u8 *)&vport_stats; + args.in_args_size = sizeof(vport_stats); + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + if 
(err) { + DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_STATS"); + *pstats = NULL; + return err; + } + *pstats = (struct virtchnl2_vport_stats *)args.out_buffer; + return 0; +} + #define IDPF_RX_BUF_STRIDE 64 int idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq) diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h index c105f02836..6b94fd5b8f 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.h +++ b/drivers/common/idpf/idpf_common_virtchnl.h @@ -49,4 +49,7 @@ __rte_internal int idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq); __rte_internal int idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq); +__rte_internal +int idpf_vc_stats_query(struct idpf_vport *vport, + struct virtchnl2_vport_stats **pstats); #endif /* _IDPF_COMMON_VIRTCHNL_H_ */ diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index 8b33130bd6..e6a02828ba 100644 --- a/drivers/common/idpf/version.map +++ b/drivers/common/idpf/version.map @@ -46,6 +46,7 @@ INTERNAL { idpf_vc_rss_key_set; idpf_vc_rss_lut_set; idpf_vc_rxq_config; + idpf_vc_stats_query; idpf_vc_txq_config; idpf_vc_vectors_alloc; idpf_vc_vectors_dealloc; @@ -59,6 +60,7 @@ INTERNAL { idpf_vport_irq_map_config; idpf_vport_irq_unmap_config; idpf_vport_rss_config; + idpf_vport_stats_update; local: *; }; diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c index 33f5e90743..02ddb0330a 100644 --- a/drivers/net/idpf/idpf_ethdev.c +++ b/drivers/net/idpf/idpf_ethdev.c @@ -140,6 +140,87 @@ idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused) return ptypes; } +static uint64_t +idpf_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev) +{ + uint64_t mbuf_alloc_failed = 0; + struct idpf_rx_queue *rxq; + int i = 0; + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + rxq = dev->data->rx_queues[i]; + mbuf_alloc_failed += __atomic_load_n(&rxq->rx_stats.mbuf_alloc_failed, + __ATOMIC_RELAXED); + } + + return mbuf_alloc_failed; +} + +static int +idpf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +{ + struct idpf_vport *vport = + (struct idpf_vport *)dev->data->dev_private; + struct virtchnl2_vport_stats *pstats = NULL; + int ret; + + ret = idpf_vc_stats_query(vport, &pstats); + if (ret == 0) { + uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads & + RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 
0 : + RTE_ETHER_CRC_LEN; + + idpf_vport_stats_update(&vport->eth_stats_offset, pstats); + stats->ipackets = pstats->rx_unicast + pstats->rx_multicast + + pstats->rx_broadcast - pstats->rx_discards; + stats->opackets = pstats->tx_broadcast + pstats->tx_multicast + + pstats->tx_unicast; + stats->imissed = pstats->rx_discards; + stats->oerrors = pstats->tx_errors + pstats->tx_discards; + stats->ibytes = pstats->rx_bytes; + stats->ibytes -= stats->ipackets * crc_stats_len; + stats->obytes = pstats->tx_bytes; + + dev->data->rx_mbuf_alloc_failed = idpf_get_mbuf_alloc_failed_stats(dev); + stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed; + } else { + PMD_DRV_LOG(ERR, "Get statistics failed"); + } + return ret; +} + +static void +idpf_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev) +{ + struct idpf_rx_queue *rxq; + int i; + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + rxq = dev->data->rx_queues[i]; + __atomic_store_n(&rxq->rx_stats.mbuf_alloc_failed, 0, __ATOMIC_RELAXED); + } +} + +static int +idpf_dev_stats_reset(struct rte_eth_dev *dev) +{ + struct idpf_vport *vport = + (struct idpf_vport *)dev->data->dev_private; + struct virtchnl2_vport_stats *pstats = NULL; + int ret; + + ret = idpf_vc_stats_query(vport, &pstats); + if (ret != 0) + return ret; + + /* set stats offset based on current values */ + vport->eth_stats_offset = *pstats; + + idpf_reset_mbuf_alloc_failed_stats(dev); + + return 0; +} + static int idpf_init_rss(struct idpf_vport *vport) {
@@ -327,6 +408,9 @@ idpf_dev_start(struct rte_eth_dev *dev)
goto err_vport; } + if (idpf_dev_stats_reset(dev)) + PMD_DRV_LOG(ERR, "Failed to reset stats"); + vport->stopped = 0; return 0;
@@ -606,6 +690,8 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.tx_queue_release = idpf_dev_tx_queue_release, .mtu_set = idpf_dev_mtu_set, .dev_supported_ptypes_get = idpf_dev_supported_ptypes_get, + .stats_get = idpf_dev_stats_get, + .stats_reset = idpf_dev_stats_reset, }; static uint16_t

From patchwork Wed Feb 8 07:33:57 2023
X-Patchwork-Submitter: "Liu, Mingxia"
X-Patchwork-Id: 123436
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Mingxia Liu
To: dev@dpdk.org
Cc: jingjing.wu@intel.com, beilei.xing@intel.com, Mingxia Liu
Subject: [PATCH v7 2/6] net/idpf: add RSS set/get ops
Date: Wed, 8 Feb 2023 07:33:57 +0000
Message-Id: <20230208073401.2468579-3-mingxia.liu@intel.com>
In-Reply-To: <20230208073401.2468579-1-mingxia.liu@intel.com>
References: <20230207101650.2402452-1-mingxia.liu@intel.com> <20230208073401.2468579-1-mingxia.liu@intel.com>

Add support for these device ops:
- rss_reta_update
- rss_reta_query
- rss_hash_update
- rss_hash_conf_get
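These ops back the generic rte_eth_dev_rss_reta_update()/rte_eth_dev_rss_reta_query() API, which carries the redirection table in groups of RTE_ETH_RETA_GROUP_SIZE (64) entries, each slot guarded by a validity bit in the group's mask; the idx/shift arithmetic in the driver below mirrors how callers fill those groups. A hedged application-side sketch (the helper name and round-robin policy are illustrative, and nb_queues is assumed non-zero):

#include <string.h>
#include <rte_common.h>
#include <rte_ethdev.h>

/* Spread the whole redirection table round-robin over nb_queues Rx queues. */
static int
spread_reta(uint16_t port_id, uint16_t nb_queues)
{
	struct rte_eth_dev_info info;
	struct rte_eth_rss_reta_entry64 reta[8]; /* up to 512 table entries */
	uint16_t i;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;
	if (info.reta_size > RTE_DIM(reta) * RTE_ETH_RETA_GROUP_SIZE)
		return -EINVAL;

	memset(reta, 0, sizeof(reta));
	for (i = 0; i < info.reta_size; i++) {
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

		reta[idx].mask |= 1ULL << shift;       /* mark entry as valid */
		reta[idx].reta[shift] = i % nb_queues; /* target Rx queue */
	}

	return rte_eth_dev_rss_reta_update(port_id, reta, info.reta_size);
}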
Signed-off-by: Mingxia Liu
---
 drivers/common/idpf/idpf_common_device.h | 1 +
 drivers/common/idpf/idpf_common_virtchnl.c | 119 +++++++++
 drivers/common/idpf/idpf_common_virtchnl.h | 6 +
 drivers/common/idpf/version.map | 3 +
 drivers/net/idpf/idpf_ethdev.c | 268 +++++++++++++++++++++
 drivers/net/idpf/idpf_ethdev.h | 3 +-
 6 files changed, 399 insertions(+), 1 deletion(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 1d8e7d405a..7abc4d2a3a 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -98,6 +98,7 @@ struct idpf_vport {
uint32_t *rss_lut; uint8_t *rss_key; uint64_t rss_hf; + uint64_t last_general_rss_hf; /* MSIX info*/ struct virtchnl2_queue_vector *qv_map; /* queue vector mapping */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 40cff34c09..10cfa33704 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -218,6 +218,9 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
case VIRTCHNL2_OP_ALLOC_VECTORS: case VIRTCHNL2_OP_DEALLOC_VECTORS: case VIRTCHNL2_OP_GET_STATS: + case VIRTCHNL2_OP_GET_RSS_KEY: + case VIRTCHNL2_OP_GET_RSS_HASH: + case VIRTCHNL2_OP_GET_RSS_LUT: /* for init virtchnl ops, need to poll the response */ err = idpf_vc_one_msg_read(adapter, args->ops, args->out_size, args->out_buffer); clear_cmd(adapter);
@@ -448,6 +451,48 @@ idpf_vc_rss_key_set(struct idpf_vport *vport)
return err; } +int idpf_vc_rss_key_get(struct idpf_vport *vport) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_rss_key *rss_key_ret; + struct virtchnl2_rss_key rss_key; + struct idpf_cmd_info args; + int err; + + memset(&rss_key, 0, sizeof(rss_key)); + rss_key.vport_id = vport->vport_id; + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL2_OP_GET_RSS_KEY; + args.in_args = (uint8_t *)&rss_key; + args.in_args_size = sizeof(rss_key); + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + + if (!err) { + rss_key_ret = (struct virtchnl2_rss_key *)args.out_buffer; + if (rss_key_ret->key_len != vport->rss_key_size) { + rte_free(vport->rss_key); + vport->rss_key = NULL; +
vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN, + rss_key_ret->key_len); + vport->rss_key = rte_zmalloc("rss_key", vport->rss_key_size, 0); + if (!vport->rss_key) { + vport->rss_key_size = 0; + DRV_LOG(ERR, "Failed to allocate RSS key"); + return -ENOMEM; + } + } + rte_memcpy(vport->rss_key, rss_key_ret->key, vport->rss_key_size); + } else { + DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_RSS_KEY"); + } + + return err; +} + int idpf_vc_rss_lut_set(struct idpf_vport *vport) { @@ -482,6 +527,80 @@ idpf_vc_rss_lut_set(struct idpf_vport *vport) return err; } +int +idpf_vc_rss_lut_get(struct idpf_vport *vport) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_rss_lut *rss_lut_ret; + struct virtchnl2_rss_lut rss_lut; + struct idpf_cmd_info args; + int err; + + memset(&rss_lut, 0, sizeof(rss_lut)); + rss_lut.vport_id = vport->vport_id; + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL2_OP_GET_RSS_LUT; + args.in_args = (uint8_t *)&rss_lut; + args.in_args_size = sizeof(rss_lut); + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + + if (!err) { + rss_lut_ret = (struct virtchnl2_rss_lut *)args.out_buffer; + if (rss_lut_ret->lut_entries != vport->rss_lut_size) { + rte_free(vport->rss_lut); + vport->rss_lut = NULL; + vport->rss_lut = rte_zmalloc("rss_lut", + sizeof(uint32_t) * rss_lut_ret->lut_entries, 0); + if (vport->rss_lut == NULL) { + DRV_LOG(ERR, "Failed to allocate RSS lut"); + return -ENOMEM; + } + } + rte_memcpy(vport->rss_lut, rss_lut_ret->lut, rss_lut_ret->lut_entries); + vport->rss_lut_size = rss_lut_ret->lut_entries; + } else { + DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_RSS_LUT"); + } + + return err; +} + +int +idpf_vc_rss_hash_get(struct idpf_vport *vport) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_rss_hash *rss_hash_ret; + struct virtchnl2_rss_hash rss_hash; + struct idpf_cmd_info args; + int err; + + memset(&rss_hash, 0, sizeof(rss_hash)); + rss_hash.ptype_groups = vport->rss_hf; + rss_hash.vport_id = vport->vport_id; + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL2_OP_GET_RSS_HASH; + args.in_args = (uint8_t *)&rss_hash; + args.in_args_size = sizeof(rss_hash); + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + + if (!err) { + rss_hash_ret = (struct virtchnl2_rss_hash *)args.out_buffer; + vport->rss_hf = rss_hash_ret->ptype_groups; + } else { + DRV_LOG(ERR, "Failed to execute command of OP_GET_RSS_HASH"); + } + + return err; +} + int idpf_vc_rss_hash_set(struct idpf_vport *vport) { diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h index 6b94fd5b8f..205d1a932d 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.h +++ b/drivers/common/idpf/idpf_common_virtchnl.h @@ -52,4 +52,10 @@ int idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq); __rte_internal int idpf_vc_stats_query(struct idpf_vport *vport, struct virtchnl2_vport_stats **pstats); +__rte_internal +int idpf_vc_rss_key_get(struct idpf_vport *vport); +__rte_internal +int idpf_vc_rss_lut_get(struct idpf_vport *vport); +__rte_internal +int idpf_vc_rss_hash_get(struct idpf_vport *vport); #endif /* _IDPF_COMMON_VIRTCHNL_H_ */ diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index e6a02828ba..f6c92e7e57 100644 --- a/drivers/common/idpf/version.map +++ b/drivers/common/idpf/version.map 
@@ -42,8 +42,11 @@ INTERNAL { idpf_vc_ptype_info_query; idpf_vc_queue_switch; idpf_vc_queues_ena_dis; + idpf_vc_rss_hash_get; idpf_vc_rss_hash_set; + idpf_vc_rss_key_get; idpf_vc_rss_key_set; + idpf_vc_rss_lut_get; idpf_vc_rss_lut_set; idpf_vc_rxq_config; idpf_vc_stats_query; diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c index 02ddb0330a..7262109d0a 100644 --- a/drivers/net/idpf/idpf_ethdev.c +++ b/drivers/net/idpf/idpf_ethdev.c @@ -29,6 +29,56 @@ static const char * const idpf_valid_args[] = { NULL }; +static const uint64_t idpf_map_hena_rss[] = { + [IDPF_HASH_NONF_UNICAST_IPV4_UDP] = + RTE_ETH_RSS_NONFRAG_IPV4_UDP, + [IDPF_HASH_NONF_MULTICAST_IPV4_UDP] = + RTE_ETH_RSS_NONFRAG_IPV4_UDP, + [IDPF_HASH_NONF_IPV4_UDP] = + RTE_ETH_RSS_NONFRAG_IPV4_UDP, + [IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK] = + RTE_ETH_RSS_NONFRAG_IPV4_TCP, + [IDPF_HASH_NONF_IPV4_TCP] = + RTE_ETH_RSS_NONFRAG_IPV4_TCP, + [IDPF_HASH_NONF_IPV4_SCTP] = + RTE_ETH_RSS_NONFRAG_IPV4_SCTP, + [IDPF_HASH_NONF_IPV4_OTHER] = + RTE_ETH_RSS_NONFRAG_IPV4_OTHER, + [IDPF_HASH_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4, + + /* IPv6 */ + [IDPF_HASH_NONF_UNICAST_IPV6_UDP] = + RTE_ETH_RSS_NONFRAG_IPV6_UDP, + [IDPF_HASH_NONF_MULTICAST_IPV6_UDP] = + RTE_ETH_RSS_NONFRAG_IPV6_UDP, + [IDPF_HASH_NONF_IPV6_UDP] = + RTE_ETH_RSS_NONFRAG_IPV6_UDP, + [IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK] = + RTE_ETH_RSS_NONFRAG_IPV6_TCP, + [IDPF_HASH_NONF_IPV6_TCP] = + RTE_ETH_RSS_NONFRAG_IPV6_TCP, + [IDPF_HASH_NONF_IPV6_SCTP] = + RTE_ETH_RSS_NONFRAG_IPV6_SCTP, + [IDPF_HASH_NONF_IPV6_OTHER] = + RTE_ETH_RSS_NONFRAG_IPV6_OTHER, + [IDPF_HASH_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6, + + /* L2 Payload */ + [IDPF_HASH_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD +}; + +static const uint64_t idpf_ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP | + RTE_ETH_RSS_NONFRAG_IPV4_TCP | + RTE_ETH_RSS_NONFRAG_IPV4_SCTP | + RTE_ETH_RSS_NONFRAG_IPV4_OTHER | + RTE_ETH_RSS_FRAG_IPV4; + +static const uint64_t idpf_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP | + RTE_ETH_RSS_NONFRAG_IPV6_TCP | + RTE_ETH_RSS_NONFRAG_IPV6_SCTP | + RTE_ETH_RSS_NONFRAG_IPV6_OTHER | + RTE_ETH_RSS_FRAG_IPV6; + static int idpf_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) @@ -59,6 +109,9 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->max_mtu = vport->max_mtu; dev_info->min_mtu = RTE_ETHER_MIN_MTU; + dev_info->hash_key_size = vport->rss_key_size; + dev_info->reta_size = vport->rss_lut_size; + dev_info->flow_type_rss_offloads = IDPF_RSS_OFFLOAD_ALL; dev_info->rx_offload_capa = @@ -221,6 +274,36 @@ idpf_dev_stats_reset(struct rte_eth_dev *dev) return 0; } +static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf) +{ + uint64_t hena = 0; + uint16_t i; + + /** + * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2 + * generalizations of all other IPv4 and IPv6 RSS types. + */ + if (rss_hf & RTE_ETH_RSS_IPV4) + rss_hf |= idpf_ipv4_rss; + + if (rss_hf & RTE_ETH_RSS_IPV6) + rss_hf |= idpf_ipv6_rss; + + for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) { + if (idpf_map_hena_rss[i] & rss_hf) + hena |= BIT_ULL(i); + } + + /** + * At present, cp doesn't process the virtual channel msg of rss_hf configuration, + * tips are given below. 
+ */ + if (hena != vport->rss_hf) + PMD_DRV_LOG(WARNING, "Updating RSS Hash Function is not supported at present."); + + return 0; +} + static int idpf_init_rss(struct idpf_vport *vport) { @@ -257,6 +340,187 @@ idpf_init_rss(struct idpf_vport *vport) return ret; } +static int +idpf_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct idpf_vport *vport = dev->data->dev_private; + struct idpf_adapter *adapter = vport->adapter; + uint16_t idx, shift; + int ret = 0; + uint16_t i; + + if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) { + PMD_DRV_LOG(DEBUG, "RSS is not supported"); + return -ENOTSUP; + } + + if (reta_size != vport->rss_lut_size) { + PMD_DRV_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number of hardware can " + "support (%d)", + reta_size, vport->rss_lut_size); + return -EINVAL; + } + + for (i = 0; i < reta_size; i++) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + shift = i % RTE_ETH_RETA_GROUP_SIZE; + if (reta_conf[idx].mask & (1ULL << shift)) + vport->rss_lut[i] = reta_conf[idx].reta[shift]; + } + + /* send virtchnl ops to configure RSS */ + ret = idpf_vc_rss_lut_set(vport); + if (ret) + PMD_INIT_LOG(ERR, "Failed to configure RSS lut"); + + return ret; +} + +static int +idpf_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct idpf_vport *vport = dev->data->dev_private; + struct idpf_adapter *adapter = vport->adapter; + uint16_t idx, shift; + int ret = 0; + uint16_t i; + + if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) { + PMD_DRV_LOG(DEBUG, "RSS is not supported"); + return -ENOTSUP; + } + + if (reta_size != vport->rss_lut_size) { + PMD_DRV_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number of hardware can " + "support (%d)", reta_size, vport->rss_lut_size); + return -EINVAL; + } + + ret = idpf_vc_rss_lut_get(vport); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to get RSS LUT"); + return ret; + } + + for (i = 0; i < reta_size; i++) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + shift = i % RTE_ETH_RETA_GROUP_SIZE; + if (reta_conf[idx].mask & (1ULL << shift)) + reta_conf[idx].reta[shift] = vport->rss_lut[i]; + } + + return 0; +} + +static int +idpf_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct idpf_vport *vport = dev->data->dev_private; + struct idpf_adapter *adapter = vport->adapter; + int ret = 0; + + if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) { + PMD_DRV_LOG(DEBUG, "RSS is not supported"); + return -ENOTSUP; + } + + if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) { + PMD_DRV_LOG(DEBUG, "No key to be configured"); + goto skip_rss_key; + } else if (rss_conf->rss_key_len != vport->rss_key_size) { + PMD_DRV_LOG(ERR, "The size of hash key configured " + "(%d) doesn't match the size of hardware can " + "support (%d)", + rss_conf->rss_key_len, + vport->rss_key_size); + return -EINVAL; + } + + rte_memcpy(vport->rss_key, rss_conf->rss_key, + vport->rss_key_size); + ret = idpf_vc_rss_key_set(vport); + if (ret != 0) { + PMD_INIT_LOG(ERR, "Failed to configure RSS key"); + return ret; + } + +skip_rss_key: + ret = idpf_config_rss_hf(vport, rss_conf->rss_hf); + if (ret != 0) { + PMD_INIT_LOG(ERR, "Failed to configure RSS hash"); + return ret; + } + + return 0; +} + +static uint64_t +idpf_map_general_rss_hf(uint64_t config_rss_hf, uint64_t last_general_rss_hf) +{ + uint64_t valid_rss_hf = 0; + uint16_t 
i; + + for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) { + uint64_t bit = BIT_ULL(i); + + if (bit & config_rss_hf) + valid_rss_hf |= idpf_map_hena_rss[i]; + } + + if (valid_rss_hf & idpf_ipv4_rss) + valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV4; + + if (valid_rss_hf & idpf_ipv6_rss) + valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV6; + + return valid_rss_hf; +} + +static int +idpf_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct idpf_vport *vport = dev->data->dev_private; + struct idpf_adapter *adapter = vport->adapter; + int ret = 0; + + if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) { + PMD_DRV_LOG(DEBUG, "RSS is not supported"); + return -ENOTSUP; + } + + ret = idpf_vc_rss_hash_get(vport); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to get RSS hf"); + return ret; + } + + rss_conf->rss_hf = idpf_map_general_rss_hf(vport->rss_hf, vport->last_general_rss_hf); + + if (!rss_conf->rss_key) + return 0; + + ret = idpf_vc_rss_key_get(vport); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to get RSS key"); + return ret; + } + + if (rss_conf->rss_key_len > vport->rss_key_size) + rss_conf->rss_key_len = vport->rss_key_size; + + rte_memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len); + + return 0; +} + static int idpf_dev_configure(struct rte_eth_dev *dev) {
@@ -692,6 +956,10 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_supported_ptypes_get = idpf_dev_supported_ptypes_get, .stats_get = idpf_dev_stats_get, .stats_reset = idpf_dev_stats_reset, + .reta_update = idpf_rss_reta_update, + .reta_query = idpf_rss_reta_query, + .rss_hash_update = idpf_rss_hash_update, + .rss_hash_conf_get = idpf_rss_hash_conf_get, }; static uint16_t
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index d791d402fb..839a2bd82c 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -48,7 +48,8 @@
RTE_ETH_RSS_NONFRAG_IPV6_TCP | \ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \ - RTE_ETH_RSS_NONFRAG_IPV6_OTHER) + RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \ + RTE_ETH_RSS_L2_PAYLOAD) #define IDPF_ADAPTER_NAME_LEN (PCI_PRI_STR_SIZE + 1)

From patchwork Wed Feb 8 07:33:58 2023
X-Patchwork-Submitter: "Liu, Mingxia"
X-Patchwork-Id: 123439
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Mingxia Liu
To: dev@dpdk.org
Cc: jingjing.wu@intel.com, beilei.xing@intel.com, Mingxia Liu, Wenjun Wu
Subject: [PATCH v7 3/6] net/idpf: support single q scatter RX datapath
Date: Wed, 8 Feb 2023 07:33:58 +0000
Message-Id: <20230208073401.2468579-4-mingxia.liu@intel.com>
In-Reply-To: <20230208073401.2468579-1-mingxia.liu@intel.com>
References: <20230207101650.2402452-1-mingxia.liu@intel.com> <20230208073401.2468579-1-mingxia.liu@intel.com>

This patch adds the single queue scatter Rx function.
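With RTE_ETH_RX_OFFLOAD_SCATTER, a frame larger than one Rx buffer is delivered as a chain of mbufs linked through m->next, where pkt_len covers the whole chain and data_len a single segment; that is exactly what the receive function below assembles via first_seg/last_seg. A small hedged consumer-side sketch (the helper name is illustrative):

#include <rte_mbuf.h>

/* Walk a scattered packet's segment chain; for a well-formed chain the
 * summed data_len values equal pkt->pkt_len. */
static uint32_t
count_chain_bytes(const struct rte_mbuf *pkt)
{
	uint32_t bytes = 0;
	const struct rte_mbuf *seg;

	for (seg = pkt; seg != NULL; seg = seg->next)
		bytes += seg->data_len;

	return bytes;
}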
Signed-off-by: Mingxia Liu
Signed-off-by: Wenjun Wu
---
 drivers/common/idpf/idpf_common_rxtx.c | 135 +++++++++++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h | 3 +
 drivers/common/idpf/version.map | 1 +
 drivers/net/idpf/idpf_ethdev.c | 3 +-
 drivers/net/idpf/idpf_rxtx.c | 28 +++++
 drivers/net/idpf/idpf_rxtx.h | 2 +
 6 files changed, 171 insertions(+), 1 deletion(-)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index fdac2c3114..9303b51cce 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1146,6 +1146,141 @@ idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_rx; } +uint16_t +idpf_dp_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct idpf_rx_queue *rxq = rx_queue; + volatile union virtchnl2_rx_desc *rx_ring = rxq->rx_ring; + volatile union virtchnl2_rx_desc *rxdp; + union virtchnl2_rx_desc rxd; + struct idpf_adapter *ad; + struct rte_mbuf *first_seg = rxq->pkt_first_seg; + struct rte_mbuf *last_seg = rxq->pkt_last_seg; + struct rte_mbuf *rxm; + struct rte_mbuf *nmb; + struct rte_eth_dev *dev; + const uint32_t *ptype_tbl = rxq->adapter->ptype_tbl; + uint16_t rx_id = rxq->rx_tail; + uint16_t rx_packet_len; + uint16_t nb_hold = 0; + uint16_t rx_status0; + uint16_t nb_rx = 0; + uint64_t pkt_flags; + uint64_t dma_addr; + uint64_t ts_ns; + + ad = rxq->adapter; + + if (unlikely(!rxq) || unlikely(!rxq->q_started)) + return nb_rx; + + while (nb_rx < nb_pkts) { + rxdp = &rx_ring[rx_id]; + rx_status0 = rte_le_to_cpu_16(rxdp->flex_nic_wb.status_error0); + + /* Check the DD bit first */ + if (!(rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S))) + break; + + nmb = rte_mbuf_raw_alloc(rxq->mp); + if (unlikely(!nmb)) { + __atomic_fetch_add(&rxq->rx_stats.mbuf_alloc_failed, 1, __ATOMIC_RELAXED); + RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, rxq->queue_id); + break; + } + + rxd = *rxdp; + + nb_hold++; + rxm = rxq->sw_ring[rx_id]; + rxq->sw_ring[rx_id] = nmb; + rx_id++; + if (unlikely(rx_id == rxq->nb_rx_desc)) + rx_id = 0; + +
/* Prefetch next mbuf */ + rte_prefetch0(rxq->sw_ring[rx_id]); + + /* When next RX descriptor is on a cache line boundary, + * prefetch the next 4 RX descriptors and next 8 pointers + * to mbufs. + */ + if ((rx_id & 0x3) == 0) { + rte_prefetch0(&rx_ring[rx_id]); + rte_prefetch0(rxq->sw_ring[rx_id]); + } + dma_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb)); + rxdp->read.hdr_addr = 0; + rxdp->read.pkt_addr = dma_addr; + rx_packet_len = (rte_cpu_to_le_16(rxd.flex_nic_wb.pkt_len) & + VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M); + rxm->data_len = rx_packet_len; + rxm->data_off = RTE_PKTMBUF_HEADROOM; + + /** + * If this is the first buffer of the received packet, set the + * pointer to the first mbuf of the packet and initialize its + * context. Otherwise, update the total length and the number + * of segments of the current scattered packet, and update the + * pointer to the last mbuf of the current packet. + */ + if (!first_seg) { + first_seg = rxm; + first_seg->nb_segs = 1; + first_seg->pkt_len = rx_packet_len; + } else { + first_seg->pkt_len = + (uint16_t)(first_seg->pkt_len + + rx_packet_len); + first_seg->nb_segs++; + last_seg->next = rxm; + } + + if (!(rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S))) { + last_seg = rxm; + continue; + } + + rxm->next = NULL; + + first_seg->port = rxq->port_id; + first_seg->ol_flags = 0; + pkt_flags = idpf_rxd_to_pkt_flags(rx_status0); + first_seg->packet_type = + ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) & + VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)]; + + if (idpf_timestamp_dynflag > 0 && + (rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0) { + /* timestamp */ + ts_ns = idpf_tstamp_convert_32b_64b(ad, + rxq->hw_register_set, + rte_le_to_cpu_32(rxd.flex_nic_wb.flex_ts.ts_high)); + rxq->hw_register_set = 0; + *RTE_MBUF_DYNFIELD(rxm, + idpf_timestamp_dynfield_offset, + rte_mbuf_timestamp_t *) = ts_ns; + first_seg->ol_flags |= idpf_timestamp_dynflag; + } + + first_seg->ol_flags |= pkt_flags; + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, + first_seg->data_off)); + rx_pkts[nb_rx++] = first_seg; + first_seg = NULL; + } + rxq->rx_tail = rx_id; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + idpf_update_rx_tail(rxq, nb_hold, rx_id); + + return nb_rx; +} + static inline int idpf_xmit_cleanup(struct idpf_tx_queue *txq) { diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h index 263dab061c..7e6df080e6 100644 --- a/drivers/common/idpf/idpf_common_rxtx.h +++ b/drivers/common/idpf/idpf_common_rxtx.h @@ -293,5 +293,8 @@ uint16_t idpf_dp_singleq_xmit_pkts_avx512(void *tx_queue, __rte_internal uint16_t idpf_dp_splitq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +__rte_internal +uint16_t idpf_dp_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); #endif /* _IDPF_COMMON_RXTX_H_ */ diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index f6c92e7e57..e31f6ff4d9 100644 --- a/drivers/common/idpf/version.map +++ b/drivers/common/idpf/version.map @@ -7,6 +7,7 @@ INTERNAL { idpf_dp_prep_pkts; idpf_dp_singleq_recv_pkts; idpf_dp_singleq_recv_pkts_avx512; + idpf_dp_singleq_recv_scatter_pkts; idpf_dp_singleq_xmit_pkts; idpf_dp_singleq_xmit_pkts_avx512; idpf_dp_splitq_recv_pkts; diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c index 7262109d0a..11f0ca0085 100644 --- a/drivers/net/idpf/idpf_ethdev.c +++ b/drivers/net/idpf/idpf_ethdev.c @@ -119,7 +119,8 @@ 
idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | - RTE_ETH_RX_OFFLOAD_TIMESTAMP; + RTE_ETH_RX_OFFLOAD_TIMESTAMP | + RTE_ETH_RX_OFFLOAD_SCATTER; dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 38d9829912..d16acd87fb 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -503,6 +503,8 @@ int idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
{ struct idpf_rx_queue *rxq; + uint16_t max_pkt_len; + uint32_t frame_size; int err; if (rx_queue_id >= dev->data->nb_rx_queues)
@@ -516,6 +518,17 @@ idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
return -EINVAL; } + frame_size = dev->data->mtu + IDPF_ETH_OVERHEAD; + + max_pkt_len = + RTE_MIN((uint32_t)IDPF_SUPPORT_CHAIN_NUM * rxq->rx_buf_len, + frame_size); + + rxq->max_pkt_len = max_pkt_len; + if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) || + frame_size > rxq->rx_buf_len) + dev->data->scattered_rx = 1; + err = idpf_qc_ts_mbuf_register(rxq); if (err != 0) { PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
@@ -807,6 +820,14 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
} #endif /* CC_AVX512_SUPPORT */ } + + if (dev->data->scattered_rx) { + PMD_DRV_LOG(NOTICE, + "Using Single Scalar Scattered Rx (port %d).", + dev->data->port_id); + dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts; + return; + } PMD_DRV_LOG(NOTICE, "Using Single Scalar Rx (port %d).", dev->data->port_id);
@@ -819,6 +840,13 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
dev->data->port_id); dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts; } else { + if (dev->data->scattered_rx) { + PMD_DRV_LOG(NOTICE, + "Using Single Scalar Scattered Rx (port %d).", + dev->data->port_id); + dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts; + return; + } PMD_DRV_LOG(NOTICE, "Using Single Scalar Rx (port %d).", dev->data->port_id);
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 3a5084dfd6..41a7495083 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -23,6 +23,8 @@
#define IDPF_DEFAULT_TX_RS_THRESH 32 #define IDPF_DEFAULT_TX_FREE_THRESH 32 +#define IDPF_SUPPORT_CHAIN_NUM 5 + int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_rxconf *rx_conf,

From patchwork Wed Feb 8 07:33:59 2023
X-Patchwork-Submitter: "Liu, Mingxia"
X-Patchwork-Id: 123438
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Mingxia Liu
To: dev@dpdk.org
Cc: jingjing.wu@intel.com, beilei.xing@intel.com, Mingxia Liu
Subject: [PATCH v7 4/6] net/idpf: add rss_offload hash in singleq rx
Date: Wed, 8 Feb 2023 07:33:59 +0000
Message-Id: <20230208073401.2468579-5-mingxia.liu@intel.com>
In-Reply-To: <20230208073401.2468579-1-mingxia.liu@intel.com>
References: <20230207101650.2402452-1-mingxia.liu@intel.com> <20230208073401.2468579-1-mingxia.liu@intel.com>

This patch adds RSS valid flag and hash value parsing of the Rx descriptor.
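Once the descriptor's RSS-valid bit and hash value are propagated into the mbuf, applications read them through the standard offload flag, checking the flag before touching hash.rss. A short hedged sketch (the helper name is illustrative):

#include <inttypes.h>
#include <stdio.h>
#include <rte_mbuf.h>

/* The driver sets RTE_MBUF_F_RX_RSS_HASH only when the descriptor's
 * RSS-valid bit was set, so the flag gates access to hash.rss. */
static void
print_rss_hash(const struct rte_mbuf *m)
{
	if (m->ol_flags & RTE_MBUF_F_RX_RSS_HASH)
		printf("rss hash: 0x%08" PRIx32 "\n", m->hash.rss);
	else
		printf("no rss hash for this packet\n");
}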
Signed-off-by: Mingxia Liu
---
 drivers/common/idpf/idpf_common_rxtx.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 9303b51cce..d7e8df1895 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1030,6 +1030,20 @@ idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
rxq->nb_rx_hold = nb_hold; } +static inline void +idpf_singleq_rx_rss_offload(struct rte_mbuf *mb, + volatile struct virtchnl2_rx_flex_desc_nic *rx_desc, + uint64_t *pkt_flags) +{ + uint16_t rx_status0 = rte_le_to_cpu_16(rx_desc->status_error0); + + if (rx_status0 & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S)) { + *pkt_flags |= RTE_MBUF_F_RX_RSS_HASH; + mb->hash.rss = rte_le_to_cpu_32(rx_desc->rss_hash); + } + +} + uint16_t idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
@@ -1118,6 +1132,7 @@ idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->port = rxq->port_id; rxm->ol_flags = 0; pkt_flags = idpf_rxd_to_pkt_flags(rx_status0); + idpf_singleq_rx_rss_offload(rxm, &rxd.flex_nic_wb, &pkt_flags); rxm->packet_type = ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) & VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
@@ -1249,6 +1264,7 @@ idpf_dp_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
first_seg->port = rxq->port_id; first_seg->ol_flags = 0; pkt_flags = idpf_rxd_to_pkt_flags(rx_status0); + idpf_singleq_rx_rss_offload(first_seg, &rxd.flex_nic_wb, &pkt_flags); first_seg->packet_type = ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) & VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
From patchwork Wed Feb 8 07:34:00 2023
X-Patchwork-Submitter: "Liu, Mingxia"
X-Patchwork-Id: 123441
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Mingxia Liu
To: dev@dpdk.org
Cc: jingjing.wu@intel.com, beilei.xing@intel.com, Mingxia Liu
Subject: [PATCH v7 5/6] net/idpf: add alarm to support handle vchnl message
Date: Wed, 8 Feb 2023 07:34:00 +0000
Message-Id: <20230208073401.2468579-6-mingxia.liu@intel.com>
In-Reply-To: <20230208073401.2468579-1-mingxia.liu@intel.com>
References: <20230207101650.2402452-1-mingxia.liu@intel.com> <20230208073401.2468579-1-mingxia.liu@intel.com>

Handle virtual channel messages. Refine link status update.
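The driver polls the mailbox from an EAL alarm instead of an interrupt, and the callback re-arms itself on every pass because rte_eal_alarm_set() is one-shot. A minimal hedged sketch of that self-rearming pattern (the callback body and names are illustrative; the 50000 us period matches IDPF_ALARM_INTERVAL introduced below):

#include <rte_alarm.h>

#define POLL_INTERVAL_US 50000 /* same period as IDPF_ALARM_INTERVAL */

static void
mbox_poll_cb(void *arg)
{
	/* ... read and dispatch pending mailbox messages here ... */

	/* Re-arm: rte_eal_alarm_set() fires once, so a periodic poll
	 * must schedule itself again at the end of each callback. */
	rte_eal_alarm_set(POLL_INTERVAL_US, mbox_poll_cb, arg);
}

static void
start_mbox_polling(void *adapter)
{
	rte_eal_alarm_set(POLL_INTERVAL_US, mbox_poll_cb, adapter);
}

static void
stop_mbox_polling(void *adapter)
{
	/* Cancels all pending instances of this (callback, arg) pair. */
	rte_eal_alarm_cancel(mbox_poll_cb, adapter);
}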
Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/common/idpf/idpf_common_device.h | 5 + drivers/common/idpf/idpf_common_virtchnl.c | 33 ++-- drivers/common/idpf/idpf_common_virtchnl.h | 6 + drivers/common/idpf/version.map | 2 + drivers/net/idpf/idpf_ethdev.c | 169 ++++++++++++++++++++- drivers/net/idpf/idpf_ethdev.h | 2 + 6 files changed, 195 insertions(+), 22 deletions(-) diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h index 7abc4d2a3a..364a60221a 100644 --- a/drivers/common/idpf/idpf_common_device.h +++ b/drivers/common/idpf/idpf_common_device.h @@ -118,6 +118,11 @@ struct idpf_vport { bool tx_use_avx512; struct virtchnl2_vport_stats eth_stats_offset; + + void *dev; + /* Event from ipf */ + bool link_up; + uint32_t link_speed; }; /* Message type read in virtual channel from PF */ diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c index 10cfa33704..99d9efbb7c 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.c +++ b/drivers/common/idpf/idpf_common_virtchnl.c @@ -202,25 +202,6 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args) switch (args->ops) { case VIRTCHNL_OP_VERSION: case VIRTCHNL2_OP_GET_CAPS: - case VIRTCHNL2_OP_CREATE_VPORT: - case VIRTCHNL2_OP_DESTROY_VPORT: - case VIRTCHNL2_OP_SET_RSS_KEY: - case VIRTCHNL2_OP_SET_RSS_LUT: - case VIRTCHNL2_OP_SET_RSS_HASH: - case VIRTCHNL2_OP_CONFIG_RX_QUEUES: - case VIRTCHNL2_OP_CONFIG_TX_QUEUES: - case VIRTCHNL2_OP_ENABLE_QUEUES: - case VIRTCHNL2_OP_DISABLE_QUEUES: - case VIRTCHNL2_OP_ENABLE_VPORT: - case VIRTCHNL2_OP_DISABLE_VPORT: - case VIRTCHNL2_OP_MAP_QUEUE_VECTOR: - case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR: - case VIRTCHNL2_OP_ALLOC_VECTORS: - case VIRTCHNL2_OP_DEALLOC_VECTORS: - case VIRTCHNL2_OP_GET_STATS: - case VIRTCHNL2_OP_GET_RSS_KEY: - case VIRTCHNL2_OP_GET_RSS_HASH: - case VIRTCHNL2_OP_GET_RSS_LUT: /* for init virtchnl ops, need to poll the response */ err = idpf_vc_one_msg_read(adapter, args->ops, args->out_size, args->out_buffer); clear_cmd(adapter); @@ -1111,3 +1092,17 @@ idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq) return err; } + +int +idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg, + struct idpf_ctlq_msg *q_msg) +{ + return idpf_ctlq_recv(cq, num_q_msg, q_msg); +} + +int +idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq, + u16 *buff_count, struct idpf_dma_mem **buffs) +{ + return idpf_ctlq_post_rx_buffs(hw, cq, buff_count, buffs); +} diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h index 205d1a932d..d479d93c8e 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.h +++ b/drivers/common/idpf/idpf_common_virtchnl.h @@ -58,4 +58,10 @@ __rte_internal int idpf_vc_rss_lut_get(struct idpf_vport *vport); __rte_internal int idpf_vc_rss_hash_get(struct idpf_vport *vport); +__rte_internal +int idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg, + struct idpf_ctlq_msg *q_msg); +__rte_internal +int idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq, + u16 *buff_count, struct idpf_dma_mem **buffs); #endif /* _IDPF_COMMON_VIRTCHNL_H_ */ diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index e31f6ff4d9..70334a1b03 100644 --- a/drivers/common/idpf/version.map +++ b/drivers/common/idpf/version.map @@ -38,6 +38,8 @@ INTERNAL { idpf_vc_api_version_check; idpf_vc_caps_get; idpf_vc_cmd_execute; + idpf_vc_ctlq_post_rx_buffs; + 
idpf_vc_ctlq_recv; idpf_vc_irq_map_unmap_config; idpf_vc_one_msg_read; idpf_vc_ptype_info_query; diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c index 11f0ca0085..751c0d8717 100644 --- a/drivers/net/idpf/idpf_ethdev.c +++ b/drivers/net/idpf/idpf_ethdev.c @@ -9,6 +9,7 @@ #include #include #include +#include #include "idpf_ethdev.h" #include "idpf_rxtx.h" @@ -83,14 +84,51 @@ static int idpf_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) { + struct idpf_vport *vport = dev->data->dev_private; struct rte_eth_link new_link; memset(&new_link, 0, sizeof(new_link)); - new_link.link_speed = RTE_ETH_SPEED_NUM_NONE; + switch (vport->link_speed) { + case RTE_ETH_SPEED_NUM_10M: + new_link.link_speed = RTE_ETH_SPEED_NUM_10M; + break; + case RTE_ETH_SPEED_NUM_100M: + new_link.link_speed = RTE_ETH_SPEED_NUM_100M; + break; + case RTE_ETH_SPEED_NUM_1G: + new_link.link_speed = RTE_ETH_SPEED_NUM_1G; + break; + case RTE_ETH_SPEED_NUM_10G: + new_link.link_speed = RTE_ETH_SPEED_NUM_10G; + break; + case RTE_ETH_SPEED_NUM_20G: + new_link.link_speed = RTE_ETH_SPEED_NUM_20G; + break; + case RTE_ETH_SPEED_NUM_25G: + new_link.link_speed = RTE_ETH_SPEED_NUM_25G; + break; + case RTE_ETH_SPEED_NUM_40G: + new_link.link_speed = RTE_ETH_SPEED_NUM_40G; + break; + case RTE_ETH_SPEED_NUM_50G: + new_link.link_speed = RTE_ETH_SPEED_NUM_50G; + break; + case RTE_ETH_SPEED_NUM_100G: + new_link.link_speed = RTE_ETH_SPEED_NUM_100G; + break; + case RTE_ETH_SPEED_NUM_200G: + new_link.link_speed = RTE_ETH_SPEED_NUM_200G; + break; + default: + new_link.link_speed = RTE_ETH_SPEED_NUM_NONE; + } + new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX; - new_link.link_autoneg = !(dev->data->dev_conf.link_speeds & - RTE_ETH_LINK_SPEED_FIXED); + new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP : + RTE_ETH_LINK_DOWN; + new_link.link_autoneg = (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) ? 
+ RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG; return rte_eth_linkstatus_set(dev, &new_link); } @@ -891,6 +929,127 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adap return ret; } +static struct idpf_vport * +idpf_find_vport(struct idpf_adapter_ext *adapter, uint32_t vport_id) +{ + struct idpf_vport *vport = NULL; + int i; + + for (i = 0; i < adapter->cur_vport_nb; i++) { + vport = adapter->vports[i]; + if (vport->vport_id != vport_id) + continue; + else + return vport; + } + + return vport; +} + +static void +idpf_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen) +{ + struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg; + struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev; + + if (msglen < sizeof(struct virtchnl2_event)) { + PMD_DRV_LOG(ERR, "Error event"); + return; + } + + switch (vc_event->event) { + case VIRTCHNL2_EVENT_LINK_CHANGE: + PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE"); + vport->link_up = !!(vc_event->link_status); + vport->link_speed = vc_event->link_speed; + idpf_dev_link_update(dev, 0); + break; + default: + PMD_DRV_LOG(ERR, " unknown event received %u", vc_event->event); + break; + } +} + +static void +idpf_handle_virtchnl_msg(struct idpf_adapter_ext *adapter_ex) +{ + struct idpf_adapter *adapter = &adapter_ex->base; + struct idpf_dma_mem *dma_mem = NULL; + struct idpf_hw *hw = &adapter->hw; + struct virtchnl2_event *vc_event; + struct idpf_ctlq_msg ctlq_msg; + enum idpf_mbx_opc mbx_op; + struct idpf_vport *vport; + enum virtchnl_ops vc_op; + uint16_t pending = 1; + int ret; + + while (pending) { + ret = idpf_vc_ctlq_recv(hw->arq, &pending, &ctlq_msg); + if (ret) { + PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret); + return; + } + + rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va, + IDPF_DFLT_MBX_BUF_SIZE); + + mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode); + vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode); + adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval); + + switch (mbx_op) { + case idpf_mbq_opc_send_msg_to_peer_pf: + if (vc_op == VIRTCHNL2_OP_EVENT) { + if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) { + PMD_DRV_LOG(ERR, "Error event"); + return; + } + vc_event = (struct virtchnl2_event *)adapter->mbx_resp; + vport = idpf_find_vport(adapter_ex, vc_event->vport_id); + if (!vport) { + PMD_DRV_LOG(ERR, "Can't find vport."); + return; + } + idpf_handle_event_msg(vport, adapter->mbx_resp, + ctlq_msg.data_len); + } else { + if (vc_op == adapter->pend_cmd) + notify_cmd(adapter, adapter->cmd_retval); + else + PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u", + adapter->pend_cmd, vc_op); + + PMD_DRV_LOG(DEBUG, " Virtual channel response is received," + "opcode = %d", vc_op); + } + goto post_buf; + default: + PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op); + } + } + +post_buf: + if (ctlq_msg.data_len) + dma_mem = ctlq_msg.ctx.indirect.payload; + else + pending = 0; + + ret = idpf_vc_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem); + if (ret && dma_mem) + idpf_free_dma_mem(hw, dma_mem); +} + +static void +idpf_dev_alarm_handler(void *param) +{ + struct idpf_adapter_ext *adapter = param; + + idpf_handle_virtchnl_msg(adapter); + + rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter); +} + static int idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter) { @@ -913,6 +1072,8 @@ idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct 
idpf_adapter_ext *a
goto err_adapter_init; } + rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter); + adapter->max_vport_nb = adapter->base.caps.max_vports; adapter->vports = rte_zmalloc("vports",
@@ -996,6 +1157,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
vport->adapter = &adapter->base; vport->sw_idx = param->idx; vport->devarg_id = param->devarg_id; + vport->dev = dev; memset(&create_vport_info, 0, sizeof(create_vport_info)); ret = idpf_vport_info_init(vport, &create_vport_info);
@@ -1065,6 +1227,7 @@ idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
static void idpf_adapter_ext_deinit(struct idpf_adapter_ext *adapter) { + rte_eal_alarm_cancel(idpf_dev_alarm_handler, adapter); idpf_adapter_deinit(&adapter->base); rte_free(adapter->vports);
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 839a2bd82c..3c2c932438 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -53,6 +53,8 @@
#define IDPF_ADAPTER_NAME_LEN (PCI_PRI_STR_SIZE + 1) +#define IDPF_ALARM_INTERVAL 50000 /* us */ + struct idpf_vport_param { struct idpf_adapter_ext *adapter; uint16_t devarg_id; /* arg id from user */

From patchwork Wed Feb 8 07:34:01 2023
X-Patchwork-Submitter: "Liu, Mingxia"
X-Patchwork-Id: 123440
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Mingxia Liu
To: dev@dpdk.org
Cc: jingjing.wu@intel.com, beilei.xing@intel.com, Mingxia Liu
Subject: [PATCH v7 6/6] net/idpf: add xstats ops
Date: Wed, 8 Feb 2023 07:34:01 +0000
Message-Id: <20230208073401.2468579-7-mingxia.liu@intel.com>
In-Reply-To: <20230208073401.2468579-1-mingxia.liu@intel.com>
References: <20230207101650.2402452-1-mingxia.liu@intel.com> <20230208073401.2468579-1-mingxia.liu@intel.com>

Add support for these device ops:
- idpf_dev_xstats_get
- idpf_dev_xstats_get_names
- idpf_dev_xstats_reset
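These xstats become visible to applications through the generic two-step retrieval pattern: size the arrays with a first call passing NULL, then fetch names and values. A hedged application-side sketch (the helper name is illustrative):

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

static void
dump_xstats(uint16_t port_id)
{
	/* First call with NULL returns the number of available xstats. */
	int n = rte_eth_xstats_get_names(port_id, NULL, 0);
	if (n <= 0)
		return;

	struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));
	struct rte_eth_xstat *vals = calloc(n, sizeof(*vals));
	if (names == NULL || vals == NULL)
		goto out;

	if (rte_eth_xstats_get_names(port_id, names, n) == n &&
	    rte_eth_xstats_get(port_id, vals, n) == n) {
		for (int i = 0; i < n; i++)
			printf("%s: %" PRIu64 "\n", names[i].name, vals[i].value);
	}
out:
	free(names);
	free(vals);
}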
Signed-off-by: Mingxia Liu
---
 drivers/net/idpf/idpf_ethdev.c | 80 ++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 751c0d8717..38cbbf369d 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -80,6 +80,30 @@ static const uint64_t idpf_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_FRAG_IPV6; +struct rte_idpf_xstats_name_off { + char name[RTE_ETH_XSTATS_NAME_SIZE]; + unsigned int offset; +}; + +static const struct rte_idpf_xstats_name_off rte_idpf_stats_strings[] = { + {"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)}, + {"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats, rx_unicast)}, + {"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats, rx_multicast)}, + {"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, rx_broadcast)}, + {"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats, rx_discards)}, + {"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)}, + {"rx_unknown_protocol_packets", offsetof(struct virtchnl2_vport_stats, + rx_unknown_protocol)}, + {"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)}, + {"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats, tx_unicast)}, + {"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats, tx_multicast)}, + {"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, tx_broadcast)}, + {"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats, tx_discards)}, + {"tx_error_packets", offsetof(struct virtchnl2_vport_stats, tx_errors)}}; + +#define IDPF_NB_XSTATS (sizeof(rte_idpf_stats_strings) / \ + sizeof(rte_idpf_stats_strings[0])) + static int idpf_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
@@ -313,6 +337,59 @@ idpf_dev_stats_reset(struct rte_eth_dev *dev)
return 0; } +static int idpf_dev_xstats_reset(struct rte_eth_dev *dev) +{ + idpf_dev_stats_reset(dev); + return 0; +} + +static int idpf_dev_xstats_get(struct rte_eth_dev *dev, + struct rte_eth_xstat *xstats, unsigned int n) +{ + struct idpf_vport *vport = + (struct idpf_vport *)dev->data->dev_private; + struct virtchnl2_vport_stats *pstats = NULL; + unsigned int i; + int ret; + + if (n < IDPF_NB_XSTATS) + return IDPF_NB_XSTATS; + + if (!xstats) + return 0; + + ret = idpf_vc_stats_query(vport, &pstats); + if (ret) { + PMD_DRV_LOG(ERR, "Get statistics failed"); + return 0; + } + + idpf_vport_stats_update(&vport->eth_stats_offset, pstats); + + /* loop over xstats array and values from pstats */ + for (i = 0; i < IDPF_NB_XSTATS; i++) { + xstats[i].id = i; + xstats[i].value = *(uint64_t *)(((char *)pstats) + + rte_idpf_stats_strings[i].offset); + } + return IDPF_NB_XSTATS; +} + +static int idpf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev, + struct rte_eth_xstat_name *xstats_names, + __rte_unused unsigned int limit) +{ + unsigned int i; + + if (xstats_names) + for (i = 0; i < IDPF_NB_XSTATS; i++) { + snprintf(xstats_names[i].name, + sizeof(xstats_names[i].name), + "%s",
rte_idpf_stats_strings[i].name); + } + return IDPF_NB_XSTATS; +} + static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf) { uint64_t hena = 0; @@ -1122,6 +1199,9 @@ static const struct eth_dev_ops idpf_eth_dev_ops = { .reta_query = idpf_rss_reta_query, .rss_hash_update = idpf_rss_hash_update, .rss_hash_conf_get = idpf_rss_hash_conf_get, + .xstats_get = idpf_dev_xstats_get, + .xstats_get_names = idpf_dev_xstats_get_names, + .xstats_reset = idpf_dev_xstats_reset, }; static uint16_t