From patchwork Wed Sep 29 20:57:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ivan Malov X-Patchwork-Id: 100021 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id F3F5CA0032; Wed, 29 Sep 2021 22:57:50 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CBE98410FC; Wed, 29 Sep 2021 22:57:46 +0200 (CEST) Received: from shelob.oktetlabs.ru (shelob.oktetlabs.ru [91.220.146.113]) by mails.dpdk.org (Postfix) with ESMTP id A1D9F410EF for ; Wed, 29 Sep 2021 22:57:44 +0200 (CEST) Received: from localhost.localdomain (unknown [5.144.122.192]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by shelob.oktetlabs.ru (Postfix) with ESMTPSA id 44CC37F5FD; Wed, 29 Sep 2021 23:57:44 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 shelob.oktetlabs.ru 44CC37F5FD DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=oktetlabs.ru; s=default; t=1632949064; bh=2PMgdsbKfxuzAYmT2PK+q9Z9jwGP5z/44yo1iyc3kc0=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=gne/g0oH8BunDrgOdQ+5boCQA9hJHvzgktSv/lqJGXmjLSh5Kj7rgdEdISUcjZI++ qyfSeUZZI2ez/3hrgbU7ggN3a5oFto1ZWV0a5BFMgORdcMkIQPwe9Qpq16Pxlh0Ob9 3f+RwKLzlGGZva5LAxjCl9MpvcJr0Dt0RT1hee5k= From: Ivan Malov To: dev@dpdk.org Cc: Andrew Rybchenko , Andy Moreton Date: Wed, 29 Sep 2021 23:57:21 +0300 Message-Id: <20210929205730.775-2-ivan.malov@oktetlabs.ru> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210929205730.775-1-ivan.malov@oktetlabs.ru> References: <20210929205730.775-1-ivan.malov@oktetlabs.ru> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 01/10] net/sfc: fence off 8 bits in Rx mark for tunnel offload X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Later patches add support for tunnel offload on Riverhead (EF100). A board can host at most 254 tunnels. Partially offloaded (missed) tunnel packets are identified by virtue of 8 high bits in Rx mark. Add basic definitions of the upcoming tunnel offload support and take care of the dedicated bits in Rx mark across the driver. 
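[Editorial note] The partitioning of the 32-bit Rx mark described above maps directly onto the SFC_FT_* macros introduced in sfc_flow_tunnel.h below: 8 high bits carry the tunnel mark (value 0 meaning "no tunnel", hence at most 254 usable tunnel IDs), and the remaining 24 bits stay available as the user mark delivered via MARK/FLAG. The following standalone sketch is an illustration of that arithmetic only, not part of the patch; the FT_* names and the example value are placeholders mirroring the driver macros.

```c
/*
 * Illustration only: how a 32-bit Rx mark splits into a tunnel mark
 * (high 8 bits) and a user mark (low 24 bits), mirroring the
 * SFC_FT_TUNNEL_MARK_BITS / SFC_FT_USER_MARK_* macros added below.
 */
#include <limits.h>
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

#define FT_TUNNEL_MARK_BITS	(sizeof(uint8_t) * CHAR_BIT)	/* 8 */
#define FT_USER_MARK_BITS	\
	(sizeof(uint32_t) * CHAR_BIT - FT_TUNNEL_MARK_BITS)	/* 24 */
#define FT_USER_MARK_MASK	\
	((UINT32_C(1) << FT_USER_MARK_BITS) - 1)		/* 0x00FFFFFF */

int main(void)
{
	uint32_t rx_mark = 0x2A00002B;	/* arbitrary example value */

	/* High 8 bits: tunnel mark; 0 is invalid, 1..254 map to tunnel IDs. */
	uint32_t tunnel_mark = rx_mark >> FT_USER_MARK_BITS;

	/* Low 24 bits: value still reported to the user as the flow mark. */
	uint32_t user_mark = rx_mark & FT_USER_MARK_MASK;

	printf("tunnel_mark=%" PRIu32 " user_mark=%" PRIu32 "\n",
	       tunnel_mark, user_mark);
	return 0;
}
```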
Signed-off-by: Ivan Malov Reviewed-by: Andrew Rybchenko Reviewed-by: Andy Moreton --- drivers/net/sfc/meson.build | 1 + drivers/net/sfc/sfc_dp_rx.h | 3 +++ drivers/net/sfc/sfc_ef100_rx.c | 14 ++++++++++-- drivers/net/sfc/sfc_ethdev.c | 4 ++++ drivers/net/sfc/sfc_flow.c | 8 ++++++- drivers/net/sfc/sfc_flow_tunnel.c | 29 +++++++++++++++++++++++ drivers/net/sfc/sfc_flow_tunnel.h | 38 +++++++++++++++++++++++++++++++ drivers/net/sfc/sfc_mae.c | 6 +++++ drivers/net/sfc/sfc_rx.c | 10 +++++++- 9 files changed, 109 insertions(+), 4 deletions(-) create mode 100644 drivers/net/sfc/sfc_flow_tunnel.c create mode 100644 drivers/net/sfc/sfc_flow_tunnel.h diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build index 948c65968a..6a8ccd564b 100644 --- a/drivers/net/sfc/meson.build +++ b/drivers/net/sfc/meson.build @@ -90,6 +90,7 @@ sources = files( 'sfc_mae.c', 'sfc_mae_counter.c', 'sfc_flow.c', + 'sfc_flow_tunnel.c', 'sfc_dp.c', 'sfc_ef10_rx.c', 'sfc_ef10_essb_rx.c', diff --git a/drivers/net/sfc/sfc_dp_rx.h b/drivers/net/sfc/sfc_dp_rx.h index b6c44085ce..1be301ced6 100644 --- a/drivers/net/sfc/sfc_dp_rx.h +++ b/drivers/net/sfc/sfc_dp_rx.h @@ -92,6 +92,9 @@ struct sfc_dp_rx_qcreate_info { efsys_dma_addr_t fcw_offset; /** VI window size shift */ unsigned int vi_window_shift; + + /** Mask to extract user bits from Rx prefix mark field */ + uint32_t user_mark_mask; }; /** diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c index 7d0d6b3d00..3219c972db 100644 --- a/drivers/net/sfc/sfc_ef100_rx.c +++ b/drivers/net/sfc/sfc_ef100_rx.c @@ -20,7 +20,9 @@ #include "efx_regs_ef100.h" #include "efx.h" +#include "sfc.h" #include "sfc_debug.h" +#include "sfc_flow_tunnel.h" #include "sfc_tweak.h" #include "sfc_dp_rx.h" #include "sfc_kvargs.h" @@ -74,6 +76,7 @@ struct sfc_ef100_rxq { uint64_t rearm_data; uint16_t buf_size; uint16_t prefix_size; + uint32_t user_mark_mask; unsigned int evq_hw_index; volatile void *evq_prime; @@ -420,10 +423,13 @@ sfc_ef100_rx_prefix_to_offloads(const struct sfc_ef100_rxq *rxq, if (rxq->flags & SFC_EF100_RXQ_USER_MARK) { uint32_t user_mark; + uint32_t mark; /* EFX_OWORD_FIELD converts little-endian to CPU */ - user_mark = EFX_OWORD_FIELD(rx_prefix[0], - ESF_GZ_RX_PREFIX_USER_MARK); + mark = EFX_OWORD_FIELD(rx_prefix[0], + ESF_GZ_RX_PREFIX_USER_MARK); + + user_mark = mark & rxq->user_mark_mask; if (user_mark != SFC_EF100_USER_MARK_INVALID) { ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID; m->hash.fdir.hi = user_mark; @@ -745,6 +751,10 @@ sfc_ef100_rx_qcreate(uint16_t port_id, uint16_t queue_id, rxq->max_fill_level = info->max_fill_level; rxq->refill_threshold = info->refill_threshold; rxq->prefix_size = info->prefix_size; + + SFC_ASSERT(info->user_mark_mask != 0); + rxq->user_mark_mask = info->user_mark_mask; + rxq->buf_size = info->buf_size; rxq->refill_mb_pool = info->refill_mb_pool; rxq->rxq_hw_ring = info->rxq_hw_ring; diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c index 75c3da2e52..ab25674929 100644 --- a/drivers/net/sfc/sfc_ethdev.c +++ b/drivers/net/sfc/sfc_ethdev.c @@ -26,6 +26,7 @@ #include "sfc_rx.h" #include "sfc_tx.h" #include "sfc_flow.h" +#include "sfc_flow_tunnel.h" #include "sfc_dp.h" #include "sfc_dp_rx.h" #include "sfc_sw_stats.h" @@ -1873,6 +1874,9 @@ sfc_rx_meta_negotiate(struct rte_eth_dev *dev, uint64_t *features) if ((sa->priv.dp_rx->features & SFC_DP_RX_FEAT_FLOW_MARK) != 0) supported |= RTE_ETH_RX_META_USER_MARK; + if (sfc_flow_tunnel_is_supported(sa)) + supported |= RTE_ETH_RX_META_TUNNEL_ID; + 
sa->negotiated_rx_meta = supported & *features; *features = sa->negotiated_rx_meta; diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c index 57cf1ad02b..7510cbb95b 100644 --- a/drivers/net/sfc/sfc_flow.c +++ b/drivers/net/sfc/sfc_flow.c @@ -22,6 +22,7 @@ #include "sfc_rx.h" #include "sfc_filter.h" #include "sfc_flow.h" +#include "sfc_flow_tunnel.h" #include "sfc_log.h" #include "sfc_dp_rx.h" #include "sfc_mae_counter.h" @@ -1740,8 +1741,13 @@ sfc_flow_parse_mark(struct sfc_adapter *sa, struct sfc_flow_spec *spec = &flow->spec; struct sfc_flow_spec_filter *spec_filter = &spec->filter; const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic); + uint32_t mark_max; - if (mark == NULL || mark->id > encp->enc_filter_action_mark_max) + mark_max = encp->enc_filter_action_mark_max; + if (sfc_flow_tunnel_is_active(sa)) + mark_max = RTE_MIN(mark_max, SFC_FT_USER_MARK_MASK); + + if (mark == NULL || mark->id > mark_max) return EINVAL; spec_filter->template.efs_flags |= EFX_FILTER_FLAG_ACTION_MARK; diff --git a/drivers/net/sfc/sfc_flow_tunnel.c b/drivers/net/sfc/sfc_flow_tunnel.c new file mode 100644 index 0000000000..06b4a27a65 --- /dev/null +++ b/drivers/net/sfc/sfc_flow_tunnel.c @@ -0,0 +1,29 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * Copyright(c) 2021 Xilinx, Inc. + */ + +#include +#include + +#include "sfc.h" +#include "sfc_dp_rx.h" +#include "sfc_flow_tunnel.h" +#include "sfc_mae.h" + +bool +sfc_flow_tunnel_is_supported(struct sfc_adapter *sa) +{ + SFC_ASSERT(sfc_adapter_is_locked(sa)); + + return ((sa->priv.dp_rx->features & SFC_DP_RX_FEAT_FLOW_MARK) != 0 && + sa->mae.status == SFC_MAE_STATUS_SUPPORTED); +} + +bool +sfc_flow_tunnel_is_active(struct sfc_adapter *sa) +{ + SFC_ASSERT(sfc_adapter_is_locked(sa)); + + return ((sa->negotiated_rx_meta & RTE_ETH_RX_META_TUNNEL_ID) != 0); +} diff --git a/drivers/net/sfc/sfc_flow_tunnel.h b/drivers/net/sfc/sfc_flow_tunnel.h new file mode 100644 index 0000000000..fec891fdad --- /dev/null +++ b/drivers/net/sfc/sfc_flow_tunnel.h @@ -0,0 +1,38 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * Copyright(c) 2021 Xilinx, Inc. 
+ */ + +#ifndef _SFC_FLOW_TUNNEL_H +#define _SFC_FLOW_TUNNEL_H + +#include +#include +#include + +#ifdef __cplusplus +extern "C" { +#endif + +/** Flow Tunnel (FT) SW entry ID */ +typedef uint8_t sfc_ft_id_t; + +#define SFC_FT_TUNNEL_MARK_BITS \ + (sizeof(sfc_ft_id_t) * CHAR_BIT) + +#define SFC_FT_USER_MARK_BITS \ + (sizeof(uint32_t) * CHAR_BIT - SFC_FT_TUNNEL_MARK_BITS) + +#define SFC_FT_USER_MARK_MASK \ + RTE_LEN2MASK(SFC_FT_USER_MARK_BITS, uint32_t) + +struct sfc_adapter; + +bool sfc_flow_tunnel_is_supported(struct sfc_adapter *sa); + +bool sfc_flow_tunnel_is_active(struct sfc_adapter *sa); + +#ifdef __cplusplus +} +#endif +#endif /* _SFC_FLOW_TUNNEL_H */ diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c index 5ecad7347a..2b80492e59 100644 --- a/drivers/net/sfc/sfc_mae.c +++ b/drivers/net/sfc/sfc_mae.c @@ -16,6 +16,7 @@ #include "efx.h" #include "sfc.h" +#include "sfc_flow_tunnel.h" #include "sfc_mae_counter.h" #include "sfc_log.h" #include "sfc_switch.h" @@ -2793,6 +2794,11 @@ sfc_mae_rule_parse_action_mark(struct sfc_adapter *sa, { int rc; + if (conf->id > SFC_FT_USER_MARK_MASK) { + sfc_err(sa, "the mark value is too large"); + return EINVAL; + } + rc = efx_mae_action_set_populate_mark(spec, conf->id); if (rc != 0) sfc_err(sa, "failed to request action MARK: %s", strerror(rc)); diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c index a3331c5089..3ef43d7c38 100644 --- a/drivers/net/sfc/sfc_rx.c +++ b/drivers/net/sfc/sfc_rx.c @@ -13,6 +13,7 @@ #include "sfc.h" #include "sfc_debug.h" +#include "sfc_flow_tunnel.h" #include "sfc_log.h" #include "sfc_ev.h" #include "sfc_rx.h" @@ -1181,7 +1182,8 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index, if ((sa->negotiated_rx_meta & RTE_ETH_RX_META_USER_FLAG) != 0) rxq_info->type_flags |= EFX_RXQ_FLAG_USER_FLAG; - if ((sa->negotiated_rx_meta & RTE_ETH_RX_META_USER_MARK) != 0) + if ((sa->negotiated_rx_meta & RTE_ETH_RX_META_USER_MARK) != 0 || + sfc_flow_tunnel_is_active(sa)) rxq_info->type_flags |= EFX_RXQ_FLAG_USER_MARK; rc = sfc_ev_qinit(sa, SFC_EVQ_TYPE_RX, sw_index, @@ -1231,6 +1233,12 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index, info.buf_size = buf_size; info.batch_max = encp->enc_rx_batch_max; info.prefix_size = encp->enc_rx_prefix_size; + + if (sfc_flow_tunnel_is_active(sa)) + info.user_mark_mask = SFC_FT_USER_MARK_MASK; + else + info.user_mark_mask = UINT32_MAX; + info.flags = rxq_info->rxq_flags; info.rxq_entries = rxq_info->entries; info.rxq_hw_ring = rxq->mem.esm_base; From patchwork Wed Sep 29 20:57:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ivan Malov X-Patchwork-Id: 100022 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id EA436A0032; Wed, 29 Sep 2021 22:57:56 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C85FE41101; Wed, 29 Sep 2021 22:57:47 +0200 (CEST) Received: from shelob.oktetlabs.ru (shelob.oktetlabs.ru [91.220.146.113]) by mails.dpdk.org (Postfix) with ESMTP id EAC0E410EF for ; Wed, 29 Sep 2021 22:57:44 +0200 (CEST) Received: from localhost.localdomain (unknown [5.144.122.192]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client 
certificate requested) by shelob.oktetlabs.ru (Postfix) with ESMTPSA id 8215C7F6D0; Wed, 29 Sep 2021 23:57:44 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 shelob.oktetlabs.ru 8215C7F6D0 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=oktetlabs.ru; s=default; t=1632949064; bh=Dgq79iVupfr6ns8tPilkD/wdMaP1Usb9Hsy0jj/AwgM=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=Cpy+0E/B9atYckHb6wPnNrSXMAWbAR+JRJh9G9Dfbvgx0FDH0T7uAlW8/iMRIxyB1 zy+DlOD4Ii5eeMrC4aE1hEeUVyMa9oWxlgN+bbO1MMsU9c5ABsW+TqYOmhPL/k9JBF bp5dId6xMKNu8GTvyGL9KjeArdfy6l6BUEljnTuU= From: Ivan Malov To: dev@dpdk.org Cc: Andrew Rybchenko , Ray Kinsella Date: Wed, 29 Sep 2021 23:57:22 +0300 Message-Id: <20210929205730.775-3-ivan.malov@oktetlabs.ru> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210929205730.775-1-ivan.malov@oktetlabs.ru> References: <20210929205730.775-1-ivan.malov@oktetlabs.ru> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 02/10] common/sfc_efx/base: add API to set RECIRC ID in outer rules X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" When an outer rule is hit, it can pass recirculation ID down to action rule lookup, and action rules can match on this ID instead of matching on the outer rule allocation handle. By default, recirculation ID is assumed to be zero. Add an API to set recirculation ID in outer rules. Signed-off-by: Ivan Malov Reviewed-by: Andrew Rybchenko Acked-by: Ray Kinsella --- drivers/common/sfc_efx/base/efx.h | 9 +++++++++ drivers/common/sfc_efx/base/efx_impl.h | 1 + drivers/common/sfc_efx/base/efx_mae.c | 24 ++++++++++++++++++++++++ drivers/common/sfc_efx/version.map | 1 + 4 files changed, 35 insertions(+) diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h index bed1029f59..ca747de7a4 100644 --- a/drivers/common/sfc_efx/base/efx.h +++ b/drivers/common/sfc_efx/base/efx.h @@ -4385,6 +4385,15 @@ typedef struct efx_mae_rule_id_s { uint32_t id; } efx_mae_rule_id_t; +/* + * Set the initial recirculation ID. It goes to action rule (AR) lookup. 
+ */ +LIBEFX_API +extern __checkReturn efx_rc_t +efx_mae_outer_rule_recirc_id_set( + __in efx_mae_match_spec_t *spec, + __in uint8_t recirc_id); + LIBEFX_API extern __checkReturn efx_rc_t efx_mae_outer_rule_insert( diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h index 992edbabe3..45dc7803db 100644 --- a/drivers/common/sfc_efx/base/efx_impl.h +++ b/drivers/common/sfc_efx/base/efx_impl.h @@ -1727,6 +1727,7 @@ struct efx_mae_match_spec_s { MAE_FIELD_MASK_VALUE_PAIRS_V2_LEN]; uint8_t outer[MAE_ENC_FIELD_PAIRS_LEN]; } emms_mask_value_pairs; + uint8_t emms_outer_rule_recirc_id; }; typedef enum efx_mae_action_e { diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c index c22206e227..c37e90831f 100644 --- a/drivers/common/sfc_efx/base/efx_mae.c +++ b/drivers/common/sfc_efx/base/efx_mae.c @@ -1945,6 +1945,27 @@ efx_mae_match_specs_class_cmp( fail2: EFSYS_PROBE(fail2); +fail1: + EFSYS_PROBE1(fail1, efx_rc_t, rc); + return (rc); +} + + __checkReturn efx_rc_t +efx_mae_outer_rule_recirc_id_set( + __in efx_mae_match_spec_t *spec, + __in uint8_t recirc_id) +{ + efx_rc_t rc; + + if (spec->emms_type != EFX_MAE_RULE_OUTER) { + rc = EINVAL; + goto fail1; + } + + spec->emms_outer_rule_recirc_id = recirc_id; + + return (0); + fail1: EFSYS_PROBE1(fail1, efx_rc_t, rc); return (rc); @@ -2023,6 +2044,9 @@ efx_mae_outer_rule_insert( memcpy(payload + offset, spec->emms_mask_value_pairs.outer, MAE_ENC_FIELD_PAIRS_LEN); + MCDI_IN_SET_BYTE(req, MAE_OUTER_RULE_INSERT_IN_RECIRC_ID, + spec->emms_outer_rule_recirc_id); + efx_mcdi_execute(enp, &req); if (req.emr_rc != 0) { diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map index 0c5bcdfa84..d4878dfb9a 100644 --- a/drivers/common/sfc_efx/version.map +++ b/drivers/common/sfc_efx/version.map @@ -127,6 +127,7 @@ INTERNAL { efx_mae_mport_by_pcie_function; efx_mae_mport_by_phy_port; efx_mae_outer_rule_insert; + efx_mae_outer_rule_recirc_id_set; efx_mae_outer_rule_remove; efx_mcdi_fini; From patchwork Wed Sep 29 20:57:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ivan Malov X-Patchwork-Id: 100023 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0D2A4A0032; Wed, 29 Sep 2021 22:58:03 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E225041109; Wed, 29 Sep 2021 22:57:48 +0200 (CEST) Received: from shelob.oktetlabs.ru (shelob.oktetlabs.ru [91.220.146.113]) by mails.dpdk.org (Postfix) with ESMTP id 24E02410EE for ; Wed, 29 Sep 2021 22:57:45 +0200 (CEST) Received: from localhost.localdomain (unknown [5.144.122.192]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by shelob.oktetlabs.ru (Postfix) with ESMTPSA id CAF0F7F553; Wed, 29 Sep 2021 23:57:44 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 shelob.oktetlabs.ru CAF0F7F553 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=oktetlabs.ru; s=default; t=1632949064; bh=tgcp9Ml3rKJz4djm1/bhq0oiTqt/AM9MdUjfO0FbApY=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=VN1f23IqZLj5kBSD7UMoa3W+Uke77QbNJxc+Qh2dWtWM7a4iaQUFFt3MGu2faKJQ9 
yU/dU9fm9dTOa0rUZCF1zo5WUGonqNuYaQG2Ca+DlId9LnRABUx7jPF7vWl59nntIa 4hml5cYTQMkY+tn8RpGOFxejbUAU/pfTyAaz7+JA= From: Ivan Malov To: dev@dpdk.org Cc: Andrew Rybchenko Date: Wed, 29 Sep 2021 23:57:23 +0300 Message-Id: <20210929205730.775-4-ivan.malov@oktetlabs.ru> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210929205730.775-1-ivan.malov@oktetlabs.ru> References: <20210929205730.775-1-ivan.malov@oktetlabs.ru> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 03/10] net/sfc: support JUMP flows in tunnel offload X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" JUMP is an in-house term for so-called "tunnel_set" flows. On parsing, they are identified by virtue of actions MARK (PMD-internal) and JUMP. The action MARK associates a given flow with its tunnel context. Such a flow is represented by a MAE outer rule (OR) which has its recirculation ID set. This ID is also associated with the tunnel context. The OR is supposed to set this ID in 8 high bits of Rx mark in matching packets. It also counts the packets. Packets that hit the OR but miss in action rule (AR) table, should go to MAE admin PF (that is, to DPDK) by default. Support for the use of action COUNT in JUMP flows will be introduced by later patches. Signed-off-by: Ivan Malov Reviewed-by: Andrew Rybchenko --- drivers/net/sfc/sfc.h | 3 + drivers/net/sfc/sfc_flow.c | 35 ++++++- drivers/net/sfc/sfc_flow.h | 12 +++ drivers/net/sfc/sfc_flow_tunnel.c | 116 ++++++++++++++++++++++ drivers/net/sfc/sfc_flow_tunnel.h | 31 ++++++ drivers/net/sfc/sfc_mae.c | 156 +++++++++++++++++++++++------- drivers/net/sfc/sfc_mae.h | 2 + 7 files changed, 319 insertions(+), 36 deletions(-) diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h index 2812d76cbb..35b776f443 100644 --- a/drivers/net/sfc/sfc.h +++ b/drivers/net/sfc/sfc.h @@ -27,6 +27,7 @@ #include "sfc_debug.h" #include "sfc_log.h" #include "sfc_filter.h" +#include "sfc_flow_tunnel.h" #include "sfc_sriov.h" #include "sfc_mae.h" #include "sfc_dp.h" @@ -258,6 +259,8 @@ struct sfc_adapter { struct sfc_intr intr; struct sfc_port port; struct sfc_sw_xstats sw_xstats; + /* Registry of tunnel offload contexts */ + struct sfc_flow_tunnel flow_tunnels[SFC_FT_MAX_NTUNNELS]; struct sfc_filter filter; struct sfc_mae mae; diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c index 7510cbb95b..fe726afc9c 100644 --- a/drivers/net/sfc/sfc_flow.c +++ b/drivers/net/sfc/sfc_flow.c @@ -2547,15 +2547,46 @@ sfc_flow_parse_rte_to_mae(struct rte_eth_dev *dev, struct sfc_flow_spec_mae *spec_mae = &spec->mae; int rc; + /* + * If the flow is meant to be a JUMP rule in tunnel offload, + * preparse its actions and save its properties in spec_mae. + */ + rc = sfc_flow_tunnel_detect_jump_rule(sa, actions, spec_mae, error); + if (rc != 0) + goto fail; + rc = sfc_mae_rule_parse_pattern(sa, pattern, spec_mae, error); if (rc != 0) - return rc; + goto fail; + + if (spec_mae->ft_rule_type == SFC_FT_RULE_JUMP) { + /* + * This flow is represented solely by the outer rule. + * It is supposed to mark and count matching packets. 
+ */ + goto skip_action_rule; + } rc = sfc_mae_rule_parse_actions(sa, actions, spec_mae, error); if (rc != 0) - return rc; + goto fail; + +skip_action_rule: + if (spec_mae->ft != NULL) { + if (spec_mae->ft_rule_type == SFC_FT_RULE_JUMP) + spec_mae->ft->jump_rule_is_set = B_TRUE; + + ++(spec_mae->ft->refcnt); + } return 0; + +fail: + /* Reset these values to avoid confusing sfc_mae_flow_cleanup(). */ + spec_mae->ft_rule_type = SFC_FT_RULE_NONE; + spec_mae->ft = NULL; + + return rc; } static int diff --git a/drivers/net/sfc/sfc_flow.h b/drivers/net/sfc/sfc_flow.h index 3435d53ba4..ada3d563ad 100644 --- a/drivers/net/sfc/sfc_flow.h +++ b/drivers/net/sfc/sfc_flow.h @@ -63,8 +63,20 @@ struct sfc_flow_spec_filter { struct sfc_flow_rss rss_conf; }; +/* Indicates the role of a given flow in tunnel offload */ +enum sfc_flow_tunnel_rule_type { + /* The flow has nothing to do with tunnel offload */ + SFC_FT_RULE_NONE = 0, + /* The flow represents a JUMP rule */ + SFC_FT_RULE_JUMP, +}; + /* MAE-specific flow specification */ struct sfc_flow_spec_mae { + /* FLow Tunnel (FT) rule type (or NONE) */ + enum sfc_flow_tunnel_rule_type ft_rule_type; + /* Flow Tunnel (FT) context (or NULL) */ + struct sfc_flow_tunnel *ft; /* Desired priority level */ unsigned int priority; /* Outer rule registry entry */ diff --git a/drivers/net/sfc/sfc_flow_tunnel.c b/drivers/net/sfc/sfc_flow_tunnel.c index 06b4a27a65..b03c90c9a4 100644 --- a/drivers/net/sfc/sfc_flow_tunnel.c +++ b/drivers/net/sfc/sfc_flow_tunnel.c @@ -7,6 +7,7 @@ #include #include "sfc.h" +#include "sfc_flow.h" #include "sfc_dp_rx.h" #include "sfc_flow_tunnel.h" #include "sfc_mae.h" @@ -27,3 +28,118 @@ sfc_flow_tunnel_is_active(struct sfc_adapter *sa) return ((sa->negotiated_rx_meta & RTE_ETH_RX_META_TUNNEL_ID) != 0); } + +struct sfc_flow_tunnel * +sfc_flow_tunnel_pick(struct sfc_adapter *sa, uint32_t ft_mark) +{ + uint32_t tunnel_mark = SFC_FT_GET_TUNNEL_MARK(ft_mark); + + SFC_ASSERT(sfc_adapter_is_locked(sa)); + + if (tunnel_mark != SFC_FT_TUNNEL_MARK_INVALID) { + sfc_ft_id_t ft_id = SFC_FT_TUNNEL_MARK_TO_ID(tunnel_mark); + struct sfc_flow_tunnel *ft = &sa->flow_tunnels[ft_id]; + + ft->id = ft_id; + + return ft; + } + + return NULL; +} + +int +sfc_flow_tunnel_detect_jump_rule(struct sfc_adapter *sa, + const struct rte_flow_action *actions, + struct sfc_flow_spec_mae *spec, + struct rte_flow_error *error) +{ + const struct rte_flow_action_mark *action_mark = NULL; + const struct rte_flow_action_jump *action_jump = NULL; + struct sfc_flow_tunnel *ft; + uint32_t ft_mark = 0; + int rc = 0; + + SFC_ASSERT(sfc_adapter_is_locked(sa)); + + if (!sfc_flow_tunnel_is_active(sa)) { + /* Tunnel-related actions (if any) will be turned down later. 
*/ + return 0; + } + + if (actions == NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL, + "NULL actions"); + return -rte_errno; + } + + for (; actions->type != RTE_FLOW_ACTION_TYPE_END; ++actions) { + if (actions->type == RTE_FLOW_ACTION_TYPE_VOID) + continue; + + if (actions->conf == NULL) { + rc = EINVAL; + continue; + } + + switch (actions->type) { + case RTE_FLOW_ACTION_TYPE_MARK: + if (action_mark == NULL) { + action_mark = actions->conf; + ft_mark = action_mark->id; + } else { + rc = EINVAL; + } + break; + case RTE_FLOW_ACTION_TYPE_JUMP: + if (action_jump == NULL) { + action_jump = actions->conf; + if (action_jump->group != 0) + rc = EINVAL; + } else { + rc = EINVAL; + } + break; + default: + rc = ENOTSUP; + break; + } + } + + ft = sfc_flow_tunnel_pick(sa, ft_mark); + if (ft != NULL && action_jump != 0) { + sfc_dbg(sa, "tunnel offload: JUMP: detected"); + + if (rc != 0) { + /* The loop above might have spotted wrong actions. */ + sfc_err(sa, "tunnel offload: JUMP: invalid actions: %s", + strerror(rc)); + goto fail; + } + + if (ft->refcnt == 0) { + sfc_err(sa, "tunnel offload: JUMP: tunnel=%u does not exist", + ft->id); + rc = ENOENT; + goto fail; + } + + if (ft->jump_rule_is_set) { + sfc_err(sa, "tunnel offload: JUMP: already exists in tunnel=%u", + ft->id); + rc = EEXIST; + goto fail; + } + + spec->ft_rule_type = SFC_FT_RULE_JUMP; + spec->ft = ft; + } + + return 0; + +fail: + return rte_flow_error_set(error, rc, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "tunnel offload: JUMP: preparsing failed"); +} diff --git a/drivers/net/sfc/sfc_flow_tunnel.h b/drivers/net/sfc/sfc_flow_tunnel.h index fec891fdad..6a81b29438 100644 --- a/drivers/net/sfc/sfc_flow_tunnel.h +++ b/drivers/net/sfc/sfc_flow_tunnel.h @@ -10,6 +10,8 @@ #include #include +#include "efx.h" + #ifdef __cplusplus extern "C" { #endif @@ -26,12 +28,41 @@ typedef uint8_t sfc_ft_id_t; #define SFC_FT_USER_MARK_MASK \ RTE_LEN2MASK(SFC_FT_USER_MARK_BITS, uint32_t) +#define SFC_FT_GET_TUNNEL_MARK(_mark) \ + ((_mark) >> SFC_FT_USER_MARK_BITS) + +#define SFC_FT_TUNNEL_MARK_INVALID (0) + +#define SFC_FT_TUNNEL_MARK_TO_ID(_tunnel_mark) \ + ((_tunnel_mark) - 1) + +#define SFC_FT_ID_TO_TUNNEL_MARK(_id) \ + ((_id) + 1) + +#define SFC_FT_MAX_NTUNNELS \ + (RTE_LEN2MASK(SFC_FT_TUNNEL_MARK_BITS, uint8_t) - 1) + +struct sfc_flow_tunnel { + bool jump_rule_is_set; + efx_tunnel_protocol_t encap_type; + unsigned int refcnt; + sfc_ft_id_t id; +}; + struct sfc_adapter; bool sfc_flow_tunnel_is_supported(struct sfc_adapter *sa); bool sfc_flow_tunnel_is_active(struct sfc_adapter *sa); +struct sfc_flow_tunnel *sfc_flow_tunnel_pick(struct sfc_adapter *sa, + uint32_t ft_mark); + +int sfc_flow_tunnel_detect_jump_rule(struct sfc_adapter *sa, + const struct rte_flow_action *actions, + struct sfc_flow_spec_mae *spec, + struct rte_flow_error *error); + #ifdef __cplusplus } #endif diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c index 2b80492e59..57a999d895 100644 --- a/drivers/net/sfc/sfc_mae.c +++ b/drivers/net/sfc/sfc_mae.c @@ -269,6 +269,9 @@ sfc_mae_outer_rule_enable(struct sfc_adapter *sa, } } + if (match_spec_action == NULL) + goto skip_action_rule; + rc = efx_mae_match_spec_outer_rule_id_set(match_spec_action, &fw_rsrc->rule_id); if (rc != 0) { @@ -283,6 +286,7 @@ sfc_mae_outer_rule_enable(struct sfc_adapter *sa, return rc; } +skip_action_rule: if (fw_rsrc->refcnt == 0) { sfc_dbg(sa, "enabled outer_rule=%p: OR_ID=0x%08x", rule, fw_rsrc->rule_id.id); @@ -806,6 +810,14 @@ sfc_mae_flow_cleanup(struct 
sfc_adapter *sa, spec_mae = &spec->mae; + if (spec_mae->ft != NULL) { + if (spec_mae->ft_rule_type == SFC_FT_RULE_JUMP) + spec_mae->ft->jump_rule_is_set = B_FALSE; + + SFC_ASSERT(spec_mae->ft->refcnt != 0); + --(spec_mae->ft->refcnt); + } + SFC_ASSERT(spec_mae->rule_id.id == EFX_MAE_RSRC_ID_INVALID); if (spec_mae->outer_rule != NULL) @@ -2146,6 +2158,16 @@ sfc_mae_rule_process_outer(struct sfc_adapter *sa, ctx->match_spec_outer = NULL; no_or_id: + switch (ctx->ft_rule_type) { + case SFC_FT_RULE_NONE: + break; + case SFC_FT_RULE_JUMP: + /* No action rule */ + return 0; + default: + SFC_ASSERT(B_FALSE); + } + /* * In MAE, lookup sequence comprises outer parse, outer rule lookup, * inner parse (when some outer rule is hit) and action rule lookup. @@ -2183,6 +2205,7 @@ sfc_mae_rule_encap_parse_init(struct sfc_adapter *sa, struct rte_flow_error *error) { struct sfc_mae *mae = &sa->mae; + uint8_t recirc_id = 0; int rc; if (pattern == NULL) { @@ -2222,34 +2245,71 @@ sfc_mae_rule_encap_parse_init(struct sfc_adapter *sa, break; } - if (pattern->type == RTE_FLOW_ITEM_TYPE_END) - return 0; + switch (ctx->ft_rule_type) { + case SFC_FT_RULE_NONE: + if (pattern->type == RTE_FLOW_ITEM_TYPE_END) + return 0; + break; + case SFC_FT_RULE_JUMP: + if (pattern->type != RTE_FLOW_ITEM_TYPE_END) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, + pattern, "tunnel offload: JUMP: invalid item"); + } + ctx->encap_type = ctx->ft->encap_type; + break; + default: + SFC_ASSERT(B_FALSE); + break; + } if ((mae->encap_types_supported & (1U << ctx->encap_type)) == 0) { return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ITEM, - pattern, "Unsupported tunnel item"); + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "OR: unsupported tunnel type"); } - if (ctx->priority >= mae->nb_outer_rule_prios_max) { - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - NULL, "Unsupported priority level"); - } + switch (ctx->ft_rule_type) { + case SFC_FT_RULE_JUMP: + recirc_id = SFC_FT_ID_TO_TUNNEL_MARK(ctx->ft->id); + /* FALLTHROUGH */ + case SFC_FT_RULE_NONE: + if (ctx->priority >= mae->nb_outer_rule_prios_max) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, + NULL, "OR: unsupported priority level"); + } - rc = efx_mae_match_spec_init(sa->nic, EFX_MAE_RULE_OUTER, ctx->priority, - &ctx->match_spec_outer); - if (rc != 0) { - return rte_flow_error_set(error, rc, - RTE_FLOW_ERROR_TYPE_ITEM, pattern, - "Failed to initialise outer rule match specification"); - } + rc = efx_mae_match_spec_init(sa->nic, + EFX_MAE_RULE_OUTER, ctx->priority, + &ctx->match_spec_outer); + if (rc != 0) { + return rte_flow_error_set(error, rc, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "OR: failed to initialise the match specification"); + } + + /* + * Outermost items comprise a match + * specification of type OUTER. + */ + ctx->match_spec = ctx->match_spec_outer; - /* Outermost items comprise a match specification of type OUTER. */ - ctx->match_spec = ctx->match_spec_outer; + /* Outermost items use "ENC" EFX MAE field IDs. */ + ctx->field_ids_remap = field_ids_remap_to_encap; - /* Outermost items use "ENC" EFX MAE field IDs. 
*/ - ctx->field_ids_remap = field_ids_remap_to_encap; + rc = efx_mae_outer_rule_recirc_id_set(ctx->match_spec, + recirc_id); + if (rc != 0) { + return rte_flow_error_set(error, rc, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "OR: failed to initialise RECIRC_ID"); + } + break; + default: + SFC_ASSERT(B_FALSE); + break; + } return 0; } @@ -2276,17 +2336,29 @@ sfc_mae_rule_parse_pattern(struct sfc_adapter *sa, int rc; memset(&ctx_mae, 0, sizeof(ctx_mae)); + ctx_mae.ft_rule_type = spec->ft_rule_type; ctx_mae.priority = spec->priority; + ctx_mae.ft = spec->ft; ctx_mae.sa = sa; - rc = efx_mae_match_spec_init(sa->nic, EFX_MAE_RULE_ACTION, - spec->priority, - &ctx_mae.match_spec_action); - if (rc != 0) { - rc = rte_flow_error_set(error, rc, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "Failed to initialise action rule match specification"); - goto fail_init_match_spec_action; + switch (ctx_mae.ft_rule_type) { + case SFC_FT_RULE_JUMP: + /* No action rule */ + break; + case SFC_FT_RULE_NONE: + rc = efx_mae_match_spec_init(sa->nic, EFX_MAE_RULE_ACTION, + spec->priority, + &ctx_mae.match_spec_action); + if (rc != 0) { + rc = rte_flow_error_set(error, rc, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "AR: failed to initialise the match specification"); + goto fail_init_match_spec_action; + } + break; + default: + SFC_ASSERT(B_FALSE); + break; } /* @@ -2320,7 +2392,8 @@ sfc_mae_rule_parse_pattern(struct sfc_adapter *sa, if (rc != 0) goto fail_process_outer; - if (!efx_mae_match_spec_is_valid(sa->nic, ctx_mae.match_spec_action)) { + if (ctx_mae.match_spec_action != NULL && + !efx_mae_match_spec_is_valid(sa->nic, ctx_mae.match_spec_action)) { rc = rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, NULL, "Inconsistent pattern"); @@ -2338,7 +2411,8 @@ sfc_mae_rule_parse_pattern(struct sfc_adapter *sa, sfc_mae_rule_encap_parse_fini(sa, &ctx_mae); fail_encap_parse_init: - efx_mae_match_spec_fini(sa->nic, ctx_mae.match_spec_action); + if (ctx_mae.match_spec_action != NULL) + efx_mae_match_spec_fini(sa->nic, ctx_mae.match_spec_action); fail_init_match_spec_action: return rc; @@ -3254,6 +3328,9 @@ sfc_mae_action_rule_class_verify(struct sfc_adapter *sa, { const struct rte_flow *entry; + if (spec->match_spec == NULL) + return 0; + TAILQ_FOREACH_REVERSE(entry, &sa->flow_list, sfc_flow_list, entries) { const struct sfc_flow_spec *entry_spec = &entry->spec; const struct sfc_flow_spec_mae *es_mae = &entry_spec->mae; @@ -3325,11 +3402,10 @@ sfc_mae_flow_insert(struct sfc_adapter *sa, struct sfc_flow_spec_mae *spec_mae = &spec->mae; struct sfc_mae_outer_rule *outer_rule = spec_mae->outer_rule; struct sfc_mae_action_set *action_set = spec_mae->action_set; - struct sfc_mae_fw_rsrc *fw_rsrc = &action_set->fw_rsrc; + struct sfc_mae_fw_rsrc *fw_rsrc; int rc; SFC_ASSERT(spec_mae->rule_id.id == EFX_MAE_RSRC_ID_INVALID); - SFC_ASSERT(action_set != NULL); if (outer_rule != NULL) { rc = sfc_mae_outer_rule_enable(sa, outer_rule, @@ -3338,6 +3414,11 @@ sfc_mae_flow_insert(struct sfc_adapter *sa, goto fail_outer_rule_enable; } + if (action_set == NULL) { + sfc_dbg(sa, "enabled flow=%p (no AR)", flow); + return 0; + } + rc = sfc_mae_action_set_enable(sa, action_set); if (rc != 0) goto fail_action_set_enable; @@ -3351,6 +3432,8 @@ sfc_mae_flow_insert(struct sfc_adapter *sa, } } + fw_rsrc = &action_set->fw_rsrc; + rc = efx_mae_action_rule_insert(sa->nic, spec_mae->match_spec, NULL, &fw_rsrc->aset_id, &spec_mae->rule_id); @@ -3384,8 +3467,12 @@ sfc_mae_flow_remove(struct sfc_adapter *sa, struct sfc_mae_outer_rule *outer_rule = 
spec_mae->outer_rule; int rc; + if (action_set == NULL) { + sfc_dbg(sa, "disabled flow=%p (no AR)", flow); + goto skip_action_rule; + } + SFC_ASSERT(spec_mae->rule_id.id != EFX_MAE_RSRC_ID_INVALID); - SFC_ASSERT(action_set != NULL); rc = efx_mae_action_rule_remove(sa->nic, &spec_mae->rule_id); if (rc != 0) { @@ -3398,6 +3485,7 @@ sfc_mae_flow_remove(struct sfc_adapter *sa, sfc_mae_action_set_disable(sa, action_set); +skip_action_rule: if (outer_rule != NULL) sfc_mae_outer_rule_disable(sa, outer_rule); @@ -3416,7 +3504,7 @@ sfc_mae_query_counter(struct sfc_adapter *sa, unsigned int i; int rc; - if (action_set->n_counters == 0) { + if (action_set == NULL || action_set->n_counters == 0) { return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, action, "Queried flow rule does not have count actions"); diff --git a/drivers/net/sfc/sfc_mae.h b/drivers/net/sfc/sfc_mae.h index 7e3b6a7a97..907e292dd1 100644 --- a/drivers/net/sfc/sfc_mae.h +++ b/drivers/net/sfc/sfc_mae.h @@ -285,9 +285,11 @@ struct sfc_mae_parse_ctx { size_t tunnel_def_mask_size; const void *tunnel_def_mask; bool match_mport_set; + enum sfc_flow_tunnel_rule_type ft_rule_type; struct sfc_mae_pattern_data pattern_data; efx_tunnel_protocol_t encap_type; unsigned int priority; + struct sfc_flow_tunnel *ft; }; int sfc_mae_attach(struct sfc_adapter *sa); From patchwork Wed Sep 29 20:57:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ivan Malov X-Patchwork-Id: 100024 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8490DA0032; Wed, 29 Sep 2021 22:58:09 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id F02D94111F; Wed, 29 Sep 2021 22:57:49 +0200 (CEST) Received: from shelob.oktetlabs.ru (shelob.oktetlabs.ru [91.220.146.113]) by mails.dpdk.org (Postfix) with ESMTP id 43301410EF for ; Wed, 29 Sep 2021 22:57:45 +0200 (CEST) Received: from localhost.localdomain (unknown [5.144.122.192]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by shelob.oktetlabs.ru (Postfix) with ESMTPSA id 126587F6D1; Wed, 29 Sep 2021 23:57:45 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 shelob.oktetlabs.ru 126587F6D1 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=oktetlabs.ru; s=default; t=1632949065; bh=I800JZhpt5ExxgD7IIuIgiK0s/59U5wX6PFSkNPo90Y=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=sPomJgy+KmF0p3KW0CzE+yUiVLiPI9+JYYMRFnqEbyPrWm1LBRtXW/6RALCnJyZ4V 4LMFdFKZELjc9THBLECKS1a5YhYleXiwa6c+VYQ7bk+6lw/VNoj+299ZhkE9neS78U we06GB0AC1fNzgKsY9FSSHN8ojk0x7xgpT+znqaA= From: Ivan Malov To: dev@dpdk.org Cc: Andrew Rybchenko , Ray Kinsella Date: Wed, 29 Sep 2021 23:57:24 +0300 Message-Id: <20210929205730.775-5-ivan.malov@oktetlabs.ru> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210929205730.775-1-ivan.malov@oktetlabs.ru> References: <20210929205730.775-1-ivan.malov@oktetlabs.ru> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 04/10] common/sfc_efx/base: add RECIRC ID match in action rules API X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: 
dev-bounces@dpdk.org Sender: "dev" Currently, there is an API for setting recirculation ID in outer rules. Add an API to let action rules match on it. Signed-off-by: Ivan Malov Reviewed-by: Andrew Rybchenko --- drivers/common/sfc_efx/base/efx.h | 13 ++++++++++ drivers/common/sfc_efx/base/efx_mae.c | 36 ++++++++++++++++++++++++--- drivers/common/sfc_efx/version.map | 1 + 3 files changed, 46 insertions(+), 4 deletions(-) diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h index ca747de7a4..22f5edfedd 100644 --- a/drivers/common/sfc_efx/base/efx.h +++ b/drivers/common/sfc_efx/base/efx.h @@ -4177,6 +4177,11 @@ typedef enum efx_mae_field_id_e { EFX_MAE_FIELD_ENC_HAS_OVLAN, EFX_MAE_FIELD_ENC_HAS_IVLAN, + /* + * Fields which can be set by efx_mae_match_spec_field_set() + * or by using dedicated field-specific helper APIs. + */ + EFX_MAE_FIELD_RECIRC_ID, EFX_MAE_FIELD_NIDS } efx_mae_field_id_t; @@ -4251,6 +4256,12 @@ efx_mae_match_spec_mport_set( __in const efx_mport_sel_t *valuep, __in_opt const efx_mport_sel_t *maskp); +LIBEFX_API +extern __checkReturn efx_rc_t +efx_mae_match_spec_recirc_id_set( + __in efx_mae_match_spec_t *spec, + __in uint8_t recirc_id); + LIBEFX_API extern __checkReturn boolean_t efx_mae_match_specs_equal( @@ -4387,6 +4398,8 @@ typedef struct efx_mae_rule_id_s { /* * Set the initial recirculation ID. It goes to action rule (AR) lookup. + * + * To match on this ID in an AR, use efx_mae_match_spec_recirc_id_set(). */ LIBEFX_API extern __checkReturn efx_rc_t diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c index c37e90831f..80be922e51 100644 --- a/drivers/common/sfc_efx/base/efx_mae.c +++ b/drivers/common/sfc_efx/base/efx_mae.c @@ -473,6 +473,7 @@ typedef enum efx_mae_field_cap_id_e { EFX_MAE_FIELD_ID_HAS_IVLAN = MAE_FIELD_HAS_IVLAN, EFX_MAE_FIELD_ID_ENC_HAS_OVLAN = MAE_FIELD_ENC_HAS_OVLAN, EFX_MAE_FIELD_ID_ENC_HAS_IVLAN = MAE_FIELD_ENC_HAS_IVLAN, + EFX_MAE_FIELD_ID_RECIRC_ID = MAE_FIELD_RECIRC_ID, EFX_MAE_FIELD_CAP_NIDS } efx_mae_field_cap_id_t; @@ -519,10 +520,10 @@ static const efx_mae_mv_desc_t __efx_mae_action_rule_mv_desc_set[] = { [EFX_MAE_FIELD_##_name] = \ { \ EFX_MAE_FIELD_ID_##_name, \ - MAE_FIELD_MASK_VALUE_PAIRS_##_name##_LEN, \ - MAE_FIELD_MASK_VALUE_PAIRS_##_name##_OFST, \ - MAE_FIELD_MASK_VALUE_PAIRS_##_name##_MASK_LEN, \ - MAE_FIELD_MASK_VALUE_PAIRS_##_name##_MASK_OFST, \ + MAE_FIELD_MASK_VALUE_PAIRS_V2_##_name##_LEN, \ + MAE_FIELD_MASK_VALUE_PAIRS_V2_##_name##_OFST, \ + MAE_FIELD_MASK_VALUE_PAIRS_V2_##_name##_MASK_LEN, \ + MAE_FIELD_MASK_VALUE_PAIRS_V2_##_name##_MASK_OFST, \ 0, 0 /* no alternative field */, \ _endianness \ } @@ -547,6 +548,7 @@ static const efx_mae_mv_desc_t __efx_mae_action_rule_mv_desc_set[] = { EFX_MAE_MV_DESC(TCP_FLAGS_BE, EFX_MAE_FIELD_BE), EFX_MAE_MV_DESC(ENC_VNET_ID_BE, EFX_MAE_FIELD_BE), EFX_MAE_MV_DESC(OUTER_RULE_ID, EFX_MAE_FIELD_LE), + EFX_MAE_MV_DESC(RECIRC_ID, EFX_MAE_FIELD_LE), #undef EFX_MAE_MV_DESC }; @@ -961,6 +963,32 @@ efx_mae_match_spec_mport_set( fail2: EFSYS_PROBE(fail2); +fail1: + EFSYS_PROBE1(fail1, efx_rc_t, rc); + return (rc); +} + + __checkReturn efx_rc_t +efx_mae_match_spec_recirc_id_set( + __in efx_mae_match_spec_t *spec, + __in uint8_t recirc_id) +{ + uint8_t full_mask = UINT8_MAX; + const uint8_t *vp; + const uint8_t *mp; + efx_rc_t rc; + + vp = (const uint8_t *)&recirc_id; + mp = (const uint8_t *)&full_mask; + + rc = efx_mae_match_spec_field_set(spec, EFX_MAE_FIELD_RECIRC_ID, + sizeof (recirc_id), vp, + sizeof (full_mask), mp); + if (rc != 
0) + goto fail1; + + return (0); + fail1: EFSYS_PROBE1(fail1, efx_rc_t, rc); return (rc); diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map index d4878dfb9a..f59f2d8de0 100644 --- a/drivers/common/sfc_efx/version.map +++ b/drivers/common/sfc_efx/version.map @@ -122,6 +122,7 @@ INTERNAL { efx_mae_match_spec_is_valid; efx_mae_match_spec_mport_set; efx_mae_match_spec_outer_rule_id_set; + efx_mae_match_spec_recirc_id_set; efx_mae_match_specs_class_cmp; efx_mae_match_specs_equal; efx_mae_mport_by_pcie_function; From patchwork Wed Sep 29 20:57:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ivan Malov X-Patchwork-Id: 100025 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B0197A0032; Wed, 29 Sep 2021 22:58:18 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9E7DF4113D; Wed, 29 Sep 2021 22:57:52 +0200 (CEST) Received: from shelob.oktetlabs.ru (shelob.oktetlabs.ru [91.220.146.113]) by mails.dpdk.org (Postfix) with ESMTP id A6352410EE for ; Wed, 29 Sep 2021 22:57:45 +0200 (CEST) Received: from localhost.localdomain (unknown [5.144.122.192]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by shelob.oktetlabs.ru (Postfix) with ESMTPSA id 4F1C67F6D3; Wed, 29 Sep 2021 23:57:45 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 shelob.oktetlabs.ru 4F1C67F6D3 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=oktetlabs.ru; s=default; t=1632949065; bh=b8rcClhjSRFMHuUIKA7LXApQ6pVqteM9C5XL5GDYIqg=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=eGVTYmOu6H0qBmyOq0KS2M0vzGCt1yTEX4SR9xn8gjtffQELupaQdGwBxEbb3/VIK C5SVz5GGkUOP7PKiXgxnQ4u6ON+NsgqVRUEWqzAeJrPT7YXiplD7edgyLaS8ZBVCDj l5ufc5V2FyK2TfUH8p+pyYmJrWGEnpL3HM3nQEGQ= From: Ivan Malov To: dev@dpdk.org Cc: Andrew Rybchenko Date: Wed, 29 Sep 2021 23:57:25 +0300 Message-Id: <20210929205730.775-6-ivan.malov@oktetlabs.ru> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210929205730.775-1-ivan.malov@oktetlabs.ru> References: <20210929205730.775-1-ivan.malov@oktetlabs.ru> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 05/10] net/sfc: support GROUP flows in tunnel offload X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" GROUP is an in-house term for so-called "tunnel_match" flows. On parsing, they are detected by virtue of PMD-internal item MARK. It associates a given flow with its tunnel context. Such a flow is represented by a MAE action rule which is chained with the corresponding JUMP rule's outer rule by virtue of matching on its recirculation ID. GROUP flows do narrower match than JUMP flows do and decapsulate matching packets (full offload). 
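[Editorial note] For context, the sketch below shows how an application is expected to feed the PMD-internal MARK action/item into its own rules so that the driver recognises JUMP ("tunnel_set") and GROUP ("tunnel_match") flows. It uses the generic rte_flow tunnel offload helpers (rte_flow_tunnel_decap_set / rte_flow_tunnel_match), whose PMD callbacks are added later in this series; the helper names tunnel_set_flow/tunnel_match_flow, the caller-supplied patterns and the fixed-size pattern array are assumptions, and error handling is omitted. It is not part of the patch.

```c
#include <rte_common.h>
#include <rte_flow.h>

/* JUMP flow: outer match + PMD-internal MARK + JUMP (this PMD expects group 0). */
static struct rte_flow *
tunnel_set_flow(uint16_t port_id, const struct rte_flow_attr *attr,
		const struct rte_flow_item outer_pattern[],
		struct rte_flow_error *error)
{
	struct rte_flow_tunnel tunnel = { .type = RTE_FLOW_ITEM_TYPE_VXLAN };
	struct rte_flow_action_jump jump = { .group = 0 };
	struct rte_flow_action *pmd_actions;
	uint32_t nb_pmd_actions;
	struct rte_flow_action actions[3];

	/* The PMD hands back its internal MARK action bound to the tunnel context. */
	rte_flow_tunnel_decap_set(port_id, &tunnel, &pmd_actions,
				  &nb_pmd_actions, error);

	actions[0] = pmd_actions[0];	/* this PMD returns exactly one action */
	actions[1] = (struct rte_flow_action){
		.type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump };
	actions[2] = (struct rte_flow_action){ .type = RTE_FLOW_ACTION_TYPE_END };

	return rte_flow_create(port_id, attr, outer_pattern, actions, error);
}

/* GROUP flow: PMD-internal item MARK + full tunnel pattern + DECAP and friends. */
static struct rte_flow *
tunnel_match_flow(uint16_t port_id, const struct rte_flow_attr *attr,
		  const struct rte_flow_item app_pattern[], unsigned int nb_app,
		  const struct rte_flow_action actions[],
		  struct rte_flow_error *error)
{
	struct rte_flow_tunnel tunnel = { .type = RTE_FLOW_ITEM_TYPE_VXLAN };
	struct rte_flow_item pattern[16];
	struct rte_flow_item *pmd_items;
	uint32_t nb_pmd_items;
	unsigned int i;

	/* The PMD hands back its internal item MARK for the same tunnel context. */
	rte_flow_tunnel_match(port_id, &tunnel, &pmd_items,
			      &nb_pmd_items, error);

	pattern[0] = pmd_items[0];	/* this PMD returns exactly one item */

	/* app_pattern is assumed RTE_FLOW_ITEM_TYPE_END-terminated (END counted). */
	for (i = 0; i < nb_app && i < RTE_DIM(pattern) - 1; ++i)
		pattern[i + 1] = app_pattern[i];

	return rte_flow_create(port_id, attr, pattern, actions, error);
}
```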
Signed-off-by: Ivan Malov Reviewed-by: Andrew Rybchenko --- drivers/net/sfc/sfc_flow.h | 2 + drivers/net/sfc/sfc_flow_tunnel.h | 6 ++ drivers/net/sfc/sfc_mae.c | 151 ++++++++++++++++++++++++++++++ 3 files changed, 159 insertions(+) diff --git a/drivers/net/sfc/sfc_flow.h b/drivers/net/sfc/sfc_flow.h index ada3d563ad..efdecc97ab 100644 --- a/drivers/net/sfc/sfc_flow.h +++ b/drivers/net/sfc/sfc_flow.h @@ -69,6 +69,8 @@ enum sfc_flow_tunnel_rule_type { SFC_FT_RULE_NONE = 0, /* The flow represents a JUMP rule */ SFC_FT_RULE_JUMP, + /* The flow represents a GROUP rule */ + SFC_FT_RULE_GROUP, }; /* MAE-specific flow specification */ diff --git a/drivers/net/sfc/sfc_flow_tunnel.h b/drivers/net/sfc/sfc_flow_tunnel.h index 6a81b29438..27a8fa5ae7 100644 --- a/drivers/net/sfc/sfc_flow_tunnel.h +++ b/drivers/net/sfc/sfc_flow_tunnel.h @@ -39,6 +39,12 @@ typedef uint8_t sfc_ft_id_t; #define SFC_FT_ID_TO_TUNNEL_MARK(_id) \ ((_id) + 1) +#define SFC_FT_ID_TO_MARK(_id) \ + (SFC_FT_ID_TO_TUNNEL_MARK(_id) << SFC_FT_USER_MARK_BITS) + +#define SFC_FT_GET_USER_MARK(_mark) \ + ((_mark) & SFC_FT_USER_MARK_MASK) + #define SFC_FT_MAX_NTUNNELS \ (RTE_LEN2MASK(SFC_FT_TUNNEL_MARK_BITS, uint8_t) - 1) diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c index 57a999d895..63ec2b02b3 100644 --- a/drivers/net/sfc/sfc_mae.c +++ b/drivers/net/sfc/sfc_mae.c @@ -1048,6 +1048,36 @@ sfc_mae_rule_process_pattern_data(struct sfc_mae_parse_ctx *ctx, "Failed to process pattern data"); } +static int +sfc_mae_rule_parse_item_mark(const struct rte_flow_item *item, + struct sfc_flow_parse_ctx *ctx, + struct rte_flow_error *error) +{ + const struct rte_flow_item_mark *spec = item->spec; + struct sfc_mae_parse_ctx *ctx_mae = ctx->mae; + + if (spec == NULL) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "NULL spec in item MARK"); + } + + /* + * This item is used in tunnel offload support only. + * It must go before any network header items. This + * way, sfc_mae_rule_preparse_item_mark() must have + * already parsed it. Only one item MARK is allowed. + */ + if (ctx_mae->ft_rule_type != SFC_FT_RULE_GROUP || + spec->id != (uint32_t)SFC_FT_ID_TO_MARK(ctx_mae->ft->id)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "invalid item MARK"); + } + + return 0; +} + static int sfc_mae_rule_parse_item_port_id(const struct rte_flow_item *item, struct sfc_flow_parse_ctx *ctx, @@ -1996,6 +2026,14 @@ sfc_mae_rule_parse_item_tunnel(const struct rte_flow_item *item, } static const struct sfc_flow_item sfc_flow_items[] = { + { + .type = RTE_FLOW_ITEM_TYPE_MARK, + .name = "MARK", + .prev_layer = SFC_FLOW_ITEM_ANY_LAYER, + .layer = SFC_FLOW_ITEM_ANY_LAYER, + .ctx_type = SFC_FLOW_PARSE_CTX_MAE, + .parse = sfc_mae_rule_parse_item_mark, + }, { .type = RTE_FLOW_ITEM_TYPE_PORT_ID, .name = "PORT_ID", @@ -2164,6 +2202,19 @@ sfc_mae_rule_process_outer(struct sfc_adapter *sa, case SFC_FT_RULE_JUMP: /* No action rule */ return 0; + case SFC_FT_RULE_GROUP: + /* + * Match on recirculation ID rather than + * on the outer rule allocation handle. 
+ */ + rc = efx_mae_match_spec_recirc_id_set(ctx->match_spec_action, + SFC_FT_ID_TO_TUNNEL_MARK(ctx->ft->id)); + if (rc != 0) { + return rte_flow_error_set(error, rc, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "tunnel offload: GROUP: AR: failed to request match on RECIRC_ID"); + } + return 0; default: SFC_ASSERT(B_FALSE); } @@ -2198,6 +2249,44 @@ sfc_mae_rule_process_outer(struct sfc_adapter *sa, return 0; } +static int +sfc_mae_rule_preparse_item_mark(const struct rte_flow_item_mark *spec, + struct sfc_mae_parse_ctx *ctx) +{ + struct sfc_flow_tunnel *ft; + uint32_t user_mark; + + if (spec == NULL) { + sfc_err(ctx->sa, "tunnel offload: GROUP: NULL spec in item MARK"); + return EINVAL; + } + + ft = sfc_flow_tunnel_pick(ctx->sa, spec->id); + if (ft == NULL) { + sfc_err(ctx->sa, "tunnel offload: GROUP: invalid tunnel"); + return EINVAL; + } + + if (ft->refcnt == 0) { + sfc_err(ctx->sa, "tunnel offload: GROUP: tunnel=%u does not exist", + ft->id); + return ENOENT; + } + + user_mark = SFC_FT_GET_USER_MARK(spec->id); + if (user_mark != 0) { + sfc_err(ctx->sa, "tunnel offload: GROUP: invalid item MARK"); + return EINVAL; + } + + sfc_dbg(ctx->sa, "tunnel offload: GROUP: detected"); + + ctx->ft_rule_type = SFC_FT_RULE_GROUP; + ctx->ft = ft; + + return 0; +} + static int sfc_mae_rule_encap_parse_init(struct sfc_adapter *sa, const struct rte_flow_item pattern[], @@ -2217,6 +2306,16 @@ sfc_mae_rule_encap_parse_init(struct sfc_adapter *sa, for (;;) { switch (pattern->type) { + case RTE_FLOW_ITEM_TYPE_MARK: + rc = sfc_mae_rule_preparse_item_mark(pattern->spec, + ctx); + if (rc != 0) { + return rte_flow_error_set(error, rc, + RTE_FLOW_ERROR_TYPE_ITEM, + pattern, "tunnel offload: GROUP: invalid item MARK"); + } + ++pattern; + continue; case RTE_FLOW_ITEM_TYPE_VXLAN: ctx->encap_type = EFX_TUNNEL_PROTOCOL_VXLAN; ctx->tunnel_def_mask = &rte_flow_item_vxlan_mask; @@ -2258,6 +2357,17 @@ sfc_mae_rule_encap_parse_init(struct sfc_adapter *sa, } ctx->encap_type = ctx->ft->encap_type; break; + case SFC_FT_RULE_GROUP: + if (pattern->type == RTE_FLOW_ITEM_TYPE_END) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, "tunnel offload: GROUP: missing tunnel item"); + } else if (ctx->encap_type != ctx->ft->encap_type) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + pattern, "tunnel offload: GROUP: tunnel type mismatch"); + } + break; default: SFC_ASSERT(B_FALSE); break; @@ -2306,6 +2416,14 @@ sfc_mae_rule_encap_parse_init(struct sfc_adapter *sa, "OR: failed to initialise RECIRC_ID"); } break; + case SFC_FT_RULE_GROUP: + /* Outermost items -> "ENC" match fields in the action rule. */ + ctx->field_ids_remap = field_ids_remap_to_encap; + ctx->match_spec = ctx->match_spec_action; + + /* No own outer rule; match on JUMP OR's RECIRC_ID is used. */ + ctx->encap_type = EFX_TUNNEL_PROTOCOL_NONE; + break; default: SFC_ASSERT(B_FALSE); break; @@ -2345,6 +2463,8 @@ sfc_mae_rule_parse_pattern(struct sfc_adapter *sa, case SFC_FT_RULE_JUMP: /* No action rule */ break; + case SFC_FT_RULE_GROUP: + /* FALLTHROUGH */ case SFC_FT_RULE_NONE: rc = efx_mae_match_spec_init(sa->nic, EFX_MAE_RULE_ACTION, spec->priority, @@ -2379,6 +2499,13 @@ sfc_mae_rule_parse_pattern(struct sfc_adapter *sa, if (rc != 0) goto fail_encap_parse_init; + /* + * sfc_mae_rule_encap_parse_init() may have detected tunnel offload + * GROUP rule. Remember its properties for later use. 
+ */ + spec->ft_rule_type = ctx_mae.ft_rule_type; + spec->ft = ctx_mae.ft; + rc = sfc_flow_parse_pattern(sa, sfc_flow_items, RTE_DIM(sfc_flow_items), pattern, &ctx, error); if (rc != 0) @@ -3215,6 +3342,13 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa, if (rc != 0) goto fail_action_set_spec_init; + if (spec_mae->ft_rule_type == SFC_FT_RULE_GROUP) { + /* JUMP rules don't decapsulate packets. GROUP rules do. */ + rc = efx_mae_action_set_populate_decap(spec); + if (rc != 0) + goto fail_enforce_ft_decap; + } + /* Cleanup after previous encap. header bounce buffer usage. */ sfc_mae_bounce_eh_invalidate(&mae->bounce_eh); @@ -3245,6 +3379,22 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa, goto fail_nb_count; } + switch (spec_mae->ft_rule_type) { + case SFC_FT_RULE_NONE: + break; + case SFC_FT_RULE_GROUP: + /* + * Packets that go to the rule's AR have FT mark set (from the + * JUMP rule OR's RECIRC_ID). Remove this mark in matching + * packets. The user may have provided their own action + * MARK above, so don't check the return value here. + */ + (void)efx_mae_action_set_populate_mark(spec, 0); + break; + default: + SFC_ASSERT(B_FALSE); + } + spec_mae->action_set = sfc_mae_action_set_attach(sa, encap_header, n_count, spec); if (spec_mae->action_set != NULL) { @@ -3268,6 +3418,7 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa, fail_rule_parse_action: efx_mae_action_set_spec_fini(sa->nic, spec); +fail_enforce_ft_decap: fail_action_set_spec_init: if (rc > 0 && rte_errno == 0) { rc = rte_flow_error_set(error, rc, From patchwork Wed Sep 29 20:57:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ivan Malov X-Patchwork-Id: 100026 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 031C0A0032; Wed, 29 Sep 2021 22:58:25 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C1F1141142; Wed, 29 Sep 2021 22:57:53 +0200 (CEST) Received: from shelob.oktetlabs.ru (shelob.oktetlabs.ru [91.220.146.113]) by mails.dpdk.org (Postfix) with ESMTP id E7A93410EE for ; Wed, 29 Sep 2021 22:57:45 +0200 (CEST) Received: from localhost.localdomain (unknown [5.144.122.192]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by shelob.oktetlabs.ru (Postfix) with ESMTPSA id 8F2667F6D5; Wed, 29 Sep 2021 23:57:45 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 shelob.oktetlabs.ru 8F2667F6D5 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=oktetlabs.ru; s=default; t=1632949065; bh=Edq0zT95StPv7UQHppqoKThQYklQinck9GxNYneodfs=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=Eyo7SmC0fUFEOH8+Jy1y0C8ni8lf2dBTJypUUFcjVS+/XSkdj7bicHghk0unV+b9K M/M5BChwYEleM6R0VhxtWeDmYEE5u86Fv8mBXCM9S2gmlaHoCEITWofL3z77KrCQFk 7FN9vvC3hImb6Uz1oN4EdPk6IS6QZf+bg23SR1A0= From: Ivan Malov To: dev@dpdk.org Cc: Andrew Rybchenko Date: Wed, 29 Sep 2021 23:57:26 +0300 Message-Id: <20210929205730.775-7-ivan.malov@oktetlabs.ru> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210929205730.775-1-ivan.malov@oktetlabs.ru> References: <20210929205730.775-1-ivan.malov@oktetlabs.ru> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 06/10] net/sfc: implement control path operations in tunnel offload X-BeenThere: 
dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Support generic callbacks which callers will invoke to get PMD-specific actions and items used to produce JUMP and GROUP flows and to detect tunnel information. Signed-off-by: Ivan Malov Reviewed-by: Andrew Rybchenko --- drivers/net/sfc/sfc_dp.c | 48 +++++ drivers/net/sfc/sfc_dp.h | 9 + drivers/net/sfc/sfc_ef100_rx.c | 12 ++ drivers/net/sfc/sfc_flow.c | 5 + drivers/net/sfc/sfc_flow_tunnel.c | 316 ++++++++++++++++++++++++++++++ drivers/net/sfc/sfc_flow_tunnel.h | 37 ++++ 6 files changed, 427 insertions(+) diff --git a/drivers/net/sfc/sfc_dp.c b/drivers/net/sfc/sfc_dp.c index 24ed0898c8..509c95890d 100644 --- a/drivers/net/sfc/sfc_dp.c +++ b/drivers/net/sfc/sfc_dp.c @@ -12,6 +12,7 @@ #include #include +#include #include "sfc_dp.h" #include "sfc_log.h" @@ -77,3 +78,50 @@ sfc_dp_register(struct sfc_dp_list *head, struct sfc_dp *entry) return 0; } + +int sfc_dp_ft_id_offset = -1; +uint64_t sfc_dp_ft_id_valid; + +int +sfc_dp_ft_id_register(void) +{ + static const struct rte_mbuf_dynfield ft_id = { + .name = "rte_net_sfc_dynfield_ft_id", + .size = sizeof(uint8_t), + .align = __alignof__(uint8_t), + }; + static const struct rte_mbuf_dynflag ft_id_valid = { + .name = "rte_net_sfc_dynflag_ft_id_valid", + }; + + int field_offset; + int flag; + + SFC_GENERIC_LOG(INFO, "%s() entry", __func__); + + if (sfc_dp_ft_id_valid != 0) { + SFC_GENERIC_LOG(INFO, "%s() already registered", __func__); + return 0; + } + + field_offset = rte_mbuf_dynfield_register(&ft_id); + if (field_offset < 0) { + SFC_GENERIC_LOG(ERR, "%s() failed to register ft_id dynfield", + __func__); + return -1; + } + + flag = rte_mbuf_dynflag_register(&ft_id_valid); + if (flag < 0) { + SFC_GENERIC_LOG(ERR, "%s() failed to register ft_id dynflag", + __func__); + return -1; + } + + sfc_dp_ft_id_offset = field_offset; + sfc_dp_ft_id_valid = UINT64_C(1) << flag; + + SFC_GENERIC_LOG(INFO, "%s() done", __func__); + + return 0; +} diff --git a/drivers/net/sfc/sfc_dp.h b/drivers/net/sfc/sfc_dp.h index 7fd8f34b0f..b27420d4fc 100644 --- a/drivers/net/sfc/sfc_dp.h +++ b/drivers/net/sfc/sfc_dp.h @@ -126,6 +126,15 @@ struct sfc_dp *sfc_dp_find_by_caps(struct sfc_dp_list *head, unsigned int avail_caps); int sfc_dp_register(struct sfc_dp_list *head, struct sfc_dp *entry); +/** Dynamically registered mbuf "ft_id" validity flag (as a bitmask). */ +extern uint64_t sfc_dp_ft_id_valid; + +/** Dynamically registered mbuf field "ft_id" (mbuf byte offset). */ +extern int sfc_dp_ft_id_offset; + +/** Register dynamic mbuf field "ft_id" and its validity flag. 
*/ +int sfc_dp_ft_id_register(void); + #ifdef __cplusplus } #endif diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c index 3219c972db..a4f4500d74 100644 --- a/drivers/net/sfc/sfc_ef100_rx.c +++ b/drivers/net/sfc/sfc_ef100_rx.c @@ -422,6 +422,7 @@ sfc_ef100_rx_prefix_to_offloads(const struct sfc_ef100_rxq *rxq, } if (rxq->flags & SFC_EF100_RXQ_USER_MARK) { + uint8_t tunnel_mark; uint32_t user_mark; uint32_t mark; @@ -434,6 +435,17 @@ sfc_ef100_rx_prefix_to_offloads(const struct sfc_ef100_rxq *rxq, ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID; m->hash.fdir.hi = user_mark; } + + tunnel_mark = SFC_FT_GET_TUNNEL_MARK(mark); + if (tunnel_mark != SFC_FT_TUNNEL_MARK_INVALID) { + sfc_ft_id_t ft_id; + + ft_id = SFC_FT_TUNNEL_MARK_TO_ID(tunnel_mark); + + ol_flags |= sfc_dp_ft_id_valid; + *RTE_MBUF_DYNFIELD(m, sfc_dp_ft_id_offset, + sfc_ft_id_t *) = ft_id; + } } m->ol_flags = ol_flags; diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c index fe726afc9c..c3e75bae84 100644 --- a/drivers/net/sfc/sfc_flow.c +++ b/drivers/net/sfc/sfc_flow.c @@ -2926,6 +2926,11 @@ const struct rte_flow_ops sfc_flow_ops = { .flush = sfc_flow_flush, .query = sfc_flow_query, .isolate = sfc_flow_isolate, + .tunnel_decap_set = sfc_flow_tunnel_decap_set, + .tunnel_match = sfc_flow_tunnel_match, + .tunnel_action_decap_release = sfc_flow_tunnel_action_decap_release, + .tunnel_item_release = sfc_flow_tunnel_item_release, + .get_restore_info = sfc_flow_tunnel_get_restore_info, }; void diff --git a/drivers/net/sfc/sfc_flow_tunnel.c b/drivers/net/sfc/sfc_flow_tunnel.c index b03c90c9a4..2de401148e 100644 --- a/drivers/net/sfc/sfc_flow_tunnel.c +++ b/drivers/net/sfc/sfc_flow_tunnel.c @@ -6,7 +6,10 @@ #include #include +#include + #include "sfc.h" +#include "sfc_dp.h" #include "sfc_flow.h" #include "sfc_dp_rx.h" #include "sfc_flow_tunnel.h" @@ -143,3 +146,316 @@ sfc_flow_tunnel_detect_jump_rule(struct sfc_adapter *sa, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "tunnel offload: JUMP: preparsing failed"); } + +static int +sfc_flow_tunnel_attach(struct sfc_adapter *sa, + struct rte_flow_tunnel *tunnel, + struct sfc_flow_tunnel **ftp) +{ + struct sfc_flow_tunnel *ft; + const char *ft_status; + int ft_id_free = -1; + sfc_ft_id_t ft_id; + int rc; + + SFC_ASSERT(sfc_adapter_is_locked(sa)); + + rc = sfc_dp_ft_id_register(); + if (rc != 0) + return rc; + + if (tunnel->type != RTE_FLOW_ITEM_TYPE_VXLAN) { + sfc_err(sa, "tunnel offload: unsupported tunnel (encapsulation) type"); + return ENOTSUP; + } + + for (ft_id = 0; ft_id < SFC_FT_MAX_NTUNNELS; ++ft_id) { + ft = &sa->flow_tunnels[ft_id]; + + if (ft->refcnt == 0) { + if (ft_id_free == -1) + ft_id_free = ft_id; + + continue; + } + + if (memcmp(tunnel, &ft->rte_tunnel, sizeof(*tunnel)) == 0) { + ft_status = "existing"; + goto attach; + } + } + + if (ft_id_free == -1) { + sfc_err(sa, "tunnel offload: no free slot for the new tunnel"); + return ENOBUFS; + } + + ft_id = ft_id_free; + ft = &sa->flow_tunnels[ft_id]; + + memcpy(&ft->rte_tunnel, tunnel, sizeof(*tunnel)); + + ft->encap_type = EFX_TUNNEL_PROTOCOL_VXLAN; + + ft->action_mark.id = SFC_FT_ID_TO_MARK(ft_id_free); + ft->action.type = RTE_FLOW_ACTION_TYPE_MARK; + ft->action.conf = &ft->action_mark; + + ft->item.type = RTE_FLOW_ITEM_TYPE_MARK; + ft->item_mark_v.id = ft->action_mark.id; + ft->item.spec = &ft->item_mark_v; + ft->item.mask = &ft->item_mark_m; + ft->item_mark_m.id = UINT32_MAX; + + ft->jump_rule_is_set = B_FALSE; + + ft->refcnt = 0; + + ft_status = "newly added"; + +attach: + sfc_dbg(sa, "tunnel 
offload: attaching to %s tunnel=%u", + ft_status, ft_id); + + ++(ft->refcnt); + *ftp = ft; + + return 0; +} + +static int +sfc_flow_tunnel_detach(struct sfc_adapter *sa, + uint32_t ft_mark) +{ + struct sfc_flow_tunnel *ft; + + SFC_ASSERT(sfc_adapter_is_locked(sa)); + + ft = sfc_flow_tunnel_pick(sa, ft_mark); + if (ft == NULL) { + sfc_err(sa, "tunnel offload: invalid tunnel"); + return EINVAL; + } + + if (ft->refcnt == 0) { + sfc_err(sa, "tunnel offload: tunnel=%u does not exist", ft->id); + return ENOENT; + } + + --(ft->refcnt); + + return 0; +} + +int +sfc_flow_tunnel_decap_set(struct rte_eth_dev *dev, + struct rte_flow_tunnel *tunnel, + struct rte_flow_action **pmd_actions, + uint32_t *num_of_actions, + struct rte_flow_error *err) +{ + struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev); + struct sfc_flow_tunnel *ft; + int rc; + + sfc_adapter_lock(sa); + + if (!sfc_flow_tunnel_is_active(sa)) { + rc = ENOTSUP; + goto fail; + } + + rc = sfc_flow_tunnel_attach(sa, tunnel, &ft); + if (rc != 0) + goto fail; + + *pmd_actions = &ft->action; + *num_of_actions = 1; + + sfc_adapter_unlock(sa); + + return 0; + +fail: + sfc_adapter_unlock(sa); + + return rte_flow_error_set(err, rc, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "tunnel offload: decap_set failed"); +} + +int +sfc_flow_tunnel_match(struct rte_eth_dev *dev, + struct rte_flow_tunnel *tunnel, + struct rte_flow_item **pmd_items, + uint32_t *num_of_items, + struct rte_flow_error *err) +{ + struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev); + struct sfc_flow_tunnel *ft; + int rc; + + sfc_adapter_lock(sa); + + if (!sfc_flow_tunnel_is_active(sa)) { + rc = ENOTSUP; + goto fail; + } + + rc = sfc_flow_tunnel_attach(sa, tunnel, &ft); + if (rc != 0) + goto fail; + + *pmd_items = &ft->item; + *num_of_items = 1; + + sfc_adapter_unlock(sa); + + return 0; + +fail: + sfc_adapter_unlock(sa); + + return rte_flow_error_set(err, rc, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "tunnel offload: tunnel_match failed"); +} + +int +sfc_flow_tunnel_item_release(struct rte_eth_dev *dev, + struct rte_flow_item *pmd_items, + uint32_t num_items, + struct rte_flow_error *err) +{ + struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev); + const struct rte_flow_item_mark *item_mark; + struct rte_flow_item *item = pmd_items; + int rc; + + sfc_adapter_lock(sa); + + if (!sfc_flow_tunnel_is_active(sa)) { + rc = ENOTSUP; + goto fail; + } + + if (num_items != 1 || item == NULL || item->spec == NULL || + item->type != RTE_FLOW_ITEM_TYPE_MARK) { + sfc_err(sa, "tunnel offload: item_release: wrong input"); + rc = EINVAL; + goto fail; + } + + item_mark = item->spec; + + rc = sfc_flow_tunnel_detach(sa, item_mark->id); + if (rc != 0) + goto fail; + + sfc_adapter_unlock(sa); + + return 0; + +fail: + sfc_adapter_unlock(sa); + + return rte_flow_error_set(err, rc, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "tunnel offload: item_release failed"); +} + +int +sfc_flow_tunnel_action_decap_release(struct rte_eth_dev *dev, + struct rte_flow_action *pmd_actions, + uint32_t num_actions, + struct rte_flow_error *err) +{ + struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev); + const struct rte_flow_action_mark *action_mark; + struct rte_flow_action *action = pmd_actions; + int rc; + + sfc_adapter_lock(sa); + + if (!sfc_flow_tunnel_is_active(sa)) { + rc = ENOTSUP; + goto fail; + } + + if (num_actions != 1 || action == NULL || action->conf == NULL || + action->type != RTE_FLOW_ACTION_TYPE_MARK) { + sfc_err(sa, "tunnel offload: action_decap_release: wrong input"); + rc = EINVAL; + goto fail; + } + + 
action_mark = action->conf; + + rc = sfc_flow_tunnel_detach(sa, action_mark->id); + if (rc != 0) + goto fail; + + sfc_adapter_unlock(sa); + + return 0; + +fail: + sfc_adapter_unlock(sa); + + return rte_flow_error_set(err, rc, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "tunnel offload: item_release failed"); +} + +int +sfc_flow_tunnel_get_restore_info(struct rte_eth_dev *dev, + struct rte_mbuf *m, + struct rte_flow_restore_info *info, + struct rte_flow_error *err) +{ + struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev); + const struct sfc_flow_tunnel *ft; + sfc_ft_id_t ft_id; + int rc; + + sfc_adapter_lock(sa); + + if ((m->ol_flags & sfc_dp_ft_id_valid) == 0) { + sfc_dbg(sa, "tunnel offload: get_restore_info: no tunnel mark in the packet"); + rc = EINVAL; + goto fail; + } + + ft_id = *RTE_MBUF_DYNFIELD(m, sfc_dp_ft_id_offset, sfc_ft_id_t *); + ft = &sa->flow_tunnels[ft_id]; + + if (ft->refcnt == 0) { + sfc_err(sa, "tunnel offload: get_restore_info: tunnel=%u does not exist", + ft_id); + rc = ENOENT; + goto fail; + } + + memcpy(&info->tunnel, &ft->rte_tunnel, sizeof(info->tunnel)); + + /* + * The packet still has encapsulation header; JUMP rules never + * strip it. Therefore, set RTE_FLOW_RESTORE_INFO_ENCAPSULATED. + */ + info->flags = RTE_FLOW_RESTORE_INFO_ENCAPSULATED | + RTE_FLOW_RESTORE_INFO_GROUP_ID | + RTE_FLOW_RESTORE_INFO_TUNNEL; + + info->group_id = 0; + + sfc_adapter_unlock(sa); + + return 0; + +fail: + sfc_adapter_unlock(sa); + + return rte_flow_error_set(err, rc, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "tunnel offload: get_restore_info failed"); +} diff --git a/drivers/net/sfc/sfc_flow_tunnel.h b/drivers/net/sfc/sfc_flow_tunnel.h index 27a8fa5ae7..573585ca80 100644 --- a/drivers/net/sfc/sfc_flow_tunnel.h +++ b/drivers/net/sfc/sfc_flow_tunnel.h @@ -10,6 +10,8 @@ #include #include +#include + #include "efx.h" #ifdef __cplusplus @@ -51,8 +53,16 @@ typedef uint8_t sfc_ft_id_t; struct sfc_flow_tunnel { bool jump_rule_is_set; efx_tunnel_protocol_t encap_type; + struct rte_flow_tunnel rte_tunnel; unsigned int refcnt; sfc_ft_id_t id; + + struct rte_flow_action_mark action_mark; + struct rte_flow_action action; + + struct rte_flow_item_mark item_mark_v; + struct rte_flow_item_mark item_mark_m; + struct rte_flow_item item; }; struct sfc_adapter; @@ -69,6 +79,33 @@ int sfc_flow_tunnel_detect_jump_rule(struct sfc_adapter *sa, struct sfc_flow_spec_mae *spec, struct rte_flow_error *error); +int sfc_flow_tunnel_decap_set(struct rte_eth_dev *dev, + struct rte_flow_tunnel *tunnel, + struct rte_flow_action **pmd_actions, + uint32_t *num_of_actions, + struct rte_flow_error *err); + +int sfc_flow_tunnel_match(struct rte_eth_dev *dev, + struct rte_flow_tunnel *tunnel, + struct rte_flow_item **pmd_items, + uint32_t *num_of_items, + struct rte_flow_error *err); + +int sfc_flow_tunnel_item_release(struct rte_eth_dev *dev, + struct rte_flow_item *pmd_items, + uint32_t num_items, + struct rte_flow_error *err); + +int sfc_flow_tunnel_action_decap_release(struct rte_eth_dev *dev, + struct rte_flow_action *pmd_actions, + uint32_t num_actions, + struct rte_flow_error *err); + +int sfc_flow_tunnel_get_restore_info(struct rte_eth_dev *dev, + struct rte_mbuf *m, + struct rte_flow_restore_info *info, + struct rte_flow_error *err); + #ifdef __cplusplus } #endif From patchwork Wed Sep 29 20:57:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ivan Malov X-Patchwork-Id: 100027 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: 
X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 34FC4A0032; Wed, 29 Sep 2021 22:58:31 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E0C1B41145; Wed, 29 Sep 2021 22:57:54 +0200 (CEST) Received: from shelob.oktetlabs.ru (shelob.oktetlabs.ru [91.220.146.113]) by mails.dpdk.org (Postfix) with ESMTP id 0A98C410F5 for ; Wed, 29 Sep 2021 22:57:46 +0200 (CEST) Received: from localhost.localdomain (unknown [5.144.122.192]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by shelob.oktetlabs.ru (Postfix) with ESMTPSA id CA3A17F6D6; Wed, 29 Sep 2021 23:57:45 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 shelob.oktetlabs.ru CA3A17F6D6 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=oktetlabs.ru; s=default; t=1632949065; bh=yRKRxwpIAcN62kgmO5qwgmaDq0ynmw9/HAM23+pNpOA=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=qyS+kk0M0Qo2rAumZP+NO2VzIIuGVcQCIvHi/sSpkOgWNDWBn1n1pPjXp3LE4+hfO UGiAgR0D4yvJDuwrbqOsWLWOoGp5De5YEPnLhB61td11d/ECRKa1i/ZCw8/b7ahFby Hj4kVnitblDaK+LufiNaz2qv7RKvZozQ6xD43rrk= From: Ivan Malov To: dev@dpdk.org Cc: Andrew Rybchenko Date: Wed, 29 Sep 2021 23:57:27 +0300 Message-Id: <20210929205730.775-8-ivan.malov@oktetlabs.ru> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210929205730.775-1-ivan.malov@oktetlabs.ru> References: <20210929205730.775-1-ivan.malov@oktetlabs.ru> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 07/10] net/sfc: override match on ETH in tunnel offload JUMP rules X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The current HW/FW doesn't allow to match on MAC addresses in outer rules. One day this will change for sure, but right now a workaround is needed. Match on VLAN presence in outer rules is also unsupported. Ignore it. Signed-off-by: Ivan Malov Reviewed-by: Andrew Rybchenko --- drivers/net/sfc/sfc_mae.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c index 63ec2b02b3..374ef29d71 100644 --- a/drivers/net/sfc/sfc_mae.c +++ b/drivers/net/sfc/sfc_mae.c @@ -1395,6 +1395,7 @@ sfc_mae_rule_parse_item_eth(const struct rte_flow_item *item, struct rte_flow_error *error) { struct sfc_mae_parse_ctx *ctx_mae = ctx->mae; + struct rte_flow_item_eth override_mask; struct rte_flow_item_eth supp_mask; const uint8_t *spec = NULL; const uint8_t *mask = NULL; @@ -1412,6 +1413,22 @@ sfc_mae_rule_parse_item_eth(const struct rte_flow_item *item, if (rc != 0) return rc; + if (ctx_mae->ft_rule_type == SFC_FT_RULE_JUMP && mask != NULL) { + /* + * The HW/FW hasn't got support for match on MAC addresses in + * outer rules yet (this will change). Match on VLAN presence + * isn't supported either. Ignore these match criteria. 
+ */ + memcpy(&override_mask, mask, sizeof(override_mask)); + memset(&override_mask.hdr.d_addr, 0, + sizeof(override_mask.hdr.d_addr)); + memset(&override_mask.hdr.s_addr, 0, + sizeof(override_mask.hdr.s_addr)); + override_mask.has_vlan = 0; + + mask = (const uint8_t *)&override_mask; + } + if (spec != NULL) { struct sfc_mae_pattern_data *pdata = &ctx_mae->pattern_data; struct sfc_mae_ethertype *ethertypes = pdata->ethertypes; From patchwork Wed Sep 29 20:57:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ivan Malov X-Patchwork-Id: 100028 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1547DA0032; Wed, 29 Sep 2021 22:58:37 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0B48B4114B; Wed, 29 Sep 2021 22:57:56 +0200 (CEST) Received: from shelob.oktetlabs.ru (shelob.oktetlabs.ru [91.220.146.113]) by mails.dpdk.org (Postfix) with ESMTP id 63754410F5 for ; Wed, 29 Sep 2021 22:57:46 +0200 (CEST) Received: from localhost.localdomain (unknown [5.144.122.192]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by shelob.oktetlabs.ru (Postfix) with ESMTPSA id 10D067F553; Wed, 29 Sep 2021 23:57:46 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 shelob.oktetlabs.ru 10D067F553 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=oktetlabs.ru; s=default; t=1632949066; bh=FnoO6VMZKhJ/HQri2dLDih5KKfTTN1Y3iRRvtUfwYso=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=hq7OrGj/uLvRRjp+OJadwkgyNSr7V6MCo/yZ8pzPiwdeQxf+EWKbkHcHqHBrNb+MY m6XjQhnUiPkHAgPir9kgKRNoOKSD8QvCuFncU1ApwHtvkKhWnPPMLMxNrDdtiSATqv cHgPY5X79ujgvuyzRhn4sfSUL6d5v7T+ZnYi4Lg0= From: Ivan Malov To: dev@dpdk.org Cc: Andrew Rybchenko Date: Wed, 29 Sep 2021 23:57:28 +0300 Message-Id: <20210929205730.775-9-ivan.malov@oktetlabs.ru> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210929205730.775-1-ivan.malov@oktetlabs.ru> References: <20210929205730.775-1-ivan.malov@oktetlabs.ru> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 08/10] net/sfc: use action rules in tunnel offload JUMP rules X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" By design, JUMP flows should be represented solely by the outer rules. But the HW/FW hasn't got support for setting Rx mark from RECIRC_ID on outer rule lookup yet. Neither does it support outer rule counters. As a workaround, an action rule of lower priority is used to do the job. Signed-off-by: Ivan Malov Reviewed-by: Andrew Rybchenko --- drivers/net/sfc/sfc_flow.c | 11 +++++--- drivers/net/sfc/sfc_mae.c | 55 ++++++++++++++++++++++++++++++++------ 2 files changed, 54 insertions(+), 12 deletions(-) diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c index c3e75bae84..b0dd7d7b6c 100644 --- a/drivers/net/sfc/sfc_flow.c +++ b/drivers/net/sfc/sfc_flow.c @@ -2561,17 +2561,20 @@ sfc_flow_parse_rte_to_mae(struct rte_eth_dev *dev, if (spec_mae->ft_rule_type == SFC_FT_RULE_JUMP) { /* - * This flow is represented solely by the outer rule. 
- * It is supposed to mark and count matching packets. + * By design, this flow should be represented solely by the + * outer rule. But the HW/FW hasn't got support for setting + * Rx mark from RECIRC_ID on outer rule lookup yet. Neither + * does it support outer rule counters. As a workaround, an + * action rule of lower priority is used to do the job. + * + * So don't skip sfc_mae_rule_parse_actions() below. */ - goto skip_action_rule; } rc = sfc_mae_rule_parse_actions(sa, actions, spec_mae, error); if (rc != 0) goto fail; -skip_action_rule: if (spec_mae->ft != NULL) { if (spec_mae->ft_rule_type == SFC_FT_RULE_JUMP) spec_mae->ft->jump_rule_is_set = B_TRUE; diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c index 374ef29d71..faf3be522d 100644 --- a/drivers/net/sfc/sfc_mae.c +++ b/drivers/net/sfc/sfc_mae.c @@ -2467,6 +2467,7 @@ sfc_mae_rule_parse_pattern(struct sfc_adapter *sa, struct rte_flow_error *error) { struct sfc_mae_parse_ctx ctx_mae; + unsigned int priority_shift = 0; struct sfc_flow_parse_ctx ctx; int rc; @@ -2478,13 +2479,32 @@ sfc_mae_rule_parse_pattern(struct sfc_adapter *sa, switch (ctx_mae.ft_rule_type) { case SFC_FT_RULE_JUMP: - /* No action rule */ - break; + /* + * By design, this flow should be represented solely by the + * outer rule. But the HW/FW hasn't got support for setting + * Rx mark from RECIRC_ID on outer rule lookup yet. Neither + * does it support outer rule counters. As a workaround, an + * action rule of lower priority is used to do the job. + */ + priority_shift = 1; + + /* FALLTHROUGH */ case SFC_FT_RULE_GROUP: + if (ctx_mae.priority != 0) { + /* + * Because of the above workaround, deny the + * use of priorities to JUMP and GROUP rules. + */ + rc = rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, NULL, + "tunnel offload: priorities are not supported"); + goto fail_priority_check; + } + /* FALLTHROUGH */ case SFC_FT_RULE_NONE: rc = efx_mae_match_spec_init(sa->nic, EFX_MAE_RULE_ACTION, - spec->priority, + spec->priority + priority_shift, &ctx_mae.match_spec_action); if (rc != 0) { rc = rte_flow_error_set(error, rc, @@ -2559,6 +2579,7 @@ sfc_mae_rule_parse_pattern(struct sfc_adapter *sa, efx_mae_match_spec_fini(sa->nic, ctx_mae.match_spec_action); fail_init_match_spec_action: +fail_priority_check: return rc; } @@ -3008,11 +3029,14 @@ sfc_mae_rule_parse_action_vxlan_encap( static int sfc_mae_rule_parse_action_mark(struct sfc_adapter *sa, const struct rte_flow_action_mark *conf, + const struct sfc_flow_spec_mae *spec_mae, efx_mae_actions_t *spec) { int rc; - if (conf->id > SFC_FT_USER_MARK_MASK) { + if (spec_mae->ft_rule_type == SFC_FT_RULE_JUMP) { + /* Workaround. 
See sfc_flow_parse_rte_to_mae() */ + } else if (conf->id > SFC_FT_USER_MARK_MASK) { sfc_err(sa, "the mark value is too large"); return EINVAL; } @@ -3182,11 +3206,12 @@ static const char * const action_names[] = { static int sfc_mae_rule_parse_action(struct sfc_adapter *sa, const struct rte_flow_action *action, - const struct sfc_mae_outer_rule *outer_rule, + const struct sfc_flow_spec_mae *spec_mae, struct sfc_mae_actions_bundle *bundle, efx_mae_actions_t *spec, struct rte_flow_error *error) { + const struct sfc_mae_outer_rule *outer_rule = spec_mae->outer_rule; const uint64_t rx_meta = sa->negotiated_rx_meta; bool custom_error = B_FALSE; int rc = 0; @@ -3250,9 +3275,10 @@ sfc_mae_rule_parse_action(struct sfc_adapter *sa, case RTE_FLOW_ACTION_TYPE_MARK: SFC_BUILD_SET_OVERFLOW(RTE_FLOW_ACTION_TYPE_MARK, bundle->actions_mask); - if ((rx_meta & RTE_ETH_RX_META_USER_MARK) != 0) { + if ((rx_meta & RTE_ETH_RX_META_USER_MARK) != 0 || + spec_mae->ft_rule_type == SFC_FT_RULE_JUMP) { rc = sfc_mae_rule_parse_action_mark(sa, action->conf, - spec); + spec_mae, spec); } else { rc = rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, @@ -3286,6 +3312,12 @@ sfc_mae_rule_parse_action(struct sfc_adapter *sa, bundle->actions_mask); rc = efx_mae_action_set_populate_drop(spec); break; + case RTE_FLOW_ACTION_TYPE_JUMP: + if (spec_mae->ft_rule_type == SFC_FT_RULE_JUMP) { + /* Workaround. See sfc_flow_parse_rte_to_mae() */ + break; + } + /* FALLTHROUGH */ default: return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, NULL, @@ -3375,7 +3407,7 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa, if (rc != 0) goto fail_rule_parse_action; - rc = sfc_mae_rule_parse_action(sa, action, spec_mae->outer_rule, + rc = sfc_mae_rule_parse_action(sa, action, spec_mae, &bundle, spec, error); if (rc != 0) goto fail_rule_parse_action; @@ -3399,6 +3431,12 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa, switch (spec_mae->ft_rule_type) { case SFC_FT_RULE_NONE: break; + case SFC_FT_RULE_JUMP: + /* Workaround. 
See sfc_flow_parse_rte_to_mae() */ + rc = sfc_mae_rule_parse_action_pf_vf(sa, NULL, spec); + if (rc != 0) + goto fail_workaround_jump_delivery; + break; case SFC_FT_RULE_GROUP: /* * Packets that go to the rule's AR have FT mark set (from the @@ -3428,6 +3466,7 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa, return 0; fail_action_set_add: +fail_workaround_jump_delivery: fail_nb_count: sfc_mae_encap_header_del(sa, encap_header); From patchwork Wed Sep 29 20:57:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ivan Malov X-Patchwork-Id: 100029 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 27CA7A0032; Wed, 29 Sep 2021 22:58:42 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0F8CA41151; Wed, 29 Sep 2021 22:57:57 +0200 (CEST) Received: from shelob.oktetlabs.ru (shelob.oktetlabs.ru [91.220.146.113]) by mails.dpdk.org (Postfix) with ESMTP id A2D96410F5 for ; Wed, 29 Sep 2021 22:57:46 +0200 (CEST) Received: from localhost.localdomain (unknown [5.144.122.192]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by shelob.oktetlabs.ru (Postfix) with ESMTPSA id 581E47F6D7; Wed, 29 Sep 2021 23:57:46 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 shelob.oktetlabs.ru 581E47F6D7 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=oktetlabs.ru; s=default; t=1632949066; bh=VrxkcJZKRZXciLG0/tBGp/x07mNHL0iiF5ROBOdjbrg=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=MhJwpR1gXKG61UNgSGQErZjFQtg+Mes63ok1O0fc7XA2cr1Kmiz551voLqzjSEjrO 6hOYzKt6ivmJjXVUOXMSkRjxCwMAzYkTR51WiKQlivJGtX0HwH9PL0704K2kpxfers cVTab3IbGEqOPY/bmb6TPMC9K7q/wlPh+HVclpss= From: Ivan Malov To: dev@dpdk.org Cc: Andrew Rybchenko Date: Wed, 29 Sep 2021 23:57:29 +0300 Message-Id: <20210929205730.775-10-ivan.malov@oktetlabs.ru> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210929205730.775-1-ivan.malov@oktetlabs.ru> References: <20210929205730.775-1-ivan.malov@oktetlabs.ru> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 09/10] net/sfc: support counters in tunnel offload JUMP rules X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Such a counter will only report the number of hits, which is actually a sum of two contributions (the JUMP rule's own counter + indirect increments issued by counters of the associated GROUP rules. 
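To illustrate the arithmetic, here is a sketch only, not the exact driver code: the two fields below are the ones this patch adds to struct sfc_flow_tunnel, and jump_ar_pkts stands for the JUMP rule's own action rule packet count.

	/* Value reported for a COUNT query on a JUMP rule (illustration). */
	static uint64_t
	sfc_ft_jump_rule_hits(uint64_t jump_ar_pkts,
			      const struct sfc_flow_tunnel *ft)
	{
		/* Own hits plus hits accumulated by the tunnel's GROUP rules. */
		uint64_t hits = jump_ar_pkts + ft->group_hit_counter;

		/* Subtract the baseline stored by a previous 'reset' query. */
		return hits - ft->reset_jump_hit_counter;
	}

Only the hit count is reported for such a counter; the byte count is not reported for the combined value.
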
Signed-off-by: Ivan Malov Reviewed-by: Andrew Rybchenko --- drivers/net/sfc/sfc_flow.c | 2 ++ drivers/net/sfc/sfc_flow_tunnel.c | 18 ++++++++++ drivers/net/sfc/sfc_flow_tunnel.h | 5 +++ drivers/net/sfc/sfc_mae.c | 58 +++++++++++++++++++++++++++---- drivers/net/sfc/sfc_mae.h | 9 +++++ drivers/net/sfc/sfc_mae_counter.c | 41 +++++++++++++++++++--- drivers/net/sfc/sfc_mae_counter.h | 3 ++ 7 files changed, 125 insertions(+), 11 deletions(-) diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c index b0dd7d7b6c..689b479d87 100644 --- a/drivers/net/sfc/sfc_flow.c +++ b/drivers/net/sfc/sfc_flow.c @@ -2991,6 +2991,8 @@ sfc_flow_start(struct sfc_adapter *sa) SFC_ASSERT(sfc_adapter_is_locked(sa)); + sfc_flow_tunnel_reset_hit_counters(sa); + TAILQ_FOREACH(flow, &sa->flow_list, entries) { rc = sfc_flow_insert(sa, flow, NULL); if (rc != 0) diff --git a/drivers/net/sfc/sfc_flow_tunnel.c b/drivers/net/sfc/sfc_flow_tunnel.c index 2de401148e..399ce55cd1 100644 --- a/drivers/net/sfc/sfc_flow_tunnel.c +++ b/drivers/net/sfc/sfc_flow_tunnel.c @@ -87,6 +87,8 @@ sfc_flow_tunnel_detect_jump_rule(struct sfc_adapter *sa, } switch (actions->type) { + case RTE_FLOW_ACTION_TYPE_COUNT: + break; case RTE_FLOW_ACTION_TYPE_MARK: if (action_mark == NULL) { action_mark = actions->conf; @@ -459,3 +461,19 @@ sfc_flow_tunnel_get_restore_info(struct rte_eth_dev *dev, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "tunnel offload: get_restore_info failed"); } + +void +sfc_flow_tunnel_reset_hit_counters(struct sfc_adapter *sa) +{ + unsigned int i; + + SFC_ASSERT(sfc_adapter_is_locked(sa)); + SFC_ASSERT(sa->state != SFC_ETHDEV_STARTED); + + for (i = 0; i < RTE_DIM(sa->flow_tunnels); ++i) { + struct sfc_flow_tunnel *ft = &sa->flow_tunnels[i]; + + ft->reset_jump_hit_counter = 0; + ft->group_hit_counter = 0; + } +} diff --git a/drivers/net/sfc/sfc_flow_tunnel.h b/drivers/net/sfc/sfc_flow_tunnel.h index 573585ca80..997811809a 100644 --- a/drivers/net/sfc/sfc_flow_tunnel.h +++ b/drivers/net/sfc/sfc_flow_tunnel.h @@ -63,6 +63,9 @@ struct sfc_flow_tunnel { struct rte_flow_item_mark item_mark_v; struct rte_flow_item_mark item_mark_m; struct rte_flow_item item; + + uint64_t reset_jump_hit_counter; + uint64_t group_hit_counter; }; struct sfc_adapter; @@ -106,6 +109,8 @@ int sfc_flow_tunnel_get_restore_info(struct rte_eth_dev *dev, struct rte_flow_restore_info *info, struct rte_flow_error *err); +void sfc_flow_tunnel_reset_hit_counters(struct sfc_adapter *sa); + #ifdef __cplusplus } #endif diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c index faf3be522d..73fb40a02d 100644 --- a/drivers/net/sfc/sfc_mae.c +++ b/drivers/net/sfc/sfc_mae.c @@ -607,6 +607,8 @@ sfc_mae_action_set_add(struct sfc_adapter *sa, const struct rte_flow_action actions[], efx_mae_actions_t *spec, struct sfc_mae_encap_header *encap_header, + uint64_t *ft_group_hit_counter, + struct sfc_flow_tunnel *ft, unsigned int n_counters, struct sfc_mae_action_set **action_setp) { @@ -633,6 +635,16 @@ sfc_mae_action_set_add(struct sfc_adapter *sa, return ENOMEM; } + for (i = 0; i < n_counters; ++i) { + action_set->counters[i].rte_id_valid = B_FALSE; + action_set->counters[i].mae_id.id = + EFX_MAE_RSRC_ID_INVALID; + + action_set->counters[i].ft_group_hit_counter = + ft_group_hit_counter; + action_set->counters[i].ft = ft; + } + for (action = actions, i = 0; action->type != RTE_FLOW_ACTION_TYPE_END && i < n_counters; ++action) { @@ -643,8 +655,7 @@ sfc_mae_action_set_add(struct sfc_adapter *sa, conf = action->conf; - action_set->counters[i].mae_id.id = - 
EFX_MAE_RSRC_ID_INVALID; + action_set->counters[i].rte_id_valid = B_TRUE; action_set->counters[i].rte_id = conf->id; i++; } @@ -3373,10 +3384,12 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa, { struct sfc_mae_encap_header *encap_header = NULL; struct sfc_mae_actions_bundle bundle = {0}; + struct sfc_flow_tunnel *counter_ft = NULL; + uint64_t *ft_group_hit_counter = NULL; const struct rte_flow_action *action; struct sfc_mae *mae = &sa->mae; + unsigned int n_count = 0; efx_mae_actions_t *spec; - unsigned int n_count; int rc; rte_errno = 0; @@ -3391,11 +3404,31 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa, if (rc != 0) goto fail_action_set_spec_init; + for (action = actions; + action->type != RTE_FLOW_ACTION_TYPE_END; ++action) { + if (action->type == RTE_FLOW_ACTION_TYPE_COUNT) + ++n_count; + } + if (spec_mae->ft_rule_type == SFC_FT_RULE_GROUP) { /* JUMP rules don't decapsulate packets. GROUP rules do. */ rc = efx_mae_action_set_populate_decap(spec); if (rc != 0) goto fail_enforce_ft_decap; + + if (n_count == 0 && sfc_mae_counter_stream_enabled(sa)) { + /* + * The user opted not to use action COUNT in this rule, + * but the counter should be enabled implicitly because + * packets hitting this rule contribute to the tunnel's + * total number of hits. See sfc_mae_counter_get(). + */ + rc = efx_mae_action_set_populate_count(spec); + if (rc != 0) + goto fail_enforce_ft_count; + + n_count = 1; + } } /* Cleanup after previous encap. header bounce buffer usage. */ @@ -3421,7 +3454,6 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa, if (rc != 0) goto fail_process_encap_header; - n_count = efx_mae_action_set_get_nb_count(spec); if (n_count > 1) { rc = ENOTSUP; sfc_err(sa, "too many count actions requested: %u", n_count); @@ -3436,6 +3468,8 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa, rc = sfc_mae_rule_parse_action_pf_vf(sa, NULL, spec); if (rc != 0) goto fail_workaround_jump_delivery; + + counter_ft = spec_mae->ft; break; case SFC_FT_RULE_GROUP: /* @@ -3445,6 +3479,8 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa, * MARK above, so don't check the return value here. */ (void)efx_mae_action_set_populate_mark(spec, 0); + + ft_group_hit_counter = &spec_mae->ft->group_hit_counter; break; default: SFC_ASSERT(B_FALSE); @@ -3458,7 +3494,8 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa, return 0; } - rc = sfc_mae_action_set_add(sa, actions, spec, encap_header, n_count, + rc = sfc_mae_action_set_add(sa, actions, spec, encap_header, + ft_group_hit_counter, counter_ft, n_count, &spec_mae->action_set); if (rc != 0) goto fail_action_set_add; @@ -3474,6 +3511,7 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa, fail_rule_parse_action: efx_mae_action_set_spec_fini(sa->nic, spec); +fail_enforce_ft_count: fail_enforce_ft_decap: fail_action_set_spec_init: if (rc > 0 && rte_errno == 0) { @@ -3621,6 +3659,11 @@ sfc_mae_flow_insert(struct sfc_adapter *sa, goto fail_outer_rule_enable; } + if (spec_mae->ft_rule_type == SFC_FT_RULE_JUMP) { + spec_mae->ft->reset_jump_hit_counter = + spec_mae->ft->group_hit_counter; + } + if (action_set == NULL) { sfc_dbg(sa, "enabled flow=%p (no AR)", flow); return 0; @@ -3720,7 +3763,8 @@ sfc_mae_query_counter(struct sfc_adapter *sa, for (i = 0; i < action_set->n_counters; i++) { /* * Get the first available counter of the flow rule if - * counter ID is not specified. + * counter ID is not specified, provided that this + * counter is not an automatic (implicit) one. 
*/ if (conf != NULL && action_set->counters[i].rte_id != conf->id) continue; @@ -3738,7 +3782,7 @@ sfc_mae_query_counter(struct sfc_adapter *sa, return rte_flow_error_set(error, ENOENT, RTE_FLOW_ERROR_TYPE_ACTION, action, - "No such flow rule action count ID"); + "no such flow rule action or such count ID"); } int diff --git a/drivers/net/sfc/sfc_mae.h b/drivers/net/sfc/sfc_mae.h index 907e292dd1..b2a62fc10b 100644 --- a/drivers/net/sfc/sfc_mae.h +++ b/drivers/net/sfc/sfc_mae.h @@ -62,6 +62,13 @@ struct sfc_mae_counter_id { efx_counter_t mae_id; /* ID of a counter in RTE */ uint32_t rte_id; + /* RTE counter ID validity status */ + bool rte_id_valid; + + /* Flow Tunnel (FT) GROUP hit counter (or NULL) */ + uint64_t *ft_group_hit_counter; + /* Flow Tunnel (FT) context (for JUMP rules; otherwise, NULL) */ + struct sfc_flow_tunnel *ft; }; /** Action set registry entry */ @@ -101,6 +108,8 @@ struct sfc_mae_counter { uint32_t generation_count; union sfc_pkts_bytes value; union sfc_pkts_bytes reset; + + uint64_t *ft_group_hit_counter; }; struct sfc_mae_counters_xstats { diff --git a/drivers/net/sfc/sfc_mae_counter.c b/drivers/net/sfc/sfc_mae_counter.c index 5afd450a11..418caffe59 100644 --- a/drivers/net/sfc/sfc_mae_counter.c +++ b/drivers/net/sfc/sfc_mae_counter.c @@ -99,6 +99,8 @@ sfc_mae_counter_enable(struct sfc_adapter *sa, &p->value.pkts_bytes.int128, __ATOMIC_RELAXED); p->generation_count = generation_count; + p->ft_group_hit_counter = counterp->ft_group_hit_counter; + /* * The flag is set at the very end of add operation and reset * at the beginning of delete operation. Release ordering is @@ -210,6 +212,14 @@ sfc_mae_counter_increment(struct sfc_adapter *sa, __atomic_store(&p->value.pkts_bytes, &cnt_val.pkts_bytes, __ATOMIC_RELAXED); + if (p->ft_group_hit_counter != NULL) { + uint64_t ft_group_hit_counter; + + ft_group_hit_counter = *p->ft_group_hit_counter + pkts; + __atomic_store_n(p->ft_group_hit_counter, ft_group_hit_counter, + __ATOMIC_RELAXED); + } + sfc_info(sa, "update MAE counter #%u: pkts+%" PRIu64 "=%" PRIu64 ", bytes+%" PRIu64 "=%" PRIu64, mae_counter_id, pkts, cnt_val.pkts, bytes, cnt_val.bytes); @@ -799,6 +809,8 @@ sfc_mae_counter_get(struct sfc_mae_counters *counters, const struct sfc_mae_counter_id *counter, struct rte_flow_query_count *data) { + struct sfc_flow_tunnel *ft = counter->ft; + uint64_t non_reset_jump_hit_counter; struct sfc_mae_counter *p; union sfc_pkts_bytes value; @@ -814,14 +826,35 @@ sfc_mae_counter_get(struct sfc_mae_counters *counters, __ATOMIC_RELAXED); data->hits_set = 1; - data->bytes_set = 1; data->hits = value.pkts - p->reset.pkts; - data->bytes = value.bytes - p->reset.bytes; + + if (ft != NULL) { + data->hits += ft->group_hit_counter; + non_reset_jump_hit_counter = data->hits; + data->hits -= ft->reset_jump_hit_counter; + } else { + data->bytes_set = 1; + data->bytes = value.bytes - p->reset.bytes; + } if (data->reset != 0) { - p->reset.pkts = value.pkts; - p->reset.bytes = value.bytes; + if (ft != NULL) { + ft->reset_jump_hit_counter = non_reset_jump_hit_counter; + } else { + p->reset.pkts = value.pkts; + p->reset.bytes = value.bytes; + } } return 0; } + +bool +sfc_mae_counter_stream_enabled(struct sfc_adapter *sa) +{ + if ((sa->counter_rxq.state & SFC_COUNTER_RXQ_INITIALIZED) == 0 || + sfc_get_service_lcore(SOCKET_ID_ANY) == RTE_MAX_LCORE) + return B_FALSE; + else + return B_TRUE; +} diff --git a/drivers/net/sfc/sfc_mae_counter.h b/drivers/net/sfc/sfc_mae_counter.h index 2c953c2968..28d70f7d69 100644 --- a/drivers/net/sfc/sfc_mae_counter.h +++ 
b/drivers/net/sfc/sfc_mae_counter.h @@ -52,6 +52,9 @@ int sfc_mae_counter_get(struct sfc_mae_counters *counters, int sfc_mae_counter_start(struct sfc_adapter *sa); void sfc_mae_counter_stop(struct sfc_adapter *sa); +/* Check whether MAE Counter-on-Queue (CoQ) prerequisites are satisfied */ +bool sfc_mae_counter_stream_enabled(struct sfc_adapter *sa); + #ifdef __cplusplus } #endif From patchwork Wed Sep 29 20:57:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ivan Malov X-Patchwork-Id: 100030 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id BDD92A0032; Wed, 29 Sep 2021 22:58:47 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1A2F841157; Wed, 29 Sep 2021 22:57:58 +0200 (CEST) Received: from shelob.oktetlabs.ru (shelob.oktetlabs.ru [91.220.146.113]) by mails.dpdk.org (Postfix) with ESMTP id BC6C1410F8 for ; Wed, 29 Sep 2021 22:57:46 +0200 (CEST) Received: from localhost.localdomain (unknown [5.144.122.192]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by shelob.oktetlabs.ru (Postfix) with ESMTPSA id 924B67F6D8; Wed, 29 Sep 2021 23:57:46 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 shelob.oktetlabs.ru 924B67F6D8 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=oktetlabs.ru; s=default; t=1632949066; bh=hg3UBoA69wIe4zJIvV0h8Ses6H1YzwAh4iJImnHM6tE=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=s/NE/tGZtxeqLp7y1bR89HK9rDwq+POTUylVUJ/m1eqMqScPpo6Vf1MEni7j1XGuI sxI2ZNi+Si66ypP1HDPWSz51hzRodNGDgka7teEDRdiW5iBSmcbqT3UAOz/CAA1bjj dIetQIKBiGo3Q28HhueKOkKts2p3FCtTVpUggz8U= From: Ivan Malov To: dev@dpdk.org Cc: Andrew Rybchenko Date: Wed, 29 Sep 2021 23:57:30 +0300 Message-Id: <20210929205730.775-11-ivan.malov@oktetlabs.ru> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210929205730.775-1-ivan.malov@oktetlabs.ru> References: <20210929205730.775-1-ivan.malov@oktetlabs.ru> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 10/10] net/sfc: refine pattern of GROUP flows in tunnel offload X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" By design, in a GROUP flow, outer match criteria go to "ENC" fields of the action rule match specification. The current HW/FW hasn't got support for these fields (except the VXLAN VNI) yet. As a workaround, start parsing the pattern from the tunnel item. Signed-off-by: Ivan Malov Reviewed-by: Andrew Rybchenko --- drivers/net/sfc/sfc_mae.c | 37 ++++++++++++++++++++++++++----------- drivers/net/sfc/sfc_mae.h | 1 + 2 files changed, 27 insertions(+), 11 deletions(-) diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c index 73fb40a02d..eb2133f167 100644 --- a/drivers/net/sfc/sfc_mae.c +++ b/drivers/net/sfc/sfc_mae.c @@ -1991,14 +1991,21 @@ sfc_mae_rule_parse_item_tunnel(const struct rte_flow_item *item, const uint8_t *mask = NULL; int rc; - /* - * We're about to start processing inner frame items. - * Process pattern data that has been deferred so far - * and reset pattern data storage. 
- */ - rc = sfc_mae_rule_process_pattern_data(ctx_mae, error); - if (rc != 0) - return rc; + if (ctx_mae->ft_rule_type == SFC_FT_RULE_GROUP) { + /* + * As a workaround, pattern processing has started from + * this (tunnel) item. No pattern data to process yet. + */ + } else { + /* + * We're about to start processing inner frame items. + * Process pattern data that has been deferred so far + * and reset pattern data storage. + */ + rc = sfc_mae_rule_process_pattern_data(ctx_mae, error); + if (rc != 0) + return rc; + } memset(&ctx_mae->pattern_data, 0, sizeof(ctx_mae->pattern_data)); @@ -2317,10 +2324,10 @@ sfc_mae_rule_preparse_item_mark(const struct rte_flow_item_mark *spec, static int sfc_mae_rule_encap_parse_init(struct sfc_adapter *sa, - const struct rte_flow_item pattern[], struct sfc_mae_parse_ctx *ctx, struct rte_flow_error *error) { + const struct rte_flow_item *pattern = ctx->pattern; struct sfc_mae *mae = &sa->mae; uint8_t recirc_id = 0; int rc; @@ -2395,6 +2402,13 @@ sfc_mae_rule_encap_parse_init(struct sfc_adapter *sa, RTE_FLOW_ERROR_TYPE_ITEM, pattern, "tunnel offload: GROUP: tunnel type mismatch"); } + + /* + * The HW/FW hasn't got support for the use of "ENC" fields in + * action rules (except the VNET_ID one) yet. As a workaround, + * start parsing the pattern from the tunnel item. + */ + ctx->pattern = pattern; break; default: SFC_ASSERT(B_FALSE); @@ -2539,11 +2553,12 @@ sfc_mae_rule_parse_pattern(struct sfc_adapter *sa, ctx_mae.encap_type = EFX_TUNNEL_PROTOCOL_NONE; ctx_mae.match_spec = ctx_mae.match_spec_action; ctx_mae.field_ids_remap = field_ids_no_remap; + ctx_mae.pattern = pattern; ctx.type = SFC_FLOW_PARSE_CTX_MAE; ctx.mae = &ctx_mae; - rc = sfc_mae_rule_encap_parse_init(sa, pattern, &ctx_mae, error); + rc = sfc_mae_rule_encap_parse_init(sa, &ctx_mae, error); if (rc != 0) goto fail_encap_parse_init; @@ -2555,7 +2570,7 @@ sfc_mae_rule_parse_pattern(struct sfc_adapter *sa, spec->ft = ctx_mae.ft; rc = sfc_flow_parse_pattern(sa, sfc_flow_items, RTE_DIM(sfc_flow_items), - pattern, &ctx, error); + ctx_mae.pattern, &ctx, error); if (rc != 0) goto fail_parse_pattern; diff --git a/drivers/net/sfc/sfc_mae.h b/drivers/net/sfc/sfc_mae.h index b2a62fc10b..53959d568f 100644 --- a/drivers/net/sfc/sfc_mae.h +++ b/drivers/net/sfc/sfc_mae.h @@ -297,6 +297,7 @@ struct sfc_mae_parse_ctx { enum sfc_flow_tunnel_rule_type ft_rule_type; struct sfc_mae_pattern_data pattern_data; efx_tunnel_protocol_t encap_type; + const struct rte_flow_item *pattern; unsigned int priority; struct sfc_flow_tunnel *ft; };
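
For reference, a minimal application-side sketch of driving the tunnel offload callbacks added by this series through the generic rte_flow API. Assumptions: the port is started, transfer rules are used so that the MAE backend handles them, the group number is chosen by the application, the GROUP flow's pattern already has the PMD-provided items prepended, and error handling is abbreviated; the helper names below are illustrative.

	#include <rte_flow.h>
	#include <rte_mbuf.h>

	/* Install the JUMP and GROUP flows which together offload one VXLAN tunnel. */
	static int
	app_offload_vxlan_tunnel(uint16_t port_id, uint32_t group,
				 const struct rte_flow_item outer_pattern[],
				 const struct rte_flow_item group_pattern[],
				 const struct rte_flow_action group_actions[])
	{
		struct rte_flow_tunnel tunnel = { .type = RTE_FLOW_ITEM_TYPE_VXLAN };
		struct rte_flow_attr jump_attr = { .transfer = 1 };
		struct rte_flow_attr group_attr = { .transfer = 1, .group = group };
		struct rte_flow_action_jump jump_conf = { .group = group };
		struct rte_flow_action jump_actions[4];
		struct rte_flow_action *pmd_actions;
		struct rte_flow_item *pmd_items;
		uint32_t n_pmd_actions;
		uint32_t n_pmd_items;
		struct rte_flow_error error;
		uint32_t i;

		/* PMD-specific actions for the JUMP flow (a MARK action in this driver). */
		if (rte_flow_tunnel_decap_set(port_id, &tunnel, &pmd_actions,
					      &n_pmd_actions, &error) != 0)
			return -1;

		/* JUMP flow: outer match; PMD actions go first, then JUMP to 'group'. */
		for (i = 0; i < n_pmd_actions && i < 2; ++i)
			jump_actions[i] = pmd_actions[i];
		jump_actions[i].type = RTE_FLOW_ACTION_TYPE_JUMP;
		jump_actions[i].conf = &jump_conf;
		jump_actions[i + 1].type = RTE_FLOW_ACTION_TYPE_END;
		jump_actions[i + 1].conf = NULL;

		if (rte_flow_create(port_id, &jump_attr, outer_pattern,
				    jump_actions, &error) == NULL)
			return -1;

		/* PMD-specific items for the GROUP flow (a MARK item in this driver). */
		if (rte_flow_tunnel_match(port_id, &tunnel, &pmd_items,
					  &n_pmd_items, &error) != 0)
			return -1;

		/*
		 * GROUP flow in group 'group': 'group_pattern' is assumed to start
		 * with 'pmd_items' followed by the application's original pattern;
		 * decapsulation is implied, so no explicit decap action is given.
		 */
		if (rte_flow_create(port_id, &group_attr, group_pattern,
				    group_actions, &error) == NULL)
			return -1;

		/*
		 * After the flows are destroyed, 'pmd_actions' and 'pmd_items' are
		 * returned to the PMD with rte_flow_tunnel_action_decap_release()
		 * and rte_flow_tunnel_item_release() respectively.
		 */
		return 0;
	}

	/* Recover tunnel information for a packet that missed the GROUP flow. */
	static void
	app_handle_tunnel_miss(uint16_t port_id, struct rte_mbuf *m)
	{
		struct rte_flow_restore_info info;
		struct rte_flow_error error;

		if (rte_flow_get_restore_info(port_id, m, &info, &error) != 0)
			return;

		if ((info.flags & RTE_FLOW_RESTORE_INFO_TUNNEL) != 0) {
			/*
			 * info.tunnel describes the tunnel. With this driver,
			 * RTE_FLOW_RESTORE_INFO_ENCAPSULATED is also set because
			 * JUMP rules never strip the outer headers.
			 */
		}
	}

This matches the workaround introduced by this patch, where parsing of a GROUP flow's pattern starts from the tunnel item.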