From patchwork Sun Jun 30 18:05:56 2019
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 55713
X-Patchwork-Delegate: jerinj@marvell.com
Subject: [dpdk-dev] [PATCH v2 44/57] net/octeontx2: support VLAN offloads
To: John McNamara, Marko Kovacevic, Jerin Jacob, Nithin Dabilpuram,
 Kiran Kumar K
Cc: Vivek Sharma
Date: Sun, 30 Jun 2019 23:35:56 +0530
Message-ID: <20190630180609.36705-45-jerinj@marvell.com>
In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com>
References: <20190602152434.23996-1-jerinj@marvell.com>
 <20190630180609.36705-1-jerinj@marvell.com>
X-Mailer: git-send-email 2.21.0
List-Id: DPDK patches and discussions

From: Vivek Sharma

Support configuring VLAN offloads for an Ethernet device and dynamic
promiscuous mode configuration for VLAN filters, where the filter
entries are updated according to the promiscuous mode of the device.

Signed-off-by: Vivek Sharma
---
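A minimal usage sketch for context (not part of the change): this is how an
application would drive the new offloads through the generic ethdev API. The
helper name, port id and the 1/1 queue counts are placeholders; queue/mempool
setup, rte_eth_dev_start() and error logging are omitted.

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

/* Hypothetical helper: enable VLAN strip + filter at runtime and then
 * turn on promiscuous mode on an already probed port.
 */
static int
enable_vlan_offloads(uint16_t port_id)
{
	struct rte_eth_conf conf;
	int ret;

	memset(&conf, 0, sizeof(conf));
	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret < 0)
		return ret;

	/* Runtime toggle; this reaches the PMD through the new
	 * .vlan_offload_set dev op (otx2_nix_vlan_offload_set).
	 */
	ret = rte_eth_dev_set_vlan_offload(port_id,
					   ETH_VLAN_STRIP_OFFLOAD |
					   ETH_VLAN_FILTER_OFFLOAD);
	if (ret < 0)
		return ret;

	/* Promiscuous change is propagated to the VLAN MCAM rules via
	 * otx2_nix_vlan_update_promisc(): the MAC address match is
	 * relaxed while promiscuous mode is on and restored when off.
	 */
	rte_eth_promiscuous_enable(port_id);

	return 0;
}
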
 doc/guides/nics/features/octeontx2.ini     |   2 +
 doc/guides/nics/features/octeontx2_vec.ini |   2 +
 doc/guides/nics/features/octeontx2_vf.ini  |   2 +
 doc/guides/nics/octeontx2.rst              |   1 +
 drivers/net/octeontx2/otx2_ethdev.c        |   1 +
 drivers/net/octeontx2/otx2_ethdev.h        |   3 +
 drivers/net/octeontx2/otx2_ethdev_ops.c    |   1 +
 drivers/net/octeontx2/otx2_rx.h            |   1 +
 drivers/net/octeontx2/otx2_vlan.c          | 523 ++++++++++++++++++++-
 9 files changed, 527 insertions(+), 9 deletions(-)

diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 33d2f2785..ac4712b0c 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -23,6 +23,8 @@ RSS reta update = Y
 Inner RSS            = Y
 Flow control         = Y
 Flow API             = Y
+VLAN offload         = Y
+QinQ offload         = Y
 Packet type parsing  = Y
 Timesync             = Y
 Timestamp offload    = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 980a4daf9..e54c1babe 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -23,6 +23,8 @@ RSS reta update = Y
 Inner RSS            = Y
 Flow control         = Y
 Flow API             = Y
+VLAN offload         = Y
+QinQ offload         = Y
 Packet type parsing  = Y
 Rx descriptor status = Y
 Basic stats          = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 330534a90..769ab16ee 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -18,6 +18,8 @@ RSS key update = Y
 RSS reta update      = Y
 Inner RSS            = Y
 Flow API             = Y
+VLAN offload         = Y
+QinQ offload         = Y
 Packet type parsing  = Y
 Rx descriptor status = Y
 Basic stats          = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 0f1756932..a53b71a6d 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -24,6 +24,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
 - Receiver Side Scaling (RSS)
 - MAC filtering
 - Generic flow API
+- VLAN/QinQ stripping and insertion
 - Port hardware statistics
 - Link state information
 - Link flow control
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 48c2e8f57..55a5cdc48 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1345,6 +1345,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
 	.timesync_adjust_time = otx2_nix_timesync_adjust_time,
 	.timesync_read_time = otx2_nix_timesync_read_time,
 	.timesync_write_time = otx2_nix_timesync_write_time,
+	.vlan_offload_set = otx2_nix_vlan_offload_set,
 };
 
 static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 8577272b4..50fd18b6e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -221,6 +221,7 @@ struct otx2_vlan_info {
 	uint8_t filter_on;
 	uint8_t strip_on;
 	uint8_t qinq_on;
+	uint8_t promisc_on;
 };
 
 struct otx2_eth_dev {
@@ -447,6 +448,8 @@ int otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev);
 /* VLAN */
 int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev);
 int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev);
+int otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask);
+void otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable);
 
 /* Lookup configuration */
 void *otx2_nix_fastpath_lookup_mem_get(void);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index e55acd4e0..690d8ac0c 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -40,6 +40,7 @@ otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en)
 	otx2_mbox_process(mbox);
 
 	eth_dev->data->promiscuous = en;
+	otx2_nix_vlan_update_promisc(eth_dev, en);
 }
 
 void
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index e18e04658..7dc34d705 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -16,6 +16,7 @@
 					 sizeof(uint16_t))
 
 #define NIX_RX_OFFLOAD_PTYPE_F		BIT(1)
+#define NIX_RX_OFFLOAD_VLAN_STRIP_F	BIT(3)
 #define NIX_RX_OFFLOAD_MARK_UPDATE_F	BIT(4)
 #define NIX_RX_OFFLOAD_TSTAMP_F		BIT(5)
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index b3136d2cf..7cf4f3136 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -14,6 +14,7 @@
 #define MAC_ADDR_MATCH	0x4
 #define QINQ_F_MATCH	0x8
 #define VLAN_DROP	0x10
+#define DEF_F_ENTRY	0x20
 
 enum vtag_cfg_dir {
 	VTAG_TX,
@@ -39,8 +40,50 @@ __rte_unused nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev,
 	return rc;
 }
 
+static void
+nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
+		       struct mcam_entry *entry, bool qinq, bool drop)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	int pcifunc = otx2_pfvf_func(dev->pf, dev->vf);
+	uint64_t action = 0, vtag_action = 0;
+
+	action = NIX_RX_ACTIONOP_UCAST;
+
+	if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+		action = NIX_RX_ACTIONOP_RSS;
+		action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
+	}
+
+	action |= (uint64_t)pcifunc << 4;
+	entry->action = action;
+
+	if (drop) {
+		entry->action &= ~((uint64_t)0xF);
+		entry->action |= NIX_RX_ACTIONOP_DROP;
+		return;
+	}
+
+	if (!qinq) {
+		/* VTAG0 fields denote CTAG in single vlan case */
+		vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
+		vtag_action |= (NPC_LID_LB << 8);
+		vtag_action |= NIX_RX_VTAGACTION_VTAG0_RELPTR;
+	} else {
+		/* VTAG0 & VTAG1 fields denote CTAG & STAG respectively */
+		vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
+		vtag_action |= (NPC_LID_LB << 8);
+		vtag_action |= NIX_RX_VTAGACTION_VTAG1_RELPTR;
+		vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 47);
+		vtag_action |= ((uint64_t)(NPC_LID_LB) << 40);
+		vtag_action |= (NIX_RX_VTAGACTION_VTAG0_RELPTR << 32);
+	}
+
+	entry->vtag_action = vtag_action;
+}
+
 static int
-__rte_unused nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
+nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
 {
 	struct npc_mcam_free_entry_req *req;
 	struct otx2_mbox *mbox = dev->mbox;
@@ -54,8 +97,8 @@ __rte_unused nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
 }
 
 static int
-__rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
-				 struct mcam_entry *entry, uint8_t intf)
+nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
+		    struct mcam_entry *entry, uint8_t intf, uint8_t ena)
 {
 	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
 	struct npc_mcam_write_entry_req *req;
@@ -67,7 +110,7 @@ __rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
 
 	req->entry = ent_idx;
 	req->intf = intf;
-	req->enable_entry = 1;
+	req->enable_entry = ena;
 	memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
 
 	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
@@ -75,9 +118,9 @@ __rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
 }
 
 static int
-__rte_unused nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev,
-					    struct mcam_entry *entry,
-					    uint8_t intf, bool drop)
+nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev,
+			      struct mcam_entry *entry,
+			      uint8_t intf, bool drop)
 {
 	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
 	struct npc_mcam_alloc_and_write_entry_req *req;
@@ -114,6 +157,443 @@ __rte_unused nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev,
 	return rsp->entry;
 }
 
+static void
+nix_vlan_update_mac(struct rte_eth_dev *eth_dev, int mcam_index,
+		    int enable)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
+	volatile uint8_t *key_data, *key_mask;
+	struct npc_mcam_read_entry_req *req;
+	struct npc_mcam_read_entry_rsp *rsp;
+	struct otx2_mbox *mbox = dev->mbox;
+	uint64_t mcam_data, mcam_mask;
+	struct mcam_entry entry;
+	uint8_t intf, mcam_ena;
+	int idx, rc = -EINVAL;
+	uint8_t *mac_addr;
+
+	memset(&entry, 0, sizeof(struct mcam_entry));
+
+	/* Read entry first */
+	req = otx2_mbox_alloc_msg_npc_mcam_read_entry(mbox);
+
+	req->entry = mcam_index;
+
+	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+	if (rc) {
+		otx2_err("Failed to read entry %d", mcam_index);
+		return;
+	}
+
+	entry = rsp->entry_data;
+	intf = rsp->intf;
+	mcam_ena = rsp->enable;
+
+	/* Update mcam address */
+	key_data = (volatile uint8_t *)entry.kw;
+	key_mask = (volatile uint8_t *)entry.kw_mask;
+
+	if (enable) {
+		mcam_mask = 0;
+		otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
+				 &mcam_mask, mkex->la_xtract.len + 1);
+
+	} else {
+		mcam_data = 0ULL;
+		mac_addr = dev->mac_addr;
+		for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--)
+			mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx);
+
+		mcam_mask = BIT_ULL(48) - 1;
+
+		otx2_mbox_memcpy(key_data + mkex->la_xtract.key_off,
+				 &mcam_data, mkex->la_xtract.len + 1);
+		otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
+				 &mcam_mask, mkex->la_xtract.len + 1);
+	}
+
+	/* Write back the mcam entry */
+	rc = nix_vlan_mcam_write(eth_dev, mcam_index,
+				 &entry, intf, mcam_ena);
+	if (rc) {
+		otx2_err("Failed to write entry %d", mcam_index);
+		return;
+	}
+}
+
+void
+otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_vlan_info *vlan = &dev->vlan_info;
+	struct vlan_entry *entry;
+
+	/* Already in required mode */
+	if (enable == vlan->promisc_on)
+		return;
+
+	/* Update default rx entry */
+	if (vlan->def_rx_mcam_idx)
+		nix_vlan_update_mac(eth_dev, vlan->def_rx_mcam_idx, enable);
+
+	/* Update all other rx filter entries */
+	TAILQ_FOREACH(entry, &vlan->fltr_tbl, next)
+		nix_vlan_update_mac(eth_dev, entry->mcam_idx, enable);
+
+	vlan->promisc_on = enable;
+}
+
+/* Configure mcam entry with required MCAM search rules */
+static int
+nix_vlan_mcam_config(struct rte_eth_dev *eth_dev,
+		     uint16_t vlan_id, uint16_t flags)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
+	volatile uint8_t *key_data, *key_mask;
+	uint64_t mcam_data, mcam_mask;
+	struct mcam_entry entry;
+	uint8_t *mac_addr;
+	int idx, kwi = 0;
+
+	memset(&entry, 0, sizeof(struct mcam_entry));
+	key_data = (volatile uint8_t *)entry.kw;
+	key_mask = (volatile uint8_t *)entry.kw_mask;
+
+	/* Channel base extracted to KW0[11:0] */
+	entry.kw[kwi] = dev->rx_chan_base;
+	entry.kw_mask[kwi] = BIT_ULL(12) - 1;
+
+	/* Adds vlan_id & LB CTAG flag to MCAM KW */
+	if (flags & VLAN_ID_MATCH) {
+		entry.kw[kwi] |= NPC_LT_LB_CTAG << mkex->lb_lt_offset;
+		entry.kw_mask[kwi] |= 0xFULL << mkex->lb_lt_offset;
+
+		mcam_data = (vlan_id << 16);
+		mcam_mask = (BIT_ULL(16) - 1) << 16;
+		otx2_mbox_memcpy(key_data + mkex->lb_xtract.key_off,
+				 &mcam_data, mkex->lb_xtract.len + 1);
+		otx2_mbox_memcpy(key_mask + mkex->lb_xtract.key_off,
+				 &mcam_mask, mkex->lb_xtract.len + 1);
+	}
+
+	/* Adds LB STAG flag to MCAM KW */
+	if (flags & QINQ_F_MATCH) {
+		entry.kw[kwi] |= NPC_LT_LB_STAG << mkex->lb_lt_offset;
+		entry.kw_mask[kwi] |= 0xFULL << mkex->lb_lt_offset;
+	}
+
+	/* Adds LB CTAG & LB STAG flags to MCAM KW */
+	if (flags & VTAG_F_MATCH) {
+		entry.kw[kwi] |= (NPC_LT_LB_CTAG | NPC_LT_LB_STAG)
+							<< mkex->lb_lt_offset;
+		entry.kw_mask[kwi] |= (NPC_LT_LB_CTAG & NPC_LT_LB_STAG)
+							<< mkex->lb_lt_offset;
+	}
+
+	/* Adds port MAC address to MCAM KW */
+	if (flags & MAC_ADDR_MATCH) {
+		mcam_data = 0ULL;
+		mac_addr = dev->mac_addr;
+		for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--)
+			mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx);
+
+		mcam_mask = BIT_ULL(48) - 1;
+		otx2_mbox_memcpy(key_data + mkex->la_xtract.key_off,
+				 &mcam_data, mkex->la_xtract.len + 1);
+		otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
+				 &mcam_mask, mkex->la_xtract.len + 1);
+	}
+
+	/* VLAN_DROP: for drop action for all vlan packets when filter is on.
+	 * For QinQ, enable vtag action for both outer & inner tags
+	 */
+	if (flags & VLAN_DROP)
+		nix_set_rx_vlan_action(eth_dev, &entry, false, true);
+	else if (flags & QINQ_F_MATCH)
+		nix_set_rx_vlan_action(eth_dev, &entry, true, false);
+	else
+		nix_set_rx_vlan_action(eth_dev, &entry, false, false);
+
+	if (flags & DEF_F_ENTRY)
+		dev->vlan_info.def_rx_mcam_ent = entry;
+
+	return nix_vlan_mcam_alloc_and_write(eth_dev, &entry, NIX_INTF_RX,
+					     flags & VLAN_DROP);
+}
+
+/* Installs/Removes/Modifies default rx entry */
+static int
+nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
+				 bool filter, bool enable)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_vlan_info *vlan = &dev->vlan_info;
+	uint16_t flags = 0;
+	int mcam_idx, rc;
+
+	/* Use default mcam entry to either drop vlan traffic when
+	 * vlan filter is on or strip vtag when strip is enabled.
+	 * Allocate default entry which matches port mac address
+	 * and vtag(ctag/stag) flags with drop action.
+	 */
+	if (!vlan->def_rx_mcam_idx) {
+		if (!eth_dev->data->promiscuous)
+			flags = MAC_ADDR_MATCH;
+
+		if (filter && enable)
+			flags |= VTAG_F_MATCH | VLAN_DROP;
+		else if (strip && enable)
+			flags |= VTAG_F_MATCH;
+		else
+			return 0;
+
+		flags |= DEF_F_ENTRY;
+
+		mcam_idx = nix_vlan_mcam_config(eth_dev, 0, flags);
+		if (mcam_idx < 0) {
+			otx2_err("Failed to config vlan mcam");
+			return -mcam_idx;
+		}
+
+		vlan->def_rx_mcam_idx = mcam_idx;
+		return 0;
+	}
+
+	/* Filter is already enabled, so packets would be dropped anyways. No
+	 * processing needed for enabling strip wrt mcam entry.
+	 */
+
+	/* Filter disable request */
+	if (vlan->filter_on && filter && !enable) {
+		vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF);
+
+		/* Free default rx entry only when
+		 * 1. strip is not on and
+		 * 2. qinq entry is allocated before default entry.
+		 */
+		if (vlan->strip_on ||
+		    (vlan->qinq_on && !vlan->qinq_before_def)) {
+			if (eth_dev->data->dev_conf.rxmode.mq_mode ==
+								ETH_MQ_RX_RSS)
+				vlan->def_rx_mcam_ent.action |=
+							NIX_RX_ACTIONOP_RSS;
+			else
+				vlan->def_rx_mcam_ent.action |=
+							NIX_RX_ACTIONOP_UCAST;
+			return nix_vlan_mcam_write(eth_dev,
+						   vlan->def_rx_mcam_idx,
+						   &vlan->def_rx_mcam_ent,
+						   NIX_INTF_RX, 1);
+		} else {
+			rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
+			if (rc)
+				return rc;
+			vlan->def_rx_mcam_idx = 0;
+		}
+	}
+
+	/* Filter enable request */
+	if (!vlan->filter_on && filter && enable) {
+		vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF);
+		vlan->def_rx_mcam_ent.action |= NIX_RX_ACTIONOP_DROP;
+		return nix_vlan_mcam_write(eth_dev, vlan->def_rx_mcam_idx,
+					   &vlan->def_rx_mcam_ent, NIX_INTF_RX, 1);
+	}
+
+	/* Strip disable request */
+	if (vlan->strip_on && strip && !enable) {
+		if (!vlan->filter_on &&
+		    !(vlan->qinq_on && !vlan->qinq_before_def)) {
+			rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
+			if (rc)
+				return rc;
+			vlan->def_rx_mcam_idx = 0;
+		}
+	}
+
+	return 0;
+}
+
+/* Configure vlan stripping on or off */
+static int
+nix_vlan_hw_strip(struct rte_eth_dev *eth_dev, const uint8_t enable)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_mbox *mbox = dev->mbox;
+	struct nix_vtag_config *vtag_cfg;
+	int rc = -EINVAL;
+
+	rc = nix_vlan_handle_default_rx_entry(eth_dev, true, false, enable);
+	if (rc) {
+		otx2_err("Failed to config default rx entry");
+		return rc;
+	}
+
+	vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
+	/* cfg_type = 1 for rx vlan cfg */
+	vtag_cfg->cfg_type = VTAG_RX;
+
+	if (enable)
+		vtag_cfg->rx.strip_vtag = 1;
+	else
+		vtag_cfg->rx.strip_vtag = 0;
+
+	/* Always capture */
+	vtag_cfg->rx.capture_vtag = 1;
+	vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
+	/* Use rx vtag type index[0] for now */
+	vtag_cfg->rx.vtag_type = 0;
+
+	rc = otx2_mbox_process(mbox);
+	if (rc)
+		return rc;
+
+	dev->vlan_info.strip_on = enable;
+	return rc;
+}
+
+/* Configure vlan filtering on or off for all vlans if vlan_id == 0 */
+static int
+nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable,
+		   uint16_t vlan_id)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	int rc = -EINVAL;
+
+	if (!vlan_id && enable) {
+		rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true,
+						      enable);
+		if (rc) {
+			otx2_err("Failed to config vlan mcam");
+			return rc;
+		}
+		dev->vlan_info.filter_on = enable;
+		return 0;
+	}
+
+	if (!vlan_id && !enable) {
+		rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true,
+						      enable);
+		if (rc) {
+			otx2_err("Failed to config vlan mcam");
+			return rc;
+		}
+		dev->vlan_info.filter_on = enable;
+		return 0;
+	}
+
+	return 0;
+}
+
+/* Configure double vlan(qinq) on or off */
+static int
+otx2_nix_config_double_vlan(struct rte_eth_dev *eth_dev,
+			    const uint8_t enable)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_vlan_info *vlan_info;
+	int mcam_idx;
+	int rc;
+
+	vlan_info = &dev->vlan_info;
+
+	if (!enable) {
+		if (!vlan_info->qinq_mcam_idx)
+			return 0;
+
+		rc = nix_vlan_mcam_free(dev, vlan_info->qinq_mcam_idx);
+		if (rc)
+			return rc;
+
+		vlan_info->qinq_mcam_idx = 0;
+		dev->vlan_info.qinq_on = 0;
+		vlan_info->qinq_before_def = 0;
+		return 0;
+	}
+
+	if (eth_dev->data->promiscuous)
+		mcam_idx = nix_vlan_mcam_config(eth_dev, 0, QINQ_F_MATCH);
+	else
+		mcam_idx = nix_vlan_mcam_config(eth_dev, 0,
+						QINQ_F_MATCH | MAC_ADDR_MATCH);
+	if (mcam_idx < 0)
+		return mcam_idx;
+
+	if (!vlan_info->def_rx_mcam_idx)
+		vlan_info->qinq_before_def = 1;
+
+	vlan_info->qinq_mcam_idx = mcam_idx;
+	dev->vlan_info.qinq_on = 1;
+	return 0;
+}
+
+int
+otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	uint64_t offloads = dev->rx_offloads;
+	struct rte_eth_rxmode *rxmode;
+	int rc;
+
+	rxmode = &eth_dev->data->dev_conf.rxmode;
+
+	if (mask & ETH_VLAN_EXTEND_MASK) {
+		otx2_err("Extend offload not supported");
+		return -ENOTSUP;
+	}
+
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+			offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+			rc = nix_vlan_hw_strip(eth_dev, true);
+		} else {
+			offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+			rc = nix_vlan_hw_strip(eth_dev, false);
		}
+		if (rc)
+			goto done;
+	}
+
+	if (mask & ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+			offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+			rc = nix_vlan_hw_filter(eth_dev, true, 0);
+		} else {
+			offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+			rc = nix_vlan_hw_filter(eth_dev, false, 0);
+		}
+		if (rc)
+			goto done;
+	}
+
+	if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP) {
+		if (!dev->vlan_info.qinq_on) {
+			offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+			rc = otx2_nix_config_double_vlan(eth_dev, true);
+			if (rc)
+				goto done;
+		}
+	} else {
+		if (dev->vlan_info.qinq_on) {
+			offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+			rc = otx2_nix_config_double_vlan(eth_dev, false);
+			if (rc)
+				goto done;
+		}
+	}
+
+	if (offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
+			DEV_RX_OFFLOAD_QINQ_STRIP)) {
+		dev->rx_offloads |= offloads;
+		dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
+	}
+
+done:
+	return rc;
+}
+
 static int
 nix_vlan_rx_mkex_offset(uint64_t mask)
 {
@@ -170,7 +650,7 @@ int
 otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
 {
 	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-	int rc;
+	int rc, mask;
 
 	/* Port initialized for first time or restarted */
 	if (!dev->configured) {
@@ -179,12 +659,37 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
 			otx2_err("Failed to get vlan mkex info rc=%d", rc);
 			return rc;
 		}
+
+		TAILQ_INIT(&dev->vlan_info.fltr_tbl);
 	}
+
+	mask =
+	    ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+	rc = otx2_nix_vlan_offload_set(eth_dev, mask);
+	if (rc) {
+		otx2_err("Failed to set vlan offload rc=%d", rc);
+		return rc;
+	}
+
 	return 0;
 }
 
 int
-otx2_nix_vlan_fini(__rte_unused struct rte_eth_dev *eth_dev)
+otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev)
 {
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_vlan_info *vlan = &dev->vlan_info;
+	int rc;
+
+	if (!dev->configured) {
+		if (vlan->def_rx_mcam_idx) {
+			rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
+			if (rc)
+				return rc;
+		}
+	}
+
+	otx2_nix_config_double_vlan(eth_dev, false);
+	vlan->def_rx_mcam_idx = 0;
 	return 0;
 }
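
A follow-up usage note, not part of the patch: once VLAN/QinQ strip is
enabled, applications read the stripped tags through the generic mbuf API.
The helper below is only an illustration (the helper name, queue id and
burst size are placeholders); the flags and fields are the standard
rte_mbuf ones, not anything specific to this driver.

#include <stdio.h>
#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Illustrative RX fragment: dump VLAN info that a PMD reports through
 * the mbuf when VLAN/QinQ strip is active, then free the packets.
 */
static void
dump_stripped_tags(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[32];
	uint16_t nb_rx, i;

	nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
	for (i = 0; i < nb_rx; i++) {
		struct rte_mbuf *m = pkts[i];

		if (m->ol_flags & PKT_RX_VLAN_STRIPPED)
			printf("vlan tci=0x%x\n", m->vlan_tci);
		if (m->ol_flags & PKT_RX_QINQ_STRIPPED)
			printf("outer vlan tci=0x%x\n", m->vlan_tci_outer);

		rte_pktmbuf_free(m);
	}
}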