From patchwork Thu Sep 21 02:15:47 2023
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 131753
X-Patchwork-Delegate: gakhil@marvell.com
From: Nithin Dabilpuram
To: Pavan Nikhilesh, Shijith Thotton, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao
CC: dev@dpdk.org
Subject: [PATCH v2 2/3] net/cnxk: support inline ingress out of place session
Date: Thu, 21 Sep 2023 07:45:47 +0530
Message-ID: <20230921021548.1196858-2-ndabilpuram@marvell.com>
In-Reply-To: <20230921021548.1196858-1-ndabilpuram@marvell.com>
References: <20230309085645.1630826-1-ndabilpuram@marvell.com>
	<20230921021548.1196858-1-ndabilpuram@marvell.com>
List-Id: DPDK patches and discussions

Add support for inline ingress session with out-of-place support.
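With an ingress out-of-place (OOP) session, the application receives the
decrypted packet as usual, and the original packet as seen on the wire is
made available through the rte_security OOP mbuf dynamic field. As a rough
illustration (not part of this patch; port_id, crypto_xform and sess_pool
are assumed to be set up by the application), such a session could be
requested as follows:

    /* Sketch only: inline inbound ESP session with OOP enabled.
     * Everything except the rte_security API names is an assumption.
     */
    struct rte_security_session_conf conf = {
        .action_type = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
        .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
        .ipsec = {
            .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
            .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
            .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
            .spi = 100, /* example SPI */
            /* Deliver the original packet too; note that this driver
             * rejects combining it with ip_reassembly_en.
             */
            .options.ingress_oop = 1,
        },
        .crypto_xform = &crypto_xform,
    };
    void *sec_ctx = rte_eth_dev_get_sec_ctx(port_id);
    void *sess = rte_security_session_create(sec_ctx, &conf, sess_pool);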
Signed-off-by: Nithin Dabilpuram
---
 drivers/event/cnxk/cn10k_worker.h   |  12 +-
 drivers/net/cnxk/cn10k_ethdev.c     |  13 +-
 drivers/net/cnxk/cn10k_ethdev_sec.c |  43 +++++++
 drivers/net/cnxk/cn10k_rx.h         | 181 ++++++++++++++++++++++++----
 drivers/net/cnxk/cn10k_rxtx.h       |   1 +
 drivers/net/cnxk/cnxk_ethdev.h      |   9 ++
 6 files changed, 229 insertions(+), 30 deletions(-)

diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index b4ee023723..46bfa9dd9d 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -59,9 +59,9 @@ cn10k_process_vwqe(uintptr_t vwqe, uint16_t port_id, const uint32_t flags, struc
 	uint16_t lmt_id, d_off;
 	struct rte_mbuf **wqe;
 	struct rte_mbuf *mbuf;
+	uint64_t sa_base = 0;
 	uintptr_t cpth = 0;
 	uint8_t loff = 0;
-	uint64_t sa_base;
 	int i;
 
 	mbuf_init |= ((uint64_t)port_id) << 48;
@@ -125,6 +125,11 @@ cn10k_process_vwqe(uintptr_t vwqe, uint16_t port_id, const uint32_t flags, struc
 
 			cpth = ((uintptr_t)mbuf + (uint16_t)d_off);
 
+			/* Update mempool pointer for full mode pkt */
+			if ((flags & NIX_RX_REAS_F) && (cq_w1 & BIT(11)) &&
+			    !((*(uint64_t *)cpth) & BIT(15)))
+				mbuf->pool = mp;
+
 			mbuf = nix_sec_meta_to_mbuf_sc(cq_w1, cq_w5, sa_base, laddr,
 						       &loff, mbuf, d_off, flags,
 						       mbuf_init);
@@ -199,6 +204,11 @@ cn10k_sso_hws_post_process(struct cn10k_sso_hws *ws, uint64_t *u64,
 		mp = (struct rte_mempool *)cnxk_nix_inl_metapool_get(port, lookup_mem);
 		meta_aura = mp ? mp->pool_id : m->pool->pool_id;
 
+		/* Update mempool pointer for full mode pkt */
+		if (mp && (flags & NIX_RX_REAS_F) && (cq_w1 & BIT(11)) &&
+		    !((*(uint64_t *)cpth) & BIT(15)))
+			((struct rte_mbuf *)mbuf)->pool = mp;
+
 		mbuf = (uint64_t)nix_sec_meta_to_mbuf_sc(
 			cq_w1, cq_w5, sa_base, (uintptr_t)&iova, &loff,
 			(struct rte_mbuf *)mbuf, d_off, flags,
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 4c4acc7cf0..f1504a6873 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -355,11 +355,13 @@ cn10k_nix_rx_queue_meta_aura_update(struct rte_eth_dev *eth_dev)
 		rq = &dev->rqs[i];
 		rxq = eth_dev->data->rx_queues[i];
 		rxq->meta_aura = rq->meta_aura_handle;
+		rxq->meta_pool = dev->nix.meta_mempool;
 		/* Assume meta packet from normal aura if meta aura is not setup */
 		if (!rxq->meta_aura) {
 			rxq_sp = cnxk_eth_rxq_to_sp(rxq);
 			rxq->meta_aura = rxq_sp->qconf.mp->pool_id;
+			rxq->meta_pool = (uintptr_t)rxq_sp->qconf.mp;
 		}
 	}
 	/* Store mempool in lookup mem */
@@ -639,14 +641,17 @@ cn10k_nix_reassembly_conf_set(struct rte_eth_dev *eth_dev,
 
 	if (!conf->flags) {
 		/* Clear offload flags on disable */
-		dev->rx_offload_flags &= ~NIX_RX_REAS_F;
+		if (!dev->inb.nb_oop)
+			dev->rx_offload_flags &= ~NIX_RX_REAS_F;
+		dev->inb.reass_en = false;
 		return 0;
 	}
 
-	rc = roc_nix_reassembly_configure(conf->timeout_ms,
-				conf->max_frags);
-	if (!rc && dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
+	rc = roc_nix_reassembly_configure(conf->timeout_ms, conf->max_frags);
+	if (!rc && dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		dev->rx_offload_flags |= NIX_RX_REAS_F;
+		dev->inb.reass_en = true;
+	}
 
 	return rc;
 }
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index a7473922af..9a831634da 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -9,6 +9,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -324,6 +325,7 @@ static const struct rte_security_capability cn10k_eth_sec_ipsec_capabilities[] =
 					.l4_csum_enable = 1,
 					.stats = 1,
 					.esn = 1,
+					.ingress_oop = 1,
 				},
 			},
 			.crypto_capabilities = cn10k_eth_sec_crypto_caps,
@@ -373,6 +375,7 @@ static const struct rte_security_capability cn10k_eth_sec_ipsec_capabilities[] =
 					.l4_csum_enable = 1,
 					.stats = 1,
 					.esn = 1,
+					.ingress_oop = 1,
 				},
 			},
 			.crypto_capabilities = cn10k_eth_sec_crypto_caps,
@@ -396,6 +399,7 @@ static const struct rte_security_capability cn10k_eth_sec_ipsec_capabilities[] =
 					.l4_csum_enable = 1,
 					.stats = 1,
 					.esn = 1,
+					.ingress_oop = 1,
 				},
 			},
 			.crypto_capabilities = cn10k_eth_sec_crypto_caps,
@@ -746,6 +750,20 @@ cn10k_eth_sec_session_create(void *device,
 			return -rte_errno;
 	}
 
+	/* Register for security OOP dynfield if required */
+	if (conf->ipsec.options.ingress_oop &&
+	    rte_security_oop_dynfield_offset < 0) {
+		if (rte_security_oop_dynfield_register() < 0)
+			return -rte_errno;
+	}
+
+	/* We cannot support inbound reassembly and OOP together */
+	if (conf->ipsec.options.ip_reassembly_en &&
+	    conf->ipsec.options.ingress_oop) {
+		plt_err("Cannot support inbound reassembly and OOP together");
+		return -ENOTSUP;
+	}
+
 	ipsec = &conf->ipsec;
 	crypto = conf->crypto_xform;
 	inbound = !!(ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS);
@@ -832,6 +850,12 @@ cn10k_eth_sec_session_create(void *device,
 			inb_sa_dptr->w0.s.count_mib_bytes = 1;
 			inb_sa_dptr->w0.s.count_mib_pkts = 1;
 		}
+
+		/* Enable out-of-place processing */
+		if (ipsec->options.ingress_oop)
+			inb_sa_dptr->w0.s.pkt_format =
+				ROC_IE_OT_SA_PKT_FMT_FULL;
+
 		/* Prepare session priv */
 		sess_priv.inb_sa = 1;
 		sess_priv.sa_idx = ipsec->spi & spi_mask;
@@ -843,6 +867,7 @@ cn10k_eth_sec_session_create(void *device,
 		eth_sec->spi = ipsec->spi;
 		eth_sec->inl_dev = !!dev->inb.inl_dev;
 		eth_sec->inb = true;
+		eth_sec->inb_oop = !!ipsec->options.ingress_oop;
 
 		TAILQ_INSERT_TAIL(&dev->inb.list, eth_sec, entry);
 		dev->inb.nb_sess++;
@@ -858,6 +883,15 @@ cn10k_eth_sec_session_create(void *device,
 			inb_priv->reass_dynflag_bit = dev->reass_dynflag_bit;
 		}
 
+		if (ipsec->options.ingress_oop)
+			dev->inb.nb_oop++;
+
+		/* Update function pointer to handle OOP sessions */
+		if (dev->inb.nb_oop &&
+		    !(dev->rx_offload_flags & NIX_RX_REAS_F)) {
+			dev->rx_offload_flags |= NIX_RX_REAS_F;
+			cn10k_eth_set_rx_function(eth_dev);
+		}
 	} else {
 		struct roc_ot_ipsec_outb_sa *outb_sa, *outb_sa_dptr;
 		struct cn10k_outb_priv_data *outb_priv;
@@ -1007,6 +1041,15 @@ cn10k_eth_sec_session_destroy(void *device, struct rte_security_session *sess)
 			sizeof(struct roc_ot_ipsec_inb_sa));
 		TAILQ_REMOVE(&dev->inb.list, eth_sec, entry);
 		dev->inb.nb_sess--;
+		if (eth_sec->inb_oop)
+			dev->inb.nb_oop--;
+
+		/* Clear offload flags if they were set only for OOP */
+		if (!dev->inb.nb_oop && !dev->inb.reass_en &&
+		    dev->rx_offload_flags & NIX_RX_REAS_F) {
+			dev->rx_offload_flags &= ~NIX_RX_REAS_F;
+			cn10k_eth_set_rx_function(eth_dev);
+		}
 	} else {
 		/* Disable SA */
 		sa_dptr = dev->outb.sa_dptr;
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index 3bf89b8c6c..6533804827 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -402,6 +402,41 @@ nix_sec_reassemble_frags(const struct cpt_parse_hdr_s *hdr, struct rte_mbuf *hea
 	return head;
 }
 
+static inline struct rte_mbuf *
+nix_sec_oop_process(const struct cpt_parse_hdr_s *hdr, struct rte_mbuf *mbuf, uint64_t *mbuf_init)
+{
+	uintptr_t wqe = rte_be_to_cpu_64(hdr->wqe_ptr);
+	union nix_rx_parse_u *inner_rx;
+	struct rte_mbuf *inner;
+	uint16_t data_off;
+
+	inner = ((struct rte_mbuf *)wqe) - 1;
+
+	inner_rx = (union nix_rx_parse_u *)(wqe + 8);
+	inner->pkt_len = inner_rx->pkt_lenm1 + 1;
+	inner->data_len = inner_rx->pkt_lenm1 + 1;
+
+	/* Mark inner mbuf as get */
+	RTE_MEMPOOL_CHECK_COOKIES(inner->pool,
+				  (void **)&inner, 1, 1);
+
+	/* Update rearm data for the full-packet mbuf, as it has a
+	 * CPT parse header that needs to be skipped.
+	 *
+	 * Since the meta pool has no private area while the ethdev
+	 * RQ's first skip accounts for one, calculate the actual
+	 * data offset and update it in the meta mbuf.
+	 */
+	data_off = (uintptr_t)hdr - (uintptr_t)mbuf->buf_addr;
+	data_off += sizeof(struct cpt_parse_hdr_s);
+	data_off += hdr->w0.pad_len;
+	*mbuf_init &= ~0xFFFFUL;
+	*mbuf_init |= (uint64_t)data_off;
+
+	*rte_security_oop_dynfield(mbuf) = inner;
+	/* Return outer instead of inner mbuf, as the inner mbuf
+	 * holds the original encrypted packet.
+	 */
+	return mbuf;
+}
+
 static __rte_always_inline struct rte_mbuf *
 nix_sec_meta_to_mbuf_sc(uint64_t cq_w1, uint64_t cq_w5, const uint64_t sa_base,
 			uintptr_t laddr, uint8_t *loff, struct rte_mbuf *mbuf,
@@ -422,14 +457,18 @@ nix_sec_meta_to_mbuf_sc(uint64_t cq_w1, uint64_t cq_w5, const uint64_t sa_base,
 	if (!(cq_w1 & BIT(11)))
 		return mbuf;
 
-	inner = (struct rte_mbuf *)(rte_be_to_cpu_64(hdr->wqe_ptr) -
-				    sizeof(struct rte_mbuf));
+	if (flags & NIX_RX_REAS_F && hdr->w0.pkt_fmt == ROC_IE_OT_SA_PKT_FMT_FULL) {
+		inner = nix_sec_oop_process(hdr, mbuf, &mbuf_init);
+	} else {
+		inner = (struct rte_mbuf *)(rte_be_to_cpu_64(hdr->wqe_ptr) -
+					    sizeof(struct rte_mbuf));
 
-	/* Store meta in lmtline to free
-	 * Assume all meta's from same aura.
-	 */
-	*(uint64_t *)(laddr + (*loff << 3)) = (uint64_t)mbuf;
-	*loff = *loff + 1;
+		/* Store meta in lmtline to free
+		 * Assume all meta's from same aura.
+		 */
+		*(uint64_t *)(laddr + (*loff << 3)) = (uint64_t)mbuf;
+		*loff = *loff + 1;
+	}
 
 	/* Get SPI from CPT_PARSE_S's cookie (already swapped) */
 	w0 = hdr->w0.u64;
@@ -471,11 +510,13 @@ nix_sec_meta_to_mbuf_sc(uint64_t cq_w1, uint64_t cq_w5, const uint64_t sa_base,
 			  & 0xFF) << 1 : RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	}
 
-	/* Mark meta mbuf as put */
-	RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 0);
+	if (!(flags & NIX_RX_REAS_F) || hdr->w0.pkt_fmt != ROC_IE_OT_SA_PKT_FMT_FULL) {
+		/* Mark meta mbuf as put */
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 0);
 
-	/* Mark inner mbuf as get */
-	RTE_MEMPOOL_CHECK_COOKIES(inner->pool, (void **)&inner, 1, 1);
+		/* Mark inner mbuf as get */
+		RTE_MEMPOOL_CHECK_COOKIES(inner->pool, (void **)&inner, 1, 1);
+	}
 
 	/* Skip reassembly processing when multi-seg is enabled */
 	if (!(flags & NIX_RX_MULTI_SEG_F) && (flags & NIX_RX_REAS_F) && hdr->w0.num_frags) {
@@ -522,7 +563,9 @@ nix_sec_meta_to_mbuf(uint64_t cq_w1, uint64_t cq_w5, uintptr_t inb_sa,
 	*rte_security_dynfield(inner) = (uint64_t)inb_priv->userdata;
 
 	/* Mark inner mbuf as get */
-	RTE_MEMPOOL_CHECK_COOKIES(inner->pool, (void **)&inner, 1, 1);
+	if (!(flags & NIX_RX_REAS_F) ||
+	    hdr->w0.pkt_fmt != ROC_IE_OT_SA_PKT_FMT_FULL)
+		RTE_MEMPOOL_CHECK_COOKIES(inner->pool, (void **)&inner, 1, 1);
 
 	if (!(flags & NIX_RX_MULTI_SEG_F) && flags & NIX_RX_REAS_F && hdr->w0.num_frags) {
 		if ((!(hdr->w0.err_sum) || roc_ie_ot_ucc_is_success(hdr->w3.uc_ccode)) &&
@@ -552,6 +595,19 @@ nix_sec_meta_to_mbuf(uint64_t cq_w1, uint64_t cq_w5, uintptr_t inb_sa,
 			nix_sec_attach_frags(hdr, inner, inb_priv, mbuf_init);
 			*ol_flags |= inner->ol_flags;
 		}
+	} else if (flags & NIX_RX_REAS_F) {
+		/* No fragmentation, but we may have to handle an OOP session */
+		if (hdr->w0.pkt_fmt == ROC_IE_OT_SA_PKT_FMT_FULL) {
+			uint64_t mbuf_init = 0;
+
+			/* Caller has already prepared to return the second-pass
+			 * mbuf, and "inner" is actually the outer mbuf.
+			 * Store the original buffer pointer in the dynfield.
+			 */
+			nix_sec_oop_process(hdr, inner, &mbuf_init);
+			/* Clear and update lower 16 bits of data offset */
+			*rearm = (*rearm & ~(BIT_ULL(16) - 1)) | mbuf_init;
+		}
 	}
 }
 #endif
@@ -628,6 +684,7 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf,
 	uint64_t cq_w1;
 	int64_t len;
 	uint64_t sg;
+	uintptr_t p;
 
 	cq_w1 = *(const uint64_t *)rx;
 	if (flags & NIX_RX_REAS_F)
@@ -635,7 +692,9 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf,
 	/* Use inner rx parse for meta pkts sg list */
 	if (cq_w1 & BIT(11) && flags & NIX_RX_OFFLOAD_SECURITY_F) {
 		const uint64_t *wqe = (const uint64_t *)(mbuf + 1);
-		rx = (const union nix_rx_parse_u *)(wqe + 1);
+
+		if (hdr->w0.pkt_fmt != ROC_IE_OT_SA_PKT_FMT_FULL)
+			rx = (const union nix_rx_parse_u *)(wqe + 1);
 	}
 
 	sg = *(const uint64_t *)(rx + 1);
@@ -761,6 +820,31 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf,
 			num_frags--;
 			frag_i++;
 			goto again;
+		} else if ((flags & NIX_RX_REAS_F) && (cq_w1 & BIT(11)) && !reas_success &&
+			   hdr->w0.pkt_fmt == ROC_IE_OT_SA_PKT_FMT_FULL) {
+			uintptr_t wqe = rte_be_to_cpu_64(hdr->wqe_ptr);
+
+			/* Process OOP packet inner buffer mseg. reas_success flag is
+			 * used here only to avoid looping.
+			 */
+			mbuf = ((struct rte_mbuf *)wqe) - 1;
+			rx = (const union nix_rx_parse_u *)(wqe + 8);
+			eol = ((const rte_iova_t *)(rx + 1) + ((rx->desc_sizem1 + 1) << 1));
+			sg = *(const uint64_t *)(rx + 1);
+			nb_segs = (sg >> 48) & 0x3;
+
+			len = mbuf->pkt_len;
+			p = (uintptr_t)&mbuf->rearm_data;
+			*(uint64_t *)p = rearm;
+			mbuf->data_len = (sg & 0xFFFF) -
+					 (flags & NIX_RX_OFFLOAD_TSTAMP_F ?
+					  CNXK_NIX_TIMESYNC_RX_OFFSET : 0);
+			head = mbuf;
+			head->nb_segs = nb_segs;
+			/* Using this flag to avoid looping in case of OOP */
+			reas_success = true;
+			goto again;
 		}
 
 		/* Update for last failure fragment */
@@ -899,6 +983,7 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
 	const uint64_t mbuf_init = rxq->mbuf_initializer;
 	const void *lookup_mem = rxq->lookup_mem;
 	const uint64_t data_off = rxq->data_off;
+	struct rte_mempool *meta_pool = NULL;
 	const uintptr_t desc = rxq->desc;
 	const uint64_t wdata = rxq->wdata;
 	const uint32_t qmask = rxq->qmask;
@@ -923,6 +1008,8 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
 		ROC_LMT_BASE_ID_GET(lbase, lmt_id);
 		laddr = lbase;
 		laddr += 8;
+		if (flags & NIX_RX_REAS_F)
+			meta_pool = (struct rte_mempool *)rxq->meta_pool;
 	}
 
 	while (packets < nb_pkts) {
@@ -943,6 +1030,11 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
 
 			cpth = ((uintptr_t)mbuf + (uint16_t)data_off);
 
+			/* Update mempool pointer for full mode pkt */
+			if ((flags & NIX_RX_REAS_F) && (cq_w1 & BIT(11)) &&
+			    !((*(uint64_t *)cpth) & BIT(15)))
+				mbuf->pool = meta_pool;
+
 			mbuf = nix_sec_meta_to_mbuf_sc(cq_w1, cq_w5, sa_base, laddr,
 						       &loff, mbuf, data_off,
 						       flags, mbuf_init);
@@ -1047,6 +1139,7 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 	uint64x2_t rearm3 = vdupq_n_u64(mbuf_initializer);
 	struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
 	uint8_t loff = 0, lnum = 0, shft = 0;
+	struct rte_mempool *meta_pool = NULL;
 	uint8x16_t f0, f1, f2, f3;
 	uint16_t lmt_id, d_off;
 	uint64_t lbase, laddr;
@@ -1099,6 +1192,9 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 			/* Get SA Base from lookup tbl using port_id */
 			port = mbuf_initializer >> 48;
 			sa_base = cnxk_nix_sa_base_get(port, lookup_mem);
+			if (flags & NIX_RX_REAS_F)
+				meta_pool = (struct rte_mempool *)cnxk_nix_inl_metapool_get(port,
+											    lookup_mem);
 
 			lbase = lmt_base;
 		} else {
@@ -1106,6 +1202,8 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 			d_off = rxq->data_off;
 			sa_base = rxq->sa_base;
 			lbase = rxq->lmt_base;
+			if (flags & NIX_RX_REAS_F)
+				meta_pool = (struct rte_mempool *)rxq->meta_pool;
 		}
 		sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
 		ROC_LMT_BASE_ID_GET(lbase, lmt_id);
@@ -1510,10 +1608,19 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 				uint16_t len = vget_lane_u16(lens, 0);
 
 				cpth0 = (uintptr_t)mbuf0 + d_off;
+
 				/* Free meta to aura */
-				NIX_PUSH_META_TO_FREE(mbuf0, laddr, &loff);
-				mbuf01 = vsetq_lane_u64(wqe, mbuf01, 0);
-				mbuf0 = (struct rte_mbuf *)wqe;
+				if (!(flags & NIX_RX_REAS_F) ||
+				    *(uint64_t *)cpth0 & BIT_ULL(15)) {
+					/* Free meta to aura */
+					NIX_PUSH_META_TO_FREE(mbuf0, laddr,
+							      &loff);
+					mbuf01 = vsetq_lane_u64(wqe, mbuf01, 0);
+					mbuf0 = (struct rte_mbuf *)wqe;
+				} else if (flags & NIX_RX_REAS_F) {
+					/* Update meta pool for full mode pkts */
+					mbuf0->pool = meta_pool;
+				}
 
 				/* Update pkt_len and data_len */
 				f0 = vsetq_lane_u16(len, f0, 2);
@@ -1535,10 +1642,18 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 				uint16_t len = vget_lane_u16(lens, 1);
 
 				cpth1 = (uintptr_t)mbuf1 + d_off;
+
 				/* Free meta to aura */
-				NIX_PUSH_META_TO_FREE(mbuf1, laddr, &loff);
-				mbuf01 = vsetq_lane_u64(wqe, mbuf01, 1);
-				mbuf1 = (struct rte_mbuf *)wqe;
+				if (!(flags & NIX_RX_REAS_F) ||
+				    *(uint64_t *)cpth1 & BIT_ULL(15)) {
+					NIX_PUSH_META_TO_FREE(mbuf1, laddr,
+							      &loff);
+					mbuf01 = vsetq_lane_u64(wqe, mbuf01, 1);
+					mbuf1 = (struct rte_mbuf *)wqe;
+				} else if (flags & NIX_RX_REAS_F) {
+					/* Update meta pool for full mode pkts */
+					mbuf1->pool = meta_pool;
+				}
 
 				/* Update pkt_len and data_len */
 				f1 = vsetq_lane_u16(len, f1, 2);
@@ -1559,10 +1674,18 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 				uint16_t len = vget_lane_u16(lens, 2);
 
 				cpth2 = (uintptr_t)mbuf2 + d_off;
+
 				/* Free meta to aura */
-				NIX_PUSH_META_TO_FREE(mbuf2, laddr, &loff);
-				mbuf23 = vsetq_lane_u64(wqe, mbuf23, 0);
-				mbuf2 = (struct rte_mbuf *)wqe;
+				if (!(flags & NIX_RX_REAS_F) ||
+				    *(uint64_t *)cpth2 & BIT_ULL(15)) {
+					NIX_PUSH_META_TO_FREE(mbuf2, laddr,
+							      &loff);
+					mbuf23 = vsetq_lane_u64(wqe, mbuf23, 0);
+					mbuf2 = (struct rte_mbuf *)wqe;
+				} else if (flags & NIX_RX_REAS_F) {
+					/* Update meta pool for full mode pkts */
+					mbuf2->pool = meta_pool;
+				}
 
 				/* Update pkt_len and data_len */
 				f2 = vsetq_lane_u16(len, f2, 2);
@@ -1583,10 +1706,18 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 				uint16_t len = vget_lane_u16(lens, 3);
 
 				cpth3 = (uintptr_t)mbuf3 + d_off;
+
 				/* Free meta to aura */
-				NIX_PUSH_META_TO_FREE(mbuf3, laddr, &loff);
-				mbuf23 = vsetq_lane_u64(wqe, mbuf23, 1);
-				mbuf3 = (struct rte_mbuf *)wqe;
+				if (!(flags & NIX_RX_REAS_F) ||
+				    *(uint64_t *)cpth3 & BIT_ULL(15)) {
+					NIX_PUSH_META_TO_FREE(mbuf3, laddr,
+							      &loff);
+					mbuf23 = vsetq_lane_u64(wqe, mbuf23, 1);
+					mbuf3 = (struct rte_mbuf *)wqe;
+				} else if (flags & NIX_RX_REAS_F) {
+					/* Update meta pool for full mode pkts */
+					mbuf3->pool = meta_pool;
+				}
 
 				/* Update pkt_len and data_len */
 				f3 = vsetq_lane_u16(len, f3, 2);
diff --git a/drivers/net/cnxk/cn10k_rxtx.h b/drivers/net/cnxk/cn10k_rxtx.h
index b4287e2864..aeffc4ac92 100644
--- a/drivers/net/cnxk/cn10k_rxtx.h
+++ b/drivers/net/cnxk/cn10k_rxtx.h
@@ -78,6 +78,7 @@ struct cn10k_eth_rxq {
 	uint64_t sa_base;
 	uint64_t lmt_base;
 	uint64_t meta_aura;
+	uintptr_t meta_pool;
 	uint16_t rq;
 	struct cnxk_timesync_info *tstamp;
 } __plt_cache_aligned;
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index ed531fb277..2b9ff11a6a 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -219,6 +219,9 @@ struct cnxk_eth_sec_sess {
 
 	/* Inbound session on inl dev */
 	bool inl_dev;
+
+	/* Out-of-place processing */
+	bool inb_oop;
 };
 
 TAILQ_HEAD(cnxk_eth_sec_sess_list, cnxk_eth_sec_sess);
@@ -246,6 +249,12 @@ struct cnxk_eth_dev_sec_inb {
 	/* DPTR for WRITE_SA microcode op */
 	void *sa_dptr;
 
+	/* Number of OOP sessions */
+	uint16_t nb_oop;
+
+	/* Reassembly enabled */
+	bool reass_en;
+
 	/* Lock to synchronize sa setup/release */
 	rte_spinlock_t lock;
 };
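
As a usage note: on the receive side, a packet decrypted by an ingress OOP
session carries the original (still encrypted) mbuf in the rte_security OOP
dynamic field. A minimal sketch follows (illustrative only, not part of this
patch; port_id and the burst size are application-provided assumptions):

    /* Sketch only: retrieve the original wire packet on Rx */
    struct rte_mbuf *pkts[32];
    uint16_t nb = rte_eth_rx_burst(port_id, 0, pkts, 32);

    for (uint16_t i = 0; i < nb; i++) {
        struct rte_mbuf *m = pkts[i];

        if (m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) {
            /* Original packet as received from the wire */
            struct rte_mbuf *orig = *rte_security_oop_dynfield(m);

            /* ... inspect or mirror orig as needed ... */
            rte_pktmbuf_free(orig);
        }
        rte_pktmbuf_free(m);
    }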