From patchwork Thu Jul  4 02:19:35 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 56055
X-Patchwork-Delegate: jerinj@marvell.com
To: Pavan Nikhilesh, "John McNamara", Marko Kovacevic
Date: Thu, 4 Jul 2019 07:49:35 +0530
Message-ID: <20190704021940.3023-2-pbhagavatula@marvell.com>
In-Reply-To: <20190704021940.3023-1-pbhagavatula@marvell.com>
References: <20190704021940.3023-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v4 1/5] event/octeontx2: add event eth Rx adapter
 support
List-Id: DPDK patches and discussions <dev@dpdk.org>

From: Pavan Nikhilesh

Add event eth Rx adapter capabilities, queue add and delete functions.

Signed-off-by: Pavan Nikhilesh
Signed-off-by: Jerin Jacob
---
 doc/guides/eventdevs/octeontx2.rst         |   6 +
 drivers/event/octeontx2/Makefile           |   2 +-
 drivers/event/octeontx2/meson.build        |   2 +-
 drivers/event/octeontx2/otx2_evdev.c       |   4 +
 drivers/event/octeontx2/otx2_evdev.h       |  15 ++
 drivers/event/octeontx2/otx2_evdev_adptr.c | 254 +++++++++++++++++++++
 6 files changed, 281 insertions(+), 2 deletions(-)

diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index e5624ba23..fad84cf42 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -32,6 +32,12 @@ Features of the OCTEON TX2 SSO PMD are:
   time granularity of 2.5us.
 - Up to 256 TIM rings aka event timer adapters.
 - Up to 8 rings traversed in parallel.
+- HW managed packets enqueued from ethdev to eventdev exposed through event eth
+  RX adapter.
+- N:1 ethernet device Rx queue to Event queue mapping.
+- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+  capability while maintaining receive packet order.
+- Full Rx/Tx offload support defined through ethdev queue config.

 Prerequisites and Compilation procedure
 ---------------------------------------
diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile
index aac238bff..8289f617a 100644
--- a/drivers/event/octeontx2/Makefile
+++ b/drivers/event/octeontx2/Makefile
@@ -43,7 +43,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_selftest.c
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_irq.c

 LDLIBS += -lrte_eal -lrte_bus_pci -lrte_pci -lrte_kvargs
-LDLIBS += -lrte_mempool -lrte_eventdev -lrte_mbuf
+LDLIBS += -lrte_mempool -lrte_eventdev -lrte_mbuf -lrte_ethdev
 LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2

 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build
index bdb5beed6..e94bc5944 100644
--- a/drivers/event/octeontx2/meson.build
+++ b/drivers/event/octeontx2/meson.build
@@ -26,4 +26,4 @@ foreach flag: extra_flags
 	endif
 endforeach

-deps += ['bus_pci', 'common_octeontx2', 'mempool_octeontx2']
+deps += ['bus_pci', 'common_octeontx2', 'mempool_octeontx2', 'pmd_octeontx2']
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 914869b6c..2956a572d 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -1123,6 +1123,10 @@ static struct rte_eventdev_ops otx2_sso_ops = {
 	.port_unlink = otx2_sso_port_unlink,
 	.timeout_ticks = otx2_sso_timeout_ticks,

+	.eth_rx_adapter_caps_get = otx2_sso_rx_adapter_caps_get,
+	.eth_rx_adapter_queue_add = otx2_sso_rx_adapter_queue_add,
+	.eth_rx_adapter_queue_del = otx2_sso_rx_adapter_queue_del,
+
 	.timer_adapter_caps_get = otx2_tim_caps_get,

 	.xstats_get = otx2_sso_xstats_get,
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index ba3aae5ba..8bee8c737 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -6,9 +6,12 @@
 #define __OTX2_EVDEV_H__

 #include
+#include
+#include

 #include "otx2_common.h"
 #include "otx2_dev.h"
+#include "otx2_ethdev.h"
 #include "otx2_mempool.h"

 #define EVENTDEV_NAME_OCTEONTX2_PMD otx2_eventdev
@@ -249,6 +252,18 @@ void sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data,
 		      uint32_t event_type);
 int sso_xae_reconfigure(struct rte_eventdev *event_dev);
 void sso_fastpath_fns_set(struct rte_eventdev *event_dev);
+
+int otx2_sso_rx_adapter_caps_get(const struct rte_eventdev *event_dev,
+				 const struct rte_eth_dev *eth_dev,
+				 uint32_t *caps);
+int otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev,
+				  const struct rte_eth_dev *eth_dev,
+				  int32_t rx_queue_id,
+				  const struct rte_event_eth_rx_adapter_queue_conf *queue_conf);
+int otx2_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
+				  const struct rte_eth_dev *eth_dev,
+				  int32_t rx_queue_id);
+
 /* Clean up API's */
 typedef void (*otx2_handle_event_t)(void *arg, struct rte_event ev);
 void ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id,
diff --git a/drivers/event/octeontx2/otx2_evdev_adptr.c b/drivers/event/octeontx2/otx2_evdev_adptr.c
index 810722f89..ce5621f37 100644
--- a/drivers/event/octeontx2/otx2_evdev_adptr.c
+++ b/drivers/event/octeontx2/otx2_evdev_adptr.c
@@ -4,6 +4,197 @@

 #include "otx2_evdev.h"

+int
+otx2_sso_rx_adapter_caps_get(const struct rte_eventdev *event_dev,
+			     const struct rte_eth_dev *eth_dev, uint32_t *caps)
+{
+	int rc;
+
+	RTE_SET_USED(event_dev);
+	rc = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
+	if (rc)
+		*caps = RTE_EVENT_ETH_RX_ADAPTER_SW_CAP;
+	else
+		*caps = RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT;
+
+	return 0;
+}
+
+static inline int
+sso_rxq_enable(struct otx2_eth_dev *dev, uint16_t qid, uint8_t tt, uint8_t ggrp,
+	       uint16_t eth_port_id)
+{
+	struct otx2_mbox *mbox = dev->mbox;
+	struct nix_aq_enq_req *aq;
+	int rc;
+
+	aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+	aq->qidx = qid;
+	aq->ctype = NIX_AQ_CTYPE_CQ;
+	aq->op = NIX_AQ_INSTOP_WRITE;
+
+	aq->cq.ena = 0;
+	aq->cq.caching = 0;
+
+	otx2_mbox_memset(&aq->cq_mask, 0, sizeof(struct nix_cq_ctx_s));
+	aq->cq_mask.ena = ~(aq->cq_mask.ena);
+	aq->cq_mask.caching = ~(aq->cq_mask.caching);
+
+	rc = otx2_mbox_process(mbox);
+	if (rc < 0) {
+		otx2_err("Failed to disable cq context");
+		goto fail;
+	}
+
+	aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+	aq->qidx = qid;
+	aq->ctype = NIX_AQ_CTYPE_RQ;
+	aq->op = NIX_AQ_INSTOP_WRITE;
+
+	aq->rq.sso_ena = 1;
+	aq->rq.sso_tt = tt;
+	aq->rq.sso_grp = ggrp;
+	aq->rq.ena_wqwd = 1;
+	/* Mbuf Header generation:
+	 * > FIRST_SKIP is a superset of WQE_SKIP; don't modify first skip as
+	 *   it already has data related to mbuf size, headroom and private
+	 *   area.
+	 * > Using WQE_SKIP we can directly assign
+	 *   mbuf = wqe - sizeof(struct mbuf);
+	 *   so that the mbuf header will not have unpredictable values while
+	 *   headroom and private data start at the beginning of wqe_data.
+	 */
+	aq->rq.wqe_skip = 1;
+	aq->rq.wqe_caching = 1;
+	aq->rq.spb_ena = 0;
+	aq->rq.flow_tagw = 20; /* 20-bits */
+
+	/* Flow Tag calculation:
+	 *
+	 * rq_tag <31:24> = good/bad_tag<8:0>;
+	 * rq_tag <23:0>  = [ltag]
+	 *
+	 * flow_tag_mask<31:0> = (1 << flow_tagw) - 1; <31:20>
+	 * tag<31:0> = (~flow_tag_mask & rq_tag) | (flow_tag_mask & flow_tag);
+	 *
+	 * Setup:
+	 * ltag<23:0> = (eth_port_id & 0xF) << 20;
+	 * good/bad_tag<8:0> =
+	 *	((eth_port_id >> 4) & 0xF) | (RTE_EVENT_TYPE_ETHDEV << 4);
+	 *
+	 * TAG<31:0> on getwork = <31:28>(RTE_EVENT_TYPE_ETHDEV) |
+	 *			  <27:20> (eth_port_id) | <20:0> [TAG]
+	 */
+
+	aq->rq.ltag = (eth_port_id & 0xF) << 20;
+	aq->rq.good_utag = ((eth_port_id >> 4) & 0xF) |
+			   (RTE_EVENT_TYPE_ETHDEV << 4);
+	aq->rq.bad_utag = aq->rq.good_utag;
+
+	aq->rq.ena = 0;		 /* Don't enable RQ yet */
+	aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
+	aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
+
+	otx2_mbox_memset(&aq->rq_mask, 0, sizeof(struct nix_rq_ctx_s));
+	/* Mask the bits to write. */
+	aq->rq_mask.sso_ena = ~(aq->rq_mask.sso_ena);
+	aq->rq_mask.sso_tt = ~(aq->rq_mask.sso_tt);
+	aq->rq_mask.sso_grp = ~(aq->rq_mask.sso_grp);
+	aq->rq_mask.ena_wqwd = ~(aq->rq_mask.ena_wqwd);
+	aq->rq_mask.wqe_skip = ~(aq->rq_mask.wqe_skip);
+	aq->rq_mask.wqe_caching = ~(aq->rq_mask.wqe_caching);
+	aq->rq_mask.spb_ena = ~(aq->rq_mask.spb_ena);
+	aq->rq_mask.flow_tagw = ~(aq->rq_mask.flow_tagw);
+	aq->rq_mask.ltag = ~(aq->rq_mask.ltag);
+	aq->rq_mask.good_utag = ~(aq->rq_mask.good_utag);
+	aq->rq_mask.bad_utag = ~(aq->rq_mask.bad_utag);
+	aq->rq_mask.ena = ~(aq->rq_mask.ena);
+	aq->rq_mask.pb_caching = ~(aq->rq_mask.pb_caching);
+	aq->rq_mask.xqe_imm_size = ~(aq->rq_mask.xqe_imm_size);
+
+	rc = otx2_mbox_process(mbox);
+	if (rc < 0) {
+		otx2_err("Failed to init rx adapter context");
+		goto fail;
+	}
+
+	return 0;
+fail:
+	return rc;
+}
+
+static inline int
+sso_rxq_disable(struct otx2_eth_dev *dev, uint16_t qid)
+{
+	struct otx2_mbox *mbox = dev->mbox;
+	struct nix_aq_enq_req *aq;
+	int rc;
+
+	aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+	aq->qidx = qid;
+	aq->ctype = NIX_AQ_CTYPE_CQ;
+	aq->op = NIX_AQ_INSTOP_INIT;
+
+	aq->cq.ena = 1;
+	aq->cq.caching = 1;
+
+	otx2_mbox_memset(&aq->cq_mask, 0, sizeof(struct nix_cq_ctx_s));
+	aq->cq_mask.ena = ~(aq->cq_mask.ena);
+	aq->cq_mask.caching = ~(aq->cq_mask.caching);
+
+	rc = otx2_mbox_process(mbox);
+	if (rc < 0) {
+		otx2_err("Failed to init cq context");
+		goto fail;
+	}
+
+	aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+	aq->qidx = qid;
+	aq->ctype = NIX_AQ_CTYPE_RQ;
+	aq->op = NIX_AQ_INSTOP_WRITE;
+
+	aq->rq.sso_ena = 0;
+	aq->rq.sso_tt = SSO_TT_UNTAGGED;
+	aq->rq.sso_grp = 0;
+	aq->rq.ena_wqwd = 0;
+	aq->rq.wqe_caching = 0;
+	aq->rq.wqe_skip = 0;
+	aq->rq.spb_ena = 0;
+	aq->rq.flow_tagw = 0x20;
+	aq->rq.ltag = 0;
+	aq->rq.good_utag = 0;
+	aq->rq.bad_utag = 0;
+	aq->rq.ena = 1;
+	aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
+	aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
+
+	otx2_mbox_memset(&aq->rq_mask, 0, sizeof(struct nix_rq_ctx_s));
+	/* Mask the bits to write. */
+	aq->rq_mask.sso_ena = ~(aq->rq_mask.sso_ena);
+	aq->rq_mask.sso_tt = ~(aq->rq_mask.sso_tt);
+	aq->rq_mask.sso_grp = ~(aq->rq_mask.sso_grp);
+	aq->rq_mask.ena_wqwd = ~(aq->rq_mask.ena_wqwd);
+	aq->rq_mask.wqe_caching = ~(aq->rq_mask.wqe_caching);
+	aq->rq_mask.wqe_skip = ~(aq->rq_mask.wqe_skip);
+	aq->rq_mask.spb_ena = ~(aq->rq_mask.spb_ena);
+	aq->rq_mask.flow_tagw = ~(aq->rq_mask.flow_tagw);
+	aq->rq_mask.ltag = ~(aq->rq_mask.ltag);
+	aq->rq_mask.good_utag = ~(aq->rq_mask.good_utag);
+	aq->rq_mask.bad_utag = ~(aq->rq_mask.bad_utag);
+	aq->rq_mask.ena = ~(aq->rq_mask.ena);
+	aq->rq_mask.pb_caching = ~(aq->rq_mask.pb_caching);
+	aq->rq_mask.xqe_imm_size = ~(aq->rq_mask.xqe_imm_size);
+
+	rc = otx2_mbox_process(mbox);
+	if (rc < 0) {
+		otx2_err("Failed to clear rx adapter context");
+		goto fail;
+	}
+
+	return 0;
+fail:
+	return rc;
+}
+
 void
 sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data, uint32_t event_type)
 {
@@ -17,3 +208,66 @@ sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data, uint32_t event_type)
 		break;
 	}
 }
+
+int
+otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev,
+			      const struct rte_eth_dev *eth_dev,
+			      int32_t rx_queue_id,
+			      const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
+{
+	struct otx2_eth_dev *otx2_eth_dev = eth_dev->data->dev_private;
+	uint16_t port = eth_dev->data->port_id;
+	int i, rc;
+
+	RTE_SET_USED(event_dev);
+	rc = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
+	if (rc)
+		return -EINVAL;
+
+	if (rx_queue_id < 0) {
+		for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+			rc |= sso_rxq_enable(otx2_eth_dev, i,
+					     queue_conf->ev.sched_type,
+					     queue_conf->ev.queue_id, port);
+		}
+	} else {
+		rc |= sso_rxq_enable(otx2_eth_dev, (uint16_t)rx_queue_id,
+				     queue_conf->ev.sched_type,
+				     queue_conf->ev.queue_id, port);
+	}
+
+	if (rc < 0) {
+		otx2_err("Failed to configure Rx adapter port=%d, q=%d", port,
+			 queue_conf->ev.queue_id);
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+otx2_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
+			      const struct rte_eth_dev *eth_dev,
+			      int32_t rx_queue_id)
+{
+	struct otx2_eth_dev *dev = eth_dev->data->dev_private;
+	int i, rc;
+
+	RTE_SET_USED(event_dev);
+	rc = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
+	if (rc)
+		return -EINVAL;
+
+	if (rx_queue_id < 0) {
+		for (i = 0; i < eth_dev->data->nb_rx_queues; i++)
+			rc = sso_rxq_disable(dev, i);
+	} else {
+		rc = sso_rxq_disable(dev, (uint16_t)rx_queue_id);
+	}
+
+	if (rc < 0)
+		otx2_err("Failed to clear Rx adapter config port=%d, q=%d",
+			 eth_dev->data->port_id, rx_queue_id);
+
+	return rc;
+}

From patchwork Thu Jul  4 02:19:36 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 56056
X-Patchwork-Delegate: jerinj@marvell.com
Date: Thu, 4 Jul 2019 07:49:36 +0530
Message-ID: <20190704021940.3023-3-pbhagavatula@marvell.com>
In-Reply-To: <20190704021940.3023-1-pbhagavatula@marvell.com>
References: <20190704021940.3023-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v4 2/5] event/octeontx2: resize SSO inflight event
 buffers

From: Pavan Nikhilesh

Resize the SSO internal in-flight buffer count based on the size of the
mempools backing the Rx queues connected to event queues.
Signed-off-by: Pavan Nikhilesh
---
 drivers/event/octeontx2/otx2_evdev.h       |  2 ++
 drivers/event/octeontx2/otx2_evdev_adptr.c | 34 +++++++++++++++++++++-
 2 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 8bee8c737..107aa79d4 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -132,7 +132,9 @@ struct otx2_sso_evdev {
 	uint64_t nb_xaq_cfg;
 	rte_iova_t fc_iova;
 	struct rte_mempool *xaq_pool;
+	uint16_t rx_adptr_pool_cnt;
 	uint32_t adptr_xae_cnt;
+	uint64_t *rx_adptr_pools;
 	/* Dev args */
 	uint8_t dual_ws;
 	uint8_t selftest;
diff --git a/drivers/event/octeontx2/otx2_evdev_adptr.c b/drivers/event/octeontx2/otx2_evdev_adptr.c
index ce5621f37..12469fade 100644
--- a/drivers/event/octeontx2/otx2_evdev_adptr.c
+++ b/drivers/event/octeontx2/otx2_evdev_adptr.c
@@ -199,6 +199,29 @@ void
 sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data, uint32_t event_type)
 {
 	switch (event_type) {
+	case RTE_EVENT_TYPE_ETHDEV:
+	{
+		struct otx2_eth_rxq *rxq = data;
+		int i, match = false;
+
+		for (i = 0; i < dev->rx_adptr_pool_cnt; i++) {
+			if ((uint64_t)rxq->pool == dev->rx_adptr_pools[i])
+				match = true;
+		}
+
+		if (!match) {
+			dev->rx_adptr_pool_cnt++;
+			dev->rx_adptr_pools = rte_realloc(dev->rx_adptr_pools,
+							  sizeof(uint64_t) *
+							  dev->rx_adptr_pool_cnt,
+							  0);
+			dev->rx_adptr_pools[dev->rx_adptr_pool_cnt - 1] =
+				(uint64_t)rxq->pool;
+
+			dev->adptr_xae_cnt += rxq->pool->size;
+		}
+		break;
+	}
 	case RTE_EVENT_TYPE_TIMER:
 	{
 		dev->adptr_xae_cnt += (*(uint64_t *)data);
@@ -216,21 +239,30 @@ otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev,
 			      const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
 {
 	struct otx2_eth_dev *otx2_eth_dev = eth_dev->data->dev_private;
+	struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
 	uint16_t port = eth_dev->data->port_id;
+	struct otx2_eth_rxq *rxq;
 	int i, rc;

-	RTE_SET_USED(event_dev);
 	rc =
strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
 	if (rc)
 		return -EINVAL;

 	if (rx_queue_id < 0) {
 		for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+			rxq = eth_dev->data->rx_queues[i];
+			sso_updt_xae_cnt(dev, rxq, RTE_EVENT_TYPE_ETHDEV);
+			rc = sso_xae_reconfigure((struct rte_eventdev *)
+						 (uintptr_t)event_dev);
 			rc |= sso_rxq_enable(otx2_eth_dev, i,
 					     queue_conf->ev.sched_type,
 					     queue_conf->ev.queue_id, port);
 		}
 	} else {
+		rxq = eth_dev->data->rx_queues[rx_queue_id];
+		sso_updt_xae_cnt(dev, rxq, RTE_EVENT_TYPE_ETHDEV);
+		rc = sso_xae_reconfigure((struct rte_eventdev *)
+					 (uintptr_t)event_dev);
 		rc |= sso_rxq_enable(otx2_eth_dev, (uint16_t)rx_queue_id,
 				     queue_conf->ev.sched_type,
 				     queue_conf->ev.queue_id, port);

From patchwork Thu Jul  4 02:19:37 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 56057
X-Patchwork-Delegate: jerinj@marvell.com
CC: Nithin Dabilpuram
Date: Thu, 4 Jul 2019 07:49:37 +0530
Message-ID: <20190704021940.3023-4-pbhagavatula@marvell.com>
In-Reply-To: <20190704021940.3023-1-pbhagavatula@marvell.com>
References: <20190704021940.3023-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v4 3/5] event/octeontx2: add event eth Rx adapter
 fastpath ops

From: Pavan Nikhilesh

Add support for event eth Rx adapter fastpath operations.
Signed-off-by: Jerin Jacob
Signed-off-by: Pavan Nikhilesh
Signed-off-by: Nithin Dabilpuram
---
 drivers/event/octeontx2/otx2_evdev.c       | 308 ++++++++++++++++++++-
 drivers/event/octeontx2/otx2_evdev.h       | 108 ++++++--
 drivers/event/octeontx2/otx2_evdev_adptr.c |  45 +++
 drivers/event/octeontx2/otx2_worker.c      | 178 ++++++++----
 drivers/event/octeontx2/otx2_worker.h      |  43 ++-
 drivers/event/octeontx2/otx2_worker_dual.c | 220 ++++++++++-----
 drivers/event/octeontx2/otx2_worker_dual.h |  22 +-
 7 files changed, 767 insertions(+), 157 deletions(-)

diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 2956a572d..f45fc008d 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -43,17 +43,199 @@ void
 sso_fastpath_fns_set(struct rte_eventdev *event_dev)
 {
 	struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+	/* Single WS modes */
+	const event_dequeue_t ssogws_deq[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)				\
+	[f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_ ##name,
+SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
+#undef R
+	};
+
+	const event_dequeue_burst_t ssogws_deq_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)				\
+	[f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_burst_ ##name,
+SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
+#undef R
+	};
+
+	const event_dequeue_t ssogws_deq_timeout[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)				\
+	[f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_timeout_ ##name,
+SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
+#undef R
+	};
+
+	const event_dequeue_burst_t
+		ssogws_deq_timeout_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)				\
+	[f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_timeout_burst_ ##name,
+SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
+#undef R
+	};
+
+	const event_dequeue_t ssogws_deq_seg[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)				\
+	[f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_seg_ ##name,
+SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
+#undef R
+	};
+
+	const event_dequeue_burst_t ssogws_deq_seg_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)				\
+	[f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_seg_burst_ ##name,
+SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
+#undef R
+	};
+
+	const event_dequeue_t ssogws_deq_seg_timeout[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)				\
+	[f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_seg_timeout_ ##name,
+SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
+#undef R
+	};
+
+	const event_dequeue_burst_t
+		ssogws_deq_seg_timeout_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)				\
+	[f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_seg_timeout_burst_ ##name,
+SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
+#undef R
+	};
+
+	/* Dual WS modes */
+	const event_dequeue_t ssogws_dual_deq[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)				\
+	[f5][f4][f3][f2][f1][f0] = otx2_ssogws_dual_deq_ ##name,
+SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
+#undef R
+	};
+
+	const event_dequeue_burst_t ssogws_dual_deq_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)				\
+	[f5][f4][f3][f2][f1][f0] = otx2_ssogws_dual_deq_burst_ ##name,
+SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
+#undef R
+	};
+
+	const event_dequeue_t ssogws_dual_deq_timeout[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)				\
+	[f5][f4][f3][f2][f1][f0] = otx2_ssogws_dual_deq_timeout_ ##name,
+SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
+#undef R
+	};
+
+	const event_dequeue_burst_t
+		ssogws_dual_deq_timeout_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)				\
+	[f5][f4][f3][f2][f1][f0] = otx2_ssogws_dual_deq_timeout_burst_ ##name,
+SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
+#undef R
+	};
+
+	const event_dequeue_t ssogws_dual_deq_seg[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)				\
+	[f5][f4][f3][f2][f1][f0] = otx2_ssogws_dual_deq_seg_ ##name,
+SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
+#undef R
+	};
+
+	const event_dequeue_burst_t
+		ssogws_dual_deq_seg_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)				\
+	[f5][f4][f3][f2][f1][f0] = otx2_ssogws_dual_deq_seg_burst_ ##name,
+SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
+#undef R
+	};
+
+	const event_dequeue_t ssogws_dual_deq_seg_timeout[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)				\
+	[f5][f4][f3][f2][f1][f0] = otx2_ssogws_dual_deq_seg_timeout_ ##name,
+SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
+#undef R
+	};
+
+	const event_dequeue_burst_t
+		ssogws_dual_deq_seg_timeout_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)				\
+	[f5][f4][f3][f2][f1][f0] =					\
+		otx2_ssogws_dual_deq_seg_timeout_burst_ ##name,
+SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
+#undef R
+	};

 	event_dev->enqueue = otx2_ssogws_enq;
 	event_dev->enqueue_burst = otx2_ssogws_enq_burst;
 	event_dev->enqueue_new_burst = otx2_ssogws_enq_new_burst;
 	event_dev->enqueue_forward_burst = otx2_ssogws_enq_fwd_burst;
-
-	event_dev->dequeue = otx2_ssogws_deq;
-	event_dev->dequeue_burst = otx2_ssogws_deq_burst;
-	if (dev->is_timeout_deq) {
-		event_dev->dequeue = otx2_ssogws_deq_timeout;
-		event_dev->dequeue_burst = otx2_ssogws_deq_timeout_burst;
+	if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
+		event_dev->dequeue = ssogws_deq_seg
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+		event_dev->dequeue_burst = ssogws_deq_seg_burst
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+		if (dev->is_timeout_deq) {
+			event_dev->dequeue = ssogws_deq_seg_timeout
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+			event_dev->dequeue_burst =
+				ssogws_deq_seg_timeout_burst
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+		}
+	} else {
+		event_dev->dequeue = ssogws_deq
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+		event_dev->dequeue_burst = ssogws_deq_burst
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+		if (dev->is_timeout_deq) {
+			event_dev->dequeue = ssogws_deq_timeout
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+			event_dev->dequeue_burst =
+				ssogws_deq_timeout_burst
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+		}
 	}

 	if (dev->dual_ws) {
@@ -63,12 +245,112 @@ sso_fastpath_fns_set(struct rte_eventdev *event_dev)
 			otx2_ssogws_dual_enq_new_burst;
 		event_dev->enqueue_forward_burst =
 			otx2_ssogws_dual_enq_fwd_burst;
-		event_dev->dequeue = otx2_ssogws_dual_deq;
-		event_dev->dequeue_burst = otx2_ssogws_dual_deq_burst;
-		if (dev->is_timeout_deq) {
-			event_dev->dequeue = otx2_ssogws_dual_deq_timeout;
-			event_dev->dequeue_burst =
-				otx2_ssogws_dual_deq_timeout_burst;
+
+		if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
+			event_dev->dequeue = ssogws_dual_deq_seg
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+			event_dev->dequeue_burst = ssogws_dual_deq_seg_burst
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+			if (dev->is_timeout_deq) {
+				event_dev->dequeue =
+					ssogws_dual_deq_seg_timeout
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+				event_dev->dequeue_burst =
+					ssogws_dual_deq_seg_timeout_burst
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+			}
+		} else {
+			event_dev->dequeue = ssogws_dual_deq
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+			event_dev->dequeue_burst = ssogws_dual_deq_burst
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+			if (dev->is_timeout_deq) {
+				event_dev->dequeue =
+					ssogws_dual_deq_timeout
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+				event_dev->dequeue_burst =
+					ssogws_dual_deq_timeout_burst
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+			}
+		}
 	}
 	rte_mb();
@@ -1126,6 +1408,8 @@ static struct rte_eventdev_ops otx2_sso_ops = {
 	.eth_rx_adapter_caps_get = otx2_sso_rx_adapter_caps_get,
.eth_rx_adapter_queue_add = otx2_sso_rx_adapter_queue_add, .eth_rx_adapter_queue_del = otx2_sso_rx_adapter_queue_del, + .eth_rx_adapter_start = otx2_sso_rx_adapter_start, + .eth_rx_adapter_stop = otx2_sso_rx_adapter_stop, .timer_adapter_caps_get = otx2_tim_caps_get, diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h index 107aa79d4..a81a8be6f 100644 --- a/drivers/event/octeontx2/otx2_evdev.h +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -132,6 +132,7 @@ struct otx2_sso_evdev { uint64_t nb_xaq_cfg; rte_iova_t fc_iova; struct rte_mempool *xaq_pool; + uint64_t rx_offloads; uint16_t rx_adptr_pool_cnt; uint32_t adptr_xae_cnt; uint64_t *rx_adptr_pools; @@ -166,6 +167,7 @@ struct otx2_ssogws { /* Get Work Fastpath data */ OTX2_SSOGWS_OPS; uint8_t swtag_req; + void *lookup_mem; uint8_t port; /* Add Work Fastpath data */ uint64_t xaq_lmt __rte_cache_aligned; @@ -182,6 +184,7 @@ struct otx2_ssogws_dual { struct otx2_ssogws_state ws_state[2]; /* Ping and Pong */ uint8_t swtag_req; uint8_t vws; /* Ping pong bit */ + void *lookup_mem; uint8_t port; /* Add Work Fastpath data */ uint64_t xaq_lmt __rte_cache_aligned; @@ -195,6 +198,28 @@ sso_pmd_priv(const struct rte_eventdev *event_dev) return event_dev->data->dev_private; } +static const union mbuf_initializer mbuf_init = { + .fields = { + .data_off = RTE_PKTMBUF_HEADROOM, + .refcnt = 1, + .nb_segs = 1, + .port = 0 + } +}; + +static __rte_always_inline void +otx2_wqe_to_mbuf(uint64_t get_work1, const uint64_t mbuf, uint8_t port_id, + const uint32_t tag, const uint32_t flags, + const void * const lookup_mem) +{ + struct nix_wqe_hdr_s *wqe = (struct nix_wqe_hdr_s *)get_work1; + + otx2_nix_cqe_to_mbuf((struct nix_cqe_hdr_s *)wqe, tag, + (struct rte_mbuf *)mbuf, lookup_mem, + mbuf_init.value | (uint64_t)port_id << 48, flags); + +} + static inline int parse_kvargs_flag(const char *key, const char *value, void *opaque) { @@ -213,6 +238,9 @@ parse_kvargs_value(const char *key, const char *value, 
void *opaque) return 0; } +#define SSO_RX_ADPTR_ENQ_FASTPATH_FUNC NIX_RX_FASTPATH_MODES +#define SSO_TX_ADPTR_ENQ_FASTPATH_FUNC NIX_TX_FASTPATH_MODES + /* Single WS API's */ uint16_t otx2_ssogws_enq(void *port, const struct rte_event *ev); uint16_t otx2_ssogws_enq_burst(void *port, const struct rte_event ev[], @@ -222,15 +250,6 @@ uint16_t otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[], uint16_t otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[], uint16_t nb_events); -uint16_t otx2_ssogws_deq(void *port, struct rte_event *ev, - uint64_t timeout_ticks); -uint16_t otx2_ssogws_deq_burst(void *port, struct rte_event ev[], - uint16_t nb_events, uint64_t timeout_ticks); -uint16_t otx2_ssogws_deq_timeout(void *port, struct rte_event *ev, - uint64_t timeout_ticks); -uint16_t otx2_ssogws_deq_timeout_burst(void *port, struct rte_event ev[], - uint16_t nb_events, - uint64_t timeout_ticks); /* Dual WS API's */ uint16_t otx2_ssogws_dual_enq(void *port, const struct rte_event *ev); uint16_t otx2_ssogws_dual_enq_burst(void *port, const struct rte_event ev[], @@ -240,15 +259,63 @@ uint16_t otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[], uint16_t otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[], uint16_t nb_events); -uint16_t otx2_ssogws_dual_deq(void *port, struct rte_event *ev, - uint64_t timeout_ticks); -uint16_t otx2_ssogws_dual_deq_burst(void *port, struct rte_event ev[], - uint16_t nb_events, uint64_t timeout_ticks); -uint16_t otx2_ssogws_dual_deq_timeout(void *port, struct rte_event *ev, - uint64_t timeout_ticks); -uint16_t otx2_ssogws_dual_deq_timeout_burst(void *port, struct rte_event ev[], - uint16_t nb_events, - uint64_t timeout_ticks); +/* Auto generated API's */ +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ +uint16_t otx2_ssogws_deq_ ##name(void *port, struct rte_event *ev, \ + uint64_t timeout_ticks); \ +uint16_t otx2_ssogws_deq_burst_ ##name(void *port, struct rte_event ev[], \ + 
uint16_t nb_events, \ + uint64_t timeout_ticks); \ +uint16_t otx2_ssogws_deq_timeout_ ##name(void *port, \ + struct rte_event *ev, \ + uint64_t timeout_ticks); \ +uint16_t otx2_ssogws_deq_timeout_burst_ ##name(void *port, \ + struct rte_event ev[], \ + uint16_t nb_events, \ + uint64_t timeout_ticks); \ +uint16_t otx2_ssogws_deq_seg_ ##name(void *port, struct rte_event *ev, \ + uint64_t timeout_ticks); \ +uint16_t otx2_ssogws_deq_seg_burst_ ##name(void *port, \ + struct rte_event ev[], \ + uint16_t nb_events, \ + uint64_t timeout_ticks); \ +uint16_t otx2_ssogws_deq_seg_timeout_ ##name(void *port, \ + struct rte_event *ev, \ + uint64_t timeout_ticks); \ +uint16_t otx2_ssogws_deq_seg_timeout_burst_ ##name(void *port, \ + struct rte_event ev[], \ + uint16_t nb_events, \ + uint64_t timeout_ticks); \ + \ +uint16_t otx2_ssogws_dual_deq_ ##name(void *port, struct rte_event *ev, \ + uint64_t timeout_ticks); \ +uint16_t otx2_ssogws_dual_deq_burst_ ##name(void *port, \ + struct rte_event ev[], \ + uint16_t nb_events, \ + uint64_t timeout_ticks); \ +uint16_t otx2_ssogws_dual_deq_timeout_ ##name(void *port, \ + struct rte_event *ev, \ + uint64_t timeout_ticks); \ +uint16_t otx2_ssogws_dual_deq_timeout_burst_ ##name(void *port, \ + struct rte_event ev[], \ + uint16_t nb_events, \ + uint64_t timeout_ticks); \ +uint16_t otx2_ssogws_dual_deq_seg_ ##name(void *port, struct rte_event *ev, \ + uint64_t timeout_ticks); \ +uint16_t otx2_ssogws_dual_deq_seg_burst_ ##name(void *port, \ + struct rte_event ev[], \ + uint16_t nb_events, \ + uint64_t timeout_ticks); \ +uint16_t otx2_ssogws_dual_deq_seg_timeout_ ##name(void *port, \ + struct rte_event *ev, \ + uint64_t timeout_ticks); \ +uint16_t otx2_ssogws_dual_deq_seg_timeout_burst_ ##name(void *port, \ + struct rte_event ev[], \ + uint16_t nb_events, \ + uint64_t timeout_ticks);\ + +SSO_RX_ADPTR_ENQ_FASTPATH_FUNC +#undef R void sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data, uint32_t event_type); @@ -265,7 +332,10 @@ int 
otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev, int otx2_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev, const struct rte_eth_dev *eth_dev, int32_t rx_queue_id); - +int otx2_sso_rx_adapter_start(const struct rte_eventdev *event_dev, + const struct rte_eth_dev *eth_dev); +int otx2_sso_rx_adapter_stop(const struct rte_eventdev *event_dev, + const struct rte_eth_dev *eth_dev); /* Clean up API's */ typedef void (*otx2_handle_event_t)(void *arg, struct rte_event ev); void ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id, diff --git a/drivers/event/octeontx2/otx2_evdev_adptr.c b/drivers/event/octeontx2/otx2_evdev_adptr.c index 12469fade..e605fd1d4 100644 --- a/drivers/event/octeontx2/otx2_evdev_adptr.c +++ b/drivers/event/octeontx2/otx2_evdev_adptr.c @@ -232,6 +232,25 @@ sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data, uint32_t event_type) } } +static inline void +sso_updt_lookup_mem(const struct rte_eventdev *event_dev, void *lookup_mem) +{ + struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); + int i; + + for (i = 0; i < dev->nb_event_ports; i++) { + if (dev->dual_ws) { + struct otx2_ssogws_dual *ws = event_dev->data->ports[i]; + + ws->lookup_mem = lookup_mem; + } else { + struct otx2_ssogws *ws = event_dev->data->ports[i]; + + ws->lookup_mem = lookup_mem; + } + } +} + int otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev, const struct rte_eth_dev *eth_dev, @@ -258,6 +277,8 @@ otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev, queue_conf->ev.sched_type, queue_conf->ev.queue_id, port); } + rxq = eth_dev->data->rx_queues[0]; + sso_updt_lookup_mem(event_dev, rxq->lookup_mem); } else { rxq = eth_dev->data->rx_queues[rx_queue_id]; sso_updt_xae_cnt(dev, rxq, RTE_EVENT_TYPE_ETHDEV); @@ -266,6 +287,7 @@ otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev, rc |= sso_rxq_enable(otx2_eth_dev, (uint16_t)rx_queue_id, queue_conf->ev.sched_type, queue_conf->ev.queue_id, port); + 
sso_updt_lookup_mem(event_dev, rxq->lookup_mem); } if (rc < 0) { @@ -274,6 +296,9 @@ otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev, return rc; } + dev->rx_offloads |= otx2_eth_dev->rx_offload_flags; + sso_fastpath_fns_set((struct rte_eventdev *)(uintptr_t)event_dev); + return 0; } @@ -303,3 +328,23 @@ otx2_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev, return rc; } + +int +otx2_sso_rx_adapter_start(const struct rte_eventdev *event_dev, + const struct rte_eth_dev *eth_dev) +{ + RTE_SET_USED(event_dev); + RTE_SET_USED(eth_dev); + + return 0; +} + +int +otx2_sso_rx_adapter_stop(const struct rte_eventdev *event_dev, + const struct rte_eth_dev *eth_dev) +{ + RTE_SET_USED(event_dev); + RTE_SET_USED(eth_dev); + + return 0; +} diff --git a/drivers/event/octeontx2/otx2_worker.c b/drivers/event/octeontx2/otx2_worker.c index 7a6d4cad2..ea2d0b5a4 100644 --- a/drivers/event/octeontx2/otx2_worker.c +++ b/drivers/event/octeontx2/otx2_worker.c @@ -81,60 +81,132 @@ otx2_ssogws_release_event(struct otx2_ssogws *ws) otx2_ssogws_swtag_flush(ws); } -uint16_t __hot -otx2_ssogws_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks) -{ - struct otx2_ssogws *ws = port; - - RTE_SET_USED(timeout_ticks); - - if (ws->swtag_req) { - ws->swtag_req = 0; - otx2_ssogws_swtag_wait(ws); - return 1; - } - - return otx2_ssogws_get_work(ws, ev); +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ +uint16_t __hot \ +otx2_ssogws_deq_ ##name(void *port, struct rte_event *ev, \ + uint64_t timeout_ticks) \ +{ \ + struct otx2_ssogws *ws = port; \ + \ + RTE_SET_USED(timeout_ticks); \ + \ + if (ws->swtag_req) { \ + ws->swtag_req = 0; \ + otx2_ssogws_swtag_wait(ws); \ + return 1; \ + } \ + \ + return otx2_ssogws_get_work(ws, ev, flags, ws->lookup_mem); \ +} \ + \ +uint16_t __hot \ +otx2_ssogws_deq_burst_ ##name(void *port, struct rte_event ev[], \ + uint16_t nb_events, \ + uint64_t timeout_ticks) \ +{ \ + RTE_SET_USED(nb_events); \ + \ + return otx2_ssogws_deq_ 
##name(port, ev, timeout_ticks); \ +} \ + \ +uint16_t __hot \ +otx2_ssogws_deq_timeout_ ##name(void *port, struct rte_event *ev, \ + uint64_t timeout_ticks) \ +{ \ + struct otx2_ssogws *ws = port; \ + uint16_t ret = 1; \ + uint64_t iter; \ + \ + if (ws->swtag_req) { \ + ws->swtag_req = 0; \ + otx2_ssogws_swtag_wait(ws); \ + return ret; \ + } \ + \ + ret = otx2_ssogws_get_work(ws, ev, flags, ws->lookup_mem); \ + for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) \ + ret = otx2_ssogws_get_work(ws, ev, flags, \ + ws->lookup_mem); \ + \ + return ret; \ +} \ + \ +uint16_t __hot \ +otx2_ssogws_deq_timeout_burst_ ##name(void *port, struct rte_event ev[],\ + uint16_t nb_events, \ + uint64_t timeout_ticks) \ +{ \ + RTE_SET_USED(nb_events); \ + \ + return otx2_ssogws_deq_timeout_ ##name(port, ev, timeout_ticks);\ +} \ + \ +uint16_t __hot \ +otx2_ssogws_deq_seg_ ##name(void *port, struct rte_event *ev, \ + uint64_t timeout_ticks) \ +{ \ + struct otx2_ssogws *ws = port; \ + \ + RTE_SET_USED(timeout_ticks); \ + \ + if (ws->swtag_req) { \ + ws->swtag_req = 0; \ + otx2_ssogws_swtag_wait(ws); \ + return 1; \ + } \ + \ + return otx2_ssogws_get_work(ws, ev, flags | NIX_RX_MULTI_SEG_F, \ + ws->lookup_mem); \ +} \ + \ +uint16_t __hot \ +otx2_ssogws_deq_seg_burst_ ##name(void *port, struct rte_event ev[], \ + uint16_t nb_events, \ + uint64_t timeout_ticks) \ +{ \ + RTE_SET_USED(nb_events); \ + \ + return otx2_ssogws_deq_seg_ ##name(port, ev, timeout_ticks); \ +} \ + \ +uint16_t __hot \ +otx2_ssogws_deq_seg_timeout_ ##name(void *port, struct rte_event *ev, \ + uint64_t timeout_ticks) \ +{ \ + struct otx2_ssogws *ws = port; \ + uint16_t ret = 1; \ + uint64_t iter; \ + \ + if (ws->swtag_req) { \ + ws->swtag_req = 0; \ + otx2_ssogws_swtag_wait(ws); \ + return ret; \ + } \ + \ + ret = otx2_ssogws_get_work(ws, ev, flags | NIX_RX_MULTI_SEG_F, \ + ws->lookup_mem); \ + for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) \ + ret = otx2_ssogws_get_work(ws, ev, \ + flags | 
NIX_RX_MULTI_SEG_F, \ + ws->lookup_mem); \ + \ + return ret; \ +} \ + \ +uint16_t __hot \ +otx2_ssogws_deq_seg_timeout_burst_ ##name(void *port, \ + struct rte_event ev[], \ + uint16_t nb_events, \ + uint64_t timeout_ticks) \ +{ \ + RTE_SET_USED(nb_events); \ + \ + return otx2_ssogws_deq_seg_timeout_ ##name(port, ev, \ + timeout_ticks); \ } -uint16_t __hot -otx2_ssogws_deq_burst(void *port, struct rte_event ev[], uint16_t nb_events, - uint64_t timeout_ticks) -{ - RTE_SET_USED(nb_events); - - return otx2_ssogws_deq(port, ev, timeout_ticks); -} - -uint16_t __hot -otx2_ssogws_deq_timeout(void *port, struct rte_event *ev, - uint64_t timeout_ticks) -{ - struct otx2_ssogws *ws = port; - uint16_t ret = 1; - uint64_t iter; - - if (ws->swtag_req) { - ws->swtag_req = 0; - otx2_ssogws_swtag_wait(ws); - return ret; - } - - ret = otx2_ssogws_get_work(ws, ev); - for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) - ret = otx2_ssogws_get_work(ws, ev); - - return ret; -} - -uint16_t __hot -otx2_ssogws_deq_timeout_burst(void *port, struct rte_event ev[], - uint16_t nb_events, uint64_t timeout_ticks) -{ - RTE_SET_USED(nb_events); - - return otx2_ssogws_deq_timeout(port, ev, timeout_ticks); -} +SSO_RX_ADPTR_ENQ_FASTPATH_FUNC +#undef R uint16_t __hot otx2_ssogws_enq(void *port, const struct rte_event *ev) @@ -221,7 +293,7 @@ ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id, uintptr_t base, while (aq_cnt || cq_ds_cnt || ds_cnt) { otx2_write64(val, ws->getwrk_op); - otx2_ssogws_get_work_empty(ws, &ev); + otx2_ssogws_get_work_empty(ws, &ev, 0); if (fn != NULL && ev.u64 != 0) fn(arg, ev); if (ev.sched_type != SSO_TT_EMPTY) diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h index f06ff064e..accf7f956 100644 --- a/drivers/event/octeontx2/otx2_worker.h +++ b/drivers/event/octeontx2/otx2_worker.h @@ -14,15 +14,19 @@ /* SSO Operations */ static __rte_always_inline uint16_t -otx2_ssogws_get_work(struct otx2_ssogws *ws, struct rte_event 
*ev) +otx2_ssogws_get_work(struct otx2_ssogws *ws, struct rte_event *ev, + const uint32_t flags, const void * const lookup_mem) { union otx2_sso_event event; uint64_t get_work1; + uint64_t mbuf; otx2_write64(BIT_ULL(16) | /* wait for work. */ 1, /* Use Mask set 0. */ ws->getwrk_op); + if (flags & NIX_RX_OFFLOAD_PTYPE_F) + rte_prefetch_non_temporal(lookup_mem); #ifdef RTE_ARCH_ARM64 asm volatile( " ldr %[tag], [%[tag_loc]] \n" @@ -34,9 +38,12 @@ otx2_ssogws_get_work(struct otx2_ssogws *ws, struct rte_event *ev) " ldr %[wqp], [%[wqp_loc]] \n" " tbnz %[tag], 63, rty%= \n" "done%=: dmb ld \n" - " prfm pldl1keep, [%[wqp]] \n" + " prfm pldl1keep, [%[wqp], #8] \n" + " sub %[mbuf], %[wqp], #0x80 \n" + " prfm pldl1keep, [%[mbuf]] \n" : [tag] "=&r" (event.get_work0), - [wqp] "=&r" (get_work1) + [wqp] "=&r" (get_work1), + [mbuf] "=&r" (mbuf) : [tag_loc] "r" (ws->tag_op), [wqp_loc] "r" (ws->wqp_op) ); @@ -47,6 +54,8 @@ otx2_ssogws_get_work(struct otx2_ssogws *ws, struct rte_event *ev) get_work1 = otx2_read64(ws->wqp_op); rte_prefetch0((const void *)get_work1); + mbuf = (uint64_t)((char *)get_work1 - sizeof(struct rte_mbuf)); + rte_prefetch0((const void *)mbuf); #endif event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 | @@ -55,6 +64,12 @@ otx2_ssogws_get_work(struct otx2_ssogws *ws, struct rte_event *ev) ws->cur_tt = event.sched_type; ws->cur_grp = event.queue_id; + if (event.sched_type != SSO_TT_EMPTY && + event.event_type == RTE_EVENT_TYPE_ETHDEV) { + otx2_wqe_to_mbuf(get_work1, mbuf, event.sub_event_type, + (uint32_t) event.get_work0, flags, lookup_mem); + get_work1 = mbuf; + } ev->event = event.get_work0; ev->u64 = get_work1; @@ -64,10 +79,12 @@ otx2_ssogws_get_work(struct otx2_ssogws *ws, struct rte_event *ev) /* Used in cleaning up workslot. 
*/ static __rte_always_inline uint16_t -otx2_ssogws_get_work_empty(struct otx2_ssogws *ws, struct rte_event *ev) +otx2_ssogws_get_work_empty(struct otx2_ssogws *ws, struct rte_event *ev, + const uint32_t flags) { union otx2_sso_event event; uint64_t get_work1; + uint64_t mbuf; #ifdef RTE_ARCH_ARM64 asm volatile( @@ -80,9 +97,12 @@ otx2_ssogws_get_work_empty(struct otx2_ssogws *ws, struct rte_event *ev) " ldr %[wqp], [%[wqp_loc]] \n" " tbnz %[tag], 63, rty%= \n" "done%=: dmb ld \n" - " prfm pldl1keep, [%[wqp]] \n" + " prfm pldl1keep, [%[wqp], #8] \n" + " sub %[mbuf], %[wqp], #0x80 \n" + " prfm pldl1keep, [%[mbuf]] \n" : [tag] "=&r" (event.get_work0), - [wqp] "=&r" (get_work1) + [wqp] "=&r" (get_work1), + [mbuf] "=&r" (mbuf) : [tag_loc] "r" (ws->tag_op), [wqp_loc] "r" (ws->wqp_op) ); @@ -92,7 +112,9 @@ otx2_ssogws_get_work_empty(struct otx2_ssogws *ws, struct rte_event *ev) event.get_work0 = otx2_read64(ws->tag_op); get_work1 = otx2_read64(ws->wqp_op); - rte_prefetch0((const void *)get_work1); + rte_prefetch_non_temporal((const void *)get_work1); + mbuf = (uint64_t)((char *)get_work1 - sizeof(struct rte_mbuf)); + rte_prefetch_non_temporal((const void *)mbuf); #endif event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 | @@ -101,6 +123,13 @@ otx2_ssogws_get_work_empty(struct otx2_ssogws *ws, struct rte_event *ev) ws->cur_tt = event.sched_type; ws->cur_grp = event.queue_id; + if (event.sched_type != SSO_TT_EMPTY && + event.event_type == RTE_EVENT_TYPE_ETHDEV) { + otx2_wqe_to_mbuf(get_work1, mbuf, event.sub_event_type, + (uint32_t) event.get_work0, flags, NULL); + get_work1 = mbuf; + } + ev->event = event.get_work0; ev->u64 = get_work1; diff --git a/drivers/event/octeontx2/otx2_worker_dual.c b/drivers/event/octeontx2/otx2_worker_dual.c index 58fd588f6..b5cf9ac12 100644 --- a/drivers/event/octeontx2/otx2_worker_dual.c +++ b/drivers/event/octeontx2/otx2_worker_dual.c @@ -140,68 +140,162 @@ otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[], return 
1; } -uint16_t __hot -otx2_ssogws_dual_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks) -{ - struct otx2_ssogws_dual *ws = port; - uint8_t gw; - - RTE_SET_USED(timeout_ticks); - if (ws->swtag_req) { - otx2_ssogws_swtag_wait((struct otx2_ssogws *) - &ws->ws_state[!ws->vws]); - ws->swtag_req = 0; - return 1; - } - - gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], - &ws->ws_state[!ws->vws], ev); - ws->vws = !ws->vws; - - return gw; -} - -uint16_t __hot -otx2_ssogws_dual_deq_burst(void *port, struct rte_event ev[], - uint16_t nb_events, uint64_t timeout_ticks) -{ - RTE_SET_USED(nb_events); - - return otx2_ssogws_dual_deq(port, ev, timeout_ticks); -} - -uint16_t __hot -otx2_ssogws_dual_deq_timeout(void *port, struct rte_event *ev, - uint64_t timeout_ticks) -{ - struct otx2_ssogws_dual *ws = port; - uint64_t iter; - uint8_t gw; - - if (ws->swtag_req) { - otx2_ssogws_swtag_wait((struct otx2_ssogws *) - &ws->ws_state[!ws->vws]); - ws->swtag_req = 0; - return 1; - } - - gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], - &ws->ws_state[!ws->vws], ev); - ws->vws = !ws->vws; - for (iter = 1; iter < timeout_ticks && (gw == 0); iter++) { - gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], - &ws->ws_state[!ws->vws], ev); - ws->vws = !ws->vws; - } - - return gw; +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ +uint16_t __hot \ +otx2_ssogws_dual_deq_ ##name(void *port, struct rte_event *ev, \ + uint64_t timeout_ticks) \ +{ \ + struct otx2_ssogws_dual *ws = port; \ + uint8_t gw; \ + \ + RTE_SET_USED(timeout_ticks); \ + if (ws->swtag_req) { \ + otx2_ssogws_swtag_wait((struct otx2_ssogws *) \ + &ws->ws_state[!ws->vws]); \ + ws->swtag_req = 0; \ + return 1; \ + } \ + \ + gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \ + &ws->ws_state[!ws->vws], ev, \ + flags, ws->lookup_mem); \ + ws->vws = !ws->vws; \ + \ + return gw; \ +} \ + \ +uint16_t __hot \ +otx2_ssogws_dual_deq_burst_ ##name(void *port, struct rte_event ev[], \ + uint16_t nb_events, \ + 
uint64_t timeout_ticks) \ +{ \ + RTE_SET_USED(nb_events); \ + \ + return otx2_ssogws_dual_deq_ ##name(port, ev, timeout_ticks); \ +} \ + \ +uint16_t __hot \ +otx2_ssogws_dual_deq_timeout_ ##name(void *port, struct rte_event *ev, \ + uint64_t timeout_ticks) \ +{ \ + struct otx2_ssogws_dual *ws = port; \ + uint64_t iter; \ + uint8_t gw; \ + \ + if (ws->swtag_req) { \ + otx2_ssogws_swtag_wait((struct otx2_ssogws *) \ + &ws->ws_state[!ws->vws]); \ + ws->swtag_req = 0; \ + return 1; \ + } \ + \ + gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \ + &ws->ws_state[!ws->vws], ev, \ + flags, ws->lookup_mem); \ + ws->vws = !ws->vws; \ + for (iter = 1; iter < timeout_ticks && (gw == 0); iter++) { \ + gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \ + &ws->ws_state[!ws->vws], \ + ev, flags, \ + ws->lookup_mem); \ + ws->vws = !ws->vws; \ + } \ + \ + return gw; \ +} \ + \ +uint16_t __hot \ +otx2_ssogws_dual_deq_timeout_burst_ ##name(void *port, \ + struct rte_event ev[], \ + uint16_t nb_events, \ + uint64_t timeout_ticks) \ +{ \ + RTE_SET_USED(nb_events); \ + \ + return otx2_ssogws_dual_deq_timeout_ ##name(port, ev, \ + timeout_ticks); \ +} \ + \ +uint16_t __hot \ +otx2_ssogws_dual_deq_seg_ ##name(void *port, struct rte_event *ev, \ + uint64_t timeout_ticks) \ +{ \ + struct otx2_ssogws_dual *ws = port; \ + uint8_t gw; \ + \ + RTE_SET_USED(timeout_ticks); \ + if (ws->swtag_req) { \ + otx2_ssogws_swtag_wait((struct otx2_ssogws *) \ + &ws->ws_state[!ws->vws]); \ + ws->swtag_req = 0; \ + return 1; \ + } \ + \ + gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \ + &ws->ws_state[!ws->vws], ev, \ + flags | NIX_RX_MULTI_SEG_F, \ + ws->lookup_mem); \ + ws->vws = !ws->vws; \ + \ + return gw; \ +} \ + \ +uint16_t __hot \ +otx2_ssogws_dual_deq_seg_burst_ ##name(void *port, \ + struct rte_event ev[], \ + uint16_t nb_events, \ + uint64_t timeout_ticks) \ +{ \ + RTE_SET_USED(nb_events); \ + \ + return otx2_ssogws_dual_deq_seg_ ##name(port, ev, \ + timeout_ticks); \ +} \ + \ 
+uint16_t __hot \ +otx2_ssogws_dual_deq_seg_timeout_ ##name(void *port, \ + struct rte_event *ev, \ + uint64_t timeout_ticks) \ +{ \ + struct otx2_ssogws_dual *ws = port; \ + uint64_t iter; \ + uint8_t gw; \ + \ + if (ws->swtag_req) { \ + otx2_ssogws_swtag_wait((struct otx2_ssogws *) \ + &ws->ws_state[!ws->vws]); \ + ws->swtag_req = 0; \ + return 1; \ + } \ + \ + gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \ + &ws->ws_state[!ws->vws], ev, \ + flags | NIX_RX_MULTI_SEG_F, \ + ws->lookup_mem); \ + ws->vws = !ws->vws; \ + for (iter = 1; iter < timeout_ticks && (gw == 0); iter++) { \ + gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \ + &ws->ws_state[!ws->vws], \ + ev, flags | \ + NIX_RX_MULTI_SEG_F, \ + ws->lookup_mem); \ + ws->vws = !ws->vws; \ + } \ + \ + return gw; \ +} \ + \ +uint16_t __hot \ +otx2_ssogws_dual_deq_seg_timeout_burst_ ##name(void *port, \ + struct rte_event ev[], \ + uint16_t nb_events, \ + uint64_t timeout_ticks) \ +{ \ + RTE_SET_USED(nb_events); \ + \ + return otx2_ssogws_dual_deq_seg_timeout_ ##name(port, ev, \ + timeout_ticks); \ } -uint16_t __hot -otx2_ssogws_dual_deq_timeout_burst(void *port, struct rte_event ev[], - uint16_t nb_events, uint64_t timeout_ticks) -{ - RTE_SET_USED(nb_events); - - return otx2_ssogws_dual_deq_timeout(port, ev, timeout_ticks); -} +SSO_RX_ADPTR_ENQ_FASTPATH_FUNC +#undef R diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h index d8453d1f7..32fe61b44 100644 --- a/drivers/event/octeontx2/otx2_worker_dual.h +++ b/drivers/event/octeontx2/otx2_worker_dual.h @@ -15,12 +15,16 @@ static __rte_always_inline uint16_t otx2_ssogws_dual_get_work(struct otx2_ssogws_state *ws, struct otx2_ssogws_state *ws_pair, - struct rte_event *ev) + struct rte_event *ev, const uint32_t flags, + const void * const lookup_mem) { const uint64_t set_gw = BIT_ULL(16) | 1; union otx2_sso_event event; uint64_t get_work1; + uint64_t mbuf; + if (flags & NIX_RX_OFFLOAD_PTYPE_F) + 
rte_prefetch_non_temporal(lookup_mem); #ifdef RTE_ARCH_ARM64 asm volatile( " ldr %[tag], [%[tag_loc]] \n" @@ -33,9 +37,12 @@ otx2_ssogws_dual_get_work(struct otx2_ssogws_state *ws, " tbnz %[tag], 63, rty%= \n" "done%=: str %[gw], [%[pong]] \n" " dmb ld \n" - " prfm pldl1keep, [%[wqp]] \n" + " prfm pldl1keep, [%[wqp], #8]\n" + " sub %[mbuf], %[wqp], #0x80 \n" + " prfm pldl1keep, [%[mbuf]] \n" : [tag] "=&r" (event.get_work0), - [wqp] "=&r" (get_work1) + [wqp] "=&r" (get_work1), + [mbuf] "=&r" (mbuf) : [tag_loc] "r" (ws->tag_op), [wqp_loc] "r" (ws->wqp_op), [gw] "r" (set_gw), @@ -49,6 +56,8 @@ otx2_ssogws_dual_get_work(struct otx2_ssogws_state *ws, otx2_write64(set_gw, ws_pair->getwrk_op); rte_prefetch0((const void *)get_work1); + mbuf = (uint64_t)((char *)get_work1 - sizeof(struct rte_mbuf)); + rte_prefetch0((const void *)mbuf); #endif event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 | (event.get_work0 & (0x3FFull << 36)) << 4 | @@ -56,6 +65,13 @@ otx2_ssogws_dual_get_work(struct otx2_ssogws_state *ws, ws->cur_tt = event.sched_type; ws->cur_grp = event.queue_id; + if (event.sched_type != SSO_TT_EMPTY && + event.event_type == RTE_EVENT_TYPE_ETHDEV) { + otx2_wqe_to_mbuf(get_work1, mbuf, event.sub_event_type, + (uint32_t) event.get_work0, flags, lookup_mem); + get_work1 = mbuf; + } + ev->event = event.get_work0; ev->u64 = get_work1; From patchwork Thu Jul 4 02:19:38 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 56058 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 6C30D1B197; Thu, 4 Jul 2019 04:20:00 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id B5FA95B3A for ; Thu, 4 Jul 2019 04:19:58 +0200 
(CEST) Date: Thu, 4 Jul 2019 07:49:38 +0530 Message-ID: <20190704021940.3023-5-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190704021940.3023-1-pbhagavatula@marvell.com> References: <20190704021940.3023-1-pbhagavatula@marvell.com> MIME-Version: 1.0 Subject: [dpdk-dev] 
[PATCH v4 4/5] event/octeontx2: add PTP support for SSO X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Harman Kalra Add PTP support for SSO based on rx_offloads of the queue connected to it. Signed-off-by: Harman Kalra Signed-off-by: Nithin Dabilpuram Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx2/otx2_evdev.c | 2 ++ drivers/event/octeontx2/otx2_evdev.h | 6 ++++++ drivers/event/octeontx2/otx2_evdev_adptr.c | 1 + drivers/event/octeontx2/otx2_worker.h | 6 ++++++ drivers/event/octeontx2/otx2_worker_dual.c | 18 ++++++++++++------ drivers/event/octeontx2/otx2_worker_dual.h | 5 ++++- 6 files changed, 31 insertions(+), 7 deletions(-) diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index f45fc008d..ca75e4215 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -1095,6 +1095,7 @@ otx2_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id, sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP); ws->fc_mem = dev->fc_mem; ws->xaq_lmt = dev->xaq_lmt; + ws->tstamp = dev->tstamp; otx2_write64(val, OTX2_SSOW_GET_BASE_ADDR( ws->ws_state[0].getwrk_op) + SSOW_LF_GWS_NW_TIM); otx2_write64(val, OTX2_SSOW_GET_BASE_ADDR( @@ -1107,6 +1108,7 @@ otx2_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id, sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP); ws->fc_mem = dev->fc_mem; ws->xaq_lmt = dev->xaq_lmt; + ws->tstamp = dev->tstamp; otx2_write64(val, base + SSOW_LF_GWS_NW_TIM); } diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h index a81a8be6f..2df9ec468 100644 --- a/drivers/event/octeontx2/otx2_evdev.h +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -149,6 +149,8 @@ struct otx2_sso_evdev { /* MSIX offsets */ uint16_t sso_msixoff[OTX2_SSO_MAX_VHGRP]; uint16_t 
 	ssow_msixoff[OTX2_SSO_MAX_VHWS];
+	/* PTP timestamp */
+	struct otx2_timesync_info *tstamp;
 } __rte_cache_aligned;

 #define OTX2_SSOGWS_OPS \
@@ -173,6 +175,8 @@ struct otx2_ssogws {
 	uint64_t xaq_lmt __rte_cache_aligned;
 	uint64_t *fc_mem;
 	uintptr_t grps_base[OTX2_SSO_MAX_VHGRP];
+	/* PTP timestamp */
+	struct otx2_timesync_info *tstamp;
 } __rte_cache_aligned;

 struct otx2_ssogws_state {
@@ -190,6 +194,8 @@ struct otx2_ssogws_dual {
 	uint64_t xaq_lmt __rte_cache_aligned;
 	uint64_t *fc_mem;
 	uintptr_t grps_base[OTX2_SSO_MAX_VHGRP];
+	/* PTP timestamp */
+	struct otx2_timesync_info *tstamp;
 } __rte_cache_aligned;

 static inline struct otx2_sso_evdev *

diff --git a/drivers/event/octeontx2/otx2_evdev_adptr.c b/drivers/event/octeontx2/otx2_evdev_adptr.c
index e605fd1d4..e5aaa67b6 100644
--- a/drivers/event/octeontx2/otx2_evdev_adptr.c
+++ b/drivers/event/octeontx2/otx2_evdev_adptr.c
@@ -297,6 +297,7 @@ otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev,
 	}

 	dev->rx_offloads |= otx2_eth_dev->rx_offload_flags;
+	dev->tstamp = &otx2_eth_dev->tstamp;
 	sso_fastpath_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);

 	return 0;

diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h
index accf7f956..1e1e947ef 100644
--- a/drivers/event/octeontx2/otx2_worker.h
+++ b/drivers/event/octeontx2/otx2_worker.h
@@ -68,6 +68,9 @@ otx2_ssogws_get_work(struct otx2_ssogws *ws, struct rte_event *ev,
 	    event.event_type == RTE_EVENT_TYPE_ETHDEV) {
 		otx2_wqe_to_mbuf(get_work1, mbuf, event.sub_event_type,
 				 (uint32_t) event.get_work0, flags, lookup_mem);
+		/* Extracting tstamp, if PTP enabled */
+		otx2_nix_mbuf_to_tstamp((struct rte_mbuf *)mbuf, ws->tstamp,
+					flags);
 		get_work1 = mbuf;
 	}
@@ -127,6 +130,9 @@ otx2_ssogws_get_work_empty(struct otx2_ssogws *ws, struct rte_event *ev,
 	    event.event_type == RTE_EVENT_TYPE_ETHDEV) {
 		otx2_wqe_to_mbuf(get_work1, mbuf, event.sub_event_type,
 				 (uint32_t) event.get_work0, flags, NULL);
+		/* Extracting tstamp, if PTP enabled */
+		otx2_nix_mbuf_to_tstamp((struct rte_mbuf *)mbuf, ws->tstamp,
+					flags);
 		get_work1 = mbuf;
 	}

diff --git a/drivers/event/octeontx2/otx2_worker_dual.c b/drivers/event/octeontx2/otx2_worker_dual.c
index b5cf9ac12..cbe03c1bb 100644
--- a/drivers/event/octeontx2/otx2_worker_dual.c
+++ b/drivers/event/octeontx2/otx2_worker_dual.c
@@ -158,7 +158,8 @@ otx2_ssogws_dual_deq_ ##name(void *port, struct rte_event *ev, \
 									\
 	gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws],		\
 				       &ws->ws_state[!ws->vws], ev,	\
-				       flags, ws->lookup_mem);		\
+				       flags, ws->lookup_mem,		\
+				       ws->tstamp);			\
 	ws->vws = !ws->vws;						\
 									\
 	return gw;							\
@@ -191,13 +192,15 @@ otx2_ssogws_dual_deq_timeout_ ##name(void *port, struct rte_event *ev, \
 									\
 	gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws],		\
 				       &ws->ws_state[!ws->vws], ev,	\
-				       flags, ws->lookup_mem);		\
+				       flags, ws->lookup_mem,		\
+				       ws->tstamp);			\
 	ws->vws = !ws->vws;						\
 	for (iter = 1; iter < timeout_ticks && (gw == 0); iter++) {	\
 		gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws],	\
 					       &ws->ws_state[!ws->vws],	\
 					       ev, flags,		\
-					       ws->lookup_mem);		\
+					       ws->lookup_mem,		\
+					       ws->tstamp);		\
 		ws->vws = !ws->vws;					\
 	}								\
 									\
@@ -234,7 +237,8 @@ otx2_ssogws_dual_deq_seg_ ##name(void *port, struct rte_event *ev, \
 	gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws],		\
 				       &ws->ws_state[!ws->vws], ev,	\
 				       flags | NIX_RX_MULTI_SEG_F,	\
-				       ws->lookup_mem);			\
+				       ws->lookup_mem,			\
+				       ws->tstamp);			\
 	ws->vws = !ws->vws;						\
 									\
 	return gw;							\
@@ -271,14 +275,16 @@ otx2_ssogws_dual_deq_seg_timeout_ ##name(void *port,	\
 	gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws],		\
 				       &ws->ws_state[!ws->vws], ev,	\
 				       flags | NIX_RX_MULTI_SEG_F,	\
-				       ws->lookup_mem);			\
+				       ws->lookup_mem,			\
+				       ws->tstamp);			\
 	ws->vws = !ws->vws;						\
 	for (iter = 1; iter < timeout_ticks && (gw == 0); iter++) {	\
 		gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws],	\
 					       &ws->ws_state[!ws->vws],	\
 					       ev, flags |		\
 					       NIX_RX_MULTI_SEG_F,	\
-					       ws->lookup_mem);		\
+					       ws->lookup_mem,		\
+					       ws->tstamp);		\
 		ws->vws = !ws->vws;					\
 	}								\
 									\
diff --git
a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h
index 32fe61b44..4a72f424d 100644
--- a/drivers/event/octeontx2/otx2_worker_dual.h
+++ b/drivers/event/octeontx2/otx2_worker_dual.h
@@ -16,7 +16,8 @@ static __rte_always_inline uint16_t
 otx2_ssogws_dual_get_work(struct otx2_ssogws_state *ws,
 			  struct otx2_ssogws_state *ws_pair,
 			  struct rte_event *ev, const uint32_t flags,
-			  const void * const lookup_mem)
+			  const void * const lookup_mem,
+			  struct otx2_timesync_info * const tstamp)
 {
 	const uint64_t set_gw = BIT_ULL(16) | 1;
 	union otx2_sso_event event;
@@ -69,6 +70,8 @@ otx2_ssogws_dual_get_work(struct otx2_ssogws_state *ws,
 	    event.event_type == RTE_EVENT_TYPE_ETHDEV) {
 		otx2_wqe_to_mbuf(get_work1, mbuf, event.sub_event_type,
 				 (uint32_t) event.get_work0, flags, lookup_mem);
+		/* Extracting tstamp, if PTP enabled */
+		otx2_nix_mbuf_to_tstamp((struct rte_mbuf *)mbuf, tstamp, flags);
 		get_work1 = mbuf;
 	}

From patchwork Thu Jul 4 02:19:39 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 56059
X-Patchwork-Delegate: jerinj@marvell.com
CC: Nithin Dabilpuram
Date: Thu, 4 Jul 2019 07:49:39 +0530
Message-ID: <20190704021940.3023-6-pbhagavatula@marvell.com>
In-Reply-To: <20190704021940.3023-1-pbhagavatula@marvell.com>
References: <20190704021940.3023-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v4 5/5] event/octeontx2: add Tx adapter support

From: Pavan Nikhilesh

Add event eth Tx adapter support to octeontx2 SSO.
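This patch's Tx adapter fastpath picks one pre-specialized enqueue function per combination of offload flags, indexing a multi-dimensional function-pointer table with `!!(offloads & FLAG)` so each flag collapses to 0 or 1. A minimal self-contained sketch of that dispatch technique follows; the flag bits and handler names are illustrative stand-ins, not the driver's real `NIX_TX_OFFLOAD_*` symbols:

```c
#include <stdint.h>

/*
 * Sketch of the dispatch technique used by the driver's
 * sso_fastpath_fns_set(): one specialized function per offload-flag
 * combination, stored in a table indexed by !!(offloads & FLAG).
 * Flag bits and handler names here are hypothetical.
 */
#define TX_OFFLOAD_CSUM_F (1ULL << 0)
#define TX_OFFLOAD_VLAN_F (1ULL << 1)

typedef uint16_t (*tx_enqueue_fn)(void);

static uint16_t tx_plain(void)     { return 0; }
static uint16_t tx_csum(void)      { return 1; }
static uint16_t tx_vlan(void)      { return 2; }
static uint16_t tx_vlan_csum(void) { return 3; }

/* [VLAN][CSUM] table, a 2-D analogue of the [f4][f3][f2][f1][f0] tables. */
static const tx_enqueue_fn tx_fns[2][2] = {
    [0][0] = tx_plain, [0][1] = tx_csum,
    [1][0] = tx_vlan,  [1][1] = tx_vlan_csum,
};

/* Resolve the specialized function once, at configure time. */
static tx_enqueue_fn
select_tx_fn(uint64_t tx_offloads)
{
    return tx_fns[!!(tx_offloads & TX_OFFLOAD_VLAN_F)]
                 [!!(tx_offloads & TX_OFFLOAD_CSUM_F)];
}
```

The per-packet path then calls through the resolved pointer with no branching on offload flags; the cost is 2^N table entries for N flags (2^5 = 32 entries per table in this patch).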
Signed-off-by: Jerin Jacob
Signed-off-by: Pavan Nikhilesh
Signed-off-by: Nithin Dabilpuram
---
 drivers/event/octeontx2/otx2_evdev.c       | 79 ++++++++++++++++++++
 drivers/event/octeontx2/otx2_evdev.h       | 32 ++++++++
 drivers/event/octeontx2/otx2_evdev_adptr.c | 86 ++++++++++++++++++++++
 drivers/event/octeontx2/otx2_worker.c      | 29 ++++++++
 drivers/event/octeontx2/otx2_worker.h      | 49 ++++++++++++
 drivers/event/octeontx2/otx2_worker_dual.c | 35 +++++++++
 6 files changed, 310 insertions(+)

diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index ca75e4215..56716c2ac 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -168,6 +168,39 @@ SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
 #undef R
 	};

+	/* Tx modes */
+	const event_tx_adapter_enqueue ssogws_tx_adptr_enq[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags)				\
+	[f4][f3][f2][f1][f0] = otx2_ssogws_tx_adptr_enq_ ## name,
+SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
+#undef T
+	};
+
+	const event_tx_adapter_enqueue
+		ssogws_tx_adptr_enq_seg[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags)				\
+	[f4][f3][f2][f1][f0] = otx2_ssogws_tx_adptr_enq_seg_ ## name,
+SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
+#undef T
+	};
+
+	const event_tx_adapter_enqueue
+		ssogws_dual_tx_adptr_enq[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags)				\
+	[f4][f3][f2][f1][f0] = otx2_ssogws_dual_tx_adptr_enq_ ## name,
+SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
+#undef T
+	};
+
+	const event_tx_adapter_enqueue
+		ssogws_dual_tx_adptr_enq_seg[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags)				\
+	[f4][f3][f2][f1][f0] =						\
+		otx2_ssogws_dual_tx_adptr_enq_seg_ ## name,
+SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
+#undef T
+	};
+
 	event_dev->enqueue = otx2_ssogws_enq;
 	event_dev->enqueue_burst = otx2_ssogws_enq_burst;
 	event_dev->enqueue_new_burst = otx2_ssogws_enq_new_burst;
@@ -238,6 +271,23 @@ SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
 		}
 	}

+	if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {
+		/* [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */
+		event_dev->txa_enqueue = ssogws_tx_adptr_enq_seg
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+	} else {
+		event_dev->txa_enqueue = ssogws_tx_adptr_enq
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+	}
+
 	if (dev->dual_ws) {
 		event_dev->enqueue = otx2_ssogws_dual_enq;
 		event_dev->enqueue_burst = otx2_ssogws_dual_enq_burst;
@@ -352,6 +402,31 @@ SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
 						NIX_RX_OFFLOAD_RSS_F)];
 		}
 	}
+
+		if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {
+			/* [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */
+			event_dev->txa_enqueue = ssogws_dual_tx_adptr_enq_seg
+				[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
+				[!!(dev->tx_offloads &
+						NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+				[!!(dev->tx_offloads &
+						NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+				[!!(dev->tx_offloads &
+						NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+				[!!(dev->tx_offloads &
+						NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+		} else {
+			event_dev->txa_enqueue = ssogws_dual_tx_adptr_enq
+				[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
+				[!!(dev->tx_offloads &
+						NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+				[!!(dev->tx_offloads &
+						NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+				[!!(dev->tx_offloads &
+						NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+				[!!(dev->tx_offloads &
+						NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+		}
 	}
 	rte_mb();
 }
@@ -1413,6 +1488,10 @@ static struct rte_eventdev_ops otx2_sso_ops = {
 	.eth_rx_adapter_start = otx2_sso_rx_adapter_start,
 	.eth_rx_adapter_stop = otx2_sso_rx_adapter_stop,

+	.eth_tx_adapter_caps_get = otx2_sso_tx_adapter_caps_get,
+	.eth_tx_adapter_queue_add = otx2_sso_tx_adapter_queue_add,
+	.eth_tx_adapter_queue_del = otx2_sso_tx_adapter_queue_del,
+
 	.timer_adapter_caps_get = otx2_tim_caps_get,

 	.xstats_get       = otx2_sso_xstats_get,

diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 2df9ec468..9c9718f6f 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include

 #include "otx2_common.h"
 #include "otx2_dev.h"
@@ -21,6 +22,7 @@
 #define OTX2_SSO_MAX_VHGRP	RTE_EVENT_MAX_QUEUES_PER_DEV
 #define OTX2_SSO_MAX_VHWS	(UINT8_MAX)
 #define OTX2_SSO_FC_NAME	"otx2_evdev_xaq_fc"
+#define OTX2_SSO_SQB_LIMIT	(0x180)
 #define OTX2_SSO_XAQ_SLACK	(8)
 #define OTX2_SSO_XAQ_CACHE_CNT	(0x7)
@@ -133,6 +135,7 @@ struct otx2_sso_evdev {
 	rte_iova_t fc_iova;
 	struct rte_mempool *xaq_pool;
 	uint64_t rx_offloads;
+	uint64_t tx_offloads;
 	uint16_t rx_adptr_pool_cnt;
 	uint32_t adptr_xae_cnt;
 	uint64_t *rx_adptr_pools;
@@ -323,6 +326,22 @@ uint16_t otx2_ssogws_dual_deq_seg_timeout_burst_ ##name(void *port,	\
 SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
 #undef R

+#define T(name, f4, f3, f2, f1, f0, sz, flags)				      \
+uint16_t otx2_ssogws_tx_adptr_enq_ ## name(void *port, struct rte_event ev[],\
+					   uint16_t nb_events);		      \
+uint16_t otx2_ssogws_tx_adptr_enq_seg_ ## name(void *port,		      \
+					       struct rte_event ev[],	      \
+					       uint16_t nb_events);	      \
+uint16_t otx2_ssogws_dual_tx_adptr_enq_ ## name(void *port,		      \
+						struct rte_event ev[],	      \
+						uint16_t nb_events);	      \
+uint16_t otx2_ssogws_dual_tx_adptr_enq_seg_ ## name(void *port,		      \
+						    struct rte_event ev[],    \
+						    uint16_t nb_events);      \
+
+SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
+#undef T
+
 void sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data,
 		      uint32_t event_type);
 int sso_xae_reconfigure(struct rte_eventdev *event_dev);
@@ -342,6 +361,19 @@ int otx2_sso_rx_adapter_start(const struct rte_eventdev *event_dev,
 			      const struct rte_eth_dev *eth_dev);
 int otx2_sso_rx_adapter_stop(const struct rte_eventdev *event_dev,
 			     const struct rte_eth_dev *eth_dev);
+int otx2_sso_tx_adapter_caps_get(const struct rte_eventdev *dev,
+				 const struct rte_eth_dev *eth_dev,
+				 uint32_t *caps);
+int otx2_sso_tx_adapter_queue_add(uint8_t id,
+				  const struct rte_eventdev *event_dev,
+				  const struct rte_eth_dev *eth_dev,
+				  int32_t tx_queue_id);
+int otx2_sso_tx_adapter_queue_del(uint8_t id,
+				  const struct rte_eventdev *event_dev,
+				  const struct rte_eth_dev *eth_dev,
+				  int32_t tx_queue_id);
+
 /* Clean up APIs */
 typedef void (*otx2_handle_event_t)(void *arg, struct rte_event ev);
 void ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id,

diff --git a/drivers/event/octeontx2/otx2_evdev_adptr.c b/drivers/event/octeontx2/otx2_evdev_adptr.c
index e5aaa67b6..0c4f18ec2 100644
--- a/drivers/event/octeontx2/otx2_evdev_adptr.c
+++ b/drivers/event/octeontx2/otx2_evdev_adptr.c
@@ -349,3 +349,89 @@ otx2_sso_rx_adapter_stop(const struct rte_eventdev *event_dev,

 	return 0;
 }
+
+int
+otx2_sso_tx_adapter_caps_get(const struct rte_eventdev *dev,
+			     const struct rte_eth_dev *eth_dev, uint32_t *caps)
+{
+	int ret;
+
+	RTE_SET_USED(dev);
+	ret = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
+	if (ret)
+		*caps = 0;
+	else
+		*caps = RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT;
+
+	return 0;
+}
+
+static int
+sso_sqb_aura_limit_edit(struct rte_mempool *mp, uint16_t nb_sqb_bufs)
+{
+	struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
+	struct npa_aq_enq_req *aura_req;
+
+	aura_req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+	aura_req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+	aura_req->ctype = NPA_AQ_CTYPE_AURA;
+	aura_req->op = NPA_AQ_INSTOP_WRITE;
+
+	aura_req->aura.limit = nb_sqb_bufs;
+	aura_req->aura_mask.limit = ~(aura_req->aura_mask.limit);
+
+	return otx2_mbox_process(npa_lf->mbox);
+}
+
+int
+otx2_sso_tx_adapter_queue_add(uint8_t id, const struct rte_eventdev *event_dev,
+			      const struct rte_eth_dev *eth_dev,
+			      int32_t tx_queue_id)
+{
+	struct otx2_eth_dev *otx2_eth_dev = eth_dev->data->dev_private;
+	struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+	struct otx2_eth_txq *txq;
+	int i;
+
+	RTE_SET_USED(id);
+	if (tx_queue_id < 0) {
+		for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+			txq = eth_dev->data->tx_queues[i];
+			sso_sqb_aura_limit_edit(txq->sqb_pool,
+						OTX2_SSO_SQB_LIMIT);
+		}
+	} else {
+		txq = eth_dev->data->tx_queues[tx_queue_id];
+		sso_sqb_aura_limit_edit(txq->sqb_pool, OTX2_SSO_SQB_LIMIT);
+	}
+
+	dev->tx_offloads |= otx2_eth_dev->tx_offload_flags;
+	sso_fastpath_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
+
+	return 0;
+}
+
+int
+otx2_sso_tx_adapter_queue_del(uint8_t id, const struct rte_eventdev *event_dev,
+			      const struct rte_eth_dev *eth_dev,
+			      int32_t tx_queue_id)
+{
+	struct otx2_eth_txq *txq;
+	int i;
+
+	RTE_SET_USED(id);
+	RTE_SET_USED(event_dev);
+	if (tx_queue_id < 0) {
+		for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+			txq = eth_dev->data->tx_queues[i];
+			sso_sqb_aura_limit_edit(txq->sqb_pool,
+						txq->nb_sqb_bufs);
+		}
+	} else {
+		txq = eth_dev->data->tx_queues[tx_queue_id];
+		sso_sqb_aura_limit_edit(txq->sqb_pool, txq->nb_sqb_bufs);
+	}
+
+	return 0;
+}

diff --git a/drivers/event/octeontx2/otx2_worker.c b/drivers/event/octeontx2/otx2_worker.c
index ea2d0b5a4..cd14cd3d2 100644
--- a/drivers/event/octeontx2/otx2_worker.c
+++ b/drivers/event/octeontx2/otx2_worker.c
@@ -267,6 +267,35 @@ otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[],
 	return 1;
 }

+#define T(name, f4, f3, f2, f1, f0, sz, flags)				\
+uint16_t __hot								\
+otx2_ssogws_tx_adptr_enq_ ## name(void *port, struct rte_event ev[],	\
+				  uint16_t nb_events)			\
+{									\
+	struct otx2_ssogws *ws = port;					\
+	uint64_t cmd[sz];						\
+									\
+	RTE_SET_USED(nb_events);					\
+	return otx2_ssogws_event_tx(ws, ev, cmd, flags);		\
+}
+SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
+#undef T
+
+#define T(name, f4, f3, f2, f1, f0, sz, flags)				\
+uint16_t __hot								\
+otx2_ssogws_tx_adptr_enq_seg_ ## name(void *port, struct rte_event ev[],\
+				      uint16_t nb_events)		\
+{									\
+	struct otx2_ssogws *ws = port;					\
+	uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2];			\
+									\
+	RTE_SET_USED(nb_events);					\
+	return otx2_ssogws_event_tx(ws, ev, cmd, (flags) |		\
+				    NIX_TX_MULTI_SEG_F);		\
+}
+SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
+#undef T
+
 void
 ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id, uintptr_t base,
 		    otx2_handle_event_t fn, void *arg)

diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h
index 1e1e947ef..3c847d223 100644
--- a/drivers/event/octeontx2/otx2_worker.h
+++ b/drivers/event/octeontx2/otx2_worker.h
@@ -219,4 +219,53 @@ otx2_ssogws_swtag_wait(struct otx2_ssogws *ws)
 #endif
 }

+static __rte_always_inline void
+otx2_ssogws_head_wait(struct otx2_ssogws *ws, const uint8_t wait_flag)
+{
+	while (wait_flag && !(otx2_read64(ws->tag_op) & BIT_ULL(35)))
+		;
+
+	rte_cio_wmb();
+}
+
+static __rte_always_inline const struct otx2_eth_txq *
+otx2_ssogws_xtract_meta(struct rte_mbuf *m)
+{
+	return rte_eth_devices[m->port].data->tx_queues[
+			rte_event_eth_tx_adapter_txq_get(m)];
+}
+
+static __rte_always_inline void
+otx2_ssogws_prepare_pkt(const struct otx2_eth_txq *txq, struct rte_mbuf *m,
+			uint64_t *cmd, const uint32_t flags)
+{
+	otx2_lmt_mov(cmd, txq->cmd, otx2_nix_tx_ext_subs(flags));
+	otx2_nix_xmit_prepare(m, cmd, flags);
+}
+
+static __rte_always_inline uint16_t
+otx2_ssogws_event_tx(struct otx2_ssogws *ws, struct rte_event ev[],
+		     uint64_t *cmd, const uint32_t flags)
+{
+	struct rte_mbuf *m = ev[0].mbuf;
+	const struct otx2_eth_txq *txq = otx2_ssogws_xtract_meta(m);
+
+	otx2_ssogws_head_wait(ws, !ev->sched_type);
+	otx2_ssogws_prepare_pkt(txq, m, cmd, flags);
+
+	if (flags & NIX_TX_MULTI_SEG_F) {
+		const uint16_t segdw = otx2_nix_prepare_mseg(m, cmd, flags);
+
+		otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
+					     m->ol_flags, segdw, flags);
+		otx2_nix_xmit_mseg_one(cmd, txq->lmt_addr, txq->io_addr, segdw);
+	} else {
+		/* Passing no. of segdw as 4: HDR + EXT + SG + SMEM */
+		otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
+					     m->ol_flags, 4, flags);
+		otx2_nix_xmit_one(cmd, txq->lmt_addr, txq->io_addr, flags);
+	}
+
+	return 1;
+}
+
 #endif

diff --git a/drivers/event/octeontx2/otx2_worker_dual.c b/drivers/event/octeontx2/otx2_worker_dual.c
index cbe03c1bb..37c274a54 100644
--- a/drivers/event/octeontx2/otx2_worker_dual.c
+++ b/drivers/event/octeontx2/otx2_worker_dual.c
@@ -305,3 +305,38 @@ otx2_ssogws_dual_deq_seg_timeout_burst_ ##name(void *port, \

 SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
 #undef R
+
+#define T(name, f4, f3, f2, f1, f0, sz, flags)				\
+uint16_t __hot								\
+otx2_ssogws_dual_tx_adptr_enq_ ## name(void *port,			\
+				       struct rte_event ev[],		\
+				       uint16_t nb_events)		\
+{									\
+	struct otx2_ssogws_dual *ws = port;				\
+	struct otx2_ssogws *vws =					\
+		(struct otx2_ssogws *)&ws->ws_state[!ws->vws];		\
+	uint64_t cmd[sz];						\
+									\
+	RTE_SET_USED(nb_events);					\
+	return otx2_ssogws_event_tx(vws, ev, cmd, flags);		\
+}
+SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
+#undef T
+
+#define T(name, f4, f3, f2, f1, f0, sz, flags)				\
+uint16_t __hot								\
+otx2_ssogws_dual_tx_adptr_enq_seg_ ## name(void *port,			\
+					   struct rte_event ev[],	\
+					   uint16_t nb_events)		\
+{									\
+	struct otx2_ssogws_dual *ws = port;				\
+	struct otx2_ssogws *vws =					\
+		(struct otx2_ssogws *)&ws->ws_state[!ws->vws];		\
+	uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2];			\
+									\
+	RTE_SET_USED(nb_events);					\
+	return otx2_ssogws_event_tx(vws, ev, cmd, (flags) |		\
+				    NIX_TX_MULTI_SEG_F);		\
+}
+SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
+#undef T
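The `#define T(...)` / `SSO_TX_ADPTR_ENQ_FASTPATH_FUNC` / `#undef T` pattern used throughout this patch is a classic X-macro: one list macro is expanded several times with different definitions of `T` to emit declarations, definitions, and lookup-table entries from a single source of truth. A small self-contained sketch of the pattern, with a made-up entry list standing in for the driver's real flag-combination list:

```c
#include <stdint.h>

/*
 * X-macro sketch of the T()/SSO_TX_ADPTR_ENQ_FASTPATH_FUNC pattern.
 * TX_FASTPATH_FUNC is a hypothetical stand-in for the driver's real
 * list; each entry is (name, flags).
 */
#define TX_FASTPATH_FUNC \
    T(plain, 0x0)        \
    T(csum, 0x1)         \
    T(vlan_csum, 0x3)

/* First expansion: define one specialized function per list entry. */
#define T(name, flags) \
    static uint16_t tx_adptr_enq_ ## name(void) { return (uint16_t)(flags); }
TX_FASTPATH_FUNC
#undef T

/* Second expansion: build a lookup table from the same list. */
static uint16_t (* const tx_fns_by_idx[])(void) = {
#define T(name, flags) tx_adptr_enq_ ## name,
TX_FASTPATH_FUNC
#undef T
};
```

Adding a new flag combination means adding one line to the list; every expansion site stays in sync automatically, which is how the driver keeps its Tx-variant declarations, definitions, and dispatch tables consistent.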