From patchwork Wed Oct 28 12:20:09 2020
X-Patchwork-Submitter: David Marchand
X-Patchwork-Id: 82647
X-Patchwork-Delegate: thomas@monjalon.net
From: David Marchand <david.marchand@redhat.com>
To: dev@dpdk.org
Cc: Hemant Agrawal, Sachin Saxena, Ray Kinsella, Neil Horman, Akhil Goyal,
 Nipun Gupta
Date: Wed, 28 Oct 2020 13:20:09 +0100
Message-Id: <20201028122013.31104-6-david.marchand@redhat.com>
In-Reply-To: <20201028122013.31104-1-david.marchand@redhat.com>
References: <20201027221343.28551-1-david.marchand@redhat.com>
 <20201028122013.31104-1-david.marchand@redhat.com>
Subject: [dpdk-dev] [PATCH v2 5/9] dpaa: switch sequence number to dynamic mbuf field

The dpaa drivers have been overloading the deprecated mbuf field seqn for
internal features. Move this usage to a dynamic mbuf field so that seqn can
later be removed.
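For reference, the standalone sketch below illustrates the general pattern this
patch applies: a driver registers a named dynamic mbuf field once at
initialization, then reads and writes it through the returned offset instead of
the deprecated mbuf->seqn. The "example_" helpers and the field name are
hypothetical and only for illustration; in this patch the registration lives in
rte_dpaa_portal_init() and the accessor is dpaa_seqn() in rte_dpaa_bus.h.

    #include <rte_errno.h>
    #include <rte_mbuf.h>
    #include <rte_mbuf_dyn.h>

    /* Offset of the illustrative field inside struct rte_mbuf, set once. */
    static int example_seqn_offset = -1;

    /* Register the field with the mbuf library; call once at init time. */
    static int
    example_seqn_register(void)
    {
        static const struct rte_mbuf_dynfield desc = {
            .name = "example_seqn_dynfield",  /* hypothetical field name */
            .size = sizeof(uint32_t),
            .align = __alignof__(uint32_t),
        };

        example_seqn_offset = rte_mbuf_dynfield_register(&desc);
        if (example_seqn_offset < 0)
            return -rte_errno;  /* rte_errno is set by the mbuf library */
        return 0;
    }

    /* Accessor replacing direct use of the deprecated mbuf->seqn field. */
    static inline uint32_t *
    example_seqn(struct rte_mbuf *m)
    {
        return RTE_MBUF_DYNFIELD(m, example_seqn_offset, uint32_t *);
    }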
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 drivers/bus/dpaa/dpaa_bus.c        | 16 ++++++++++++++++
 drivers/bus/dpaa/rte_dpaa_bus.h    | 28 ++++++++++++++++++++++++++++
 drivers/bus/dpaa/version.map       |  1 +
 drivers/crypto/dpaa_sec/dpaa_sec.c |  6 +++---
 drivers/event/dpaa/dpaa_eventdev.c |  6 +++---
 drivers/net/dpaa/dpaa_ethdev.h     |  7 -------
 drivers/net/dpaa/dpaa_rxtx.c       |  6 +++---
 7 files changed, 54 insertions(+), 16 deletions(-)

diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index c94c72106f..ece6a4c424 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -32,6 +32,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -55,6 +56,9 @@ unsigned int dpaa_svr_family;
 
 RTE_DEFINE_PER_LCORE(struct dpaa_portal *, dpaa_io);
 
+#define DPAA_SEQN_DYNFIELD_NAME "dpaa_seqn_dynfield"
+int dpaa_seqn_dynfield_offset = -1;
+
 struct fm_eth_port_cfg *
 dpaa_get_eth_port_cfg(int dev_id)
 {
@@ -251,6 +255,11 @@ dpaa_clean_device_list(void)
 int rte_dpaa_portal_init(void *arg)
 {
+    static const struct rte_mbuf_dynfield dpaa_seqn_dynfield_desc = {
+        .name = DPAA_SEQN_DYNFIELD_NAME,
+        .size = sizeof(dpaa_seqn_t),
+        .align = __alignof__(dpaa_seqn_t),
+    };
     unsigned int cpu, lcore = rte_lcore_id();
     int ret;
 
@@ -264,6 +273,13 @@ int rte_dpaa_portal_init(void *arg)
 
     cpu = rte_lcore_to_cpu_id(lcore);
 
+    dpaa_seqn_dynfield_offset =
+        rte_mbuf_dynfield_register(&dpaa_seqn_dynfield_desc);
+    if (dpaa_seqn_dynfield_offset < 0) {
+        DPAA_BUS_LOG(ERR, "Failed to register mbuf field for dpaa sequence number\n");
+        return -rte_errno;
+    }
+
     /* Initialise bman thread portals */
     ret = bman_thread_init();
     if (ret) {
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index fdaa63a09b..48d5cf4625 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -7,6 +7,7 @@
 #define __RTE_DPAA_BUS_H__
 
 #include
+#include
 
 #include
 #include
@@ -16,6 +17,33 @@
 #include
 #include
 
+/* This sequence number field is used to store the event entry index for
+ * driver-specific usage. For parallel mode queues, an invalid index is
+ * set; for atomic mode queues, a valid value ranging from 1 to 16 is set.
+ */
+#define DPAA_INVALID_MBUF_SEQN 0
+
+typedef uint32_t dpaa_seqn_t;
+extern int dpaa_seqn_dynfield_offset;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Read the dpaa sequence number from an mbuf.
+ *
+ * @param mbuf Structure to read from.
+ * @return pointer to the dpaa sequence number.
+ */
+__rte_experimental
+static inline dpaa_seqn_t *
+dpaa_seqn(struct rte_mbuf *mbuf)
+{
+    return RTE_MBUF_DYNFIELD(mbuf, dpaa_seqn_dynfield_offset,
+        dpaa_seqn_t *);
+}
+
 #define DPAA_MEMPOOL_OPS_NAME "dpaa"
 
 #define DEV_TO_DPAA_DEVICE(ptr) \
diff --git a/drivers/bus/dpaa/version.map b/drivers/bus/dpaa/version.map
index 9bd2601213..fe4f9ac5aa 100644
--- a/drivers/bus/dpaa/version.map
+++ b/drivers/bus/dpaa/version.map
@@ -14,6 +14,7 @@ INTERNAL {
     dpaa_get_qm_channel_pool;
     dpaa_get_link_status;
     dpaa_restart_link_autoneg;
+    dpaa_seqn_dynfield_offset;
     dpaa_update_link_speed;
     dpaa_intr_disable;
     dpaa_intr_enable;
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 55f457ac9a..44c742738f 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -1721,8 +1721,8 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
             DPAA_SEC_BURST : nb_ops;
         for (loop = 0; loop < frames_to_send; loop++) {
             op = *(ops++);
-            if (op->sym->m_src->seqn != 0) {
-                index = op->sym->m_src->seqn - 1;
+            if (*dpaa_seqn(op->sym->m_src) != 0) {
+                index = *dpaa_seqn(op->sym->m_src) - 1;
                 if (DPAA_PER_LCORE_DQRR_HELD & (1 << index)) {
                     /* QM_EQCR_DCA_IDXMASK = 0x0f */
                     flags[loop] = ((index & 0x0f) << 8);
@@ -3212,7 +3212,7 @@ dpaa_sec_process_atomic_event(void *event,
     DPAA_PER_LCORE_DQRR_HELD |= 1 << index;
     DPAA_PER_LCORE_DQRR_MBUF(index) = ctx->op->sym->m_src;
     ev->impl_opaque = index + 1;
-    ctx->op->sym->m_src->seqn = (uint32_t)index + 1;
+    *dpaa_seqn(ctx->op->sym->m_src) = (uint32_t)index + 1;
     *bufs = (void *)ctx->op;
     rte_mempool_put(ctx->ctx_pool, (void *)ctx);
diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
index 07cd079768..01ddd0eb63 100644
--- a/drivers/event/dpaa/dpaa_eventdev.c
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -99,7 +99,7 @@ dpaa_event_enqueue_burst(void *port, const struct rte_event ev[],
         case RTE_EVENT_OP_RELEASE:
             qman_dca_index(ev[i].impl_opaque, 0);
             mbuf = DPAA_PER_LCORE_DQRR_MBUF(i);
-            mbuf->seqn = DPAA_INVALID_MBUF_SEQN;
+            *dpaa_seqn(mbuf) = DPAA_INVALID_MBUF_SEQN;
             DPAA_PER_LCORE_DQRR_HELD &= ~(1 << i);
             DPAA_PER_LCORE_DQRR_SIZE--;
             break;
@@ -206,7 +206,7 @@ dpaa_event_dequeue_burst(void *port, struct rte_event ev[],
             if (DPAA_PER_LCORE_DQRR_HELD & (1 << i)) {
                 qman_dca_index(i, 0);
                 mbuf = DPAA_PER_LCORE_DQRR_MBUF(i);
-                mbuf->seqn = DPAA_INVALID_MBUF_SEQN;
+                *dpaa_seqn(mbuf) = DPAA_INVALID_MBUF_SEQN;
                 DPAA_PER_LCORE_DQRR_HELD &= ~(1 << i);
                 DPAA_PER_LCORE_DQRR_SIZE--;
             }
@@ -276,7 +276,7 @@ dpaa_event_dequeue_burst_intr(void *port, struct rte_event ev[],
             if (DPAA_PER_LCORE_DQRR_HELD & (1 << i)) {
                 qman_dca_index(i, 0);
                 mbuf = DPAA_PER_LCORE_DQRR_MBUF(i);
-                mbuf->seqn = DPAA_INVALID_MBUF_SEQN;
+                *dpaa_seqn(mbuf) = DPAA_INVALID_MBUF_SEQN;
                 DPAA_PER_LCORE_DQRR_HELD &= ~(1 << i);
                 DPAA_PER_LCORE_DQRR_SIZE--;
             }
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 1b8e120e8f..659bceb467 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -22,13 +22,6 @@
 #define DPAA_MBUF_HW_ANNOTATION 64
 #define DPAA_FD_PTA_SIZE 64
 
-/* mbuf->seqn will be used to store event entry index for
- * driver specific usage. For parallel mode queues, invalid
- * index will be set and for atomic mode queues, valid value
- * ranging from 1 to 16.
- */
-#define DPAA_INVALID_MBUF_SEQN 0
-
 /* we will re-use the HEADROOM for annotation in RX */
 #define DPAA_HW_BUF_RESERVE 0
 #define DPAA_PACKET_LAYOUT_ALIGN 64
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index e4f012c233..e2459d9b99 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -649,7 +649,7 @@ dpaa_rx_cb_parallel(void *event,
     ev->queue_id = fq->ev.queue_id;
     ev->priority = fq->ev.priority;
     ev->impl_opaque = (uint8_t)DPAA_INVALID_MBUF_SEQN;
-    mbuf->seqn = DPAA_INVALID_MBUF_SEQN;
+    *dpaa_seqn(mbuf) = DPAA_INVALID_MBUF_SEQN;
     *bufs = mbuf;
 
     return qman_cb_dqrr_consume;
@@ -683,7 +683,7 @@ dpaa_rx_cb_atomic(void *event,
     DPAA_PER_LCORE_DQRR_HELD |= 1 << index;
     DPAA_PER_LCORE_DQRR_MBUF(index) = mbuf;
     ev->impl_opaque = index + 1;
-    mbuf->seqn = (uint32_t)index + 1;
+    *dpaa_seqn(mbuf) = (uint32_t)index + 1;
     *bufs = mbuf;
 
     return qman_cb_dqrr_defer;
@@ -1078,7 +1078,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
             if (dpaa_svr_family == SVR_LS1043A_FAMILY &&
                 (mbuf->data_off & 0x7F) != 0x0)
                 realloc_mbuf = 1;
-            seqn = mbuf->seqn;
+            seqn = *dpaa_seqn(mbuf);
             if (seqn != DPAA_INVALID_MBUF_SEQN) {
                 index = seqn - 1;
                 if (DPAA_PER_LCORE_DQRR_HELD & (1 << index)) {