From patchwork Wed Oct 28 12:20:05 2020
X-Patchwork-Submitter: David Marchand
X-Patchwork-Id: 82642
X-Patchwork-Delegate: thomas@monjalon.net
From: David Marchand
To: dev@dpdk.org
Cc: Hemant Agrawal, Nipun Gupta
Date: Wed, 28 Oct 2020 13:20:05 +0100
Message-Id: <20201028122013.31104-2-david.marchand@redhat.com>
In-Reply-To: <20201028122013.31104-1-david.marchand@redhat.com>
References: <20201027221343.28551-1-david.marchand@redhat.com> <20201028122013.31104-1-david.marchand@redhat.com>
Subject: [dpdk-dev] [PATCH v2 1/9] event/dpaa2: remove dead code

This code has never been used since its introduction.
Fixes: 653242c3375a ("event/dpaa2: add self test")

Signed-off-by: David Marchand
Acked-by: Nipun Gupta
---
 drivers/event/dpaa2/dpaa2_eventdev_selftest.c | 13 +------------
 1 file changed, 1 insertion(+), 12 deletions(-)

diff --git a/drivers/event/dpaa2/dpaa2_eventdev_selftest.c b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c
index b1f3891484..5447db8a8a 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev_selftest.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c
@@ -47,17 +47,6 @@ struct event_attr {
 	uint8_t seq;
 };
 
-static uint32_t seqn_list_index;
-static int seqn_list[NUM_PACKETS];
-
-static void
-seqn_list_init(void)
-{
-	RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS);
-	memset(seqn_list, 0, sizeof(seqn_list));
-	seqn_list_index = 0;
-}
-
 struct test_core_param {
 	rte_atomic32_t *total_events;
 	uint64_t dequeue_tmo_ticks;
@@ -516,7 +505,7 @@ launch_workers_and_wait(int (*main_worker)(void *),
 		return 0;
 
 	rte_atomic32_set(&atomic_total_events, total_events);
-	seqn_list_init();
+	RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS);
 
 	param = malloc(sizeof(struct test_core_param) * nb_workers);
 	if (!param)
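One note on the check this patch keeps: RTE_BUILD_BUG_ON() is DPDK's compile-time assertion (it breaks the build when its condition is true), so moving it out of the removed seqn_list_init() does not weaken the NUM_PACKETS vs. MAX_EVENTS sizing check. A minimal sketch of how such a check behaves, with illustrative constant values rather than the driver's:

#include <rte_common.h>

#define MAX_EVENTS  (16 * 1024)  /* illustrative value only */
#define NUM_PACKETS (1 << 18)    /* illustrative value only */

int
main(void)
{
	/* Compilation fails if the condition is true, i.e. if the per-packet
	 * sequence storage could not hold one entry per in-flight event. */
	RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS);
	return 0;
}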
From patchwork Wed Oct 28 12:20:06 2020
X-Patchwork-Submitter: David Marchand
X-Patchwork-Id: 82644
X-Patchwork-Delegate: thomas@monjalon.net
From: David Marchand
To: dev@dpdk.org
Cc: Fan Zhang, Sergio Gonzalez Monroy, Declan Doherty
Date: Wed, 28 Oct 2020 13:20:06 +0100
Message-Id: <20201028122013.31104-3-david.marchand@redhat.com>
In-Reply-To: <20201028122013.31104-1-david.marchand@redhat.com>
References: <20201027221343.28551-1-david.marchand@redhat.com> <20201028122013.31104-1-david.marchand@redhat.com>
Subject: [dpdk-dev] [PATCH v2 2/9] crypto/scheduler: remove unused internal seqn

This field has been left behind after dropping its use.

Fixes: 8a48e039432b ("crypto/scheduler: optimize crypto op ordering")

Signed-off-by: David Marchand
---
 drivers/crypto/scheduler/scheduler_pmd_private.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h
index adb4eb0632..4d33b9ab44 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_private.h
+++ b/drivers/crypto/scheduler/scheduler_pmd_private.h
@@ -59,7 +59,6 @@ struct scheduler_qp_ctx {
 	uint32_t max_nb_objs;
 
 	struct rte_ring *order_ring;
-	uint32_t seqn;
 } __rte_cache_aligned;
From patchwork Wed Oct 28 12:20:07 2020
X-Patchwork-Submitter: David Marchand
X-Patchwork-Id: 82645
X-Patchwork-Delegate: thomas@monjalon.net
From: David Marchand
To: dev@dpdk.org
Cc: Shepard Siegel, Ed Czeck, John Miller
Date: Wed, 28 Oct 2020 13:20:07 +0100
Message-Id: <20201028122013.31104-4-david.marchand@redhat.com>
In-Reply-To: <20201028122013.31104-1-david.marchand@redhat.com>
References: <20201027221343.28551-1-david.marchand@redhat.com> <20201028122013.31104-1-david.marchand@redhat.com>
Subject: [dpdk-dev] [PATCH v2 3/9] net/ark: remove use of seqn for debug

The seqn mbuf field is deprecated. It is currently hacked in for debug
purposes; it could be changed to a dynamic field, but I see little value
in the debug info it offers.

Signed-off-by: David Marchand
---
 drivers/net/ark/ark_ethdev_rx.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
index 825b4791be..c24cc00e2f 100644
--- a/drivers/net/ark/ark_ethdev_rx.c
+++ b/drivers/net/ark/ark_ethdev_rx.c
@@ -302,8 +302,6 @@ eth_ark_recv_pkts(void *rx_queue,
 			mbuf->pkt_len = 63;
 			meta->pkt_len = 63;
 		}
-		/* seqn is only set under debug */
-		mbuf->seqn = cons_index;
 	}
 
 	if (unlikely(meta->pkt_len > ARK_RX_MAX_NOCHAIN))
@@ -360,8 +358,6 @@ eth_ark_rx_jumbo(struct ark_rx_queue *queue,
 		mbuf_prev = mbuf;
 		mbuf->data_len = data_len;
 		mbuf->data_off = 0;
-		if (ARK_DEBUG_CORE)
-			mbuf->seqn = cons_index;	/* for debug only */
 
 		cons_index += 1;
 	}
@@ -667,8 +663,8 @@ dump_mbuf_data(struct rte_mbuf *mbuf, uint16_t lo, uint16_t hi)
 {
 	uint16_t i, j;
 
-	ARK_PMD_LOG(DEBUG, " MBUF: %p len %d, off: %d, seq: %" PRIU32 "\n",
-		    mbuf, mbuf->pkt_len, mbuf->data_off, mbuf->seqn);
+	ARK_PMD_LOG(DEBUG, " MBUF: %p len %d, off: %d\n",
+		    mbuf, mbuf->pkt_len, mbuf->data_off);
 
 	for (i = lo; i < hi; i += 16) {
 		uint8_t *dp = RTE_PTR_ADD(mbuf->buf_addr, i);
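The remaining patches all follow the pattern hinted at here: instead of the fixed mbuf->seqn field, each user registers a named dynamic mbuf field once and reads or writes it through a small inline accessor. A minimal, self-contained sketch of that pattern, using illustrative names (my_seqn) rather than the accessors the patches actually add:

#include <rte_errno.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

typedef uint32_t my_seqn_t;

/* Offset of the dynamic field inside struct rte_mbuf, resolved at run time. */
static int my_seqn_offset = -1;

/* Register (or look up) the named field; done once, after EAL init. */
static int
my_seqn_register(void)
{
	static const struct rte_mbuf_dynfield desc = {
		.name = "example_my_seqn_dynfield",
		.size = sizeof(my_seqn_t),
		.align = __alignof__(my_seqn_t),
	};

	my_seqn_offset = rte_mbuf_dynfield_register(&desc);
	if (my_seqn_offset < 0)
		return -rte_errno;
	return 0;
}

/* Accessor mirroring the rte_reorder_seqn()/dpaa_seqn() helpers below. */
static inline my_seqn_t *
my_seqn(struct rte_mbuf *m)
{
	return RTE_MBUF_DYNFIELD(m, my_seqn_offset, my_seqn_t *);
}

/* Usage: tag a burst of mbufs with consecutive sequence numbers. */
static void
tag_burst(struct rte_mbuf **pkts, uint16_t nb_pkts, my_seqn_t *next)
{
	uint16_t i;

	for (i = 0; i < nb_pkts; i++)
		*my_seqn(pkts[i]) = (*next)++;
}

Note how each patch also exports its *_dynfield_offset variable in the library's or bus driver's version.map: the accessor is a static inline compiled into callers, so the offset symbol must be visible to them at run time.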
From patchwork Wed Oct 28 12:20:08 2020
X-Patchwork-Submitter: David Marchand
X-Patchwork-Id: 82646
X-Patchwork-Delegate: thomas@monjalon.net
From: David Marchand
To: dev@dpdk.org
Cc: Reshma Pattan, Ray Kinsella, Neil Horman
Date: Wed, 28 Oct 2020 13:20:08 +0100
Message-Id: <20201028122013.31104-5-david.marchand@redhat.com>
In-Reply-To: <20201028122013.31104-1-david.marchand@redhat.com>
References: <20201027221343.28551-1-david.marchand@redhat.com> <20201028122013.31104-1-david.marchand@redhat.com>
Subject: [dpdk-dev] [PATCH v2 4/9] reorder: switch sequence number to dynamic mbuf field

The reorder library stored sequence numbers in the deprecated mbuf field
seqn. They are moved to a dynamic mbuf field in order to allow the removal
of seqn.

Signed-off-by: David Marchand
Reviewed-by: Andrew Rybchenko
---
 app/test/test_reorder.c         |  8 ++++----
 examples/packet_ordering/main.c |  2 +-
 lib/librte_reorder/rte_reorder.c | 23 ++++++++++++++++++++---
 lib/librte_reorder/rte_reorder.h | 21 +++++++++++++++++++++
 lib/librte_reorder/version.map   |  6 ++++++
 5 files changed, 52 insertions(+), 8 deletions(-)

diff --git a/app/test/test_reorder.c b/app/test/test_reorder.c
index 58fa9c71b5..1c4226da65 100644
--- a/app/test/test_reorder.c
+++ b/app/test/test_reorder.c
@@ -149,7 +149,7 @@ test_reorder_insert(void)
 	for (i = 0; i < num_bufs; i++) {
 		bufs[i] = rte_pktmbuf_alloc(p);
 		TEST_ASSERT_NOT_NULL(bufs[i], "Packet allocation failed\n");
-		bufs[i]->seqn = i;
+		*rte_reorder_seqn(bufs[i]) = i;
 	}
 
 	/* This should fill up order buffer:
@@ -183,7 +183,7 @@ test_reorder_insert(void)
 	bufs[4] = NULL;
 
 	/* early packet from current sequence window - full ready buffer */
-	bufs[5]->seqn = 2 * size;
+	*rte_reorder_seqn(bufs[5]) = 2 * size;
 	ret = rte_reorder_insert(b, bufs[5]);
 	if (!((ret == -1) && (rte_errno == ENOSPC))) {
 		printf("%s:%d: No error inserting early packet with full ready buffer\n",
@@ -194,7 +194,7 @@ test_reorder_insert(void)
 	bufs[5] = NULL;
 
 	/* late packet */
-	bufs[6]->seqn = 3 * size;
+	*rte_reorder_seqn(bufs[6]) = 3 * size;
 	ret = rte_reorder_insert(b, bufs[6]);
 	if (!((ret == -1) && (rte_errno == ERANGE))) {
 		printf("%s:%d: No error inserting late packet with seqn:"
@@ -250,7 +250,7 @@ test_reorder_drain(void)
 	for (i = 0; i < num_bufs; i++) {
 		bufs[i] = rte_pktmbuf_alloc(p);
 		TEST_ASSERT_NOT_NULL(bufs[i], "Packet allocation failed\n");
-		bufs[i]->seqn = i;
+		*rte_reorder_seqn(bufs[i]) = i;
 	}
 
 	/* Insert packet with seqn 1:
diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c
index a79d77a321..4bea1982d5 100644
--- a/examples/packet_ordering/main.c
+++ b/examples/packet_ordering/main.c
@@ -451,7 +451,7 @@ rx_thread(struct rte_ring *ring_out)
 
 		/* mark sequence number */
 		for (i = 0; i < nb_rx_pkts; )
-			pkts[i++]->seqn = seqn++;
+			*rte_reorder_seqn(pkts[i++]) = seqn++;
 
 		/* enqueue
to rx_to_workers ring */ ret = rte_ring_enqueue_burst(ring_out, diff --git a/lib/librte_reorder/rte_reorder.c b/lib/librte_reorder/rte_reorder.c index 3c9f0e2d08..9445853b79 100644 --- a/lib/librte_reorder/rte_reorder.c +++ b/lib/librte_reorder/rte_reorder.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include #include @@ -29,6 +30,9 @@ EAL_REGISTER_TAILQ(rte_reorder_tailq) /* Macros for printing using RTE_LOG */ #define RTE_LOGTYPE_REORDER RTE_LOGTYPE_USER1 +#define RTE_REORDER_SEQN_DYNFIELD_NAME "rte_reorder_seqn_dynfield" +int rte_reorder_seqn_dynfield_offset = -1; + /* A generic circular buffer */ struct cir_buffer { unsigned int size; /**< Number of entries that can be stored */ @@ -103,6 +107,11 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size) struct rte_reorder_list *reorder_list; const unsigned int bufsize = sizeof(struct rte_reorder_buffer) + (2 * size * sizeof(struct rte_mbuf *)); + static const struct rte_mbuf_dynfield reorder_seqn_dynfield_desc = { + .name = RTE_REORDER_SEQN_DYNFIELD_NAME, + .size = sizeof(rte_reorder_seqn_t), + .align = __alignof__(rte_reorder_seqn_t), + }; reorder_list = RTE_TAILQ_CAST(rte_reorder_tailq.head, rte_reorder_list); @@ -120,6 +129,14 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size) return NULL; } + rte_reorder_seqn_dynfield_offset = + rte_mbuf_dynfield_register(&reorder_seqn_dynfield_desc); + if (rte_reorder_seqn_dynfield_offset < 0) { + RTE_LOG(ERR, REORDER, "Failed to register mbuf field for reorder sequence number\n"); + rte_errno = ENOMEM; + return NULL; + } + rte_mcfg_tailq_write_lock(); /* guarantee there's no existing */ @@ -310,7 +327,7 @@ rte_reorder_insert(struct rte_reorder_buffer *b, struct rte_mbuf *mbuf) order_buf = &b->order_buf; if (!b->is_initialized) { - b->min_seqn = mbuf->seqn; + b->min_seqn = *rte_reorder_seqn(mbuf); b->is_initialized = 1; } @@ -322,7 +339,7 @@ rte_reorder_insert(struct rte_reorder_buffer *b, struct rte_mbuf *mbuf) * mbuf_seqn = 0x0010 * offset = 0x0010 - 0xFFFD = 0x13 */ - offset = mbuf->seqn - b->min_seqn; + offset = *rte_reorder_seqn(mbuf) - b->min_seqn; /* * action to take depends on offset. @@ -352,7 +369,7 @@ rte_reorder_insert(struct rte_reorder_buffer *b, struct rte_mbuf *mbuf) rte_errno = ENOSPC; return -1; } - offset = mbuf->seqn - b->min_seqn; + offset = *rte_reorder_seqn(mbuf) - b->min_seqn; position = (order_buf->head + offset) & order_buf->mask; order_buf->entries[position] = mbuf; } else { diff --git a/lib/librte_reorder/rte_reorder.h b/lib/librte_reorder/rte_reorder.h index 6d39710088..9de0240374 100644 --- a/lib/librte_reorder/rte_reorder.h +++ b/lib/librte_reorder/rte_reorder.h @@ -16,6 +16,7 @@ */ #include +#include #ifdef __cplusplus extern "C" { @@ -23,6 +24,26 @@ extern "C" { struct rte_reorder_buffer; +typedef uint32_t rte_reorder_seqn_t; +extern int rte_reorder_seqn_dynfield_offset; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Read reorder sequence number from mbuf. + * + * @param mbuf Structure to read from. + * @return pointer to reorder sequence number. 
+ */
+__rte_experimental
+static inline rte_reorder_seqn_t *
+rte_reorder_seqn(struct rte_mbuf *mbuf)
+{
+	return RTE_MBUF_DYNFIELD(mbuf, rte_reorder_seqn_dynfield_offset,
+		rte_reorder_seqn_t *);
+}
+
 /**
  * Create a new reorder buffer instance
  *
diff --git a/lib/librte_reorder/version.map b/lib/librte_reorder/version.map
index 8c0220d324..d902a7fa12 100644
--- a/lib/librte_reorder/version.map
+++ b/lib/librte_reorder/version.map
@@ -11,3 +11,9 @@ DPDK_21 {
 
 	local: *;
 };
+
+EXPERIMENTAL {
+	global:
+
+	rte_reorder_seqn_dynfield_offset;
+};

From patchwork Wed Oct 28 12:20:09 2020
X-Patchwork-Submitter: David Marchand
X-Patchwork-Id: 82647
X-Patchwork-Delegate: thomas@monjalon.net
From: David Marchand
To: dev@dpdk.org
Cc: Hemant Agrawal, Sachin Saxena, Ray Kinsella, Neil Horman, Akhil Goyal, Nipun Gupta
Date: Wed, 28 Oct 2020 13:20:09 +0100
Message-Id: <20201028122013.31104-6-david.marchand@redhat.com>
In-Reply-To: <20201028122013.31104-1-david.marchand@redhat.com>
References: <20201027221343.28551-1-david.marchand@redhat.com> <20201028122013.31104-1-david.marchand@redhat.com>
Subject: [dpdk-dev] [PATCH v2 5/9] dpaa: switch sequence number to dynamic mbuf field
Sender:
"dev" The dpaa drivers have been hacking the deprecated field seqn for internal features. It is moved to a dynamic mbuf field in order to allow removal of seqn. Signed-off-by: David Marchand --- drivers/bus/dpaa/dpaa_bus.c | 16 ++++++++++++++++ drivers/bus/dpaa/rte_dpaa_bus.h | 28 ++++++++++++++++++++++++++++ drivers/bus/dpaa/version.map | 1 + drivers/crypto/dpaa_sec/dpaa_sec.c | 6 +++--- drivers/event/dpaa/dpaa_eventdev.c | 6 +++--- drivers/net/dpaa/dpaa_ethdev.h | 7 ------- drivers/net/dpaa/dpaa_rxtx.c | 6 +++--- 7 files changed, 54 insertions(+), 16 deletions(-) diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c index c94c72106f..ece6a4c424 100644 --- a/drivers/bus/dpaa/dpaa_bus.c +++ b/drivers/bus/dpaa/dpaa_bus.c @@ -32,6 +32,7 @@ #include #include #include +#include #include #include @@ -55,6 +56,9 @@ unsigned int dpaa_svr_family; RTE_DEFINE_PER_LCORE(struct dpaa_portal *, dpaa_io); +#define DPAA_SEQN_DYNFIELD_NAME "dpaa_seqn_dynfield" +int dpaa_seqn_dynfield_offset = -1; + struct fm_eth_port_cfg * dpaa_get_eth_port_cfg(int dev_id) { @@ -251,6 +255,11 @@ dpaa_clean_device_list(void) int rte_dpaa_portal_init(void *arg) { + static const struct rte_mbuf_dynfield dpaa_seqn_dynfield_desc = { + .name = DPAA_SEQN_DYNFIELD_NAME, + .size = sizeof(dpaa_seqn_t), + .align = __alignof__(dpaa_seqn_t), + }; unsigned int cpu, lcore = rte_lcore_id(); int ret; @@ -264,6 +273,13 @@ int rte_dpaa_portal_init(void *arg) cpu = rte_lcore_to_cpu_id(lcore); + dpaa_seqn_dynfield_offset = + rte_mbuf_dynfield_register(&dpaa_seqn_dynfield_desc); + if (dpaa_seqn_dynfield_offset < 0) { + DPAA_BUS_LOG(ERR, "Failed to register mbuf field for dpaa sequence number\n"); + return -rte_errno; + } + /* Initialise bman thread portals */ ret = bman_thread_init(); if (ret) { diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h index fdaa63a09b..48d5cf4625 100644 --- a/drivers/bus/dpaa/rte_dpaa_bus.h +++ b/drivers/bus/dpaa/rte_dpaa_bus.h @@ -7,6 +7,7 @@ #define __RTE_DPAA_BUS_H__ #include +#include #include #include @@ -16,6 +17,33 @@ #include #include +/* This sequence number field is used to store event entry index for + * driver specific usage. For parallel mode queues, invalid + * index will be set and for atomic mode queues, valid value + * ranging from 1 to 16. + */ +#define DPAA_INVALID_MBUF_SEQN 0 + +typedef uint32_t dpaa_seqn_t; +extern int dpaa_seqn_dynfield_offset; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Read dpaa sequence number from mbuf. + * + * @param mbuf Structure to read from. + * @return pointer to dpaa sequence number. 
+ */ +__rte_experimental +static inline dpaa_seqn_t * +dpaa_seqn(struct rte_mbuf *mbuf) +{ + return RTE_MBUF_DYNFIELD(mbuf, dpaa_seqn_dynfield_offset, + dpaa_seqn_t *); +} + #define DPAA_MEMPOOL_OPS_NAME "dpaa" #define DEV_TO_DPAA_DEVICE(ptr) \ diff --git a/drivers/bus/dpaa/version.map b/drivers/bus/dpaa/version.map index 9bd2601213..fe4f9ac5aa 100644 --- a/drivers/bus/dpaa/version.map +++ b/drivers/bus/dpaa/version.map @@ -14,6 +14,7 @@ INTERNAL { dpaa_get_qm_channel_pool; dpaa_get_link_status; dpaa_restart_link_autoneg; + dpaa_seqn_dynfield_offset; dpaa_update_link_speed; dpaa_intr_disable; dpaa_intr_enable; diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c index 55f457ac9a..44c742738f 100644 --- a/drivers/crypto/dpaa_sec/dpaa_sec.c +++ b/drivers/crypto/dpaa_sec/dpaa_sec.c @@ -1721,8 +1721,8 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops, DPAA_SEC_BURST : nb_ops; for (loop = 0; loop < frames_to_send; loop++) { op = *(ops++); - if (op->sym->m_src->seqn != 0) { - index = op->sym->m_src->seqn - 1; + if (*dpaa_seqn(op->sym->m_src) != 0) { + index = *dpaa_seqn(op->sym->m_src) - 1; if (DPAA_PER_LCORE_DQRR_HELD & (1 << index)) { /* QM_EQCR_DCA_IDXMASK = 0x0f */ flags[loop] = ((index & 0x0f) << 8); @@ -3212,7 +3212,7 @@ dpaa_sec_process_atomic_event(void *event, DPAA_PER_LCORE_DQRR_HELD |= 1 << index; DPAA_PER_LCORE_DQRR_MBUF(index) = ctx->op->sym->m_src; ev->impl_opaque = index + 1; - ctx->op->sym->m_src->seqn = (uint32_t)index + 1; + *dpaa_seqn(ctx->op->sym->m_src) = (uint32_t)index + 1; *bufs = (void *)ctx->op; rte_mempool_put(ctx->ctx_pool, (void *)ctx); diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c index 07cd079768..01ddd0eb63 100644 --- a/drivers/event/dpaa/dpaa_eventdev.c +++ b/drivers/event/dpaa/dpaa_eventdev.c @@ -99,7 +99,7 @@ dpaa_event_enqueue_burst(void *port, const struct rte_event ev[], case RTE_EVENT_OP_RELEASE: qman_dca_index(ev[i].impl_opaque, 0); mbuf = DPAA_PER_LCORE_DQRR_MBUF(i); - mbuf->seqn = DPAA_INVALID_MBUF_SEQN; + *dpaa_seqn(mbuf) = DPAA_INVALID_MBUF_SEQN; DPAA_PER_LCORE_DQRR_HELD &= ~(1 << i); DPAA_PER_LCORE_DQRR_SIZE--; break; @@ -206,7 +206,7 @@ dpaa_event_dequeue_burst(void *port, struct rte_event ev[], if (DPAA_PER_LCORE_DQRR_HELD & (1 << i)) { qman_dca_index(i, 0); mbuf = DPAA_PER_LCORE_DQRR_MBUF(i); - mbuf->seqn = DPAA_INVALID_MBUF_SEQN; + *dpaa_seqn(mbuf) = DPAA_INVALID_MBUF_SEQN; DPAA_PER_LCORE_DQRR_HELD &= ~(1 << i); DPAA_PER_LCORE_DQRR_SIZE--; } @@ -276,7 +276,7 @@ dpaa_event_dequeue_burst_intr(void *port, struct rte_event ev[], if (DPAA_PER_LCORE_DQRR_HELD & (1 << i)) { qman_dca_index(i, 0); mbuf = DPAA_PER_LCORE_DQRR_MBUF(i); - mbuf->seqn = DPAA_INVALID_MBUF_SEQN; + *dpaa_seqn(mbuf) = DPAA_INVALID_MBUF_SEQN; DPAA_PER_LCORE_DQRR_HELD &= ~(1 << i); DPAA_PER_LCORE_DQRR_SIZE--; } diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h index 1b8e120e8f..659bceb467 100644 --- a/drivers/net/dpaa/dpaa_ethdev.h +++ b/drivers/net/dpaa/dpaa_ethdev.h @@ -22,13 +22,6 @@ #define DPAA_MBUF_HW_ANNOTATION 64 #define DPAA_FD_PTA_SIZE 64 -/* mbuf->seqn will be used to store event entry index for - * driver specific usage. For parallel mode queues, invalid - * index will be set and for atomic mode queues, valid value - * ranging from 1 to 16. 
- */ -#define DPAA_INVALID_MBUF_SEQN 0 - /* we will re-use the HEADROOM for annotation in RX */ #define DPAA_HW_BUF_RESERVE 0 #define DPAA_PACKET_LAYOUT_ALIGN 64 diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c index e4f012c233..e2459d9b99 100644 --- a/drivers/net/dpaa/dpaa_rxtx.c +++ b/drivers/net/dpaa/dpaa_rxtx.c @@ -649,7 +649,7 @@ dpaa_rx_cb_parallel(void *event, ev->queue_id = fq->ev.queue_id; ev->priority = fq->ev.priority; ev->impl_opaque = (uint8_t)DPAA_INVALID_MBUF_SEQN; - mbuf->seqn = DPAA_INVALID_MBUF_SEQN; + *dpaa_seqn(mbuf) = DPAA_INVALID_MBUF_SEQN; *bufs = mbuf; return qman_cb_dqrr_consume; @@ -683,7 +683,7 @@ dpaa_rx_cb_atomic(void *event, DPAA_PER_LCORE_DQRR_HELD |= 1 << index; DPAA_PER_LCORE_DQRR_MBUF(index) = mbuf; ev->impl_opaque = index + 1; - mbuf->seqn = (uint32_t)index + 1; + *dpaa_seqn(mbuf) = (uint32_t)index + 1; *bufs = mbuf; return qman_cb_dqrr_defer; @@ -1078,7 +1078,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) if (dpaa_svr_family == SVR_LS1043A_FAMILY && (mbuf->data_off & 0x7F) != 0x0) realloc_mbuf = 1; - seqn = mbuf->seqn; + seqn = *dpaa_seqn(mbuf); if (seqn != DPAA_INVALID_MBUF_SEQN) { index = seqn - 1; if (DPAA_PER_LCORE_DQRR_HELD & (1 << index)) { From patchwork Wed Oct 28 12:20:10 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 82648 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 31CA0A04DD; Wed, 28 Oct 2020 13:22:31 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 71B1ECAAE; Wed, 28 Oct 2020 13:20:54 +0100 (CET) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [63.128.21.124]) by dpdk.org (Postfix) with ESMTP id 5637BCA86 for ; Wed, 28 Oct 2020 13:20:46 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1603887644; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=RZ4/NRDw/Ftdnvd8KccezUk8f0lIrYKczps0laaaC+c=; b=TmtI4Nce8zTQWYpCA9YLi7OelAD3jjwuvjuByXQaC02jD3cWObQsgr8xaN9eJVgS1MVc2A 8ZLIBErWOWvm7ba8LaDgSKZkuioDcbmIitAqnG48j1bpmhLpwm6bU/ZtGBrAkzzu0pmbbV FUqgKjUiNqWNBYST5ZZoJdy/qGiSwhY= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-86-mmJ3a_InOQCpEoMnS6-K5Q-1; Wed, 28 Oct 2020 08:20:40 -0400 X-MC-Unique: mmJ3a_InOQCpEoMnS6-K5Q-1 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com [10.5.11.22]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 76CD2809DF2; Wed, 28 Oct 2020 12:20:39 +0000 (UTC) Received: from dmarchan.remote.csb (unknown [10.40.192.40]) by smtp.corp.redhat.com (Postfix) with ESMTP id 8C17810013DB; Wed, 28 Oct 2020 12:20:37 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: Hemant Agrawal , Sachin Saxena , Ray Kinsella , Neil Horman , Akhil Goyal , Nipun Gupta Date: Wed, 28 Oct 2020 13:20:10 +0100 Message-Id: <20201028122013.31104-7-david.marchand@redhat.com> In-Reply-To: 
<20201028122013.31104-1-david.marchand@redhat.com> References: <20201027221343.28551-1-david.marchand@redhat.com> <20201028122013.31104-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=david.marchand@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Subject: [dpdk-dev] [PATCH v2 6/9] fslmc: switch sequence number to dynamic mbuf field X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The dpaa2 drivers have been hacking the deprecated field seqn for internal features. It is moved to a dynamic mbuf field in order to allow removal of seqn. Signed-off-by: David Marchand --- drivers/bus/fslmc/fslmc_bus.c | 17 +++++++++++ drivers/bus/fslmc/rte_fslmc.h | 23 +++++++++++++++ drivers/bus/fslmc/version.map | 1 + drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 18 ++++++------ drivers/event/dpaa2/dpaa2_eventdev.c | 9 +++--- drivers/event/dpaa2/dpaa2_eventdev_selftest.c | 4 ++- drivers/mempool/dpaa2/dpaa2_hw_mempool.h | 2 -- drivers/net/dpaa2/dpaa2_rxtx.c | 28 +++++++++---------- 8 files changed, 73 insertions(+), 29 deletions(-) diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c index beb3dd008f..db93669628 100644 --- a/drivers/bus/fslmc/fslmc_bus.c +++ b/drivers/bus/fslmc/fslmc_bus.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include @@ -27,6 +28,9 @@ struct rte_fslmc_bus rte_fslmc_bus; uint8_t dpaa2_virt_mode; +#define DPAA2_SEQN_DYNFIELD_NAME "dpaa2_seqn_dynfield" +int dpaa2_seqn_dynfield_offset = -1; + uint32_t rte_fslmc_get_device_count(enum rte_dpaa2_dev_type device_type) { @@ -374,9 +378,22 @@ rte_fslmc_probe(void) struct rte_dpaa2_device *dev; struct rte_dpaa2_driver *drv; + static const struct rte_mbuf_dynfield dpaa2_seqn_dynfield_desc = { + .name = DPAA2_SEQN_DYNFIELD_NAME, + .size = sizeof(dpaa2_seqn_t), + .align = __alignof__(dpaa2_seqn_t), + }; + if (TAILQ_EMPTY(&rte_fslmc_bus.device_list)) return 0; + dpaa2_seqn_dynfield_offset = + rte_mbuf_dynfield_register(&dpaa2_seqn_dynfield_desc); + if (dpaa2_seqn_dynfield_offset < 0) { + DPAA2_BUS_ERR("Failed to register mbuf field for dpaa sequence number"); + return 0; + } + ret = fslmc_vfio_setup_group(); if (ret) { DPAA2_BUS_ERR("Unable to setup VFIO %d", ret); diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h index 80873fffc9..37d45dffe5 100644 --- a/drivers/bus/fslmc/rte_fslmc.h +++ b/drivers/bus/fslmc/rte_fslmc.h @@ -32,11 +32,34 @@ extern "C" { #include #include #include +#include +#include #include #define FSLMC_OBJECT_MAX_LEN 32 /**< Length of each device on bus */ +#define DPAA2_INVALID_MBUF_SEQN 0 + +typedef uint32_t dpaa2_seqn_t; +extern int dpaa2_seqn_dynfield_offset; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Read dpaa2 sequence number from mbuf. + * + * @param mbuf Structure to read from. + * @return pointer to dpaa2 sequence number. 
+ */ +__rte_experimental +static inline dpaa2_seqn_t * +dpaa2_seqn(struct rte_mbuf *mbuf) +{ + return RTE_MBUF_DYNFIELD(mbuf, dpaa2_seqn_dynfield_offset, + dpaa2_seqn_t *); +} /** Device driver supports link state interrupt */ #define RTE_DPAA2_DRV_INTR_LSC 0x0008 diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map index b169f5228a..f44c1a7988 100644 --- a/drivers/bus/fslmc/version.map +++ b/drivers/bus/fslmc/version.map @@ -19,6 +19,7 @@ INTERNAL { dpaa2_free_eq_descriptors; dpaa2_get_mcp_ptr; dpaa2_io_portal; + dpaa2_seqn_dynfield_offset; dpaa2_svr_family; dpaa2_virt_mode; dpbp_disable; diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c index afcd6bd063..ce1d50ce77 100644 --- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c @@ -1472,13 +1472,15 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops, dpaa2_eqcr_size : nb_ops; for (loop = 0; loop < frames_to_send; loop++) { - if ((*ops)->sym->m_src->seqn) { - uint8_t dqrr_index = (*ops)->sym->m_src->seqn - 1; - - flags[loop] = QBMAN_ENQUEUE_FLAG_DCA | dqrr_index; - DPAA2_PER_LCORE_DQRR_SIZE--; - DPAA2_PER_LCORE_DQRR_HELD &= ~(1 << dqrr_index); - (*ops)->sym->m_src->seqn = DPAA2_INVALID_MBUF_SEQN; + if (*dpaa2_seqn((*ops)->sym->m_src)) { + uint8_t dqrr_index = + *dpaa2_seqn((*ops)->sym->m_src) - 1; + + flags[loop] = QBMAN_ENQUEUE_FLAG_DCA | dqrr_index; + DPAA2_PER_LCORE_DQRR_SIZE--; + DPAA2_PER_LCORE_DQRR_HELD &= ~(1 << dqrr_index); + *dpaa2_seqn((*ops)->sym->m_src) = + DPAA2_INVALID_MBUF_SEQN; } /*Clear the unused FD fields before sending*/ @@ -3714,7 +3716,7 @@ dpaa2_sec_process_atomic_event(struct qbman_swp *swp __rte_unused, ev->event_ptr = sec_fd_to_mbuf(fd); dqrr_index = qbman_get_dqrr_idx(dq); - crypto_op->sym->m_src->seqn = dqrr_index + 1; + *dpaa2_seqn(crypto_op->sym->m_src) = dqrr_index + 1; DPAA2_PER_LCORE_DQRR_SIZE++; DPAA2_PER_LCORE_DQRR_HELD |= 1 << dqrr_index; DPAA2_PER_LCORE_DQRR_MBUF(dqrr_index) = crypto_op->sym->m_src; diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c index 95f03c8b9e..eeb2494bd0 100644 --- a/drivers/event/dpaa2/dpaa2_eventdev.c +++ b/drivers/event/dpaa2/dpaa2_eventdev.c @@ -131,8 +131,9 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[], qbman_eq_desc_set_response(&eqdesc[loop], 0, 0); if (event->sched_type == RTE_SCHED_TYPE_ATOMIC - && event->mbuf->seqn) { - uint8_t dqrr_index = event->mbuf->seqn - 1; + && *dpaa2_seqn(event->mbuf)) { + uint8_t dqrr_index = + *dpaa2_seqn(event->mbuf) - 1; qbman_eq_desc_set_dca(&eqdesc[loop], 1, dqrr_index, 0); @@ -249,7 +250,7 @@ static void dpaa2_eventdev_process_atomic(struct qbman_swp *swp, rte_memcpy(ev, ev_temp, sizeof(struct rte_event)); rte_free(ev_temp); - ev->mbuf->seqn = dqrr_index + 1; + *dpaa2_seqn(ev->mbuf) = dqrr_index + 1; DPAA2_PER_LCORE_DQRR_SIZE++; DPAA2_PER_LCORE_DQRR_HELD |= 1 << dqrr_index; DPAA2_PER_LCORE_DQRR_MBUF(dqrr_index) = ev->mbuf; @@ -314,7 +315,7 @@ dpaa2_eventdev_dequeue_burst(void *port, struct rte_event ev[], if (DPAA2_PER_LCORE_DQRR_HELD & (1 << i)) { qbman_swp_dqrr_idx_consume(swp, i); DPAA2_PER_LCORE_DQRR_SIZE--; - DPAA2_PER_LCORE_DQRR_MBUF(i)->seqn = + *dpaa2_seqn(DPAA2_PER_LCORE_DQRR_MBUF(i)) = DPAA2_INVALID_MBUF_SEQN; } i++; diff --git a/drivers/event/dpaa2/dpaa2_eventdev_selftest.c b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c index 5447db8a8a..cd7311a94d 100644 --- a/drivers/event/dpaa2/dpaa2_eventdev_selftest.c +++ 
b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c @@ -19,6 +19,7 @@ #include #include #include +#include #include "dpaa2_eventdev.h" #include "dpaa2_eventdev_logs.h" @@ -274,7 +275,8 @@ check_excess_events(uint8_t port) valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0); RTE_TEST_ASSERT_SUCCESS(valid_event, - "Unexpected valid event=%d", ev.mbuf->seqn); + "Unexpected valid event=%d", + *dpaa2_seqn(ev.mbuf)); } return 0; } diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h index 53fa1552d1..7c493b28e7 100644 --- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h +++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h @@ -10,8 +10,6 @@ #define DPAA2_MAX_BUF_POOLS 8 -#define DPAA2_INVALID_MBUF_SEQN 0 - struct buf_pool_cfg { void *addr; /**< The address from where DPAA2 will carve out the buffers */ diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c index 4dd1d5f578..6201de4606 100644 --- a/drivers/net/dpaa2/dpaa2_rxtx.c +++ b/drivers/net/dpaa2/dpaa2_rxtx.c @@ -710,7 +710,7 @@ dpaa2_dev_process_atomic_event(struct qbman_swp *swp __rte_unused, ev->mbuf = eth_fd_to_mbuf(fd, rxq->eth_data->port_id); dqrr_index = qbman_get_dqrr_idx(dq); - ev->mbuf->seqn = dqrr_index + 1; + *dpaa2_seqn(ev->mbuf) = dqrr_index + 1; DPAA2_PER_LCORE_DQRR_SIZE++; DPAA2_PER_LCORE_DQRR_HELD |= 1 << dqrr_index; DPAA2_PER_LCORE_DQRR_MBUF(dqrr_index) = ev->mbuf; @@ -736,9 +736,9 @@ dpaa2_dev_process_ordered_event(struct qbman_swp *swp, ev->mbuf = eth_fd_to_mbuf(fd, rxq->eth_data->port_id); - ev->mbuf->seqn = DPAA2_ENQUEUE_FLAG_ORP; - ev->mbuf->seqn |= qbman_result_DQ_odpid(dq) << DPAA2_EQCR_OPRID_SHIFT; - ev->mbuf->seqn |= qbman_result_DQ_seqnum(dq) << DPAA2_EQCR_SEQNUM_SHIFT; + *dpaa2_seqn(ev->mbuf) = DPAA2_ENQUEUE_FLAG_ORP; + *dpaa2_seqn(ev->mbuf) |= qbman_result_DQ_odpid(dq) << DPAA2_EQCR_OPRID_SHIFT; + *dpaa2_seqn(ev->mbuf) |= qbman_result_DQ_seqnum(dq) << DPAA2_EQCR_SEQNUM_SHIFT; qbman_swp_dqrr_consume(swp, dq); } @@ -1063,14 +1063,14 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) dpaa2_eqcr_size : nb_pkts; for (loop = 0; loop < frames_to_send; loop++) { - if ((*bufs)->seqn) { - uint8_t dqrr_index = (*bufs)->seqn - 1; + if (*dpaa2_seqn(*bufs)) { + uint8_t dqrr_index = *dpaa2_seqn(*bufs) - 1; flags[loop] = QBMAN_ENQUEUE_FLAG_DCA | dqrr_index; DPAA2_PER_LCORE_DQRR_SIZE--; DPAA2_PER_LCORE_DQRR_HELD &= ~(1 << dqrr_index); - (*bufs)->seqn = DPAA2_INVALID_MBUF_SEQN; + *dpaa2_seqn(*bufs) = DPAA2_INVALID_MBUF_SEQN; } if (likely(RTE_MBUF_DIRECT(*bufs))) { @@ -1230,10 +1230,10 @@ dpaa2_set_enqueue_descriptor(struct dpaa2_queue *dpaa2_q, qbman_eq_desc_set_fq(eqdesc, dpaa2_q->fqid); - if (m->seqn & DPAA2_ENQUEUE_FLAG_ORP) { - orpid = (m->seqn & DPAA2_EQCR_OPRID_MASK) >> + if (*dpaa2_seqn(m) & DPAA2_ENQUEUE_FLAG_ORP) { + orpid = (*dpaa2_seqn(m) & DPAA2_EQCR_OPRID_MASK) >> DPAA2_EQCR_OPRID_SHIFT; - seqnum = (m->seqn & DPAA2_EQCR_SEQNUM_MASK) >> + seqnum = (*dpaa2_seqn(m) & DPAA2_EQCR_SEQNUM_MASK) >> DPAA2_EQCR_SEQNUM_SHIFT; if (!priv->en_loose_ordered) { @@ -1255,12 +1255,12 @@ dpaa2_set_enqueue_descriptor(struct dpaa2_queue *dpaa2_q, qbman_eq_desc_set_orp(eqdesc, 0, orpid, seqnum, 0); } } else { - dq_idx = m->seqn - 1; + dq_idx = *dpaa2_seqn(m) - 1; qbman_eq_desc_set_dca(eqdesc, 1, dq_idx, 0); DPAA2_PER_LCORE_DQRR_SIZE--; DPAA2_PER_LCORE_DQRR_HELD &= ~(1 << dq_idx); } - m->seqn = DPAA2_INVALID_MBUF_SEQN; + *dpaa2_seqn(m) = DPAA2_INVALID_MBUF_SEQN; } /* Callback to handle sending ordered packets through WRIOP based interface */ @@ -1314,7 +1314,7 
@@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) dpaa2_eqcr_size : nb_pkts; if (!priv->en_loose_ordered) { - if ((*bufs)->seqn & DPAA2_ENQUEUE_FLAG_ORP) { + if (*dpaa2_seqn(*bufs) & DPAA2_ENQUEUE_FLAG_ORP) { num_free_eq_desc = dpaa2_free_eq_descriptors(); if (num_free_eq_desc < frames_to_send) frames_to_send = num_free_eq_desc; @@ -1325,7 +1325,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) /*Prepare enqueue descriptor*/ qbman_eq_desc_clear(&eqdesc[loop]); - if ((*bufs)->seqn) { + if (*dpaa2_seqn(*bufs)) { /* Use only queue 0 for Tx in case of atomic/ * ordered packets as packets can get unordered * when being tranmitted out from the interface From patchwork Wed Oct 28 12:20:11 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 82649 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 54AF3A04DD; Wed, 28 Oct 2020 13:22:47 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id E8AD0CABD; Wed, 28 Oct 2020 13:20:55 +0100 (CET) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [216.205.24.124]) by dpdk.org (Postfix) with ESMTP id 574C3CA38 for ; Wed, 28 Oct 2020 13:20:48 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1603887646; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Y1m9+HDdkd0G+Nfxe6E1GAtjvn9lkOMcQbhCKIlIDxE=; b=YQZJlZzUmlBVi8GKOCb8QyNi8YmAG5QjWUqm3No7y19P7BCC1Jfa/WVhjAGOZs6edGWWsH fkRNtwyLZfZyp+SsOBkxeshL/2CbfXvKaV2D+HvdFMDWw11uFm3AOfyHsc+eGR5Y716t8G tGwwV0bgwKaVoGzs8GCMXGUrMI4hoRM= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-205-NKfMCBIANpiYub1tHuRP8w-1; Wed, 28 Oct 2020 08:20:43 -0400 X-MC-Unique: NKfMCBIANpiYub1tHuRP8w-1 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com [10.5.11.22]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 3324C809DE2; Wed, 28 Oct 2020 12:20:42 +0000 (UTC) Received: from dmarchan.remote.csb (unknown [10.40.192.40]) by smtp.corp.redhat.com (Postfix) with ESMTP id EE45310013DB; Wed, 28 Oct 2020 12:20:39 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: Jerin Jacob , Pavan Nikhilesh , Liang Ma , Peter Mccarthy , Harry van Haaren , Ray Kinsella , Neil Horman Date: Wed, 28 Oct 2020 13:20:11 +0100 Message-Id: <20201028122013.31104-8-david.marchand@redhat.com> In-Reply-To: <20201028122013.31104-1-david.marchand@redhat.com> References: <20201027221343.28551-1-david.marchand@redhat.com> <20201028122013.31104-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=david.marchand@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Subject: [dpdk-dev] [PATCH v2 7/9] eventdev: switch sequence number to dynamic mbuf field X-BeenThere: dev@dpdk.org 
X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The eventdev drivers have been hacking the deprecated field seqn for internal test usage. It is moved to a dynamic mbuf field in order to allow removal of seqn. Signed-off-by: David Marchand --- Changelog since v1: - split eventdev test app changes from this patch (moved to next patch), - moved seqn as a pmd only API, fixed namespace and marked as internal, --- drivers/event/octeontx/ssovf_evdev_selftest.c | 32 +++++++++-------- drivers/event/octeontx2/otx2_evdev_selftest.c | 36 +++++++++++-------- drivers/event/opdl/opdl_test.c | 8 ++--- drivers/event/sw/sw_evdev_selftest.c | 34 +++++++++--------- lib/librte_eventdev/rte_eventdev.c | 14 +++++++- lib/librte_eventdev/rte_eventdev_pmd.h | 20 +++++++++++ lib/librte_eventdev/version.map | 6 ++++ 7 files changed, 99 insertions(+), 51 deletions(-) diff --git a/drivers/event/octeontx/ssovf_evdev_selftest.c b/drivers/event/octeontx/ssovf_evdev_selftest.c index 7a2b7ded25..528f99dd84 100644 --- a/drivers/event/octeontx/ssovf_evdev_selftest.c +++ b/drivers/event/octeontx/ssovf_evdev_selftest.c @@ -300,7 +300,7 @@ inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type, m = rte_pktmbuf_alloc(eventdev_test_mempool); RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); - m->seqn = i; + *rte_event_pmd_selftest_seqn(m) = i; update_event_and_validation_attr(m, &ev, flow_id, event_type, sub_event_type, sched_type, queue, port); rte_event_enqueue_burst(evdev, port, &ev, 1); @@ -320,7 +320,8 @@ check_excess_events(uint8_t port) valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0); RTE_TEST_ASSERT_SUCCESS(valid_event, - "Unexpected valid event=%d", ev.mbuf->seqn); + "Unexpected valid event=%d", + *rte_event_pmd_selftest_seqn(ev.mbuf)); } return 0; } @@ -425,8 +426,9 @@ static int validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev) { RTE_SET_USED(port); - RTE_TEST_ASSERT_EQUAL(index, ev->mbuf->seqn, "index=%d != seqn=%d", - index, ev->mbuf->seqn); + RTE_TEST_ASSERT_EQUAL(index, *rte_event_pmd_selftest_seqn(ev->mbuf), + "index=%d != seqn=%d", index, + *rte_event_pmd_selftest_seqn(ev->mbuf)); return 0; } @@ -509,10 +511,10 @@ validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev) expected_val += ev->queue_id; RTE_SET_USED(port); - RTE_TEST_ASSERT_EQUAL(ev->mbuf->seqn, expected_val, - "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d", - ev->mbuf->seqn, index, expected_val, range, - queue_count, MAX_EVENTS); + RTE_TEST_ASSERT_EQUAL(*rte_event_pmd_selftest_seqn(ev->mbuf), expected_val, + "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d", + *rte_event_pmd_selftest_seqn(ev->mbuf), index, expected_val, range, + queue_count, MAX_EVENTS); return 0; } @@ -537,7 +539,7 @@ test_multi_queue_priority(void) m = rte_pktmbuf_alloc(eventdev_test_mempool); RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); - m->seqn = i; + *rte_event_pmd_selftest_seqn(m) = i; queue = i % queue_count; update_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU, 0, RTE_SCHED_TYPE_PARALLEL, queue, 0); @@ -904,7 +906,7 @@ worker_flow_based_pipeline(void *arg) ev.op = RTE_EVENT_OP_FORWARD; rte_event_enqueue_burst(evdev, port, &ev, 1); } else if (ev.sub_event_type == 1) { /* Events from stage 1*/ - if (seqn_list_update(ev.mbuf->seqn) == 0) { + if 
(seqn_list_update(*rte_event_pmd_selftest_seqn(ev.mbuf)) == 0) { rte_pktmbuf_free(ev.mbuf); rte_atomic32_sub(total_events, 1); } else { @@ -939,7 +941,7 @@ test_multiport_flow_sched_type_test(uint8_t in_sched_type, return 0; } - /* Injects events with m->seqn=0 to total_events */ + /* Injects events with a 0 sequence number to total_events */ ret = inject_events( 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */, @@ -1059,7 +1061,7 @@ worker_group_based_pipeline(void *arg) ev.op = RTE_EVENT_OP_FORWARD; rte_event_enqueue_burst(evdev, port, &ev, 1); } else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/ - if (seqn_list_update(ev.mbuf->seqn) == 0) { + if (seqn_list_update(*rte_event_pmd_selftest_seqn(ev.mbuf)) == 0) { rte_pktmbuf_free(ev.mbuf); rte_atomic32_sub(total_events, 1); } else { @@ -1101,7 +1103,7 @@ test_multiport_queue_sched_type_test(uint8_t in_sched_type, return 0; } - /* Injects events with m->seqn=0 to total_events */ + /* Injects events with a 0 sequence number to total_events */ ret = inject_events( 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */, @@ -1238,7 +1240,7 @@ launch_multi_port_max_stages_random_sched_type(int (*fn)(void *)) return 0; } - /* Injects events with m->seqn=0 to total_events */ + /* Injects events with a 0 sequence number to total_events */ ret = inject_events( 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */, @@ -1360,7 +1362,7 @@ worker_ordered_flow_producer(void *arg) if (m == NULL) continue; - m->seqn = counter++; + *rte_event_pmd_selftest_seqn(m) = counter++; struct rte_event ev = {.event = 0, .u64 = 0}; diff --git a/drivers/event/octeontx2/otx2_evdev_selftest.c b/drivers/event/octeontx2/otx2_evdev_selftest.c index 334a9ccb7c..48bfaf893d 100644 --- a/drivers/event/octeontx2/otx2_evdev_selftest.c +++ b/drivers/event/octeontx2/otx2_evdev_selftest.c @@ -279,7 +279,7 @@ inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type, m = rte_pktmbuf_alloc(eventdev_test_mempool); RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); - m->seqn = i; + *rte_event_pmd_selftest_seqn(m) = i; update_event_and_validation_attr(m, &ev, flow_id, event_type, sub_event_type, sched_type, queue, port); @@ -301,7 +301,7 @@ check_excess_events(uint8_t port) RTE_TEST_ASSERT_SUCCESS(valid_event, "Unexpected valid event=%d", - ev.mbuf->seqn); + *rte_event_pmd_selftest_seqn(ev.mbuf)); } return 0; } @@ -406,8 +406,9 @@ static int validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev) { RTE_SET_USED(port); - RTE_TEST_ASSERT_EQUAL(index, ev->mbuf->seqn, "index=%d != seqn=%d", - index, ev->mbuf->seqn); + RTE_TEST_ASSERT_EQUAL(index, *rte_event_pmd_selftest_seqn(ev->mbuf), + "index=%d != seqn=%d", + index, *rte_event_pmd_selftest_seqn(ev->mbuf)); return 0; } @@ -493,10 +494,11 @@ validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev) expected_val += ev->queue_id; RTE_SET_USED(port); - RTE_TEST_ASSERT_EQUAL(ev->mbuf->seqn, expected_val, - "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d", - ev->mbuf->seqn, index, expected_val, range, - queue_count, MAX_EVENTS); + RTE_TEST_ASSERT_EQUAL( + *rte_event_pmd_selftest_seqn(ev->mbuf), expected_val, + "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d", + *rte_event_pmd_selftest_seqn(ev->mbuf), index, expected_val, + range, queue_count, MAX_EVENTS); return 0; } @@ -523,7 +525,7 @@ test_multi_queue_priority(void) m = rte_pktmbuf_alloc(eventdev_test_mempool); RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); - m->seqn = i; + 
*rte_event_pmd_selftest_seqn(m) = i; queue = i % queue_count; update_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU, 0, RTE_SCHED_TYPE_PARALLEL, @@ -888,7 +890,9 @@ worker_flow_based_pipeline(void *arg) ev.op = RTE_EVENT_OP_FORWARD; rte_event_enqueue_burst(evdev, port, &ev, 1); } else if (ev.sub_event_type == 1) { /* Events from stage 1*/ - if (seqn_list_update(ev.mbuf->seqn) == 0) { + uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf); + + if (seqn_list_update(seqn) == 0) { rte_pktmbuf_free(ev.mbuf); rte_atomic32_sub(total_events, 1); } else { @@ -923,7 +927,7 @@ test_multiport_flow_sched_type_test(uint8_t in_sched_type, return 0; } - /* Injects events with m->seqn=0 to total_events */ + /* Injects events with a 0 sequence number to total_events */ ret = inject_events(0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */, 0 /* sub_event_type (stage 0) */, @@ -1043,7 +1047,9 @@ worker_group_based_pipeline(void *arg) ev.op = RTE_EVENT_OP_FORWARD; rte_event_enqueue_burst(evdev, port, &ev, 1); } else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/ - if (seqn_list_update(ev.mbuf->seqn) == 0) { + uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf); + + if (seqn_list_update(seqn) == 0) { rte_pktmbuf_free(ev.mbuf); rte_atomic32_sub(total_events, 1); } else { @@ -1084,7 +1090,7 @@ test_multiport_queue_sched_type_test(uint8_t in_sched_type, return 0; } - /* Injects events with m->seqn=0 to total_events */ + /* Injects events with a 0 sequence number to total_events */ ret = inject_events(0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */, 0 /* sub_event_type (stage 0) */, @@ -1222,7 +1228,7 @@ launch_multi_port_max_stages_random_sched_type(int (*fn)(void *)) return 0; } - /* Injects events with m->seqn=0 to total_events */ + /* Injects events with a 0 sequence number to total_events */ ret = inject_events(0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */, 0 /* sub_event_type (stage 0) */, @@ -1348,7 +1354,7 @@ worker_ordered_flow_producer(void *arg) if (m == NULL) continue; - m->seqn = counter++; + *rte_event_pmd_selftest_seqn(m) = counter++; struct rte_event ev = {.event = 0, .u64 = 0}; diff --git a/drivers/event/opdl/opdl_test.c b/drivers/event/opdl/opdl_test.c index e7a32fbd31..e4fc70a440 100644 --- a/drivers/event/opdl/opdl_test.c +++ b/drivers/event/opdl/opdl_test.c @@ -256,7 +256,7 @@ ordered_basic(struct test *t) ev.queue_id = t->qid[0]; ev.op = RTE_EVENT_OP_NEW; ev.mbuf = mbufs[i]; - mbufs[i]->seqn = MAGIC_SEQN + i; + *rte_event_pmd_selftest_seqn(mbufs[i]) = MAGIC_SEQN + i; /* generate pkt and enqueue */ err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1); @@ -281,7 +281,7 @@ ordered_basic(struct test *t) rte_event_dev_dump(evdev, stdout); return -1; } - seq = deq_ev[i].mbuf->seqn - MAGIC_SEQN; + seq = *rte_event_pmd_selftest_seqn(deq_ev[i].mbuf) - MAGIC_SEQN; if (seq != (i-1)) { PMD_DRV_LOG(ERR, " seq test failed ! 
eq is %d , " @@ -396,7 +396,7 @@ atomic_basic(struct test *t) ev.op = RTE_EVENT_OP_NEW; ev.flow_id = 1; ev.mbuf = mbufs[i]; - mbufs[i]->seqn = MAGIC_SEQN + i; + *rte_event_pmd_selftest_seqn(mbufs[i]) = MAGIC_SEQN + i; /* generate pkt and enqueue */ err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1); @@ -625,7 +625,7 @@ single_link_w_stats(struct test *t) ev.queue_id = t->qid[0]; ev.op = RTE_EVENT_OP_NEW; ev.mbuf = mbufs[i]; - mbufs[i]->seqn = 1234 + i; + *rte_event_pmd_selftest_seqn(mbufs[i]) = 1234 + i; /* generate pkt and enqueue */ err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1); diff --git a/drivers/event/sw/sw_evdev_selftest.c b/drivers/event/sw/sw_evdev_selftest.c index ad4fc0eed7..4555f799df 100644 --- a/drivers/event/sw/sw_evdev_selftest.c +++ b/drivers/event/sw/sw_evdev_selftest.c @@ -380,7 +380,7 @@ run_prio_packet_test(struct test *t) printf("%d: gen of pkt failed\n", __LINE__); return -1; } - arp->seqn = MAGIC_SEQN[i]; + *rte_event_pmd_selftest_seqn(arp) = MAGIC_SEQN[i]; ev = (struct rte_event){ .priority = PRIORITY[i], @@ -419,7 +419,7 @@ run_prio_packet_test(struct test *t) rte_event_dev_dump(evdev, stdout); return -1; } - if (ev.mbuf->seqn != MAGIC_SEQN[1]) { + if (*rte_event_pmd_selftest_seqn(ev.mbuf) != MAGIC_SEQN[1]) { printf("%d: first packet out not highest priority\n", __LINE__); rte_event_dev_dump(evdev, stdout); @@ -433,7 +433,7 @@ run_prio_packet_test(struct test *t) rte_event_dev_dump(evdev, stdout); return -1; } - if (ev2.mbuf->seqn != MAGIC_SEQN[0]) { + if (*rte_event_pmd_selftest_seqn(ev2.mbuf) != MAGIC_SEQN[0]) { printf("%d: second packet out not lower priority\n", __LINE__); rte_event_dev_dump(evdev, stdout); @@ -477,7 +477,7 @@ test_single_directed_packet(struct test *t) } const uint32_t MAGIC_SEQN = 4711; - arp->seqn = MAGIC_SEQN; + *rte_event_pmd_selftest_seqn(arp) = MAGIC_SEQN; /* generate pkt and enqueue */ err = rte_event_enqueue_burst(evdev, rx_enq, &ev, 1); @@ -516,7 +516,7 @@ test_single_directed_packet(struct test *t) return -1; } - if (ev.mbuf->seqn != MAGIC_SEQN) { + if (*rte_event_pmd_selftest_seqn(ev.mbuf) != MAGIC_SEQN) { printf("%d: error magic sequence number not dequeued\n", __LINE__); return -1; @@ -934,7 +934,7 @@ xstats_tests(struct test *t) ev.op = RTE_EVENT_OP_NEW; ev.mbuf = arp; ev.flow_id = 7; - arp->seqn = i; + *rte_event_pmd_selftest_seqn(arp) = i; int err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1); if (err != 1) { @@ -1485,7 +1485,7 @@ xstats_id_reset_tests(struct test *t) ev.queue_id = t->qid[i]; ev.op = RTE_EVENT_OP_NEW; ev.mbuf = arp; - arp->seqn = i; + *rte_event_pmd_selftest_seqn(arp) = i; int err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1); if (err != 1) { @@ -1873,7 +1873,7 @@ qid_priorities(struct test *t) ev.queue_id = t->qid[i]; ev.op = RTE_EVENT_OP_NEW; ev.mbuf = arp; - arp->seqn = i; + *rte_event_pmd_selftest_seqn(arp) = i; int err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1); if (err != 1) { @@ -1894,7 +1894,7 @@ qid_priorities(struct test *t) return -1; } for (i = 0; i < 3; i++) { - if (ev[i].mbuf->seqn != 2-i) { + if (*rte_event_pmd_selftest_seqn(ev[i].mbuf) != 2-i) { printf( "%d: qid priority test: seqn %d incorrectly prioritized\n", __LINE__, i); @@ -2371,7 +2371,7 @@ single_packet(struct test *t) ev.mbuf = arp; ev.queue_id = 0; ev.flow_id = 3; - arp->seqn = MAGIC_SEQN; + *rte_event_pmd_selftest_seqn(arp) = MAGIC_SEQN; err = rte_event_enqueue_burst(evdev, t->port[rx_enq], &ev, 1); if (err != 1) { @@ -2411,7 +2411,7 @@ single_packet(struct test *t) } err = 
test_event_dev_stats_get(evdev, &stats); - if (ev.mbuf->seqn != MAGIC_SEQN) { + if (*rte_event_pmd_selftest_seqn(ev.mbuf) != MAGIC_SEQN) { printf("%d: magic sequence number not dequeued\n", __LINE__); return -1; } @@ -2684,7 +2684,7 @@ parallel_basic(struct test *t, int check_order) ev.queue_id = t->qid[0]; ev.op = RTE_EVENT_OP_NEW; ev.mbuf = mbufs[i]; - mbufs[i]->seqn = MAGIC_SEQN + i; + *rte_event_pmd_selftest_seqn(mbufs[i]) = MAGIC_SEQN + i; /* generate pkt and enqueue */ err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1); @@ -2739,10 +2739,12 @@ parallel_basic(struct test *t, int check_order) /* Check to see if the sequence numbers are in expected order */ if (check_order) { for (j = 0 ; j < deq_pkts ; j++) { - if (deq_ev[j].mbuf->seqn != MAGIC_SEQN + j) { - printf( - "%d: Incorrect sequence number(%d) from port %d\n", - __LINE__, mbufs_out[j]->seqn, tx_port); + if (*rte_event_pmd_selftest_seqn(deq_ev[j].mbuf) != + MAGIC_SEQN + j) { + printf("%d: Incorrect sequence number(%d) from port %d\n", + __LINE__, + *rte_event_pmd_selftest_seqn(mbufs_out[j]), + tx_port); return -1; } } diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c index 322453c532..994bd1eaa9 100644 --- a/lib/librte_eventdev/rte_eventdev.c +++ b/lib/librte_eventdev/rte_eventdev.c @@ -1242,13 +1242,25 @@ int rte_event_dev_xstats_reset(uint8_t dev_id, return -ENOTSUP; } +int rte_event_pmd_selftest_seqn_dynfield_offset = -1; + int rte_event_dev_selftest(uint8_t dev_id) { RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + static const struct rte_mbuf_dynfield test_seqn_dynfield_desc = { + .name = "rte_event_pmd_selftest_seqn_dynfield", + .size = sizeof(rte_event_pmd_selftest_seqn_t), + .align = __alignof__(rte_event_pmd_selftest_seqn_t), + }; struct rte_eventdev *dev = &rte_eventdevs[dev_id]; - if (dev->dev_ops->dev_selftest != NULL) + if (dev->dev_ops->dev_selftest != NULL) { + rte_event_pmd_selftest_seqn_dynfield_offset = + rte_mbuf_dynfield_register(&test_seqn_dynfield_desc); + if (rte_event_pmd_selftest_seqn_dynfield_offset < 0) + return -ENOMEM; return (*dev->dev_ops->dev_selftest)(); + } return -ENOTSUP; } diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h index d118b9e5ba..27be376ed1 100644 --- a/lib/librte_eventdev/rte_eventdev_pmd.h +++ b/lib/librte_eventdev/rte_eventdev_pmd.h @@ -20,10 +20,13 @@ extern "C" { #include #include +#include #include #include #include #include +#include +#include #include "rte_eventdev.h" #include "rte_event_timer_adapter_pmd.h" @@ -635,6 +638,23 @@ typedef int (*eventdev_eth_rx_adapter_stats_reset) */ typedef int (*eventdev_selftest)(void); +typedef uint32_t rte_event_pmd_selftest_seqn_t; +extern int rte_event_pmd_selftest_seqn_dynfield_offset; + +/** + * Read test sequence number from mbuf. + * + * @param mbuf Structure to read from. + * @return pointer to test sequence number. 
+ */ +__rte_internal +static inline rte_event_pmd_selftest_seqn_t * +rte_event_pmd_selftest_seqn(struct rte_mbuf *mbuf) +{ + return RTE_MBUF_DYNFIELD(mbuf, + rte_event_pmd_selftest_seqn_dynfield_offset, + rte_event_pmd_selftest_seqn_t *); +} struct rte_cryptodev; diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map index 8ae8420f9b..3e5c09cfdb 100644 --- a/lib/librte_eventdev/version.map +++ b/lib/librte_eventdev/version.map @@ -139,3 +139,9 @@ EXPERIMENTAL { # added in 20.11 rte_event_pmd_pci_probe_named; }; + +INTERNAL { + global: + + rte_event_pmd_selftest_seqn_dynfield_offset; +}; From patchwork Wed Oct 28 12:20:12 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 82650 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id E19B2A04DD; Wed, 28 Oct 2020 13:23:04 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 8EC99CAC8; Wed, 28 Oct 2020 13:20:57 +0100 (CET) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [63.128.21.124]) by dpdk.org (Postfix) with ESMTP id 3CE9BCA14 for ; Wed, 28 Oct 2020 13:20:48 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1603887646; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Yh98GBARPgjY8MDlwC8z8S2X8IhZYnPNVv4r4LrchCs=; b=ZzM+azY7E9n+e/3+1G8LlUipcIH5wkJHTfz91EIWWnJuvrRiU4yGMlPtHLFuD5bPhBYvqH eB4yY1FJNMVkARC87yECR+U1GsKNTj0T4kmf0mx6+5+Tk38tbg5IVWdYo+NQsbo8/Eq0PC IjfSMgSz0IKL4TVPDlB+m/DL7G+EP8k= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-372-PU8Cp5JlO2WwrmpbKyQAew-1; Wed, 28 Oct 2020 08:20:44 -0400 X-MC-Unique: PU8Cp5JlO2WwrmpbKyQAew-1 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com [10.5.11.22]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 929DA6414A; Wed, 28 Oct 2020 12:20:43 +0000 (UTC) Received: from dmarchan.remote.csb (unknown [10.40.192.40]) by smtp.corp.redhat.com (Postfix) with ESMTP id BE07D1001901; Wed, 28 Oct 2020 12:20:42 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: Jerin Jacob Date: Wed, 28 Oct 2020 13:20:12 +0100 Message-Id: <20201028122013.31104-9-david.marchand@redhat.com> In-Reply-To: <20201028122013.31104-1-david.marchand@redhat.com> References: <20201027221343.28551-1-david.marchand@redhat.com> <20201028122013.31104-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=david.marchand@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Subject: [dpdk-dev] [PATCH v2 8/9] app/eventdev: switch sequence number to dynamic mbuf field X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: 
"dev" The order test stored a sequence number in the deprecated mbuf field seqn. It is moved to a dynamic field in order to allow removal of seqn. Signed-off-by: David Marchand --- app/test-eventdev/test_order_common.c | 14 +++++++++++++- app/test-eventdev/test_order_common.h | 13 +++++++++++-- 2 files changed, 24 insertions(+), 3 deletions(-) diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c index 01a44bcd75..04456d56db 100644 --- a/app/test-eventdev/test_order_common.c +++ b/app/test-eventdev/test_order_common.c @@ -48,7 +48,7 @@ order_producer(void *arg) const flow_id_t flow = (uintptr_t)m % nb_flows; /* Maintain seq number per flow */ - m->seqn = producer_flow_seq[flow]++; + *order_mbuf_seqn(t, m) = producer_flow_seq[flow]++; order_flow_id_save(t, flow, m, &ev); while (rte_event_enqueue_burst(dev_id, port, &ev, 1) != 1) { @@ -141,6 +141,11 @@ order_test_setup(struct evt_test *test, struct evt_options *opt) .size = sizeof(flow_id_t), .align = __alignof__(flow_id_t), }; + static const struct rte_mbuf_dynfield seqn_dynfield_desc = { + .name = "test_event_dynfield_seqn", + .size = sizeof(seqn_t), + .align = __alignof__(seqn_t), + }; test_order = rte_zmalloc_socket(test->name, sizeof(struct test_order), RTE_CACHE_LINE_SIZE, opt->socket_id); @@ -158,6 +163,13 @@ order_test_setup(struct evt_test *test, struct evt_options *opt) return -rte_errno; } + t->seqn_dynfield_offset = + rte_mbuf_dynfield_register(&seqn_dynfield_desc); + if (t->seqn_dynfield_offset < 0) { + evt_err("failed to register mbuf field"); + return -rte_errno; + } + t->producer_flow_seq = rte_zmalloc_socket("test_producer_flow_seq", sizeof(*t->producer_flow_seq) * opt->nb_flows, RTE_CACHE_LINE_SIZE, opt->socket_id); diff --git a/app/test-eventdev/test_order_common.h b/app/test-eventdev/test_order_common.h index 90eac96fc8..5ef8404938 100644 --- a/app/test-eventdev/test_order_common.h +++ b/app/test-eventdev/test_order_common.h @@ -22,6 +22,7 @@ #define BURST_SIZE 16 typedef uint32_t flow_id_t; +typedef uint32_t seqn_t; struct test_order; @@ -53,6 +54,7 @@ struct test_order { uint64_t nb_pkts; struct rte_mempool *pool; int flow_id_dynfield_offset; + int seqn_dynfield_offset; struct prod_data prod; struct worker_data worker[EVT_MAX_PORTS]; uint32_t *producer_flow_seq; @@ -77,6 +79,12 @@ order_flow_id_save(struct test_order *t, flow_id_t flow_id, event->mbuf = mbuf; } +static inline seqn_t * +order_mbuf_seqn(struct test_order *t, struct rte_mbuf *mbuf) +{ + return RTE_MBUF_DYNFIELD(mbuf, t->seqn_dynfield_offset, seqn_t *); +} + static inline int order_nb_event_ports(struct evt_options *opt) { @@ -91,9 +99,10 @@ order_process_stage_1(struct test_order *const t, { const uint32_t flow = (uintptr_t)ev->mbuf % nb_flows; /* compare the seqn against expected value */ - if (ev->mbuf->seqn != expected_flow_seq[flow]) { + if (*order_mbuf_seqn(t, ev->mbuf) != expected_flow_seq[flow]) { evt_err("flow=%x seqn mismatch got=%x expected=%x", - flow, ev->mbuf->seqn, expected_flow_seq[flow]); + flow, *order_mbuf_seqn(t, ev->mbuf), + expected_flow_seq[flow]); t->err = true; rte_smp_wmb(); } From patchwork Wed Oct 28 12:20:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 82651 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 
6A2A6A04DD; Wed, 28 Oct 2020 13:23:27 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 4FA93CAD2; Wed, 28 Oct 2020 13:20:59 +0100 (CET) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [63.128.21.124]) by dpdk.org (Postfix) with ESMTP id DF307CAA2 for ; Wed, 28 Oct 2020 13:20:50 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1603887649; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=vMIv65TEhhIuhNoBTEUEBAs+ZOCqLmmVrax7LbgT+LU=; b=HbY/2DcmlkC6Tb5EXZxtNE9u+RQ94XJ7KYhX2P8nEZnMtl4mnVFNfh+wM58EDfya2vRAP1 oxAENLQD/zJQb2WmXyPeU9cKxJh+Lim7VDMfJ1lENBrNdZVxoXOe7HodM4+9uxV7vB/7gp vVUYJLik3aFpugjK6Zj+k/HxbL2YpW4= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-546-ezcTQAC0PS-exwNTtNe8vw-1; Wed, 28 Oct 2020 08:20:47 -0400 X-MC-Unique: ezcTQAC0PS-exwNTtNe8vw-1 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com [10.5.11.22]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id E659A809DE2; Wed, 28 Oct 2020 12:20:45 +0000 (UTC) Received: from dmarchan.remote.csb (unknown [10.40.192.40]) by smtp.corp.redhat.com (Postfix) with ESMTP id 247BE10013DB; Wed, 28 Oct 2020 12:20:43 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: Andrew Rybchenko , Ray Kinsella , Neil Horman , Olivier Matz Date: Wed, 28 Oct 2020 13:20:13 +0100 Message-Id: <20201028122013.31104-10-david.marchand@redhat.com> In-Reply-To: <20201028122013.31104-1-david.marchand@redhat.com> References: <20201027221343.28551-1-david.marchand@redhat.com> <20201028122013.31104-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=david.marchand@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Subject: [dpdk-dev] [PATCH v2 9/9] mbuf: remove seqn field X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" As announced in the deprecation note, the field seqn is removed to give more space to the dynamic fields. 
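For code still relying on the removed field, the replacement is the dynamic-field pattern adopted by the eventdev selftests and the order test earlier in this series: register a private mbuf field once at startup, then read and write it through RTE_MBUF_DYNFIELD(). Below is a minimal sketch of that pattern; it is illustrative only (not part of this patch) and the application-side names are hypothetical.

    #include <rte_errno.h>
    #include <rte_mbuf.h>
    #include <rte_mbuf_dyn.h>

    /* Offset of the application's private sequence number field,
     * set once at startup by app_seqn_field_register(). */
    static int app_seqn_dynfield_offset = -1;

    static int
    app_seqn_field_register(void)
    {
            static const struct rte_mbuf_dynfield desc = {
                    .name = "app_dynfield_seqn", /* hypothetical name */
                    .size = sizeof(uint32_t),
                    .align = __alignof__(uint32_t),
            };

            app_seqn_dynfield_offset = rte_mbuf_dynfield_register(&desc);
            if (app_seqn_dynfield_offset < 0)
                    return -rte_errno;
            return 0;
    }

    /* Return a pointer to the sequence number stored in the mbuf. */
    static inline uint32_t *
    app_mbuf_seqn(struct rte_mbuf *m)
    {
            return RTE_MBUF_DYNFIELD(m, app_seqn_dynfield_offset, uint32_t *);
    }

    /* Usage: *app_mbuf_seqn(m) = counter++;  instead of  m->seqn = counter++; */

The same two steps (rte_mbuf_dynfield_register() at init, an RTE_MBUF_DYNFIELD() accessor on the datapath) are what rte_event_dev_selftest() and the test-eventdev order test now do instead of touching mbuf->seqn.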
This is what the mbuf layout looks like (pahole-style):

word  type                              name                    byte size
 0    void *                            buf_addr;             /*   0 +  8 */
 1    rte_iova_t                        buf_iova              /*   8 +  8 */
      /* --- RTE_MARKER64               rearm_data;                        */
 2    uint16_t                          data_off;             /*  16 +  2 */
      uint16_t                          refcnt;               /*  18 +  2 */
      uint16_t                          nb_segs;              /*  20 +  2 */
      uint16_t                          port;                 /*  22 +  2 */
 3    uint64_t                          ol_flags;             /*  24 +  8 */
      /* --- RTE_MARKER                 rx_descriptor_fields1;             */
 4    uint32_t union                    packet_type;          /*  32 +  4 */
      uint32_t                          pkt_len;              /*  36 +  4 */
 5    uint16_t                          data_len;             /*  40 +  2 */
      uint16_t                          vlan_tci;             /*  42 +  2 */
 5.5  uint64_t union                    hash;                 /*  44 +  8 */
 6.5  uint16_t                          vlan_tci_outer;       /*  52 +  2 */
      uint16_t                          buf_len;              /*  54 +  2 */
 7    uint64_t                          timestamp;            /*  56 +  8 */
      /* --- RTE_MARKER                 cacheline1;                        */
 8    struct rte_mempool *              pool;                 /*  64 +  8 */
 9    struct rte_mbuf *                 next;                 /*  72 +  8 */
10    uint64_t union                    tx_offload;           /*  80 +  8 */
11    struct rte_mbuf_ext_shared_info * shinfo;               /*  88 +  8 */
12    uint16_t                          priv_size;            /*  96 +  2 */
      uint16_t                          timesync;             /*  98 +  2 */
12.5  uint32_t                          dynfield1[7];         /* 100 + 28 */
16    /* --- END                        128                                */

Signed-off-by: David Marchand Reviewed-by: Andrew Rybchenko Acked-by: Thomas Monjalon --- doc/guides/rel_notes/deprecation.rst | 1 - doc/guides/rel_notes/release_20_11.rst | 3 +++ lib/librte_mbuf/rte_mbuf_core.h | 15 ++++++--------- 3 files changed, 9 insertions(+), 10 deletions(-) diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index 0f6f1df12a..fe3fd3956c 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -87,7 +87,6 @@ Deprecation Notices The following static fields will be moved as dynamic: - ``timestamp`` - - ``seqn`` As a consequence, the layout of the ``struct rte_mbuf`` will be re-arranged, avoiding impact on vectorized implementation of the driver datapaths, diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst index 3d7edbfdbb..c0a9fc96aa 100644 --- a/doc/guides/rel_notes/release_20_11.rst +++ b/doc/guides/rel_notes/release_20_11.rst @@ -429,6 +429,9 @@ API Changes * mbuf: Removed the unioned fields ``userdata`` and ``udata64`` from the structure ``rte_mbuf``. It is replaced with dynamic fields. +* mbuf: Removed the field ``seqn`` from the structure ``rte_mbuf``. + It is replaced with dynamic fields. + * pci: Removed the ``rte_kernel_driver`` enum defined in rte_dev.h and replaced with a private enum in the PCI subsystem. diff --git a/lib/librte_mbuf/rte_mbuf_core.h b/lib/librte_mbuf/rte_mbuf_core.h index a65eaaf692..3fb5abda3c 100644 --- a/lib/librte_mbuf/rte_mbuf_core.h +++ b/lib/librte_mbuf/rte_mbuf_core.h @@ -640,6 +640,11 @@ struct rte_mbuf { }; }; + /** Shared data for external buffer attached to mbuf. See + * rte_pktmbuf_attach_extbuf(). + */ + struct rte_mbuf_ext_shared_info *shinfo; + /** Size of the application private data. In case of an indirect * mbuf, it stores the direct mbuf private data size. */ @@ -648,15 +653,7 @@ struct rte_mbuf { /** Timesync flags for use with IEEE1588. */ uint16_t timesync; - /** Sequence number. See also rte_reorder_insert(). */ - uint32_t seqn; - - /** Shared data for external buffer attached to mbuf. See - * rte_pktmbuf_attach_extbuf(). - */ - struct rte_mbuf_ext_shared_info *shinfo; - - uint64_t dynfield1[3]; /**< Reserved for dynamic fields. */ + uint32_t dynfield1[7]; /**< Reserved for dynamic fields. */ } __rte_cache_aligned; /**