From patchwork Thu Nov 30 04:20:36 2017
X-Patchwork-Submitter: "Eads, Gage"
X-Patchwork-Id: 31770
X-Patchwork-Delegate: jerinj@marvell.com
From: Gage Eads
To: dev@dpdk.org
Cc: jerin.jacob@caviumnetworks.com, harry.van.haaren@intel.com,
    bruce.richardson@intel.com, hemant.agrawal@nxp.com, nipun.gupta@nxp.com,
    santosh.shukla@caviumnetworks.com
Date: Wed, 29 Nov 2017 22:20:36 -0600
Message-Id: <1512015636-31878-3-git-send-email-gage.eads@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1512015636-31878-1-git-send-email-gage.eads@intel.com>
References: <1512015636-31878-1-git-send-email-gage.eads@intel.com>
Subject: [dpdk-dev] [PATCH 2/2] event/sw: simplify credit scheme

This commit modifies the sw PMD credit scheme such that credits are
consumed when enqueueing a NEW event and released when an event is
released -- typically, the beginning and end of a pipeline. Workers
that simply forward events do not interact with the credit pool.

Signed-off-by: Gage Eads
Acked-by: Harry van Haaren
---
 drivers/event/sw/sw_evdev_worker.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)
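
One observation on the change: the credit replenish step is now
duplicated in the enqueue and dequeue paths. For illustration only, both
instances are equivalent to the hypothetical helper below
(sw_port_replenish_credits() is not part of this patch; it is just a
sketch of the shared logic):

static inline void
sw_port_replenish_credits(struct sw_evdev *sw, struct sw_port *p)
{
	uint32_t quanta = sw->credit_update_quanta;

	/* Once the port has accumulated two quanta of local credits,
	 * return one quantum to the device-wide inflight count.
	 */
	if (p->inflight_credits >= quanta * 2) {
		rte_atomic32_sub(&sw->inflights, quanta);
		p->inflight_credits -= quanta;
	}
}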
diff --git a/drivers/event/sw/sw_evdev_worker.c b/drivers/event/sw/sw_evdev_worker.c
index 93cd29b..766c836 100644
--- a/drivers/event/sw/sw_evdev_worker.c
+++ b/drivers/event/sw/sw_evdev_worker.c
@@ -85,6 +85,7 @@ sw_event_enqueue_burst(void *port, const struct rte_event ev[], uint16_t num)
 	struct sw_port *p = port;
 	struct sw_evdev *sw = (void *)p->sw;
 	uint32_t sw_inflights = rte_atomic32_read(&sw->inflights);
+	uint32_t credit_update_quanta = sw->credit_update_quanta;
 	int new = 0;
 
 	if (num > PORT_ENQUEUE_MAX_BURST_SIZE)
@@ -98,7 +99,6 @@ sw_event_enqueue_burst(void *port, const struct rte_event ev[], uint16_t num)
 
 	if (p->inflight_credits < new) {
 		/* check if event enqueue brings port over max threshold */
-		uint32_t credit_update_quanta = sw->credit_update_quanta;
 		if (sw_inflights + credit_update_quanta > sw->nb_events_limit)
 			return 0;
 
@@ -109,7 +109,6 @@ sw_event_enqueue_burst(void *port, const struct rte_event ev[], uint16_t num)
 			return 0;
 	}
 
-	uint32_t completions = 0;
 	for (i = 0; i < num; i++) {
 		int op = ev[i].op;
 		int outstanding = p->outstanding_releases > 0;
@@ -126,21 +125,16 @@ sw_event_enqueue_burst(void *port, const struct rte_event ev[], uint16_t num)
 		 * correct usage of the API), providing very high correct
 		 * prediction rate.
 		 */
-		if ((new_ops[i] & QE_FLAG_COMPLETE) && outstanding) {
+		if ((new_ops[i] & QE_FLAG_COMPLETE) && outstanding)
 			p->outstanding_releases--;
-			completions++;
-		}
 
 		/* error case: branch to avoid touching p->stats */
-		if (unlikely(invalid_qid)) {
+		if (unlikely(invalid_qid && op != RTE_EVENT_OP_RELEASE)) {
 			p->stats.rx_dropped++;
 			p->inflight_credits++;
 		}
 	}
 
-	/* handle directed port forward and release credits */
-	p->inflight_credits -= completions * p->is_directed;
-
 	/* returns number of events actually enqueued */
 	uint32_t enq = enqueue_burst_with_ops(p->rx_worker_ring, ev, i,
 					new_ops);
@@ -153,6 +147,13 @@ sw_event_enqueue_burst(void *port, const struct rte_event ev[], uint16_t num)
 		p->avg_pkt_ticks += burst_pkt_ticks / NUM_SAMPLES;
 		p->last_dequeue_ticks = 0;
 	}
+
+	/* Replenish credits if enough releases are performed */
+	if (p->inflight_credits >= credit_update_quanta * 2) {
+		rte_atomic32_sub(&sw->inflights, credit_update_quanta);
+		p->inflight_credits -= credit_update_quanta;
+	}
+
 	return enq;
 }
 
@@ -168,16 +169,22 @@ sw_event_dequeue_burst(void *port, struct rte_event *ev, uint16_t num,
 {
 	RTE_SET_USED(wait);
 	struct sw_port *p = (void *)port;
-	struct sw_evdev *sw = (void *)p->sw;
 	struct rte_event_ring *ring = p->cq_worker_ring;
-	uint32_t credit_update_quanta = sw->credit_update_quanta;
 
 	/* check that all previous dequeues have been released */
-	if (p->implicit_release && !p->is_directed) {
+	if (p->implicit_release) {
+		struct sw_evdev *sw = (void *)p->sw;
+		uint32_t credit_update_quanta = sw->credit_update_quanta;
 		uint16_t out_rels = p->outstanding_releases;
 		uint16_t i;
 		for (i = 0; i < out_rels; i++)
 			sw_event_release(p, i);
+
+		/* Replenish credits if enough releases are performed */
+		if (p->inflight_credits >= credit_update_quanta * 2) {
+			rte_atomic32_sub(&sw->inflights, credit_update_quanta);
+			p->inflight_credits -= credit_update_quanta;
+		}
 	}
 
 	/* returns number of events actually dequeued */
@@ -188,8 +195,6 @@ sw_event_dequeue_burst(void *port, struct rte_event *ev, uint16_t num,
 		goto end;
 	}
 
-	/* only add credits for directed ports - LB ports send RELEASEs */
-	p->inflight_credits += ndeq * p->is_directed;
 	p->outstanding_releases += ndeq;
 	p->last_dequeue_burst_sz = ndeq;
 	p->last_dequeue_ticks = rte_get_timer_cycles();
@@ -197,11 +202,6 @@ sw_event_dequeue_burst(void *port, struct rte_event *ev, uint16_t num,
 	p->total_polls++;
 
 end:
-	if (p->inflight_credits >= credit_update_quanta * 2 &&
-			p->inflight_credits > credit_update_quanta + ndeq) {
-		rte_atomic32_sub(&sw->inflights, credit_update_quanta);
-		p->inflight_credits -= credit_update_quanta;
-	}
 	return ndeq;
 }
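
For context, the new scheme maps onto the standard eventdev pipeline
roles roughly as follows. This is a minimal application-side sketch,
assuming the usual device/port/queue setup has been done; dev_id,
port_id and qid are placeholder identifiers, and burst sizes and error
handling are trimmed for brevity:

	struct rte_event ev;

	/* Producer: each OP_NEW enqueue consumes a credit. */
	ev.op = RTE_EVENT_OP_NEW;
	ev.queue_id = qid;
	rte_event_enqueue_burst(dev_id, port_id, &ev, 1);

	/* Worker: OP_FORWARD reuses the event's credit, so the
	 * device-wide credit pool is untouched on the fast path.
	 */
	if (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0)) {
		ev.op = RTE_EVENT_OP_FORWARD;
		ev.queue_id = qid + 1;
		rte_event_enqueue_burst(dev_id, port_id, &ev, 1);
	}

	/* Consumer: OP_RELEASE returns the credit at the end of the
	 * pipeline (or the implicit release does so on the next dequeue).
	 */
	if (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0)) {
		ev.op = RTE_EVENT_OP_RELEASE;
		rte_event_enqueue_burst(dev_id, port_id, &ev, 1);
	}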