From patchwork Thu Mar  8 22:55:42 2018
From: Gage Eads <gage.eads@intel.com>
To: dev@dpdk.org
Cc: jerin.jacob@caviumnetworks.com, harry.van.haaren@intel.com
Date: Thu, 8 Mar 2018 16:55:42 -0600
Message-Id: <1520549742-12893-1-git-send-email-gage.eads@intel.com>
Subject: [dpdk-dev] [PATCH] event/sw: perform partial burst enqueues

Previously, the sw PMD would enqueue either all or no events, depending on
whether enough inflight credits were available for the new events in the
burst. If a port enqueues a large burst (i.e. a multiple of the credit
update quanta), this can result in suboptimal performance, and it requires
an understanding of the sw PMD implementation (in particular, its credit
scheme) to tune an application's burst size. This affects software that
enqueues large bursts of new events, such as the ethernet event adapter,
which uses a 128-deep event buffer, when the input packet rate is
sufficiently high.

This change makes the sw PMD enqueue as many events as it has credits for,
if there are any new events in the burst.

Signed-off-by: Gage Eads <gage.eads@intel.com>
---
 drivers/event/sw/sw_evdev_worker.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/event/sw/sw_evdev_worker.c b/drivers/event/sw/sw_evdev_worker.c
index 67151f7..063b919 100644
--- a/drivers/event/sw/sw_evdev_worker.c
+++ b/drivers/event/sw/sw_evdev_worker.c
@@ -77,8 +77,10 @@ sw_event_enqueue_burst(void *port, const struct rte_event ev[], uint16_t num)
 		rte_atomic32_add(&sw->inflights, credit_update_quanta);
 		p->inflight_credits += (credit_update_quanta);
 
-		if (p->inflight_credits < new)
-			return 0;
+		/* If there are fewer inflight credits than new events, limit
+		 * the number of enqueued events.
+		 */
+		num = (p->inflight_credits < new) ? p->inflight_credits : new;
 	}
 
 	for (i = 0; i < num; i++) {
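
As a usage sketch (not part of the patch): with partial enqueues, callers of
rte_event_enqueue_burst() can no longer rely on all-or-nothing behaviour and
should resubmit the unsent tail of the burst. The helper below is
hypothetical; its name (enqueue_all) and the simple busy-retry policy are
illustrative only, assuming the standard rte_eventdev API.

    #include <rte_eventdev.h>

    /* Hypothetical helper: enqueue an entire burst, resubmitting the
     * remainder whenever the PMD accepts only part of it (e.g. when the
     * sw PMD runs short of inflight credits). This version spins until
     * everything is accepted; a real application might bound the retries
     * or do other work between attempts.
     */
    static inline void
    enqueue_all(uint8_t dev_id, uint8_t port_id,
                const struct rte_event ev[], uint16_t nb_events)
    {
            uint16_t sent = 0;

            while (sent < nb_events)
                    sent += rte_event_enqueue_burst(dev_id, port_id,
                                                    &ev[sent],
                                                    nb_events - sent);
    }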