From patchwork Tue Jan 12 04:38:56 2021
X-Patchwork-Submitter: "Jiang, Cheng1"
X-Patchwork-Id: 86365
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, Jiayu.Hu@intel.com, YvonneX.Yang@intel.com,
 yinan.wang@intel.com, Cheng Jiang, Jiayu Hu
Date: Tue, 12 Jan 2021 04:38:56 +0000
Message-Id: <20210112043857.19826-2-Cheng1.jiang@intel.com>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210112043857.19826-1-Cheng1.jiang@intel.com>
References: <20201218113327.70528-1-Cheng1.jiang@intel.com>
 <20210112043857.19826-1-Cheng1.jiang@intel.com>
Subject: [dpdk-dev] [PATCH v9 1/2] examples/vhost: add ioat ring space count and check

Add an IOAT ring space count and check: if the IOAT ring does not have
enough space for the next async vhost packet enqueue, return early to
prevent an enqueue failure. Also add a failure handler for
rte_ioat_completed_ops().

Signed-off-by: Cheng Jiang
Reviewed-by: Jiayu Hu
Reviewed-by: Maxime Coquelin
---
 examples/vhost/ioat.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/examples/vhost/ioat.c b/examples/vhost/ioat.c
index 71d8a1f1f..dbad28d43 100644
--- a/examples/vhost/ioat.c
+++ b/examples/vhost/ioat.c
@@ -17,6 +17,7 @@ struct packet_tracker {
 	unsigned short next_read;
 	unsigned short next_write;
 	unsigned short last_remain;
+	unsigned short ioat_space;
 };
 
 struct packet_tracker cb_tracker[MAX_VHOST_DEVICE];
@@ -113,7 +114,7 @@ open_ioat(const char *value)
 			goto out;
 		}
 		rte_rawdev_start(dev_id);
-
+		cb_tracker[dev_id].ioat_space = IOAT_RING_SIZE;
 		dma_info->nr++;
 		i++;
 	}
@@ -140,13 +141,9 @@ ioat_transfer_data_cb(int vid, uint16_t queue_id,
 			src = descs[i_desc].src;
 			dst = descs[i_desc].dst;
 			i_seg = 0;
+			if (cb_tracker[dev_id].ioat_space < src->nr_segs)
+				break;
 			while (i_seg < src->nr_segs) {
-				/*
-				 * TODO: Assuming that the ring space of the
-				 * IOAT device is large enough, so there is no
-				 * error here, and the actual error handling
-				 * will be added later.
-				 */
 				rte_ioat_enqueue_copy(dev_id,
 					(uintptr_t)(src->iov[i_seg].iov_base)
 						+ src->offset,
@@ -158,7 +155,8 @@ ioat_transfer_data_cb(int vid, uint16_t queue_id,
 				i_seg++;
 			}
 			write &= mask;
-			cb_tracker[dev_id].size_track[write] = i_seg;
+			cb_tracker[dev_id].size_track[write] = src->nr_segs;
+			cb_tracker[dev_id].ioat_space -= src->nr_segs;
 			write++;
 		}
 	} else {
@@ -178,17 +176,21 @@ ioat_check_completed_copies_cb(int vid, uint16_t queue_id,
 {
 	if (!opaque_data) {
 		uintptr_t dump[255];
-		unsigned short n_seg;
+		int n_seg;
 		unsigned short read, write;
 		unsigned short nb_packet = 0;
 		unsigned short mask = MAX_ENQUEUED_SIZE - 1;
 		unsigned short i;
 
+		int dev_id = dma_bind[vid].dmas[queue_id * 2
+				+ VIRTIO_RXQ].dev_id;
 		n_seg = rte_ioat_completed_ops(dev_id, 255, dump, dump);
-		n_seg += cb_tracker[dev_id].last_remain;
-		if (!n_seg)
+		if (n_seg <= 0)
 			return 0;
+
+		cb_tracker[dev_id].ioat_space += n_seg;
+		n_seg += cb_tracker[dev_id].last_remain;
+
 		read = cb_tracker[dev_id].next_read;
 		write = cb_tracker[dev_id].next_write;
 		for (i = 0; i < max_packets; i++) {