From patchwork Mon Sep 28 10:59:13 2020
X-Patchwork-Id: 79006
From: Ferruh Yigit
To: Maxime Coquelin, Chenbo Xia, Zhihong Wang, Fan Zhang
Cc: dev@dpdk.org, Ferruh Yigit, stable@dpdk.org
Date: Mon, 28 Sep 2020 11:59:13 +0100
Message-Id: <20200928105918.740807-1-ferruh.yigit@intel.com>
Subject: [dpdk-dev] [PATCH 1/6] vhost/crypto: fix pool allocation

From: Fan Zhang

This patch fixes the missing IV space allocation in the crypto
operation mempool: the pool was created with a private data size of
zero, so there was no room behind each operation to store the
per-request IV.
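For illustration, the allocation pattern at stake looks roughly like this
(a minimal sketch; the pool sizing constants are assumptions, not values
taken from the patch):

#include <rte_crypto.h>
#include <rte_mempool.h>

#define MAX_IV_LEN 32	/* stands in for VHOST_CRYPTO_MAX_IV_LEN */

/*
 * Each rte_crypto_op in the pool is followed by priv_size bytes of
 * per-operation private data; vhost_crypto stores the request IV there.
 * With a priv_size of 0 (the old value) the IV copy landed past the
 * end of the object.
 */
static struct rte_mempool *
create_cop_pool_with_iv_room(int socket_id)
{
	return rte_crypto_op_pool_create("COPPOOL_DEMO",
			RTE_CRYPTO_OP_TYPE_SYMMETRIC,
			8192,		/* nb elements, assumed */
			128,		/* per-lcore cache, assumed */
			MAX_IV_LEN,	/* priv size: room for the IV */
			socket_id);
}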
Fixes: 709521f4c2cd ("examples/vhost_crypto: support multi-core")
Cc: stable@dpdk.org

Signed-off-by: Fan Zhang
Acked-by: Chenbo Xia
---
 examples/vhost_crypto/main.c        | 2 +-
 lib/librte_vhost/rte_vhost_crypto.h | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/examples/vhost_crypto/main.c b/examples/vhost_crypto/main.c
index 1d7ba94196..11b022e813 100644
--- a/examples/vhost_crypto/main.c
+++ b/examples/vhost_crypto/main.c
@@ -544,7 +544,7 @@ main(int argc, char *argv[])
 		snprintf(name, 127, "COPPOOL_%u", lo->lcore_id);
 		info->cop_pool = rte_crypto_op_pool_create(name,
 				RTE_CRYPTO_OP_TYPE_SYMMETRIC, NB_MEMPOOL_OBJS,
-				NB_CACHE_OBJS, 0,
+				NB_CACHE_OBJS, VHOST_CRYPTO_MAX_IV_LEN,
 				rte_lcore_to_socket_id(lo->lcore_id));
 
 		if (!info->cop_pool) {
diff --git a/lib/librte_vhost/rte_vhost_crypto.h b/lib/librte_vhost/rte_vhost_crypto.h
index d29871c7ea..866a592a5d 100644
--- a/lib/librte_vhost/rte_vhost_crypto.h
+++ b/lib/librte_vhost/rte_vhost_crypto.h
@@ -10,6 +10,7 @@
 #define VHOST_CRYPTO_SESSION_MAP_ENTRIES	(1024) /**< Max nb sessions */
 /** max nb virtual queues in a burst for finalizing*/
 #define VIRTIO_CRYPTO_MAX_NUM_BURST_VQS	(64)
+#define VHOST_CRYPTO_MAX_IV_LEN		(32)
 
 enum rte_vhost_crypto_zero_copy {
 	RTE_VHOST_CRYPTO_ZERO_COPY_DISABLE = 0,
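A consumer of the pool then reaches the reserved space through the op's
private-data offset. A sketch of that access, assuming the usual DPDK
layout for IV_OFFSET (the definition below is an assumption, not quoted
from the library):

#include <stdint.h>
#include <rte_crypto.h>
#include <rte_crypto_sym.h>
#include <rte_memcpy.h>

/* Assumed layout: the IV sits right behind the op and its sym op. */
#define IV_OFFSET (sizeof(struct rte_crypto_op) + \
		sizeof(struct rte_crypto_sym_op))

static void
set_op_iv(struct rte_crypto_op *op, const uint8_t *iv, uint16_t iv_len)
{
	/* Resolves into the priv_size area reserved at pool creation. */
	uint8_t *iv_data = rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET);

	rte_memcpy(iv_data, iv, iv_len);	/* caller ensures iv_len <= priv size */
}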
Sender: "dev" From: Fan Zhang This patch fixes the incorrect descriptor deduction for vhost crypto. CVE-2020-14378 Fixes: 16d2e718b8ce ("vhost/crypto: fix possible out of bound access") Cc: stable@dpdk.org Signed-off-by: Fan Zhang Acked-by: Chenbo Xia --- lib/librte_vhost/vhost_crypto.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/lib/librte_vhost/vhost_crypto.c b/lib/librte_vhost/vhost_crypto.c index 0f9df4059d..86747dd5f3 100644 --- a/lib/librte_vhost/vhost_crypto.c +++ b/lib/librte_vhost/vhost_crypto.c @@ -530,13 +530,14 @@ move_desc(struct vring_desc *head, struct vring_desc **cur_desc, int left = size - desc->len; while ((desc->flags & VRING_DESC_F_NEXT) && left > 0) { - (*nb_descs)--; if (unlikely(*nb_descs == 0 || desc->next >= vq_size)) return -1; desc = &head[desc->next]; rte_prefetch0(&head[desc->next]); left -= desc->len; + if (left > 0) + (*nb_descs)--; } if (unlikely(left > 0)) From patchwork Mon Sep 28 10:59:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ferruh Yigit X-Patchwork-Id: 79008 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id E44A3A04C3; Mon, 28 Sep 2020 13:00:11 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 9EB641D8FD; Mon, 28 Sep 2020 12:59:35 +0200 (CEST) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id 2D3BB1D8DE; Mon, 28 Sep 2020 12:59:31 +0200 (CEST) IronPort-SDR: KGp7ke5WhhoWGGdVimVTcMHCfmTBTzqKUOH+KDIDGd8VMYokKVMR4q80KP9/Dp5KPKsUL5FA+f 9sFhEiuevhNg== X-IronPort-AV: E=McAfee;i="6000,8403,9757"; a="226122024" X-IronPort-AV: E=Sophos;i="5.77,313,1596524400"; d="scan'208";a="226122024" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Sep 2020 03:59:30 -0700 IronPort-SDR: om6G8v4zL35cM4rMDp0NVJs2nIsLrbz/7iuvR+aFwVdo+OJguijtnFBXnrbz12bFVloVoRntXo 6g3WyObF0ycw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,313,1596524400"; d="scan'208";a="514212875" Received: from silpixa00399752.ir.intel.com (HELO silpixa00399752.ger.corp.intel.com) ([10.237.222.180]) by fmsmga005.fm.intel.com with ESMTP; 28 Sep 2020 03:59:28 -0700 From: Ferruh Yigit To: Maxime Coquelin , Chenbo Xia , Zhihong Wang , Jay Zhou , Fan Zhang Cc: dev@dpdk.org, Ferruh Yigit , stable@dpdk.org Date: Mon, 28 Sep 2020 11:59:15 +0100 Message-Id: <20200928105918.740807-3-ferruh.yigit@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200928105918.740807-1-ferruh.yigit@intel.com> References: <20200928105918.740807-1-ferruh.yigit@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 3/6] vhost/crypto: fix missed request check for copy mode X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Fan Zhang This patch fixes the missed request check to vhost crypto copy mode. 
CVE-2020-14376
CVE-2020-14377
Fixes: 3bb595ecd682 ("vhost/crypto: add request handler")
Cc: stable@dpdk.org

Signed-off-by: Fan Zhang
Acked-by: Chenbo Xia
---
 lib/librte_vhost/vhost_crypto.c | 68 +++++++++++++++++++++++----------
 1 file changed, 47 insertions(+), 21 deletions(-)

diff --git a/lib/librte_vhost/vhost_crypto.c b/lib/librte_vhost/vhost_crypto.c
index 86747dd5f3..494f49084b 100644
--- a/lib/librte_vhost/vhost_crypto.c
+++ b/lib/librte_vhost/vhost_crypto.c
@@ -756,7 +756,7 @@ prepare_write_back_data(struct vhost_crypto_data_req *vc_req,
 	}
 
 	wb_data->dst = dst;
-	wb_data->len = desc->len - offset;
+	wb_data->len = RTE_MIN(desc->len - offset, write_back_len);
 	write_back_len -= wb_data->len;
 	src += offset + wb_data->len;
 	offset = 0;
@@ -840,6 +840,17 @@ prepare_write_back_data(struct vhost_crypto_data_req *vc_req,
 	return NULL;
 }
 
+static __rte_always_inline uint8_t
+vhost_crypto_check_cipher_request(struct virtio_crypto_cipher_data_req *req)
+{
+	if (likely((req->para.iv_len <= VHOST_CRYPTO_MAX_IV_LEN) &&
+		(req->para.src_data_len <= RTE_MBUF_DEFAULT_BUF_SIZE) &&
+		(req->para.dst_data_len >= req->para.src_data_len) &&
+		(req->para.dst_data_len <= RTE_MBUF_DEFAULT_BUF_SIZE)))
+		return VIRTIO_CRYPTO_OK;
+	return VIRTIO_CRYPTO_BADMSG;
+}
+
 static uint8_t
 prepare_sym_cipher_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 		struct vhost_crypto_data_req *vc_req,
@@ -851,7 +862,10 @@ prepare_sym_cipher_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 	struct vhost_crypto_writeback_data *ewb = NULL;
 	struct rte_mbuf *m_src = op->sym->m_src, *m_dst = op->sym->m_dst;
 	uint8_t *iv_data = rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET);
-	uint8_t ret = 0;
+	uint8_t ret = vhost_crypto_check_cipher_request(cipher);
+
+	if (unlikely(ret != VIRTIO_CRYPTO_OK))
+		goto error_exit;
 
 	/* prepare */
 	/* iv */
@@ -861,10 +875,9 @@ prepare_sym_cipher_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 		goto error_exit;
 	}
 
-	m_src->data_len = cipher->para.src_data_len;
-
 	switch (vcrypto->option) {
 	case RTE_VHOST_CRYPTO_ZERO_COPY_ENABLE:
+		m_src->data_len = cipher->para.src_data_len;
 		m_src->buf_iova = gpa_to_hpa(vcrypto->dev, desc->addr,
 				cipher->para.src_data_len);
 		m_src->buf_addr = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
@@ -886,13 +899,7 @@ prepare_sym_cipher_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 		break;
 	case RTE_VHOST_CRYPTO_ZERO_COPY_DISABLE:
 		vc_req->wb_pool = vcrypto->wb_pool;
-
-		if (unlikely(cipher->para.src_data_len >
-				RTE_MBUF_DEFAULT_BUF_SIZE)) {
-			VC_LOG_ERR("Not enough space to do data copy");
-			ret = VIRTIO_CRYPTO_ERR;
-			goto error_exit;
-		}
+		m_src->data_len = cipher->para.src_data_len;
 		if (unlikely(copy_data(rte_pktmbuf_mtod(m_src, uint8_t *),
 				vc_req, &desc, cipher->para.src_data_len,
 				nb_descs, vq_size) < 0)) {
@@ -975,6 +982,29 @@ prepare_sym_cipher_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 	return ret;
 }
 
+static __rte_always_inline uint8_t
+vhost_crypto_check_chain_request(struct virtio_crypto_alg_chain_data_req *req)
+{
+	if (likely((req->para.iv_len <= VHOST_CRYPTO_MAX_IV_LEN) &&
+		(req->para.src_data_len <= RTE_MBUF_DEFAULT_DATAROOM) &&
+		(req->para.dst_data_len >= req->para.src_data_len) &&
+		(req->para.dst_data_len <= RTE_MBUF_DEFAULT_DATAROOM) &&
+		(req->para.cipher_start_src_offset <
+			RTE_MBUF_DEFAULT_DATAROOM) &&
+		(req->para.len_to_cipher < RTE_MBUF_DEFAULT_DATAROOM) &&
+		(req->para.hash_start_src_offset <
+			RTE_MBUF_DEFAULT_DATAROOM) &&
+		(req->para.len_to_hash < RTE_MBUF_DEFAULT_DATAROOM) &&
+		(req->para.cipher_start_src_offset + req->para.len_to_cipher <=
+			req->para.src_data_len) &&
+		(req->para.hash_start_src_offset + req->para.len_to_hash <=
+			req->para.src_data_len) &&
+		(req->para.dst_data_len + req->para.hash_result_len <=
+			RTE_MBUF_DEFAULT_DATAROOM)))
+		return VIRTIO_CRYPTO_OK;
+	return VIRTIO_CRYPTO_BADMSG;
+}
+
 static uint8_t
 prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 		struct vhost_crypto_data_req *vc_req,
@@ -988,7 +1018,10 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 	uint8_t *iv_data = rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET);
 	uint32_t digest_offset;
 	void *digest_addr;
-	uint8_t ret = 0;
+	uint8_t ret = vhost_crypto_check_chain_request(chain);
+
+	if (unlikely(ret != VIRTIO_CRYPTO_OK))
+		goto error_exit;
 
 	/* prepare */
 	/* iv */
@@ -998,10 +1031,9 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 		goto error_exit;
 	}
 
-	m_src->data_len = chain->para.src_data_len;
-
 	switch (vcrypto->option) {
 	case RTE_VHOST_CRYPTO_ZERO_COPY_ENABLE:
+		m_src->data_len = chain->para.src_data_len;
 		m_dst->data_len = chain->para.dst_data_len;
 
 		m_src->buf_iova = gpa_to_hpa(vcrypto->dev, desc->addr,
@@ -1023,13 +1055,7 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 		break;
 	case RTE_VHOST_CRYPTO_ZERO_COPY_DISABLE:
 		vc_req->wb_pool = vcrypto->wb_pool;
-
-		if (unlikely(chain->para.src_data_len >
-				RTE_MBUF_DEFAULT_BUF_SIZE)) {
-			VC_LOG_ERR("Not enough space to do data copy");
-			ret = VIRTIO_CRYPTO_ERR;
-			goto error_exit;
-		}
+		m_src->data_len = chain->para.src_data_len;
 		if (unlikely(copy_data(rte_pktmbuf_mtod(m_src, uint8_t *),
 				vc_req, &desc, chain->para.src_data_len,
 				nb_descs, vq_size) < 0)) {
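A pattern shared by this fix and the descriptor fixes elsewhere in the
series is to bound every walk over a guest-controlled descriptor chain,
since the guest chooses both the next indices and the chain length. A
self-contained sketch of the safe walk (types and names simplified, not
the library's):

#include <stdint.h>

struct desc {
	uint64_t addr;
	uint32_t len;
	uint16_t flags;
	uint16_t next;
};

#define F_NEXT 0x1

/* Walk at most max_n descriptors; returns -1 on a malformed chain. */
static int
walk_chain(const struct desc *ring, uint16_t ring_size, uint16_t head,
		uint32_t max_n)
{
	uint16_t idx = head;
	uint32_t seen = 0;

	while (seen++ < max_n) {
		if (!(ring[idx].flags & F_NEXT))
			return 0;	/* end of chain reached safely */
		idx = ring[idx].next;
		if (idx >= ring_size)
			return -1;	/* next index out of range */
	}
	return -1;	/* longer than the budget: treat as a loop */
}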
From patchwork Mon Sep 28 10:59:16 2020
X-Patchwork-Id: 79009
From: Ferruh Yigit
To: Maxime Coquelin, Chenbo Xia, Zhihong Wang, Fan Zhang
Cc: dev@dpdk.org, Ferruh Yigit, stable@dpdk.org
Date: Mon, 28 Sep 2020 11:59:16 +0100
Message-Id: <20200928105918.740807-4-ferruh.yigit@intel.com>
In-Reply-To: <20200928105918.740807-1-ferruh.yigit@intel.com>
References: <20200928105918.740807-1-ferruh.yigit@intel.com>
Subject: [dpdk-dev] [PATCH 4/6] vhost/crypto: fix incorrect write back source

From: Fan Zhang

This patch fixes the incorrect source and destination write-back
buffer calculation in the vhost crypto library's copy mode.

Fixes: cd1e8f03abf0 ("vhost/crypto: fix packet copy in chaining mode")
Cc: stable@dpdk.org

Signed-off-by: Fan Zhang
Acked-by: Chenbo Xia
---
 lib/librte_vhost/vhost_crypto.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/librte_vhost/vhost_crypto.c b/lib/librte_vhost/vhost_crypto.c
index 494f49084b..f1cc32a9b2 100644
--- a/lib/librte_vhost/vhost_crypto.c
+++ b/lib/librte_vhost/vhost_crypto.c
@@ -749,14 +749,14 @@ prepare_write_back_data(struct vhost_crypto_data_req *vc_req,
 	wb_data->src = src + offset;
 	dlen = desc->len;
 	dst = IOVA_TO_VVA(uint8_t *, vc_req, desc->addr,
-			&dlen, VHOST_ACCESS_RW) + offset;
+			&dlen, VHOST_ACCESS_RW);
 	if (unlikely(!dst || dlen != desc->len)) {
 		VC_LOG_ERR("Failed to map descriptor");
 		goto error_exit;
 	}
 
-	wb_data->dst = dst;
-	wb_data->len = RTE_MIN(desc->len - offset, write_back_len);
+	wb_data->dst = dst + offset;
+	wb_data->len = RTE_MIN(dlen - offset, write_back_len);
 	write_back_len -= wb_data->len;
 	src += offset + wb_data->len;
 	offset = 0;
@@ -801,7 +801,7 @@ prepare_write_back_data(struct vhost_crypto_data_req *vc_req,
 		goto error_exit;
 	}
 
-	wb_data->src = src;
+	wb_data->src = src + offset;
 	wb_data->dst = dst;
 	wb_data->len = RTE_MIN(desc->len - offset, write_back_len);
 	write_back_len -= wb_data->len;
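The rule the fix restores: translate the descriptor's base address first,
validate the mapped length, and only then apply the in-descriptor offset.
A minimal sketch of that ordering (iova_to_vva() is a hypothetical
stand-in for the library's address translation, which may shrink *len):

#include <stdint.h>
#include <stddef.h>

uint8_t *iova_to_vva(uint64_t addr, uint64_t *len);	/* hypothetical */

static uint8_t *
map_at_offset(uint64_t addr, uint32_t desc_len, uint32_t offset)
{
	uint64_t dlen = desc_len;
	uint8_t *base = iova_to_vva(addr, &dlen);

	/* Reject partial mappings before any pointer arithmetic; adding
	 * the offset before this check (the old code) defeated it. */
	if (base == NULL || dlen != desc_len || offset >= desc_len)
		return NULL;
	return base + offset;
}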
From patchwork Mon Sep 28 10:59:17 2020
X-Patchwork-Id: 79010
From: Ferruh Yigit
To: Maxime Coquelin, Chenbo Xia, Zhihong Wang, Fan Zhang
Cc: dev@dpdk.org, Ferruh Yigit, stable@dpdk.org
Date: Mon, 28 Sep 2020 11:59:17 +0100
Message-Id: <20200928105918.740807-5-ferruh.yigit@intel.com>
In-Reply-To: <20200928105918.740807-1-ferruh.yigit@intel.com>
References: <20200928105918.740807-1-ferruh.yigit@intel.com>
Subject: [dpdk-dev] [PATCH 5/6] vhost/crypto: fix data length check

From: Fan Zhang

This patch fixes the incorrect data length check in vhost crypto.
Instead of blindly accepting the descriptor length as the data length,
the change compares the request-provided data length and the
descriptor length first.

The security issue CVE-2020-14374 is not fixed by this patch alone;
part of the fix is done through
"vhost/crypto: fix missed request check for copy mode".

CVE-2020-14374
Fixes: 3c79609fda7c ("vhost/crypto: handle virtually non-contiguous buffers")
Cc: stable@dpdk.org

Signed-off-by: Fan Zhang
Acked-by: Chenbo Xia
---
 lib/librte_vhost/vhost_crypto.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/librte_vhost/vhost_crypto.c b/lib/librte_vhost/vhost_crypto.c
index f1cc32a9b2..cf9aa2566b 100644
--- a/lib/librte_vhost/vhost_crypto.c
+++ b/lib/librte_vhost/vhost_crypto.c
@@ -624,7 +624,7 @@ copy_data(void *dst_data, struct vhost_crypto_data_req *vc_req,
 		desc = &vc_req->head[desc->next];
 		rte_prefetch0(&vc_req->head[desc->next]);
 		to_copy = RTE_MIN(desc->len, (uint32_t)left);
-		dlen = desc->len;
+		dlen = to_copy;
 		src = IOVA_TO_VVA(uint8_t *, vc_req, desc->addr, &dlen,
 				VHOST_ACCESS_RO);
 		if (unlikely(!src || !dlen)) {
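The one-liner matters because the translation length doubles as a bounds
check: ask the mapping layer for exactly the number of bytes about to be
copied, so a shorter-than-claimed guest region fails loudly instead of
mapping past the caller's buffer. Roughly (iova_to_vva() again a
hypothetical stand-in):

#include <stdint.h>
#include <string.h>

uint8_t *iova_to_vva(uint64_t addr, uint64_t *len);	/* hypothetical */

/* Copy up to `left` bytes described by one descriptor into dst. */
static int
copy_one_desc(uint8_t *dst, uint64_t addr, uint32_t desc_len, uint32_t left)
{
	uint32_t to_copy = desc_len < left ? desc_len : left;
	uint64_t dlen = to_copy;	/* request only what will be copied */
	uint8_t *src = iova_to_vva(addr, &dlen);

	if (src == NULL || dlen < to_copy)
		return -1;	/* guest memory shorter than claimed */
	memcpy(dst, src, to_copy);
	return (int)to_copy;
}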
From patchwork Mon Sep 28 10:59:18 2020
X-Patchwork-Id: 79011
From: Ferruh Yigit
To: Maxime Coquelin, Chenbo Xia, Zhihong Wang, Jay Zhou, Fan Zhang
Cc: dev@dpdk.org, Ferruh Yigit, stable@dpdk.org
Date: Mon, 28 Sep 2020 11:59:18 +0100
Message-Id: <20200928105918.740807-6-ferruh.yigit@intel.com>
In-Reply-To: <20200928105918.740807-1-ferruh.yigit@intel.com>
References: <20200928105918.740807-1-ferruh.yigit@intel.com>
Subject: [dpdk-dev] [PATCH 6/6] vhost/crypto: fix possible TOCTOU attack

From: Fan Zhang

This patch fixes a possible time-of-check to time-of-use (TOCTOU)
attack by copying the request data and descriptor indexes to local
variables prior to processing. The original sequential read of
descriptors could also enable a TOCTOU attack; this patch addresses
it by loading all descriptors of a request into a local buffer before
processing.

CVE-2020-14375
Fixes: 3bb595ecd682 ("vhost/crypto: add request handler")
Cc: stable@dpdk.org

Signed-off-by: Fan Zhang
Acked-by: Chenbo Xia
---
 lib/librte_vhost/rte_vhost_crypto.h |   2 +
 lib/librte_vhost/vhost_crypto.c     | 391 ++++++++++++++--------------
 2 files changed, 202 insertions(+), 191 deletions(-)

diff --git a/lib/librte_vhost/rte_vhost_crypto.h b/lib/librte_vhost/rte_vhost_crypto.h
index 866a592a5d..b54d61db69 100644
--- a/lib/librte_vhost/rte_vhost_crypto.h
+++ b/lib/librte_vhost/rte_vhost_crypto.h
@@ -7,10 +7,12 @@
 #define VHOST_CRYPTO_MBUF_POOL_SIZE	(8192)
 #define VHOST_CRYPTO_MAX_BURST_SIZE	(64)
+#define VHOST_CRYPTO_MAX_DATA_SIZE	(4096)
 #define VHOST_CRYPTO_SESSION_MAP_ENTRIES	(1024) /**< Max nb sessions */
 /** max nb virtual queues in a burst for finalizing*/
 #define VIRTIO_CRYPTO_MAX_NUM_BURST_VQS	(64)
 #define VHOST_CRYPTO_MAX_IV_LEN		(32)
+#define VHOST_CRYPTO_MAX_N_DESC		(32)
 
 enum rte_vhost_crypto_zero_copy {
 	RTE_VHOST_CRYPTO_ZERO_COPY_DISABLE = 0,
diff --git a/lib/librte_vhost/vhost_crypto.c b/lib/librte_vhost/vhost_crypto.c
index cf9aa2566b..e08f9c6d75 100644
--- a/lib/librte_vhost/vhost_crypto.c
+++ b/lib/librte_vhost/vhost_crypto.c
@@ -46,6 +46,14 @@
 #define IOVA_TO_VVA(t, r, a, l, p)	\
 	((t)(uintptr_t)vhost_iova_to_vva(r->dev, r->vq, a, l, p))
 
+/*
+ * vhost_crypto_desc is used to copy original vring_desc to the local buffer
+ * before processing (except the next index). The copy result will be an
+ * array of vhost_crypto_desc elements that follows the sequence of original
+ * vring_desc.next is arranged.
+ */
+#define vhost_crypto_desc vring_desc
+
 static int
 cipher_algo_transform(uint32_t virtio_cipher_algo,
 		enum rte_crypto_cipher_algorithm *algo)
@@ -479,83 +487,71 @@ vhost_crypto_msg_post_handler(int vid, void *msg)
 	return ret;
 }
 
-static __rte_always_inline struct vring_desc *
-find_write_desc(struct vring_desc *head, struct vring_desc *desc,
-		uint32_t *nb_descs, uint32_t vq_size)
+static __rte_always_inline struct vhost_crypto_desc *
+find_write_desc(struct vhost_crypto_desc *head, struct vhost_crypto_desc *desc,
+		uint32_t max_n_descs)
 {
-	if (desc->flags & VRING_DESC_F_WRITE)
-		return desc;
-
-	while (desc->flags & VRING_DESC_F_NEXT) {
-		if (unlikely(*nb_descs == 0 || desc->next >= vq_size))
-			return NULL;
-		(*nb_descs)--;
+	if (desc < head)
+		return NULL;
 
-		desc = &head[desc->next];
+	while (desc - head < (int)max_n_descs) {
 		if (desc->flags & VRING_DESC_F_WRITE)
 			return desc;
+		desc++;
 	}
 
 	return NULL;
 }
 
-static struct virtio_crypto_inhdr *
-reach_inhdr(struct vhost_crypto_data_req *vc_req, struct vring_desc *desc,
-		uint32_t *nb_descs, uint32_t vq_size)
+static __rte_always_inline struct virtio_crypto_inhdr *
+reach_inhdr(struct vhost_crypto_data_req *vc_req,
+		struct vhost_crypto_desc *head,
+		uint32_t max_n_descs)
 {
-	uint64_t dlen;
 	struct virtio_crypto_inhdr *inhdr;
+	struct vhost_crypto_desc *last = head + (max_n_descs - 1);
+	uint64_t dlen = last->len;
 
-	while (desc->flags & VRING_DESC_F_NEXT) {
-		if (unlikely(*nb_descs == 0 || desc->next >= vq_size))
-			return NULL;
-		(*nb_descs)--;
-		desc = &vc_req->head[desc->next];
-	}
+	if (unlikely(dlen != sizeof(*inhdr)))
+		return NULL;
 
-	dlen = desc->len;
-	inhdr = IOVA_TO_VVA(struct virtio_crypto_inhdr *, vc_req, desc->addr,
+	inhdr = IOVA_TO_VVA(struct virtio_crypto_inhdr *, vc_req, last->addr,
 			&dlen, VHOST_ACCESS_WO);
-	if (unlikely(!inhdr || dlen != desc->len))
+	if (unlikely(!inhdr || dlen != last->len))
 		return NULL;
 
 	return inhdr;
 }
 
 static __rte_always_inline int
-move_desc(struct vring_desc *head, struct vring_desc **cur_desc,
-		uint32_t size, uint32_t *nb_descs, uint32_t vq_size)
+move_desc(struct vhost_crypto_desc *head,
+		struct vhost_crypto_desc **cur_desc,
+		uint32_t size, uint32_t max_n_descs)
 {
-	struct vring_desc *desc = *cur_desc;
+	struct vhost_crypto_desc *desc = *cur_desc;
 	int left = size - desc->len;
 
-	while ((desc->flags & VRING_DESC_F_NEXT) && left > 0) {
-		if (unlikely(*nb_descs == 0 || desc->next >= vq_size))
-			return -1;
-
-		desc = &head[desc->next];
-		rte_prefetch0(&head[desc->next]);
+	while (desc->flags & VRING_DESC_F_NEXT && left > 0 &&
+			desc >= head &&
+			desc - head < (int)max_n_descs) {
+		desc++;
 		left -= desc->len;
-		if (left > 0)
-			(*nb_descs)--;
 	}
 
 	if (unlikely(left > 0))
 		return -1;
 
-	if (unlikely(*nb_descs == 0))
+	if (unlikely(head - desc == (int)max_n_descs))
 		*cur_desc = NULL;
-	else {
-		if (unlikely(desc->next >= vq_size))
-			return -1;
-		*cur_desc = &head[desc->next];
-	}
+	else
+		*cur_desc = desc + 1;
 
 	return 0;
 }
 
 static __rte_always_inline void *
-get_data_ptr(struct vhost_crypto_data_req *vc_req, struct vring_desc *cur_desc,
+get_data_ptr(struct vhost_crypto_data_req *vc_req,
+		struct vhost_crypto_desc *cur_desc,
 		uint8_t perm)
 {
 	void *data;
@@ -570,12 +566,13 @@ get_data_ptr(struct vhost_crypto_data_req *vc_req, struct vring_desc *cur_desc,
 	return data;
 }
 
-static int
+static __rte_always_inline int
 copy_data(void *dst_data, struct vhost_crypto_data_req *vc_req,
-		struct vring_desc **cur_desc, uint32_t size,
-		uint32_t *nb_descs, uint32_t vq_size)
+		struct vhost_crypto_desc *head,
+		struct vhost_crypto_desc **cur_desc,
+		uint32_t size, uint32_t max_n_descs)
 {
-	struct vring_desc *desc = *cur_desc;
+	struct vhost_crypto_desc *desc = *cur_desc;
 	uint64_t remain, addr, dlen, len;
 	uint32_t to_copy;
 	uint8_t *data = dst_data;
@@ -614,15 +611,8 @@ copy_data(void *dst_data, struct vhost_crypto_data_req *vc_req,
 
 	left -= to_copy;
 
-	while ((desc->flags & VRING_DESC_F_NEXT) && left > 0) {
-		if (unlikely(*nb_descs == 0 || desc->next >= vq_size)) {
-			VC_LOG_ERR("Invalid descriptors");
-			return -1;
-		}
-		(*nb_descs)--;
-
-		desc = &vc_req->head[desc->next];
-		rte_prefetch0(&vc_req->head[desc->next]);
+	while (desc >= head && desc - head < (int)max_n_descs && left) {
+		desc++;
 		to_copy = RTE_MIN(desc->len, (uint32_t)left);
 		dlen = to_copy;
 		src = IOVA_TO_VVA(uint8_t *, vc_req, desc->addr, &dlen,
@@ -663,13 +653,10 @@ copy_data(void *dst_data, struct vhost_crypto_data_req *vc_req,
 		return -1;
 	}
 
-	if (unlikely(*nb_descs == 0))
+	if (unlikely(desc - head == (int)max_n_descs))
 		*cur_desc = NULL;
-	else {
-		if (unlikely(desc->next >= vq_size))
-			return -1;
-		*cur_desc = &vc_req->head[desc->next];
-	}
+	else
+		*cur_desc = desc + 1;
 
 	return 0;
 }
@@ -681,6 +668,7 @@ write_back_data(struct vhost_crypto_data_req *vc_req)
 
 	while (wb_data) {
 		rte_memcpy(wb_data->dst, wb_data->src, wb_data->len);
+		memset(wb_data->src, 0, wb_data->len);
 		wb_last = wb_data;
 		wb_data = wb_data->next;
 		rte_mempool_put(vc_req->wb_pool, wb_last);
@@ -722,17 +710,18 @@ free_wb_data(struct vhost_crypto_writeback_data *wb_data,
  * @return
  *   The pointer to the start of the write back data linked list.
  */
-static struct vhost_crypto_writeback_data *
+static __rte_always_inline struct vhost_crypto_writeback_data *
 prepare_write_back_data(struct vhost_crypto_data_req *vc_req,
-		struct vring_desc **cur_desc,
+		struct vhost_crypto_desc *head_desc,
+		struct vhost_crypto_desc **cur_desc,
 		struct vhost_crypto_writeback_data **end_wb_data,
 		uint8_t *src,
 		uint32_t offset,
 		uint64_t write_back_len,
-		uint32_t *nb_descs, uint32_t vq_size)
+		uint32_t max_n_descs)
 {
 	struct vhost_crypto_writeback_data *wb_data, *head;
-	struct vring_desc *desc = *cur_desc;
+	struct vhost_crypto_desc *desc = *cur_desc;
 	uint64_t dlen;
 	uint8_t *dst;
 	int ret;
@@ -775,14 +764,10 @@ prepare_write_back_data(struct vhost_crypto_data_req *vc_req,
 	} else
 		offset -= desc->len;
 
-	while (write_back_len) {
-		if (unlikely(*nb_descs == 0 || desc->next >= vq_size)) {
-			VC_LOG_ERR("Invalid descriptors");
-			goto error_exit;
-		}
-		(*nb_descs)--;
-
-		desc = &vc_req->head[desc->next];
+	while (write_back_len &&
+			desc >= head_desc &&
+			desc - head_desc < (int)max_n_descs) {
+		desc++;
 		if (unlikely(!(desc->flags & VRING_DESC_F_WRITE))) {
 			VC_LOG_ERR("incorrect descriptor");
 			goto error_exit;
@@ -821,13 +806,10 @@ prepare_write_back_data(struct vhost_crypto_data_req *vc_req,
 		wb_data->next = NULL;
 	}
 
-	if (unlikely(*nb_descs == 0))
+	if (unlikely(desc - head_desc == (int)max_n_descs))
 		*cur_desc = NULL;
-	else {
-		if (unlikely(desc->next >= vq_size))
-			goto error_exit;
-		*cur_desc = &vc_req->head[desc->next];
-	}
+	else
+		*cur_desc = desc + 1;
 
 	*end_wb_data = wb_data;
 
@@ -851,14 +833,14 @@ vhost_crypto_check_cipher_request(struct virtio_crypto_cipher_data_req *req)
 	return VIRTIO_CRYPTO_BADMSG;
 }
 
-static uint8_t
+static __rte_always_inline uint8_t
 prepare_sym_cipher_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 		struct vhost_crypto_data_req *vc_req,
 		struct virtio_crypto_cipher_data_req *cipher,
-		struct vring_desc *cur_desc,
-		uint32_t *nb_descs, uint32_t vq_size)
+		struct vhost_crypto_desc *head,
+		uint32_t max_n_descs)
 {
-	struct vring_desc *desc = cur_desc;
+	struct vhost_crypto_desc *desc = head;
 	struct vhost_crypto_writeback_data *ewb = NULL;
 	struct rte_mbuf *m_src = op->sym->m_src, *m_dst = op->sym->m_dst;
 	uint8_t *iv_data = rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET);
@@ -869,8 +851,8 @@ prepare_sym_cipher_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 
 	/* prepare */
 	/* iv */
-	if (unlikely(copy_data(iv_data, vc_req, &desc, cipher->para.iv_len,
-			nb_descs, vq_size) < 0)) {
+	if (unlikely(copy_data(iv_data, vc_req, head, &desc,
+			cipher->para.iv_len, max_n_descs))) {
 		ret = VIRTIO_CRYPTO_BADMSG;
 		goto error_exit;
 	}
@@ -888,9 +870,8 @@ prepare_sym_cipher_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 			goto error_exit;
 		}
 
-		if (unlikely(move_desc(vc_req->head, &desc,
-				cipher->para.src_data_len, nb_descs,
-				vq_size) < 0)) {
+		if (unlikely(move_desc(head, &desc, cipher->para.src_data_len,
+				max_n_descs) < 0)) {
 			VC_LOG_ERR("Incorrect descriptor");
 			ret = VIRTIO_CRYPTO_ERR;
 			goto error_exit;
@@ -901,8 +882,8 @@ prepare_sym_cipher_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 		vc_req->wb_pool = vcrypto->wb_pool;
 		m_src->data_len = cipher->para.src_data_len;
 		if (unlikely(copy_data(rte_pktmbuf_mtod(m_src, uint8_t *),
-				vc_req, &desc, cipher->para.src_data_len,
-				nb_descs, vq_size) < 0)) {
+				vc_req, head, &desc, cipher->para.src_data_len,
+				max_n_descs) < 0)) {
 			ret = VIRTIO_CRYPTO_BADMSG;
 			goto error_exit;
 		}
@@ -913,7 +894,7 @@ prepare_sym_cipher_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 	}
 
 	/* dst */
-	desc = find_write_desc(vc_req->head, desc, nb_descs, vq_size);
+	desc = find_write_desc(head, desc, max_n_descs);
 	if (unlikely(!desc)) {
 		VC_LOG_ERR("Cannot find write location");
 		ret = VIRTIO_CRYPTO_BADMSG;
@@ -931,9 +912,8 @@ prepare_sym_cipher_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 			goto error_exit;
 		}
 
-		if (unlikely(move_desc(vc_req->head, &desc,
-				cipher->para.dst_data_len,
-				nb_descs, vq_size) < 0)) {
+		if (unlikely(move_desc(head, &desc, cipher->para.dst_data_len,
+				max_n_descs) < 0)) {
 			VC_LOG_ERR("Incorrect descriptor");
 			ret = VIRTIO_CRYPTO_ERR;
 			goto error_exit;
@@ -942,9 +922,9 @@ prepare_sym_cipher_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 		m_dst->data_len = cipher->para.dst_data_len;
 		break;
 	case RTE_VHOST_CRYPTO_ZERO_COPY_DISABLE:
-		vc_req->wb = prepare_write_back_data(vc_req, &desc, &ewb,
+		vc_req->wb = prepare_write_back_data(vc_req, head, &desc, &ewb,
 				rte_pktmbuf_mtod(m_src, uint8_t *), 0,
-				cipher->para.dst_data_len, nb_descs, vq_size);
+				cipher->para.dst_data_len, max_n_descs);
 		if (unlikely(vc_req->wb == NULL)) {
 			ret = VIRTIO_CRYPTO_ERR;
 			goto error_exit;
@@ -986,33 +966,33 @@ vhost_crypto_check_chain_request(struct virtio_crypto_alg_chain_data_req *req)
 {
 	if (likely((req->para.iv_len <= VHOST_CRYPTO_MAX_IV_LEN) &&
-		(req->para.src_data_len <= RTE_MBUF_DEFAULT_DATAROOM) &&
+		(req->para.src_data_len <= VHOST_CRYPTO_MAX_DATA_SIZE) &&
 		(req->para.dst_data_len >= req->para.src_data_len) &&
-		(req->para.dst_data_len <= RTE_MBUF_DEFAULT_DATAROOM) &&
+		(req->para.dst_data_len <= VHOST_CRYPTO_MAX_DATA_SIZE) &&
 		(req->para.cipher_start_src_offset <
-			RTE_MBUF_DEFAULT_DATAROOM) &&
-		(req->para.len_to_cipher < RTE_MBUF_DEFAULT_DATAROOM) &&
+			VHOST_CRYPTO_MAX_DATA_SIZE) &&
+		(req->para.len_to_cipher <= VHOST_CRYPTO_MAX_DATA_SIZE) &&
 		(req->para.hash_start_src_offset <
-			RTE_MBUF_DEFAULT_DATAROOM) &&
-		(req->para.len_to_hash < RTE_MBUF_DEFAULT_DATAROOM) &&
+			VHOST_CRYPTO_MAX_DATA_SIZE) &&
+		(req->para.len_to_hash <= VHOST_CRYPTO_MAX_DATA_SIZE) &&
 		(req->para.cipher_start_src_offset + req->para.len_to_cipher <=
 			req->para.src_data_len) &&
 		(req->para.hash_start_src_offset + req->para.len_to_hash <=
 			req->para.src_data_len) &&
 		(req->para.dst_data_len + req->para.hash_result_len <=
-			RTE_MBUF_DEFAULT_DATAROOM)))
+			VHOST_CRYPTO_MAX_DATA_SIZE)))
 		return VIRTIO_CRYPTO_OK;
 	return VIRTIO_CRYPTO_BADMSG;
 }
 
-static uint8_t
+static __rte_always_inline uint8_t
 prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 		struct vhost_crypto_data_req *vc_req,
 		struct virtio_crypto_alg_chain_data_req *chain,
-		struct vring_desc *cur_desc,
-		uint32_t *nb_descs, uint32_t vq_size)
+		struct vhost_crypto_desc *head,
+		uint32_t max_n_descs)
 {
-	struct vring_desc *desc = cur_desc, *digest_desc;
+	struct vhost_crypto_desc *desc = head, *digest_desc;
 	struct vhost_crypto_writeback_data *ewb = NULL, *ewb2 = NULL;
 	struct rte_mbuf *m_src = op->sym->m_src, *m_dst = op->sym->m_dst;
 	uint8_t *iv_data = rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET);
@@ -1025,8 +1005,8 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 
 	/* prepare */
 	/* iv */
-	if (unlikely(copy_data(iv_data, vc_req, &desc,
-			chain->para.iv_len, nb_descs, vq_size) < 0)) {
+	if (unlikely(copy_data(iv_data, vc_req, head, &desc,
+			chain->para.iv_len, max_n_descs) < 0)) {
 		ret = VIRTIO_CRYPTO_BADMSG;
 		goto error_exit;
 	}
@@ -1045,9 +1025,8 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 			goto error_exit;
 		}
 
-		if (unlikely(move_desc(vc_req->head, &desc,
-				chain->para.src_data_len,
-				nb_descs, vq_size) < 0)) {
+		if (unlikely(move_desc(head, &desc, chain->para.src_data_len,
+				max_n_descs) < 0)) {
 			VC_LOG_ERR("Incorrect descriptor");
 			ret = VIRTIO_CRYPTO_ERR;
 			goto error_exit;
@@ -1057,8 +1036,8 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 		vc_req->wb_pool = vcrypto->wb_pool;
 		m_src->data_len = chain->para.src_data_len;
 		if (unlikely(copy_data(rte_pktmbuf_mtod(m_src, uint8_t *),
-				vc_req, &desc, chain->para.src_data_len,
-				nb_descs, vq_size) < 0)) {
+				vc_req, head, &desc, chain->para.src_data_len,
+				max_n_descs) < 0)) {
 			ret = VIRTIO_CRYPTO_BADMSG;
 			goto error_exit;
 		}
@@ -1070,7 +1049,7 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 	}
 
 	/* dst */
-	desc = find_write_desc(vc_req->head, desc, nb_descs, vq_size);
+	desc = find_write_desc(head, desc, max_n_descs);
 	if (unlikely(!desc)) {
 		VC_LOG_ERR("Cannot find write location");
 		ret = VIRTIO_CRYPTO_BADMSG;
@@ -1089,8 +1068,7 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 	}
 
 	if (unlikely(move_desc(vc_req->head, &desc,
-			chain->para.dst_data_len,
-			nb_descs, vq_size) < 0)) {
+			chain->para.dst_data_len, max_n_descs) < 0)) {
 		VC_LOG_ERR("Incorrect descriptor");
 		ret = VIRTIO_CRYPTO_ERR;
 		goto error_exit;
@@ -1106,9 +1084,9 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 		goto error_exit;
 	}
 
-	if (unlikely(move_desc(vc_req->head, &desc,
+	if (unlikely(move_desc(head, &desc,
 			chain->para.hash_result_len,
-			nb_descs, vq_size) < 0)) {
+			max_n_descs) < 0)) {
 		VC_LOG_ERR("Incorrect descriptor");
 		ret = VIRTIO_CRYPTO_ERR;
 		goto error_exit;
@@ -1116,34 +1094,34 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 		break;
 	case RTE_VHOST_CRYPTO_ZERO_COPY_DISABLE:
-		vc_req->wb = prepare_write_back_data(vc_req, &desc, &ewb,
+		vc_req->wb = prepare_write_back_data(vc_req, head, &desc, &ewb,
 				rte_pktmbuf_mtod(m_src, uint8_t *),
 				chain->para.cipher_start_src_offset,
 				chain->para.dst_data_len -
-				chain->para.cipher_start_src_offset,
-				nb_descs, vq_size);
+					chain->para.cipher_start_src_offset,
+				max_n_descs);
 		if (unlikely(vc_req->wb == NULL)) {
 			ret = VIRTIO_CRYPTO_ERR;
 			goto error_exit;
 		}
 
+		digest_desc = desc;
 		digest_offset = m_src->data_len;
 		digest_addr = rte_pktmbuf_mtod_offset(m_src, void *,
 				digest_offset);
-		digest_desc = desc;
 
 		/** create a wb_data for digest */
-		ewb->next = prepare_write_back_data(vc_req, &desc, &ewb2,
-				digest_addr, 0, chain->para.hash_result_len,
-				nb_descs, vq_size);
+		ewb->next = prepare_write_back_data(vc_req, head, &desc,
+				&ewb2, digest_addr, 0,
+				chain->para.hash_result_len, max_n_descs);
 		if (unlikely(ewb->next == NULL)) {
 			ret = VIRTIO_CRYPTO_ERR;
 			goto error_exit;
 		}
 
-		if (unlikely(copy_data(digest_addr, vc_req, &digest_desc,
+		if (unlikely(copy_data(digest_addr, vc_req, head, &digest_desc,
 				chain->para.hash_result_len,
-				nb_descs, vq_size) < 0)) {
+				max_n_descs) < 0)) {
 			ret = VIRTIO_CRYPTO_BADMSG;
 			goto error_exit;
 		}
@@ -1193,74 +1171,103 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 static __rte_always_inline int
 vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
 		struct vhost_virtqueue *vq, struct rte_crypto_op *op,
-		struct vring_desc *head, uint16_t desc_idx)
+		struct vring_desc *head, struct vhost_crypto_desc *descs,
+		uint16_t desc_idx)
 {
 	struct vhost_crypto_data_req *vc_req = rte_mbuf_to_priv(op->sym->m_src);
 	struct rte_cryptodev_sym_session *session;
-	struct virtio_crypto_op_data_req *req, tmp_req;
+	struct virtio_crypto_op_data_req req;
 	struct virtio_crypto_inhdr *inhdr;
-	struct vring_desc *desc = NULL;
+	struct vhost_crypto_desc *desc = descs;
+	struct vring_desc *src_desc;
 	uint64_t session_id;
 	uint64_t dlen;
-	uint32_t nb_descs = vq->size;
-	int err = 0;
+	uint32_t nb_descs = 0, max_n_descs, i;
+	int err;
 
 	vc_req->desc_idx = desc_idx;
 	vc_req->dev = vcrypto->dev;
 	vc_req->vq = vq;
 
-	if (likely(head->flags & VRING_DESC_F_INDIRECT)) {
-		dlen = head->len;
-		nb_descs = dlen / sizeof(struct vring_desc);
-		/* drop invalid descriptors */
-		if (unlikely(nb_descs > vq->size))
-			return -1;
-		desc = IOVA_TO_VVA(struct vring_desc *, vc_req, head->addr,
-				&dlen, VHOST_ACCESS_RO);
-		if (unlikely(!desc || dlen != head->len))
-			return -1;
-		desc_idx = 0;
-		head = desc;
-	} else {
-		desc = head;
+	if (unlikely((head->flags & VRING_DESC_F_INDIRECT) == 0)) {
+		VC_LOG_ERR("Invalid descriptor");
+		return -1;
 	}
 
-	vc_req->head = head;
-	vc_req->zero_copy = vcrypto->option;
+	dlen = head->len;
+	src_desc = IOVA_TO_VVA(struct vring_desc *, vc_req, head->addr,
+			&dlen, VHOST_ACCESS_RO);
+	if (unlikely(!src_desc || dlen != head->len)) {
+		VC_LOG_ERR("Invalid descriptor");
+		return -1;
+	}
+	head = src_desc;
 
-	req = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
-	if (unlikely(req == NULL)) {
-		switch (vcrypto->option) {
-		case RTE_VHOST_CRYPTO_ZERO_COPY_ENABLE:
-			err = VIRTIO_CRYPTO_BADMSG;
-			VC_LOG_ERR("Invalid descriptor");
-			goto error_exit;
-		case RTE_VHOST_CRYPTO_ZERO_COPY_DISABLE:
-			req = &tmp_req;
-			if (unlikely(copy_data(req, vc_req, &desc, sizeof(*req),
-					&nb_descs, vq->size) < 0)) {
-				err = VIRTIO_CRYPTO_BADMSG;
-				VC_LOG_ERR("Invalid descriptor");
-				goto error_exit;
+	nb_descs = max_n_descs = dlen / sizeof(struct vring_desc);
+	if (unlikely(nb_descs > VHOST_CRYPTO_MAX_N_DESC || nb_descs == 0)) {
+		err = VIRTIO_CRYPTO_ERR;
+		VC_LOG_ERR("Cannot process num of descriptors %u", nb_descs);
+		if (nb_descs > 0) {
+			struct vring_desc *inhdr_desc = head;
+			while (inhdr_desc->flags & VRING_DESC_F_NEXT) {
+				if (inhdr_desc->next >= max_n_descs)
+					return -1;
+				inhdr_desc = &head[inhdr_desc->next];
 			}
-			break;
-		default:
-			err = VIRTIO_CRYPTO_ERR;
-			VC_LOG_ERR("Invalid option");
-			goto error_exit;
+			if (inhdr_desc->len != sizeof(*inhdr))
+				return -1;
+			inhdr = IOVA_TO_VVA(struct virtio_crypto_inhdr *,
+					vc_req, inhdr_desc->addr, &dlen,
+					VHOST_ACCESS_WO);
+			if (unlikely(!inhdr || dlen != inhdr_desc->len))
+				return -1;
+			inhdr->status = VIRTIO_CRYPTO_ERR;
+			return -1;
 		}
-	} else {
-		if (unlikely(move_desc(vc_req->head, &desc,
-				sizeof(*req), &nb_descs, vq->size) < 0)) {
-			VC_LOG_ERR("Incorrect descriptor");
+	}
+
+	/* copy descriptors to local variable */
+	for (i = 0; i < max_n_descs; i++) {
+		desc->addr = src_desc->addr;
+		desc->len = src_desc->len;
+		desc->flags = src_desc->flags;
+		desc++;
+		if (unlikely((src_desc->flags & VRING_DESC_F_NEXT) == 0))
+			break;
+		if (unlikely(src_desc->next >= max_n_descs)) {
+			err = VIRTIO_CRYPTO_BADMSG;
+			VC_LOG_ERR("Invalid descriptor");
 			goto error_exit;
 		}
+		src_desc = &head[src_desc->next];
+	}
+
+	vc_req->head = head;
+	vc_req->zero_copy = vcrypto->option;
+
+	nb_descs = desc - descs;
+	desc = descs;
+
+	if (unlikely(desc->len < sizeof(req))) {
+		err = VIRTIO_CRYPTO_BADMSG;
+		VC_LOG_ERR("Invalid descriptor");
+		goto error_exit;
 	}
 
-	switch (req->header.opcode) {
+	if (unlikely(copy_data(&req, vc_req, descs, &desc, sizeof(req),
+			max_n_descs) < 0)) {
+		err = VIRTIO_CRYPTO_BADMSG;
+		VC_LOG_ERR("Invalid descriptor");
+		goto error_exit;
+	}
+
+	/* desc is advanced by 1 now */
+	max_n_descs -= 1;
+
+	switch (req.header.opcode) {
 	case VIRTIO_CRYPTO_CIPHER_ENCRYPT:
 	case VIRTIO_CRYPTO_CIPHER_DECRYPT:
-		session_id = req->header.session_id;
+		session_id = req.header.session_id;
 
 		/* one branch to avoid unnecessary table lookup */
 		if (vcrypto->cache_session_id != session_id) {
@@ -1286,19 +1293,19 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
 			goto error_exit;
 		}
 
-		switch (req->u.sym_req.op_type) {
+		switch (req.u.sym_req.op_type) {
 		case VIRTIO_CRYPTO_SYM_OP_NONE:
 			err = VIRTIO_CRYPTO_NOTSUPP;
 			break;
 		case VIRTIO_CRYPTO_SYM_OP_CIPHER:
 			err = prepare_sym_cipher_op(vcrypto, op, vc_req,
-					&req->u.sym_req.u.cipher, desc,
-					&nb_descs, vq->size);
+					&req.u.sym_req.u.cipher, desc,
+					max_n_descs);
 			break;
 		case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
 			err = prepare_sym_chain_op(vcrypto, op, vc_req,
-					&req->u.sym_req.u.chain, desc,
-					&nb_descs, vq->size);
+					&req.u.sym_req.u.chain, desc,
+					max_n_descs);
 			break;
 		}
 		if (unlikely(err != 0)) {
@@ -1307,8 +1314,9 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
 		}
 		break;
 	default:
+		err = VIRTIO_CRYPTO_ERR;
 		VC_LOG_ERR("Unsupported symmetric crypto request type %u",
-				req->header.opcode);
+				req.header.opcode);
 		goto error_exit;
 	}
 
@@ -1316,7 +1324,7 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
 
 error_exit:
 
-	inhdr = reach_inhdr(vc_req, desc, &nb_descs, vq->size);
+	inhdr = reach_inhdr(vc_req, descs, max_n_descs);
 	if (likely(inhdr != NULL))
 		inhdr->status = (uint8_t)err;
 
@@ -1330,17 +1338,16 @@ vhost_crypto_finalize_one_request(struct rte_crypto_op *op,
 	struct rte_mbuf *m_src = op->sym->m_src;
 	struct rte_mbuf *m_dst = op->sym->m_dst;
 	struct vhost_crypto_data_req *vc_req = rte_mbuf_to_priv(m_src);
-	uint16_t desc_idx;
+	struct vhost_virtqueue *vq = vc_req->vq;
+	uint16_t used_idx = vc_req->desc_idx, desc_idx;
 
 	if (unlikely(!vc_req)) {
 		VC_LOG_ERR("Failed to retrieve vc_req");
 		return NULL;
 	}
 
-	if (old_vq && (vc_req->vq != old_vq))
-		return vc_req->vq;
-
-	desc_idx = vc_req->desc_idx;
+	if (old_vq && (vq != old_vq))
+		return vq;
 
 	if (unlikely(op->status != RTE_CRYPTO_OP_STATUS_SUCCESS))
 		vc_req->inhdr->status = VIRTIO_CRYPTO_ERR;
@@ -1349,8 +1356,9 @@ vhost_crypto_finalize_one_request(struct rte_crypto_op *op,
 		write_back_data(vc_req);
 	}
 
-	vc_req->vq->used->ring[desc_idx].id = desc_idx;
-	vc_req->vq->used->ring[desc_idx].len = vc_req->len;
+	desc_idx = vq->avail->ring[used_idx];
+	vq->used->ring[desc_idx].id = vq->avail->ring[desc_idx];
+	vq->used->ring[desc_idx].len = vc_req->len;
 
 	rte_mempool_put(m_src->pool, (void *)m_src);
 
@@ -1448,7 +1456,7 @@ rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
 	vcrypto->mbuf_pool = rte_pktmbuf_pool_create(name,
 			VHOST_CRYPTO_MBUF_POOL_SIZE, 512,
 			sizeof(struct vhost_crypto_data_req),
-			RTE_MBUF_DEFAULT_DATAROOM * 2 + RTE_PKTMBUF_HEADROOM,
+			VHOST_CRYPTO_MAX_DATA_SIZE + RTE_PKTMBUF_HEADROOM,
 			rte_socket_id());
 	if (!vcrypto->mbuf_pool) {
 		VC_LOG_ERR("Failed to creath mbuf pool");
@@ -1574,6 +1582,7 @@ rte_vhost_crypto_fetch_requests(int vid, uint32_t qid,
 		struct rte_crypto_op **ops, uint16_t nb_ops)
 {
 	struct rte_mbuf *mbufs[VHOST_CRYPTO_MAX_BURST_SIZE * 2];
+	struct vhost_crypto_desc descs[VHOST_CRYPTO_MAX_N_DESC];
 	struct virtio_net *dev = get_device(vid);
 	struct vhost_crypto *vcrypto;
 	struct vhost_virtqueue *vq;
@@ -1632,7 +1641,7 @@ rte_vhost_crypto_fetch_requests(int vid, uint32_t qid,
 			op->sym->m_dst->data_off = 0;
 
 			if (unlikely(vhost_crypto_process_one_req(vcrypto, vq,
-					op, head, desc_idx) < 0))
+					op, head, descs, used_idx) < 0))
 				break;
 		}
 
@@ -1661,7 +1670,7 @@ rte_vhost_crypto_fetch_requests(int vid, uint32_t qid,
 			op->sym->m_src->data_off = 0;
 
 			if (unlikely(vhost_crypto_process_one_req(vcrypto, vq,
-					op, head, desc_idx) < 0))
+					op, head, descs, desc_idx) < 0))
 				break;
 		}
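The TOCTOU defence in this last patch boils down to: snapshot the
guest-shared descriptor table into private memory once, then validate and
consume only the snapshot, so the guest cannot change a field between the
check and the use. In outline (types simplified; MAX_N_DESC mirrors
VHOST_CRYPTO_MAX_N_DESC, and the patch likewise rejects indirect tables
larger than that bound):

#include <stdint.h>

struct desc {
	uint64_t addr;
	uint32_t len;
	uint16_t flags;
	uint16_t next;
};

#define F_NEXT     0x1
#define MAX_N_DESC 32

/* Copy a chain out of shared memory; all later checks read `out` only. */
static int
snapshot_chain(const struct desc *shared, uint32_t n_shared,
		struct desc out[MAX_N_DESC])
{
	uint32_t i, idx = 0;

	for (i = 0; i < MAX_N_DESC && i < n_shared; i++) {
		out[i] = shared[idx];	/* one read of each field set */
		if (!(out[i].flags & F_NEXT))
			return (int)(i + 1);	/* chain length */
		idx = out[i].next;	/* taken from the snapshot */
		if (idx >= n_shared)
			return -1;	/* next index out of range */
	}
	return -1;	/* chain too long or looping */
}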