From patchwork Fri Sep 25 11:09:02 2020
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 78823
X-Patchwork-Delegate: thomas@monjalon.net
From: Bruce Richardson
To: dev@dpdk.org
Cc: patrick.fu@intel.com, Bruce Richardson, Kevin Laatz
Date: Fri, 25 Sep 2020 12:09:02 +0100
Message-Id: <20200925110910.284098-18-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200925110910.284098-1-bruce.richardson@intel.com>
References: <20200721095140.719297-1-bruce.richardson@intel.com>
 <20200925110910.284098-1-bruce.richardson@intel.com>
Subject: [dpdk-dev] [PATCH v3 17/25] raw/ioat: add configure function for
 idxd devices

Add configure function for idxd devices, taking the same parameters as the
existing configure function for ioat. The ring_size parameter is used to
compute the maximum number of bursts to be supported by the driver, given
that the hardware works on individual bursts of descriptors at a time.
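To illustrate the sizing described above: the requested ring_size is divided
into fixed-size bursts of BATCH_SIZE descriptors each, and the resulting batch
count is clamped to what the device can track, shrinking the ring to match. A
minimal standalone sketch of that arithmetic follows; the 64-descriptor batch
size and the 128-batch device limit are assumed values for illustration only
(the real ones come from the driver header and from the hardware):

#include <stdint.h>
#include <stdio.h>

#define BATCH_SIZE 64	/* descriptors per burst; assumed value for the example */

int
main(void)
{
	uint16_t dev_max_batches = 128;	/* per-device limit; assumed for the example */
	uint16_t ring_size = 16384;	/* ring size requested by the application */
	uint16_t max_batches = ring_size / BATCH_SIZE;	/* 256 batches requested */

	/* clamp to what the hardware can track, shrinking the ring to match */
	if (max_batches > dev_max_batches) {
		max_batches = dev_max_batches;
		ring_size = max_batches * BATCH_SIZE;	/* 128 * 64 = 8192 */
	}
	printf("using %u descriptors in %u batches\n", ring_size, max_batches);
	return 0;
}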
Signed-off-by: Bruce Richardson
Reviewed-by: Kevin Laatz
---
 drivers/raw/ioat/idxd_pci.c            |  1 +
 drivers/raw/ioat/idxd_vdev.c           |  1 +
 drivers/raw/ioat/ioat_common.c         | 64 ++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h        |  3 ++
 drivers/raw/ioat/rte_ioat_rawdev_fns.h |  1 +
 5 files changed, 70 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 113ee98e8..2d8dbc2f2 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -55,6 +55,7 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
+		.dev_configure = idxd_dev_configure,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 31d8916d0..aa6a5a9aa 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -34,6 +34,7 @@ static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
+		.dev_configure = idxd_dev_configure,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 672241351..5173c331c 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -44,6 +44,70 @@ idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
 	return 0;
 }
 
+int
+idxd_dev_configure(const struct rte_rawdev *dev,
+		rte_rawdev_obj_t config, size_t config_size)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+	struct rte_ioat_rawdev_config *cfg = config;
+	uint16_t max_desc = cfg->ring_size;
+	uint16_t max_batches = max_desc / BATCH_SIZE;
+	uint16_t i;
+
+	if (config_size != sizeof(*cfg))
+		return -EINVAL;
+
+	if (dev->started) {
+		IOAT_PMD_ERR("%s: Error, device is started.", __func__);
+		return -EAGAIN;
+	}
+
+	rte_idxd->hdls_disable = cfg->hdls_disable;
+
+	/* limit the batches to what can be stored in hardware */
+	if (max_batches > idxd->max_batches) {
+		IOAT_PMD_DEBUG("Ring size of %u is too large for this device, need to limit to %u batches of %u",
+				max_desc, idxd->max_batches, BATCH_SIZE);
+		max_batches = idxd->max_batches;
+		max_desc = max_batches * BATCH_SIZE;
+	}
+	if (!rte_is_power_of_2(max_desc))
+		max_desc = rte_align32pow2(max_desc);
+	IOAT_PMD_DEBUG("Rawdev %u using %u descriptors in %u batches",
+			dev->dev_id, max_desc, max_batches);
+
+	/* in case we are reconfiguring a device, free any existing memory */
+	rte_free(rte_idxd->batch_ring);
+	rte_free(rte_idxd->hdl_ring);
+
+	rte_idxd->batch_ring = rte_zmalloc(NULL,
+			sizeof(*rte_idxd->batch_ring) * max_batches, 0);
+	if (rte_idxd->batch_ring == NULL)
+		return -ENOMEM;
+
+	rte_idxd->hdl_ring = rte_zmalloc(NULL,
+			sizeof(*rte_idxd->hdl_ring) * max_desc, 0);
+	if (rte_idxd->hdl_ring == NULL) {
+		rte_free(rte_idxd->batch_ring);
+		rte_idxd->batch_ring = NULL;
+		return -ENOMEM;
+	}
+	rte_idxd->batch_ring_sz = max_batches;
+	rte_idxd->hdl_ring_sz = max_desc;
+
+	for (i = 0; i < rte_idxd->batch_ring_sz; i++) {
+		struct rte_idxd_desc_batch *b = &rte_idxd->batch_ring[i];
+		b->batch_desc.completion = rte_mem_virt2iova(&b->comp);
+		b->batch_desc.desc_addr = rte_mem_virt2iova(&b->null_desc);
+		b->batch_desc.op_flags = (idxd_op_batch << IDXD_CMD_OP_SHIFT) |
+				IDXD_FLAG_COMPLETION_ADDR_VALID |
+				IDXD_FLAG_REQUEST_COMPLETION;
+	}
+
+	return 0;
+}
+
 int
 idxd_rawdev_create(const char *name, struct rte_device *dev,
 		const struct idxd_rawdev *base_idxd,
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index f521c85a1..aba70d8d7 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -59,6 +59,9 @@ extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 
 extern int idxd_rawdev_close(struct rte_rawdev *dev);
 
+extern int idxd_dev_configure(const struct rte_rawdev *dev,
+		rte_rawdev_obj_t config, size_t config_size);
+
 extern int idxd_rawdev_test(uint16_t dev_id);
 
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 178c432dd..e9cdce016 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -187,6 +187,7 @@ struct rte_idxd_rawdev {
 	uint16_t next_ret_hdl;   /* the next user hdl to return */
 	uint16_t last_completed_hdl; /* the last user hdl that has completed */
 	uint16_t next_free_hdl; /* where the handle for next op will go */
+	uint16_t hdls_disable;  /* disable tracking completion handles */
 	struct rte_idxd_user_hdl *hdl_ring;
 	struct rte_idxd_desc_batch *batch_ring;
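
For context, an application reaches this new configure path through the
generic rawdev API, passing the ioat/idxd config struct as the device-private
configuration. Below is a rough, untested sketch; the helper name, dev_id and
ring size are placeholders, and it assumes the rawdev configure call that
takes the private config size alongside rte_rawdev_info:

#include <rte_rawdev.h>
#include <rte_ioat_rawdev.h>

/* Hypothetical helper: configure and start one ioat/idxd rawdev. */
static int
setup_dma_rawdev(uint16_t dev_id, uint16_t ring_size)
{
	struct rte_ioat_rawdev_config cfg = {
		.ring_size = ring_size,	/* driver may limit/round this as above */
		.hdls_disable = 0,	/* keep completion-handle tracking enabled */
	};
	struct rte_rawdev_info info = { .dev_private = &cfg };
	int ret;

	ret = rte_rawdev_configure(dev_id, &info, sizeof(cfg));
	if (ret != 0)
		return ret;
	return rte_rawdev_start(dev_id);
}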