Patch Detail
GET: Show a patch.
PATCH: Update a patch.
PUT: Update a patch.
GET /api/patches/101351/?format=api
[dpdk-dev] [PATCH v6 4/6] ethdev: make fast-path functions to use new flat array

From: Konstantin Ananyev <konstantin.ananyev@intel.com>
To: dev@dpdk.org
Cc: xiaoyun.li@intel.com, anoobj@marvell.com, jerinj@marvell.com,
    ndabilpuram@marvell.com, adwivedi@marvell.com,
    shepard.siegel@atomicrules.com, ed.czeck@atomicrules.com,
    john.miller@atomicrules.com, irusskikh@marvell.com,
    ajit.khaparde@broadcom.com, somnath.kotur@broadcom.com,
    rahul.lakkireddy@chelsio.com, hemant.agrawal@nxp.com,
    sachin.saxena@oss.nxp.com, haiyue.wang@intel.com, johndale@cisco.com,
    hyonkim@cisco.com, qi.z.zhang@intel.com, xiao.w.wang@intel.com,
    humin29@huawei.com, yisen.zhuang@huawei.com, oulijun@huawei.com,
    beilei.xing@intel.com, jingjing.wu@intel.com, qiming.yang@intel.com,
    matan@nvidia.com, viacheslavo@nvidia.com, sthemmin@microsoft.com,
    longli@microsoft.com, heinrich.kuhn@corigine.com, kirankumark@marvell.com,
    andrew.rybchenko@oktetlabs.ru, mczekaj@marvell.com,
    jiawenwu@trustnetic.com, jianwang@trustnetic.com,
    maxime.coquelin@redhat.com, chenbo.xia@intel.com, thomas@monjalon.net,
    ferruh.yigit@intel.com, mdr@ashroe.eu, jay.jayatheerthan@intel.com,
    Konstantin Ananyev <konstantin.ananyev@intel.com>
Date: Wed, 13 Oct 2021 14:37:02 +0100
Message-Id: <20211013133704.31296-5-konstantin.ananyev@intel.com>
Project: DPDK (dev.dpdk.org)
Series: "hide eth dev related structures" (v6, series 19596)
State: accepted (archived)
Delegated to: Ferruh Yigit
Checks: warning
Web URL: https://patches.dpdk.org/project/dpdk/patch/20211013133704.31296-5-konstantin.ananyev@intel.com/

Rework fast-path ethdev functions to use rte_eth_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in user applications are required)
and PMD developers (no changes in PMDs are required).
One extra thing to note: Rx/Tx callback invocation will cause an extra
function call with these changes. That might cause some insignificant
slowdown for code paths where Rx/Tx callbacks are heavily involved.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/ethdev_private.c |  31 +++++
 lib/ethdev/rte_ethdev.h     | 270 +++++++++++++++++++++++++-----------
 lib/ethdev/version.map      |   3 +
 3 files changed, 226 insertions(+), 78 deletions(-)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index d810c3a1d4..c905c2df6f 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
 	fpo->txq.data = dev->data->tx_queues;
 	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
 }
+
+uint16_t
+rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+	void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
+				nb_pkts, cb->param);
+		cb = cb->next;
+	}
+
+	return nb_rx;
+}
+
+uint16_t
+rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
+				cb->param);
+		cb = cb->next;
+	}
+
+	return nb_pkts;
+}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 4007bd0e73..f4c92b3b5e 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4884,6 +4884,33 @@ int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features);
 
 #include <rte_ethdev_core.h>
 
+/**
+ * @internal
+ * Helper routine for rte_eth_rx_burst().
+ * Should be called at exit from PMD's rte_eth_rx_bulk implementation.
+ * Does necessary post-processing - invokes Rx callbacks if any, etc.
+ *
+ * @param port_id
+ *  The port identifier of the Ethernet device.
+ * @param queue_id
+ *  The index of the receive queue from which to retrieve input packets.
+ * @param rx_pkts
+ *  The address of an array of pointers to *rte_mbuf* structures that
+ *  have been retrieved from the device.
+ * @param nb_rx
+ *  The number of packets that were retrieved from the device.
+ * @param nb_pkts
+ *  The number of elements in @p rx_pkts array.
+ * @param opaque
+ *  Opaque pointer of Rx queue callback related data.
+ *
+ * @return
+ *  The number of packets effectively supplied to the @p rx_pkts array.
+ */
+uint16_t rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
+		struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+		void *opaque);
+
 /**
  *
  * Retrieve a burst of input packets from a receive queue of an Ethernet
@@ -4975,39 +5002,51 @@ static inline uint16_t
 rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	uint16_t nb_rx;
+	struct rte_eth_fp_ops *p;
+	void *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_RX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
 
-	if (queue_id >= dev->data->nb_rx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif
-	nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
-				 rx_pkts, nb_pkts);
 
-#ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
+	nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
 
-	/* __ATOMIC_RELEASE memory order was used when the
-	 * call back was inserted into the list.
-	 * Since there is a clear dependency between loading
-	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
-	 * not required.
-	 */
-	cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
+#ifdef RTE_ETHDEV_RXTX_CALLBACKS
+	{
+		void *cb;
+
+		/* __ATOMIC_RELEASE memory order was used when the
+		 * call back was inserted into the list.
+		 * Since there is a clear dependency between loading
+		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+		 * not required.
+		 */
+		cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id],
 				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
-						nb_pkts, cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
+		if (unlikely(cb != NULL))
+			nb_rx = rte_eth_call_rx_callbacks(port_id, queue_id,
+					rx_pkts, nb_rx, nb_pkts, cb);
 	}
 #endif
 
@@ -5031,16 +5070,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 static inline int
 rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_fp_ops *p;
+	void *qd;
+
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	dev = &rte_eth_devices[port_id];
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
-	if (queue_id >= dev->data->nb_rx_queues ||
-	    dev->data->rx_queues[queue_id] == NULL)
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
+	if (qd == NULL)
 		return -EINVAL;
 
-	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
+	return (int)(*p->rx_queue_count)(qd);
 }
 
 /**@{@name Rx hardware descriptor states
@@ -5088,21 +5138,30 @@ static inline int
 rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 	uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *rxq;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_RX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_RX
-	if (queue_id >= dev->data->nb_rx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
-	rxq = dev->data->rx_queues[queue_id];
-
-	return (*dev->rx_descriptor_status)(rxq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
+	return (*p->rx_descriptor_status)(qd, offset);
 }
 
 /**@{@name Tx hardware descriptor states
@@ -5149,23 +5208,54 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
 	uint16_t queue_id, uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *txq;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
-	txq = dev->data->tx_queues[queue_id];
-
-	return (*dev->tx_descriptor_status)(txq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
+	return (*p->tx_descriptor_status)(qd, offset);
 }
 
+/**
+ * @internal
+ * Helper routine for rte_eth_tx_burst().
+ * Should be called before entry PMD's rte_eth_tx_bulk implementation.
+ * Does necessary pre-processing - invokes Tx callbacks if any, etc.
+ *
+ * @param port_id
+ *  The port identifier of the Ethernet device.
+ * @param queue_id
+ *  The index of the transmit queue through which output packets must be
+ *  sent.
+ * @param tx_pkts
+ *  The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
+ *  which contain the output packets.
+ * @param nb_pkts
+ *  The maximum number of packets to transmit.
+ * @return
+ *  The number of output packets to transmit.
+ */
+uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
+
 /**
  * Send a burst of output packets on a transmit queue of an Ethernet device.
  *
@@ -5236,42 +5326,55 @@ static inline uint16_t
 rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct rte_eth_fp_ops *p;
+	void *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_TX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
 
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
-
-	/* __ATOMIC_RELEASE memory order was used when the
-	 * call back was inserted into the list.
-	 * Since there is a clear dependency between loading
-	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
-	 * not required.
-	 */
-	cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
+	{
+		void *cb;
+
+		/* __ATOMIC_RELEASE memory order was used when the
+		 * call back was inserted into the list.
+		 * Since there is a clear dependency between loading
+		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+		 * not required.
+		 */
+		cb = __atomic_load_n((void **)&p->txq.clbk[queue_id],
 				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
-					cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
+		if (unlikely(cb != NULL))
+			nb_pkts = rte_eth_call_tx_callbacks(port_id, queue_id,
+					tx_pkts, nb_pkts, cb);
 	}
 #endif
 
-	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
-		nb_pkts);
-	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
+	nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
+
+	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
+	return nb_pkts;
 }
 
 /**
@@ -5334,31 +5437,42 @@ static inline uint16_t
 rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
 		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (!rte_eth_dev_is_valid_port(port_id)) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
 		rte_errno = ENODEV;
 		return 0;
 	}
 #endif
 
-	dev = &rte_eth_devices[port_id];
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+		rte_errno = ENODEV;
+		return 0;
+	}
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		rte_errno = EINVAL;
 		return 0;
 	}
 #endif
 
-	if (!dev->tx_pkt_prepare)
+	if (!p->tx_pkt_prepare)
 		return nb_pkts;
 
-	return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
-			tx_pkts, nb_pkts);
+	return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
 }
 
 #else
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 29fb71f1af..61011b110a 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -7,6 +7,8 @@ DPDK_22 {
 	rte_eth_allmulticast_disable;
 	rte_eth_allmulticast_enable;
 	rte_eth_allmulticast_get;
+	rte_eth_call_rx_callbacks;
+	rte_eth_call_tx_callbacks;
 	rte_eth_dev_adjust_nb_rx_tx_desc;
 	rte_eth_dev_callback_register;
 	rte_eth_dev_callback_unregister;
@@ -76,6 +78,7 @@ DPDK_22 {
 	rte_eth_find_next_of;
 	rte_eth_find_next_owned_by;
 	rte_eth_find_next_sibling;
+	rte_eth_fp_ops;
 	rte_eth_iterator_cleanup;
 	rte_eth_iterator_init;
 	rte_eth_iterator_next;