get:
Show a patch.

patch:
Partially update a patch.

put:
Update a patch.
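The three methods above map directly onto plain HTTP verbs against `/api/patches/{id}/`, so the endpoint can be scripted without any client library. A minimal sketch using only the Python standard library (the ID `7443` is taken from the example response below; `PUT`/`PATCH` would additionally require an authentication token, which is not shown):

```python
import json
import urllib.request

API = "https://patches.dpdk.org/api"

def patch_url(patch_id: int) -> str:
    # Canonical URL of a single patch resource.
    return f"{API}/patches/{patch_id}/"

def get_patch(patch_id: int) -> dict:
    # GET returns the JSON document shown in the example response below.
    with urllib.request.urlopen(patch_url(patch_id)) as resp:
        return json.load(resp)
```

A `PATCH` or `PUT` request against the same URL updates writable fields such as `state` or `delegate`, and is only accepted from authenticated maintainers of the project.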

GET /api/patches/7443/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 7443,
    "url": "https://patches.dpdk.org/api/patches/7443/?format=api",
    "web_url": "https://patches.dpdk.org/project/dpdk/patch/1444067692-29645-4-git-send-email-adrien.mazarguil@6wind.com/",
    "project": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<1444067692-29645-4-git-send-email-adrien.mazarguil@6wind.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/1444067692-29645-4-git-send-email-adrien.mazarguil@6wind.com",
    "date": "2015-10-05T17:54:38",
    "name": "[dpdk-dev,03/17] mlx5: refactor RX code for the new Verbs RSS API",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "cbad1624d0974968ca99a2b575ce71c42320edd4",
    "submitter": {
        "id": 165,
        "url": "https://patches.dpdk.org/api/people/165/?format=api",
        "name": "Adrien Mazarguil",
        "email": "adrien.mazarguil@6wind.com"
    },
    "delegate": null,
    "mbox": "https://patches.dpdk.org/project/dpdk/patch/1444067692-29645-4-git-send-email-adrien.mazarguil@6wind.com/mbox/",
    "series": [],
    "comments": "https://patches.dpdk.org/api/patches/7443/comments/",
    "check": "pending",
    "checks": "https://patches.dpdk.org/api/patches/7443/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@dpdk.org",
        "Delivered-To": "patchwork@dpdk.org",
        "Received": [
            "from [92.243.14.124] (localhost [IPv6:::1])\n\tby dpdk.org (Postfix) with ESMTP id 138779396;\n\tMon,  5 Oct 2015 19:55:20 +0200 (CEST)",
            "from mail-wi0-f171.google.com (mail-wi0-f171.google.com\n\t[209.85.212.171]) by dpdk.org (Postfix) with ESMTP id 75F2D9366\n\tfor <dev@dpdk.org>; Mon,  5 Oct 2015 19:55:19 +0200 (CEST)",
            "by wiclk2 with SMTP id lk2so132469672wic.0\n\tfor <dev@dpdk.org>; Mon, 05 Oct 2015 10:55:19 -0700 (PDT)",
            "from 6wind.com (guy78-3-82-239-227-177.fbx.proxad.net.\n\t[82.239.227.177]) by smtp.gmail.com with ESMTPSA id\n\tpb4sm28096422wjb.8.2015.10.05.10.55.17\n\t(version=TLSv1.2 cipher=RC4-SHA bits=128/128);\n\tMon, 05 Oct 2015 10:55:18 -0700 (PDT)"
        ],
        "X-Google-DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20130820;\n\th=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to\n\t:references;\n\tbh=0vFSo8Y2gyFyuhqOuT/OxRH6AeZFm27+NoDa5CMAwDI=;\n\tb=WYbTSsrFqwFsoeG5vH/51iA9rd2U37kqAHtjie9Zclvg2CE5WCJ8k7NfBkYLw/+69g\n\tiXgT6ngxVzqsrNnTa/Q4PTrw9JAiDn7CXOVfmvQ3Jm2RUiMB9oFP32DaR9mameU/im3t\n\tO1duEEsyZ0KEJEFxBpsiOnTouXaxfBWhsDU2TINREz718jnEfZkXEt0kOB3MMLA5Mteq\n\tgvdrv6kXcatjdN2vhjxh8C9ZEl7ERNeGuGAz3mC/V5XsGXxe5l3ecHC56onsxz8Bc8fo\n\tJp83euQhZWMJDTOuAJLp2U6WG65+ualwqRT/PEm7OWKX/sevPMLEHZ/RKNQCeffoXztP\n\tRdcA==",
        "X-Gm-Message-State": "ALoCoQl3VtFF60ae095+C6DGeDH4S36g2Jzyd2e8cn8ZNownkXDo5sTJM+H7oc2Mpi1K2wCltZYO",
        "X-Received": "by 10.181.13.241 with SMTP id fb17mr12379281wid.45.1444067719335;\n\tMon, 05 Oct 2015 10:55:19 -0700 (PDT)",
        "From": "Adrien Mazarguil <adrien.mazarguil@6wind.com>",
        "To": "dev@dpdk.org",
        "Date": "Mon,  5 Oct 2015 19:54:38 +0200",
        "Message-Id": "<1444067692-29645-4-git-send-email-adrien.mazarguil@6wind.com>",
        "X-Mailer": "git-send-email 2.1.0",
        "In-Reply-To": "<1444067692-29645-1-git-send-email-adrien.mazarguil@6wind.com>",
        "References": "<1444067692-29645-1-git-send-email-adrien.mazarguil@6wind.com>",
        "Subject": "[dpdk-dev] [PATCH 03/17] mlx5: refactor RX code for the new Verbs\n\tRSS API",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "patches and discussions about DPDK <dev.dpdk.org>",
        "List-Unsubscribe": "<http://dpdk.org/ml/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://dpdk.org/ml/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<http://dpdk.org/ml/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "The new Verbs RSS API is lower-level than the previous one and much more\nflexible but requires RX queues to use Work Queues (WQs) internally instead\nof Queue Pairs (QPs), which are grouped in an indirection table used by a\nnew kind of hash RX QPs.\n\nHash RX QPs and the indirection table together replace the parent RSS QP\nwhile WQs are mostly similar to child QPs.\n\nRSS hash key is not configurable yet.\n\nSummary of changes:\n\n- Individual DPDK RX queues do not store flow properties anymore, this info\n  is now part of the hash RX queues.\n- All functions affecting the parent queue when RSS is enabled or the basic\n  queues otherwise are modified to affect hash RX queues instead.\n- Hash RX queues are also used when a single DPDK RX queue is configured (no\n  RSS) to remove that special case.\n- Hash RX queues and indirection table are created/destroyed when device\n  is started/stopped in addition to create/destroy flows.\n- Contrary to QPs, WQs are moved to the \"ready\" state before posting RX\n  buffers, otherwise they are ignored.\n- Resource domain information is added to WQs for better performance.\n- CQs are not resized anymore when switching between non-SG and SG modes as\n  it does not work correctly with WQs. Use the largest possible size\n  instead, since CQ size does not have to be the same as the number of\n  elements in the RX queue. 
This also applies to the maximum number of\n  outstanding WRs in a WQ (max_recv_wr).\n\nSigned-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>\nSigned-off-by: Olga Shern <olgas@mellanox.com>\nSigned-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>\nSigned-off-by: Or Ami <ora@mellanox.com>\n---\n drivers/net/mlx5/Makefile       |   4 -\n drivers/net/mlx5/mlx5.c         |  38 +--\n drivers/net/mlx5/mlx5.h         |  25 +-\n drivers/net/mlx5/mlx5_ethdev.c  |  53 +---\n drivers/net/mlx5/mlx5_mac.c     | 186 +++++++------\n drivers/net/mlx5/mlx5_rxmode.c  | 295 +++++++++++----------\n drivers/net/mlx5/mlx5_rxq.c     | 559 +++++++++++++++++++++-------------------\n drivers/net/mlx5/mlx5_rxtx.c    |   4 +-\n drivers/net/mlx5/mlx5_rxtx.h    |  23 +-\n drivers/net/mlx5/mlx5_trigger.c |  86 ++-----\n drivers/net/mlx5/mlx5_vlan.c    |  44 +---\n 11 files changed, 641 insertions(+), 676 deletions(-)",
    "diff": "diff --git a/drivers/net/mlx5/Makefile b/drivers/net/mlx5/Makefile\nindex 8b1e32b..938f924 100644\n--- a/drivers/net/mlx5/Makefile\n+++ b/drivers/net/mlx5/Makefile\n@@ -112,10 +112,6 @@ endif\n mlx5_autoconf.h: $(RTE_SDK)/scripts/auto-config-h.sh\n \t$Q $(RM) -f -- '$@'\n \t$Q sh -- '$<' '$@' \\\n-\t\tRSS_SUPPORT \\\n-\t\tinfiniband/verbs.h \\\n-\t\tenum IBV_EXP_DEVICE_UD_RSS $(AUTOCONF_OUTPUT)\n-\t$Q sh -- '$<' '$@' \\\n \t\tHAVE_EXP_QUERY_DEVICE \\\n \t\tinfiniband/verbs.h \\\n \t\ttype 'struct ibv_exp_device_attr' $(AUTOCONF_OUTPUT)\ndiff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c\nindex 47070f8..a316989 100644\n--- a/drivers/net/mlx5/mlx5.c\n+++ b/drivers/net/mlx5/mlx5.c\n@@ -85,6 +85,13 @@ mlx5_dev_close(struct rte_eth_dev *dev)\n \tDEBUG(\"%p: closing device \\\"%s\\\"\",\n \t      (void *)dev,\n \t      ((priv->ctx != NULL) ? priv->ctx->device->name : \"\"));\n+\t/* In case mlx5_dev_stop() has not been called. */\n+\tif (priv->started) {\n+\t\tpriv_allmulticast_disable(priv);\n+\t\tpriv_promiscuous_disable(priv);\n+\t\tpriv_mac_addrs_disable(priv);\n+\t\tpriv_destroy_hash_rxqs(priv);\n+\t}\n \t/* Prevent crashes when queues are still in use. 
*/\n \tdev->rx_pkt_burst = removed_rx_burst;\n \tdev->tx_pkt_burst = removed_tx_burst;\n@@ -116,8 +123,6 @@ mlx5_dev_close(struct rte_eth_dev *dev)\n \t\tpriv->txqs_n = 0;\n \t\tpriv->txqs = NULL;\n \t}\n-\tif (priv->rss)\n-\t\trxq_cleanup(&priv->rxq_parent);\n \tif (priv->pd != NULL) {\n \t\tassert(priv->ctx != NULL);\n \t\tclaim_zero(ibv_dealloc_pd(priv->pd));\n@@ -297,9 +302,6 @@ mlx5_pci_devinit(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)\n \n #ifdef HAVE_EXP_QUERY_DEVICE\n \t\texp_device_attr.comp_mask = IBV_EXP_DEVICE_ATTR_EXP_CAP_FLAGS;\n-#ifdef RSS_SUPPORT\n-\t\texp_device_attr.comp_mask |= IBV_EXP_DEVICE_ATTR_RSS_TBL_SZ;\n-#endif /* RSS_SUPPORT */\n #endif /* HAVE_EXP_QUERY_DEVICE */\n \n \t\tDEBUG(\"using port %u (%08\" PRIx32 \")\", port, test);\n@@ -349,32 +351,6 @@ mlx5_pci_devinit(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)\n \t\t\tERROR(\"ibv_exp_query_device() failed\");\n \t\t\tgoto port_error;\n \t\t}\n-#ifdef RSS_SUPPORT\n-\t\tif ((exp_device_attr.exp_device_cap_flags &\n-\t\t     IBV_EXP_DEVICE_QPG) &&\n-\t\t    (exp_device_attr.exp_device_cap_flags &\n-\t\t     IBV_EXP_DEVICE_UD_RSS) &&\n-\t\t    (exp_device_attr.comp_mask &\n-\t\t     IBV_EXP_DEVICE_ATTR_RSS_TBL_SZ) &&\n-\t\t    (exp_device_attr.max_rss_tbl_sz > 0)) {\n-\t\t\tpriv->hw_qpg = 1;\n-\t\t\tpriv->hw_rss = 1;\n-\t\t\tpriv->max_rss_tbl_sz = exp_device_attr.max_rss_tbl_sz;\n-\t\t} else {\n-\t\t\tpriv->hw_qpg = 0;\n-\t\t\tpriv->hw_rss = 0;\n-\t\t\tpriv->max_rss_tbl_sz = 0;\n-\t\t}\n-\t\tpriv->hw_tss = !!(exp_device_attr.exp_device_cap_flags &\n-\t\t\t\t  IBV_EXP_DEVICE_UD_TSS);\n-\t\tDEBUG(\"device flags: %s%s%s\",\n-\t\t      (priv->hw_qpg ? \"IBV_DEVICE_QPG \" : \"\"),\n-\t\t      (priv->hw_tss ? \"IBV_DEVICE_TSS \" : \"\"),\n-\t\t      (priv->hw_rss ? 
\"IBV_DEVICE_RSS \" : \"\"));\n-\t\tif (priv->hw_rss)\n-\t\t\tDEBUG(\"maximum RSS indirection table size: %u\",\n-\t\t\t      exp_device_attr.max_rss_tbl_sz);\n-#endif /* RSS_SUPPORT */\n \n \t\tpriv->hw_csum =\n \t\t\t((exp_device_attr.exp_device_cap_flags &\ndiff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h\nindex a818703..9720e96 100644\n--- a/drivers/net/mlx5/mlx5.h\n+++ b/drivers/net/mlx5/mlx5.h\n@@ -101,20 +101,19 @@ struct priv {\n \tunsigned int started:1; /* Device started, flows enabled. */\n \tunsigned int promisc:1; /* Device in promiscuous mode. */\n \tunsigned int allmulti:1; /* Device receives all multicast packets. */\n-\tunsigned int hw_qpg:1; /* QP groups are supported. */\n-\tunsigned int hw_tss:1; /* TSS is supported. */\n-\tunsigned int hw_rss:1; /* RSS is supported. */\n \tunsigned int hw_csum:1; /* Checksum offload is supported. */\n \tunsigned int hw_csum_l2tun:1; /* Same for L2 tunnels. */\n-\tunsigned int rss:1; /* RSS is enabled. */\n \tunsigned int vf:1; /* This is a VF device. */\n-\tunsigned int max_rss_tbl_sz; /* Maximum number of RSS queues. */\n \t/* RX/TX queues. */\n-\tstruct rxq rxq_parent; /* Parent queue when RSS is enabled. */\n \tunsigned int rxqs_n; /* RX queues array size. */\n \tunsigned int txqs_n; /* TX queues array size. */\n \tstruct rxq *(*rxqs)[]; /* RX queues. */\n \tstruct txq *(*txqs)[]; /* TX queues. */\n+\t/* Indirection table referencing all RX WQs. */\n+\tstruct ibv_exp_rwq_ind_table *ind_table;\n+\t/* Hash RX QPs feeding the indirection table. */\n+\tstruct hash_rxq (*hash_rxqs)[];\n+\tunsigned int hash_rxqs_n; /* Hash RX QPs array size. */\n \trte_spinlock_t lock; /* Lock for control functions. 
*/\n };\n \n@@ -161,23 +160,25 @@ int mlx5_ibv_device_to_pci_addr(const struct ibv_device *,\n /* mlx5_mac.c */\n \n int priv_get_mac(struct priv *, uint8_t (*)[ETHER_ADDR_LEN]);\n-void rxq_mac_addrs_del(struct rxq *);\n+void hash_rxq_mac_addrs_del(struct hash_rxq *);\n+void priv_mac_addrs_disable(struct priv *);\n void mlx5_mac_addr_remove(struct rte_eth_dev *, uint32_t);\n-int rxq_mac_addrs_add(struct rxq *);\n+int hash_rxq_mac_addrs_add(struct hash_rxq *);\n int priv_mac_addr_add(struct priv *, unsigned int,\n \t\t      const uint8_t (*)[ETHER_ADDR_LEN]);\n+int priv_mac_addrs_enable(struct priv *);\n void mlx5_mac_addr_add(struct rte_eth_dev *, struct ether_addr *, uint32_t,\n \t\t       uint32_t);\n \n /* mlx5_rxmode.c */\n \n-int rxq_promiscuous_enable(struct rxq *);\n+int priv_promiscuous_enable(struct priv *);\n void mlx5_promiscuous_enable(struct rte_eth_dev *);\n-void rxq_promiscuous_disable(struct rxq *);\n+void priv_promiscuous_disable(struct priv *);\n void mlx5_promiscuous_disable(struct rte_eth_dev *);\n-int rxq_allmulticast_enable(struct rxq *);\n+int priv_allmulticast_enable(struct priv *);\n void mlx5_allmulticast_enable(struct rte_eth_dev *);\n-void rxq_allmulticast_disable(struct rxq *);\n+void priv_allmulticast_disable(struct priv *);\n void mlx5_allmulticast_disable(struct rte_eth_dev *);\n \n /* mlx5_stats.c */\ndiff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c\nindex 181a877..fac685e 100644\n--- a/drivers/net/mlx5/mlx5_ethdev.c\n+++ b/drivers/net/mlx5/mlx5_ethdev.c\n@@ -394,7 +394,6 @@ priv_set_flags(struct priv *priv, unsigned int keep, unsigned int flags)\n  * Ethernet device configuration.\n  *\n  * Prepare the driver for a given number of TX and RX queues.\n- * Allocate parent RSS queue when several RX queues are requested.\n  *\n  * @param dev\n  *   Pointer to Ethernet device structure.\n@@ -408,8 +407,6 @@ dev_configure(struct rte_eth_dev *dev)\n \tstruct priv *priv = dev->data->dev_private;\n \tunsigned int 
rxqs_n = dev->data->nb_rx_queues;\n \tunsigned int txqs_n = dev->data->nb_tx_queues;\n-\tunsigned int tmp;\n-\tint ret;\n \n \tpriv->rxqs = (void *)dev->data->rx_queues;\n \tpriv->txqs = (void *)dev->data->tx_queues;\n@@ -422,47 +419,8 @@ dev_configure(struct rte_eth_dev *dev)\n \t\treturn 0;\n \tINFO(\"%p: RX queues number update: %u -> %u\",\n \t     (void *)dev, priv->rxqs_n, rxqs_n);\n-\t/* If RSS is enabled, disable it first. */\n-\tif (priv->rss) {\n-\t\tunsigned int i;\n-\n-\t\t/* Only if there are no remaining child RX queues. */\n-\t\tfor (i = 0; (i != priv->rxqs_n); ++i)\n-\t\t\tif ((*priv->rxqs)[i] != NULL)\n-\t\t\t\treturn EINVAL;\n-\t\trxq_cleanup(&priv->rxq_parent);\n-\t\tpriv->rss = 0;\n-\t\tpriv->rxqs_n = 0;\n-\t}\n-\tif (rxqs_n <= 1) {\n-\t\t/* Nothing else to do. */\n-\t\tpriv->rxqs_n = rxqs_n;\n-\t\treturn 0;\n-\t}\n-\t/* Allocate a new RSS parent queue if supported by hardware. */\n-\tif (!priv->hw_rss) {\n-\t\tERROR(\"%p: only a single RX queue can be configured when\"\n-\t\t      \" hardware doesn't support RSS\",\n-\t\t      (void *)dev);\n-\t\treturn EINVAL;\n-\t}\n-\t/* Fail if hardware doesn't support that many RSS queues. */\n-\tif (rxqs_n >= priv->max_rss_tbl_sz) {\n-\t\tERROR(\"%p: only %u RX queues can be configured for RSS\",\n-\t\t      (void *)dev, priv->max_rss_tbl_sz);\n-\t\treturn EINVAL;\n-\t}\n-\tpriv->rss = 1;\n-\ttmp = priv->rxqs_n;\n \tpriv->rxqs_n = rxqs_n;\n-\tret = rxq_setup(dev, &priv->rxq_parent, 0, 0, NULL, NULL);\n-\tif (!ret)\n-\t\treturn 0;\n-\t/* Failure, rollback. */\n-\tpriv->rss = 0;\n-\tpriv->rxqs_n = tmp;\n-\tassert(ret > 0);\n-\treturn ret;\n+\treturn 0;\n }\n \n /**\n@@ -671,15 +629,6 @@ mlx5_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)\n \t\t\t\trx_func = mlx5_rx_burst_sp;\n \t\t\tbreak;\n \t\t}\n-\t\t/* Reenable non-RSS queue attributes. No need to check\n-\t\t * for errors at this stage. 
*/\n-\t\tif (!priv->rss) {\n-\t\t\trxq_mac_addrs_add(rxq);\n-\t\t\tif (priv->promisc)\n-\t\t\t\trxq_promiscuous_enable(rxq);\n-\t\t\tif (priv->allmulti)\n-\t\t\t\trxq_allmulticast_enable(rxq);\n-\t\t}\n \t\t/* Scattered burst function takes priority. */\n \t\tif (rxq->sp)\n \t\t\trx_func = mlx5_rx_burst_sp;\ndiff --git a/drivers/net/mlx5/mlx5_mac.c b/drivers/net/mlx5/mlx5_mac.c\nindex f01faf0..971f2cd 100644\n--- a/drivers/net/mlx5/mlx5_mac.c\n+++ b/drivers/net/mlx5/mlx5_mac.c\n@@ -93,83 +93,84 @@ priv_get_mac(struct priv *priv, uint8_t (*mac)[ETHER_ADDR_LEN])\n /**\n  * Delete flow steering rule.\n  *\n- * @param rxq\n- *   Pointer to RX queue structure.\n+ * @param hash_rxq\n+ *   Pointer to hash RX queue structure.\n  * @param mac_index\n  *   MAC address index.\n  * @param vlan_index\n  *   VLAN index.\n  */\n static void\n-rxq_del_flow(struct rxq *rxq, unsigned int mac_index, unsigned int vlan_index)\n+hash_rxq_del_flow(struct hash_rxq *hash_rxq, unsigned int mac_index,\n+\t\t  unsigned int vlan_index)\n {\n #ifndef NDEBUG\n-\tstruct priv *priv = rxq->priv;\n+\tstruct priv *priv = hash_rxq->priv;\n \tconst uint8_t (*mac)[ETHER_ADDR_LEN] =\n \t\t(const uint8_t (*)[ETHER_ADDR_LEN])\n \t\tpriv->mac[mac_index].addr_bytes;\n #endif\n-\tassert(rxq->mac_flow[mac_index][vlan_index] != NULL);\n+\tassert(hash_rxq->mac_flow[mac_index][vlan_index] != NULL);\n \tDEBUG(\"%p: removing MAC address %02x:%02x:%02x:%02x:%02x:%02x index %u\"\n \t      \" (VLAN ID %\" PRIu16 \")\",\n-\t      (void *)rxq,\n+\t      (void *)hash_rxq,\n \t      (*mac)[0], (*mac)[1], (*mac)[2], (*mac)[3], (*mac)[4], (*mac)[5],\n \t      mac_index, priv->vlan_filter[vlan_index].id);\n-\tclaim_zero(ibv_destroy_flow(rxq->mac_flow[mac_index][vlan_index]));\n-\trxq->mac_flow[mac_index][vlan_index] = NULL;\n+\tclaim_zero(ibv_destroy_flow(hash_rxq->mac_flow\n+\t\t\t\t    [mac_index][vlan_index]));\n+\thash_rxq->mac_flow[mac_index][vlan_index] = NULL;\n }\n \n /**\n- * Unregister a MAC address from a RX 
queue.\n+ * Unregister a MAC address from a hash RX queue.\n  *\n- * @param rxq\n- *   Pointer to RX queue structure.\n+ * @param hash_rxq\n+ *   Pointer to hash RX queue structure.\n  * @param mac_index\n  *   MAC address index.\n  */\n static void\n-rxq_mac_addr_del(struct rxq *rxq, unsigned int mac_index)\n+hash_rxq_mac_addr_del(struct hash_rxq *hash_rxq, unsigned int mac_index)\n {\n-\tstruct priv *priv = rxq->priv;\n+\tstruct priv *priv = hash_rxq->priv;\n \tunsigned int i;\n \tunsigned int vlans = 0;\n \n \tassert(mac_index < RTE_DIM(priv->mac));\n-\tif (!BITFIELD_ISSET(rxq->mac_configured, mac_index))\n+\tif (!BITFIELD_ISSET(hash_rxq->mac_configured, mac_index))\n \t\treturn;\n \tfor (i = 0; (i != RTE_DIM(priv->vlan_filter)); ++i) {\n \t\tif (!priv->vlan_filter[i].enabled)\n \t\t\tcontinue;\n-\t\trxq_del_flow(rxq, mac_index, i);\n+\t\thash_rxq_del_flow(hash_rxq, mac_index, i);\n \t\tvlans++;\n \t}\n \tif (!vlans) {\n-\t\trxq_del_flow(rxq, mac_index, 0);\n+\t\thash_rxq_del_flow(hash_rxq, mac_index, 0);\n \t}\n-\tBITFIELD_RESET(rxq->mac_configured, mac_index);\n+\tBITFIELD_RESET(hash_rxq->mac_configured, mac_index);\n }\n \n /**\n- * Unregister all MAC addresses from a RX queue.\n+ * Unregister all MAC addresses from a hash RX queue.\n  *\n- * @param rxq\n- *   Pointer to RX queue structure.\n+ * @param hash_rxq\n+ *   Pointer to hash RX queue structure.\n  */\n void\n-rxq_mac_addrs_del(struct rxq *rxq)\n+hash_rxq_mac_addrs_del(struct hash_rxq *hash_rxq)\n {\n-\tstruct priv *priv = rxq->priv;\n+\tstruct priv *priv = hash_rxq->priv;\n \tunsigned int i;\n \n \tfor (i = 0; (i != RTE_DIM(priv->mac)); ++i)\n-\t\trxq_mac_addr_del(rxq, i);\n+\t\thash_rxq_mac_addr_del(hash_rxq, i);\n }\n \n /**\n  * Unregister a MAC address.\n  *\n- * In RSS mode, the MAC address is unregistered from the parent queue,\n- * otherwise it is unregistered from each queue directly.\n+ * This is done for each hash RX queue.\n  *\n  * @param priv\n  *   Pointer to private structure.\n@@ 
-184,17 +185,27 @@ priv_mac_addr_del(struct priv *priv, unsigned int mac_index)\n \tassert(mac_index < RTE_DIM(priv->mac));\n \tif (!BITFIELD_ISSET(priv->mac_configured, mac_index))\n \t\treturn;\n-\tif (priv->rss) {\n-\t\trxq_mac_addr_del(&priv->rxq_parent, mac_index);\n-\t\tgoto end;\n-\t}\n-\tfor (i = 0; (i != priv->dev->data->nb_rx_queues); ++i)\n-\t\trxq_mac_addr_del((*priv->rxqs)[i], mac_index);\n-end:\n+\tfor (i = 0; (i != priv->hash_rxqs_n); ++i)\n+\t\thash_rxq_mac_addr_del(&(*priv->hash_rxqs)[i], mac_index);\n \tBITFIELD_RESET(priv->mac_configured, mac_index);\n }\n \n /**\n+ * Unregister all MAC addresses from all hash RX queues.\n+ *\n+ * @param priv\n+ *   Pointer to private structure.\n+ */\n+void\n+priv_mac_addrs_disable(struct priv *priv)\n+{\n+\tunsigned int i;\n+\n+\tfor (i = 0; (i != priv->hash_rxqs_n); ++i)\n+\t\thash_rxq_mac_addrs_del(&(*priv->hash_rxqs)[i]);\n+}\n+\n+/**\n  * DPDK callback to remove a MAC address.\n  *\n  * @param dev\n@@ -221,8 +232,8 @@ end:\n /**\n  * Add single flow steering rule.\n  *\n- * @param rxq\n- *   Pointer to RX queue structure.\n+ * @param hash_rxq\n+ *   Pointer to hash RX queue structure.\n  * @param mac_index\n  *   MAC address index to register.\n  * @param vlan_index\n@@ -232,10 +243,11 @@ end:\n  *   0 on success, errno value on failure.\n  */\n static int\n-rxq_add_flow(struct rxq *rxq, unsigned int mac_index, unsigned int vlan_index)\n+hash_rxq_add_flow(struct hash_rxq *hash_rxq, unsigned int mac_index,\n+\t\t  unsigned int vlan_index)\n {\n \tstruct ibv_flow *flow;\n-\tstruct priv *priv = rxq->priv;\n+\tstruct priv *priv = hash_rxq->priv;\n \tconst uint8_t (*mac)[ETHER_ADDR_LEN] =\n \t\t\t(const uint8_t (*)[ETHER_ADDR_LEN])\n \t\t\tpriv->mac[mac_index].addr_bytes;\n@@ -280,18 +292,18 @@ rxq_add_flow(struct rxq *rxq, unsigned int mac_index, unsigned int vlan_index)\n \t};\n \tDEBUG(\"%p: adding MAC address %02x:%02x:%02x:%02x:%02x:%02x index %u\"\n \t      \" (VLAN %s %\" PRIu16 \")\",\n-\t      (void 
*)rxq,\n+\t      (void *)hash_rxq,\n \t      (*mac)[0], (*mac)[1], (*mac)[2], (*mac)[3], (*mac)[4], (*mac)[5],\n \t      mac_index,\n \t      ((vlan_index != -1u) ? \"ID\" : \"index\"),\n \t      ((vlan_index != -1u) ? priv->vlan_filter[vlan_index].id : -1u));\n \t/* Create related flow. */\n \terrno = 0;\n-\tflow = ibv_create_flow(rxq->qp, attr);\n+\tflow = ibv_create_flow(hash_rxq->qp, attr);\n \tif (flow == NULL) {\n \t\t/* It's not clear whether errno is always set in this case. */\n \t\tERROR(\"%p: flow configuration failed, errno=%d: %s\",\n-\t\t      (void *)rxq, errno,\n+\t\t      (void *)hash_rxq, errno,\n \t\t      (errno ? strerror(errno) : \"Unknown error\"));\n \t\tif (errno)\n \t\t\treturn errno;\n@@ -299,16 +311,16 @@ rxq_add_flow(struct rxq *rxq, unsigned int mac_index, unsigned int vlan_index)\n \t}\n \tif (vlan_index == -1u)\n \t\tvlan_index = 0;\n-\tassert(rxq->mac_flow[mac_index][vlan_index] == NULL);\n-\trxq->mac_flow[mac_index][vlan_index] = flow;\n+\tassert(hash_rxq->mac_flow[mac_index][vlan_index] == NULL);\n+\thash_rxq->mac_flow[mac_index][vlan_index] = flow;\n \treturn 0;\n }\n \n /**\n- * Register a MAC address in a RX queue.\n+ * Register a MAC address in a hash RX queue.\n  *\n- * @param rxq\n- *   Pointer to RX queue structure.\n+ * @param hash_rxq\n+ *   Pointer to hash RX queue structure.\n  * @param mac_index\n  *   MAC address index to register.\n  *\n@@ -316,22 +328,22 @@ rxq_add_flow(struct rxq *rxq, unsigned int mac_index, unsigned int vlan_index)\n  *   0 on success, errno value on failure.\n  */\n static int\n-rxq_mac_addr_add(struct rxq *rxq, unsigned int mac_index)\n+hash_rxq_mac_addr_add(struct hash_rxq *hash_rxq, unsigned int mac_index)\n {\n-\tstruct priv *priv = rxq->priv;\n+\tstruct priv *priv = hash_rxq->priv;\n \tunsigned int i;\n \tunsigned int vlans = 0;\n \tint ret;\n \n \tassert(mac_index < RTE_DIM(priv->mac));\n-\tif (BITFIELD_ISSET(rxq->mac_configured, mac_index))\n-\t\trxq_mac_addr_del(rxq, mac_index);\n+\tif 
(BITFIELD_ISSET(hash_rxq->mac_configured, mac_index))\n+\t\thash_rxq_mac_addr_del(hash_rxq, mac_index);\n \t/* Fill VLAN specifications. */\n \tfor (i = 0; (i != RTE_DIM(priv->vlan_filter)); ++i) {\n \t\tif (!priv->vlan_filter[i].enabled)\n \t\t\tcontinue;\n \t\t/* Create related flow. */\n-\t\tret = rxq_add_flow(rxq, mac_index, i);\n+\t\tret = hash_rxq_add_flow(hash_rxq, mac_index, i);\n \t\tif (!ret) {\n \t\t\tvlans++;\n \t\t\tcontinue;\n@@ -339,45 +351,45 @@ rxq_mac_addr_add(struct rxq *rxq, unsigned int mac_index)\n \t\t/* Failure, rollback. */\n \t\twhile (i != 0)\n \t\t\tif (priv->vlan_filter[--i].enabled)\n-\t\t\t\trxq_del_flow(rxq, mac_index, i);\n+\t\t\t\thash_rxq_del_flow(hash_rxq, mac_index, i);\n \t\tassert(ret > 0);\n \t\treturn ret;\n \t}\n \t/* In case there is no VLAN filter. */\n \tif (!vlans) {\n-\t\tret = rxq_add_flow(rxq, mac_index, -1);\n+\t\tret = hash_rxq_add_flow(hash_rxq, mac_index, -1);\n \t\tif (ret)\n \t\t\treturn ret;\n \t}\n-\tBITFIELD_SET(rxq->mac_configured, mac_index);\n+\tBITFIELD_SET(hash_rxq->mac_configured, mac_index);\n \treturn 0;\n }\n \n /**\n- * Register all MAC addresses in a RX queue.\n+ * Register all MAC addresses in a hash RX queue.\n  *\n- * @param rxq\n- *   Pointer to RX queue structure.\n+ * @param hash_rxq\n+ *   Pointer to hash RX queue structure.\n  *\n  * @return\n  *   0 on success, errno value on failure.\n  */\n int\n-rxq_mac_addrs_add(struct rxq *rxq)\n+hash_rxq_mac_addrs_add(struct hash_rxq *hash_rxq)\n {\n-\tstruct priv *priv = rxq->priv;\n+\tstruct priv *priv = hash_rxq->priv;\n \tunsigned int i;\n \tint ret;\n \n \tfor (i = 0; (i != RTE_DIM(priv->mac)); ++i) {\n \t\tif (!BITFIELD_ISSET(priv->mac_configured, i))\n \t\t\tcontinue;\n-\t\tret = rxq_mac_addr_add(rxq, i);\n+\t\tret = hash_rxq_mac_addr_add(hash_rxq, i);\n \t\tif (!ret)\n \t\t\tcontinue;\n \t\t/* Failure, rollback. 
*/\n \t\twhile (i != 0)\n-\t\t\trxq_mac_addr_del(rxq, --i);\n+\t\t\thash_rxq_mac_addr_del(hash_rxq, --i);\n \t\tassert(ret > 0);\n \t\treturn ret;\n \t}\n@@ -387,8 +399,7 @@ rxq_mac_addrs_add(struct rxq *rxq)\n /**\n  * Register a MAC address.\n  *\n- * In RSS mode, the MAC address is registered in the parent queue,\n- * otherwise it is registered in each queue directly.\n+ * This is done for each hash RX queue.\n  *\n  * @param priv\n  *   Pointer to private structure.\n@@ -431,32 +442,23 @@ priv_mac_addr_add(struct priv *priv, unsigned int mac_index,\n \t/* If device isn't started, this is all we need to do. */\n \tif (!priv->started) {\n #ifndef NDEBUG\n-\t\t/* Verify that all queues have this index disabled. */\n-\t\tfor (i = 0; (i != priv->rxqs_n); ++i) {\n-\t\t\tif ((*priv->rxqs)[i] == NULL)\n-\t\t\t\tcontinue;\n+\t\t/* Verify that all hash RX queues have this index disabled. */\n+\t\tfor (i = 0; (i != priv->hash_rxqs_n); ++i) {\n \t\t\tassert(!BITFIELD_ISSET\n-\t\t\t       ((*priv->rxqs)[i]->mac_configured, mac_index));\n+\t\t\t       ((*priv->hash_rxqs)[i].mac_configured,\n+\t\t\t\tmac_index));\n \t\t}\n #endif\n \t\tgoto end;\n \t}\n-\tif (priv->rss) {\n-\t\tret = rxq_mac_addr_add(&priv->rxq_parent, mac_index);\n-\t\tif (ret)\n-\t\t\treturn ret;\n-\t\tgoto end;\n-\t}\n-\tfor (i = 0; (i != priv->rxqs_n); ++i) {\n-\t\tif ((*priv->rxqs)[i] == NULL)\n-\t\t\tcontinue;\n-\t\tret = rxq_mac_addr_add((*priv->rxqs)[i], mac_index);\n+\tfor (i = 0; (i != priv->hash_rxqs_n); ++i) {\n+\t\tret = hash_rxq_mac_addr_add(&(*priv->hash_rxqs)[i], mac_index);\n \t\tif (!ret)\n \t\t\tcontinue;\n \t\t/* Failure, rollback. 
*/\n \t\twhile (i != 0)\n-\t\t\tif ((*priv->rxqs)[(--i)] != NULL)\n-\t\t\t\trxq_mac_addr_del((*priv->rxqs)[i], mac_index);\n+\t\t\thash_rxq_mac_addr_del(&(*priv->hash_rxqs)[--i],\n+\t\t\t\t\t      mac_index);\n \t\treturn ret;\n \t}\n end:\n@@ -465,6 +467,34 @@ end:\n }\n \n /**\n+ * Register all MAC addresses in all hash RX queues.\n+ *\n+ * @param priv\n+ *   Pointer to private structure.\n+ *\n+ * @return\n+ *   0 on success, errno value on failure.\n+ */\n+int\n+priv_mac_addrs_enable(struct priv *priv)\n+{\n+\tunsigned int i;\n+\tint ret;\n+\n+\tfor (i = 0; (i != priv->hash_rxqs_n); ++i) {\n+\t\tret = hash_rxq_mac_addrs_add(&(*priv->hash_rxqs)[i]);\n+\t\tif (!ret)\n+\t\t\tcontinue;\n+\t\t/* Failure, rollback. */\n+\t\twhile (i != 0)\n+\t\t\thash_rxq_mac_addrs_del(&(*priv->hash_rxqs)[--i]);\n+\t\tassert(ret > 0);\n+\t\treturn ret;\n+\t}\n+\treturn 0;\n+}\n+\n+/**\n  * DPDK callback to add a MAC address.\n  *\n  * @param dev\ndiff --git a/drivers/net/mlx5/mlx5_rxmode.c b/drivers/net/mlx5/mlx5_rxmode.c\nindex b4e5493..1f5cd40 100644\n--- a/drivers/net/mlx5/mlx5_rxmode.c\n+++ b/drivers/net/mlx5/mlx5_rxmode.c\n@@ -58,111 +58,142 @@\n #include \"mlx5_rxtx.h\"\n #include \"mlx5_utils.h\"\n \n+static void hash_rxq_promiscuous_disable(struct hash_rxq *);\n+static void hash_rxq_allmulticast_disable(struct hash_rxq *);\n+\n /**\n- * Enable promiscuous mode in a RX queue.\n+ * Enable promiscuous mode in a hash RX queue.\n  *\n- * @param rxq\n- *   Pointer to RX queue structure.\n+ * @param hash_rxq\n+ *   Pointer to hash RX queue structure.\n  *\n  * @return\n  *   0 on success, errno value on failure.\n  */\n-int\n-rxq_promiscuous_enable(struct rxq *rxq)\n+static int\n+hash_rxq_promiscuous_enable(struct hash_rxq *hash_rxq)\n {\n \tstruct ibv_flow *flow;\n \tstruct ibv_flow_attr attr = {\n \t\t.type = IBV_FLOW_ATTR_ALL_DEFAULT,\n \t\t.num_of_specs = 0,\n-\t\t.port = rxq->priv->port,\n+\t\t.port = hash_rxq->priv->port,\n \t\t.flags = 0\n \t};\n \n-\tif 
(rxq->priv->vf)\n+\tif (hash_rxq->priv->vf)\n \t\treturn 0;\n-\tDEBUG(\"%p: enabling promiscuous mode\", (void *)rxq);\n-\tif (rxq->promisc_flow != NULL)\n+\tDEBUG(\"%p: enabling promiscuous mode\", (void *)hash_rxq);\n+\tif (hash_rxq->promisc_flow != NULL)\n \t\treturn EBUSY;\n \terrno = 0;\n-\tflow = ibv_create_flow(rxq->qp, &attr);\n+\tflow = ibv_create_flow(hash_rxq->qp, &attr);\n \tif (flow == NULL) {\n \t\t/* It's not clear whether errno is always set in this case. */\n \t\tERROR(\"%p: flow configuration failed, errno=%d: %s\",\n-\t\t      (void *)rxq, errno,\n+\t\t      (void *)hash_rxq, errno,\n \t\t      (errno ? strerror(errno) : \"Unknown error\"));\n \t\tif (errno)\n \t\t\treturn errno;\n \t\treturn EINVAL;\n \t}\n-\trxq->promisc_flow = flow;\n-\tDEBUG(\"%p: promiscuous mode enabled\", (void *)rxq);\n+\thash_rxq->promisc_flow = flow;\n+\tDEBUG(\"%p: promiscuous mode enabled\", (void *)hash_rxq);\n \treturn 0;\n }\n \n /**\n- * DPDK callback to enable promiscuous mode.\n+ * Enable promiscuous mode in all hash RX queues.\n  *\n- * @param dev\n- *   Pointer to Ethernet device structure.\n+ * @param priv\n+ *   Private structure.\n+ *\n+ * @return\n+ *   0 on success, errno value on failure.\n  */\n-void\n-mlx5_promiscuous_enable(struct rte_eth_dev *dev)\n+int\n+priv_promiscuous_enable(struct priv *priv)\n {\n-\tstruct priv *priv = dev->data->dev_private;\n \tunsigned int i;\n-\tint ret;\n \n-\tpriv_lock(priv);\n-\tif (priv->promisc) {\n-\t\tpriv_unlock(priv);\n-\t\treturn;\n-\t}\n+\tif (priv->promisc)\n+\t\treturn 0;\n \t/* If device isn't started, this is all we need to do. 
*/\n \tif (!priv->started)\n \t\tgoto end;\n-\tif (priv->rss) {\n-\t\tret = rxq_promiscuous_enable(&priv->rxq_parent);\n-\t\tif (ret) {\n-\t\t\tpriv_unlock(priv);\n-\t\t\treturn;\n-\t\t}\n-\t\tgoto end;\n-\t}\n-\tfor (i = 0; (i != priv->rxqs_n); ++i) {\n-\t\tif ((*priv->rxqs)[i] == NULL)\n-\t\t\tcontinue;\n-\t\tret = rxq_promiscuous_enable((*priv->rxqs)[i]);\n+\tfor (i = 0; (i != priv->hash_rxqs_n); ++i) {\n+\t\tstruct hash_rxq *hash_rxq = &(*priv->hash_rxqs)[i];\n+\t\tint ret;\n+\n+\t\tret = hash_rxq_promiscuous_enable(hash_rxq);\n \t\tif (!ret)\n \t\t\tcontinue;\n \t\t/* Failure, rollback. */\n-\t\twhile (i != 0)\n-\t\t\tif ((*priv->rxqs)[--i] != NULL)\n-\t\t\t\trxq_promiscuous_disable((*priv->rxqs)[i]);\n-\t\tpriv_unlock(priv);\n-\t\treturn;\n+\t\twhile (i != 0) {\n+\t\t\thash_rxq = &(*priv->hash_rxqs)[--i];\n+\t\t\thash_rxq_promiscuous_disable(hash_rxq);\n+\t\t}\n+\t\treturn ret;\n \t}\n end:\n \tpriv->promisc = 1;\n-\tpriv_unlock(priv);\n+\treturn 0;\n }\n \n /**\n- * Disable promiscuous mode in a RX queue.\n+ * DPDK callback to enable promiscuous mode.\n  *\n- * @param rxq\n- *   Pointer to RX queue structure.\n+ * @param dev\n+ *   Pointer to Ethernet device structure.\n  */\n void\n-rxq_promiscuous_disable(struct rxq *rxq)\n+mlx5_promiscuous_enable(struct rte_eth_dev *dev)\n {\n-\tif (rxq->priv->vf)\n+\tstruct priv *priv = dev->data->dev_private;\n+\tint ret;\n+\n+\tpriv_lock(priv);\n+\tret = priv_promiscuous_enable(priv);\n+\tif (ret)\n+\t\tERROR(\"cannot enable promiscuous mode: %s\", strerror(ret));\n+\tpriv_unlock(priv);\n+}\n+\n+/**\n+ * Disable promiscuous mode in a hash RX queue.\n+ *\n+ * @param hash_rxq\n+ *   Pointer to hash RX queue structure.\n+ */\n+static void\n+hash_rxq_promiscuous_disable(struct hash_rxq *hash_rxq)\n+{\n+\tif (hash_rxq->priv->vf)\n \t\treturn;\n-\tDEBUG(\"%p: disabling promiscuous mode\", (void *)rxq);\n-\tif (rxq->promisc_flow == NULL)\n+\tDEBUG(\"%p: disabling promiscuous mode\", (void *)hash_rxq);\n+\tif 
(hash_rxq->promisc_flow == NULL)\n \t\treturn;\n-\tclaim_zero(ibv_destroy_flow(rxq->promisc_flow));\n-\trxq->promisc_flow = NULL;\n-\tDEBUG(\"%p: promiscuous mode disabled\", (void *)rxq);\n+\tclaim_zero(ibv_destroy_flow(hash_rxq->promisc_flow));\n+\thash_rxq->promisc_flow = NULL;\n+\tDEBUG(\"%p: promiscuous mode disabled\", (void *)hash_rxq);\n+}\n+\n+/**\n+ * Disable promiscuous mode in all hash RX queues.\n+ *\n+ * @param priv\n+ *   Private structure.\n+ */\n+void\n+priv_promiscuous_disable(struct priv *priv)\n+{\n+\tunsigned int i;\n+\n+\tif (!priv->promisc)\n+\t\treturn;\n+\tfor (i = 0; (i != priv->hash_rxqs_n); ++i)\n+\t\thash_rxq_promiscuous_disable(&(*priv->hash_rxqs)[i]);\n+\tpriv->promisc = 0;\n }\n \n /**\n@@ -175,126 +206,141 @@ void\n mlx5_promiscuous_disable(struct rte_eth_dev *dev)\n {\n \tstruct priv *priv = dev->data->dev_private;\n-\tunsigned int i;\n \n \tpriv_lock(priv);\n-\tif (!priv->promisc) {\n-\t\tpriv_unlock(priv);\n-\t\treturn;\n-\t}\n-\tif (priv->rss) {\n-\t\trxq_promiscuous_disable(&priv->rxq_parent);\n-\t\tgoto end;\n-\t}\n-\tfor (i = 0; (i != priv->rxqs_n); ++i)\n-\t\tif ((*priv->rxqs)[i] != NULL)\n-\t\t\trxq_promiscuous_disable((*priv->rxqs)[i]);\n-end:\n-\tpriv->promisc = 0;\n+\tpriv_promiscuous_disable(priv);\n \tpriv_unlock(priv);\n }\n \n /**\n- * Enable allmulti mode in a RX queue.\n+ * Enable allmulti mode in a hash RX queue.\n  *\n- * @param rxq\n- *   Pointer to RX queue structure.\n+ * @param hash_rxq\n+ *   Pointer to hash RX queue structure.\n  *\n  * @return\n  *   0 on success, errno value on failure.\n  */\n-int\n-rxq_allmulticast_enable(struct rxq *rxq)\n+static int\n+hash_rxq_allmulticast_enable(struct hash_rxq *hash_rxq)\n {\n \tstruct ibv_flow *flow;\n \tstruct ibv_flow_attr attr = {\n \t\t.type = IBV_FLOW_ATTR_MC_DEFAULT,\n \t\t.num_of_specs = 0,\n-\t\t.port = rxq->priv->port,\n+\t\t.port = hash_rxq->priv->port,\n \t\t.flags = 0\n \t};\n \n-\tDEBUG(\"%p: enabling allmulticast mode\", (void *)rxq);\n-\tif 
(rxq->allmulti_flow != NULL)\n+\tDEBUG(\"%p: enabling allmulticast mode\", (void *)hash_rxq);\n+\tif (hash_rxq->allmulti_flow != NULL)\n \t\treturn EBUSY;\n \terrno = 0;\n-\tflow = ibv_create_flow(rxq->qp, &attr);\n+\tflow = ibv_create_flow(hash_rxq->qp, &attr);\n \tif (flow == NULL) {\n \t\t/* It's not clear whether errno is always set in this case. */\n \t\tERROR(\"%p: flow configuration failed, errno=%d: %s\",\n-\t\t      (void *)rxq, errno,\n+\t\t      (void *)hash_rxq, errno,\n \t\t      (errno ? strerror(errno) : \"Unknown error\"));\n \t\tif (errno)\n \t\t\treturn errno;\n \t\treturn EINVAL;\n \t}\n-\trxq->allmulti_flow = flow;\n-\tDEBUG(\"%p: allmulticast mode enabled\", (void *)rxq);\n+\thash_rxq->allmulti_flow = flow;\n+\tDEBUG(\"%p: allmulticast mode enabled\", (void *)hash_rxq);\n \treturn 0;\n }\n \n /**\n- * DPDK callback to enable allmulti mode.\n+ * Enable allmulti mode in all hash RX queues.\n  *\n- * @param dev\n- *   Pointer to Ethernet device structure.\n+ * @param priv\n+ *   Private structure.\n+ *\n+ * @return\n+ *   0 on success, errno value on failure.\n  */\n-void\n-mlx5_allmulticast_enable(struct rte_eth_dev *dev)\n+int\n+priv_allmulticast_enable(struct priv *priv)\n {\n-\tstruct priv *priv = dev->data->dev_private;\n \tunsigned int i;\n-\tint ret;\n \n-\tpriv_lock(priv);\n-\tif (priv->allmulti) {\n-\t\tpriv_unlock(priv);\n-\t\treturn;\n-\t}\n+\tif (priv->allmulti)\n+\t\treturn 0;\n \t/* If device isn't started, this is all we need to do. 
*/\n \tif (!priv->started)\n \t\tgoto end;\n-\tif (priv->rss) {\n-\t\tret = rxq_allmulticast_enable(&priv->rxq_parent);\n-\t\tif (ret) {\n-\t\t\tpriv_unlock(priv);\n-\t\t\treturn;\n-\t\t}\n-\t\tgoto end;\n-\t}\n-\tfor (i = 0; (i != priv->rxqs_n); ++i) {\n-\t\tif ((*priv->rxqs)[i] == NULL)\n-\t\t\tcontinue;\n-\t\tret = rxq_allmulticast_enable((*priv->rxqs)[i]);\n+\tfor (i = 0; (i != priv->hash_rxqs_n); ++i) {\n+\t\tstruct hash_rxq *hash_rxq = &(*priv->hash_rxqs)[i];\n+\t\tint ret;\n+\n+\t\tret = hash_rxq_allmulticast_enable(hash_rxq);\n \t\tif (!ret)\n \t\t\tcontinue;\n \t\t/* Failure, rollback. */\n-\t\twhile (i != 0)\n-\t\t\tif ((*priv->rxqs)[--i] != NULL)\n-\t\t\t\trxq_allmulticast_disable((*priv->rxqs)[i]);\n-\t\tpriv_unlock(priv);\n-\t\treturn;\n+\t\twhile (i != 0) {\n+\t\t\thash_rxq = &(*priv->hash_rxqs)[--i];\n+\t\t\thash_rxq_allmulticast_disable(hash_rxq);\n+\t\t}\n+\t\treturn ret;\n \t}\n end:\n \tpriv->allmulti = 1;\n+\treturn 0;\n+}\n+\n+/**\n+ * DPDK callback to enable allmulti mode.\n+ *\n+ * @param dev\n+ *   Pointer to Ethernet device structure.\n+ */\n+void\n+mlx5_allmulticast_enable(struct rte_eth_dev *dev)\n+{\n+\tstruct priv *priv = dev->data->dev_private;\n+\tint ret;\n+\n+\tpriv_lock(priv);\n+\tret = priv_allmulticast_enable(priv);\n+\tif (ret)\n+\t\tERROR(\"cannot enable allmulticast mode: %s\", strerror(ret));\n \tpriv_unlock(priv);\n }\n \n /**\n- * Disable allmulti mode in a RX queue.\n+ * Disable allmulti mode in a hash RX queue.\n+ *\n+ * @param hash_rxq\n+ *   Pointer to hash RX queue structure.\n+ */\n+static void\n+hash_rxq_allmulticast_disable(struct hash_rxq *hash_rxq)\n+{\n+\tDEBUG(\"%p: disabling allmulticast mode\", (void *)hash_rxq);\n+\tif (hash_rxq->allmulti_flow == NULL)\n+\t\treturn;\n+\tclaim_zero(ibv_destroy_flow(hash_rxq->allmulti_flow));\n+\thash_rxq->allmulti_flow = NULL;\n+\tDEBUG(\"%p: allmulticast mode disabled\", (void *)hash_rxq);\n+}\n+\n+/**\n+ * Disable allmulti mode in all hash RX queues.\n  *\n- * @param rxq\n- 
*   Pointer to RX queue structure.\n+ * @param priv\n+ *   Private structure.\n  */\n void\n-rxq_allmulticast_disable(struct rxq *rxq)\n+priv_allmulticast_disable(struct priv *priv)\n {\n-\tDEBUG(\"%p: disabling allmulticast mode\", (void *)rxq);\n-\tif (rxq->allmulti_flow == NULL)\n+\tunsigned int i;\n+\n+\tif (!priv->allmulti)\n \t\treturn;\n-\tclaim_zero(ibv_destroy_flow(rxq->allmulti_flow));\n-\trxq->allmulti_flow = NULL;\n-\tDEBUG(\"%p: allmulticast mode disabled\", (void *)rxq);\n+\tfor (i = 0; (i != priv->hash_rxqs_n); ++i)\n+\t\thash_rxq_allmulticast_disable(&(*priv->hash_rxqs)[i]);\n+\tpriv->allmulti = 0;\n }\n \n /**\n@@ -307,21 +353,8 @@ void\n mlx5_allmulticast_disable(struct rte_eth_dev *dev)\n {\n \tstruct priv *priv = dev->data->dev_private;\n-\tunsigned int i;\n \n \tpriv_lock(priv);\n-\tif (!priv->allmulti) {\n-\t\tpriv_unlock(priv);\n-\t\treturn;\n-\t}\n-\tif (priv->rss) {\n-\t\trxq_allmulticast_disable(&priv->rxq_parent);\n-\t\tgoto end;\n-\t}\n-\tfor (i = 0; (i != priv->rxqs_n); ++i)\n-\t\tif ((*priv->rxqs)[i] != NULL)\n-\t\t\trxq_allmulticast_disable((*priv->rxqs)[i]);\n-end:\n-\tpriv->allmulti = 0;\n+\tpriv_allmulticast_disable(priv);\n \tpriv_unlock(priv);\n }\ndiff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c\nindex c938d2d..5392221 100644\n--- a/drivers/net/mlx5/mlx5_rxq.c\n+++ b/drivers/net/mlx5/mlx5_rxq.c\n@@ -64,6 +64,224 @@\n #include \"mlx5_utils.h\"\n #include \"mlx5_defs.h\"\n \n+/* Default RSS hash key also used for ConnectX-3. 
*/\n+static uint8_t hash_rxq_default_key[] = {\n+\t0x2c, 0xc6, 0x81, 0xd1,\n+\t0x5b, 0xdb, 0xf4, 0xf7,\n+\t0xfc, 0xa2, 0x83, 0x19,\n+\t0xdb, 0x1a, 0x3e, 0x94,\n+\t0x6b, 0x9e, 0x38, 0xd9,\n+\t0x2c, 0x9c, 0x03, 0xd1,\n+\t0xad, 0x99, 0x44, 0xa7,\n+\t0xd9, 0x56, 0x3d, 0x59,\n+\t0x06, 0x3c, 0x25, 0xf3,\n+\t0xfc, 0x1f, 0xdc, 0x2a,\n+};\n+\n+/**\n+ * Return log2 of the nearest power of two above input value.\n+ *\n+ * @param v\n+ *   Input value.\n+ *\n+ * @return\n+ *   Log2 of the nearest power of two above input value.\n+ */\n+static unsigned int\n+log2above(unsigned int v)\n+{\n+\tunsigned int l;\n+\tunsigned int r;\n+\n+\tfor (l = 0, r = 0; (v >> 1); ++l, v >>= 1)\n+\t\tr |= (v & 1);\n+\treturn (l + r);\n+}\n+\n+/**\n+ * Initialize hash RX queues and indirection table.\n+ *\n+ * @param priv\n+ *   Pointer to private structure.\n+ *\n+ * @return\n+ *   0 on success, errno value on failure.\n+ */\n+int\n+priv_create_hash_rxqs(struct priv *priv)\n+{\n+\tstatic const uint64_t rss_hash_table[] = {\n+\t\t/* TCPv4. */\n+\t\t(IBV_EXP_RX_HASH_SRC_IPV4 | IBV_EXP_RX_HASH_DST_IPV4 |\n+\t\t IBV_EXP_RX_HASH_SRC_PORT_TCP | IBV_EXP_RX_HASH_DST_PORT_TCP),\n+\t\t/* UDPv4. */\n+\t\t(IBV_EXP_RX_HASH_SRC_IPV4 | IBV_EXP_RX_HASH_DST_IPV4 |\n+\t\t IBV_EXP_RX_HASH_SRC_PORT_UDP | IBV_EXP_RX_HASH_DST_PORT_UDP),\n+\t\t/* TCPv6. */\n+\t\t(IBV_EXP_RX_HASH_SRC_IPV6 | IBV_EXP_RX_HASH_DST_IPV6 |\n+\t\t IBV_EXP_RX_HASH_SRC_PORT_TCP | IBV_EXP_RX_HASH_DST_PORT_TCP),\n+\t\t/* UDPv6. */\n+\t\t(IBV_EXP_RX_HASH_SRC_IPV6 | IBV_EXP_RX_HASH_DST_IPV6 |\n+\t\t IBV_EXP_RX_HASH_SRC_PORT_UDP | IBV_EXP_RX_HASH_DST_PORT_UDP),\n+\t\t/* Other IPv4. */\n+\t\t(IBV_EXP_RX_HASH_SRC_IPV4 | IBV_EXP_RX_HASH_DST_IPV4),\n+\t\t/* Other IPv6. */\n+\t\t(IBV_EXP_RX_HASH_SRC_IPV6 | IBV_EXP_RX_HASH_DST_IPV6),\n+\t\t/* None, used for everything else. 
*/\n+\t\t0,\n+\t};\n+\n+\tDEBUG(\"allocating hash RX queues for %u WQs\", priv->rxqs_n);\n+\tassert(priv->ind_table == NULL);\n+\tassert(priv->hash_rxqs == NULL);\n+\tassert(priv->hash_rxqs_n == 0);\n+\tassert(priv->pd != NULL);\n+\tassert(priv->ctx != NULL);\n+\tif (priv->rxqs_n == 0)\n+\t\treturn EINVAL;\n+\tassert(priv->rxqs != NULL);\n+\n+\t/* FIXME: large data structures are allocated on the stack. */\n+\tunsigned int wqs_n = (1 << log2above(priv->rxqs_n));\n+\tstruct ibv_exp_wq *wqs[wqs_n];\n+\tstruct ibv_exp_rwq_ind_table_init_attr ind_init_attr = {\n+\t\t.pd = priv->pd,\n+\t\t.log_ind_tbl_size = log2above(priv->rxqs_n),\n+\t\t.ind_tbl = wqs,\n+\t\t.comp_mask = 0,\n+\t};\n+\tstruct ibv_exp_rwq_ind_table *ind_table = NULL;\n+\t/* If only one RX queue is configured, RSS is not needed and a single\n+\t * empty hash entry is used (last rss_hash_table[] entry). */\n+\tunsigned int hash_rxqs_n =\n+\t\t((priv->rxqs_n == 1) ? 1 : RTE_DIM(rss_hash_table));\n+\tstruct hash_rxq (*hash_rxqs)[hash_rxqs_n] = NULL;\n+\tunsigned int i;\n+\tunsigned int j;\n+\tint err = 0;\n+\n+\tif (wqs_n < priv->rxqs_n) {\n+\t\tERROR(\"cannot handle this many RX queues (%u)\", priv->rxqs_n);\n+\t\terr = ERANGE;\n+\t\tgoto error;\n+\t}\n+\tif (wqs_n != priv->rxqs_n)\n+\t\tWARN(\"%u RX queues are configured, consider rounding this\"\n+\t\t     \" number to the next power of two (%u) for optimal\"\n+\t\t     \" performance\",\n+\t\t     priv->rxqs_n, wqs_n);\n+\t/* When the number of RX queues is not a power of two, the remaining\n+\t * table entries are padded with reused WQs and hashes are not spread\n+\t * uniformly. */\n+\tfor (i = 0, j = 0; (i != wqs_n); ++i) {\n+\t\twqs[i] = (*priv->rxqs)[j]->wq;\n+\t\tif (++j == priv->rxqs_n)\n+\t\t\tj = 0;\n+\t}\n+\terrno = 0;\n+\tind_table = ibv_exp_create_rwq_ind_table(priv->ctx, &ind_init_attr);\n+\tif (ind_table == NULL) {\n+\t\t/* Not clear whether errno is set. */\n+\t\terr = (errno ? 
errno : EINVAL);\n+\t\tERROR(\"RX indirection table creation failed with error %d: %s\",\n+\t\t      err, strerror(err));\n+\t\tgoto error;\n+\t}\n+\t/* Allocate array that holds hash RX queues and related data. */\n+\thash_rxqs = rte_malloc(__func__, sizeof(*hash_rxqs), 0);\n+\tif (hash_rxqs == NULL) {\n+\t\terr = ENOMEM;\n+\t\tERROR(\"cannot allocate hash RX queues container: %s\",\n+\t\t      strerror(err));\n+\t\tgoto error;\n+\t}\n+\tfor (i = 0, j = (RTE_DIM(rss_hash_table) - hash_rxqs_n);\n+\t     (j != RTE_DIM(rss_hash_table));\n+\t     ++i, ++j) {\n+\t\tstruct hash_rxq *hash_rxq = &(*hash_rxqs)[i];\n+\n+\t\tstruct ibv_exp_rx_hash_conf hash_conf = {\n+\t\t\t.rx_hash_function = IBV_EXP_RX_HASH_FUNC_TOEPLITZ,\n+\t\t\t.rx_hash_key_len = sizeof(hash_rxq_default_key),\n+\t\t\t.rx_hash_key = hash_rxq_default_key,\n+\t\t\t.rx_hash_fields_mask = rss_hash_table[j],\n+\t\t\t.rwq_ind_tbl = ind_table,\n+\t\t};\n+\t\tstruct ibv_exp_qp_init_attr qp_init_attr = {\n+\t\t\t.max_inl_recv = 0, /* Currently not supported. */\n+\t\t\t.qp_type = IBV_QPT_RAW_PACKET,\n+\t\t\t.comp_mask = (IBV_EXP_QP_INIT_ATTR_PD |\n+\t\t\t\t      IBV_EXP_QP_INIT_ATTR_RX_HASH),\n+\t\t\t.pd = priv->pd,\n+\t\t\t.rx_hash_conf = &hash_conf,\n+\t\t\t.port_num = priv->port,\n+\t\t};\n+\n+\t\t*hash_rxq = (struct hash_rxq){\n+\t\t\t.priv = priv,\n+\t\t\t.qp = ibv_exp_create_qp(priv->ctx, &qp_init_attr),\n+\t\t};\n+\t\tif (hash_rxq->qp == NULL) {\n+\t\t\terr = (errno ? 
errno : EINVAL);\n+\t\t\tERROR(\"Hash RX QP creation failure: %s\",\n+\t\t\t      strerror(err));\n+\t\t\twhile (i) {\n+\t\t\t\thash_rxq = &(*hash_rxqs)[--i];\n+\t\t\t\tclaim_zero(ibv_destroy_qp(hash_rxq->qp));\n+\t\t\t}\n+\t\t\tgoto error;\n+\t\t}\n+\t}\n+\tpriv->ind_table = ind_table;\n+\tpriv->hash_rxqs = hash_rxqs;\n+\tpriv->hash_rxqs_n = hash_rxqs_n;\n+\tassert(err == 0);\n+\treturn 0;\n+error:\n+\trte_free(hash_rxqs);\n+\tif (ind_table != NULL)\n+\t\tclaim_zero(ibv_exp_destroy_rwq_ind_table(ind_table));\n+\treturn err;\n+}\n+\n+/**\n+ * Clean up hash RX queues and indirection table.\n+ *\n+ * @param priv\n+ *   Pointer to private structure.\n+ */\n+void\n+priv_destroy_hash_rxqs(struct priv *priv)\n+{\n+\tunsigned int i;\n+\n+\tDEBUG(\"destroying %u hash RX queues\", priv->hash_rxqs_n);\n+\tif (priv->hash_rxqs_n == 0) {\n+\t\tassert(priv->hash_rxqs == NULL);\n+\t\tassert(priv->ind_table == NULL);\n+\t\treturn;\n+\t}\n+\tfor (i = 0; (i != priv->hash_rxqs_n); ++i) {\n+\t\tstruct hash_rxq *hash_rxq = &(*priv->hash_rxqs)[i];\n+\t\tunsigned int j, k;\n+\n+\t\tassert(hash_rxq->priv == priv);\n+\t\tassert(hash_rxq->qp != NULL);\n+\t\t/* Also check that there are no remaining flows. 
*/\n+\t\tassert(hash_rxq->allmulti_flow == NULL);\n+\t\tassert(hash_rxq->promisc_flow == NULL);\n+\t\tfor (j = 0; (j != RTE_DIM(hash_rxq->mac_flow)); ++j)\n+\t\t\tfor (k = 0; (k != RTE_DIM(hash_rxq->mac_flow[j])); ++k)\n+\t\t\t\tassert(hash_rxq->mac_flow[j][k] == NULL);\n+\t\tclaim_zero(ibv_destroy_qp(hash_rxq->qp));\n+\t}\n+\tpriv->hash_rxqs_n = 0;\n+\trte_free(priv->hash_rxqs);\n+\tpriv->hash_rxqs = NULL;\n+\tclaim_zero(ibv_exp_destroy_rwq_ind_table(priv->ind_table));\n+\tpriv->ind_table = NULL;\n+}\n+\n /**\n  * Allocate RX queue elements with scattered packets support.\n  *\n@@ -335,15 +553,15 @@ rxq_cleanup(struct rxq *rxq)\n \t\trxq_free_elts_sp(rxq);\n \telse\n \t\trxq_free_elts(rxq);\n-\tif (rxq->if_qp != NULL) {\n+\tif (rxq->if_wq != NULL) {\n \t\tassert(rxq->priv != NULL);\n \t\tassert(rxq->priv->ctx != NULL);\n-\t\tassert(rxq->qp != NULL);\n+\t\tassert(rxq->wq != NULL);\n \t\tparams = (struct ibv_exp_release_intf_params){\n \t\t\t.comp_mask = 0,\n \t\t};\n \t\tclaim_zero(ibv_exp_release_intf(rxq->priv->ctx,\n-\t\t\t\t\t\trxq->if_qp,\n+\t\t\t\t\t\trxq->if_wq,\n \t\t\t\t\t\t&params));\n \t}\n \tif (rxq->if_cq != NULL) {\n@@ -357,12 +575,8 @@ rxq_cleanup(struct rxq *rxq)\n \t\t\t\t\t\trxq->if_cq,\n \t\t\t\t\t\t&params));\n \t}\n-\tif (rxq->qp != NULL) {\n-\t\trxq_promiscuous_disable(rxq);\n-\t\trxq_allmulticast_disable(rxq);\n-\t\trxq_mac_addrs_del(rxq);\n-\t\tclaim_zero(ibv_destroy_qp(rxq->qp));\n-\t}\n+\tif (rxq->wq != NULL)\n+\t\tclaim_zero(ibv_exp_destroy_wq(rxq->wq));\n \tif (rxq->cq != NULL)\n \t\tclaim_zero(ibv_destroy_cq(rxq->cq));\n \tif (rxq->rd != NULL) {\n@@ -382,112 +596,6 @@ rxq_cleanup(struct rxq *rxq)\n }\n \n /**\n- * Allocate a Queue Pair.\n- * Optionally setup inline receive if supported.\n- *\n- * @param priv\n- *   Pointer to private structure.\n- * @param cq\n- *   Completion queue to associate with QP.\n- * @param desc\n- *   Number of descriptors in QP (hint only).\n- *\n- * @return\n- *   QP pointer or NULL in case of error.\n- 
*/\n-static struct ibv_qp *\n-rxq_setup_qp(struct priv *priv, struct ibv_cq *cq, uint16_t desc,\n-\t     struct ibv_exp_res_domain *rd)\n-{\n-\tstruct ibv_exp_qp_init_attr attr = {\n-\t\t/* CQ to be associated with the send queue. */\n-\t\t.send_cq = cq,\n-\t\t/* CQ to be associated with the receive queue. */\n-\t\t.recv_cq = cq,\n-\t\t.cap = {\n-\t\t\t/* Max number of outstanding WRs. */\n-\t\t\t.max_recv_wr = ((priv->device_attr.max_qp_wr < desc) ?\n-\t\t\t\t\tpriv->device_attr.max_qp_wr :\n-\t\t\t\t\tdesc),\n-\t\t\t/* Max number of scatter/gather elements in a WR. */\n-\t\t\t.max_recv_sge = ((priv->device_attr.max_sge <\n-\t\t\t\t\t  MLX5_PMD_SGE_WR_N) ?\n-\t\t\t\t\t priv->device_attr.max_sge :\n-\t\t\t\t\t MLX5_PMD_SGE_WR_N),\n-\t\t},\n-\t\t.qp_type = IBV_QPT_RAW_PACKET,\n-\t\t.comp_mask = (IBV_EXP_QP_INIT_ATTR_PD |\n-\t\t\t      IBV_EXP_QP_INIT_ATTR_RES_DOMAIN),\n-\t\t.pd = priv->pd,\n-\t\t.res_domain = rd,\n-\t};\n-\n-\treturn ibv_exp_create_qp(priv->ctx, &attr);\n-}\n-\n-#ifdef RSS_SUPPORT\n-\n-/**\n- * Allocate a RSS Queue Pair.\n- * Optionally setup inline receive if supported.\n- *\n- * @param priv\n- *   Pointer to private structure.\n- * @param cq\n- *   Completion queue to associate with QP.\n- * @param desc\n- *   Number of descriptors in QP (hint only).\n- * @param parent\n- *   If nonzero, create a parent QP, otherwise a child.\n- *\n- * @return\n- *   QP pointer or NULL in case of error.\n- */\n-static struct ibv_qp *\n-rxq_setup_qp_rss(struct priv *priv, struct ibv_cq *cq, uint16_t desc,\n-\t\t int parent, struct ibv_exp_res_domain *rd)\n-{\n-\tstruct ibv_exp_qp_init_attr attr = {\n-\t\t/* CQ to be associated with the send queue. */\n-\t\t.send_cq = cq,\n-\t\t/* CQ to be associated with the receive queue. */\n-\t\t.recv_cq = cq,\n-\t\t.cap = {\n-\t\t\t/* Max number of outstanding WRs. 
*/\n-\t\t\t.max_recv_wr = ((priv->device_attr.max_qp_wr < desc) ?\n-\t\t\t\t\tpriv->device_attr.max_qp_wr :\n-\t\t\t\t\tdesc),\n-\t\t\t/* Max number of scatter/gather elements in a WR. */\n-\t\t\t.max_recv_sge = ((priv->device_attr.max_sge <\n-\t\t\t\t\t  MLX5_PMD_SGE_WR_N) ?\n-\t\t\t\t\t priv->device_attr.max_sge :\n-\t\t\t\t\t MLX5_PMD_SGE_WR_N),\n-\t\t},\n-\t\t.qp_type = IBV_QPT_RAW_PACKET,\n-\t\t.comp_mask = (IBV_EXP_QP_INIT_ATTR_PD |\n-\t\t\t      IBV_EXP_QP_INIT_ATTR_RES_DOMAIN |\n-\t\t\t      IBV_EXP_QP_INIT_ATTR_QPG),\n-\t\t.pd = priv->pd,\n-\t\t.res_domain = rd,\n-\t};\n-\n-\tif (parent) {\n-\t\tattr.qpg.qpg_type = IBV_EXP_QPG_PARENT;\n-\t\t/* TSS isn't necessary. */\n-\t\tattr.qpg.parent_attrib.tss_child_count = 0;\n-\t\tattr.qpg.parent_attrib.rss_child_count = priv->rxqs_n;\n-\t\tDEBUG(\"initializing parent RSS queue\");\n-\t} else {\n-\t\tattr.qpg.qpg_type = IBV_EXP_QPG_CHILD_RX;\n-\t\tattr.qpg.qpg_parent = priv->rxq_parent.qp;\n-\t\tDEBUG(\"initializing child RSS queue\");\n-\t}\n-\treturn ibv_exp_create_qp(priv->ctx, &attr);\n-}\n-\n-#endif /* RSS_SUPPORT */\n-\n-/**\n  * Reconfigure a RX queue with new parameters.\n  *\n  * rxq_rehash() does not allocate mbufs, which, if not done from the right\n@@ -511,15 +619,9 @@ rxq_rehash(struct rte_eth_dev *dev, struct rxq *rxq)\n \tunsigned int desc_n;\n \tstruct rte_mbuf **pool;\n \tunsigned int i, k;\n-\tstruct ibv_exp_qp_attr mod;\n+\tstruct ibv_exp_wq_attr mod;\n \tint err;\n-\tint parent = (rxq == &priv->rxq_parent);\n \n-\tif (parent) {\n-\t\tERROR(\"%p: cannot rehash parent queue %p\",\n-\t\t      (void *)dev, (void *)rxq);\n-\t\treturn EINVAL;\n-\t}\n \tDEBUG(\"%p: rehashing queue %p\", (void *)dev, (void *)rxq);\n \t/* Number of descriptors and mbufs currently allocated. */\n \tdesc_n = (tmpl.elts_n * (tmpl.sp ? 
MLX5_PMD_SGE_WR_N : 1));\n@@ -548,64 +650,17 @@ rxq_rehash(struct rte_eth_dev *dev, struct rxq *rxq)\n \t\tDEBUG(\"%p: nothing to do\", (void *)dev);\n \t\treturn 0;\n \t}\n-\t/* Remove attached flows if RSS is disabled (no parent queue). */\n-\tif (!priv->rss) {\n-\t\trxq_allmulticast_disable(&tmpl);\n-\t\trxq_promiscuous_disable(&tmpl);\n-\t\trxq_mac_addrs_del(&tmpl);\n-\t\t/* Update original queue in case of failure. */\n-\t\trxq->allmulti_flow = tmpl.allmulti_flow;\n-\t\trxq->promisc_flow = tmpl.promisc_flow;\n-\t\tmemcpy(rxq->mac_configured, tmpl.mac_configured,\n-\t\t       sizeof(rxq->mac_configured));\n-\t\tmemcpy(rxq->mac_flow, tmpl.mac_flow, sizeof(rxq->mac_flow));\n-\t}\n \t/* From now on, any failure will render the queue unusable.\n-\t * Reinitialize QP. */\n-\tmod = (struct ibv_exp_qp_attr){ .qp_state = IBV_QPS_RESET };\n-\terr = ibv_exp_modify_qp(tmpl.qp, &mod, IBV_EXP_QP_STATE);\n-\tif (err) {\n-\t\tERROR(\"%p: cannot reset QP: %s\", (void *)dev, strerror(err));\n-\t\tassert(err > 0);\n-\t\treturn err;\n-\t}\n-\terr = ibv_resize_cq(tmpl.cq, desc_n);\n-\tif (err) {\n-\t\tERROR(\"%p: cannot resize CQ: %s\", (void *)dev, strerror(err));\n-\t\tassert(err > 0);\n-\t\treturn err;\n-\t}\n-\tmod = (struct ibv_exp_qp_attr){\n-\t\t/* Move the QP to this state. */\n-\t\t.qp_state = IBV_QPS_INIT,\n-\t\t/* Primary port number. */\n-\t\t.port_num = priv->port\n+\t * Reinitialize WQ. */\n+\tmod = (struct ibv_exp_wq_attr){\n+\t\t.attr_mask = IBV_EXP_WQ_ATTR_STATE,\n+\t\t.wq_state = IBV_EXP_WQS_RESET,\n \t};\n-\terr = ibv_exp_modify_qp(tmpl.qp, &mod,\n-\t\t\t\t(IBV_EXP_QP_STATE |\n-#ifdef RSS_SUPPORT\n-\t\t\t\t (parent ? 
IBV_EXP_QP_GROUP_RSS : 0) |\n-#endif /* RSS_SUPPORT */\n-\t\t\t\t IBV_EXP_QP_PORT));\n+\terr = ibv_exp_modify_wq(tmpl.wq, &mod);\n \tif (err) {\n-\t\tERROR(\"%p: QP state to IBV_QPS_INIT failed: %s\",\n-\t\t      (void *)dev, strerror(err));\n+\t\tERROR(\"%p: cannot reset WQ: %s\", (void *)dev, strerror(err));\n \t\tassert(err > 0);\n \t\treturn err;\n-\t};\n-\t/* Reconfigure flows. Do not care for errors. */\n-\tif (!priv->rss) {\n-\t\trxq_mac_addrs_add(&tmpl);\n-\t\tif (priv->promisc)\n-\t\t\trxq_promiscuous_enable(&tmpl);\n-\t\tif (priv->allmulti)\n-\t\t\trxq_allmulticast_enable(&tmpl);\n-\t\t/* Update original queue in case of failure. */\n-\t\trxq->allmulti_flow = tmpl.allmulti_flow;\n-\t\trxq->promisc_flow = tmpl.promisc_flow;\n-\t\tmemcpy(rxq->mac_configured, tmpl.mac_configured,\n-\t\t       sizeof(rxq->mac_configured));\n-\t\tmemcpy(rxq->mac_flow, tmpl.mac_flow, sizeof(rxq->mac_flow));\n \t}\n \t/* Allocate pool. */\n \tpool = rte_malloc(__func__, (mbuf_n * sizeof(*pool)), 0);\n@@ -657,14 +712,25 @@ rxq_rehash(struct rte_eth_dev *dev, struct rxq *rxq)\n \trxq->elts_n = 0;\n \trte_free(rxq->elts.sp);\n \trxq->elts.sp = NULL;\n+\t/* Change queue state to ready. */\n+\tmod = (struct ibv_exp_wq_attr){\n+\t\t.attr_mask = IBV_EXP_WQ_ATTR_STATE,\n+\t\t.wq_state = IBV_EXP_WQS_RDY,\n+\t};\n+\terr = ibv_exp_modify_wq(tmpl.wq, &mod);\n+\tif (err) {\n+\t\tERROR(\"%p: WQ state to IBV_EXP_WQS_RDY failed: %s\",\n+\t\t      (void *)dev, strerror(err));\n+\t\tgoto error;\n+\t}\n \t/* Post SGEs. 
*/\n-\tassert(tmpl.if_qp != NULL);\n+\tassert(tmpl.if_wq != NULL);\n \tif (tmpl.sp) {\n \t\tstruct rxq_elt_sp (*elts)[tmpl.elts_n] = tmpl.elts.sp;\n \n \t\tfor (i = 0; (i != RTE_DIM(*elts)); ++i) {\n-\t\t\terr = tmpl.if_qp->recv_sg_list\n-\t\t\t\t(tmpl.qp,\n+\t\t\terr = tmpl.if_wq->recv_sg_list\n+\t\t\t\t(tmpl.wq,\n \t\t\t\t (*elts)[i].sges,\n \t\t\t\t RTE_DIM((*elts)[i].sges));\n \t\t\tif (err)\n@@ -674,8 +740,8 @@ rxq_rehash(struct rte_eth_dev *dev, struct rxq *rxq)\n \t\tstruct rxq_elt (*elts)[tmpl.elts_n] = tmpl.elts.no_sp;\n \n \t\tfor (i = 0; (i != RTE_DIM(*elts)); ++i) {\n-\t\t\terr = tmpl.if_qp->recv_burst(\n-\t\t\t\ttmpl.qp,\n+\t\t\terr = tmpl.if_wq->recv_burst(\n+\t\t\t\ttmpl.wq,\n \t\t\t\t&(*elts)[i].sge,\n \t\t\t\t1);\n \t\t\tif (err)\n@@ -687,16 +753,9 @@ rxq_rehash(struct rte_eth_dev *dev, struct rxq *rxq)\n \t\t      (void *)dev, err);\n \t\t/* Set err because it does not contain a valid errno value. */\n \t\terr = EIO;\n-\t\tgoto skip_rtr;\n+\t\tgoto error;\n \t}\n-\tmod = (struct ibv_exp_qp_attr){\n-\t\t.qp_state = IBV_QPS_RTR\n-\t};\n-\terr = ibv_exp_modify_qp(tmpl.qp, &mod, IBV_EXP_QP_STATE);\n-\tif (err)\n-\t\tERROR(\"%p: QP state to IBV_QPS_RTR failed: %s\",\n-\t\t      (void *)dev, strerror(err));\n-skip_rtr:\n+error:\n \t*rxq = tmpl;\n \tassert(err >= 0);\n \treturn err;\n@@ -732,30 +791,20 @@ rxq_setup(struct rte_eth_dev *dev, struct rxq *rxq, uint16_t desc,\n \t\t.mp = mp,\n \t\t.socket = socket\n \t};\n-\tstruct ibv_exp_qp_attr mod;\n+\tstruct ibv_exp_wq_attr mod;\n \tunion {\n \t\tstruct ibv_exp_query_intf_params params;\n \t\tstruct ibv_exp_cq_init_attr cq;\n \t\tstruct ibv_exp_res_domain_init_attr rd;\n+\t\tstruct ibv_exp_wq_init_attr wq;\n \t} attr;\n \tenum ibv_exp_query_intf_status status;\n \tstruct rte_mbuf *buf;\n \tint ret = 0;\n-\tint parent = (rxq == &priv->rxq_parent);\n \tunsigned int i;\n+\tunsigned int cq_size = desc;\n \n \t(void)conf; /* Thresholds configuration (ignored). 
*/\n-\t/*\n-\t * If this is a parent queue, hardware must support RSS and\n-\t * RSS must be enabled.\n-\t */\n-\tassert((!parent) || ((priv->hw_rss) && (priv->rss)));\n-\tif (parent) {\n-\t\t/* Even if unused, ibv_create_cq() requires at least one\n-\t\t * descriptor. */\n-\t\tdesc = 1;\n-\t\tgoto skip_mr;\n-\t}\n \tif ((desc == 0) || (desc % MLX5_PMD_SGE_WR_N)) {\n \t\tERROR(\"%p: invalid number of RX descriptors (must be a\"\n \t\t      \" multiple of %d)\", (void *)dev, MLX5_PMD_SGE_WR_N);\n@@ -798,7 +847,6 @@ rxq_setup(struct rte_eth_dev *dev, struct rxq *rxq, uint16_t desc,\n \t\t      (void *)dev, strerror(ret));\n \t\tgoto error;\n \t}\n-skip_mr:\n \tattr.rd = (struct ibv_exp_res_domain_init_attr){\n \t\t.comp_mask = (IBV_EXP_RES_DOMAIN_THREAD_MODEL |\n \t\t\t      IBV_EXP_RES_DOMAIN_MSG_MODEL),\n@@ -816,7 +864,8 @@ skip_mr:\n \t\t.comp_mask = IBV_EXP_CQ_INIT_ATTR_RES_DOMAIN,\n \t\t.res_domain = tmpl.rd,\n \t};\n-\ttmpl.cq = ibv_exp_create_cq(priv->ctx, desc, NULL, NULL, 0, &attr.cq);\n+\ttmpl.cq = ibv_exp_create_cq(priv->ctx, cq_size, NULL, NULL, 0,\n+\t\t\t\t    &attr.cq);\n \tif (tmpl.cq == NULL) {\n \t\tret = ENOMEM;\n \t\tERROR(\"%p: CQ creation failure: %s\",\n@@ -827,48 +876,30 @@ skip_mr:\n \t      priv->device_attr.max_qp_wr);\n \tDEBUG(\"priv->device_attr.max_sge is %d\",\n \t      priv->device_attr.max_sge);\n-#ifdef RSS_SUPPORT\n-\tif (priv->rss)\n-\t\ttmpl.qp = rxq_setup_qp_rss(priv, tmpl.cq, desc, parent,\n-\t\t\t\t\t   tmpl.rd);\n-\telse\n-#endif /* RSS_SUPPORT */\n-\t\ttmpl.qp = rxq_setup_qp(priv, tmpl.cq, desc, tmpl.rd);\n-\tif (tmpl.qp == NULL) {\n-\t\tret = (errno ? errno : EINVAL);\n-\t\tERROR(\"%p: QP creation failure: %s\",\n-\t\t      (void *)dev, strerror(ret));\n-\t\tgoto error;\n-\t}\n-\tmod = (struct ibv_exp_qp_attr){\n-\t\t/* Move the QP to this state. */\n-\t\t.qp_state = IBV_QPS_INIT,\n-\t\t/* Primary port number. 
*/\n-\t\t.port_num = priv->port\n+\tattr.wq = (struct ibv_exp_wq_init_attr){\n+\t\t.wq_context = NULL, /* Could be useful in the future. */\n+\t\t.wq_type = IBV_EXP_WQT_RQ,\n+\t\t/* Max number of outstanding WRs. */\n+\t\t.max_recv_wr = ((priv->device_attr.max_qp_wr < (int)cq_size) ?\n+\t\t\t\tpriv->device_attr.max_qp_wr :\n+\t\t\t\t(int)cq_size),\n+\t\t/* Max number of scatter/gather elements in a WR. */\n+\t\t.max_recv_sge = ((priv->device_attr.max_sge <\n+\t\t\t\t  MLX5_PMD_SGE_WR_N) ?\n+\t\t\t\t priv->device_attr.max_sge :\n+\t\t\t\t MLX5_PMD_SGE_WR_N),\n+\t\t.pd = priv->pd,\n+\t\t.cq = tmpl.cq,\n+\t\t.comp_mask = IBV_EXP_CREATE_WQ_RES_DOMAIN,\n+\t\t.res_domain = tmpl.rd,\n \t};\n-\tret = ibv_exp_modify_qp(tmpl.qp, &mod,\n-\t\t\t\t(IBV_EXP_QP_STATE |\n-#ifdef RSS_SUPPORT\n-\t\t\t\t (parent ? IBV_EXP_QP_GROUP_RSS : 0) |\n-#endif /* RSS_SUPPORT */\n-\t\t\t\t IBV_EXP_QP_PORT));\n-\tif (ret) {\n-\t\tERROR(\"%p: QP state to IBV_QPS_INIT failed: %s\",\n+\ttmpl.wq = ibv_exp_create_wq(priv->ctx, &attr.wq);\n+\tif (tmpl.wq == NULL) {\n+\t\tret = (errno ? errno : EINVAL);\n+\t\tERROR(\"%p: WQ creation failure: %s\",\n \t\t      (void *)dev, strerror(ret));\n \t\tgoto error;\n \t}\n-\tif ((parent) || (!priv->rss))  {\n-\t\t/* Configure MAC and broadcast addresses. */\n-\t\tret = rxq_mac_addrs_add(&tmpl);\n-\t\tif (ret) {\n-\t\t\tERROR(\"%p: QP flow attachment failed: %s\",\n-\t\t\t      (void *)dev, strerror(ret));\n-\t\t\tgoto error;\n-\t\t}\n-\t}\n-\t/* Allocate descriptors for RX queues, except for the RSS parent. */\n-\tif (parent)\n-\t\tgoto skip_alloc;\n \tif (tmpl.sp)\n \t\tret = rxq_alloc_elts_sp(&tmpl, desc, NULL);\n \telse\n@@ -878,7 +909,6 @@ skip_mr:\n \t\t      (void *)dev, strerror(ret));\n \t\tgoto error;\n \t}\n-skip_alloc:\n \t/* Save port ID. 
*/\n \ttmpl.port_id = dev->data->port_id;\n \tDEBUG(\"%p: RTE port ID: %u\", (void *)rxq, tmpl.port_id);\n@@ -895,33 +925,44 @@ skip_alloc:\n \t}\n \tattr.params = (struct ibv_exp_query_intf_params){\n \t\t.intf_scope = IBV_EXP_INTF_GLOBAL,\n-\t\t.intf = IBV_EXP_INTF_QP_BURST,\n-\t\t.obj = tmpl.qp,\n+\t\t.intf = IBV_EXP_INTF_WQ,\n+\t\t.obj = tmpl.wq,\n \t};\n-\ttmpl.if_qp = ibv_exp_query_intf(priv->ctx, &attr.params, &status);\n-\tif (tmpl.if_qp == NULL) {\n-\t\tERROR(\"%p: QP interface family query failed with status %d\",\n+\ttmpl.if_wq = ibv_exp_query_intf(priv->ctx, &attr.params, &status);\n+\tif (tmpl.if_wq == NULL) {\n+\t\tERROR(\"%p: WQ interface family query failed with status %d\",\n \t\t      (void *)dev, status);\n \t\tgoto error;\n \t}\n+\t/* Change queue state to ready. */\n+\tmod = (struct ibv_exp_wq_attr){\n+\t\t.attr_mask = IBV_EXP_WQ_ATTR_STATE,\n+\t\t.wq_state = IBV_EXP_WQS_RDY,\n+\t};\n+\tret = ibv_exp_modify_wq(tmpl.wq, &mod);\n+\tif (ret) {\n+\t\tERROR(\"%p: WQ state to IBV_EXP_WQS_RDY failed: %s\",\n+\t\t      (void *)dev, strerror(ret));\n+\t\tgoto error;\n+\t}\n \t/* Post SGEs. 
*/\n-\tif (!parent && tmpl.sp) {\n+\tif (tmpl.sp) {\n \t\tstruct rxq_elt_sp (*elts)[tmpl.elts_n] = tmpl.elts.sp;\n \n \t\tfor (i = 0; (i != RTE_DIM(*elts)); ++i) {\n-\t\t\tret = tmpl.if_qp->recv_sg_list\n-\t\t\t\t(tmpl.qp,\n+\t\t\tret = tmpl.if_wq->recv_sg_list\n+\t\t\t\t(tmpl.wq,\n \t\t\t\t (*elts)[i].sges,\n \t\t\t\t RTE_DIM((*elts)[i].sges));\n \t\t\tif (ret)\n \t\t\t\tbreak;\n \t\t}\n-\t} else if (!parent) {\n+\t} else {\n \t\tstruct rxq_elt (*elts)[tmpl.elts_n] = tmpl.elts.no_sp;\n \n \t\tfor (i = 0; (i != RTE_DIM(*elts)); ++i) {\n-\t\t\tret = tmpl.if_qp->recv_burst(\n-\t\t\t\ttmpl.qp,\n+\t\t\tret = tmpl.if_wq->recv_burst(\n+\t\t\t\ttmpl.wq,\n \t\t\t\t&(*elts)[i].sge,\n \t\t\t\t1);\n \t\t\tif (ret)\n@@ -935,15 +976,6 @@ skip_alloc:\n \t\tret = EIO;\n \t\tgoto error;\n \t}\n-\tmod = (struct ibv_exp_qp_attr){\n-\t\t.qp_state = IBV_QPS_RTR\n-\t};\n-\tret = ibv_exp_modify_qp(tmpl.qp, &mod, IBV_EXP_QP_STATE);\n-\tif (ret) {\n-\t\tERROR(\"%p: QP state to IBV_QPS_RTR failed: %s\",\n-\t\t      (void *)dev, strerror(ret));\n-\t\tgoto error;\n-\t}\n \t/* Clean up rxq in case we're reinitializing it. 
*/\n \tDEBUG(\"%p: cleaning-up old rxq just in case\", (void *)rxq);\n \trxq_cleanup(rxq);\n@@ -1047,7 +1079,6 @@ mlx5_rx_queue_release(void *dpdk_rxq)\n \t\treturn;\n \tpriv = rxq->priv;\n \tpriv_lock(priv);\n-\tassert(rxq != &priv->rxq_parent);\n \tfor (i = 0; (i != priv->rxqs_n); ++i)\n \t\tif ((*priv->rxqs)[i] == rxq) {\n \t\t\tDEBUG(\"%p: removing RX queue %p from list\",\ndiff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c\nindex 06712cb..6469a8d 100644\n--- a/drivers/net/mlx5/mlx5_rxtx.c\n+++ b/drivers/net/mlx5/mlx5_rxtx.c\n@@ -753,7 +753,7 @@ mlx5_rx_burst_sp(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)\n \t\trxq->stats.ibytes += pkt_buf_len;\n #endif\n repost:\n-\t\tret = rxq->if_qp->recv_sg_list(rxq->qp,\n+\t\tret = rxq->if_wq->recv_sg_list(rxq->wq,\n \t\t\t\t\t       elt->sges,\n \t\t\t\t\t       RTE_DIM(elt->sges));\n \t\tif (unlikely(ret)) {\n@@ -911,7 +911,7 @@ repost:\n #ifdef DEBUG_RECV\n \tDEBUG(\"%p: reposting %u WRs\", (void *)rxq, i);\n #endif\n-\tret = rxq->if_qp->recv_burst(rxq->qp, sges, i);\n+\tret = rxq->if_wq->recv_burst(rxq->wq, sges, i);\n \tif (unlikely(ret)) {\n \t\t/* Inability to repost WRs is fatal. */\n \t\tDEBUG(\"%p: recv_burst(): failed (ret=%d)\",\ndiff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h\nindex aec67f6..75f8297 100644\n--- a/drivers/net/mlx5/mlx5_rxtx.h\n+++ b/drivers/net/mlx5/mlx5_rxtx.h\n@@ -99,16 +99,9 @@ struct rxq {\n \tstruct rte_mempool *mp; /* Memory Pool for allocations. */\n \tstruct ibv_mr *mr; /* Memory Region (for mp). */\n \tstruct ibv_cq *cq; /* Completion Queue. */\n-\tstruct ibv_qp *qp; /* Queue Pair. */\n-\tstruct ibv_exp_qp_burst_family *if_qp; /* QP burst interface. */\n+\tstruct ibv_exp_wq *wq; /* Work Queue. */\n+\tstruct ibv_exp_wq_family *if_wq; /* WQ burst interface. */\n \tstruct ibv_exp_cq_family *if_cq; /* CQ interface. 
*/\n-\t/*\n-\t * Each VLAN ID requires a separate flow steering rule.\n-\t */\n-\tBITFIELD_DECLARE(mac_configured, uint32_t, MLX5_MAX_MAC_ADDRESSES);\n-\tstruct ibv_flow *mac_flow[MLX5_MAX_MAC_ADDRESSES][MLX5_MAX_VLAN_IDS];\n-\tstruct ibv_flow *promisc_flow; /* Promiscuous flow. */\n-\tstruct ibv_flow *allmulti_flow; /* Multicast flow. */\n \tunsigned int port_id; /* Port ID for incoming packets. */\n \tunsigned int elts_n; /* (*elts)[] length. */\n \tunsigned int elts_head; /* Current index in (*elts)[]. */\n@@ -125,6 +118,16 @@ struct rxq {\n \tstruct ibv_exp_res_domain *rd; /* Resource Domain. */\n };\n \n+struct hash_rxq {\n+\tstruct priv *priv; /* Back pointer to private data. */\n+\tstruct ibv_qp *qp; /* Hash RX QP. */\n+\t/* Each VLAN ID requires a separate flow steering rule. */\n+\tBITFIELD_DECLARE(mac_configured, uint32_t, MLX5_MAX_MAC_ADDRESSES);\n+\tstruct ibv_flow *mac_flow[MLX5_MAX_MAC_ADDRESSES][MLX5_MAX_VLAN_IDS];\n+\tstruct ibv_flow *promisc_flow; /* Promiscuous flow. */\n+\tstruct ibv_flow *allmulti_flow; /* Multicast flow. */\n+};\n+\n /* TX element. 
*/\n struct txq_elt {\n \tstruct rte_mbuf *buf;\n@@ -169,6 +172,8 @@ struct txq {\n \n /* mlx5_rxq.c */\n \n+int priv_create_hash_rxqs(struct priv *);\n+void priv_destroy_hash_rxqs(struct priv *);\n void rxq_cleanup(struct rxq *);\n int rxq_rehash(struct rte_eth_dev *, struct rxq *);\n int rxq_setup(struct rte_eth_dev *, struct rxq *, uint16_t, unsigned int,\ndiff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c\nindex fbc977c..2876ea7 100644\n--- a/drivers/net/mlx5/mlx5_trigger.c\n+++ b/drivers/net/mlx5/mlx5_trigger.c\n@@ -60,54 +60,35 @@ int\n mlx5_dev_start(struct rte_eth_dev *dev)\n {\n \tstruct priv *priv = dev->data->dev_private;\n-\tunsigned int i = 0;\n-\tunsigned int r;\n-\tstruct rxq *rxq;\n+\tint err;\n \n \tpriv_lock(priv);\n \tif (priv->started) {\n \t\tpriv_unlock(priv);\n \t\treturn 0;\n \t}\n-\tDEBUG(\"%p: attaching configured flows to all RX queues\", (void *)dev);\n-\tpriv->started = 1;\n-\tif (priv->rss) {\n-\t\trxq = &priv->rxq_parent;\n-\t\tr = 1;\n-\t} else {\n-\t\trxq = (*priv->rxqs)[0];\n-\t\tr = priv->rxqs_n;\n-\t}\n-\t/* Iterate only once when RSS is enabled. */\n-\tdo {\n-\t\tint ret;\n-\n-\t\t/* Ignore nonexistent RX queues. 
*/\n-\t\tif (rxq == NULL)\n-\t\t\tcontinue;\n-\t\tret = rxq_mac_addrs_add(rxq);\n-\t\tif (!ret && priv->promisc)\n-\t\t\tret = rxq_promiscuous_enable(rxq);\n-\t\tif (!ret && priv->allmulti)\n-\t\t\tret = rxq_allmulticast_enable(rxq);\n-\t\tif (!ret)\n-\t\t\tcontinue;\n-\t\tWARN(\"%p: QP flow attachment failed: %s\",\n-\t\t     (void *)dev, strerror(ret));\n+\tDEBUG(\"%p: allocating and configuring hash RX queues\", (void *)dev);\n+\terr = priv_create_hash_rxqs(priv);\n+\tif (!err)\n+\t\terr = priv_mac_addrs_enable(priv);\n+\tif (!err && priv->promisc)\n+\t\terr = priv_promiscuous_enable(priv);\n+\tif (!err && priv->allmulti)\n+\t\terr = priv_allmulticast_enable(priv);\n+\tif (!err)\n+\t\tpriv->started = 1;\n+\telse {\n+\t\tERROR(\"%p: an error occurred while configuring hash RX queues:\"\n+\t\t      \" %s\",\n+\t\t      (void *)priv, strerror(err));\n \t\t/* Rollback. */\n-\t\twhile (i != 0) {\n-\t\t\trxq = (*priv->rxqs)[--i];\n-\t\t\tif (rxq != NULL) {\n-\t\t\t\trxq_allmulticast_disable(rxq);\n-\t\t\t\trxq_promiscuous_disable(rxq);\n-\t\t\t\trxq_mac_addrs_del(rxq);\n-\t\t\t}\n-\t\t}\n-\t\tpriv->started = 0;\n-\t\treturn -ret;\n-\t} while ((--r) && ((rxq = (*priv->rxqs)[++i]), i));\n+\t\tpriv_allmulticast_disable(priv);\n+\t\tpriv_promiscuous_disable(priv);\n+\t\tpriv_mac_addrs_disable(priv);\n+\t\tpriv_destroy_hash_rxqs(priv);\n+\t}\n \tpriv_unlock(priv);\n-\treturn 0;\n+\treturn -err;\n }\n \n /**\n@@ -122,32 +103,17 @@ void\n mlx5_dev_stop(struct rte_eth_dev *dev)\n {\n \tstruct priv *priv = dev->data->dev_private;\n-\tunsigned int i = 0;\n-\tunsigned int r;\n-\tstruct rxq *rxq;\n \n \tpriv_lock(priv);\n \tif (!priv->started) {\n \t\tpriv_unlock(priv);\n \t\treturn;\n \t}\n-\tDEBUG(\"%p: detaching flows from all RX queues\", (void *)dev);\n+\tDEBUG(\"%p: cleaning up and destroying hash RX queues\", (void *)dev);\n+\tpriv_allmulticast_disable(priv);\n+\tpriv_promiscuous_disable(priv);\n+\tpriv_mac_addrs_disable(priv);\n+\tpriv_destroy_hash_rxqs(priv);\n 
\tpriv->started = 0;\n-\tif (priv->rss) {\n-\t\trxq = &priv->rxq_parent;\n-\t\tr = 1;\n-\t} else {\n-\t\trxq = (*priv->rxqs)[0];\n-\t\tr = priv->rxqs_n;\n-\t}\n-\t/* Iterate only once when RSS is enabled. */\n-\tdo {\n-\t\t/* Ignore nonexistent RX queues. */\n-\t\tif (rxq == NULL)\n-\t\t\tcontinue;\n-\t\trxq_allmulticast_disable(rxq);\n-\t\trxq_promiscuous_disable(rxq);\n-\t\trxq_mac_addrs_del(rxq);\n-\t} while ((--r) && ((rxq = (*priv->rxqs)[++i]), i));\n \tpriv_unlock(priv);\n }\ndiff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c\nindex 60fe06b..2105a81 100644\n--- a/drivers/net/mlx5/mlx5_vlan.c\n+++ b/drivers/net/mlx5/mlx5_vlan.c\n@@ -94,47 +94,25 @@ vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)\n \tif ((on) && (!priv->vlan_filter[j].enabled)) {\n \t\t/*\n \t\t * Filter is disabled, enable it.\n-\t\t * Rehashing flows in all RX queues is necessary.\n+\t\t * Rehashing flows in all hash RX queues is necessary.\n \t\t */\n-\t\tif (priv->rss)\n-\t\t\trxq_mac_addrs_del(&priv->rxq_parent);\n-\t\telse\n-\t\t\tfor (i = 0; (i != priv->rxqs_n); ++i)\n-\t\t\t\tif ((*priv->rxqs)[i] != NULL)\n-\t\t\t\t\trxq_mac_addrs_del((*priv->rxqs)[i]);\n+\t\tfor (i = 0; (i != priv->hash_rxqs_n); ++i)\n+\t\t\thash_rxq_mac_addrs_del(&(*priv->hash_rxqs)[i]);\n \t\tpriv->vlan_filter[j].enabled = 1;\n-\t\tif (priv->started) {\n-\t\t\tif (priv->rss)\n-\t\t\t\trxq_mac_addrs_add(&priv->rxq_parent);\n-\t\t\telse\n-\t\t\t\tfor (i = 0; (i != priv->rxqs_n); ++i) {\n-\t\t\t\t\tif ((*priv->rxqs)[i] == NULL)\n-\t\t\t\t\t\tcontinue;\n-\t\t\t\t\trxq_mac_addrs_add((*priv->rxqs)[i]);\n-\t\t\t\t}\n-\t\t}\n+\t\tif (priv->started)\n+\t\t\tfor (i = 0; (i != priv->hash_rxqs_n); ++i)\n+\t\t\t\thash_rxq_mac_addrs_add(&(*priv->hash_rxqs)[i]);\n \t} else if ((!on) && (priv->vlan_filter[j].enabled)) {\n \t\t/*\n \t\t * Filter is enabled, disable it.\n \t\t * Rehashing flows in all RX queues is necessary.\n \t\t */\n-\t\tif 
(priv->rss)\n-\t\t\trxq_mac_addrs_del(&priv->rxq_parent);\n-\t\telse\n-\t\t\tfor (i = 0; (i != priv->rxqs_n); ++i)\n-\t\t\t\tif ((*priv->rxqs)[i] != NULL)\n-\t\t\t\t\trxq_mac_addrs_del((*priv->rxqs)[i]);\n+\t\tfor (i = 0; (i != priv->hash_rxqs_n); ++i)\n+\t\t\thash_rxq_mac_addrs_del(&(*priv->hash_rxqs)[i]);\n \t\tpriv->vlan_filter[j].enabled = 0;\n-\t\tif (priv->started) {\n-\t\t\tif (priv->rss)\n-\t\t\t\trxq_mac_addrs_add(&priv->rxq_parent);\n-\t\t\telse\n-\t\t\t\tfor (i = 0; (i != priv->rxqs_n); ++i) {\n-\t\t\t\t\tif ((*priv->rxqs)[i] == NULL)\n-\t\t\t\t\t\tcontinue;\n-\t\t\t\t\trxq_mac_addrs_add((*priv->rxqs)[i]);\n-\t\t\t\t}\n-\t\t}\n+\t\tif (priv->started)\n+\t\t\tfor (i = 0; (i != priv->hash_rxqs_n); ++i)\n+\t\t\t\thash_rxq_mac_addrs_add(&(*priv->hash_rxqs)[i]);\n \t}\n \treturn 0;\n }\n",
    "prefixes": [
        "dpdk-dev",
        "03/17"
    ]
}