Patch Detail
GET: Show a patch.
PATCH: Update a patch.
PUT: Update a patch.
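The endpoint returns one JSON object per patch. A minimal Python sketch of reading the commonly used fields — it parses an abbreviated inline sample rather than making a live request, with the field values copied from the response shown below:

```python
import json

# Abbreviated sample of the record returned below; the real response
# carries many more fields (headers, diff, submitter, series, ...).
sample = """
{
  "id": 464,
  "name": "[dpdk-dev,4/6] i40e: add VMDQ support",
  "state": "superseded",
  "archived": true,
  "mbox": "http://patches.dpdk.org/project/dpdk/patch/1411478047-1251-5-git-send-email-jing.d.chen@intel.com/mbox/"
}
"""

patch = json.loads(sample)
# The "mbox" URL serves the raw patch, suitable for piping into `git am`.
print(patch["name"], "-", patch["state"])
```

For live data, any HTTP client pointed at the URL in the request line below should return the same structure.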
GET /api/patches/464/?format=api
{ "id": 464, "url": "
http://patches.dpdk.org/api/patches/464/?format=api", "web_url": "http://patches.dpdk.org/project/dpdk/patch/1411478047-1251-5-git-send-email-jing.d.chen@intel.com/", "project": { "id": 1, "url": "http://patches.dpdk.org/api/projects/1/?format=api", "name": "DPDK", "link_name": "dpdk", "list_id": "dev.dpdk.org", "list_email": "dev@dpdk.org", "web_url": "http://core.dpdk.org", "scm_url": "git://dpdk.org/dpdk", "webscm_url": "http://git.dpdk.org/dpdk", "list_archive_url": "https://inbox.dpdk.org/dev", "list_archive_url_format": "https://inbox.dpdk.org/dev/{}", "commit_url_format": "" }, "msgid": "<1411478047-1251-5-git-send-email-jing.d.chen@intel.com>", "list_archive_url": "https://inbox.dpdk.org/dev/1411478047-1251-5-git-send-email-jing.d.chen@intel.com", "date": "2014-09-23T13:14:05", "name": "[dpdk-dev,4/6] i40e: add VMDQ support", "commit_ref": null, "pull_url": null, "state": "superseded", "archived": true, "hash": "82231a1365d2c662f0f37fe049270734a1bfa1f6", "submitter": { "id": 40, "url": "http://patches.dpdk.org/api/people/40/?format=api", "name": "Chen, Jing D", "email": "jing.d.chen@intel.com" }, "delegate": null, "mbox": "http://patches.dpdk.org/project/dpdk/patch/1411478047-1251-5-git-send-email-jing.d.chen@intel.com/mbox/", "series": [], "comments": "http://patches.dpdk.org/api/patches/464/comments/", "check": "pending", "checks": "http://patches.dpdk.org/api/patches/464/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<dev-bounces@dpdk.org>", "X-Original-To": "patchwork@dpdk.org", "Delivered-To": "patchwork@dpdk.org", "Received": [ "from [92.243.14.124] (localhost [IPv6:::1])\n\tby dpdk.org (Postfix) with ESMTP id 1B556B3AD;\n\tTue, 23 Sep 2014 15:12:57 +0200 (CEST)", "from mga01.intel.com (mga01.intel.com [192.55.52.88])\n\tby dpdk.org (Postfix) with ESMTP id 0D43F333\n\tfor <dev@dpdk.org>; Tue, 23 Sep 2014 15:12:47 +0200 (CEST)", "from fmsmga003.fm.intel.com ([10.253.24.29])\n\tby fmsmga101.fm.intel.com with ESMTP; 23 Sep 2014 
06:14:23 -0700", "from shvmail01.sh.intel.com ([10.239.29.42])\n\tby FMSMGA003.fm.intel.com with ESMTP; 23 Sep 2014 06:08:30 -0700", "from shecgisg003.sh.intel.com (shecgisg003.sh.intel.com\n\t[10.239.29.90])\n\tby shvmail01.sh.intel.com with ESMTP id s8NDEK6Y012272;\n\tTue, 23 Sep 2014 21:14:20 +0800", "from shecgisg003.sh.intel.com (localhost [127.0.0.1])\n\tby shecgisg003.sh.intel.com (8.13.6/8.13.6/SuSE Linux 0.8) with ESMTP\n\tid s8NDEIo6001316; Tue, 23 Sep 2014 21:14:20 +0800", "(from jingche2@localhost)\n\tby shecgisg003.sh.intel.com (8.13.6/8.13.6/Submit) id s8NDEH5T001312; \n\tTue, 23 Sep 2014 21:14:17 +0800" ], "X-ExtLoop1": "1", "X-IronPort-AV": "E=Sophos;i=\"4.97,862,1389772800\"; d=\"scan'208\";a=\"390347850\"", "From": "\"Chen Jing D(Mark)\" <jing.d.chen@intel.com>", "To": "dev@dpdk.org", "Date": "Tue, 23 Sep 2014 21:14:05 +0800", "Message-Id": "<1411478047-1251-5-git-send-email-jing.d.chen@intel.com>", "X-Mailer": "git-send-email 1.7.4.1", "In-Reply-To": "<1411478047-1251-1-git-send-email-jing.d.chen@intel.com>", "References": "<1411478047-1251-1-git-send-email-jing.d.chen@intel.com>", "Subject": "[dpdk-dev] [PATCH 4/6] i40e: add VMDQ support", "X-BeenThere": "dev@dpdk.org", "X-Mailman-Version": "2.1.15", "Precedence": "list", "List-Id": "patches and discussions about DPDK <dev.dpdk.org>", "List-Unsubscribe": "<http://dpdk.org/ml/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>", "List-Archive": "<http://dpdk.org/ml/archives/dev/>", "List-Post": "<mailto:dev@dpdk.org>", "List-Help": "<mailto:dev-request@dpdk.org?subject=help>", "List-Subscribe": "<http://dpdk.org/ml/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>", "Errors-To": "dev-bounces@dpdk.org", "Sender": "\"dev\" <dev-bounces@dpdk.org>" }, "content": "From: \"Chen Jing D(Mark)\" <jing.d.chen@intel.com>\n\nThe change includes several parts:\n1. Get maximum number of VMDQ pools supported in dev_init.\n2. Fill VMDQ info in i40e_dev_info_get.\n3. 
Setup VMDQ pools in i40e_dev_configure.\n4. i40e_vsi_setup change to support creation of VMDQ VSI.\n\nSigned-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>\nAcked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>\nAcked-by: Jingjing Wu <jingjing.wu@intel.com>\nAcked-by: Jijiang Liu <jijiang.liu@intel.com>\nAcked-by: Huawei Xie <huawei.xie@intel.com>\n---\n config/common_linuxapp | 1 +\n lib/librte_pmd_i40e/i40e_ethdev.c | 237 ++++++++++++++++++++++++++++++++-----\n lib/librte_pmd_i40e/i40e_ethdev.h | 17 +++-\n 3 files changed, 225 insertions(+), 30 deletions(-)", "diff": "diff --git a/config/common_linuxapp b/config/common_linuxapp\nindex 5bee910..d0bb3f7 100644\n--- a/config/common_linuxapp\n+++ b/config/common_linuxapp\n@@ -208,6 +208,7 @@ CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y\n CONFIG_RTE_LIBRTE_I40E_ALLOW_UNSUPPORTED_SFP=n\n CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n\n CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4\n+CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4\n # interval up to 8160 us, aligned to 2 (or default value)\n CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1\n \ndiff --git a/lib/librte_pmd_i40e/i40e_ethdev.c b/lib/librte_pmd_i40e/i40e_ethdev.c\nindex a00d6ca..a267c96 100644\n--- a/lib/librte_pmd_i40e/i40e_ethdev.c\n+++ b/lib/librte_pmd_i40e/i40e_ethdev.c\n@@ -168,6 +168,7 @@ static int i40e_get_cap(struct i40e_hw *hw);\n static int i40e_pf_parameter_init(struct rte_eth_dev *dev);\n static int i40e_pf_setup(struct i40e_pf *pf);\n static int i40e_vsi_init(struct i40e_vsi *vsi);\n+static int i40e_vmdq_setup(struct rte_eth_dev *dev);\n static void i40e_stat_update_32(struct i40e_hw *hw, uint32_t reg,\n \t\tbool offset_loaded, uint64_t *offset, uint64_t *stat);\n static void i40e_stat_update_48(struct i40e_hw *hw,\n@@ -269,21 +270,11 @@ static struct eth_driver rte_i40e_pmd = {\n };\n \n static inline int\n-i40e_prev_power_of_2(int n)\n+i40e_align_floor(int n)\n {\n- int p = n;\n-\n- --p;\n- p |= p >> 1;\n- p |= p >> 2;\n- p |= p >> 4;\n- p |= p >> 8;\n- p |= p 
>> 16;\n- if (p == (n - 1))\n- return n;\n- p >>= 1;\n-\n- return ++p;\n+\tif (n == 0)\n+\t\treturn 0;\n+\treturn (1 << (sizeof(n) * CHAR_BIT - 1 - __builtin_clz(n)));\n }\n \n static inline int\n@@ -500,7 +491,7 @@ eth_i40e_dev_init(__rte_unused struct eth_driver *eth_drv,\n \tif (!dev->data->mac_addrs) {\n \t\tPMD_INIT_LOG(ERR, \"Failed to allocated memory \"\n \t\t\t\t\t\"for storing mac address\");\n-\t\tgoto err_get_mac_addr;\n+\t\tgoto err_mac_alloc;\n \t}\n \tether_addr_copy((struct ether_addr *)hw->mac.perm_addr,\n \t\t\t\t\t&dev->data->mac_addrs[0]);\n@@ -521,8 +512,9 @@ eth_i40e_dev_init(__rte_unused struct eth_driver *eth_drv,\n \n \treturn 0;\n \n+err_mac_alloc:\n+\ti40e_vsi_release(pf->main_vsi);\n err_setup_pf_switch:\n-\trte_free(pf->main_vsi);\n err_get_mac_addr:\n err_configure_lan_hmc:\n \t(void)i40e_shutdown_lan_hmc(hw);\n@@ -541,6 +533,27 @@ err_get_capabilities:\n static int\n i40e_dev_configure(struct rte_eth_dev *dev)\n {\n+\tint ret;\n+\tenum rte_eth_rx_mq_mode mq_mode = dev->data->dev_conf.rxmode.mq_mode;\n+\n+\t/* VMDQ setup.\n+\t * Needs to move VMDQ setting out of i40e_pf_config_mq_rx() as VMDQ and\n+\t * RSS setting have different requirements.\n+\t * General PMD driver call sequence are NIC init, configure,\n+\t * rx/tx_queue_setup and dev_start. In rx/tx_queue_setup() function, it\n+\t * will try to lookup the VSI that specific queue belongs to if VMDQ\n+\t * applicable. So, VMDQ setting has to be done before\n+\t * rx/tx_queue_setup(). This function is good to place vmdq_setup.\n+\t * For RSS setting, it will try to calculate actual configured RX queue\n+\t * number, which will be available after rx_queue_setup(). 
dev_start()\n+\t * function is good to place RSS setup.\n+\t */\n+\tif (mq_mode & ETH_MQ_RX_VMDQ_FLAG) {\n+\t\tret = i40e_vmdq_setup(dev);\n+\t\tif (ret)\n+\t\t\treturn ret;\n+\t}\n+\n \treturn i40e_dev_init_vlan(dev);\n }\n \n@@ -1389,6 +1402,16 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)\n \t\tDEV_TX_OFFLOAD_UDP_CKSUM |\n \t\tDEV_TX_OFFLOAD_TCP_CKSUM |\n \t\tDEV_TX_OFFLOAD_SCTP_CKSUM;\n+\n+\tif (pf->flags | I40E_FLAG_VMDQ) {\n+\t\tdev_info->max_vmdq_pools = pf->max_nb_vmdq_vsi;\n+\t\tdev_info->vmdq_queue_base = dev_info->max_rx_queues;\n+\t\tdev_info->vmdq_queue_num = pf->vmdq_nb_qps *\n+\t\t\t\t\t\tpf->max_nb_vmdq_vsi;\n+\t\tdev_info->vmdq_pool_base = I40E_VMDQ_POOL_BASE;\n+\t\tdev_info->max_rx_queues += dev_info->vmdq_queue_num;\n+\t\tdev_info->max_tx_queues += dev_info->vmdq_queue_num;\n+\t}\n }\n \n static int\n@@ -1814,7 +1837,7 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)\n {\n \tstruct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);\n \tstruct i40e_hw *hw = I40E_PF_TO_HW(pf);\n-\tuint16_t sum_queues = 0, sum_vsis;\n+\tuint16_t sum_queues = 0, sum_vsis, left_queues;\n \n \t/* First check if FW support SRIOV */\n \tif (dev->pci_dev->max_vfs && !hw->func_caps.sr_iov_1_1) {\n@@ -1830,7 +1853,7 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)\n \t\tpf->flags |= I40E_FLAG_RSS;\n \t\tpf->lan_nb_qps = RTE_MIN(hw->func_caps.num_tx_qp,\n \t\t\t(uint32_t)(1 << hw->func_caps.rss_table_entry_width));\n-\t\tpf->lan_nb_qps = i40e_prev_power_of_2(pf->lan_nb_qps);\n+\t\tpf->lan_nb_qps = i40e_align_floor(pf->lan_nb_qps);\n \t} else\n \t\tpf->lan_nb_qps = 1;\n \tsum_queues = pf->lan_nb_qps;\n@@ -1864,11 +1887,19 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)\n \n \tif (hw->func_caps.vmdq) {\n \t\tpf->flags |= I40E_FLAG_VMDQ;\n-\t\tpf->vmdq_nb_qps = I40E_DEFAULT_QP_NUM_VMDQ;\n-\t\tsum_queues += pf->vmdq_nb_qps;\n-\t\tsum_vsis += 1;\n-\t\tPMD_INIT_LOG(INFO, \"VMDQ queue pairs:%u\", 
pf->vmdq_nb_qps);\n+\t\tpf->vmdq_nb_qps = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM;\n+\t\tpf->max_nb_vmdq_vsi = 1;\n+\t\t/*\n+\t\t * If VMDQ available, assume a single VSI can be created. Will adjust\n+\t\t * later.\n+\t\t */\n+\t\tsum_queues += pf->vmdq_nb_qps * pf->max_nb_vmdq_vsi;\n+\t\tsum_vsis += pf->max_nb_vmdq_vsi;\n+\t} else {\n+\t\tpf->vmdq_nb_qps = 0;\n+\t\tpf->max_nb_vmdq_vsi = 0;\n \t}\n+\tpf->nb_cfg_vmdq_vsi = 0;\n \n \tif (hw->func_caps.fd) {\n \t\tpf->flags |= I40E_FLAG_FDIR;\n@@ -1889,6 +1920,22 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)\n \t\treturn -EINVAL;\n \t}\n \n+\t/* Adjust VMDQ setting to support as many VMs as possible */\n+\tif (pf->flags & I40E_FLAG_VMDQ) {\n+\t\tleft_queues = hw->func_caps.num_rx_qp - sum_queues;\n+\n+\t\tpf->max_nb_vmdq_vsi += RTE_MIN(left_queues / pf->vmdq_nb_qps,\n+\t\t\t\t\tpf->max_num_vsi - sum_vsis);\n+\n+\t\t/* Limit the max VMDQ number that rte_ether that can support */\n+\t\tpf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,\n+\t\t\t\t\tETH_64_POOLS - 1);\n+\n+\t\tPMD_INIT_LOG(INFO, \"Max VMDQ VSI num:%u\",\n+\t\t\t\tpf->max_nb_vmdq_vsi);\n+\t\tPMD_INIT_LOG(INFO, \"VMDQ queue pairs:%u\", pf->vmdq_nb_qps);\n+\t}\n+\n \t/* Each VSI occupy 1 MSIX interrupt at least, plus IRQ0 for misc intr\n \t * cause */\n \tif (sum_vsis > hw->func_caps.num_msix_vectors - 1) {\n@@ -2281,7 +2328,7 @@ i40e_vsi_config_tc_queue_mapping(struct i40e_vsi *vsi,\n \tvsi->enabled_tc = enabled_tcmap;\n \n \t/* Number of queues per enabled TC */\n-\tqpnum_per_tc = i40e_prev_power_of_2(vsi->nb_qps / total_tc);\n+\tqpnum_per_tc = i40e_align_floor(vsi->nb_qps / total_tc);\n \tqpnum_per_tc = RTE_MIN(qpnum_per_tc, I40E_MAX_Q_PER_TC);\n \tbsf = rte_bsf32(qpnum_per_tc);\n \n@@ -2587,6 +2634,9 @@ i40e_vsi_setup(struct i40e_pf *pf,\n \tcase I40E_VSI_SRIOV :\n \t\tvsi->nb_qps = pf->vf_nb_qps;\n \t\tbreak;\n+\tcase I40E_VSI_VMDQ2:\n+\t\tvsi->nb_qps = pf->vmdq_nb_qps;\n+\t\tbreak;\n \tdefault:\n \t\tgoto fail_mem;\n \t}\n@@ -2728,8 +2778,44 @@ 
i40e_vsi_setup(struct i40e_pf *pf,\n \t\t * Since VSI is not created yet, only configure parameter,\n \t\t * will add vsi below.\n \t\t */\n-\t}\n-\telse {\n+\t} else if (type == I40E_VSI_VMDQ2) {\n+\t\tmemset(&ctxt, 0, sizeof(ctxt));\n+\t\t/*\n+\t\t * For other VSI, the uplink_seid equals to uplink VSI's\n+\t\t * uplink_seid since they share same VEB\n+\t\t */\n+\t\tvsi->uplink_seid = uplink_vsi->uplink_seid;\n+\t\tctxt.pf_num = hw->pf_id;\n+\t\tctxt.vf_num = 0;\n+\t\tctxt.uplink_seid = vsi->uplink_seid;\n+\t\tctxt.connection_type = 0x1;\n+\t\tctxt.flags = I40E_AQ_VSI_TYPE_VMDQ2;\n+\n+\t\tctxt.info.valid_sections |=\n+\t\t\t\trte_cpu_to_le_16(I40E_AQ_VSI_PROP_SWITCH_VALID);\n+\t\t/* user_param carries flag to enable loop back */\n+\t\tif (user_param) {\n+\t\t\tctxt.info.switch_id =\n+\t\t\trte_cpu_to_le_16(I40E_AQ_VSI_SW_ID_FLAG_LOCAL_LB);\n+\t\t\tctxt.info.switch_id |=\n+\t\t\trte_cpu_to_le_16(I40E_AQ_VSI_SW_ID_FLAG_ALLOW_LB);\n+\t\t}\n+\n+\t\t/* Configure port/vlan */\n+\t\tctxt.info.valid_sections |=\n+\t\t\trte_cpu_to_le_16(I40E_AQ_VSI_PROP_VLAN_VALID);\n+\t\tctxt.info.port_vlan_flags |= I40E_AQ_VSI_PVLAN_MODE_ALL;\n+\t\tret = i40e_vsi_config_tc_queue_mapping(vsi, &ctxt.info,\n+\t\t\t\t\t\tI40E_DEFAULT_TCMAP);\n+\t\tif (ret != I40E_SUCCESS) {\n+\t\t\tPMD_DRV_LOG(ERR, \"Failed to configure \"\n+\t\t\t\t\t\"TC queue mapping\\n\");\n+\t\t\tgoto fail_msix_alloc;\n+\t\t}\n+\t\tctxt.info.up_enable_bits = I40E_DEFAULT_TCMAP;\n+\t\tctxt.info.valid_sections |=\n+\t\t\trte_cpu_to_le_16(I40E_AQ_VSI_PROP_SCHED_VALID);\n+\t} else {\n \t\tPMD_DRV_LOG(ERR, \"VSI: Not support other type VSI yet\");\n \t\tgoto fail_msix_alloc;\n \t}\n@@ -2901,7 +2987,6 @@ i40e_pf_setup(struct i40e_pf *pf)\n {\n \tstruct i40e_hw *hw = I40E_PF_TO_HW(pf);\n \tstruct i40e_filter_control_settings settings;\n-\tstruct rte_eth_dev_data *dev_data = pf->dev_data;\n \tstruct i40e_vsi *vsi;\n \tint ret;\n \n@@ -2923,8 +3008,6 @@ i40e_pf_setup(struct i40e_pf *pf)\n \t\treturn I40E_ERR_NOT_READY;\n \t}\n 
\tpf->main_vsi = vsi;\n-\tdev_data->nb_rx_queues = vsi->nb_qps;\n-\tdev_data->nb_tx_queues = vsi->nb_qps;\n \n \t/* Configure filter control */\n \tmemset(&settings, 0, sizeof(settings));\n@@ -3195,6 +3278,102 @@ i40e_vsi_init(struct i40e_vsi *vsi)\n \treturn err;\n }\n \n+static int\n+i40e_vmdq_setup(struct rte_eth_dev *dev)\n+{\n+\tstruct rte_eth_conf *conf = &dev->data->dev_conf;\n+\tstruct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);\n+\tint i, err, conf_vsis, j, loop;\n+\tstruct i40e_vsi *vsi;\n+\tstruct i40e_vmdq_info *vmdq_info;\n+\tstruct rte_eth_vmdq_rx_conf *vmdq_conf;\n+\tstruct i40e_hw *hw = I40E_PF_TO_HW(pf);\n+\n+\t/*\n+\t * Disable interrupt to avoid message from VF. Furthermore, it will\n+\t * avoid race condition in VSI creation/destroy.\n+\t */\n+\ti40e_pf_disable_irq0(hw);\n+\n+\tif ((pf->flags & I40E_FLAG_VMDQ) == 0) {\n+\t\tPMD_INIT_LOG(ERR, \"FW doesn't support VMDQ\");\n+\t\treturn -ENOTSUP;\n+\t}\n+\n+\tconf_vsis = conf->rx_adv_conf.vmdq_rx_conf.nb_queue_pools;\n+\tif (conf_vsis > pf->max_nb_vmdq_vsi) {\n+\t\tPMD_INIT_LOG(ERR, \"VMDQ config: %u, max support:%u\",\n+\t\t\tconf->rx_adv_conf.vmdq_rx_conf.nb_queue_pools,\n+\t\t\tpf->max_nb_vmdq_vsi);\n+\t\treturn -ENOTSUP;\n+\t}\n+\n+\tif (pf->vmdq != NULL) {\n+\t\tPMD_INIT_LOG(INFO, \"VMDQ already configured\");\n+\t\treturn 0;\n+\t}\n+\n+\tpf->vmdq = rte_zmalloc(\"vmdq_info_struct\",\n+\t\t\t\tsizeof(*vmdq_info) * conf_vsis, 0);\n+\n+\tif (pf->vmdq == NULL) {\n+\t\tPMD_INIT_LOG(ERR, \"Failed to allocate memory\");\n+\t\treturn -ENOMEM;\n+\t}\n+\n+\tvmdq_conf = &conf->rx_adv_conf.vmdq_rx_conf;\n+\n+\t/* Create VMDQ VSI */\n+\tfor (i = 0; i < conf_vsis; i++) {\n+\t\tvsi = i40e_vsi_setup(pf, I40E_VSI_VMDQ2, pf->main_vsi,\n+\t\t\t\tvmdq_conf->enable_loop_back);\n+\t\tif (vsi == NULL) {\n+\t\t\tPMD_INIT_LOG(ERR, \"Failed to create VMDQ VSI\");\n+\t\t\terr = -1;\n+\t\t\tgoto err_vsi_setup;\n+\t\t}\n+\t\tvmdq_info = &pf->vmdq[i];\n+\t\tvmdq_info->pf = pf;\n+\t\tvmdq_info->vsi = 
vsi;\n+\t}\n+\tpf->nb_cfg_vmdq_vsi = conf_vsis;\n+\n+\t/* Configure Vlan */\n+\tloop = sizeof(vmdq_conf->pool_map[0].pools) * CHAR_BIT;\n+\tfor (i = 0; i < vmdq_conf->nb_pool_maps; i++) {\n+\t\tfor (j = 0; j < loop && j < pf->nb_cfg_vmdq_vsi; j++) {\n+\t\t\tif (vmdq_conf->pool_map[i].pools & (1UL << j)) {\n+\t\t\t\tPMD_INIT_LOG(INFO, \"Add vlan %u to vmdq pool %u\",\n+\t\t\t\t\tvmdq_conf->pool_map[i].vlan_id, j);\n+\n+\t\t\t\terr = i40e_vsi_add_vlan(pf->vmdq[j].vsi,\n+\t\t\t\t\t\tvmdq_conf->pool_map[i].vlan_id);\n+\t\t\t\tif (err) {\n+\t\t\t\t\tPMD_INIT_LOG(ERR, \"Failed to add vlan\");\n+\t\t\t\t\terr = -1;\n+\t\t\t\t\tgoto err_vsi_setup;\n+\t\t\t\t}\n+\t\t\t}\n+\t\t}\n+\t}\n+\n+\ti40e_pf_enable_irq0(hw);\n+\n+\treturn 0;\n+\n+err_vsi_setup:\n+\tfor (i = 0; i < conf_vsis; i++)\n+\t\tif (pf->vmdq[i].vsi == NULL)\n+\t\t\tbreak;\n+\t\telse\n+\t\t\ti40e_vsi_release(pf->vmdq[i].vsi);\n+\n+\trte_free(pf->vmdq);\n+\tpf->vmdq = NULL;\n+\ti40e_pf_enable_irq0(hw);\n+\treturn err;\n+}\n+\n static void\n i40e_stat_update_32(struct i40e_hw *hw,\n \t\t uint32_t reg,\n@@ -4086,7 +4265,7 @@ i40e_pf_config_rss(struct i40e_pf *pf)\n \tstruct i40e_hw *hw = I40E_PF_TO_HW(pf);\n \tstruct rte_eth_rss_conf rss_conf;\n \tuint32_t i, lut = 0;\n-\tuint16_t j, num = i40e_prev_power_of_2(pf->dev_data->nb_rx_queues);\n+\tuint16_t j, num = i40e_align_floor(pf->dev_data->nb_rx_queues);\n \n \tfor (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {\n \t\tif (j == num)\ndiff --git a/lib/librte_pmd_i40e/i40e_ethdev.h b/lib/librte_pmd_i40e/i40e_ethdev.h\nindex 64deef2..b06de05 100644\n--- a/lib/librte_pmd_i40e/i40e_ethdev.h\n+++ b/lib/librte_pmd_i40e/i40e_ethdev.h\n@@ -45,13 +45,15 @@\n #define I40E_QUEUE_BASE_ADDR_UNIT 128\n /* number of VSIs and queue default setting */\n #define I40E_MAX_QP_NUM_PER_VF 16\n-#define I40E_DEFAULT_QP_NUM_VMDQ 64\n #define I40E_DEFAULT_QP_NUM_FDIR 64\n #define I40E_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t))\n #define I40E_VFTA_SIZE (4096 / 
I40E_UINT32_BIT_SIZE)\n /* Default TC traffic in case DCB is not enabled */\n #define I40E_DEFAULT_TCMAP 0x1\n \n+/* Always assign pool 0 to main VSI, VMDQ will start from 1 */\n+#define I40E_VMDQ_POOL_BASE 1\n+\n /* i40e flags */\n #define I40E_FLAG_RSS (1ULL << 0)\n #define I40E_FLAG_DCB (1ULL << 1)\n@@ -189,6 +191,14 @@ struct i40e_pf_vf {\n };\n \n /*\n+ * Structure to store private data for VMDQ instance\n+ */\n+struct i40e_vmdq_info {\n+\tstruct i40e_pf *pf;\n+\tstruct i40e_vsi *vsi;\n+};\n+\n+/*\n * Structure to store private data specific for PF instance.\n */\n struct i40e_pf {\n@@ -216,6 +226,11 @@ struct i40e_pf {\n \tuint16_t vmdq_nb_qps; /* The number of queue pairs of VMDq */\n \tuint16_t vf_nb_qps; /* The number of queue pairs of VF */\n \tuint16_t fdir_nb_qps; /* The number of queue pairs of Flow Director */\n+\n+\t/* VMDQ related info */\n+\tuint16_t max_nb_vmdq_vsi; /* Max number of VMDQ VSIs supported */\n+\tuint16_t nb_cfg_vmdq_vsi; /* number of VMDQ VSIs configured */\n+\tstruct i40e_vmdq_info *vmdq;\n };\n \n enum pending_msg {\n", "prefixes": [ "dpdk-dev", "4/6" ] }