[dpdk-dev] Multi-driver support for Fortville
Commit Message
Hi,
Resending the queries with a change in the subject line.
1) With these patches, some of the global registers end up with two different values depending on whether a single driver or multiple drivers are using the ports of the NIC. Does it impact any functionality or performance if we use the DPDK driver in single-driver vs. multi-driver mode?
2) Why can't we have the same settings for both cases, i.e. unconditionally program the global registers in the DPDK driver with the same values as the kernel driver? That way we don't have to care about the extra parameter.
3) Does this issue need any update to the kernel driver as well?
Regards,
Nitin
-----Original Message-----
From: Nitin Katiyar
Sent: Monday, February 12, 2018 11:32 AM
To: dev@dpdk.org
Cc: Venkatesan Pradeep <venkatesan.pradeep@ericsson.com>
Subject: RE: dev Digest, Vol 180, Issue 152
Hi Beilei,
I was looking at the patches and have few queries regarding support-multi-driver.
1) With these patches, we have 2 different values for some of the global registers depending upon whether single driver or multi-driver is using all ports of the NIC. Does it impact any functionality/performance if we use DPDK drivers in single driver vs multi-driver support?
2) Why can't we have same settings for both the cases? That way we don't have to care for extra parameter.
3) Does this issue need any update for kernel driver also?
Regards,
Nitin
-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of dev-request@dpdk.org
Sent: Friday, February 02, 2018 5:55 PM
To: dev@dpdk.org
Subject: dev Digest, Vol 180, Issue 152
Send dev mailing list submissions to
dev@dpdk.org
To subscribe or unsubscribe via the World Wide Web, visit
https://dpdk.org/ml/listinfo/dev
or, via email, send a message with subject or body 'help' to
dev-request@dpdk.org
You can reach the person managing the list at
dev-owner@dpdk.org
When replying, please edit your Subject line so it is more specific than "Re: Contents of dev digest..."
Today's Topics:
1. [PATCH v3 2/4] net/i40e: add debug logs when writing global
registers (Beilei Xing)
2. [PATCH v3 3/4] net/i40e: fix multiple driver support issue
(Beilei Xing)
3. [PATCH v3 4/4] net/i40e: fix interrupt conflict when using
multi-driver (Beilei Xing)
----------------------------------------------------------------------
Message: 1
Date: Fri, 2 Feb 2018 20:25:08 +0800
From: Beilei Xing <beilei.xing@intel.com>
To: dev@dpdk.org, jingjing.wu@intel.com
Cc: stable@dpdk.org
Subject: [dpdk-dev] [PATCH v3 2/4] net/i40e: add debug logs when
writing global registers
Message-ID: <1517574310-93096-3-git-send-email-beilei.xing@intel.com>
Add debug logs when writing global registers.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Cc: stable@dpdk.org
---
drivers/net/i40e/i40e_ethdev.c | 127 +++++++++++++++++++++++++----------------
drivers/net/i40e/i40e_ethdev.h | 8 +++
2 files changed, 87 insertions(+), 48 deletions(-)
Comments
Hi Nitin,
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Nitin Katiyar
> Sent: Tuesday, February 13, 2018 11:48 AM
> To: dev@dpdk.org
> Cc: Venkatesan Pradeep <venkatesan.pradeep@ericsson.com>
> Subject: [dpdk-dev] Multi-driver support for Fortville
>
> Hi,
> Resending the queries with change in subject line.
> 1) With these patches, we have 2 different values for some of the global
> registers depending upon whether single driver or multi-driver is using all
> ports of the NIC. Does it impact any functionality/performance if we use
> DPDK drivers in single driver vs multi-driver support?
Yes. If multi-driver support is enabled:
For functionality, some configurations will not be supported, including flow director flexible payload, RSS input set, RSS bit mask, hash function, symmetric hash, FDIR input set, TPID, flow control watermark and GRE tunnel key length configuration, as well as the QinQ parser and QinQ cloud filter support.
For performance, the PF will use INT0 instead of INTN when multi-driver support is enabled, so there will be many interrupts costing CPU cycles while receiving packets.
> 2) Why can't we have same settings for both the cases? i.e Unconditionally
> programming the global registers in DPDK driver with the same values as in
> Kernel driver. That way we don't have to care for extra parameter.
The reason is the same as above.
> 3) Does this issue need any update for kernel driver also?
As far as I know, there's no need to update the kernel driver.
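For reference, a minimal sketch (not part of the patches above) of how an application could enable the new devarg through the usual EAL whitelist syntax; the application name, core mask and PCI address are placeholders:

#include <rte_eal.h>

int main(void)
{
	/* Pass the i40e devarg support-multi-driver=1 via the PCI
	 * whitelist; eth_i40e_dev_init() then skips programming the
	 * shared global registers for this port.
	 */
	char *eal_argv[] = {
		"app",
		"-c", "0x3",
		"-w", "0000:02:00.0,support-multi-driver=1",
	};
	int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

	if (rte_eal_init(eal_argc, eal_argv) < 0)
		return -1;

	/* ... normal port setup follows ... */
	return 0;
}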
>
> Regards,
> Nitin
>
> -----Original Message-----
> From: Nitin Katiyar
> Sent: Monday, February 12, 2018 11:32 AM
> To: dev@dpdk.org
> Cc: Venkatesan Pradeep <venkatesan.pradeep@ericsson.com>
> Subject: RE: dev Digest, Vol 180, Issue 152
>
> Hi Beilei,
> I was looking at the patches and have few queries regarding
> support-multi-driver.
> 1) With these patches, we have 2 different values for some of the global
> registers depending upon whether single driver or multi-driver is using all
> ports of the NIC. Does it impact any functionality/performance if we use
> DPDK drivers in single driver vs multi-driver support?
> 2) Why can't we have same settings for both the cases? That way we don't
> have to care for extra parameter.
> 3) Does this issue need any update for kernel driver also?
>
>
> Regards,
> Nitin
>
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of
> dev-request@dpdk.org
> Sent: Friday, February 02, 2018 5:55 PM
> To: dev@dpdk.org
> Subject: dev Digest, Vol 180, Issue 152
>
> Send dev mailing list submissions to
> dev@dpdk.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://dpdk.org/ml/listinfo/dev
> or, via email, send a message with subject or body 'help' to
> dev-request@dpdk.org
>
> You can reach the person managing the list at
> dev-owner@dpdk.org
>
> When replying, please edit your Subject line so it is more specific than "Re:
> Contents of dev digest..."
>
>
> Today's Topics:
>
> 1. [PATCH v3 2/4] net/i40e: add debug logs when writing global
> registers (Beilei Xing)
> 2. [PATCH v3 3/4] net/i40e: fix multiple driver support issue
> (Beilei Xing)
> 3. [PATCH v3 4/4] net/i40e: fix interrupt conflict when using
> multi-driver (Beilei Xing)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 2 Feb 2018 20:25:08 +0800
> From: Beilei Xing <beilei.xing@intel.com>
> To: dev@dpdk.org, jingjing.wu@intel.com
> Cc: stable@dpdk.org
> Subject: [dpdk-dev] [PATCH v3 2/4] net/i40e: add debug logs when
> writing global registers
> Message-ID: <1517574310-93096-3-git-send-email-beilei.xing@intel.com>
>
> Add debug logs when writing global registers.
>
> Signed-off-by: Beilei Xing <beilei.xing@intel.com>
> Cc: stable@dpdk.org
> ---
> drivers/net/i40e/i40e_ethdev.c | 127
> +++++++++++++++++++++++++----------------
> drivers/net/i40e/i40e_ethdev.h | 8 +++
> 2 files changed, 87 insertions(+), 48 deletions(-)
>
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 44821f2..ef23241 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -716,6 +716,15 @@ rte_i40e_dev_atomic_write_link_status(struct
> rte_eth_dev *dev,
> return 0;
> }
>
> +static inline void
> +i40e_write_global_rx_ctl(struct i40e_hw *hw, u32 reg_addr, u32 reg_val)
> +{
> + i40e_write_rx_ctl(hw, reg_addr, reg_val);
> + PMD_DRV_LOG(DEBUG, "Global register 0x%08x is modified "
> + "with value 0x%08x",
> + reg_addr, reg_val);
> +}
> +
> RTE_PMD_REGISTER_PCI(net_i40e, rte_i40e_pmd.pci_drv);
> RTE_PMD_REGISTER_PCI_TABLE(net_i40e, pci_id_i40e_map);
>
> @@ -735,9 +744,9 @@ static inline void i40e_GLQF_reg_init(struct i40e_hw
> *hw)
> * configuration API is added to avoid configuration conflicts
> * between ports of the same device.
> */
> - I40E_WRITE_REG(hw, I40E_GLQF_ORT(33), 0x000000E0);
> - I40E_WRITE_REG(hw, I40E_GLQF_ORT(34), 0x000000E3);
> - I40E_WRITE_REG(hw, I40E_GLQF_ORT(35), 0x000000E6);
> + I40E_WRITE_GLB_REG(hw, I40E_GLQF_ORT(33), 0x000000E0);
> + I40E_WRITE_GLB_REG(hw, I40E_GLQF_ORT(34), 0x000000E3);
> + I40E_WRITE_GLB_REG(hw, I40E_GLQF_ORT(35), 0x000000E6);
> i40e_global_cfg_warning(I40E_WARNING_ENA_FLX_PLD);
>
> /*
> @@ -746,8 +755,8 @@ static inline void i40e_GLQF_reg_init(struct i40e_hw
> *hw)
> * configuration API is added to avoid configuration conflicts
> * between ports of the same device.
> */
> - I40E_WRITE_REG(hw, I40E_GLQF_ORT(40), 0x00000029);
> - I40E_WRITE_REG(hw, I40E_GLQF_PIT(9), 0x00009420);
> + I40E_WRITE_GLB_REG(hw, I40E_GLQF_ORT(40), 0x00000029);
> + I40E_WRITE_GLB_REG(hw, I40E_GLQF_PIT(9), 0x00009420);
> i40e_global_cfg_warning(I40E_WARNING_QINQ_PARSER);
> }
>
> @@ -2799,8 +2808,9 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
> "I40E_GL_SWT_L2TAGCTRL[%d]", reg_id);
> return ret;
> }
> - PMD_DRV_LOG(DEBUG, "Debug write 0x%08"PRIx64" to "
> - "I40E_GL_SWT_L2TAGCTRL[%d]", reg_w, reg_id);
> + PMD_DRV_LOG(DEBUG,
> + "Global register 0x%08x is changed with value 0x%08x",
> + I40E_GL_SWT_L2TAGCTRL(reg_id), (uint32_t)reg_w);
>
> i40e_global_cfg_warning(I40E_WARNING_TPID);
>
> @@ -3030,16 +3040,16 @@ i40e_flow_ctrl_set(struct rte_eth_dev *dev,
> struct rte_eth_fc_conf *fc_conf)
> }
>
> /* config the water marker both based on the packets and bytes */
> - I40E_WRITE_REG(hw, I40E_GLRPB_PHW,
> + I40E_WRITE_GLB_REG(hw, I40E_GLRPB_PHW,
> (pf->fc_conf.high_water[I40E_MAX_TRAFFIC_CLASS]
> << I40E_KILOSHIFT) / I40E_PACKET_AVERAGE_SIZE);
> - I40E_WRITE_REG(hw, I40E_GLRPB_PLW,
> + I40E_WRITE_GLB_REG(hw, I40E_GLRPB_PLW,
> (pf->fc_conf.low_water[I40E_MAX_TRAFFIC_CLASS]
> << I40E_KILOSHIFT) / I40E_PACKET_AVERAGE_SIZE);
> - I40E_WRITE_REG(hw, I40E_GLRPB_GHW,
> + I40E_WRITE_GLB_REG(hw, I40E_GLRPB_GHW,
> pf->fc_conf.high_water[I40E_MAX_TRAFFIC_CLASS]
> << I40E_KILOSHIFT);
> - I40E_WRITE_REG(hw, I40E_GLRPB_GLW,
> + I40E_WRITE_GLB_REG(hw, I40E_GLRPB_GLW,
> pf->fc_conf.low_water[I40E_MAX_TRAFFIC_CLASS]
> << I40E_KILOSHIFT);
> i40e_global_cfg_warning(I40E_WARNING_FLOW_CTL);
> @@ -6880,6 +6890,9 @@ i40e_dev_set_gre_key_len(struct i40e_hw *hw,
> uint8_t len)
> reg, NULL);
> if (ret != 0)
> return ret;
> + PMD_DRV_LOG(DEBUG, "Global register 0x%08x is changed "
> + "with value 0x%08x",
> + I40E_GL_PRS_FVBM(2), reg);
> i40e_global_cfg_warning(I40E_WARNING_GRE_KEY_LEN);
> } else {
> ret = 0;
> @@ -7124,41 +7137,43 @@ i40e_set_hash_filter_global_config(struct
> i40e_hw *hw,
> I40E_GLQF_HSYM_SYMH_ENA_MASK : 0;
> if (hw->mac.type == I40E_MAC_X722) {
> if (pctype == I40E_FILTER_PCTYPE_NONF_IPV4_UDP) {
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
> I40E_FILTER_PCTYPE_NONF_IPV4_UDP), reg);
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
> I40E_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP),
> reg);
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
> I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP),
> reg);
> } else if (pctype == I40E_FILTER_PCTYPE_NONF_IPV4_TCP) {
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
> I40E_FILTER_PCTYPE_NONF_IPV4_TCP), reg);
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
>
> I40E_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK),
> reg);
> } else if (pctype == I40E_FILTER_PCTYPE_NONF_IPV6_UDP) {
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
> I40E_FILTER_PCTYPE_NONF_IPV6_UDP), reg);
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
> I40E_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP),
> reg);
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
> I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP),
> reg);
> } else if (pctype == I40E_FILTER_PCTYPE_NONF_IPV6_TCP) {
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
> I40E_FILTER_PCTYPE_NONF_IPV6_TCP), reg);
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
>
> I40E_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK),
> reg);
> } else {
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(pctype),
> - reg);
> + i40e_write_global_rx_ctl(hw,
> + I40E_GLQF_HSYM(pctype),
> + reg);
> }
> } else {
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(pctype), reg);
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(pctype),
> + reg);
> }
> i40e_global_cfg_warning(I40E_WARNING_HSYM);
> }
> @@ -7184,7 +7199,7 @@ i40e_set_hash_filter_global_config(struct
> i40e_hw *hw,
> /* Use the default, and keep it as it is */
> goto out;
>
> - i40e_write_rx_ctl(hw, I40E_GLQF_CTL, reg);
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_CTL, reg);
> i40e_global_cfg_warning(I40E_WARNING_QF_CTL);
>
> out:
> @@ -7799,6 +7814,18 @@ i40e_check_write_reg(struct i40e_hw *hw,
> uint32_t addr, uint32_t val) }
>
> static void
> +i40e_check_write_global_reg(struct i40e_hw *hw, uint32_t addr, uint32_t
> +val) {
> + uint32_t reg = i40e_read_rx_ctl(hw, addr);
> +
> + PMD_DRV_LOG(DEBUG, "[0x%08x] original: 0x%08x", addr, reg);
> + if (reg != val)
> + i40e_write_global_rx_ctl(hw, addr, val);
> + PMD_DRV_LOG(DEBUG, "[0x%08x] after: 0x%08x", addr,
> + (uint32_t)i40e_read_rx_ctl(hw, addr)); }
> +
> +static void
> i40e_filter_input_set_init(struct i40e_pf *pf) {
> struct i40e_hw *hw = I40E_PF_TO_HW(pf); @@ -7831,24 +7858,28
> @@ i40e_filter_input_set_init(struct i40e_pf *pf)
> i40e_check_write_reg(hw, I40E_PRTQF_FD_INSET(pctype, 1),
> (uint32_t)((inset_reg >>
> I40E_32_BIT_WIDTH) & UINT32_MAX));
> - i40e_check_write_reg(hw, I40E_GLQF_HASH_INSET(0, pctype),
> + i40e_check_write_global_reg(hw, I40E_GLQF_HASH_INSET(0,
> pctype),
> (uint32_t)(inset_reg & UINT32_MAX));
> - i40e_check_write_reg(hw, I40E_GLQF_HASH_INSET(1, pctype),
> + i40e_check_write_global_reg(hw, I40E_GLQF_HASH_INSET(1,
> pctype),
> (uint32_t)((inset_reg >>
> I40E_32_BIT_WIDTH) & UINT32_MAX));
>
> for (i = 0; i < num; i++) {
> - i40e_check_write_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
> - mask_reg[i]);
> - i40e_check_write_reg(hw, I40E_GLQF_HASH_MSK(i, pctype),
> - mask_reg[i]);
> + i40e_check_write_global_reg(hw,
> + I40E_GLQF_FD_MSK(i, pctype),
> + mask_reg[i]);
> + i40e_check_write_global_reg(hw,
> + I40E_GLQF_HASH_MSK(i, pctype),
> + mask_reg[i]);
> }
> /*clear unused mask registers of the pctype */
> for (i = num; i < I40E_INSET_MASK_NUM_REG; i++) {
> - i40e_check_write_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
> - 0);
> - i40e_check_write_reg(hw, I40E_GLQF_HASH_MSK(i, pctype),
> - 0);
> + i40e_check_write_global_reg(hw,
> + I40E_GLQF_FD_MSK(i, pctype),
> + 0);
> + i40e_check_write_global_reg(hw,
> + I40E_GLQF_HASH_MSK(i, pctype),
> + 0);
> }
> I40E_WRITE_FLUSH(hw);
>
> @@ -7920,20 +7951,20 @@ i40e_hash_filter_inset_select(struct i40e_hw
> *hw,
>
> inset_reg |= i40e_translate_input_set_reg(hw->mac.type, input_set);
>
> - i40e_check_write_reg(hw, I40E_GLQF_HASH_INSET(0, pctype),
> - (uint32_t)(inset_reg & UINT32_MAX));
> - i40e_check_write_reg(hw, I40E_GLQF_HASH_INSET(1, pctype),
> - (uint32_t)((inset_reg >>
> - I40E_32_BIT_WIDTH) & UINT32_MAX));
> + i40e_check_write_global_reg(hw, I40E_GLQF_HASH_INSET(0, pctype),
> + (uint32_t)(inset_reg & UINT32_MAX));
> + i40e_check_write_global_reg(hw, I40E_GLQF_HASH_INSET(1, pctype),
> + (uint32_t)((inset_reg >>
> + I40E_32_BIT_WIDTH) & UINT32_MAX));
> i40e_global_cfg_warning(I40E_WARNING_HASH_INSET);
>
> for (i = 0; i < num; i++)
> - i40e_check_write_reg(hw, I40E_GLQF_HASH_MSK(i, pctype),
> - mask_reg[i]);
> + i40e_check_write_global_reg(hw, I40E_GLQF_HASH_MSK(i,
> pctype),
> + mask_reg[i]);
> /*clear unused mask registers of the pctype */
> for (i = num; i < I40E_INSET_MASK_NUM_REG; i++)
> - i40e_check_write_reg(hw, I40E_GLQF_HASH_MSK(i, pctype),
> - 0);
> + i40e_check_write_global_reg(hw, I40E_GLQF_HASH_MSK(i,
> pctype),
> + 0);
> i40e_global_cfg_warning(I40E_WARNING_HASH_MSK);
> I40E_WRITE_FLUSH(hw);
>
> @@ -8007,12 +8038,12 @@ i40e_fdir_filter_inset_select(struct i40e_pf *pf,
> I40E_32_BIT_WIDTH) & UINT32_MAX));
>
> for (i = 0; i < num; i++)
> - i40e_check_write_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
> - mask_reg[i]);
> + i40e_check_write_global_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
> + mask_reg[i]);
> /*clear unused mask registers of the pctype */
> for (i = num; i < I40E_INSET_MASK_NUM_REG; i++)
> - i40e_check_write_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
> - 0);
> + i40e_check_write_global_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
> + 0);
> i40e_global_cfg_warning(I40E_WARNING_FD_MSK);
> I40E_WRITE_FLUSH(hw);
>
> diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
> index 1d813ef..12b6000 100644
> --- a/drivers/net/i40e/i40e_ethdev.h
> +++ b/drivers/net/i40e/i40e_ethdev.h
> @@ -103,6 +103,14 @@
> (((vf)->version_major == I40E_VIRTCHNL_VERSION_MAJOR) && \
> ((vf)->version_minor == 1))
>
> +static inline void
> +I40E_WRITE_GLB_REG(struct i40e_hw *hw, uint32_t reg, uint32_t value) {
> + I40E_WRITE_REG(hw, reg, value);
> + PMD_DRV_LOG(DEBUG, "Global register 0x%08x is modified "
> + "with value 0x%08x",
> + reg, value);
> +}
> +
> /* index flex payload per layer */
> enum i40e_flxpld_layer_idx {
> I40E_FLXPLD_L2_IDX = 0,
> --
> 2.5.5
>
>
>
> ------------------------------
>
> Message: 2
> Date: Fri, 2 Feb 2018 20:25:09 +0800
> From: Beilei Xing <beilei.xing@intel.com>
> To: dev@dpdk.org, jingjing.wu@intel.com
> Cc: stable@dpdk.org
> Subject: [dpdk-dev] [PATCH v3 3/4] net/i40e: fix multiple driver
> support issue
> Message-ID: <1517574310-93096-4-git-send-email-beilei.xing@intel.com>
>
> This patch provides the option to disable writing some global registers in
> PMD, in order to avoid affecting other drivers, when multiple drivers run on
> the same NIC and control different physical ports. Because there are few
> global resources shared among different physical ports.
>
> Fixes: ec246eeb5da1 ("i40e: use default filter input set on init")
> Fixes: 98f055707685 ("i40e: configure input fields for RSS or flow director")
> Fixes: f05ec7d77e41 ("i40e: initialize flow director flexible payload setting")
> Fixes: e536c2e32883 ("net/i40e: fix parsing QinQ packets type")
> Fixes: 19b16e2f6442 ("ethdev: add vlan type when setting ether type")
> Cc: stable@dpdk.org
>
> Signed-off-by: Beilei Xing <beilei.xing@intel.com>
> ---
> drivers/net/i40e/i40e_ethdev.c | 215
> ++++++++++++++++++++++++++++++++---------
> drivers/net/i40e/i40e_ethdev.h | 2 +
> 2 files changed, 171 insertions(+), 46 deletions(-)
>
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index ef23241..ae0f31a 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -944,6 +944,67 @@ config_floating_veb(struct rte_eth_dev *dev)
> #define I40E_L2_TAGS_S_TAG_SHIFT 1 #define
> I40E_L2_TAGS_S_TAG_MASK I40E_MASK(0x1, I40E_L2_TAGS_S_TAG_SHIFT)
>
> +#define ETH_I40E_SUPPORT_MULTI_DRIVER "support-multi-driver"
> +RTE_PMD_REGISTER_PARAM_STRING(net_i40e,
> + ETH_I40E_SUPPORT_MULTI_DRIVER "=0|1");
> +
> +static int
> +i40e_parse_multi_drv_handler(__rte_unused const char *key,
> + const char *value,
> + void *opaque)
> +{
> + struct i40e_pf *pf;
> + unsigned long support_multi_driver;
> + char *end;
> +
> + pf = (struct i40e_pf *)opaque;
> +
> + errno = 0;
> + support_multi_driver = strtoul(value, &end, 10);
> + if (errno != 0 || end == value || *end != 0) {
> + PMD_DRV_LOG(WARNING, "Wrong global configuration");
> + return -(EINVAL);
> + }
> +
> + if (support_multi_driver == 1 || support_multi_driver == 0)
> + pf->support_multi_driver = (bool)support_multi_driver;
> + else
> + PMD_DRV_LOG(WARNING, "%s must be 1 or 0,",
> + "enable global configuration by default."
> + ETH_I40E_SUPPORT_MULTI_DRIVER);
> + return 0;
> +}
> +
> +static int
> +i40e_support_multi_driver(struct rte_eth_dev *dev) {
> + struct i40e_pf *pf =
> I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> + struct rte_pci_device *pci_dev = dev->pci_dev;
> + static const char *valid_keys[] = {
> + ETH_I40E_SUPPORT_MULTI_DRIVER, NULL};
> + struct rte_kvargs *kvlist;
> +
> + /* Enable global configuration by default */
> + pf->support_multi_driver = false;
> +
> + if (!pci_dev->device.devargs)
> + return 0;
> +
> + kvlist = rte_kvargs_parse(pci_dev->device.devargs->args, valid_keys);
> + if (!kvlist)
> + return -EINVAL;
> +
> + if (rte_kvargs_count(kvlist, ETH_I40E_SUPPORT_MULTI_DRIVER) > 1)
> + PMD_DRV_LOG(WARNING, "More than one argument \"%s\" and
> only "
> + "the first invalid or last valid one is used !",
> + ETH_I40E_SUPPORT_MULTI_DRIVER);
> +
> + rte_kvargs_process(kvlist, ETH_I40E_SUPPORT_MULTI_DRIVER,
> + i40e_parse_multi_drv_handler, pf);
> + rte_kvargs_free(kvlist);
> + return 0;
> +}
> +
> static int
> eth_i40e_dev_init(struct rte_eth_dev *dev) { @@ -993,6 +1054,9 @@
> eth_i40e_dev_init(struct rte_eth_dev *dev)
> hw->bus.func = pci_dev->addr.function;
> hw->adapter_stopped = 0;
>
> + /* Check if need to support multi-driver */
> + i40e_support_multi_driver(dev);
> +
> /* Make sure all is clean before doing PF reset */
> i40e_clear_hw(hw);
>
> @@ -1019,7 +1083,8 @@ eth_i40e_dev_init(struct rte_eth_dev *dev)
> * software. It should be removed once issues are fixed
> * in NVM.
> */
> - i40e_GLQF_reg_init(hw);
> + if (!pf->support_multi_driver)
> + i40e_GLQF_reg_init(hw);
>
> /* Initialize the input set for filters (hash and fd) to default value */
> i40e_filter_input_set_init(pf);
> @@ -1115,11 +1180,14 @@ eth_i40e_dev_init(struct rte_eth_dev *dev)
> i40e_set_fc(hw, &aq_fail, TRUE);
>
> /* Set the global registers with default ether type value */
> - ret = i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
> ETHER_TYPE_VLAN);
> - if (ret != I40E_SUCCESS) {
> - PMD_INIT_LOG(ERR, "Failed to set the default outer "
> - "VLAN ether type");
> - goto err_setup_pf_switch;
> + if (!pf->support_multi_driver) {
> + ret = i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
> + ETHER_TYPE_VLAN);
> + if (ret != I40E_SUCCESS) {
> + PMD_INIT_LOG(ERR, "Failed to set the default outer "
> + "VLAN ether type");
> + goto err_setup_pf_switch;
> + }
> }
>
> /* PF setup, which includes VSI setup */ @@ -2754,11 +2822,17 @@
> i40e_vlan_tpid_set(struct rte_eth_dev *dev,
> uint16_t tpid)
> {
> struct i40e_hw *hw =
> I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> + struct i40e_pf *pf =
> I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> uint64_t reg_r = 0, reg_w = 0;
> uint16_t reg_id = 0;
> int ret = 0;
> int qinq = dev->data->dev_conf.rxmode.hw_vlan_extend;
>
> + if (pf->support_multi_driver) {
> + PMD_DRV_LOG(ERR, "Setting TPID is not supported.");
> + return -ENOTSUP;
> + }
> +
> switch (vlan_type) {
> case ETH_VLAN_TYPE_OUTER:
> if (qinq)
> @@ -3039,20 +3113,25 @@ i40e_flow_ctrl_set(struct rte_eth_dev *dev,
> struct rte_eth_fc_conf *fc_conf)
> I40E_WRITE_REG(hw, I40E_PRTDCB_MFLCN, mflcn_reg);
> }
>
> - /* config the water marker both based on the packets and bytes */
> - I40E_WRITE_GLB_REG(hw, I40E_GLRPB_PHW,
> - (pf->fc_conf.high_water[I40E_MAX_TRAFFIC_CLASS]
> - << I40E_KILOSHIFT) / I40E_PACKET_AVERAGE_SIZE);
> - I40E_WRITE_GLB_REG(hw, I40E_GLRPB_PLW,
> - (pf->fc_conf.low_water[I40E_MAX_TRAFFIC_CLASS]
> - << I40E_KILOSHIFT) / I40E_PACKET_AVERAGE_SIZE);
> - I40E_WRITE_GLB_REG(hw, I40E_GLRPB_GHW,
> - pf->fc_conf.high_water[I40E_MAX_TRAFFIC_CLASS]
> - << I40E_KILOSHIFT);
> - I40E_WRITE_GLB_REG(hw, I40E_GLRPB_GLW,
> - pf->fc_conf.low_water[I40E_MAX_TRAFFIC_CLASS]
> - << I40E_KILOSHIFT);
> - i40e_global_cfg_warning(I40E_WARNING_FLOW_CTL);
> + if (!pf->support_multi_driver) {
> + /* config water marker both based on the packets and bytes */
> + I40E_WRITE_GLB_REG(hw, I40E_GLRPB_PHW,
> + (pf->fc_conf.high_water[I40E_MAX_TRAFFIC_CLASS]
> + << I40E_KILOSHIFT) / I40E_PACKET_AVERAGE_SIZE);
> + I40E_WRITE_GLB_REG(hw, I40E_GLRPB_PLW,
> + (pf->fc_conf.low_water[I40E_MAX_TRAFFIC_CLASS]
> + << I40E_KILOSHIFT) / I40E_PACKET_AVERAGE_SIZE);
> + I40E_WRITE_GLB_REG(hw, I40E_GLRPB_GHW,
> + pf->fc_conf.high_water[I40E_MAX_TRAFFIC_CLASS]
> + << I40E_KILOSHIFT);
> + I40E_WRITE_GLB_REG(hw, I40E_GLRPB_GLW,
> + pf->fc_conf.low_water[I40E_MAX_TRAFFIC_CLASS]
> + << I40E_KILOSHIFT);
> + i40e_global_cfg_warning(I40E_WARNING_FLOW_CTL);
> + } else {
> + PMD_DRV_LOG(ERR,
> + "Water marker configuration is not supported.");
> + }
>
> I40E_WRITE_FLUSH(hw);
>
> @@ -6870,9 +6949,15 @@ i40e_tunnel_filter_param_check(struct i40e_pf
> *pf, static int i40e_dev_set_gre_key_len(struct i40e_hw *hw, uint8_t len)
> {
> + struct i40e_pf *pf = &((struct i40e_adapter *)hw->back)->pf;
> uint32_t val, reg;
> int ret = -EINVAL;
>
> + if (pf->support_multi_driver) {
> + PMD_DRV_LOG(ERR, "GRE key length configuration is
> unsupported");
> + return -ENOTSUP;
> + }
> +
> val = I40E_READ_REG(hw, I40E_GL_PRS_FVBM(2));
> PMD_DRV_LOG(DEBUG, "Read original GL_PRS_FVBM with 0x%08x\n",
> val);
>
> @@ -7114,12 +7199,18 @@ static int
> i40e_set_hash_filter_global_config(struct i40e_hw *hw,
> struct rte_eth_hash_global_conf *g_cfg) {
> + struct i40e_pf *pf = &((struct i40e_adapter *)hw->back)->pf;
> int ret;
> uint16_t i;
> uint32_t reg;
> uint32_t mask0 = g_cfg->valid_bit_mask[0];
> enum i40e_filter_pctype pctype;
>
> + if (pf->support_multi_driver) {
> + PMD_DRV_LOG(ERR, "Hash global configuration is not
> supported.");
> + return -ENOTSUP;
> + }
> +
> /* Check the input parameters */
> ret = i40e_hash_global_config_check(g_cfg);
> if (ret < 0)
> @@ -7850,6 +7941,12 @@ i40e_filter_input_set_init(struct i40e_pf *pf)
> I40E_INSET_MASK_NUM_REG);
> if (num < 0)
> return;
> +
> + if (pf->support_multi_driver && num > 0) {
> + PMD_DRV_LOG(ERR, "Input set setting is not supported.");
> + return;
> + }
> +
> inset_reg = i40e_translate_input_set_reg(hw->mac.type,
> input_set);
>
> @@ -7858,39 +7955,49 @@ i40e_filter_input_set_init(struct i40e_pf *pf)
> i40e_check_write_reg(hw, I40E_PRTQF_FD_INSET(pctype, 1),
> (uint32_t)((inset_reg >>
> I40E_32_BIT_WIDTH) & UINT32_MAX));
> - i40e_check_write_global_reg(hw, I40E_GLQF_HASH_INSET(0,
> pctype),
> - (uint32_t)(inset_reg & UINT32_MAX));
> - i40e_check_write_global_reg(hw, I40E_GLQF_HASH_INSET(1,
> pctype),
> - (uint32_t)((inset_reg >>
> - I40E_32_BIT_WIDTH) & UINT32_MAX));
> -
> - for (i = 0; i < num; i++) {
> + if (!pf->support_multi_driver) {
> + i40e_check_write_global_reg(hw,
> + I40E_GLQF_HASH_INSET(0, pctype),
> + (uint32_t)(inset_reg & UINT32_MAX));
> i40e_check_write_global_reg(hw,
> + I40E_GLQF_HASH_INSET(1, pctype),
> + (uint32_t)((inset_reg >>
> + I40E_32_BIT_WIDTH) & UINT32_MAX));
> +
> + for (i = 0; i < num; i++) {
> + i40e_check_write_global_reg(hw,
> I40E_GLQF_FD_MSK(i, pctype),
> mask_reg[i]);
> - i40e_check_write_global_reg(hw,
> + i40e_check_write_global_reg(hw,
> I40E_GLQF_HASH_MSK(i, pctype),
> mask_reg[i]);
> - }
> - /*clear unused mask registers of the pctype */
> - for (i = num; i < I40E_INSET_MASK_NUM_REG; i++) {
> - i40e_check_write_global_reg(hw,
> + }
> + /*clear unused mask registers of the pctype */
> + for (i = num; i < I40E_INSET_MASK_NUM_REG; i++) {
> + i40e_check_write_global_reg(hw,
> I40E_GLQF_FD_MSK(i, pctype),
> 0);
> - i40e_check_write_global_reg(hw,
> + i40e_check_write_global_reg(hw,
> I40E_GLQF_HASH_MSK(i, pctype),
> - 0);
> + 0);
> + }
> + } else {
> + PMD_DRV_LOG(ERR,
> + "Input set setting is not supported.");
> }
> I40E_WRITE_FLUSH(hw);
>
> /* store the default input set */
> - pf->hash_input_set[pctype] = input_set;
> + if (!pf->support_multi_driver)
> + pf->hash_input_set[pctype] = input_set;
> pf->fdir.input_set[pctype] = input_set;
> }
>
> - i40e_global_cfg_warning(I40E_WARNING_HASH_INSET);
> - i40e_global_cfg_warning(I40E_WARNING_FD_MSK);
> - i40e_global_cfg_warning(I40E_WARNING_HASH_MSK);
> + if (!pf->support_multi_driver) {
> + i40e_global_cfg_warning(I40E_WARNING_HASH_INSET);
> + i40e_global_cfg_warning(I40E_WARNING_FD_MSK);
> + i40e_global_cfg_warning(I40E_WARNING_HASH_MSK);
> + }
> }
>
> int
> @@ -7903,6 +8010,11 @@ i40e_hash_filter_inset_select(struct i40e_hw
> *hw,
> uint32_t mask_reg[I40E_INSET_MASK_NUM_REG] = {0};
> int ret, i, num;
>
> + if (pf->support_multi_driver) {
> + PMD_DRV_LOG(ERR, "Hash input set setting is not supported.");
> + return -ENOTSUP;
> + }
> +
> if (!conf) {
> PMD_DRV_LOG(ERR, "Invalid pointer");
> return -EFAULT;
> @@ -8029,6 +8141,11 @@ i40e_fdir_filter_inset_select(struct i40e_pf *pf,
> if (num < 0)
> return -EINVAL;
>
> + if (pf->support_multi_driver && num > 0) {
> + PMD_DRV_LOG(ERR, "FDIR bit mask is not supported.");
> + return -ENOTSUP;
> + }
> +
> inset_reg |= i40e_translate_input_set_reg(hw->mac.type, input_set);
>
> i40e_check_write_reg(hw, I40E_PRTQF_FD_INSET(pctype, 0), @@
> -8037,14 +8154,20 @@ i40e_fdir_filter_inset_select(struct i40e_pf *pf,
> (uint32_t)((inset_reg >>
> I40E_32_BIT_WIDTH) & UINT32_MAX));
>
> - for (i = 0; i < num; i++)
> - i40e_check_write_global_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
> - mask_reg[i]);
> - /*clear unused mask registers of the pctype */
> - for (i = num; i < I40E_INSET_MASK_NUM_REG; i++)
> - i40e_check_write_global_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
> - 0);
> - i40e_global_cfg_warning(I40E_WARNING_FD_MSK);
> + if (!pf->support_multi_driver) {
> + for (i = 0; i < num; i++)
> + i40e_check_write_global_reg(hw,
> + I40E_GLQF_FD_MSK(i, pctype),
> + mask_reg[i]);
> + /*clear unused mask registers of the pctype */
> + for (i = num; i < I40E_INSET_MASK_NUM_REG; i++)
> + i40e_check_write_global_reg(hw,
> + I40E_GLQF_FD_MSK(i, pctype),
> + 0);
> + i40e_global_cfg_warning(I40E_WARNING_FD_MSK);
> + } else {
> + PMD_DRV_LOG(ERR, "FDIR bit mask is not supported.");
> + }
> I40E_WRITE_FLUSH(hw);
>
> pf->fdir.input_set[pctype] = input_set; diff --git
> a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h index
> 12b6000..82d5501 100644
> --- a/drivers/net/i40e/i40e_ethdev.h
> +++ b/drivers/net/i40e/i40e_ethdev.h
> @@ -485,6 +485,8 @@ struct i40e_pf {
> bool floating_veb; /* The flag to use the floating VEB */
> /* The floating enable flag for the specific VF */
> bool floating_veb_list[I40E_MAX_VF];
> +
> + bool support_multi_driver; /* 1 - support multiple driver */
> };
>
> enum pending_msg {
> --
> 2.5.5
>
>
>
> ------------------------------
>
> Message: 3
> Date: Fri, 2 Feb 2018 20:25:10 +0800
> From: Beilei Xing <beilei.xing@intel.com>
> To: dev@dpdk.org, jingjing.wu@intel.com
> Cc: stable@dpdk.org
> Subject: [dpdk-dev] [PATCH v3 4/4] net/i40e: fix interrupt conflict
> when using multi-driver
> Message-ID: <1517574310-93096-5-git-send-email-beilei.xing@intel.com>
>
> There's interrupt conflict when using DPDK and Linux i40e on different ports
> of the same Ethernet controller, this patch fixes it by switching from IntN to
> Int0 if multiple drivers are used.
>
> Fixes: be6c228d4da3 ("i40e: support Rx interrupt")
> Cc: stable@dpdk.org
>
> Signed-off-by: Beilei Xing <beilei.xing@intel.com>
> ---
> drivers/net/i40e/i40e_ethdev.c | 93
> +++++++++++++++++++++++++--------------
> drivers/net/i40e/i40e_ethdev.h | 10 +++--
> drivers/net/i40e/i40e_ethdev_vf.c | 4 +-
> 3 files changed, 68 insertions(+), 39 deletions(-)
>
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index ae0f31a..cae22e7 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -760,6 +760,23 @@ static inline void i40e_GLQF_reg_init(struct
> i40e_hw *hw)
> i40e_global_cfg_warning(I40E_WARNING_QINQ_PARSER);
> }
>
> +static inline void i40e_config_automask(struct i40e_pf *pf) {
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + uint32_t val;
> +
> + /* INTENA flag is not auto-cleared for interrupt */
> + val = I40E_READ_REG(hw, I40E_GLINT_CTL);
> + val |= I40E_GLINT_CTL_DIS_AUTOMASK_PF0_MASK |
> + I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK;
> +
> + /* If support multi-driver, PF will use INT0. */
> + if (!pf->support_multi_driver)
> + val |= I40E_GLINT_CTL_DIS_AUTOMASK_N_MASK;
> +
> + I40E_WRITE_REG(hw, I40E_GLINT_CTL, val); }
> +
> #define I40E_FLOW_CONTROL_ETHERTYPE 0x8808
>
> /*
> @@ -1077,6 +1094,8 @@ eth_i40e_dev_init(struct rte_eth_dev *dev)
> return ret;
> }
>
> + i40e_config_automask(pf);
> +
> /*
> * To work around the NVM issue, initialize registers
> * for flexible payload and packet type of QinQ by @@ -1463,6 +1482,7
> @@ __vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t msix_vect,
> int i;
> uint32_t val;
> struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
> + struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
>
> /* Bind all RX queues to allocated MSIX interrupt */
> for (i = 0; i < nb_queue; i++) {
> @@ -1481,7 +1501,8 @@ __vsi_queues_bind_intr(struct i40e_vsi *vsi,
> uint16_t msix_vect,
> /* Write first RX queue to Link list register as the head element */
> if (vsi->type != I40E_VSI_SRIOV) {
> uint16_t interval =
> - i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL);
> + i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL,
> + pf->support_multi_driver);
>
> if (msix_vect == I40E_MISC_VEC_ID) {
> I40E_WRITE_REG(hw, I40E_PFINT_LNKLST0, @@ -1539,7
> +1560,6 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi)
> uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
> uint16_t queue_idx = 0;
> int record = 0;
> - uint32_t val;
> int i;
>
> for (i = 0; i < vsi->nb_qps; i++) {
> @@ -1547,13 +1567,6 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi)
> I40E_WRITE_REG(hw, I40E_QINT_RQCTL(vsi->base_queue + i), 0);
> }
>
> - /* INTENA flag is not auto-cleared for interrupt */
> - val = I40E_READ_REG(hw, I40E_GLINT_CTL);
> - val |= I40E_GLINT_CTL_DIS_AUTOMASK_PF0_MASK |
> - I40E_GLINT_CTL_DIS_AUTOMASK_N_MASK |
> - I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK;
> - I40E_WRITE_REG(hw, I40E_GLINT_CTL, val);
> -
> /* VF bind interrupt */
> if (vsi->type == I40E_VSI_SRIOV) {
> __vsi_queues_bind_intr(vsi, msix_vect, @@ -1606,27 +1619,22
> @@ i40e_vsi_enable_queues_intr(struct i40e_vsi *vsi)
> struct rte_eth_dev *dev = vsi->adapter->eth_dev;
> struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
> struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
> - uint16_t interval = i40e_calc_itr_interval(\
> - RTE_LIBRTE_I40E_ITR_INTERVAL);
> + struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
> uint16_t msix_intr, i;
>
> - if (rte_intr_allow_others(intr_handle))
> + if (rte_intr_allow_others(intr_handle) || !pf->support_multi_driver)
> for (i = 0; i < vsi->nb_msix; i++) {
> msix_intr = vsi->msix_intr + i;
> I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTLN(msix_intr - 1),
> - I40E_PFINT_DYN_CTLN_INTENA_MASK |
> - I40E_PFINT_DYN_CTLN_CLEARPBA_MASK |
> - (0 << I40E_PFINT_DYN_CTLN_ITR_INDX_SHIFT) |
> - (interval <<
> - I40E_PFINT_DYN_CTLN_INTERVAL_SHIFT));
> + I40E_PFINT_DYN_CTLN_INTENA_MASK |
> + I40E_PFINT_DYN_CTLN_CLEARPBA_MASK |
> + I40E_PFINT_DYN_CTLN_ITR_INDX_MASK);
> }
> else
> I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
> I40E_PFINT_DYN_CTL0_INTENA_MASK |
> I40E_PFINT_DYN_CTL0_CLEARPBA_MASK |
> - (0 << I40E_PFINT_DYN_CTL0_ITR_INDX_SHIFT) |
> - (interval <<
> - I40E_PFINT_DYN_CTL0_INTERVAL_SHIFT));
> + I40E_PFINT_DYN_CTL0_ITR_INDX_MASK);
>
> I40E_WRITE_FLUSH(hw);
> }
> @@ -1637,16 +1645,18 @@ i40e_vsi_disable_queues_intr(struct i40e_vsi
> *vsi)
> struct rte_eth_dev *dev = vsi->adapter->eth_dev;
> struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
> struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
> + struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
> uint16_t msix_intr, i;
>
> - if (rte_intr_allow_others(intr_handle))
> + if (rte_intr_allow_others(intr_handle) || !pf->support_multi_driver)
> for (i = 0; i < vsi->nb_msix; i++) {
> msix_intr = vsi->msix_intr + i;
> I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTLN(msix_intr - 1),
> - 0);
> + I40E_PFINT_DYN_CTLN_ITR_INDX_MASK);
> }
> else
> - I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0, 0);
> + I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
> + I40E_PFINT_DYN_CTL0_ITR_INDX_MASK);
>
> I40E_WRITE_FLUSH(hw);
> }
> @@ -4618,16 +4628,28 @@ i40e_vsi_setup(struct i40e_pf *pf,
>
> /* VF has MSIX interrupt in VF range, don't allocate here */
> if (type == I40E_VSI_MAIN) {
> - ret = i40e_res_pool_alloc(&pf->msix_pool,
> - RTE_MIN(vsi->nb_qps,
> - RTE_MAX_RXTX_INTR_VEC_ID));
> - if (ret < 0) {
> - PMD_DRV_LOG(ERR, "VSI MAIN %d get heap failed %d",
> - vsi->seid, ret);
> - goto fail_queue_alloc;
> + if (pf->support_multi_driver) {
> + /* If support multi-driver, need to use INT0 instead of
> + * allocating from msix pool. The Msix pool is init from
> + * INT1, so it's OK just set msix_intr to 0 and nb_msix
> + * to 1 without calling i40e_res_pool_alloc.
> + */
> + vsi->msix_intr = 0;
> + vsi->nb_msix = 1;
> + } else {
> + ret = i40e_res_pool_alloc(&pf->msix_pool,
> + RTE_MIN(vsi->nb_qps,
> + RTE_MAX_RXTX_INTR_VEC_ID));
> + if (ret < 0) {
> + PMD_DRV_LOG(ERR,
> + "VSI MAIN %d get heap failed %d",
> + vsi->seid, ret);
> + goto fail_queue_alloc;
> + }
> + vsi->msix_intr = ret;
> + vsi->nb_msix = RTE_MIN(vsi->nb_qps,
> + RTE_MAX_RXTX_INTR_VEC_ID);
> }
> - vsi->msix_intr = ret;
> - vsi->nb_msix = RTE_MIN(vsi->nb_qps,
> RTE_MAX_RXTX_INTR_VEC_ID);
> } else if (type != I40E_VSI_SRIOV) {
> ret = i40e_res_pool_alloc(&pf->msix_pool, 1);
> if (ret < 0) {
> @@ -5540,7 +5562,8 @@ void
> i40e_pf_disable_irq0(struct i40e_hw *hw) {
> /* Disable all interrupt types */
> - I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0, 0);
> + I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
> + I40E_PFINT_DYN_CTL0_ITR_INDX_MASK);
> I40E_WRITE_FLUSH(hw);
> }
>
> @@ -9861,10 +9884,12 @@ i40e_dev_get_dcb_info(struct rte_eth_dev
> *dev, static int i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev,
> uint16_t queue_id) {
> + struct i40e_pf *pf =
> I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
> struct i40e_hw *hw =
> I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> uint16_t interval =
> - i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL);
> + i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL,
> + pf->support_multi_driver);
> uint16_t msix_intr;
>
> msix_intr = intr_handle->intr_vec[queue_id]; diff --git
> a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h index
> 82d5501..77a4466 100644
> --- a/drivers/net/i40e/i40e_ethdev.h
> +++ b/drivers/net/i40e/i40e_ethdev.h
> @@ -720,10 +720,14 @@ i40e_align_floor(int n) }
>
> static inline uint16_t
> -i40e_calc_itr_interval(int16_t interval)
> +i40e_calc_itr_interval(int16_t interval, bool is_multi_drv)
> {
> - if (interval < 0 || interval > I40E_QUEUE_ITR_INTERVAL_MAX)
> - interval = I40E_QUEUE_ITR_INTERVAL_DEFAULT;
> + if (interval < 0 || interval > I40E_QUEUE_ITR_INTERVAL_MAX) {
> + if (is_multi_drv)
> + interval = I40E_QUEUE_ITR_INTERVAL_MAX;
> + else
> + interval = I40E_QUEUE_ITR_INTERVAL_DEFAULT;
> + }
>
> /* Convert to hardware count, as writing each 1 represents 2 us */
> return interval / 2;
> diff --git a/drivers/net/i40e/i40e_ethdev_vf.c
> b/drivers/net/i40e/i40e_ethdev_vf.c
> index 1686914..618c717 100644
> --- a/drivers/net/i40e/i40e_ethdev_vf.c
> +++ b/drivers/net/i40e/i40e_ethdev_vf.c
> @@ -1246,7 +1246,7 @@ i40evf_init_vf(struct rte_eth_dev *dev)
> struct i40e_vf *vf =
> I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> struct ether_addr *p_mac_addr;
> uint16_t interval =
> - i40e_calc_itr_interval(I40E_QUEUE_ITR_INTERVAL_MAX);
> + i40e_calc_itr_interval(I40E_QUEUE_ITR_INTERVAL_MAX, 0);
>
> vf->adapter =
> I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> vf->dev_data = dev->data;
> @@ -1986,7 +1986,7 @@ i40evf_dev_rx_queue_intr_enable(struct
> rte_eth_dev *dev, uint16_t queue_id)
> struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
> struct i40e_hw *hw =
> I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> uint16_t interval =
> - i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL);
> + i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL, 0);
> uint16_t msix_intr;
>
> msix_intr = intr_handle->intr_vec[queue_id];
> --
> 2.5.5
>
>
>
> End of dev Digest, Vol 180, Issue 152
> *************************************
Hi Beilei,
Thanks for clarifying the queries. We have been referring to the following patches:
https://dpdk.org/dev/patchwork/patch/34945/
https://dpdk.org/dev/patchwork/patch/34946/
https://dpdk.org/dev/patchwork/patch/34947/
https://dpdk.org/dev/patchwork/patch/34948/
Are these the final versions, and have they been merged into the DPDK branch? If not, where can I find the latest patches?
Regards,
Nitin
-----Original Message-----
From: Xing, Beilei [mailto:beilei.xing@intel.com]
Sent: Wednesday, February 14, 2018 6:50 AM
To: Nitin Katiyar <nitin.katiyar@ericsson.com>; dev@dpdk.org
Cc: Venkatesan Pradeep <venkatesan.pradeep@ericsson.com>
Subject: RE: Multi-driver support for Fortville
Hi Nitin,
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Nitin Katiyar
> Sent: Tuesday, February 13, 2018 11:48 AM
> To: dev@dpdk.org
> Cc: Venkatesan Pradeep <venkatesan.pradeep@ericsson.com>
> Subject: [dpdk-dev] Multi-driver support for Fortville
>
> Hi,
> Resending the queries with change in subject line.
> 1) With these patches, we have 2 different values for some of the
> global registers depending upon whether single driver or multi-driver
> is using all ports of the NIC. Does it impact any
> functionality/performance if we use DPDK drivers in single driver vs multi-driver support?
Yes. If multi-driver support is enabled:
For functionality, some configurations will not be supported, including flow director flexible payload, RSS input set, RSS bit mask, hash function, symmetric hash, FDIR input set, TPID, flow control watermark and GRE tunnel key length configuration, as well as the QinQ parser and QinQ cloud filter support.
For performance, the PF will use INT0 instead of INTN when multi-driver support is enabled, so there will be many interrupts costing CPU cycles while receiving packets.
> 2) Why can't we have same settings for both the cases? i.e
> Unconditionally programming the global registers in DPDK driver with
> the same values as in Kernel driver. That way we don't have to care for extra parameter.
The reason is the same as above.
> 3) Does this issue need any update for kernel driver also?
As far as I know, there's no need to update the kernel driver.
>
> Regards,
> Nitin
>
> -----Original Message-----
> From: Nitin Katiyar
> Sent: Monday, February 12, 2018 11:32 AM
> To: dev@dpdk.org
> Cc: Venkatesan Pradeep <venkatesan.pradeep@ericsson.com>
> Subject: RE: dev Digest, Vol 180, Issue 152
>
> Hi Beilei,
> I was looking at the patches and have few queries regarding
> support-multi-driver.
> 1) With these patches, we have 2 different values for some of the
> global registers depending upon whether single driver or multi-driver
> is using all ports of the NIC. Does it impact any
> functionality/performance if we use DPDK drivers in single driver vs multi-driver support?
> 2) Why can't we have same settings for both the cases? That way we
> don't have to care for extra parameter.
> 3) Does this issue need any update for kernel driver also?
>
>
> Regards,
> Nitin
>
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of
> dev-request@dpdk.org
> Sent: Friday, February 02, 2018 5:55 PM
> To: dev@dpdk.org
> Subject: dev Digest, Vol 180, Issue 152
>
> Send dev mailing list submissions to
> dev@dpdk.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://dpdk.org/ml/listinfo/dev
> or, via email, send a message with subject or body 'help' to
> dev-request@dpdk.org
>
> You can reach the person managing the list at
> dev-owner@dpdk.org
>
> When replying, please edit your Subject line so it is more specific than "Re:
> Contents of dev digest..."
>
>
> Today's Topics:
>
> 1. [PATCH v3 2/4] net/i40e: add debug logs when writing global
> registers (Beilei Xing)
> 2. [PATCH v3 3/4] net/i40e: fix multiple driver support issue
> (Beilei Xing)
> 3. [PATCH v3 4/4] net/i40e: fix interrupt conflict when using
> multi-driver (Beilei Xing)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 2 Feb 2018 20:25:08 +0800
> From: Beilei Xing <beilei.xing@intel.com>
> To: dev@dpdk.org, jingjing.wu@intel.com
> Cc: stable@dpdk.org
> Subject: [dpdk-dev] [PATCH v3 2/4] net/i40e: add debug logs when
> writing global registers
> Message-ID: <1517574310-93096-3-git-send-email-beilei.xing@intel.com>
>
> Add debug logs when writing global registers.
>
> Signed-off-by: Beilei Xing <beilei.xing@intel.com>
> Cc: stable@dpdk.org
> ---
> drivers/net/i40e/i40e_ethdev.c | 127
> +++++++++++++++++++++++++----------------
> drivers/net/i40e/i40e_ethdev.h | 8 +++
> 2 files changed, 87 insertions(+), 48 deletions(-)
>
> diff --git a/drivers/net/i40e/i40e_ethdev.c
> b/drivers/net/i40e/i40e_ethdev.c index 44821f2..ef23241 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -716,6 +716,15 @@ rte_i40e_dev_atomic_write_link_status(struct
> rte_eth_dev *dev,
> return 0;
> }
>
> +static inline void
> +i40e_write_global_rx_ctl(struct i40e_hw *hw, u32 reg_addr, u32
> +reg_val) {
> + i40e_write_rx_ctl(hw, reg_addr, reg_val);
> + PMD_DRV_LOG(DEBUG, "Global register 0x%08x is modified "
> + "with value 0x%08x",
> + reg_addr, reg_val);
> +}
> +
> RTE_PMD_REGISTER_PCI(net_i40e, rte_i40e_pmd.pci_drv);
> RTE_PMD_REGISTER_PCI_TABLE(net_i40e, pci_id_i40e_map);
>
> @@ -735,9 +744,9 @@ static inline void i40e_GLQF_reg_init(struct
> i40e_hw
> *hw)
> * configuration API is added to avoid configuration conflicts
> * between ports of the same device.
> */
> - I40E_WRITE_REG(hw, I40E_GLQF_ORT(33), 0x000000E0);
> - I40E_WRITE_REG(hw, I40E_GLQF_ORT(34), 0x000000E3);
> - I40E_WRITE_REG(hw, I40E_GLQF_ORT(35), 0x000000E6);
> + I40E_WRITE_GLB_REG(hw, I40E_GLQF_ORT(33), 0x000000E0);
> + I40E_WRITE_GLB_REG(hw, I40E_GLQF_ORT(34), 0x000000E3);
> + I40E_WRITE_GLB_REG(hw, I40E_GLQF_ORT(35), 0x000000E6);
> i40e_global_cfg_warning(I40E_WARNING_ENA_FLX_PLD);
>
> /*
> @@ -746,8 +755,8 @@ static inline void i40e_GLQF_reg_init(struct
> i40e_hw
> *hw)
> * configuration API is added to avoid configuration conflicts
> * between ports of the same device.
> */
> - I40E_WRITE_REG(hw, I40E_GLQF_ORT(40), 0x00000029);
> - I40E_WRITE_REG(hw, I40E_GLQF_PIT(9), 0x00009420);
> + I40E_WRITE_GLB_REG(hw, I40E_GLQF_ORT(40), 0x00000029);
> + I40E_WRITE_GLB_REG(hw, I40E_GLQF_PIT(9), 0x00009420);
> i40e_global_cfg_warning(I40E_WARNING_QINQ_PARSER);
> }
>
> @@ -2799,8 +2808,9 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
> "I40E_GL_SWT_L2TAGCTRL[%d]", reg_id);
> return ret;
> }
> - PMD_DRV_LOG(DEBUG, "Debug write 0x%08"PRIx64" to "
> - "I40E_GL_SWT_L2TAGCTRL[%d]", reg_w, reg_id);
> + PMD_DRV_LOG(DEBUG,
> + "Global register 0x%08x is changed with value 0x%08x",
> + I40E_GL_SWT_L2TAGCTRL(reg_id), (uint32_t)reg_w);
>
> i40e_global_cfg_warning(I40E_WARNING_TPID);
>
> @@ -3030,16 +3040,16 @@ i40e_flow_ctrl_set(struct rte_eth_dev *dev,
> struct rte_eth_fc_conf *fc_conf)
> }
>
> /* config the water marker both based on the packets and bytes */
> - I40E_WRITE_REG(hw, I40E_GLRPB_PHW,
> + I40E_WRITE_GLB_REG(hw, I40E_GLRPB_PHW,
> (pf->fc_conf.high_water[I40E_MAX_TRAFFIC_CLASS]
> << I40E_KILOSHIFT) / I40E_PACKET_AVERAGE_SIZE);
> - I40E_WRITE_REG(hw, I40E_GLRPB_PLW,
> + I40E_WRITE_GLB_REG(hw, I40E_GLRPB_PLW,
> (pf->fc_conf.low_water[I40E_MAX_TRAFFIC_CLASS]
> << I40E_KILOSHIFT) / I40E_PACKET_AVERAGE_SIZE);
> - I40E_WRITE_REG(hw, I40E_GLRPB_GHW,
> + I40E_WRITE_GLB_REG(hw, I40E_GLRPB_GHW,
> pf->fc_conf.high_water[I40E_MAX_TRAFFIC_CLASS]
> << I40E_KILOSHIFT);
> - I40E_WRITE_REG(hw, I40E_GLRPB_GLW,
> + I40E_WRITE_GLB_REG(hw, I40E_GLRPB_GLW,
> pf->fc_conf.low_water[I40E_MAX_TRAFFIC_CLASS]
> << I40E_KILOSHIFT);
> i40e_global_cfg_warning(I40E_WARNING_FLOW_CTL);
> @@ -6880,6 +6890,9 @@ i40e_dev_set_gre_key_len(struct i40e_hw *hw,
> uint8_t len)
> reg, NULL);
> if (ret != 0)
> return ret;
> + PMD_DRV_LOG(DEBUG, "Global register 0x%08x is changed "
> + "with value 0x%08x",
> + I40E_GL_PRS_FVBM(2), reg);
> i40e_global_cfg_warning(I40E_WARNING_GRE_KEY_LEN);
> } else {
> ret = 0;
> @@ -7124,41 +7137,43 @@ i40e_set_hash_filter_global_config(struct
> i40e_hw *hw,
> I40E_GLQF_HSYM_SYMH_ENA_MASK : 0;
> if (hw->mac.type == I40E_MAC_X722) {
> if (pctype == I40E_FILTER_PCTYPE_NONF_IPV4_UDP) {
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
> I40E_FILTER_PCTYPE_NONF_IPV4_UDP), reg);
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
> I40E_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP),
> reg);
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
> I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP),
> reg);
> } else if (pctype == I40E_FILTER_PCTYPE_NONF_IPV4_TCP) {
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
> I40E_FILTER_PCTYPE_NONF_IPV4_TCP), reg);
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
>
> I40E_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK),
> reg);
> } else if (pctype == I40E_FILTER_PCTYPE_NONF_IPV6_UDP) {
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
> I40E_FILTER_PCTYPE_NONF_IPV6_UDP), reg);
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
> I40E_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP),
> reg);
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
> I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP),
> reg);
> } else if (pctype == I40E_FILTER_PCTYPE_NONF_IPV6_TCP) {
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
> I40E_FILTER_PCTYPE_NONF_IPV6_TCP), reg);
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
>
> I40E_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK),
> reg);
> } else {
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(pctype),
> - reg);
> + i40e_write_global_rx_ctl(hw,
> + I40E_GLQF_HSYM(pctype),
> + reg);
> }
> } else {
> - i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(pctype), reg);
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(pctype),
> + reg);
> }
> i40e_global_cfg_warning(I40E_WARNING_HSYM);
> }
> @@ -7184,7 +7199,7 @@ i40e_set_hash_filter_global_config(struct
> i40e_hw *hw,
> /* Use the default, and keep it as it is */
> goto out;
>
> - i40e_write_rx_ctl(hw, I40E_GLQF_CTL, reg);
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_CTL, reg);
> i40e_global_cfg_warning(I40E_WARNING_QF_CTL);
>
> out:
> @@ -7799,6 +7814,18 @@ i40e_check_write_reg(struct i40e_hw *hw,
> uint32_t addr, uint32_t val) }
>
> static void
> +i40e_check_write_global_reg(struct i40e_hw *hw, uint32_t addr,
> +uint32_t
> +val) {
> + uint32_t reg = i40e_read_rx_ctl(hw, addr);
> +
> + PMD_DRV_LOG(DEBUG, "[0x%08x] original: 0x%08x", addr, reg);
> + if (reg != val)
> + i40e_write_global_rx_ctl(hw, addr, val);
> + PMD_DRV_LOG(DEBUG, "[0x%08x] after: 0x%08x", addr,
> + (uint32_t)i40e_read_rx_ctl(hw, addr)); }
> +
> +static void
> i40e_filter_input_set_init(struct i40e_pf *pf) {
> struct i40e_hw *hw = I40E_PF_TO_HW(pf); @@ -7831,24 +7858,28 @@
> i40e_filter_input_set_init(struct i40e_pf *pf)
> i40e_check_write_reg(hw, I40E_PRTQF_FD_INSET(pctype, 1),
> (uint32_t)((inset_reg >>
> I40E_32_BIT_WIDTH) & UINT32_MAX));
> - i40e_check_write_reg(hw, I40E_GLQF_HASH_INSET(0, pctype),
> + i40e_check_write_global_reg(hw, I40E_GLQF_HASH_INSET(0,
> pctype),
> (uint32_t)(inset_reg & UINT32_MAX));
> - i40e_check_write_reg(hw, I40E_GLQF_HASH_INSET(1, pctype),
> + i40e_check_write_global_reg(hw, I40E_GLQF_HASH_INSET(1,
> pctype),
> (uint32_t)((inset_reg >>
> I40E_32_BIT_WIDTH) & UINT32_MAX));
>
> for (i = 0; i < num; i++) {
> - i40e_check_write_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
> - mask_reg[i]);
> - i40e_check_write_reg(hw, I40E_GLQF_HASH_MSK(i, pctype),
> - mask_reg[i]);
> + i40e_check_write_global_reg(hw,
> + I40E_GLQF_FD_MSK(i, pctype),
> + mask_reg[i]);
> + i40e_check_write_global_reg(hw,
> + I40E_GLQF_HASH_MSK(i, pctype),
> + mask_reg[i]);
> }
> /*clear unused mask registers of the pctype */
> for (i = num; i < I40E_INSET_MASK_NUM_REG; i++) {
> - i40e_check_write_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
> - 0);
> - i40e_check_write_reg(hw, I40E_GLQF_HASH_MSK(i, pctype),
> - 0);
> + i40e_check_write_global_reg(hw,
> + I40E_GLQF_FD_MSK(i, pctype),
> + 0);
> + i40e_check_write_global_reg(hw,
> + I40E_GLQF_HASH_MSK(i, pctype),
> + 0);
> }
> I40E_WRITE_FLUSH(hw);
>
> @@ -7920,20 +7951,20 @@ i40e_hash_filter_inset_select(struct i40e_hw
> *hw,
>
> inset_reg |= i40e_translate_input_set_reg(hw->mac.type, input_set);
>
> - i40e_check_write_reg(hw, I40E_GLQF_HASH_INSET(0, pctype),
> - (uint32_t)(inset_reg & UINT32_MAX));
> - i40e_check_write_reg(hw, I40E_GLQF_HASH_INSET(1, pctype),
> - (uint32_t)((inset_reg >>
> - I40E_32_BIT_WIDTH) & UINT32_MAX));
> + i40e_check_write_global_reg(hw, I40E_GLQF_HASH_INSET(0, pctype),
> + (uint32_t)(inset_reg & UINT32_MAX));
> + i40e_check_write_global_reg(hw, I40E_GLQF_HASH_INSET(1, pctype),
> + (uint32_t)((inset_reg >>
> + I40E_32_BIT_WIDTH) & UINT32_MAX));
> i40e_global_cfg_warning(I40E_WARNING_HASH_INSET);
>
> for (i = 0; i < num; i++)
> - i40e_check_write_reg(hw, I40E_GLQF_HASH_MSK(i, pctype),
> - mask_reg[i]);
> + i40e_check_write_global_reg(hw, I40E_GLQF_HASH_MSK(i,
> pctype),
> + mask_reg[i]);
> /*clear unused mask registers of the pctype */
> for (i = num; i < I40E_INSET_MASK_NUM_REG; i++)
> - i40e_check_write_reg(hw, I40E_GLQF_HASH_MSK(i, pctype),
> - 0);
> + i40e_check_write_global_reg(hw, I40E_GLQF_HASH_MSK(i,
> pctype),
> + 0);
> i40e_global_cfg_warning(I40E_WARNING_HASH_MSK);
> I40E_WRITE_FLUSH(hw);
>
> @@ -8007,12 +8038,12 @@ i40e_fdir_filter_inset_select(struct i40e_pf *pf,
> I40E_32_BIT_WIDTH) & UINT32_MAX));
>
> for (i = 0; i < num; i++)
> - i40e_check_write_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
> - mask_reg[i]);
> + i40e_check_write_global_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
> + mask_reg[i]);
> /*clear unused mask registers of the pctype */
> for (i = num; i < I40E_INSET_MASK_NUM_REG; i++)
> - i40e_check_write_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
> - 0);
> + i40e_check_write_global_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
> + 0);
> i40e_global_cfg_warning(I40E_WARNING_FD_MSK);
> I40E_WRITE_FLUSH(hw);
>
> diff --git a/drivers/net/i40e/i40e_ethdev.h
> b/drivers/net/i40e/i40e_ethdev.h index 1d813ef..12b6000 100644
> --- a/drivers/net/i40e/i40e_ethdev.h
> +++ b/drivers/net/i40e/i40e_ethdev.h
> @@ -103,6 +103,14 @@
> (((vf)->version_major == I40E_VIRTCHNL_VERSION_MAJOR) && \
> ((vf)->version_minor == 1))
>
> +static inline void
> +I40E_WRITE_GLB_REG(struct i40e_hw *hw, uint32_t reg, uint32_t value) {
> + I40E_WRITE_REG(hw, reg, value);
> + PMD_DRV_LOG(DEBUG, "Global register 0x%08x is modified "
> + "with value 0x%08x",
> + reg, value);
> +}
> +
> /* index flex payload per layer */
> enum i40e_flxpld_layer_idx {
> I40E_FLXPLD_L2_IDX = 0,
> --
> 2.5.5
>
>
>
> ------------------------------
>
> Message: 2
> Date: Fri, 2 Feb 2018 20:25:09 +0800
> From: Beilei Xing <beilei.xing@intel.com>
> To: dev@dpdk.org, jingjing.wu@intel.com
> Cc: stable@dpdk.org
> Subject: [dpdk-dev] [PATCH v3 3/4] net/i40e: fix multiple driver
> support issue
> Message-ID: <1517574310-93096-4-git-send-email-beilei.xing@intel.com>
>
> This patch provides the option to disable writing some global
> registers in PMD, in order to avoid affecting other drivers, when
> multiple drivers run on the same NIC and control different physical
> ports. Because there are few global resources shared among different physical ports.
>
> Fixes: ec246eeb5da1 ("i40e: use default filter input set on init")
> Fixes: 98f055707685 ("i40e: configure input fields for RSS or flow
> director")
> Fixes: f05ec7d77e41 ("i40e: initialize flow director flexible payload
> setting")
> Fixes: e536c2e32883 ("net/i40e: fix parsing QinQ packets type")
> Fixes: 19b16e2f6442 ("ethdev: add vlan type when setting ether type")
> Cc: stable@dpdk.org
>
> Signed-off-by: Beilei Xing <beilei.xing@intel.com>
> ---
> drivers/net/i40e/i40e_ethdev.c | 215
> ++++++++++++++++++++++++++++++++---------
> drivers/net/i40e/i40e_ethdev.h | 2 +
> 2 files changed, 171 insertions(+), 46 deletions(-)
>
> diff --git a/drivers/net/i40e/i40e_ethdev.c
> b/drivers/net/i40e/i40e_ethdev.c index ef23241..ae0f31a 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -944,6 +944,67 @@ config_floating_veb(struct rte_eth_dev *dev)
> #define I40E_L2_TAGS_S_TAG_SHIFT 1 #define I40E_L2_TAGS_S_TAG_MASK
> I40E_MASK(0x1, I40E_L2_TAGS_S_TAG_SHIFT)
>
> +#define ETH_I40E_SUPPORT_MULTI_DRIVER "support-multi-driver"
> +RTE_PMD_REGISTER_PARAM_STRING(net_i40e,
> + ETH_I40E_SUPPORT_MULTI_DRIVER "=0|1");
> +
> +static int
> +i40e_parse_multi_drv_handler(__rte_unused const char *key,
> + const char *value,
> + void *opaque)
> +{
> + struct i40e_pf *pf;
> + unsigned long support_multi_driver;
> + char *end;
> +
> + pf = (struct i40e_pf *)opaque;
> +
> + errno = 0;
> + support_multi_driver = strtoul(value, &end, 10);
> + if (errno != 0 || end == value || *end != 0) {
> + PMD_DRV_LOG(WARNING, "Wrong global configuration");
> + return -(EINVAL);
> + }
> +
> + if (support_multi_driver == 1 || support_multi_driver == 0)
> + pf->support_multi_driver = (bool)support_multi_driver;
> + else
> + PMD_DRV_LOG(WARNING, "%s must be 1 or 0,",
> + "enable global configuration by default."
> + ETH_I40E_SUPPORT_MULTI_DRIVER);
> + return 0;
> +}
> +
> +static int
> +i40e_support_multi_driver(struct rte_eth_dev *dev) {
> + struct i40e_pf *pf =
> I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> + struct rte_pci_device *pci_dev = dev->pci_dev;
> + static const char *valid_keys[] = {
> + ETH_I40E_SUPPORT_MULTI_DRIVER, NULL};
> + struct rte_kvargs *kvlist;
> +
> + /* Enable global configuration by default */
> + pf->support_multi_driver = false;
> +
> + if (!pci_dev->device.devargs)
> + return 0;
> +
> + kvlist = rte_kvargs_parse(pci_dev->device.devargs->args, valid_keys);
> + if (!kvlist)
> + return -EINVAL;
> +
> + if (rte_kvargs_count(kvlist, ETH_I40E_SUPPORT_MULTI_DRIVER) > 1)
> + PMD_DRV_LOG(WARNING, "More than one argument \"%s\" and
> only "
> + "the first invalid or last valid one is used !",
> + ETH_I40E_SUPPORT_MULTI_DRIVER);
> +
> + rte_kvargs_process(kvlist, ETH_I40E_SUPPORT_MULTI_DRIVER,
> + i40e_parse_multi_drv_handler, pf);
> + rte_kvargs_free(kvlist);
> + return 0;
> +}
> +
> static int
> eth_i40e_dev_init(struct rte_eth_dev *dev) {
> @@ -993,6 +1054,9 @@ eth_i40e_dev_init(struct rte_eth_dev *dev)
> hw->bus.func = pci_dev->addr.function;
> hw->adapter_stopped = 0;
>
> + /* Check if need to support multi-driver */
> + i40e_support_multi_driver(dev);
> +
> /* Make sure all is clean before doing PF reset */
> i40e_clear_hw(hw);
>
> @@ -1019,7 +1083,8 @@ eth_i40e_dev_init(struct rte_eth_dev *dev)
> * software. It should be removed once issues are fixed
> * in NVM.
> */
> - i40e_GLQF_reg_init(hw);
> + if (!pf->support_multi_driver)
> + i40e_GLQF_reg_init(hw);
>
> /* Initialize the input set for filters (hash and fd) to default value */
> i40e_filter_input_set_init(pf);
> @@ -1115,11 +1180,14 @@ eth_i40e_dev_init(struct rte_eth_dev *dev)
> i40e_set_fc(hw, &aq_fail, TRUE);
>
> /* Set the global registers with default ether type value */
> - ret = i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
> ETHER_TYPE_VLAN);
> - if (ret != I40E_SUCCESS) {
> - PMD_INIT_LOG(ERR, "Failed to set the default outer "
> - "VLAN ether type");
> - goto err_setup_pf_switch;
> + if (!pf->support_multi_driver) {
> + ret = i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
> + ETHER_TYPE_VLAN);
> + if (ret != I40E_SUCCESS) {
> + PMD_INIT_LOG(ERR, "Failed to set the default outer "
> + "VLAN ether type");
> + goto err_setup_pf_switch;
> + }
> }
>
> /* PF setup, which includes VSI setup */
> @@ -2754,11 +2822,17 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
> uint16_t tpid)
> {
> struct i40e_hw *hw =
> I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> + struct i40e_pf *pf =
> I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> uint64_t reg_r = 0, reg_w = 0;
> uint16_t reg_id = 0;
> int ret = 0;
> int qinq = dev->data->dev_conf.rxmode.hw_vlan_extend;
>
> + if (pf->support_multi_driver) {
> + PMD_DRV_LOG(ERR, "Setting TPID is not supported.");
> + return -ENOTSUP;
> + }
> +
> switch (vlan_type) {
> case ETH_VLAN_TYPE_OUTER:
> if (qinq)
> @@ -3039,20 +3113,25 @@ i40e_flow_ctrl_set(struct rte_eth_dev *dev,
> struct rte_eth_fc_conf *fc_conf)
> I40E_WRITE_REG(hw, I40E_PRTDCB_MFLCN, mflcn_reg);
> }
>
> - /* config the water marker both based on the packets and bytes */
> - I40E_WRITE_GLB_REG(hw, I40E_GLRPB_PHW,
> - (pf->fc_conf.high_water[I40E_MAX_TRAFFIC_CLASS]
> - << I40E_KILOSHIFT) / I40E_PACKET_AVERAGE_SIZE);
> - I40E_WRITE_GLB_REG(hw, I40E_GLRPB_PLW,
> - (pf->fc_conf.low_water[I40E_MAX_TRAFFIC_CLASS]
> - << I40E_KILOSHIFT) / I40E_PACKET_AVERAGE_SIZE);
> - I40E_WRITE_GLB_REG(hw, I40E_GLRPB_GHW,
> - pf->fc_conf.high_water[I40E_MAX_TRAFFIC_CLASS]
> - << I40E_KILOSHIFT);
> - I40E_WRITE_GLB_REG(hw, I40E_GLRPB_GLW,
> - pf->fc_conf.low_water[I40E_MAX_TRAFFIC_CLASS]
> - << I40E_KILOSHIFT);
> - i40e_global_cfg_warning(I40E_WARNING_FLOW_CTL);
> + if (!pf->support_multi_driver) {
> + /* config water marker both based on the packets and bytes */
> + I40E_WRITE_GLB_REG(hw, I40E_GLRPB_PHW,
> + (pf->fc_conf.high_water[I40E_MAX_TRAFFIC_CLASS]
> + << I40E_KILOSHIFT) / I40E_PACKET_AVERAGE_SIZE);
> + I40E_WRITE_GLB_REG(hw, I40E_GLRPB_PLW,
> + (pf->fc_conf.low_water[I40E_MAX_TRAFFIC_CLASS]
> + << I40E_KILOSHIFT) / I40E_PACKET_AVERAGE_SIZE);
> + I40E_WRITE_GLB_REG(hw, I40E_GLRPB_GHW,
> + pf->fc_conf.high_water[I40E_MAX_TRAFFIC_CLASS]
> + << I40E_KILOSHIFT);
> + I40E_WRITE_GLB_REG(hw, I40E_GLRPB_GLW,
> + pf->fc_conf.low_water[I40E_MAX_TRAFFIC_CLASS]
> + << I40E_KILOSHIFT);
> + i40e_global_cfg_warning(I40E_WARNING_FLOW_CTL);
> + } else {
> + PMD_DRV_LOG(ERR,
> + "Water marker configuration is not supported.");
> + }
>
> I40E_WRITE_FLUSH(hw);
>
> @@ -6870,9 +6949,15 @@ i40e_tunnel_filter_param_check(struct i40e_pf *pf,
> static int
> i40e_dev_set_gre_key_len(struct i40e_hw *hw, uint8_t len) {
> + struct i40e_pf *pf = &((struct i40e_adapter *)hw->back)->pf;
> uint32_t val, reg;
> int ret = -EINVAL;
>
> + if (pf->support_multi_driver) {
> + PMD_DRV_LOG(ERR, "GRE key length configuration is
> unsupported");
> + return -ENOTSUP;
> + }
> +
> val = I40E_READ_REG(hw, I40E_GL_PRS_FVBM(2));
> PMD_DRV_LOG(DEBUG, "Read original GL_PRS_FVBM with 0x%08x\n", val);
>
> @@ -7114,12 +7199,18 @@ static int
> i40e_set_hash_filter_global_config(struct i40e_hw *hw,
> struct rte_eth_hash_global_conf *g_cfg) {
> + struct i40e_pf *pf = &((struct i40e_adapter *)hw->back)->pf;
> int ret;
> uint16_t i;
> uint32_t reg;
> uint32_t mask0 = g_cfg->valid_bit_mask[0];
> enum i40e_filter_pctype pctype;
>
> + if (pf->support_multi_driver) {
> + PMD_DRV_LOG(ERR, "Hash global configuration is not
> supported.");
> + return -ENOTSUP;
> + }
> +
> /* Check the input parameters */
> ret = i40e_hash_global_config_check(g_cfg);
> if (ret < 0)
> @@ -7850,6 +7941,12 @@ i40e_filter_input_set_init(struct i40e_pf *pf)
> I40E_INSET_MASK_NUM_REG);
> if (num < 0)
> return;
> +
> + if (pf->support_multi_driver && num > 0) {
> + PMD_DRV_LOG(ERR, "Input set setting is not supported.");
> + return;
> + }
> +
> inset_reg = i40e_translate_input_set_reg(hw->mac.type,
> input_set);
>
> @@ -7858,39 +7955,49 @@ i40e_filter_input_set_init(struct i40e_pf *pf)
> i40e_check_write_reg(hw, I40E_PRTQF_FD_INSET(pctype, 1),
> (uint32_t)((inset_reg >>
> I40E_32_BIT_WIDTH) & UINT32_MAX));
> - i40e_check_write_global_reg(hw, I40E_GLQF_HASH_INSET(0,
> pctype),
> - (uint32_t)(inset_reg & UINT32_MAX));
> - i40e_check_write_global_reg(hw, I40E_GLQF_HASH_INSET(1,
> pctype),
> - (uint32_t)((inset_reg >>
> - I40E_32_BIT_WIDTH) & UINT32_MAX));
> -
> - for (i = 0; i < num; i++) {
> + if (!pf->support_multi_driver) {
> + i40e_check_write_global_reg(hw,
> + I40E_GLQF_HASH_INSET(0, pctype),
> + (uint32_t)(inset_reg & UINT32_MAX));
> i40e_check_write_global_reg(hw,
> + I40E_GLQF_HASH_INSET(1, pctype),
> + (uint32_t)((inset_reg >>
> + I40E_32_BIT_WIDTH) & UINT32_MAX));
> +
> + for (i = 0; i < num; i++) {
> + i40e_check_write_global_reg(hw,
> I40E_GLQF_FD_MSK(i, pctype),
> mask_reg[i]);
> - i40e_check_write_global_reg(hw,
> + i40e_check_write_global_reg(hw,
> I40E_GLQF_HASH_MSK(i, pctype),
> mask_reg[i]);
> - }
> - /*clear unused mask registers of the pctype */
> - for (i = num; i < I40E_INSET_MASK_NUM_REG; i++) {
> - i40e_check_write_global_reg(hw,
> + }
> + /*clear unused mask registers of the pctype */
> + for (i = num; i < I40E_INSET_MASK_NUM_REG; i++) {
> + i40e_check_write_global_reg(hw,
> I40E_GLQF_FD_MSK(i, pctype),
> 0);
> - i40e_check_write_global_reg(hw,
> + i40e_check_write_global_reg(hw,
> I40E_GLQF_HASH_MSK(i, pctype),
> - 0);
> + 0);
> + }
> + } else {
> + PMD_DRV_LOG(ERR,
> + "Input set setting is not supported.");
> }
> I40E_WRITE_FLUSH(hw);
>
> /* store the default input set */
> - pf->hash_input_set[pctype] = input_set;
> + if (!pf->support_multi_driver)
> + pf->hash_input_set[pctype] = input_set;
> pf->fdir.input_set[pctype] = input_set;
> }
>
> - i40e_global_cfg_warning(I40E_WARNING_HASH_INSET);
> - i40e_global_cfg_warning(I40E_WARNING_FD_MSK);
> - i40e_global_cfg_warning(I40E_WARNING_HASH_MSK);
> + if (!pf->support_multi_driver) {
> + i40e_global_cfg_warning(I40E_WARNING_HASH_INSET);
> + i40e_global_cfg_warning(I40E_WARNING_FD_MSK);
> + i40e_global_cfg_warning(I40E_WARNING_HASH_MSK);
> + }
> }
>
> int
> @@ -7903,6 +8010,11 @@ i40e_hash_filter_inset_select(struct i40e_hw
> *hw,
> uint32_t mask_reg[I40E_INSET_MASK_NUM_REG] = {0};
> int ret, i, num;
>
> + if (pf->support_multi_driver) {
> + PMD_DRV_LOG(ERR, "Hash input set setting is not supported.");
> + return -ENOTSUP;
> + }
> +
> if (!conf) {
> PMD_DRV_LOG(ERR, "Invalid pointer");
> return -EFAULT;
> @@ -8029,6 +8141,11 @@ i40e_fdir_filter_inset_select(struct i40e_pf *pf,
> if (num < 0)
> return -EINVAL;
>
> + if (pf->support_multi_driver && num > 0) {
> + PMD_DRV_LOG(ERR, "FDIR bit mask is not supported.");
> + return -ENOTSUP;
> + }
> +
> inset_reg |= i40e_translate_input_set_reg(hw->mac.type, input_set);
>
> i40e_check_write_reg(hw, I40E_PRTQF_FD_INSET(pctype, 0),
> @@ -8037,14 +8154,20 @@ i40e_fdir_filter_inset_select(struct i40e_pf *pf,
> (uint32_t)((inset_reg >>
> I40E_32_BIT_WIDTH) & UINT32_MAX));
>
> - for (i = 0; i < num; i++)
> - i40e_check_write_global_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
> - mask_reg[i]);
> - /*clear unused mask registers of the pctype */
> - for (i = num; i < I40E_INSET_MASK_NUM_REG; i++)
> - i40e_check_write_global_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
> - 0);
> - i40e_global_cfg_warning(I40E_WARNING_FD_MSK);
> + if (!pf->support_multi_driver) {
> + for (i = 0; i < num; i++)
> + i40e_check_write_global_reg(hw,
> + I40E_GLQF_FD_MSK(i, pctype),
> + mask_reg[i]);
> + /*clear unused mask registers of the pctype */
> + for (i = num; i < I40E_INSET_MASK_NUM_REG; i++)
> + i40e_check_write_global_reg(hw,
> + I40E_GLQF_FD_MSK(i, pctype),
> + 0);
> + i40e_global_cfg_warning(I40E_WARNING_FD_MSK);
> + } else {
> + PMD_DRV_LOG(ERR, "FDIR bit mask is not supported.");
> + }
> I40E_WRITE_FLUSH(hw);
>
> pf->fdir.input_set[pctype] = input_set;
> diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
> index 12b6000..82d5501 100644
> --- a/drivers/net/i40e/i40e_ethdev.h
> +++ b/drivers/net/i40e/i40e_ethdev.h
> @@ -485,6 +485,8 @@ struct i40e_pf {
> bool floating_veb; /* The flag to use the floating VEB */
> /* The floating enable flag for the specific VF */
> bool floating_veb_list[I40E_MAX_VF];
> +
> + bool support_multi_driver; /* 1 - support multiple driver */
> };
>
> enum pending_msg {
> --
> 2.5.5
>
>
>
> ------------------------------
>
> Message: 3
> Date: Fri, 2 Feb 2018 20:25:10 +0800
> From: Beilei Xing <beilei.xing@intel.com>
> To: dev@dpdk.org, jingjing.wu@intel.com
> Cc: stable@dpdk.org
> Subject: [dpdk-dev] [PATCH v3 4/4] net/i40e: fix interrupt conflict
> when using multi-driver
> Message-ID: <1517574310-93096-5-git-send-email-beilei.xing@intel.com>
>
> There is an interrupt conflict when the DPDK and Linux i40e drivers are
> used on different ports of the same Ethernet controller. This patch fixes
> it by switching from IntN to Int0 if multiple drivers are used.
>
> Fixes: be6c228d4da3 ("i40e: support Rx interrupt")
> Cc: stable@dpdk.org
>
> Signed-off-by: Beilei Xing <beilei.xing@intel.com>
> ---
> drivers/net/i40e/i40e_ethdev.c | 93
> +++++++++++++++++++++++++--------------
> drivers/net/i40e/i40e_ethdev.h | 10 +++--
> drivers/net/i40e/i40e_ethdev_vf.c | 4 +-
> 3 files changed, 68 insertions(+), 39 deletions(-)
>
> diff --git a/drivers/net/i40e/i40e_ethdev.c
> b/drivers/net/i40e/i40e_ethdev.c index ae0f31a..cae22e7 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -760,6 +760,23 @@ static inline void i40e_GLQF_reg_init(struct i40e_hw *hw)
> i40e_global_cfg_warning(I40E_WARNING_QINQ_PARSER);
> }
>
> +static inline void i40e_config_automask(struct i40e_pf *pf) {
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + uint32_t val;
> +
> + /* INTENA flag is not auto-cleared for interrupt */
> + val = I40E_READ_REG(hw, I40E_GLINT_CTL);
> + val |= I40E_GLINT_CTL_DIS_AUTOMASK_PF0_MASK |
> + I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK;
> +
> + /* If support multi-driver, PF will use INT0. */
> + if (!pf->support_multi_driver)
> + val |= I40E_GLINT_CTL_DIS_AUTOMASK_N_MASK;
> +
> + I40E_WRITE_REG(hw, I40E_GLINT_CTL, val); }
> +
> #define I40E_FLOW_CONTROL_ETHERTYPE 0x8808
>
> /*
> @@ -1077,6 +1094,8 @@ eth_i40e_dev_init(struct rte_eth_dev *dev)
> return ret;
> }
>
> + i40e_config_automask(pf);
> +
> /*
> * To work around the NVM issue, initialize registers
> * for flexible payload and packet type of QinQ by
> @@ -1463,6 +1482,7 @@ __vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t msix_vect,
> int i;
> uint32_t val;
> struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
> + struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
>
> /* Bind all RX queues to allocated MSIX interrupt */
> for (i = 0; i < nb_queue; i++) {
> @@ -1481,7 +1501,8 @@ __vsi_queues_bind_intr(struct i40e_vsi *vsi,
> uint16_t msix_vect,
> /* Write first RX queue to Link list register as the head element */
> if (vsi->type != I40E_VSI_SRIOV) {
> uint16_t interval =
> - i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL);
> + i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL,
> + pf->support_multi_driver);
>
> if (msix_vect == I40E_MISC_VEC_ID) {
> I40E_WRITE_REG(hw, I40E_PFINT_LNKLST0,
> @@ -1539,7 +1560,6 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi)
> uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
> uint16_t queue_idx = 0;
> int record = 0;
> - uint32_t val;
> int i;
>
> for (i = 0; i < vsi->nb_qps; i++) {
> @@ -1547,13 +1567,6 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi)
> I40E_WRITE_REG(hw, I40E_QINT_RQCTL(vsi->base_queue + i), 0);
> }
>
> - /* INTENA flag is not auto-cleared for interrupt */
> - val = I40E_READ_REG(hw, I40E_GLINT_CTL);
> - val |= I40E_GLINT_CTL_DIS_AUTOMASK_PF0_MASK |
> - I40E_GLINT_CTL_DIS_AUTOMASK_N_MASK |
> - I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK;
> - I40E_WRITE_REG(hw, I40E_GLINT_CTL, val);
> -
> /* VF bind interrupt */
> if (vsi->type == I40E_VSI_SRIOV) {
> __vsi_queues_bind_intr(vsi, msix_vect,
> @@ -1606,27 +1619,22 @@ i40e_vsi_enable_queues_intr(struct i40e_vsi *vsi)
> struct rte_eth_dev *dev = vsi->adapter->eth_dev;
> struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
> struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
> - uint16_t interval = i40e_calc_itr_interval(\
> - RTE_LIBRTE_I40E_ITR_INTERVAL);
> + struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
> uint16_t msix_intr, i;
>
> - if (rte_intr_allow_others(intr_handle))
> + if (rte_intr_allow_others(intr_handle) || !pf->support_multi_driver)
> for (i = 0; i < vsi->nb_msix; i++) {
> msix_intr = vsi->msix_intr + i;
> I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTLN(msix_intr - 1),
> - I40E_PFINT_DYN_CTLN_INTENA_MASK |
> - I40E_PFINT_DYN_CTLN_CLEARPBA_MASK |
> - (0 << I40E_PFINT_DYN_CTLN_ITR_INDX_SHIFT) |
> - (interval <<
> - I40E_PFINT_DYN_CTLN_INTERVAL_SHIFT));
> + I40E_PFINT_DYN_CTLN_INTENA_MASK |
> + I40E_PFINT_DYN_CTLN_CLEARPBA_MASK |
> + I40E_PFINT_DYN_CTLN_ITR_INDX_MASK);
> }
> else
> I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
> I40E_PFINT_DYN_CTL0_INTENA_MASK |
> I40E_PFINT_DYN_CTL0_CLEARPBA_MASK |
> - (0 << I40E_PFINT_DYN_CTL0_ITR_INDX_SHIFT) |
> - (interval <<
> - I40E_PFINT_DYN_CTL0_INTERVAL_SHIFT));
> + I40E_PFINT_DYN_CTL0_ITR_INDX_MASK);
>
> I40E_WRITE_FLUSH(hw);
> }
> @@ -1637,16 +1645,18 @@ i40e_vsi_disable_queues_intr(struct i40e_vsi
> *vsi)
> struct rte_eth_dev *dev = vsi->adapter->eth_dev;
> struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
> struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
> + struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
> uint16_t msix_intr, i;
>
> - if (rte_intr_allow_others(intr_handle))
> + if (rte_intr_allow_others(intr_handle) || !pf->support_multi_driver)
> for (i = 0; i < vsi->nb_msix; i++) {
> msix_intr = vsi->msix_intr + i;
> I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTLN(msix_intr - 1),
> - 0);
> + I40E_PFINT_DYN_CTLN_ITR_INDX_MASK);
> }
> else
> - I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0, 0);
> + I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
> + I40E_PFINT_DYN_CTL0_ITR_INDX_MASK);
>
> I40E_WRITE_FLUSH(hw);
> }
> @@ -4618,16 +4628,28 @@ i40e_vsi_setup(struct i40e_pf *pf,
>
> /* VF has MSIX interrupt in VF range, don't allocate here */
> if (type == I40E_VSI_MAIN) {
> - ret = i40e_res_pool_alloc(&pf->msix_pool,
> - RTE_MIN(vsi->nb_qps,
> - RTE_MAX_RXTX_INTR_VEC_ID));
> - if (ret < 0) {
> - PMD_DRV_LOG(ERR, "VSI MAIN %d get heap failed %d",
> - vsi->seid, ret);
> - goto fail_queue_alloc;
> + if (pf->support_multi_driver) {
> + /* If support multi-driver, need to use INT0 instead of
> + * allocating from msix pool. The Msix pool is init from
> + * INT1, so it's OK just set msix_intr to 0 and nb_msix
> + * to 1 without calling i40e_res_pool_alloc.
> + */
> + vsi->msix_intr = 0;
> + vsi->nb_msix = 1;
> + } else {
> + ret = i40e_res_pool_alloc(&pf->msix_pool,
> + RTE_MIN(vsi->nb_qps,
> + RTE_MAX_RXTX_INTR_VEC_ID));
> + if (ret < 0) {
> + PMD_DRV_LOG(ERR,
> + "VSI MAIN %d get heap failed %d",
> + vsi->seid, ret);
> + goto fail_queue_alloc;
> + }
> + vsi->msix_intr = ret;
> + vsi->nb_msix = RTE_MIN(vsi->nb_qps,
> + RTE_MAX_RXTX_INTR_VEC_ID);
> }
> - vsi->msix_intr = ret;
> - vsi->nb_msix = RTE_MIN(vsi->nb_qps,
> RTE_MAX_RXTX_INTR_VEC_ID);
> } else if (type != I40E_VSI_SRIOV) {
> ret = i40e_res_pool_alloc(&pf->msix_pool, 1);
> if (ret < 0) {
> @@ -5540,7 +5562,8 @@ void
> i40e_pf_disable_irq0(struct i40e_hw *hw) {
> /* Disable all interrupt types */
> - I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0, 0);
> + I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
> + I40E_PFINT_DYN_CTL0_ITR_INDX_MASK);
> I40E_WRITE_FLUSH(hw);
> }
>
> @@ -9861,10 +9884,12 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
> static int
> i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) {
> + struct i40e_pf *pf =
> I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
> struct i40e_hw *hw =
> I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> uint16_t interval =
> - i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL);
> + i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL,
> + pf->support_multi_driver);
> uint16_t msix_intr;
>
> msix_intr = intr_handle->intr_vec[queue_id];
> diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
> index 82d5501..77a4466 100644
> --- a/drivers/net/i40e/i40e_ethdev.h
> +++ b/drivers/net/i40e/i40e_ethdev.h
> @@ -720,10 +720,14 @@ i40e_align_floor(int n)
> }
>
> static inline uint16_t
> -i40e_calc_itr_interval(int16_t interval)
> +i40e_calc_itr_interval(int16_t interval, bool is_multi_drv)
> {
> - if (interval < 0 || interval > I40E_QUEUE_ITR_INTERVAL_MAX)
> - interval = I40E_QUEUE_ITR_INTERVAL_DEFAULT;
> + if (interval < 0 || interval > I40E_QUEUE_ITR_INTERVAL_MAX) {
> + if (is_multi_drv)
> + interval = I40E_QUEUE_ITR_INTERVAL_MAX;
> + else
> + interval = I40E_QUEUE_ITR_INTERVAL_DEFAULT;
> + }
>
> /* Convert to hardware count, as writing each 1 represents 2 us */
> return interval / 2;
> diff --git a/drivers/net/i40e/i40e_ethdev_vf.c
> b/drivers/net/i40e/i40e_ethdev_vf.c
> index 1686914..618c717 100644
> --- a/drivers/net/i40e/i40e_ethdev_vf.c
> +++ b/drivers/net/i40e/i40e_ethdev_vf.c
> @@ -1246,7 +1246,7 @@ i40evf_init_vf(struct rte_eth_dev *dev)
> struct i40e_vf *vf =
> I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> struct ether_addr *p_mac_addr;
> uint16_t interval =
> - i40e_calc_itr_interval(I40E_QUEUE_ITR_INTERVAL_MAX);
> + i40e_calc_itr_interval(I40E_QUEUE_ITR_INTERVAL_MAX, 0);
>
> vf->adapter =
> I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> vf->dev_data = dev->data;
> @@ -1986,7 +1986,7 @@ i40evf_dev_rx_queue_intr_enable(struct
> rte_eth_dev *dev, uint16_t queue_id)
> struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
> struct i40e_hw *hw =
> I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> uint16_t interval =
> - i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL);
> + i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL, 0);
> uint16_t msix_intr;
>
> msix_intr = intr_handle->intr_vec[queue_id];
> --
> 2.5.5
>
>
>
> End of dev Digest, Vol 180, Issue 152
> *************************************
Hi Nitin,
> -----Original Message-----
> From: Nitin Katiyar [mailto:nitin.katiyar@ericsson.com]
> Sent: Wednesday, February 14, 2018 6:53 PM
> To: Xing, Beilei <beilei.xing@intel.com>; dev@dpdk.org
> Cc: Venkatesan Pradeep <venkatesan.pradeep@ericsson.com>
> Subject: RE: Multi-driver support for Fortville
>
> Hi Beilei,
> Thanks for clarifying the queries. We have been referring to the following
> patches.
> https://dpdk.org/dev/patchwork/patch/34945/
> https://dpdk.org/dev/patchwork/patch/34946/
> https://dpdk.org/dev/patchwork/patch/34947/
> https://dpdk.org/dev/patchwork/patch/34948/
>
> Are these the final versions, and are they merged in the DPDK tree? If not,
> where can I find the latest patches?
Sorry for the late reply due to the Chinese New Year holiday.
Yes, they are the final versions and have been applied to the DPDK master branch.
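If you want to double-check locally, a rough sketch assuming a current checkout of the dpdk master tree:

    git log --oneline -- drivers/net/i40e | head -20   # recent i40e commits; adjust the count as needed

The commits from this series should appear near the top of that list.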
>
> Regards,
> Nitin
>
>
>
>
> -----Original Message-----
> From: Xing, Beilei [mailto:beilei.xing@intel.com]
> Sent: Wednesday, February 14, 2018 6:50 AM
> To: Nitin Katiyar <nitin.katiyar@ericsson.com>; dev@dpdk.org
> Cc: Venkatesan Pradeep <venkatesan.pradeep@ericsson.com>
> Subject: RE: Multi-driver support for Fortville
>
> Hi Nitin,
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Nitin Katiyar
> > Sent: Tuesday, February 13, 2018 11:48 AM
> > To: dev@dpdk.org
> > Cc: Venkatesan Pradeep <venkatesan.pradeep@ericsson.com>
> > Subject: [dpdk-dev] Multi-driver support for Fortville
> >
> > Hi,
> > Resending the queries with change in subject line.
> > 1) With these patches, we have 2 different values for some of the
> > global registers depending upon whether single driver or multi-driver
> > is using all ports of the NIC. Does it impact any
> > functionality/performance if we use DPDK drivers in single driver vs multi-
> driver support?
>
> Yes. If multi-driver support is enabled:
> For functionality, some configurations will not be supported, including flow
> director flexible payload, RSS input set, RSS bit mask, hash function,
> symmetric hash, FDIR input set, TPID, flow control watermark, GRE tunnel key
> length configuration, the QinQ parser and QinQ cloud filter support.
> For performance, the PF will use INT0 instead of INTN when multi-driver support
> is enabled, so there will be many interrupts costing CPU cycles while
> receiving packets.
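For reference, a minimal usage sketch of the new devarg (assuming a testpmd binary built from this tree; the PCI address below is only a placeholder):

    ./testpmd -w 0000:82:00.0,support-multi-driver=1 -- -i   # 0000:82:00.0 is a placeholder address

Ports probed without the devarg keep the default single-driver behaviour, so the PMD still programs the shared global registers for them.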
>
> > 2) Why can't we have same settings for both the cases? i.e
> > Unconditionally programming the global registers in DPDK driver with
> > the same values as in Kernel driver. That way we don't have to care for
> extra parameter.
>
> The reason is the same as above.
>
> > 3) Does this issue need any update for kernel driver also?
>
> As far as I know, there is no need to update the kernel driver.
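For the mixed setup being discussed here (kernel i40e on some ports and a DPDK PMD on the others of the same NIC), a quick way to check which driver each port is currently bound to, sketched assuming the devbind script shipped in the DPDK tree:

    ./usertools/dpdk-devbind.py --status   # lists ports bound to kernel drivers vs. DPDK-compatible drivers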
>
> >
> > Regards,
> > Nitin
@@ -716,6 +716,15 @@ rte_i40e_dev_atomic_write_link_status(struct rte_eth_dev *dev,
return 0;
}
+static inline void
+i40e_write_global_rx_ctl(struct i40e_hw *hw, u32 reg_addr, u32 reg_val)
+{
+ i40e_write_rx_ctl(hw, reg_addr, reg_val);
+ PMD_DRV_LOG(DEBUG, "Global register 0x%08x is modified "
+ "with value 0x%08x",
+ reg_addr, reg_val);
+}
+
RTE_PMD_REGISTER_PCI(net_i40e, rte_i40e_pmd.pci_drv);
RTE_PMD_REGISTER_PCI_TABLE(net_i40e, pci_id_i40e_map);
@@ -735,9 +744,9 @@ static inline void i40e_GLQF_reg_init(struct i40e_hw *hw)
* configuration API is added to avoid configuration conflicts
* between ports of the same device.
*/
- I40E_WRITE_REG(hw, I40E_GLQF_ORT(33), 0x000000E0);
- I40E_WRITE_REG(hw, I40E_GLQF_ORT(34), 0x000000E3);
- I40E_WRITE_REG(hw, I40E_GLQF_ORT(35), 0x000000E6);
+ I40E_WRITE_GLB_REG(hw, I40E_GLQF_ORT(33), 0x000000E0);
+ I40E_WRITE_GLB_REG(hw, I40E_GLQF_ORT(34), 0x000000E3);
+ I40E_WRITE_GLB_REG(hw, I40E_GLQF_ORT(35), 0x000000E6);
i40e_global_cfg_warning(I40E_WARNING_ENA_FLX_PLD);
/*
@@ -746,8 +755,8 @@ static inline void i40e_GLQF_reg_init(struct i40e_hw *hw)
* configuration API is added to avoid configuration conflicts
* between ports of the same device.
*/
- I40E_WRITE_REG(hw, I40E_GLQF_ORT(40), 0x00000029);
- I40E_WRITE_REG(hw, I40E_GLQF_PIT(9), 0x00009420);
+ I40E_WRITE_GLB_REG(hw, I40E_GLQF_ORT(40), 0x00000029);
+ I40E_WRITE_GLB_REG(hw, I40E_GLQF_PIT(9), 0x00009420);
i40e_global_cfg_warning(I40E_WARNING_QINQ_PARSER);
}
@@ -2799,8 +2808,9 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
"I40E_GL_SWT_L2TAGCTRL[%d]", reg_id);
return ret;
}
- PMD_DRV_LOG(DEBUG, "Debug write 0x%08"PRIx64" to "
- "I40E_GL_SWT_L2TAGCTRL[%d]", reg_w, reg_id);
+ PMD_DRV_LOG(DEBUG,
+ "Global register 0x%08x is changed with value 0x%08x",
+ I40E_GL_SWT_L2TAGCTRL(reg_id), (uint32_t)reg_w);
i40e_global_cfg_warning(I40E_WARNING_TPID);
@@ -3030,16 +3040,16 @@ i40e_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
/* config the water marker both based on the packets and bytes */
- I40E_WRITE_REG(hw, I40E_GLRPB_PHW,
+ I40E_WRITE_GLB_REG(hw, I40E_GLRPB_PHW,
(pf->fc_conf.high_water[I40E_MAX_TRAFFIC_CLASS]
<< I40E_KILOSHIFT) / I40E_PACKET_AVERAGE_SIZE);
- I40E_WRITE_REG(hw, I40E_GLRPB_PLW,
+ I40E_WRITE_GLB_REG(hw, I40E_GLRPB_PLW,
(pf->fc_conf.low_water[I40E_MAX_TRAFFIC_CLASS]
<< I40E_KILOSHIFT) / I40E_PACKET_AVERAGE_SIZE);
- I40E_WRITE_REG(hw, I40E_GLRPB_GHW,
+ I40E_WRITE_GLB_REG(hw, I40E_GLRPB_GHW,
pf->fc_conf.high_water[I40E_MAX_TRAFFIC_CLASS]
<< I40E_KILOSHIFT);
- I40E_WRITE_REG(hw, I40E_GLRPB_GLW,
+ I40E_WRITE_GLB_REG(hw, I40E_GLRPB_GLW,
pf->fc_conf.low_water[I40E_MAX_TRAFFIC_CLASS]
<< I40E_KILOSHIFT);
i40e_global_cfg_warning(I40E_WARNING_FLOW_CTL);
@@ -6880,6 +6890,9 @@ i40e_dev_set_gre_key_len(struct i40e_hw *hw, uint8_t len)
reg, NULL);
if (ret != 0)
return ret;
+ PMD_DRV_LOG(DEBUG, "Global register 0x%08x is changed "
+ "with value 0x%08x",
+ I40E_GL_PRS_FVBM(2), reg);
i40e_global_cfg_warning(I40E_WARNING_GRE_KEY_LEN);
} else {
ret = 0;
@@ -7124,41 +7137,43 @@ i40e_set_hash_filter_global_config(struct i40e_hw *hw,
I40E_GLQF_HSYM_SYMH_ENA_MASK : 0;
if (hw->mac.type == I40E_MAC_X722) {
if (pctype == I40E_FILTER_PCTYPE_NONF_IPV4_UDP) {
- i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
+ i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
I40E_FILTER_PCTYPE_NONF_IPV4_UDP), reg);
- i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
+ i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
I40E_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP),
reg);
- i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
+ i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP),
reg);
} else if (pctype == I40E_FILTER_PCTYPE_NONF_IPV4_TCP) {
- i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
+ i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
I40E_FILTER_PCTYPE_NONF_IPV4_TCP), reg);
- i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
+ i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
I40E_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK),
reg);
} else if (pctype == I40E_FILTER_PCTYPE_NONF_IPV6_UDP) {
- i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
+ i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
I40E_FILTER_PCTYPE_NONF_IPV6_UDP), reg);
- i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
+ i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
I40E_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP),
reg);
- i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
+ i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP),
reg);
} else if (pctype == I40E_FILTER_PCTYPE_NONF_IPV6_TCP) {
- i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
+ i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
I40E_FILTER_PCTYPE_NONF_IPV6_TCP), reg);
- i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(
+ i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(
I40E_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK),
reg);
} else {
- i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(pctype),
- reg);
+ i40e_write_global_rx_ctl(hw,
+ I40E_GLQF_HSYM(pctype),
+ reg);
}
} else {
- i40e_write_rx_ctl(hw, I40E_GLQF_HSYM(pctype), reg);
+ i40e_write_global_rx_ctl(hw, I40E_GLQF_HSYM(pctype),
+ reg);
}
i40e_global_cfg_warning(I40E_WARNING_HSYM);
}
@@ -7184,7 +7199,7 @@ i40e_set_hash_filter_global_config(struct i40e_hw *hw,
/* Use the default, and keep it as it is */
goto out;
- i40e_write_rx_ctl(hw, I40E_GLQF_CTL, reg);
+ i40e_write_global_rx_ctl(hw, I40E_GLQF_CTL, reg);
i40e_global_cfg_warning(I40E_WARNING_QF_CTL);
out:
@@ -7799,6 +7814,18 @@ i40e_check_write_reg(struct i40e_hw *hw, uint32_t addr, uint32_t val)
}
static void
+i40e_check_write_global_reg(struct i40e_hw *hw, uint32_t addr, uint32_t val)
+{
+ uint32_t reg = i40e_read_rx_ctl(hw, addr);
+
+ PMD_DRV_LOG(DEBUG, "[0x%08x] original: 0x%08x", addr, reg);
+ if (reg != val)
+ i40e_write_global_rx_ctl(hw, addr, val);
+ PMD_DRV_LOG(DEBUG, "[0x%08x] after: 0x%08x", addr,
+ (uint32_t)i40e_read_rx_ctl(hw, addr));
+}
+
+static void
i40e_filter_input_set_init(struct i40e_pf *pf)
{
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
@@ -7831,24 +7858,28 @@ i40e_filter_input_set_init(struct i40e_pf *pf)
i40e_check_write_reg(hw, I40E_PRTQF_FD_INSET(pctype, 1),
(uint32_t)((inset_reg >>
I40E_32_BIT_WIDTH) & UINT32_MAX));
- i40e_check_write_reg(hw, I40E_GLQF_HASH_INSET(0, pctype),
+ i40e_check_write_global_reg(hw, I40E_GLQF_HASH_INSET(0, pctype),
(uint32_t)(inset_reg & UINT32_MAX));
- i40e_check_write_reg(hw, I40E_GLQF_HASH_INSET(1, pctype),
+ i40e_check_write_global_reg(hw, I40E_GLQF_HASH_INSET(1, pctype),
(uint32_t)((inset_reg >>
I40E_32_BIT_WIDTH) & UINT32_MAX));
for (i = 0; i < num; i++) {
- i40e_check_write_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
- mask_reg[i]);
- i40e_check_write_reg(hw, I40E_GLQF_HASH_MSK(i, pctype),
- mask_reg[i]);
+ i40e_check_write_global_reg(hw,
+ I40E_GLQF_FD_MSK(i, pctype),
+ mask_reg[i]);
+ i40e_check_write_global_reg(hw,
+ I40E_GLQF_HASH_MSK(i, pctype),
+ mask_reg[i]);
}
/*clear unused mask registers of the pctype */
for (i = num; i < I40E_INSET_MASK_NUM_REG; i++) {
- i40e_check_write_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
- 0);
- i40e_check_write_reg(hw, I40E_GLQF_HASH_MSK(i, pctype),
- 0);
+ i40e_check_write_global_reg(hw,
+ I40E_GLQF_FD_MSK(i, pctype),
+ 0);
+ i40e_check_write_global_reg(hw,
+ I40E_GLQF_HASH_MSK(i, pctype),
+ 0);
}
I40E_WRITE_FLUSH(hw);
@@ -7920,20 +7951,20 @@ i40e_hash_filter_inset_select(struct i40e_hw *hw,
inset_reg |= i40e_translate_input_set_reg(hw->mac.type, input_set);
- i40e_check_write_reg(hw, I40E_GLQF_HASH_INSET(0, pctype),
- (uint32_t)(inset_reg & UINT32_MAX));
- i40e_check_write_reg(hw, I40E_GLQF_HASH_INSET(1, pctype),
- (uint32_t)((inset_reg >>
- I40E_32_BIT_WIDTH) & UINT32_MAX));
+ i40e_check_write_global_reg(hw, I40E_GLQF_HASH_INSET(0, pctype),
+ (uint32_t)(inset_reg & UINT32_MAX));
+ i40e_check_write_global_reg(hw, I40E_GLQF_HASH_INSET(1, pctype),
+ (uint32_t)((inset_reg >>
+ I40E_32_BIT_WIDTH) & UINT32_MAX));
i40e_global_cfg_warning(I40E_WARNING_HASH_INSET);
for (i = 0; i < num; i++)
- i40e_check_write_reg(hw, I40E_GLQF_HASH_MSK(i, pctype),
- mask_reg[i]);
+ i40e_check_write_global_reg(hw, I40E_GLQF_HASH_MSK(i, pctype),
+ mask_reg[i]);
/*clear unused mask registers of the pctype */
for (i = num; i < I40E_INSET_MASK_NUM_REG; i++)
- i40e_check_write_reg(hw, I40E_GLQF_HASH_MSK(i, pctype),
- 0);
+ i40e_check_write_global_reg(hw, I40E_GLQF_HASH_MSK(i, pctype),
+ 0);
i40e_global_cfg_warning(I40E_WARNING_HASH_MSK);
I40E_WRITE_FLUSH(hw);
@@ -8007,12 +8038,12 @@ i40e_fdir_filter_inset_select(struct i40e_pf *pf,
I40E_32_BIT_WIDTH) & UINT32_MAX));
for (i = 0; i < num; i++)
- i40e_check_write_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
- mask_reg[i]);
+ i40e_check_write_global_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
+ mask_reg[i]);
/*clear unused mask registers of the pctype */
for (i = num; i < I40E_INSET_MASK_NUM_REG; i++)
- i40e_check_write_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
- 0);
+ i40e_check_write_global_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
+ 0);
i40e_global_cfg_warning(I40E_WARNING_FD_MSK);
I40E_WRITE_FLUSH(hw);
@@ -103,6 +103,14 @@
(((vf)->version_major == I40E_VIRTCHNL_VERSION_MAJOR) && \
((vf)->version_minor == 1))
+static inline void
+I40E_WRITE_GLB_REG(struct i40e_hw *hw, uint32_t reg, uint32_t value)
+{
+ I40E_WRITE_REG(hw, reg, value);
+ PMD_DRV_LOG(DEBUG, "Global register 0x%08x is modified "
+ "with value 0x%08x",
+ reg, value);
+}
+
/* index flex payload per layer */
enum i40e_flxpld_layer_idx {
I40E_FLXPLD_L2_IDX = 0,
--
2.5.5
------------------------------
Message: 2
Date: Fri, 2 Feb 2018 20:25:09 +0800
From: Beilei Xing <beilei.xing@intel.com>
To: dev@dpdk.org, jingjing.wu@intel.com
Cc: stable@dpdk.org
Subject: [dpdk-dev] [PATCH v3 3/4] net/i40e: fix multiple driver
support issue
Message-ID: <1517574310-93096-4-git-send-email-beilei.xing@intel.com>
This patch provides an option to disable writing some global registers in the PMD, in order to avoid affecting other drivers when multiple drivers run on the same NIC and control different physical ports, because a few global resources are shared among the physical ports.
Fixes: ec246eeb5da1 ("i40e: use default filter input set on init")
Fixes: 98f055707685 ("i40e: configure input fields for RSS or flow director")
Fixes: f05ec7d77e41 ("i40e: initialize flow director flexible payload setting")
Fixes: e536c2e32883 ("net/i40e: fix parsing QinQ packets type")
Fixes: 19b16e2f6442 ("ethdev: add vlan type when setting ether type")
Cc: stable@dpdk.org
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
drivers/net/i40e/i40e_ethdev.c | 215 ++++++++++++++++++++++++++++++++---------
drivers/net/i40e/i40e_ethdev.h | 2 +
2 files changed, 171 insertions(+), 46 deletions(-)
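For reference, a minimal usage sketch (not part of the patch; the application name and PCI address are placeholders): the new devarg is passed on the EAL device whitelist, and the PMD then skips the global register writes that the hunks below guard.

#include <rte_eal.h>

int main(void)
{
	/* Placeholder device address; appending ",support-multi-driver=1"
	 * to the whitelisted device hands the devarg to the i40e PMD, which
	 * then leaves the shared global registers untouched. */
	char *argv[] = { "app", "-w", "0000:02:00.0,support-multi-driver=1" };

	return rte_eal_init(3, argv) < 0 ? -1 : 0;
}

The same "support-multi-driver=1" string can be appended to testpmd's -w argument to get the same behaviour.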
@@ -944,6 +944,67 @@ config_floating_veb(struct rte_eth_dev *dev)
#define I40E_L2_TAGS_S_TAG_SHIFT 1
#define I40E_L2_TAGS_S_TAG_MASK I40E_MASK(0x1, I40E_L2_TAGS_S_TAG_SHIFT)
+#define ETH_I40E_SUPPORT_MULTI_DRIVER "support-multi-driver"
+RTE_PMD_REGISTER_PARAM_STRING(net_i40e,
+ ETH_I40E_SUPPORT_MULTI_DRIVER "=0|1");
+
+static int
+i40e_parse_multi_drv_handler(__rte_unused const char *key,
+ const char *value,
+ void *opaque)
+{
+ struct i40e_pf *pf;
+ unsigned long support_multi_driver;
+ char *end;
+
+ pf = (struct i40e_pf *)opaque;
+
+ errno = 0;
+ support_multi_driver = strtoul(value, &end, 10);
+ if (errno != 0 || end == value || *end != 0) {
+ PMD_DRV_LOG(WARNING, "Wrong global configuration");
+ return -(EINVAL);
+ }
+
+ if (support_multi_driver == 1 || support_multi_driver == 0)
+ pf->support_multi_driver = (bool)support_multi_driver;
+ else
+ PMD_DRV_LOG(WARNING, "%s must be 1 or 0,",
+ "enable global configuration by default."
+ ETH_I40E_SUPPORT_MULTI_DRIVER);
+ return 0;
+}
+
+static int
+i40e_support_multi_driver(struct rte_eth_dev *dev)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct rte_pci_device *pci_dev = dev->pci_dev;
+ static const char *valid_keys[] = {
+ ETH_I40E_SUPPORT_MULTI_DRIVER, NULL};
+ struct rte_kvargs *kvlist;
+
+ /* Enable global configuration by default */
+ pf->support_multi_driver = false;
+
+ if (!pci_dev->device.devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(pci_dev->device.devargs->args, valid_keys);
+ if (!kvlist)
+ return -EINVAL;
+
+ if (rte_kvargs_count(kvlist, ETH_I40E_SUPPORT_MULTI_DRIVER) > 1)
+ PMD_DRV_LOG(WARNING, "More than one argument \"%s\" and only "
+ "the first invalid or last valid one is used !",
+ ETH_I40E_SUPPORT_MULTI_DRIVER);
+
+ rte_kvargs_process(kvlist, ETH_I40E_SUPPORT_MULTI_DRIVER,
+ i40e_parse_multi_drv_handler, pf);
+ rte_kvargs_free(kvlist);
+ return 0;
+}
+
static int
eth_i40e_dev_init(struct rte_eth_dev *dev)
{
@@ -993,6 +1054,9 @@ eth_i40e_dev_init(struct rte_eth_dev *dev)
hw->bus.func = pci_dev->addr.function;
hw->adapter_stopped = 0;
+ /* Check if need to support multi-driver */
+ i40e_support_multi_driver(dev);
+
/* Make sure all is clean before doing PF reset */
i40e_clear_hw(hw);
@@ -1019,7 +1083,8 @@ eth_i40e_dev_init(struct rte_eth_dev *dev)
* software. It should be removed once issues are fixed
* in NVM.
*/
- i40e_GLQF_reg_init(hw);
+ if (!pf->support_multi_driver)
+ i40e_GLQF_reg_init(hw);
/* Initialize the input set for filters (hash and fd) to default value */
i40e_filter_input_set_init(pf);
@@ -1115,11 +1180,14 @@ eth_i40e_dev_init(struct rte_eth_dev *dev)
i40e_set_fc(hw, &aq_fail, TRUE);
/* Set the global registers with default ether type value */
- ret = i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER, ETHER_TYPE_VLAN);
- if (ret != I40E_SUCCESS) {
- PMD_INIT_LOG(ERR, "Failed to set the default outer "
- "VLAN ether type");
- goto err_setup_pf_switch;
+ if (!pf->support_multi_driver) {
+ ret = i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+ ETHER_TYPE_VLAN);
+ if (ret != I40E_SUCCESS) {
+ PMD_INIT_LOG(ERR, "Failed to set the default outer "
+ "VLAN ether type");
+ goto err_setup_pf_switch;
+ }
}
/* PF setup, which includes VSI setup */
@@ -2754,11 +2822,17 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
uint16_t tpid)
{
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint64_t reg_r = 0, reg_w = 0;
uint16_t reg_id = 0;
int ret = 0;
int qinq = dev->data->dev_conf.rxmode.hw_vlan_extend;
+ if (pf->support_multi_driver) {
+ PMD_DRV_LOG(ERR, "Setting TPID is not supported.");
+ return -ENOTSUP;
+ }
+
switch (vlan_type) {
case ETH_VLAN_TYPE_OUTER:
if (qinq)
@@ -3039,20 +3113,25 @@ i40e_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
I40E_WRITE_REG(hw, I40E_PRTDCB_MFLCN, mflcn_reg);
}
- /* config the water marker both based on the packets and bytes */
- I40E_WRITE_GLB_REG(hw, I40E_GLRPB_PHW,
- (pf->fc_conf.high_water[I40E_MAX_TRAFFIC_CLASS]
- << I40E_KILOSHIFT) / I40E_PACKET_AVERAGE_SIZE);
- I40E_WRITE_GLB_REG(hw, I40E_GLRPB_PLW,
- (pf->fc_conf.low_water[I40E_MAX_TRAFFIC_CLASS]
- << I40E_KILOSHIFT) / I40E_PACKET_AVERAGE_SIZE);
- I40E_WRITE_GLB_REG(hw, I40E_GLRPB_GHW,
- pf->fc_conf.high_water[I40E_MAX_TRAFFIC_CLASS]
- << I40E_KILOSHIFT);
- I40E_WRITE_GLB_REG(hw, I40E_GLRPB_GLW,
- pf->fc_conf.low_water[I40E_MAX_TRAFFIC_CLASS]
- << I40E_KILOSHIFT);
- i40e_global_cfg_warning(I40E_WARNING_FLOW_CTL);
+ if (!pf->support_multi_driver) {
+ /* config water marker both based on the packets and bytes */
+ I40E_WRITE_GLB_REG(hw, I40E_GLRPB_PHW,
+ (pf->fc_conf.high_water[I40E_MAX_TRAFFIC_CLASS]
+ << I40E_KILOSHIFT) / I40E_PACKET_AVERAGE_SIZE);
+ I40E_WRITE_GLB_REG(hw, I40E_GLRPB_PLW,
+ (pf->fc_conf.low_water[I40E_MAX_TRAFFIC_CLASS]
+ << I40E_KILOSHIFT) / I40E_PACKET_AVERAGE_SIZE);
+ I40E_WRITE_GLB_REG(hw, I40E_GLRPB_GHW,
+ pf->fc_conf.high_water[I40E_MAX_TRAFFIC_CLASS]
+ << I40E_KILOSHIFT);
+ I40E_WRITE_GLB_REG(hw, I40E_GLRPB_GLW,
+ pf->fc_conf.low_water[I40E_MAX_TRAFFIC_CLASS]
+ << I40E_KILOSHIFT);
+ i40e_global_cfg_warning(I40E_WARNING_FLOW_CTL);
+ } else {
+ PMD_DRV_LOG(ERR,
+ "Water marker configuration is not supported.");
+ }
I40E_WRITE_FLUSH(hw);
@@ -6870,9 +6949,15 @@ i40e_tunnel_filter_param_check(struct i40e_pf *pf,
static int
i40e_dev_set_gre_key_len(struct i40e_hw *hw, uint8_t len)
{
+ struct i40e_pf *pf = &((struct i40e_adapter *)hw->back)->pf;
uint32_t val, reg;
int ret = -EINVAL;
+ if (pf->support_multi_driver) {
+ PMD_DRV_LOG(ERR, "GRE key length configuration is unsupported");
+ return -ENOTSUP;
+ }
+
val = I40E_READ_REG(hw, I40E_GL_PRS_FVBM(2));
PMD_DRV_LOG(DEBUG, "Read original GL_PRS_FVBM with 0x%08x\n", val);
@@ -7114,12 +7199,18 @@ static int
i40e_set_hash_filter_global_config(struct i40e_hw *hw,
struct rte_eth_hash_global_conf *g_cfg)
{
+ struct i40e_pf *pf = &((struct i40e_adapter *)hw->back)->pf;
int ret;
uint16_t i;
uint32_t reg;
uint32_t mask0 = g_cfg->valid_bit_mask[0];
enum i40e_filter_pctype pctype;
+ if (pf->support_multi_driver) {
+ PMD_DRV_LOG(ERR, "Hash global configuration is not supported.");
+ return -ENOTSUP;
+ }
+
/* Check the input parameters */
ret = i40e_hash_global_config_check(g_cfg);
if (ret < 0)
@@ -7850,6 +7941,12 @@ i40e_filter_input_set_init(struct i40e_pf *pf)
I40E_INSET_MASK_NUM_REG);
if (num < 0)
return;
+
+ if (pf->support_multi_driver && num > 0) {
+ PMD_DRV_LOG(ERR, "Input set setting is not supported.");
+ return;
+ }
+
inset_reg = i40e_translate_input_set_reg(hw->mac.type,
input_set);
@@ -7858,39 +7955,49 @@ i40e_filter_input_set_init(struct i40e_pf *pf)
i40e_check_write_reg(hw, I40E_PRTQF_FD_INSET(pctype, 1),
(uint32_t)((inset_reg >>
I40E_32_BIT_WIDTH) & UINT32_MAX));
- i40e_check_write_global_reg(hw, I40E_GLQF_HASH_INSET(0, pctype),
- (uint32_t)(inset_reg & UINT32_MAX));
- i40e_check_write_global_reg(hw, I40E_GLQF_HASH_INSET(1, pctype),
- (uint32_t)((inset_reg >>
- I40E_32_BIT_WIDTH) & UINT32_MAX));
-
- for (i = 0; i < num; i++) {
+ if (!pf->support_multi_driver) {
+ i40e_check_write_global_reg(hw,
+ I40E_GLQF_HASH_INSET(0, pctype),
+ (uint32_t)(inset_reg & UINT32_MAX));
i40e_check_write_global_reg(hw,
+ I40E_GLQF_HASH_INSET(1, pctype),
+ (uint32_t)((inset_reg >>
+ I40E_32_BIT_WIDTH) & UINT32_MAX));
+
+ for (i = 0; i < num; i++) {
+ i40e_check_write_global_reg(hw,
I40E_GLQF_FD_MSK(i, pctype),
mask_reg[i]);
- i40e_check_write_global_reg(hw,
+ i40e_check_write_global_reg(hw,
I40E_GLQF_HASH_MSK(i, pctype),
mask_reg[i]);
- }
- /*clear unused mask registers of the pctype */
- for (i = num; i < I40E_INSET_MASK_NUM_REG; i++) {
- i40e_check_write_global_reg(hw,
+ }
+ /*clear unused mask registers of the pctype */
+ for (i = num; i < I40E_INSET_MASK_NUM_REG; i++) {
+ i40e_check_write_global_reg(hw,
I40E_GLQF_FD_MSK(i, pctype),
0);
- i40e_check_write_global_reg(hw,
+ i40e_check_write_global_reg(hw,
I40E_GLQF_HASH_MSK(i, pctype),
- 0);
+ 0);
+ }
+ } else {
+ PMD_DRV_LOG(ERR,
+ "Input set setting is not supported.");
}
I40E_WRITE_FLUSH(hw);
/* store the default input set */
- pf->hash_input_set[pctype] = input_set;
+ if (!pf->support_multi_driver)
+ pf->hash_input_set[pctype] = input_set;
pf->fdir.input_set[pctype] = input_set;
}
- i40e_global_cfg_warning(I40E_WARNING_HASH_INSET);
- i40e_global_cfg_warning(I40E_WARNING_FD_MSK);
- i40e_global_cfg_warning(I40E_WARNING_HASH_MSK);
+ if (!pf->support_multi_driver) {
+ i40e_global_cfg_warning(I40E_WARNING_HASH_INSET);
+ i40e_global_cfg_warning(I40E_WARNING_FD_MSK);
+ i40e_global_cfg_warning(I40E_WARNING_HASH_MSK);
+ }
}
int
@@ -7903,6 +8010,11 @@ i40e_hash_filter_inset_select(struct i40e_hw *hw,
uint32_t mask_reg[I40E_INSET_MASK_NUM_REG] = {0};
int ret, i, num;
+ if (pf->support_multi_driver) {
+ PMD_DRV_LOG(ERR, "Hash input set setting is not supported.");
+ return -ENOTSUP;
+ }
+
if (!conf) {
PMD_DRV_LOG(ERR, "Invalid pointer");
return -EFAULT;
@@ -8029,6 +8141,11 @@ i40e_fdir_filter_inset_select(struct i40e_pf *pf,
if (num < 0)
return -EINVAL;
+ if (pf->support_multi_driver && num > 0) {
+ PMD_DRV_LOG(ERR, "FDIR bit mask is not supported.");
+ return -ENOTSUP;
+ }
+
inset_reg |= i40e_translate_input_set_reg(hw->mac.type, input_set);
i40e_check_write_reg(hw, I40E_PRTQF_FD_INSET(pctype, 0),
@@ -8037,14 +8154,20 @@ i40e_fdir_filter_inset_select(struct i40e_pf *pf,
(uint32_t)((inset_reg >>
I40E_32_BIT_WIDTH) & UINT32_MAX));
- for (i = 0; i < num; i++)
- i40e_check_write_global_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
- mask_reg[i]);
- /*clear unused mask registers of the pctype */
- for (i = num; i < I40E_INSET_MASK_NUM_REG; i++)
- i40e_check_write_global_reg(hw, I40E_GLQF_FD_MSK(i, pctype),
- 0);
- i40e_global_cfg_warning(I40E_WARNING_FD_MSK);
+ if (!pf->support_multi_driver) {
+ for (i = 0; i < num; i++)
+ i40e_check_write_global_reg(hw,
+ I40E_GLQF_FD_MSK(i, pctype),
+ mask_reg[i]);
+ /*clear unused mask registers of the pctype */
+ for (i = num; i < I40E_INSET_MASK_NUM_REG; i++)
+ i40e_check_write_global_reg(hw,
+ I40E_GLQF_FD_MSK(i, pctype),
+ 0);
+ i40e_global_cfg_warning(I40E_WARNING_FD_MSK);
+ } else {
+ PMD_DRV_LOG(ERR, "FDIR bit mask is not supported.");
+ }
I40E_WRITE_FLUSH(hw);
pf->fdir.input_set[pctype] = input_set;
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 12b6000..82d5501 100644
@@ -485,6 +485,8 @@ struct i40e_pf {
bool floating_veb; /* The flag to use the floating VEB */
/* The floating enable flag for the specific VF */
bool floating_veb_list[I40E_MAX_VF];
+
+ bool support_multi_driver; /* 1 - support multiple driver */
};
enum pending_msg {
--
2.5.5
------------------------------
Message: 3
Date: Fri, 2 Feb 2018 20:25:10 +0800
From: Beilei Xing <beilei.xing@intel.com>
To: dev@dpdk.org, jingjing.wu@intel.com
Cc: stable@dpdk.org
Subject: [dpdk-dev] [PATCH v3 4/4] net/i40e: fix interrupt conflict
when using multi-driver
Message-ID: <1517574310-93096-5-git-send-email-beilei.xing@intel.com>
There is an interrupt conflict when DPDK and the Linux i40e driver
are used on different ports of the same Ethernet controller. This
patch fixes it by switching from IntN to Int0 when multiple drivers
are used.
Fixes: be6c228d4da3 ("i40e: support Rx interrupt")
Cc: stable@dpdk.org
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
drivers/net/i40e/i40e_ethdev.c | 93 +++++++++++++++++++++++++--------------
drivers/net/i40e/i40e_ethdev.h | 10 +++--
drivers/net/i40e/i40e_ethdev_vf.c | 4 +-
3 files changed, 68 insertions(+), 39 deletions(-)
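For context, a simplified standalone model (illustrative names and values only, not driver code) of the MSI-X vector selection made in the hunks below: when multiple drivers share the NIC, the PF stays on the single INT0 vector instead of drawing per-queue vectors from the MSI-X pool.

#include <stdbool.h>
#include <stdint.h>

/* Illustrative-only model; the real code caps nb_msix and allocates from
 * the msix pool as shown in the i40e_vsi_setup() hunk below. */
static void pick_vectors(bool support_multi_driver, uint16_t nb_qps,
			 uint16_t *msix_intr, uint16_t *nb_msix)
{
	if (support_multi_driver) {
		*msix_intr = 0;    /* INT0, shared with the admin queue */
		*nb_msix = 1;
	} else {
		*msix_intr = 1;    /* first vector handed out by the pool */
		*nb_msix = nb_qps; /* one vector per Rx queue */
	}
}

int main(void)
{
	uint16_t intr, n;

	pick_vectors(true, 4, &intr, &n);  /* multi-driver: intr = 0, n = 1 */
	pick_vectors(false, 4, &intr, &n); /* single driver: intr = 1, n = 4 */
	return 0;
}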
@@ -760,6 +760,23 @@ static inline void i40e_GLQF_reg_init(struct i40e_hw *hw)
i40e_global_cfg_warning(I40E_WARNING_QINQ_PARSER);
}
+static inline void i40e_config_automask(struct i40e_pf *pf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint32_t val;
+
+ /* INTENA flag is not auto-cleared for interrupt */
+ val = I40E_READ_REG(hw, I40E_GLINT_CTL);
+ val |= I40E_GLINT_CTL_DIS_AUTOMASK_PF0_MASK |
+ I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK;
+
+ /* If support multi-driver, PF will use INT0. */
+ if (!pf->support_multi_driver)
+ val |= I40E_GLINT_CTL_DIS_AUTOMASK_N_MASK;
+
+ I40E_WRITE_REG(hw, I40E_GLINT_CTL, val);
+}
+
#define I40E_FLOW_CONTROL_ETHERTYPE 0x8808
/*
@@ -1077,6 +1094,8 @@ eth_i40e_dev_init(struct rte_eth_dev *dev)
return ret;
}
+ i40e_config_automask(pf);
+
/*
* To work around the NVM issue, initialize registers
* for flexible payload and packet type of QinQ by
@@ -1463,6 +1482,7 @@ __vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t msix_vect,
int i;
uint32_t val;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+ struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
/* Bind all RX queues to allocated MSIX interrupt */
for (i = 0; i < nb_queue; i++) {
@@ -1481,7 +1501,8 @@ __vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t msix_vect,
/* Write first RX queue to Link list register as the head element */
if (vsi->type != I40E_VSI_SRIOV) {
uint16_t interval =
- i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL);
+ i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL,
+ pf->support_multi_driver);
if (msix_vect == I40E_MISC_VEC_ID) {
I40E_WRITE_REG(hw, I40E_PFINT_LNKLST0,
@@ -1539,7 +1560,6 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi)
uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
uint16_t queue_idx = 0;
int record = 0;
- uint32_t val;
int i;
for (i = 0; i < vsi->nb_qps; i++) {
@@ -1547,13 +1567,6 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi)
I40E_WRITE_REG(hw, I40E_QINT_RQCTL(vsi->base_queue + i), 0);
}
- /* INTENA flag is not auto-cleared for interrupt */
- val = I40E_READ_REG(hw, I40E_GLINT_CTL);
- val |= I40E_GLINT_CTL_DIS_AUTOMASK_PF0_MASK |
- I40E_GLINT_CTL_DIS_AUTOMASK_N_MASK |
- I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK;
- I40E_WRITE_REG(hw, I40E_GLINT_CTL, val);
-
/* VF bind interrupt */
if (vsi->type == I40E_VSI_SRIOV) {
__vsi_queues_bind_intr(vsi, msix_vect,
@@ -1606,27 +1619,22 @@ i40e_vsi_enable_queues_intr(struct i40e_vsi *vsi)
struct rte_eth_dev *dev = vsi->adapter->eth_dev;
struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- uint16_t interval = i40e_calc_itr_interval(\
- RTE_LIBRTE_I40E_ITR_INTERVAL);
+ struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
- if (rte_intr_allow_others(intr_handle))
+ if (rte_intr_allow_others(intr_handle) || !pf->support_multi_driver)
for (i = 0; i < vsi->nb_msix; i++) {
msix_intr = vsi->msix_intr + i;
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTLN(msix_intr - 1),
- I40E_PFINT_DYN_CTLN_INTENA_MASK |
- I40E_PFINT_DYN_CTLN_CLEARPBA_MASK |
- (0 << I40E_PFINT_DYN_CTLN_ITR_INDX_SHIFT) |
- (interval <<
- I40E_PFINT_DYN_CTLN_INTERVAL_SHIFT));
+ I40E_PFINT_DYN_CTLN_INTENA_MASK |
+ I40E_PFINT_DYN_CTLN_CLEARPBA_MASK |
+ I40E_PFINT_DYN_CTLN_ITR_INDX_MASK);
}
else
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
I40E_PFINT_DYN_CTL0_INTENA_MASK |
I40E_PFINT_DYN_CTL0_CLEARPBA_MASK |
- (0 << I40E_PFINT_DYN_CTL0_ITR_INDX_SHIFT) |
- (interval <<
- I40E_PFINT_DYN_CTL0_INTERVAL_SHIFT));
+ I40E_PFINT_DYN_CTL0_ITR_INDX_MASK);
I40E_WRITE_FLUSH(hw);
}
@@ -1637,16 +1645,18 @@ i40e_vsi_disable_queues_intr(struct i40e_vsi *vsi)
struct rte_eth_dev *dev = vsi->adapter->eth_dev;
struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+ struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
- if (rte_intr_allow_others(intr_handle))
+ if (rte_intr_allow_others(intr_handle) || !pf->support_multi_driver)
for (i = 0; i < vsi->nb_msix; i++) {
msix_intr = vsi->msix_intr + i;
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTLN(msix_intr - 1),
- 0);
+ I40E_PFINT_DYN_CTLN_ITR_INDX_MASK);
}
else
- I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0, 0);
+ I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
+ I40E_PFINT_DYN_CTL0_ITR_INDX_MASK);
I40E_WRITE_FLUSH(hw);
}
@@ -4618,16 +4628,28 @@ i40e_vsi_setup(struct i40e_pf *pf,
/* VF has MSIX interrupt in VF range, don't allocate here */
if (type == I40E_VSI_MAIN) {
- ret = i40e_res_pool_alloc(&pf->msix_pool,
- RTE_MIN(vsi->nb_qps,
- RTE_MAX_RXTX_INTR_VEC_ID));
- if (ret < 0) {
- PMD_DRV_LOG(ERR, "VSI MAIN %d get heap failed %d",
- vsi->seid, ret);
- goto fail_queue_alloc;
+ if (pf->support_multi_driver) {
+ /* If support multi-driver, need to use INT0 instead of
+ * allocating from msix pool. The Msix pool is init from
+ * INT1, so it's OK just set msix_intr to 0 and nb_msix
+ * to 1 without calling i40e_res_pool_alloc.
+ */
+ vsi->msix_intr = 0;
+ vsi->nb_msix = 1;
+ } else {
+ ret = i40e_res_pool_alloc(&pf->msix_pool,
+ RTE_MIN(vsi->nb_qps,
+ RTE_MAX_RXTX_INTR_VEC_ID));
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR,
+ "VSI MAIN %d get heap failed %d",
+ vsi->seid, ret);
+ goto fail_queue_alloc;
+ }
+ vsi->msix_intr = ret;
+ vsi->nb_msix = RTE_MIN(vsi->nb_qps,
+ RTE_MAX_RXTX_INTR_VEC_ID);
}
- vsi->msix_intr = ret;
- vsi->nb_msix = RTE_MIN(vsi->nb_qps, RTE_MAX_RXTX_INTR_VEC_ID);
} else if (type != I40E_VSI_SRIOV) {
ret = i40e_res_pool_alloc(&pf->msix_pool, 1);
if (ret < 0) {
@@ -5540,7 +5562,8 @@ void
i40e_pf_disable_irq0(struct i40e_hw *hw)
{
/* Disable all interrupt types */
- I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0, 0);
+ I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
+ I40E_PFINT_DYN_CTL0_ITR_INDX_MASK);
I40E_WRITE_FLUSH(hw);
}
@@ -9861,10 +9884,12 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
static int
i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t interval =
- i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL);
+ i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL,
+ pf->support_multi_driver);
uint16_t msix_intr;
msix_intr = intr_handle->intr_vec[queue_id];
@@ -720,10 +720,14 @@ i40e_align_floor(int n)
}
static inline uint16_t
-i40e_calc_itr_interval(int16_t interval)
+i40e_calc_itr_interval(int16_t interval, bool is_multi_drv)
{
- if (interval < 0 || interval > I40E_QUEUE_ITR_INTERVAL_MAX)
- interval = I40E_QUEUE_ITR_INTERVAL_DEFAULT;
+ if (interval < 0 || interval > I40E_QUEUE_ITR_INTERVAL_MAX) {
+ if (is_multi_drv)
+ interval = I40E_QUEUE_ITR_INTERVAL_MAX;
+ else
+ interval = I40E_QUEUE_ITR_INTERVAL_DEFAULT;
+ }
/* Convert to hardware count, as writing each 1 represents 2 us */
return interval / 2;
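A small worked example of the conversion above (standalone; the constants are assumptions standing in for I40E_QUEUE_ITR_INTERVAL_MAX and the default, which live in i40e_ethdev.h): in multi-driver mode an out-of-range request is clamped to the maximum interval before being halved into the 2 us units the hardware counts in.

#include <stdint.h>
#include <stdio.h>

#define ITR_MAX_US     8160 /* assumed I40E_QUEUE_ITR_INTERVAL_MAX */
#define ITR_DEFAULT_US 32   /* assumed default interval */

static uint16_t calc_itr(int16_t interval_us, int multi_driver)
{
	if (interval_us < 0 || interval_us > ITR_MAX_US)
		interval_us = multi_driver ? ITR_MAX_US : ITR_DEFAULT_US;
	return interval_us / 2; /* hardware counts ITR in 2 us steps */
}

int main(void)
{
	printf("%d\n", calc_itr(-1, 1)); /* 4080: clamped to the max, halved */
	printf("%d\n", calc_itr(-1, 0)); /* 16: default 32 us, halved */
	return 0;
}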
@@ -1246,7 +1246,7 @@ i40evf_init_vf(struct rte_eth_dev *dev)
struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct ether_addr *p_mac_addr;
uint16_t interval =
- i40e_calc_itr_interval(I40E_QUEUE_ITR_INTERVAL_MAX);
+ i40e_calc_itr_interval(I40E_QUEUE_ITR_INTERVAL_MAX, 0);
vf->adapter = I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
vf->dev_data = dev->data;
@@ -1986,7 +1986,7 @@ i40evf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t interval =
- i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL);
+ i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL, 0);
uint16_t msix_intr;
msix_intr = intr_handle->intr_vec[queue_id];