[dpdk-dev] Issue with non-scattered rx in ixgbe and i40e when mbuf private area size is odd
Message ID: 2601191342CEEE43887BDE71AB97725836A69E3D@irsmsx105.ger.corp.intel.com (mailing list archive)
State: Not Applicable, archived
From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: Martin Weiser <martin.weiser@allegro-packets.com>, "Zhang, Helin" <helin.zhang@intel.com>, "olivier.matz@6wind.com" <olivier.matz@6wind.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Date: Wed, 29 Jul 2015 18:12:15 +0000
Subject: Re: [dpdk-dev] Issue with non-scattered rx in ixgbe and i40e when mbuf private area size is odd
Commit Message
Ananyev, Konstantin
July 29, 2015, 6:12 p.m. UTC
Hi Martin,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Martin Weiser
> Sent: Wednesday, July 29, 2015 4:07 PM
> To: Zhang, Helin; olivier.matz@6wind.com
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] Issue with non-scattered rx in ixgbe and i40e when mbuf private area size is odd
>
> Hi Helin, Hi Olivier,
>
> we are seeing an issue with the ixgbe and i40e drivers which we could
> track down to our setting of the private area size of the mbufs.
> The issue can easily be reproduced with the l2fwd example application
> with one small modification: set the priv_size parameter in the call
> to rte_pktmbuf_pool_create to an odd number such as 1. In our tests
> this causes every call to rte_eth_rx_burst to return 32 (the setting
> of nb_pkts) nonsense mbufs, although no packets are received on the
> interface and the hardware counters do not report any received packets.

From the Niantic datasheet:

"7.1.6.1 Advanced Receive Descriptors — Read Format
Table 7-15 lists the advanced receive descriptor programming by the software.
...
Packet Buffer Address (64)
This is the physical address of the packet buffer. The lowest bit is A0 (LSB of the address).
Header Buffer Address (64)
The physical address of the header buffer with the lowest bit being Descriptor Done (DD).
When a packet spans in multiple descriptors, the header buffer address is used only on the first descriptor. During the programming phase, software must set the DD bit to zero (see the description of the DD bit in this section). This means that header buffer addresses are always word aligned."

Right now, in the ixgbe PMD we always set up the Packet Buffer Address (PBA) and the Header Buffer Address (HBA) to the same value:
buf_physaddr + RTE_PKTMBUF_HEADROOM
So when priv_size == 1, the DD bit in the RXD is always set to one by SW itself, and SW then considers that HW is already done with it.
In other words, right now for ixgbe you cannot use an RX buffer that is not aligned on a word boundary.

So the advice for now is: don't set priv_size to an odd value.
As we don't support the split header feature anyway, I think we can fix it just by always setting the HBA in the RXD to zero.
Could you try the fix for ixgbe below?

Same story with FVL, I believe.
Konstantin

> Interestingly this does not happen if we force the scattered rx path.
>
> I assume the drivers have some expectations regarding the alignment of
> the buf_addr in the mbuf, and setting an odd private area size breaks
> this alignment in the rte_pktmbuf_init function. If this is the case
> then one possible fix might be to enforce an alignment on the private
> area size.
>
> Best regards,
> Martin
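The failure mode Konstantin describes can be modeled in a few lines of C. This is only an illustrative sketch: `MBUF_HDR_SIZE` stands in for `sizeof(struct rte_mbuf)` (the real value differs), and `hdr_addr()` mimics what the PMD used to program into `rxd.read.hdr_addr`. With an even mbuf start address and headroom, the low bit of the header buffer address (which the NIC reads back as DD) ends up equal to the parity of `priv_size`:

```c
#include <assert.h>
#include <stdint.h>

#define RTE_PKTMBUF_HEADROOM 128
#define MBUF_HDR_SIZE 128          /* illustrative stand-in for sizeof(struct rte_mbuf) */
#define DD_BIT UINT64_C(1)         /* bit 0 of the header buffer address is DD */

/* The data buffer starts right after the mbuf header and the private
 * area, so an odd priv_size shifts every buffer address to an odd value. */
static uint64_t
buf_physaddr(uint64_t mbuf_physaddr, uint16_t priv_size)
{
	return mbuf_physaddr + MBUF_HDR_SIZE + priv_size;
}

/* What the PMD used to write into both pkt_addr and hdr_addr. */
static uint64_t
hdr_addr(uint64_t mbuf_physaddr, uint16_t priv_size)
{
	return buf_physaddr(mbuf_physaddr, priv_size) + RTE_PKTMBUF_HEADROOM;
}
```

With `priv_size == 1`, `hdr_addr() & DD_BIT` is 1, so the driver immediately believes the (empty) descriptor is done — exactly the stream of nonsense mbufs Martin observed.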
Comments
Hi Martin

Thank you very much for the good catch!

The situation is similar in i40e, as explained by Konstantin. As header split hasn't been supported by DPDK so far, it would be better to set the header address in the RX descriptor to 0. But in the future, when enabling header split, we may need to pay extra attention to this, as at least the X710 datasheet says specifically:
"The header address should be set by the software to an even number (word aligned address)."
We may need to find a way to ensure that during mempool/mbuf allocation.

Regards,
Helin

> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Wednesday, July 29, 2015 11:12 AM
> To: Martin Weiser; Zhang, Helin; olivier.matz@6wind.com
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] Issue with non-scattered rx in ixgbe and i40e when mbuf private area size is odd
>
> [...]
>
> As we don't support the split header feature anyway, I think we can fix
> it just by always setting the HBA in the RXD to zero.
> Could you try the fix for ixgbe below?
>
> Same story with FVL, I believe.
> Konstantin
>
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index a0c8847..94967c5 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -1183,7 +1183,7 @@ ixgbe_rx_alloc_bufs(struct ixgbe_rx_queue *rxq, bool reset_mbuf)
>
>  		/* populate the descriptors */
>  		dma_addr = rte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb));
> -		rxdp[i].read.hdr_addr = dma_addr;
> +		rxdp[i].read.hdr_addr = 0;
>  		rxdp[i].read.pkt_addr = dma_addr;
>  	}
>
> @@ -1414,7 +1414,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>  		rxe->mbuf = nmb;
>  		dma_addr = rte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(nmb));
> -		rxdp->read.hdr_addr = dma_addr;
> +		rxdp->read.hdr_addr = 0;
>  		rxdp->read.pkt_addr = dma_addr;
>
>  		/*
> @@ -1741,7 +1741,7 @@ next_desc:
>  			rxe->mbuf = nmb;
>
>  			rxm->data_off = RTE_PKTMBUF_HEADROOM;
> -			rxdp->read.hdr_addr = dma;
> +			rxdp->read.hdr_addr = 0;
>  			rxdp->read.pkt_addr = dma;
>  		} else
>  			rxe->mbuf = NULL;
> @@ -3633,7 +3633,7 @@ ixgbe_alloc_rx_queue_mbufs(struct ixgbe_rx_queue *rxq)
>  		dma_addr = rte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mbuf));
>  		rxd = &rxq->rx_ring[i];
> -		rxd->read.hdr_addr = dma_addr;
> +		rxd->read.hdr_addr = 0;
>  		rxd->read.pkt_addr = dma_addr;
>  		rxe[i].mbuf = mbuf;
>  	}
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec.c b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
> index 6c1647e..16a9c64 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx_vec.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
> @@ -56,6 +56,8 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
>  			RTE_PKTMBUF_HEADROOM);
>  	__m128i dma_addr0, dma_addr1;
>
> +	const __m128i hba_msk = _mm_set_epi64x(0, UINT64_MAX);
> +
>  	rxdp = rxq->rx_ring + rxq->rxrearm_start;
>
>  	/* Pull 'n' more MBUFs into the software ring */
> @@ -108,6 +110,9 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
>  		dma_addr0 = _mm_add_epi64(dma_addr0, hdr_room);
>  		dma_addr1 = _mm_add_epi64(dma_addr1, hdr_room);
>
> +		dma_addr0 = _mm_and_si128(dma_addr0, hba_msk);
> +		dma_addr1 = _mm_and_si128(dma_addr1, hba_msk);
> +
>  		/* flush desc with pa dma_addr */
>  		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr0);
>  		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr1);
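The vector hunk of the patch is the least obvious part: `hba_msk = _mm_set_epi64x(0, UINT64_MAX)` has all ones in its low 64-bit lane and zeros in its high lane, and the 128-bit store lays down `pkt_addr` first, then `hdr_addr`. A scalar model of the masking (struct and function names are hypothetical, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Read format of an ixgbe advanced RX descriptor as stored by the
 * 128-bit write: pkt_addr occupies the low 64 bits, hdr_addr the high. */
struct rxd_read {
	uint64_t pkt_addr;
	uint64_t hdr_addr;
};

/* Scalar equivalent of _mm_and_si128(dma_addr, hba_msk): the low lane
 * (pkt_addr) is ANDed with all ones and preserved, the high lane
 * (hdr_addr) is ANDed with zero, so the DD bit can never be set by
 * software regardless of the buffer's alignment. */
static struct rxd_read
rearm_desc(uint64_t dma_addr)
{
	struct rxd_read d;
	d.pkt_addr = dma_addr & UINT64_MAX; /* low lane of hba_msk  */
	d.hdr_addr = dma_addr & 0;          /* high lane of hba_msk */
	return d;
}
```

Even with a deliberately odd DMA address, the header buffer address (and hence DD) stays zero while the packet buffer address is untouched.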
Hi,

On 07/29/2015 10:24 PM, Zhang, Helin wrote:
> The similar situation in i40e, as explained by Konstantin.
> [...]
> "The header address should be set by the software to an even number
> (word aligned address)". We may need to find a way to ensure that
> during mempool/mbuf allocation.

Indeed it would be good to force the priv_size to be aligned.

The priv_size could be aligned automatically in rte_pktmbuf_pool_create(). The only possible problem I could see is that it would break applications that access the data buffer by doing (sizeof(mbuf) + sizeof(priv)), which is probably not the best thing to do (I didn't find any applications like this in dpdk).

For applications that directly use rte_mempool_create() instead of rte_pktmbuf_pool_create(), we could add a check using an assert in rte_pktmbuf_init() and some words in the documentation.

The question is: what should be the proper alignment? I would say at least 8 bytes, but maybe cache_aligned is an option too.

Regards,
Olivier

> [...]
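Olivier's "align automatically" option amounts to rounding priv_size up to the chosen boundary inside rte_pktmbuf_pool_create(). A sketch of the rounding helper (the function name is hypothetical; it mirrors what DPDK's RTE_ALIGN_CEIL-style macros do for power-of-two alignments):

```c
#include <assert.h>
#include <stdint.h>

/* Round priv_size up to a multiple of `align`; `align` must be a
 * power of two for the bit trick to be valid. */
static uint16_t
align_priv_size(uint16_t priv_size, uint16_t align)
{
	return (uint16_t)((priv_size + align - 1) & ~(uint16_t)(align - 1));
}
```

With 8-byte alignment, an application asking for priv_size 1 transparently gets 8, and any address computed as mbuf start + header + private area stays word aligned.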
Hi Olivier,

> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> Sent: Thursday, July 30, 2015 9:12 AM
> To: Zhang, Helin; Ananyev, Konstantin; Martin Weiser
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] Issue with non-scattered rx in ixgbe and i40e when mbuf private area size is odd
>
> [...]
>
> Indeed it would be good to force the priv_size to be aligned.
>
> The priv_size could be aligned automatically in
> rte_pktmbuf_pool_create(). The only possible problem I could see
> is that it would break applications that access the data buffer
> by doing (sizeof(mbuf) + sizeof(priv)), which is probably not the
> best thing to do (I didn't find any applications like this in dpdk).

Might be just make rte_pktmbuf_pool_create() fail if input priv_size % MIN_ALIGN != 0?

> For applications that directly use rte_mempool_create() instead of
> rte_pktmbuf_pool_create(), we could add a check using an assert in
> rte_pktmbuf_init() and some words in the documentation.
>
> The question is: what should be the proper alignment? I would say
> at least 8 bytes, but maybe cache_aligned is an option too.

8 bytes seems enough to me.

Konstantin

> [...]
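Konstantin's stricter alternative rejects a misaligned priv_size instead of silently rounding it. A sketch of that check (MIN_ALIGN and the function name are assumptions for illustration, not existing DPDK API):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define MIN_ALIGN 8  /* assumed minimum alignment under discussion */

/* Strict option: make pool creation fail outright when priv_size is
 * not a multiple of MIN_ALIGN, rather than adjusting it behind the
 * caller's back. */
static int
check_priv_size(uint16_t priv_size)
{
	if (priv_size % MIN_ALIGN != 0)
		return -EINVAL;
	return 0;
}
```

The trade-off, raised in the follow-up, is that failing hard would also break existing applications whose priv_size is even but not 8-byte aligned.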
Hi Konstantin, On 07/30/2015 11:00 AM, Ananyev, Konstantin wrote: > Hi Olivier, > >> -----Original Message----- >> From: Olivier MATZ [mailto:olivier.matz@6wind.com] >> Sent: Thursday, July 30, 2015 9:12 AM >> To: Zhang, Helin; Ananyev, Konstantin; Martin Weiser >> Cc: dev@dpdk.org >> Subject: Re: [dpdk-dev] Issue with non-scattered rx in ixgbe and i40e when mbuf private area size is odd >> >> Hi, >> >> On 07/29/2015 10:24 PM, Zhang, Helin wrote: >>> Hi Martin >>> >>> Thank you very much for the good catch! >>> >>> The similar situation in i40e, as explained by Konstantin. >>> As header split hasn't been supported by DPDK till now. It would be better to put the header address in RX descriptor to 0. >>> But in the future, during header split enabling. We may need to pay extra attention to that. As at least x710 datasheet said >> specifically as below. >>> "The header address should be set by the software to an even number (word aligned address)". We may need to find a way to >> ensure that during mempool/mbuf allocation. >> >> Indeed it would be good to force the priv_size to be aligned. >> >> The priv_size could be aligned automatically in >> rte_pktmbuf_pool_create(). The only possible problem I could see >> is that it would break applications that access to the data buffer >> by doing (sizeof(mbuf) + sizeof(priv)), which is probably not the >> best thing to do (I didn't find any applications like this in dpdk). > > > Might be just make rte_pktmbuf_pool_create() fail if input priv_size % MIN_ALIGN != 0? Hmm maybe it would break more applications: an odd priv_size is probably rare, but a priv_size that is not aligned to 8 bytes is maybe more common. It's maybe safer to align the size transparently? Regards, Olivier > >> >> For applications that directly use rte_mempool_create() instead of >> rte_pktmbuf_pool_create(), we could add a check using an assert in >> rte_pktmbuf_init() and some words in the documentation. 
>> >> The question is: what should be the proper alignment? I would say >> at least 8 bytes, but maybe cache_aligned is an option too. > > 8 bytes seems enough to me. > > Konstantin > >> >> Regards, >> Olivier >> >> >>> >>> Regards, >>> Helin >>> >>>> -----Original Message----- >>>> From: Ananyev, Konstantin >>>> Sent: Wednesday, July 29, 2015 11:12 AM >>>> To: Martin Weiser; Zhang, Helin; olivier.matz@6wind.com >>>> Cc: dev@dpdk.org >>>> Subject: RE: [dpdk-dev] Issue with non-scattered rx in ixgbe and i40e when mbuf >>>> private area size is odd >>>> >>>> Hi Martin, >>>> >>>>> -----Original Message----- >>>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Martin Weiser >>>>> Sent: Wednesday, July 29, 2015 4:07 PM >>>>> To: Zhang, Helin; olivier.matz@6wind.com >>>>> Cc: dev@dpdk.org >>>>> Subject: [dpdk-dev] Issue with non-scattered rx in ixgbe and i40e when >>>>> mbuf private area size is odd >>>>> >>>>> Hi Helin, Hi Olivier, >>>>> >>>>> we are seeing an issue with the ixgbe and i40e drivers which we could >>>>> track down to our setting of the private area size of the mbufs. >>>>> The issue can be easily reproduced with the l2fwd example application >>>>> when a small modification is done: just set the priv_size parameter in >>>>> the call to the rte_pktmbuf_pool_create function to an odd number like >>>>> 1. In our tests this causes every call to rte_eth_rx_burst to return >>>>> 32 (which is the setting of nb_pkts) nonsense mbufs although no >>>>> packets are received on the interface and the hardware counters do not >>>>> report any received packets. >>>> >>>> From Niantic datasheet: >>>> >>>> "7.1.6.1 Advanced Receive Descriptors — Read Format Table 7-15 lists the >>>> advanced receive descriptor programming by the software. The ... >>>> Packet Buffer Address (64) >>>> This is the physical address of the packet buffer. The lowest bit is A0 (LSB of the >>>> address). 
>>>> Header Buffer Address (64) >>>> The physical address of the header buffer with the lowest bit being Descriptor >>>> Done (DD). >>>> When a packet spans in multiple descriptors, the header buffer address is used >>>> only on the first descriptor. During the programming phase, software must set >>>> the DD bit to zero (see the description of the DD bit in this section). This means >>>> that header buffer addresses are always word aligned." >>>> >>>> Right now, in the ixgbe PMD we always set up the Packet Buffer Address (PBA) and >>>> Header Buffer Address (HBA) to the same value: >>>> buf_physaddr + RTE_PKTMBUF_HEADROOM >>>> So when priv_size==1, the DD bit in the RXD is always set to one by SW itself, and then >>>> SW considers that HW is already done with it. >>>> In other words, right now for ixgbe you can't use an RX buffer that is not aligned on >>>> a word boundary. >>>> >>>> So the advice right now would be: don't set priv_size to an odd value. >>>> As we don't support the split header feature anyway, I think we can fix it just by >>>> always setting the HBA in the RXD to zero. >>>> Could you try the fix for ixgbe below? >>>> >>>> Same story with FVL, I believe. >>>> Konstantin >>>> >>>> >>>>> Interestingly this does not happen if we force the scattered rx path. >>>>> >>>>> I assume the drivers have some expectations regarding the alignment of >>>>> the buf_addr in the mbuf and setting an odd private area size breaks >>>>> this alignment in the rte_pktmbuf_init function. If this is the case >>>>> then one possible fix might be to enforce an alignment on the private area size. 
>>>>> >>>>> Best regards, >>>>> Martin
>>>>
>>>> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
>>>> index a0c8847..94967c5 100644
>>>> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
>>>> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
>>>> @@ -1183,7 +1183,7 @@ ixgbe_rx_alloc_bufs(struct ixgbe_rx_queue *rxq, bool reset_mbuf)
>>>>
>>>>  		/* populate the descriptors */
>>>>  		dma_addr = rte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb));
>>>> -		rxdp[i].read.hdr_addr = dma_addr;
>>>> +		rxdp[i].read.hdr_addr = 0;
>>>>  		rxdp[i].read.pkt_addr = dma_addr;
>>>>  	}
>>>>
>>>> @@ -1414,7 +1414,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>>>>  		rxe->mbuf = nmb;
>>>>  		dma_addr = rte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(nmb));
>>>> -		rxdp->read.hdr_addr = dma_addr;
>>>> +		rxdp->read.hdr_addr = 0;
>>>>  		rxdp->read.pkt_addr = dma_addr;
>>>>
>>>> @@ -1741,7 +1741,7 @@ next_desc:
>>>>  		rxe->mbuf = nmb;
>>>>  		rxm->data_off = RTE_PKTMBUF_HEADROOM;
>>>> -		rxdp->read.hdr_addr = dma;
>>>> +		rxdp->read.hdr_addr = 0;
>>>>  		rxdp->read.pkt_addr = dma;
>>>>  	} else
>>>>  		rxe->mbuf = NULL;
>>>> @@ -3633,7 +3633,7 @@ ixgbe_alloc_rx_queue_mbufs(struct ixgbe_rx_queue *rxq)
>>>>  		dma_addr = rte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mbuf));
>>>>  		rxd = &rxq->rx_ring[i];
>>>> -		rxd->read.hdr_addr = dma_addr;
>>>> +		rxd->read.hdr_addr = 0;
>>>>  		rxd->read.pkt_addr = dma_addr;
>>>>  		rxe[i].mbuf = mbuf;
>>>>  	}
>>>> diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec.c b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
>>>> index 6c1647e..16a9c64 100644
>>>> --- a/drivers/net/ixgbe/ixgbe_rxtx_vec.c
>>>> +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
>>>> @@ -56,6 +56,8 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
>>>>  			RTE_PKTMBUF_HEADROOM);
>>>>  	__m128i dma_addr0, dma_addr1;
>>>>
>>>> +	const __m128i hba_msk = _mm_set_epi64x(0, UINT64_MAX);
>>>> +
>>>>  	rxdp = rxq->rx_ring + rxq->rxrearm_start;
>>>>
>>>>  	/* Pull 'n' more MBUFs into the software ring */
>>>> @@ -108,6 +110,9 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
>>>>  		dma_addr0 = _mm_add_epi64(dma_addr0, hdr_room);
>>>>  		dma_addr1 = _mm_add_epi64(dma_addr1, hdr_room);
>>>>
>>>> +		dma_addr0 = _mm_and_si128(dma_addr0, hba_msk);
>>>> +		dma_addr1 = _mm_and_si128(dma_addr1, hba_msk);
>>>> +
>>>>  		/* flush desc with pa dma_addr */
>>>>  		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr0);
>>>>  		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr1);
> -----Original Message----- > From: Olivier MATZ [mailto:olivier.matz@6wind.com] > Sent: Thursday, July 30, 2015 10:10 AM > To: Ananyev, Konstantin; Zhang, Helin; Martin Weiser > Cc: dev@dpdk.org > Subject: Re: [dpdk-dev] Issue with non-scattered rx in ixgbe and i40e when mbuf private area size is odd > > Hi Konstantin, > > On 07/30/2015 11:00 AM, Ananyev, Konstantin wrote: > > Hi Olivier, > > > >> -----Original Message----- > >> From: Olivier MATZ [mailto:olivier.matz@6wind.com] > >> Sent: Thursday, July 30, 2015 9:12 AM > >> To: Zhang, Helin; Ananyev, Konstantin; Martin Weiser > >> Cc: dev@dpdk.org > >> Subject: Re: [dpdk-dev] Issue with non-scattered rx in ixgbe and i40e when mbuf private area size is odd > >> > >> Hi, > >> > >> On 07/29/2015 10:24 PM, Zhang, Helin wrote: > >>> Hi Martin > >>> > >>> Thank you very much for the good catch! > >>> > >>> The similar situation in i40e, as explained by Konstantin. > >>> As header split hasn't been supported by DPDK till now. It would be better to put the header address in RX descriptor to 0. > >>> But in the future, during header split enabling. We may need to pay extra attention to that. As at least x710 datasheet said > >> specifically as below. > >>> "The header address should be set by the software to an even number (word aligned address)". We may need to find a way to > >> ensure that during mempool/mbuf allocation. > >> > >> Indeed it would be good to force the priv_size to be aligned. > >> > >> The priv_size could be aligned automatically in > >> rte_pktmbuf_pool_create(). The only possible problem I could see > >> is that it would break applications that access to the data buffer > >> by doing (sizeof(mbuf) + sizeof(priv)), which is probably not the > >> best thing to do (I didn't find any applications like this in dpdk). > > > > > > Might be just make rte_pktmbuf_pool_create() fail if input priv_size % MIN_ALIGN != 0? 
> > Hmm maybe it would break more applications: an odd priv_size is > probably rare, but a priv_size that is not aligned to 8 bytes is > maybe more common. My thought was that rte_pktmbuf_pool_create() was just introduced in 2.1, so if we add an extra requirement for the input parameter now there would be no ABI breakage, and not many people have started to use it already. To me it just seems a bit easier and more straightforward than silent alignment - the user would not have wrong assumptions here. Though if you think that a silent alignment would be more convenient for most users - I wouldn't insist. Konstantin > It's maybe safer to align the size transparently? > > > Regards, > Olivier > > > > > > >> > >> For applications that directly use rte_mempool_create() instead of > >> rte_pktmbuf_pool_create(), we could add a check using an assert in > >> rte_pktmbuf_init() and some words in the documentation. > >> > >> The question is: what should be the proper alignment? I would say > >> at least 8 bytes, but maybe cache_aligned is an option too. > > > > 8 bytes seems enough to me. 
> > > > Konstantin > > > >> > >> Regards, > >> Olivier > >> > >> > >>> > >>> Regards, > >>> Helin > >>> > >>>> -----Original Message----- > >>>> From: Ananyev, Konstantin > >>>> Sent: Wednesday, July 29, 2015 11:12 AM > >>>> To: Martin Weiser; Zhang, Helin; olivier.matz@6wind.com > >>>> Cc: dev@dpdk.org > >>>> Subject: RE: [dpdk-dev] Issue with non-scattered rx in ixgbe and i40e when mbuf > >>>> private area size is odd > >>>> > >>>> Hi Martin, > >>>> > >>>>> -----Original Message----- > >>>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Martin Weiser > >>>>> Sent: Wednesday, July 29, 2015 4:07 PM > >>>>> To: Zhang, Helin; olivier.matz@6wind.com > >>>>> Cc: dev@dpdk.org > >>>>> Subject: [dpdk-dev] Issue with non-scattered rx in ixgbe and i40e when > >>>>> mbuf private area size is odd > >>>>> > >>>>> Hi Helin, Hi Olivier, > >>>>> > >>>>> we are seeing an issue with the ixgbe and i40e drivers which we could > >>>>> track down to our setting of the private area size of the mbufs. > >>>>> The issue can be easily reproduced with the l2fwd example application > >>>>> when a small modification is done: just set the priv_size parameter in > >>>>> the call to the rte_pktmbuf_pool_create function to an odd number like > >>>>> 1. In our tests this causes every call to rte_eth_rx_burst to return > >>>>> 32 (which is the setting of nb_pkts) nonsense mbufs although no > >>>>> packets are received on the interface and the hardware counters do not > >>>>> report any received packets. > >>>> > >>>> From Niantic datasheet: > >>>> > >>>> "7.1.6.1 Advanced Receive Descriptors — Read Format Table 7-15 lists the > >>>> advanced receive descriptor programming by the software. The ... > >>>> Packet Buffer Address (64) > >>>> This is the physical address of the packet buffer. The lowest bit is A0 (LSB of the > >>>> address). > >>>> Header Buffer Address (64) > >>>> The physical address of the header buffer with the lowest bit being Descriptor > >>>> Done (DD). 
> >>>> When a packet spans in multiple descriptors, the header buffer address is used > >>>> only on the first descriptor. During the programming phase, software must set > >>>> the DD bit to zero (see the description of the DD bit in this section). This means > >>>> that header buffer addresses are always word aligned." > >>>> > >>>> Right now, in the ixgbe PMD we always set up the Packet Buffer Address (PBA) and > >>>> Header Buffer Address (HBA) to the same value: > >>>> buf_physaddr + RTE_PKTMBUF_HEADROOM > >>>> So when priv_size==1, the DD bit in the RXD is always set to one by SW itself, and then > >>>> SW considers that HW is already done with it. > >>>> In other words, right now for ixgbe you can't use an RX buffer that is not aligned on > >>>> a word boundary. > >>>> > >>>> So the advice right now would be: don't set priv_size to an odd value. > >>>> As we don't support the split header feature anyway, I think we can fix it just by > >>>> always setting the HBA in the RXD to zero. > >>>> Could you try the fix for ixgbe below? > >>>> > >>>> Same story with FVL, I believe. > >>>> Konstantin > >>>> > >>>> > >>>>> Interestingly this does not happen if we force the scattered rx path. > >>>>> > >>>>> I assume the drivers have some expectations regarding the alignment of > >>>>> the buf_addr in the mbuf and setting an odd private area size breaks > >>>>> this alignment in the rte_pktmbuf_init function. If this is the case > >>>>> then one possible fix might be to enforce an alignment on the private area size. 
> >>>>> > >>>>> Best regards, > >>>>> Martin > >>>> [ixgbe patch snipped] >
Hi, On 07/30/2015 11:43 AM, Ananyev, Konstantin wrote: > > >> -----Original Message----- >> From: Olivier MATZ [mailto:olivier.matz@6wind.com] >> Sent: Thursday, July 30, 2015 10:10 AM >> To: Ananyev, Konstantin; Zhang, Helin; Martin Weiser >> Cc: dev@dpdk.org >> Subject: Re: [dpdk-dev] Issue with non-scattered rx in ixgbe and i40e when mbuf private area size is odd >> >> Hi Konstantin, >> >> On 07/30/2015 11:00 AM, Ananyev, Konstantin wrote: >>> Hi Olivier, >>> >>>> -----Original Message----- >>>> From: Olivier MATZ [mailto:olivier.matz@6wind.com] >>>> Sent: Thursday, July 30, 2015 9:12 AM >>>> To: Zhang, Helin; Ananyev, Konstantin; Martin Weiser >>>> Cc: dev@dpdk.org >>>> Subject: Re: [dpdk-dev] Issue with non-scattered rx in ixgbe and i40e when mbuf private area size is odd >>>> >>>> Hi, >>>> >>>> On 07/29/2015 10:24 PM, Zhang, Helin wrote: >>>>> Hi Martin >>>>> >>>>> Thank you very much for the good catch! >>>>> >>>>> The similar situation in i40e, as explained by Konstantin. >>>>> As header split hasn't been supported by DPDK till now. It would be better to put the header address in RX descriptor to 0. >>>>> But in the future, during header split enabling. We may need to pay extra attention to that. As at least x710 datasheet said >>>> specifically as below. >>>>> "The header address should be set by the software to an even number (word aligned address)". We may need to find a way to >>>> ensure that during mempool/mbuf allocation. >>>> >>>> Indeed it would be good to force the priv_size to be aligned. >>>> >>>> The priv_size could be aligned automatically in >>>> rte_pktmbuf_pool_create(). The only possible problem I could see >>>> is that it would break applications that access to the data buffer >>>> by doing (sizeof(mbuf) + sizeof(priv)), which is probably not the >>>> best thing to do (I didn't find any applications like this in dpdk). >>> >>> >>> Might be just make rte_pktmbuf_pool_create() fail if input priv_size % MIN_ALIGN != 0? 
>> >> Hmm maybe it would break more applications: an odd priv_size is >> probably rare, but a priv_size that is not aligned to 8 bytes is >> maybe more common. > > My thought was that rte_mempool_create() was just introduced in 2.1, > so if we add extra requirement for the input parameter now - > there would be no ABI breakage, and not many people started to use it already. > For me just seems a bit easier and more straightforward then silent alignment - > user would not have wrong assumptions here. > Though if you think that a silent alignment would be more convenient > for most users - I wouldn't insist. Yes, I agree on the principle, but it depends whether this fix is integrated for 2.1 or not. I think it may already be a bit late for that, especially as it is not a very critical bug. Thomas, what do you think? Olivier > Konstantin > >> It's maybe safer to align the size transparently? >> >> >> Regards, >> Olivier >> >> >> >>> >>>> >>>> For applications that directly use rte_mempool_create() instead of >>>> rte_pktmbuf_pool_create(), we could add a check using an assert in >>>> rte_pktmbuf_init() and some words in the documentation. >>>> >>>> The question is: what should be the proper alignment? I would say >>>> at least 8 bytes, but maybe cache_aligned is an option too. >>> >>> 8 bytes seems enough to me. 
>>> >>> Konstantin >>> >>>> >>>> Regards, >>>> Olivier >>>> >>>> >>>>> >>>>> Regards, >>>>> Helin >>>>> >>>>>> -----Original Message----- >>>>>> From: Ananyev, Konstantin >>>>>> Sent: Wednesday, July 29, 2015 11:12 AM >>>>>> To: Martin Weiser; Zhang, Helin; olivier.matz@6wind.com >>>>>> Cc: dev@dpdk.org >>>>>> Subject: RE: [dpdk-dev] Issue with non-scattered rx in ixgbe and i40e when mbuf >>>>>> private area size is odd >>>>>> >>>>>> Hi Martin, >>>>>> >>>>>>> -----Original Message----- >>>>>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Martin Weiser >>>>>>> Sent: Wednesday, July 29, 2015 4:07 PM >>>>>>> To: Zhang, Helin; olivier.matz@6wind.com >>>>>>> Cc: dev@dpdk.org >>>>>>> Subject: [dpdk-dev] Issue with non-scattered rx in ixgbe and i40e when >>>>>>> mbuf private area size is odd >>>>>>> >>>>>>> Hi Helin, Hi Olivier, >>>>>>> >>>>>>> we are seeing an issue with the ixgbe and i40e drivers which we could >>>>>>> track down to our setting of the private area size of the mbufs. >>>>>>> The issue can be easily reproduced with the l2fwd example application >>>>>>> when a small modification is done: just set the priv_size parameter in >>>>>>> the call to the rte_pktmbuf_pool_create function to an odd number like >>>>>>> 1. In our tests this causes every call to rte_eth_rx_burst to return >>>>>>> 32 (which is the setting of nb_pkts) nonsense mbufs although no >>>>>>> packets are received on the interface and the hardware counters do not >>>>>>> report any received packets. >>>>>> >>>>>> From Niantic datasheet: >>>>>> >>>>>> "7.1.6.1 Advanced Receive Descriptors — Read Format Table 7-15 lists the >>>>>> advanced receive descriptor programming by the software. The ... >>>>>> Packet Buffer Address (64) >>>>>> This is the physical address of the packet buffer. The lowest bit is A0 (LSB of the >>>>>> address). >>>>>> Header Buffer Address (64) >>>>>> The physical address of the header buffer with the lowest bit being Descriptor >>>>>> Done (DD). 
>>>>>> When a packet spans in multiple descriptors, the header buffer address is used >>>>>> only on the first descriptor. During the programming phase, software must set >>>>>> the DD bit to zero (see the description of the DD bit in this section). This means >>>>>> that header buffer addresses are always word aligned." >>>>>> >>>>>> Right now, in the ixgbe PMD we always set up the Packet Buffer Address (PBA) and >>>>>> Header Buffer Address (HBA) to the same value: >>>>>> buf_physaddr + RTE_PKTMBUF_HEADROOM >>>>>> So when priv_size==1, the DD bit in the RXD is always set to one by SW itself, and then >>>>>> SW considers that HW is already done with it. >>>>>> In other words, right now for ixgbe you can't use an RX buffer that is not aligned on >>>>>> a word boundary. >>>>>> >>>>>> So the advice right now would be: don't set priv_size to an odd value. >>>>>> As we don't support the split header feature anyway, I think we can fix it just by >>>>>> always setting the HBA in the RXD to zero. >>>>>> Could you try the fix for ixgbe below? >>>>>> >>>>>> Same story with FVL, I believe. >>>>>> Konstantin >>>>>> >>>>>> >>>>>>> Interestingly this does not happen if we force the scattered rx path. >>>>>>> >>>>>>> I assume the drivers have some expectations regarding the alignment of >>>>>>> the buf_addr in the mbuf and setting an odd private area size breaks >>>>>>> this alignment in the rte_pktmbuf_init function. If this is the case >>>>>>> then one possible fix might be to enforce an alignment on the private area size. 
>>>>>>> >>>>>>> Best regards, >>>>>>> Martin >>>>>> [ixgbe patch snipped] >
2015-07-30 13:22, Olivier MATZ:
> On 07/30/2015 11:43 AM, Ananyev, Konstantin wrote:
> > From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> >> On 07/30/2015 11:00 AM, Ananyev, Konstantin wrote:
> >>> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> >>>> On 07/29/2015 10:24 PM, Zhang, Helin wrote:
> >>>>> The situation is similar in i40e, as explained by Konstantin.
> >>>>> As header split has not been supported by DPDK so far, it would be
> >>>>> better to set the header address in the RX descriptor to 0. But in
> >>>>> the future, when enabling header split, we may need to pay extra
> >>>>> attention to this, as at least the X710 datasheet states specifically:
> >>>>> "The header address should be set by the software to an even number
> >>>>> (word aligned address)". We may need to find a way to ensure that
> >>>>> during mempool/mbuf allocation.
> >>>>
> >>>> Indeed it would be good to force the priv_size to be aligned.
> >>>>
> >>>> The priv_size could be aligned automatically in
> >>>> rte_pktmbuf_pool_create(). The only possible problem I could see
> >>>> is that it would break applications that access the data buffer
> >>>> by doing (sizeof(mbuf) + sizeof(priv)), which is probably not the
> >>>> best thing to do (I didn't find any applications like this in DPDK).
> >>>
> >>> Maybe just make rte_pktmbuf_pool_create() fail if the input
> >>> priv_size % MIN_ALIGN != 0?
> >>
> >> Hmm, maybe it would break more applications: an odd priv_size is
> >> probably rare, but a priv_size that is not aligned to 8 bytes is
> >> maybe more common.
> >
> > My thought was that rte_pktmbuf_pool_create() was just introduced in 2.1,
> > so if we add an extra requirement on the input parameter now, there would
> > be no ABI breakage, and not many people have started to use it yet.
> > To me that seems a bit easier and more straightforward than silent
> > alignment - the user would have no wrong assumptions here.
> > Though if you think that a silent alignment would be more convenient
> > for most users, I wouldn't insist.
>
> Yes, I agree on the principle, but it depends whether this fix
> is integrated for 2.1 or not.
> I think it may already be a bit late for that, especially as it
> is not a very critical bug.
>
> Thomas, what do you think?

It is a fix. Adding a doc comment, an assert and an alignment constraint,
or a new automatic alignment in the not-yet-released function shouldn't hurt.
A patch would be welcome for 2.1.
Thanks
2015-07-29 18:12, Ananyev, Konstantin:
> As we don't support the split header feature anyway, I think we can fix it
> just by always setting the HBA in the RXD to zero.
> Could you try the fix for ixgbe below?

Please Konstantin, could you send a proper patch for this fix?
Thanks

> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -1183,7 +1183,7 @@ ixgbe_rx_alloc_bufs(struct ixgbe_rx_queue *rxq, bool reset_mbuf)
>
>  		/* populate the descriptors */
>  		dma_addr = rte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb));
> -		rxdp[i].read.hdr_addr = dma_addr;
> +		rxdp[i].read.hdr_addr = 0;
>  		rxdp[i].read.pkt_addr = dma_addr;
>  	}
>
> @@ -1414,7 +1414,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>  		rxe->mbuf = nmb;
>  		dma_addr =
>  			rte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(nmb));
> -		rxdp->read.hdr_addr = dma_addr;
> +		rxdp->read.hdr_addr = 0;
>  		rxdp->read.pkt_addr = dma_addr;
>
>  		/*
> @@ -1741,7 +1741,7 @@ next_desc:
>  			rxe->mbuf = nmb;
>
>  			rxm->data_off = RTE_PKTMBUF_HEADROOM;
> -			rxdp->read.hdr_addr = dma;
> +			rxdp->read.hdr_addr = 0;
>  			rxdp->read.pkt_addr = dma;
>  		} else
>  			rxe->mbuf = NULL;
> @@ -3633,7 +3633,7 @@ ixgbe_alloc_rx_queue_mbufs(struct ixgbe_rx_queue *rxq)
>  		dma_addr =
>  			rte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mbuf));
>  		rxd = &rxq->rx_ring[i];
> -		rxd->read.hdr_addr = dma_addr;
> +		rxd->read.hdr_addr = 0;
>  		rxd->read.pkt_addr = dma_addr;
>  		rxe[i].mbuf = mbuf;
>  	}
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec.c b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
> index 6c1647e..16a9c64 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx_vec.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
> @@ -56,6 +56,8 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
>  			RTE_PKTMBUF_HEADROOM);
>  	__m128i dma_addr0, dma_addr1;
>
> +	const __m128i hba_msk = _mm_set_epi64x(0, UINT64_MAX);
> +
>  	rxdp = rxq->rx_ring + rxq->rxrearm_start;
>
>  	/* Pull 'n' more MBUFs into the software ring */
> @@ -108,6 +110,9 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
>  		dma_addr0 = _mm_add_epi64(dma_addr0, hdr_room);
>  		dma_addr1 = _mm_add_epi64(dma_addr1, hdr_room);
>
> +		dma_addr0 = _mm_and_si128(dma_addr0, hba_msk);
> +		dma_addr1 = _mm_and_si128(dma_addr1, hba_msk);
> +
>  		/* flush desc with pa dma_addr */
>  		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr0);
>  		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr1);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index a0c8847..94967c5 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -1183,7 +1183,7 @@ ixgbe_rx_alloc_bufs(struct ixgbe_rx_queue *rxq, bool reset_mbuf)

 		/* populate the descriptors */
 		dma_addr = rte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb));
-		rxdp[i].read.hdr_addr = dma_addr;
+		rxdp[i].read.hdr_addr = 0;
 		rxdp[i].read.pkt_addr = dma_addr;
 	}

@@ -1414,7 +1414,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxe->mbuf = nmb;
 		dma_addr =
 			rte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(nmb));
-		rxdp->read.hdr_addr = dma_addr;
+		rxdp->read.hdr_addr = 0;
 		rxdp->read.pkt_addr = dma_addr;

 		/*
@@ -1741,7 +1741,7 @@ next_desc:
 			rxe->mbuf = nmb;

 			rxm->data_off = RTE_PKTMBUF_HEADROOM;
-			rxdp->read.hdr_addr = dma;
+			rxdp->read.hdr_addr = 0;
 			rxdp->read.pkt_addr = dma;
 		} else
 			rxe->mbuf = NULL;
@@ -3633,7 +3633,7 @@ ixgbe_alloc_rx_queue_mbufs(struct ixgbe_rx_queue *rxq)
 		dma_addr =
 			rte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mbuf));
 		rxd = &rxq->rx_ring[i];
-		rxd->read.hdr_addr = dma_addr;
+		rxd->read.hdr_addr = 0;
 		rxd->read.pkt_addr = dma_addr;
 		rxe[i].mbuf = mbuf;
 	}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec.c b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
index 6c1647e..16a9c64 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
@@ -56,6 +56,8 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
 			RTE_PKTMBUF_HEADROOM);
 	__m128i dma_addr0, dma_addr1;

+	const __m128i hba_msk = _mm_set_epi64x(0, UINT64_MAX);
+
 	rxdp = rxq->rx_ring + rxq->rxrearm_start;

 	/* Pull 'n' more MBUFs into the software ring */
@@ -108,6 +110,9 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
 		dma_addr0 = _mm_add_epi64(dma_addr0, hdr_room);
 		dma_addr1 = _mm_add_epi64(dma_addr1, hdr_room);

+		dma_addr0 = _mm_and_si128(dma_addr0, hba_msk);
+		dma_addr1 = _mm_and_si128(dma_addr1, hba_msk);
+
 		/* flush desc with pa dma_addr */
 		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr0);
 		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr1);