[1/1] net/mana: add 32 bit short doorbell

Message ID 20230909122347.2043969-1-weh@microsoft.com (mailing list archive)
State Superseded, archived
Delegated to: Ferruh Yigit
Series [1/1] net/mana: add 32 bit short doorbell

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-unit-arm64-testing success Testing PASS
ci/github-robot: build success github build: passed
ci/intel-Functional success Functional PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-compile-amd64-testing success Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-unit-amd64-testing success Testing PASS
ci/iol-compile-arm64-testing success Testing PASS
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/iol-sample-apps-testing success Testing PASS

Commit Message

Wei Hu Sept. 9, 2023, 12:23 p.m. UTC
  Add 32 bit short doorbell support. Ring the short doorbell when running
in 32 bit applications.

Cc: stable@dpdk.org

Signed-off-by: Wei Hu <weh@microsoft.com>
---
 drivers/net/mana/gdma.c | 95 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/mana/mana.h | 25 +++++++++++
 drivers/net/mana/rx.c   | 52 ++++++++++++++++++++++
 drivers/net/mana/tx.c   | 28 ++++++++++++
 4 files changed, 200 insertions(+)
  

Comments

Long Li Sept. 13, 2023, 9:11 p.m. UTC | #1
> Subject: [PATCH 1/1] net/mana: add 32 bit short doorbell
> 
> Add 32 bit short doorbell support. Ring short doorbell when running in 32 bit
> applicactions.
> 
> Cc: stable@dpdk.org
> 
> Signed-off-by: Wei Hu <weh@microsoft.com>

Please send this patch to

Ferruh Yigit <ferruh.yigit@amd.com>
Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>


> ---
>  drivers/net/mana/gdma.c | 95
> +++++++++++++++++++++++++++++++++++++++++
>  drivers/net/mana/mana.h | 25 +++++++++++
>  drivers/net/mana/rx.c   | 52 ++++++++++++++++++++++
>  drivers/net/mana/tx.c   | 28 ++++++++++++
>  4 files changed, 200 insertions(+)
> 
> diff --git a/drivers/net/mana/gdma.c b/drivers/net/mana/gdma.c index
> 65685fe236..d1da025d1b 100644
> --- a/drivers/net/mana/gdma.c
> +++ b/drivers/net/mana/gdma.c
> @@ -166,6 +166,97 @@ gdma_post_work_request(struct
> mana_gdma_queue *queue,
>  	return 0;
>  }
> 
> +#ifdef RTE_ARCH_32
> +union gdma_short_doorbell_entry {
> +	uint32_t     as_uint32;
> +
> +	struct {
> +		uint32_t tail_ptr_incr	: 16; /* Number of CQEs */
> +		uint32_t id		: 12;
> +		uint32_t reserved	: 3;
> +		uint32_t arm		: 1;
> +	} cq;
> +
> +	struct {
> +		uint32_t tail_ptr_incr	: 16; /* In number of bytes */
> +		uint32_t id		: 12;
> +		uint32_t reserved	: 4;
> +	} rq;
> +
> +	struct {
> +		uint32_t tail_ptr_incr	: 16; /* In number of bytes */
> +		uint32_t id		: 12;
> +		uint32_t reserved	: 4;
> +	} sq;
> +
> +	struct {
> +		uint32_t tail_ptr_incr	: 16; /* Number of EQEs */
> +		uint32_t id		: 12;
> +		uint32_t reserved	: 3;
> +		uint32_t arm		: 1;
> +	} eq;
> +}; /* HW DATA */
> +
> +enum {
> +	DOORBELL_SHORT_OFFSET_SQ = 0x10,
> +	DOORBELL_SHORT_OFFSET_RQ = 0x410,
> +	DOORBELL_SHORT_OFFSET_CQ = 0x810,
> +	DOORBELL_SHORT_OFFSET_EQ = 0xFF0,
> +};
> +
> +/*
> + * Write to hardware doorbell to notify new activity.
> + */
> +int
> +mana_ring_short_doorbell(void *db_page, enum gdma_queue_types
> queue_type,
> +			 uint32_t queue_id, uint32_t tail_incr, uint8_t arm) {
> +	uint8_t *addr = db_page;
> +	union gdma_short_doorbell_entry e = {};
> +
> +	if ((queue_id & ~GDMA_SHORT_DB_QID_MASK) ||
> +	    (tail_incr & ~GDMA_SHORT_DB_INC_MASK)) {
> +		DP_LOG(ERR, "%s: queue_id %u or "
> +		       "tail_incr %u overflowed, queue type %d",
> +		       __func__, queue_id, tail_incr, queue_type);

This should never happen.

What does "overflowed" mean? Is it a hardware error or software error? If this is a software error, the calling code needs to make sure it never overflows when using short doorbells.

Thanks,

Long
  
Wei Hu Sept. 14, 2023, 5:11 a.m. UTC | #2
> -----Original Message-----
> From: Long Li <longli@microsoft.com>
> Sent: Thursday, September 14, 2023 5:11 AM
> To: Wei Hu <weh@microsoft.com>; dev@dpdk.org; Ferruh Yigit
> <ferruh.yigit@amd.com>; Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>
> Cc: stable@dpdk.org
> Subject: RE: [PATCH 1/1] net/mana: add 32 bit short doorbell
> 
> > +
> > +/*
> > + * Write to hardware doorbell to notify new activity.
> > + */
> > +int
> > +mana_ring_short_doorbell(void *db_page, enum gdma_queue_types
> > queue_type,
> > +			 uint32_t queue_id, uint32_t tail_incr, uint8_t arm) {
> > +	uint8_t *addr = db_page;
> > +	union gdma_short_doorbell_entry e = {};
> > +
> > +	if ((queue_id & ~GDMA_SHORT_DB_QID_MASK) ||
> > +	    (tail_incr & ~GDMA_SHORT_DB_INC_MASK)) {
> > +		DP_LOG(ERR, "%s: queue_id %u or "
> > +		       "tail_incr %u overflowed, queue type %d",
> > +		       __func__, queue_id, tail_incr, queue_type);
> 
> This should never happen.
> 
> What does "overflowed" mean? Is it a hardware error or software error? If this
> is a software error, the calling code needs to make sure it never overflows
> when using short doorbells.
> 
This is to guard against any hardware or software bugs or weirdness, which could lead
to silent failures if nothing is logged.

The software has already been written to avoid any behavior that could intentionally
cause tail_incr to overflow.

Thanks,
Wei
  
Ferruh Yigit Sept. 18, 2023, 6:02 p.m. UTC | #3
On 9/9/2023 1:23 PM, Wei Hu wrote:
> Add 32 bit short doorbell support. Ring short doorbell when running
> in 32 bit applicactions.
> 

Hi Wei,

Is this a performance improvement for 32 bit, or is short doorbell support
required for 32 bit support?


This patch is using the RTE_ARCH_32 compile time macro to enable short
doorbell support, so the decision to support 32 bit or 64 bit has to be made
at compile time.

Also, I guess a 32 bit driver can run on a 64 bit arch; what will be the
result in that case?

My point is, instead of using a compile time flag, what do you think about
detecting the execution platform at runtime and using the preferred doorbell
according to the platform?

I can see short doorbell support touches multiple functions; can the support
be abstracted so that it can be used based on runtime detection?
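
For illustration, such runtime selection might look something like the sketch below; this is only a sketch against the patch as posted, and "use_short_db" is a hypothetical per-device flag that the patch does not define:

	/* Sketch: choose the doorbell format at run time instead of build
	 * time. Assumes both mana_ring_doorbell() and
	 * mana_ring_short_doorbell() are compiled unconditionally, and that
	 * the driver sets a hypothetical priv->use_short_db flag at probe
	 * time (e.g. based on the platform).
	 */
	static int
	mana_rq_ring_doorbell_any(struct mana_rxq *rxq, void *db_page)
	{
		if (rxq->priv->use_short_db)
			return mana_ring_short_doorbell(db_page,
					GDMA_QUEUE_RECEIVE,
					rxq->gdma_rq.id,
					rxq->wqe_cnt_to_short_db *
						GDMA_WQE_ALIGNMENT_UNIT_SIZE,
					0);

		return mana_ring_doorbell(db_page, GDMA_QUEUE_RECEIVE,
				rxq->gdma_rq.id,
				rxq->gdma_rq.head *
					GDMA_WQE_ALIGNMENT_UNIT_SIZE,
				0);
	}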

> Cc: stable@dpdk.org
> 

Similar comment as on the previous patch: this patch is not a fix but adds
new support, so I am not sure about backporting it.

> Signed-off-by: Wei Hu <weh@microsoft.com>
>

<...>

> @@ -97,6 +110,7 @@ mana_alloc_and_post_rx_wqe(struct mana_rxq *rxq)
>  		/* update queue for tracking pending packets */
>  		desc->pkt = mbuf;
>  		desc->wqe_size_in_bu = wqe_size_in_bu;
> +		rxq->wqe_cnt_to_short_db += wqe_size_in_bu;
>

This variable is always used within an RTE_ARCH_32 block, but it is set here
without the RTE_ARCH_32 ifdef; is this intentional?
  
Long Li Sept. 18, 2023, 8:01 p.m. UTC | #4
> Subject: [PATCH 1/1] net/mana: add 32 bit short doorbell
> 
> Add 32 bit short doorbell support. Ring short doorbell when running in 32 bit
> applicactions.
> 
> Cc: stable@dpdk.org
> 
> Signed-off-by: Wei Hu <weh@microsoft.com>
> ---
>  drivers/net/mana/gdma.c | 95
> +++++++++++++++++++++++++++++++++++++++++
>  drivers/net/mana/mana.h | 25 +++++++++++
>  drivers/net/mana/rx.c   | 52 ++++++++++++++++++++++
>  drivers/net/mana/tx.c   | 28 ++++++++++++
>  4 files changed, 200 insertions(+)
> 
> diff --git a/drivers/net/mana/gdma.c b/drivers/net/mana/gdma.c index
> 65685fe236..d1da025d1b 100644
> --- a/drivers/net/mana/gdma.c
> +++ b/drivers/net/mana/gdma.c
> @@ -166,6 +166,97 @@ gdma_post_work_request(struct
> mana_gdma_queue *queue,
>  	return 0;
>  }
> 
> +#ifdef RTE_ARCH_32
> +union gdma_short_doorbell_entry {
> +	uint32_t     as_uint32;
> +
> +	struct {
> +		uint32_t tail_ptr_incr	: 16; /* Number of CQEs */
> +		uint32_t id		: 12;
> +		uint32_t reserved	: 3;
> +		uint32_t arm		: 1;
> +	} cq;
> +
> +	struct {
> +		uint32_t tail_ptr_incr	: 16; /* In number of bytes */
> +		uint32_t id		: 12;
> +		uint32_t reserved	: 4;
> +	} rq;
> +
> +	struct {
> +		uint32_t tail_ptr_incr	: 16; /* In number of bytes */
> +		uint32_t id		: 12;
> +		uint32_t reserved	: 4;
> +	} sq;
> +
> +	struct {
> +		uint32_t tail_ptr_incr	: 16; /* Number of EQEs */
> +		uint32_t id		: 12;
> +		uint32_t reserved	: 3;
> +		uint32_t arm		: 1;
> +	} eq;
> +}; /* HW DATA */
> +
> +enum {
> +	DOORBELL_SHORT_OFFSET_SQ = 0x10,
> +	DOORBELL_SHORT_OFFSET_RQ = 0x410,
> +	DOORBELL_SHORT_OFFSET_CQ = 0x810,
> +	DOORBELL_SHORT_OFFSET_EQ = 0xFF0,
> +};
> +
> +/*
> + * Write to hardware doorbell to notify new activity.
> + */
> +int
> +mana_ring_short_doorbell(void *db_page, enum gdma_queue_types
> queue_type,
> +			 uint32_t queue_id, uint32_t tail_incr, uint8_t arm) {
> +	uint8_t *addr = db_page;
> +	union gdma_short_doorbell_entry e = {};
> +
> +	if ((queue_id & ~GDMA_SHORT_DB_QID_MASK) ||
> +	    (tail_incr & ~GDMA_SHORT_DB_INC_MASK)) {
> +		DP_LOG(ERR, "%s: queue_id %u or "
> +		       "tail_incr %u overflowed, queue type %d",
> +		       __func__, queue_id, tail_incr, queue_type);
> +		return -EINVAL;
> +	}
> +
> +	switch (queue_type) {
> +	case GDMA_QUEUE_SEND:
> +		e.sq.id = queue_id;
> +		e.sq.tail_ptr_incr = tail_incr;
> +		addr += DOORBELL_SHORT_OFFSET_SQ;
> +		break;
> +
> +	case GDMA_QUEUE_RECEIVE:
> +		e.rq.id = queue_id;
> +		e.rq.tail_ptr_incr = tail_incr;
> +		addr += DOORBELL_SHORT_OFFSET_RQ;
> +		break;
> +
> +	case GDMA_QUEUE_COMPLETION:
> +		e.cq.id = queue_id;
> +		e.cq.tail_ptr_incr = tail_incr;
> +		e.cq.arm = arm;
> +		addr += DOORBELL_SHORT_OFFSET_CQ;
> +		break;
> +
> +	default:
> +		DP_LOG(ERR, "Unsupported queue type %d", queue_type);
> +		return -1;
> +	}
> +
> +	/* Ensure all writes are done before ringing doorbell */
> +	rte_wmb();
> +
> +	DP_LOG(DEBUG, "db_page %p addr %p queue_id %u type %u tail %u
> arm %u",
> +	       db_page, addr, queue_id, queue_type, tail_incr, arm);
> +
> +	rte_write32(e.as_uint32, addr);
> +	return 0;
> +}
> +#else
>  union gdma_doorbell_entry {
>  	uint64_t     as_uint64;
> 
> @@ -248,6 +339,7 @@ mana_ring_doorbell(void *db_page, enum
> gdma_queue_types queue_type,
>  	rte_write64(e.as_uint64, addr);
>  	return 0;
>  }
> +#endif
> 
>  /*
>   * Poll completion queue for completions.
> @@ -287,6 +379,9 @@ gdma_poll_completion_queue(struct
> mana_gdma_queue *cq,
>  		num_comp++;
> 
>  		cq->head++;
> +#ifdef RTE_ARCH_32
> +		cq->head_incr_to_short_db++;
> +#endif
> 
>  		DP_LOG(DEBUG, "comp new 0x%x old 0x%x cqe 0x%x wq %u
> sq %u head %u",
>  		       new_owner_bits, old_owner_bits, cqe_owner_bits, diff --
> git a/drivers/net/mana/mana.h b/drivers/net/mana/mana.h index
> 5801491d75..848d87c096 100644
> --- a/drivers/net/mana/mana.h
> +++ b/drivers/net/mana/mana.h
> @@ -50,6 +50,19 @@ struct mana_shared_data {  #define
> MAX_TX_WQE_SIZE 512  #define MAX_RX_WQE_SIZE 256
> 
> +/* For 32 bit only */
> +#ifdef RTE_ARCH_32
> +#define	GDMA_SHORT_DB_INC_MASK 0xffff
> +#define	GDMA_SHORT_DB_QID_MASK 0xfff
> +
> +#define GDMA_SHORT_DB_MAX_WQE	(0x10000 / GDMA_WQE_ALIGNMENT_UNIT_SIZE)
> +
> +#define TX_WQE_SHORT_DB_THRESHOLD \
> +	(GDMA_SHORT_DB_MAX_WQE - (2 * MAX_TX_WQE_SIZE))
> +#define RX_WQE_SHORT_DB_THRESHOLD \

GDMA_SHORT_DB_MAX_WQE is in BU, MAX_TX_WQE_SIZE is in bytes.

Doing math on two different units...

And why use "2 *"? Is "1 *" not good enough?




> +	(GDMA_SHORT_DB_MAX_WQE - (2 * MAX_RX_WQE_SIZE))
> +#endif
> +
>  /* Values from the GDMA specification document, WQE format description */
> #define INLINE_OOB_SMALL_SIZE_IN_BYTES 8  #define
> INLINE_OOB_LARGE_SIZE_IN_BYTES 24 @@ -375,6 +388,9 @@ struct
> mana_gdma_queue {
>  	uint32_t id;
>  	uint32_t head;
>  	uint32_t tail;
> +#ifdef RTE_ARCH_32
> +	uint32_t head_incr_to_short_db;
> +#endif
>  };
> 
>  #define MANA_MR_BTREE_PER_QUEUE_N	64
> @@ -425,6 +441,9 @@ struct mana_rxq {
>  	 */
>  	uint32_t desc_ring_head, desc_ring_tail;
> 
> +	/* For storing wqe increment count btw each short doorbell ring */
> +	uint32_t wqe_cnt_to_short_db;
> +
>  	struct mana_gdma_queue gdma_rq;
>  	struct mana_gdma_queue gdma_cq;
>  	struct gdma_comp *gdma_comp_buf;
> @@ -455,8 +474,14 @@ extern int mana_logtype_init;
> 
>  #define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
> 
> +#ifdef RTE_ARCH_32
> +int mana_ring_short_doorbell(void *db_page, enum gdma_queue_types
> queue_type,
> +			     uint32_t queue_id, uint32_t tail_incr,
> +			     uint8_t arm);
> +#else
>  int mana_ring_doorbell(void *db_page, enum gdma_queue_types
> queue_type,
>  		       uint32_t queue_id, uint32_t tail, uint8_t arm);
> +#endif
>  int mana_rq_ring_doorbell(struct mana_rxq *rxq);
> 
>  int gdma_post_work_request(struct mana_gdma_queue *queue, diff --git
> a/drivers/net/mana/rx.c b/drivers/net/mana/rx.c index
> 14d9085801..303d129e5b 100644
> --- a/drivers/net/mana/rx.c
> +++ b/drivers/net/mana/rx.c
> @@ -39,10 +39,23 @@ mana_rq_ring_doorbell(struct mana_rxq *rxq)
>  	/* Hardware Spec specifies that software client should set 0 for
>  	 * wqe_cnt for Receive Queues.
>  	 */
> +#ifdef RTE_ARCH_32
> +	if (rxq->wqe_cnt_to_short_db) {
> +		ret = mana_ring_short_doorbell(db_page,
> GDMA_QUEUE_RECEIVE,
> +				 rxq->gdma_rq.id,
> +				 rxq->wqe_cnt_to_short_db *
> +
> 	GDMA_WQE_ALIGNMENT_UNIT_SIZE,
> +				 0);
> +	} else {

Is it possible that rxq->wqe_cnt_to_short_db might be 0 from the caller? The caller shouldn't ring the doorbell if there is nothing to ring.





> +		/* No need to ring, just return */
> +		ret = 0;
> +	}
> +#else
>  	ret = mana_ring_doorbell(db_page, GDMA_QUEUE_RECEIVE,
>  			 rxq->gdma_rq.id,
>  			 rxq->gdma_rq.head *
> GDMA_WQE_ALIGNMENT_UNIT_SIZE,
>  			 0);
> +#endif
> 
>  	if (ret)
>  		DP_LOG(ERR, "failed to ring RX doorbell ret %d", ret); @@ -
> 97,6 +110,7 @@ mana_alloc_and_post_rx_wqe(struct mana_rxq *rxq)
>  		/* update queue for tracking pending packets */
>  		desc->pkt = mbuf;
>  		desc->wqe_size_in_bu = wqe_size_in_bu;
> +		rxq->wqe_cnt_to_short_db += wqe_size_in_bu;
>  		rxq->desc_ring_head = (rxq->desc_ring_head + 1) % rxq-
> >num_desc;
>  	} else {
>  		DP_LOG(DEBUG, "failed to post recv ret %d", ret); @@ -
> 115,12 +129,22 @@ mana_alloc_and_post_rx_wqes(struct mana_rxq *rxq)
>  	int ret;
>  	uint32_t i;
> 
> +#ifdef RTE_ARCH_32
> +	rxq->wqe_cnt_to_short_db = 0;
> +#endif
>  	for (i = 0; i < rxq->num_desc; i++) {
>  		ret = mana_alloc_and_post_rx_wqe(rxq);
>  		if (ret) {
>  			DP_LOG(ERR, "failed to post RX ret = %d", ret);
>  			return ret;
>  		}
> +
> +#ifdef RTE_ARCH_32
> +		if (rxq->wqe_cnt_to_short_db >
> RX_WQE_SHORT_DB_THRESHOLD) {
> +			mana_rq_ring_doorbell(rxq);
> +			rxq->wqe_cnt_to_short_db = 0;
> +		}
> +#endif
>  	}
> 
>  	mana_rq_ring_doorbell(rxq);
> @@ -349,6 +373,9 @@ mana_start_rx_queues(struct rte_eth_dev *dev)
> 
>  		/* CQ head starts with count */
>  		rxq->gdma_cq.head = rxq->gdma_cq.count;
> +#ifdef RTE_ARCH_32
> +		rxq->gdma_cq.head_incr_to_short_db = 0; #endif
> 
>  		DRV_LOG(INFO, "rxq cq id %u buf %p count %u size %u",
>  			rxq->gdma_cq.id, rxq->gdma_cq.buffer, @@ -397,6
> +424,10 @@ mana_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
> uint16_t pkts_n)
>  	uint32_t i;
>  	int polled = 0;
> 
> +#ifdef RTE_ARCH_32
> +	rxq->wqe_cnt_to_short_db = 0;
> +#endif
> +
>  repoll:
>  	/* Polling on new completions if we have no backlog */
>  	if (rxq->comp_buf_idx == rxq->comp_buf_len) { @@ -505,6 +536,16
> @@ mana_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t
> pkts_n)
>  		wqe_posted++;
>  		if (pkt_received == pkts_n)
>  			break;
> +
> +#ifdef RTE_ARCH_32
> +		/* Ring short doorbell if approaching the wqe increment
> +		 * limit.
> +		 */
> +		if (rxq->wqe_cnt_to_short_db >
> RX_WQE_SHORT_DB_THRESHOLD) {
> +			mana_rq_ring_doorbell(rxq);
> +			rxq->wqe_cnt_to_short_db = 0;
> +		}
> +#endif
>  	}
> 
>  	rxq->backlog_idx = pkt_idx;
> @@ -529,6 +570,16 @@ static int
>  mana_arm_cq(struct mana_rxq *rxq, uint8_t arm)  {
>  	struct mana_priv *priv = rxq->priv;
> +#ifdef RTE_ARCH_32
> +	uint16_t cqe_incr = (uint16_t)rxq->gdma_cq.head_incr_to_short_db;

How do you make sure head_incr_to_short_db doesn't overflow?


Thanks,

Long
  
Wei Hu Sept. 19, 2023, 2:13 a.m. UTC | #5
> -----Original Message-----
> From: Long Li <longli@microsoft.com>
> Sent: Tuesday, September 19, 2023 4:02 AM
> To: Wei Hu <weh@microsoft.com>; dev@dpdk.org
> Cc: stable@dpdk.org; Ferruh Yigit <ferruh.yigit@amd.com>; Luca Boccassi
> <bluca@debian.org>; Kevin Traynor <ktraynor@redhat.com>
> Subject: RE: [PATCH 1/1] net/mana: add 32 bit short doorbell
> 
> > Subject: [PATCH 1/1] net/mana: add 32 bit short doorbell
> >
> > Add 32 bit short doorbell support. Ring short doorbell when running in
> > 32 bit applicactions.
> >
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Wei Hu <weh@microsoft.com>
> > ---
> >  drivers/net/mana/gdma.c | 95
> > +++++++++++++++++++++++++++++++++++++++++
> >  drivers/net/mana/mana.h | 25 +++++++++++
> >  drivers/net/mana/rx.c   | 52 ++++++++++++++++++++++
> >  drivers/net/mana/tx.c   | 28 ++++++++++++
> >  4 files changed, 200 insertions(+)
> >
> > diff --git a/drivers/net/mana/gdma.c b/drivers/net/mana/gdma.c index
> > 65685fe236..d1da025d1b 100644
> > --- a/drivers/net/mana/gdma.c
> > +++ b/drivers/net/mana/gdma.c
> > @@ -166,6 +166,97 @@ gdma_post_work_request(struct
> mana_gdma_queue
> > *queue,
> >  	return 0;
> >  }
> >
> > +#ifdef RTE_ARCH_32
> > +union gdma_short_doorbell_entry {
> > +	uint32_t     as_uint32;
> > +
> > +	struct {
> > +		uint32_t tail_ptr_incr	: 16; /* Number of CQEs */
> > +		uint32_t id		: 12;
> > +		uint32_t reserved	: 3;
> > +		uint32_t arm		: 1;
> > +	} cq;
> > +
> > +	struct {
> > +		uint32_t tail_ptr_incr	: 16; /* In number of bytes */
> > +		uint32_t id		: 12;
> > +		uint32_t reserved	: 4;
> > +	} rq;
> > +
> > +	struct {
> > +		uint32_t tail_ptr_incr	: 16; /* In number of bytes */
> > +		uint32_t id		: 12;
> > +		uint32_t reserved	: 4;
> > +	} sq;
> > +
> > +	struct {
> > +		uint32_t tail_ptr_incr	: 16; /* Number of EQEs */
> > +		uint32_t id		: 12;
> > +		uint32_t reserved	: 3;
> > +		uint32_t arm		: 1;
> > +	} eq;
> > +}; /* HW DATA */
> > +
> > +enum {
> > +	DOORBELL_SHORT_OFFSET_SQ = 0x10,
> > +	DOORBELL_SHORT_OFFSET_RQ = 0x410,
> > +	DOORBELL_SHORT_OFFSET_CQ = 0x810,
> > +	DOORBELL_SHORT_OFFSET_EQ = 0xFF0,
> > +};
> > +
> > +/*
> > + * Write to hardware doorbell to notify new activity.
> > + */
> > +int
> > +mana_ring_short_doorbell(void *db_page, enum gdma_queue_types
> > queue_type,
> > +			 uint32_t queue_id, uint32_t tail_incr, uint8_t arm) {
> > +	uint8_t *addr = db_page;
> > +	union gdma_short_doorbell_entry e = {};
> > +
> > +	if ((queue_id & ~GDMA_SHORT_DB_QID_MASK) ||
> > +	    (tail_incr & ~GDMA_SHORT_DB_INC_MASK)) {
> > +		DP_LOG(ERR, "%s: queue_id %u or "
> > +		       "tail_incr %u overflowed, queue type %d",
> > +		       __func__, queue_id, tail_incr, queue_type);
> > +		return -EINVAL;
> > +	}
> > +
> > +	switch (queue_type) {
> > +	case GDMA_QUEUE_SEND:
> > +		e.sq.id = queue_id;
> > +		e.sq.tail_ptr_incr = tail_incr;
> > +		addr += DOORBELL_SHORT_OFFSET_SQ;
> > +		break;
> > +
> > +	case GDMA_QUEUE_RECEIVE:
> > +		e.rq.id = queue_id;
> > +		e.rq.tail_ptr_incr = tail_incr;
> > +		addr += DOORBELL_SHORT_OFFSET_RQ;
> > +		break;
> > +
> > +	case GDMA_QUEUE_COMPLETION:
> > +		e.cq.id = queue_id;
> > +		e.cq.tail_ptr_incr = tail_incr;
> > +		e.cq.arm = arm;
> > +		addr += DOORBELL_SHORT_OFFSET_CQ;
> > +		break;
> > +
> > +	default:
> > +		DP_LOG(ERR, "Unsupported queue type %d", queue_type);
> > +		return -1;
> > +	}
> > +
> > +	/* Ensure all writes are done before ringing doorbell */
> > +	rte_wmb();
> > +
> > +	DP_LOG(DEBUG, "db_page %p addr %p queue_id %u type %u tail %u
> > arm %u",
> > +	       db_page, addr, queue_id, queue_type, tail_incr, arm);
> > +
> > +	rte_write32(e.as_uint32, addr);
> > +	return 0;
> > +}
> > +#else
> >  union gdma_doorbell_entry {
> >  	uint64_t     as_uint64;
> >
> > @@ -248,6 +339,7 @@ mana_ring_doorbell(void *db_page, enum
> > gdma_queue_types queue_type,
> >  	rte_write64(e.as_uint64, addr);
> >  	return 0;
> >  }
> > +#endif
> >
> >  /*
> >   * Poll completion queue for completions.
> > @@ -287,6 +379,9 @@ gdma_poll_completion_queue(struct
> mana_gdma_queue
> > *cq,
> >  		num_comp++;
> >
> >  		cq->head++;
> > +#ifdef RTE_ARCH_32
> > +		cq->head_incr_to_short_db++;
> > +#endif
> >
> >  		DP_LOG(DEBUG, "comp new 0x%x old 0x%x cqe 0x%x wq %u
> sq %u head
> > %u",
> >  		       new_owner_bits, old_owner_bits, cqe_owner_bits, diff --
> git
> > a/drivers/net/mana/mana.h b/drivers/net/mana/mana.h index
> > 5801491d75..848d87c096 100644
> > --- a/drivers/net/mana/mana.h
> > +++ b/drivers/net/mana/mana.h
> > @@ -50,6 +50,19 @@ struct mana_shared_data {  #define
> MAX_TX_WQE_SIZE
> > 512  #define MAX_RX_WQE_SIZE 256
> >
> > +/* For 32 bit only */
> > +#ifdef RTE_ARCH_32
> > +#define	GDMA_SHORT_DB_INC_MASK 0xffff
> > +#define	GDMA_SHORT_DB_QID_MASK 0xfff
> > +
> > +#define GDMA_SHORT_DB_MAX_WQE	(0x10000 /
> > GDMA_WQE_ALIGNMENT_UNIT_SIZE)
> > +
> > +#define TX_WQE_SHORT_DB_THRESHOLD \
> > +	(GDMA_SHORT_DB_MAX_WQE - (2 * MAX_TX_WQE_SIZE))
> > +#define RX_WQE_SHORT_DB_THRESHOLD \
> 
> GDMA_SHORT_DB_MAX_WQE is in BU, MAX_TX_WQE_SIZE is in bytes.
> 
> Doing math on two different units...
> 
That is a good point. I will correct this.

> And why using "2*", using "1*" is good enough?
> 
> 
Sure. I will use 1 in the next revision.
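
For illustration, a unit-consistent version for the next revision could look like the sketch below (an assumption, not the actual v2; it keeps everything in WQE basic units and assumes MAX_TX_WQE_SIZE/MAX_RX_WQE_SIZE are byte sizes that divide evenly by GDMA_WQE_ALIGNMENT_UNIT_SIZE):

	/* Sketch only: express the reserved headroom in the same basic
	 * units (BU) as GDMA_SHORT_DB_MAX_WQE, and reserve one
	 * maximum-sized WQE.
	 */
	#define TX_WQE_SHORT_DB_THRESHOLD \
		(GDMA_SHORT_DB_MAX_WQE - \
		 (MAX_TX_WQE_SIZE / GDMA_WQE_ALIGNMENT_UNIT_SIZE))
	#define RX_WQE_SHORT_DB_THRESHOLD \
		(GDMA_SHORT_DB_MAX_WQE - \
		 (MAX_RX_WQE_SIZE / GDMA_WQE_ALIGNMENT_UNIT_SIZE))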

> 
> 
> > +	(GDMA_SHORT_DB_MAX_WQE - (2 * MAX_RX_WQE_SIZE))
> > +#endif
> > +
> >  /* Values from the GDMA specification document, WQE format
> > description */ #define INLINE_OOB_SMALL_SIZE_IN_BYTES 8  #define
> > INLINE_OOB_LARGE_SIZE_IN_BYTES 24 @@ -375,6 +388,9 @@ struct
> > mana_gdma_queue {
> >  	uint32_t id;
> >  	uint32_t head;
> >  	uint32_t tail;
> > +#ifdef RTE_ARCH_32
> > +	uint32_t head_incr_to_short_db;
> > +#endif
> >  };
> >
> >  #define MANA_MR_BTREE_PER_QUEUE_N	64
> > @@ -425,6 +441,9 @@ struct mana_rxq {
> >  	 */
> >  	uint32_t desc_ring_head, desc_ring_tail;
> >
> > +	/* For storing wqe increment count btw each short doorbell ring */
> > +	uint32_t wqe_cnt_to_short_db;
> > +
> >  	struct mana_gdma_queue gdma_rq;
> >  	struct mana_gdma_queue gdma_cq;
> >  	struct gdma_comp *gdma_comp_buf;
> > @@ -455,8 +474,14 @@ extern int mana_logtype_init;
> >
> >  #define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
> >
> > +#ifdef RTE_ARCH_32
> > +int mana_ring_short_doorbell(void *db_page, enum gdma_queue_types
> > queue_type,
> > +			     uint32_t queue_id, uint32_t tail_incr,
> > +			     uint8_t arm);
> > +#else
> >  int mana_ring_doorbell(void *db_page, enum gdma_queue_types
> > queue_type,
> >  		       uint32_t queue_id, uint32_t tail, uint8_t arm);
> > +#endif
> >  int mana_rq_ring_doorbell(struct mana_rxq *rxq);
> >
> >  int gdma_post_work_request(struct mana_gdma_queue *queue, diff --git
> > a/drivers/net/mana/rx.c b/drivers/net/mana/rx.c index
> > 14d9085801..303d129e5b 100644
> > --- a/drivers/net/mana/rx.c
> > +++ b/drivers/net/mana/rx.c
> > @@ -39,10 +39,23 @@ mana_rq_ring_doorbell(struct mana_rxq *rxq)
> >  	/* Hardware Spec specifies that software client should set 0 for
> >  	 * wqe_cnt for Receive Queues.
> >  	 */
> > +#ifdef RTE_ARCH_32
> > +	if (rxq->wqe_cnt_to_short_db) {
> > +		ret = mana_ring_short_doorbell(db_page,
> > GDMA_QUEUE_RECEIVE,
> > +				 rxq->gdma_rq.id,
> > +				 rxq->wqe_cnt_to_short_db *
> > +
> > 	GDMA_WQE_ALIGNMENT_UNIT_SIZE,
> > +				 0);
> > +	} else {
> 
> Is it possible that rxq->wqe_cnt_to_short_db might be 0 from the caller? The
> caller shouldn't ring the doorbell if there is nothing to ring.
> 
This is just a safeguard. I can remove the check.
> 
> 
> 
> 
> > +		/* No need to ring, just return */
> > +		ret = 0;
> > +	}
> > +#else
> >  	ret = mana_ring_doorbell(db_page, GDMA_QUEUE_RECEIVE,
> >  			 rxq->gdma_rq.id,
> >  			 rxq->gdma_rq.head *
> > GDMA_WQE_ALIGNMENT_UNIT_SIZE,
> >  			 0);
> > +#endif
> >
> >  	if (ret)
> >  		DP_LOG(ERR, "failed to ring RX doorbell ret %d", ret); @@ -
> > 97,6 +110,7 @@ mana_alloc_and_post_rx_wqe(struct mana_rxq *rxq)
> >  		/* update queue for tracking pending packets */
> >  		desc->pkt = mbuf;
> >  		desc->wqe_size_in_bu = wqe_size_in_bu;
> > +		rxq->wqe_cnt_to_short_db += wqe_size_in_bu;
> >  		rxq->desc_ring_head = (rxq->desc_ring_head + 1) % rxq-
> > >num_desc;
> >  	} else {
> >  		DP_LOG(DEBUG, "failed to post recv ret %d", ret); @@ -
> > 115,12 +129,22 @@ mana_alloc_and_post_rx_wqes(struct mana_rxq *rxq)
> >  	int ret;
> >  	uint32_t i;
> >
> > +#ifdef RTE_ARCH_32
> > +	rxq->wqe_cnt_to_short_db = 0;
> > +#endif
> >  	for (i = 0; i < rxq->num_desc; i++) {
> >  		ret = mana_alloc_and_post_rx_wqe(rxq);
> >  		if (ret) {
> >  			DP_LOG(ERR, "failed to post RX ret = %d", ret);
> >  			return ret;
> >  		}
> > +
> > +#ifdef RTE_ARCH_32
> > +		if (rxq->wqe_cnt_to_short_db >
> > RX_WQE_SHORT_DB_THRESHOLD) {
> > +			mana_rq_ring_doorbell(rxq);
> > +			rxq->wqe_cnt_to_short_db = 0;
> > +		}
> > +#endif
> >  	}
> >
> >  	mana_rq_ring_doorbell(rxq);
> > @@ -349,6 +373,9 @@ mana_start_rx_queues(struct rte_eth_dev *dev)
> >
> >  		/* CQ head starts with count */
> >  		rxq->gdma_cq.head = rxq->gdma_cq.count;
> > +#ifdef RTE_ARCH_32
> > +		rxq->gdma_cq.head_incr_to_short_db = 0; #endif
> >
> >  		DRV_LOG(INFO, "rxq cq id %u buf %p count %u size %u",
> >  			rxq->gdma_cq.id, rxq->gdma_cq.buffer, @@ -397,6
> > +424,10 @@ mana_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
> > uint16_t pkts_n)
> >  	uint32_t i;
> >  	int polled = 0;
> >
> > +#ifdef RTE_ARCH_32
> > +	rxq->wqe_cnt_to_short_db = 0;
> > +#endif
> > +
> >  repoll:
> >  	/* Polling on new completions if we have no backlog */
> >  	if (rxq->comp_buf_idx == rxq->comp_buf_len) { @@ -505,6 +536,16
> @@
> > mana_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t
> > pkts_n)
> >  		wqe_posted++;
> >  		if (pkt_received == pkts_n)
> >  			break;
> > +
> > +#ifdef RTE_ARCH_32
> > +		/* Ring short doorbell if approaching the wqe increment
> > +		 * limit.
> > +		 */
> > +		if (rxq->wqe_cnt_to_short_db >
> > RX_WQE_SHORT_DB_THRESHOLD) {
> > +			mana_rq_ring_doorbell(rxq);
> > +			rxq->wqe_cnt_to_short_db = 0;
> > +		}
> > +#endif
> >  	}
> >
> >  	rxq->backlog_idx = pkt_idx;
> > @@ -529,6 +570,16 @@ static int
> >  mana_arm_cq(struct mana_rxq *rxq, uint8_t arm)  {
> >  	struct mana_priv *priv = rxq->priv;
> > +#ifdef RTE_ARCH_32
> > +	uint16_t cqe_incr = (uint16_t)rxq->gdma_cq.head_incr_to_short_db;
> 
> How do you make sure head_incr_to_short_db doesn't overflow?
> 

I have checked this with the hardware team. In my opinion it could easily overflow.
The hw team seems to suggest that the hw will take care of this.

Thanks,
Wei

> 
> Thanks,
> 
> Long
  
Wei Hu Sept. 19, 2023, 3:38 a.m. UTC | #6
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Tuesday, September 19, 2023 2:03 AM
> To: Wei Hu <weh@microsoft.com>; dev@dpdk.org; Long Li
> <longli@microsoft.com>
> Cc: stable@dpdk.org; Kevin Traynor <ktraynor@redhat.com>; Luca Boccassi
> <bluca@debian.org>
> Subject: Re: [PATCH 1/1] net/mana: add 32 bit short doorbell
> 
> On 9/9/2023 1:23 PM, Wei Hu wrote:
> > Add 32 bit short doorbell support. Ring short doorbell when running in
> > 32 bit applicactions.
> >
> 
> Hi Wei,
> 
> Is this performance improvement for 32 bit, or is short doorbell support
> required for 32 bit support?
> 	
> 
Hi Ferruh,

This is not a performance improvement patch. Without this patch, 32 bit applications
do not function.

> This patch is using RTE_ARCH_32 compile time macro to enable short doorbell
> support, so need to decide to support 32 bit or 64 bit in compile time.
> 
> Also I guess 32 bit driver can run on 64 bit arch, what will be the result in that
> case?

The patch is for applications compiled as 32 bit but running on a 64 bit Linux
kernel. There is no 32 bit mana kernel driver available, so the kernel still needs
to be 64 bit.
> 
> My point is, instead of using compile time flag, what do you think to detect
> execution platform on runtime and use preferred doorbell according platform?
> 
> I can see short descriptor support touches multiple functions, can the support
> be abstracted to let to use it based on runtime detection?

The 32 bit support request is from a specific customer who only has 32 bit applications.
The customer needs to build and link its applications against 32 bit libraries and drivers.
Therefore, the DPDK mana driver needs to be 32 bit anyway.

32 bit applications cannot use 64 bit doorbells. 64 bit applications can use 32 bit doorbells,
however the performance would suffer and it is definitely not recommended.

IMHO, there is not much difference between a compile time flag and an "if...then...else"
statement at runtime, except that in the latter case there are a few extra runtime
instructions and maybe some branch overhead in either the 64 bit or 32 bit case. Given we
have limited 32 bit use cases, I chose to just use the compile time flag, which seems
simpler to implement and less work for our verification team.

> 
> > Cc: stable@dpdk.org
> >
> 
> Similar comment as previous patch, this patch is not a fix but adding new
> support, not sure about backporting it.
> 
The customer who needs 32bit support wants it to be on 22.11.x. That is why 
I put "Cc: stable@dpdk.org" in it.

> > Signed-off-by: Wei Hu <weh@microsoft.com>
> >
> 
> <...>
> 
> > @@ -97,6 +110,7 @@ mana_alloc_and_post_rx_wqe(struct mana_rxq *rxq)
> >  		/* update queue for tracking pending packets */
> >  		desc->pkt = mbuf;
> >  		desc->wqe_size_in_bu = wqe_size_in_bu;
> > +		rxq->wqe_cnt_to_short_db += wqe_size_in_bu;
> >
> 
> This variable always used within RTE_ARCH_32 block, but set here without
> RTE_ARCH_32 ifdef, is this intentional?

No, it is not intentional. Thanks for pointing this out. I will add the ifdef in the
next round.

Thanks,
Wei
  
Ferruh Yigit Sept. 19, 2023, 11:27 a.m. UTC | #7
On 9/19/2023 4:38 AM, Wei Hu wrote:
> 
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>> Sent: Tuesday, September 19, 2023 2:03 AM
>> To: Wei Hu <weh@microsoft.com>; dev@dpdk.org; Long Li
>> <longli@microsoft.com>
>> Cc: stable@dpdk.org; Kevin Traynor <ktraynor@redhat.com>; Luca Boccassi
>> <bluca@debian.org>
>> Subject: Re: [PATCH 1/1] net/mana: add 32 bit short doorbell
>>
>> On 9/9/2023 1:23 PM, Wei Hu wrote:
>>> Add 32 bit short doorbell support. Ring short doorbell when running in
>>> 32 bit applicactions.
>>>
>>
>> Hi Wei,
>>
>> Is this performance improvement for 32 bit, or is short doorbell support
>> required for 32 bit support?
>> 	
>>
> Hi Ferruh,
> 
> This it not a performance improvement patch. Without this patch, 32 bit applications
> do not function. 
> 

OK

>> This patch is using RTE_ARCH_32 compile time macro to enable short doorbell
>> support, so need to decide to support 32 bit or 64 bit in compile time.
>>
>> Also I guess 32 bit driver can run on 64 bit arch, what will be the result in that
>> case?
> 
> The patch is for applications compiled as 32 bit but running on a 64 bit Linux
> kernel. There is no 32 bit mana kernel driver available, so the kernel still needs to
> be 64 bit.
>>
>> My point is, instead of using compile time flag, what do you think to detect
>> execution platform on runtime and use preferred doorbell according platform?
>>
>> I can see short descriptor support touches multiple functions, can the support
>> be abstracted to let to use it based on runtime detection?
> 
> The 32 bit support request is from a specific customer who only has 32 bit applications. 
> The customer needs to build and link its applications into 32bit libraries and drivers. 
> Therefore, the DPDK mana driver needs to be in 32 bit anyway. 
> 
> 32bit applications cannot use 64bit doorbells. 64bit applications can use 32bit doorbells,
> however the performance would suffer and it definitely not recommended. 
> 

OK, can you please add this detail to the commit log?

> IMHO, there is not much difference between compile time flag and "if...then...else" statement
> at runtime, except for in the latter case, a few more extra runtime instructions and maybe
> some branch overhead in either 64bit or 32bit case.  Given we have limited 32bit use
> cases, I chose to just use the compile time flag, which seems to be simpler to implement
> and less work for our verification team. 
> 

You are right; I was thinking it could help with deployment, but separate binaries
need to be compiled for 32 bit and 64 bit anyway, so there is not much gain.


>>
>>> Cc: stable@dpdk.org
>>>
>>
>> Similar comment as previous patch, this patch is not a fix but adding new
>> support, not sure about backporting it.
>>
> The customer who needs 32bit support wants it to be on 22.11.x. That is why 
> I put "Cc: stable@dpdk.org" in it.
> 
>>> Signed-off-by: Wei Hu <weh@microsoft.com>
>>>
>>
>> <...>
>>
>>> @@ -97,6 +110,7 @@ mana_alloc_and_post_rx_wqe(struct mana_rxq *rxq)
>>>  		/* update queue for tracking pending packets */
>>>  		desc->pkt = mbuf;
>>>  		desc->wqe_size_in_bu = wqe_size_in_bu;
>>> +		rxq->wqe_cnt_to_short_db += wqe_size_in_bu;
>>>
>>
>> This variable always used within RTE_ARCH_32 block, but set here without
>> RTE_ARCH_32 ifdef, is this intentional?
> 
> No, it is not intentional. Thanks for pointing this out. I will add ifdef in next
> round.
> 
> Thanks,
> Wei
>
  
Long Li Sept. 19, 2023, 7:23 p.m. UTC | #8
> > > +#ifdef RTE_ARCH_32
> > > +	uint16_t cqe_incr = (uint16_t)rxq->gdma_cq.head_incr_to_short_db;
> >
> > How do you make sure head_incr_to_short_db doesn't overflow?
> >
> 
> I have checked this with hardware team. In my opinion it would be easily
> overflown.
> The hw team seems suggesting the hw will take care of this.
> 
> Thanks,
> Wei

I'm not sure how HW can take care of this when it overflows. When it happens, the HW will miss a doorbell and the CQ queue will get full. Eventually you'll lose completions for TX/RX.

In mana_alloc_and_post_rx_wqes() and mana_rx_burst(), the code has checks against RX/TX_WQE_SHORT_DB_THRESHOLD to make sure tail_incr doesn't overflow when ringing the doorbell.

In gdma_poll_completion_queue(), you need a similar mechanism so that tail_incr does not overflow when ringing the doorbell.
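
For illustration, such a guard could look like the sketch below; COMPLETION_QUEUE_SHORT_DB_THRESHOLD is a hypothetical constant somewhat below 0xffff, and the exact placement inside gdma_poll_completion_queue() is an assumption:

	/* Sketch only, inside the polling loop of
	 * gdma_poll_completion_queue(): stop early so the pending CQ head
	 * increment always fits in the 16-bit tail_ptr_incr field.
	 */
	cq->head++;
#ifdef RTE_ARCH_32
	cq->head_incr_to_short_db++;
	if (cq->head_incr_to_short_db >= COMPLETION_QUEUE_SHORT_DB_THRESHOLD)
		break; /* let the caller ring the CQ doorbell and reset it */
#endif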

Thanks,

Long
  
Wei Hu Sept. 20, 2023, 3:11 a.m. UTC | #9
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> >
> > 32bit applications cannot use 64bit doorbells. 64bit applications can
> > use 32bit doorbells, however the performance would suffer and it definitely
> not recommended.
> >
> 
> OK, can you please add this detail to the commit log?
> 
Will do. Thanks.

Wei
  
Wei Hu Sept. 20, 2023, 8:10 a.m. UTC | #10
> -----Original Message-----
> From: Long Li <longli@microsoft.com>
> Sent: Wednesday, September 20, 2023 3:24 AM
> To: Wei Hu <weh@microsoft.com>; dev@dpdk.org
> Cc: stable@dpdk.org; Ferruh Yigit <ferruh.yigit@amd.com>; Luca Boccassi
> <bluca@debian.org>; Kevin Traynor <ktraynor@redhat.com>
> Subject: RE: [PATCH 1/1] net/mana: add 32 bit short doorbell
> 
> > > > +#ifdef RTE_ARCH_32
> > > > +	uint16_t cqe_incr =
> > > > +(uint16_t)rxq->gdma_cq.head_incr_to_short_db;
> > >
> > > How do you make sure head_incr_to_short_db doesn't overflow?
> > >
> >
> > I have checked this with hardware team. In my opinion it would be
> > easily overflown.
> > The hw team seems suggesting the hw will take care of this.
> >
> > Thanks,
> > Wei
> 
> I'm not sure how HW can take care of this when it overflows. When it
> happens, the HW will miss a doorbell and CQ queue will get full. And
> eventually you'll lose completions for TX/RX.
> 
> In mana_alloc_and_post_rx_wqes() and mana_rx_burst(), the code has check
> for RX/TX_WQE_SHORT_DB_THRESHOLD to make sure tail_incr doesn't
> overflow when ringing the doorbell.
> 
> In gdma_poll_completion_queue(), you need to have a similar mechanism to
> not overflow tail_incr  when ringing the doorbell.
> 
I am not sure what can be done here. Applications could run in poll mode without
needing to ring the cq doorbell, or it could take a very long time before they change the
interrupt state. What can we do when cq->head_incr_to_short_db reaches 0xffff in
gdma_poll_completion_queue()?

If it breaks out of the loop and returns, the next time it enters it may still be at 0xffff
because it has not rung the doorbell to have the counter reset yet.

If we just reset the value to 0 and let the loop keep going, it is no different from
casting it to a 16 bit unsigned int, which would be done in
mana_arm_cq() if it is eventually called.

Anyway, ringing the cq doorbell has not been tested, as the driver doesn't support interrupts.

Wei
  
Long Li Sept. 20, 2023, 5:28 p.m. UTC | #11
> > Subject: RE: [PATCH 1/1] net/mana: add 32 bit short doorbell
> >
> > > > > +#ifdef RTE_ARCH_32
> > > > > +	uint16_t cqe_incr =
> > > > > +(uint16_t)rxq->gdma_cq.head_incr_to_short_db;
> > > >
> > > > How do you make sure head_incr_to_short_db doesn't overflow?
> > > >
> > >
> > > I have checked this with hardware team. In my opinion it would be
> > > easily overflown.
> > > The hw team seems suggesting the hw will take care of this.
> > >
> > > Thanks,
> > > Wei
> >
> > I'm not sure how HW can take care of this when it overflows. When it
> > happens, the HW will miss a doorbell and CQ queue will get full. And
> > eventually you'll lose completions for TX/RX.
> >
> > In mana_alloc_and_post_rx_wqes() and mana_rx_burst(), the code has
> > check for RX/TX_WQE_SHORT_DB_THRESHOLD to make sure tail_incr doesn't
> > overflow when ringing the doorbell.
> >
> > In gdma_poll_completion_queue(), you need to have a similar mechanism
> > to not overflow tail_incr  when ringing the doorbell.
> >
> I am not sure what can be done here. Applications could run in poll mode without
> need to ring cq doorbell, or it could take very long time before it change the
> interrupt state. What we can do when cq->head_incr_to_short_db reaches 0xffff
> in gdma_poll_completion_queue()?
> 
> If it breaks out the loop and return, the next time it enters it may still  at0xffff
> because it has not rung doorbell the have it reset yet.
> 
> If  just resetting the value to 0 and let it keep going in the loop, it is no difference
> than casting it to 16 bit unsigned int, which would be done in
> mana_arm_cq() if it is eventually called.
> 
> Anyway, ringing cq doorbell has not been tested as the driver doesn't support
> interrupts.

If the 32 bit doorbell doesn't support interrupts, you can remove this feature from 32 bit.

Long
  

Patch

diff --git a/drivers/net/mana/gdma.c b/drivers/net/mana/gdma.c
index 65685fe236..d1da025d1b 100644
--- a/drivers/net/mana/gdma.c
+++ b/drivers/net/mana/gdma.c
@@ -166,6 +166,97 @@  gdma_post_work_request(struct mana_gdma_queue *queue,
 	return 0;
 }
 
+#ifdef RTE_ARCH_32
+union gdma_short_doorbell_entry {
+	uint32_t     as_uint32;
+
+	struct {
+		uint32_t tail_ptr_incr	: 16; /* Number of CQEs */
+		uint32_t id		: 12;
+		uint32_t reserved	: 3;
+		uint32_t arm		: 1;
+	} cq;
+
+	struct {
+		uint32_t tail_ptr_incr	: 16; /* In number of bytes */
+		uint32_t id		: 12;
+		uint32_t reserved	: 4;
+	} rq;
+
+	struct {
+		uint32_t tail_ptr_incr	: 16; /* In number of bytes */
+		uint32_t id		: 12;
+		uint32_t reserved	: 4;
+	} sq;
+
+	struct {
+		uint32_t tail_ptr_incr	: 16; /* Number of EQEs */
+		uint32_t id		: 12;
+		uint32_t reserved	: 3;
+		uint32_t arm		: 1;
+	} eq;
+}; /* HW DATA */
+
+enum {
+	DOORBELL_SHORT_OFFSET_SQ = 0x10,
+	DOORBELL_SHORT_OFFSET_RQ = 0x410,
+	DOORBELL_SHORT_OFFSET_CQ = 0x810,
+	DOORBELL_SHORT_OFFSET_EQ = 0xFF0,
+};
+
+/*
+ * Write to hardware doorbell to notify new activity.
+ */
+int
+mana_ring_short_doorbell(void *db_page, enum gdma_queue_types queue_type,
+			 uint32_t queue_id, uint32_t tail_incr, uint8_t arm)
+{
+	uint8_t *addr = db_page;
+	union gdma_short_doorbell_entry e = {};
+
+	if ((queue_id & ~GDMA_SHORT_DB_QID_MASK) ||
+	    (tail_incr & ~GDMA_SHORT_DB_INC_MASK)) {
+		DP_LOG(ERR, "%s: queue_id %u or "
+		       "tail_incr %u overflowed, queue type %d",
+		       __func__, queue_id, tail_incr, queue_type);
+		return -EINVAL;
+	}
+
+	switch (queue_type) {
+	case GDMA_QUEUE_SEND:
+		e.sq.id = queue_id;
+		e.sq.tail_ptr_incr = tail_incr;
+		addr += DOORBELL_SHORT_OFFSET_SQ;
+		break;
+
+	case GDMA_QUEUE_RECEIVE:
+		e.rq.id = queue_id;
+		e.rq.tail_ptr_incr = tail_incr;
+		addr += DOORBELL_SHORT_OFFSET_RQ;
+		break;
+
+	case GDMA_QUEUE_COMPLETION:
+		e.cq.id = queue_id;
+		e.cq.tail_ptr_incr = tail_incr;
+		e.cq.arm = arm;
+		addr += DOORBELL_SHORT_OFFSET_CQ;
+		break;
+
+	default:
+		DP_LOG(ERR, "Unsupported queue type %d", queue_type);
+		return -1;
+	}
+
+	/* Ensure all writes are done before ringing doorbell */
+	rte_wmb();
+
+	DP_LOG(DEBUG, "db_page %p addr %p queue_id %u type %u tail %u arm %u",
+	       db_page, addr, queue_id, queue_type, tail_incr, arm);
+
+	rte_write32(e.as_uint32, addr);
+	return 0;
+}
+#else
 union gdma_doorbell_entry {
 	uint64_t     as_uint64;
 
@@ -248,6 +339,7 @@  mana_ring_doorbell(void *db_page, enum gdma_queue_types queue_type,
 	rte_write64(e.as_uint64, addr);
 	return 0;
 }
+#endif
 
 /*
  * Poll completion queue for completions.
@@ -287,6 +379,9 @@  gdma_poll_completion_queue(struct mana_gdma_queue *cq,
 		num_comp++;
 
 		cq->head++;
+#ifdef RTE_ARCH_32
+		cq->head_incr_to_short_db++;
+#endif
 
 		DP_LOG(DEBUG, "comp new 0x%x old 0x%x cqe 0x%x wq %u sq %u head %u",
 		       new_owner_bits, old_owner_bits, cqe_owner_bits,
diff --git a/drivers/net/mana/mana.h b/drivers/net/mana/mana.h
index 5801491d75..848d87c096 100644
--- a/drivers/net/mana/mana.h
+++ b/drivers/net/mana/mana.h
@@ -50,6 +50,19 @@  struct mana_shared_data {
 #define MAX_TX_WQE_SIZE 512
 #define MAX_RX_WQE_SIZE 256
 
+/* For 32 bit only */
+#ifdef RTE_ARCH_32
+#define	GDMA_SHORT_DB_INC_MASK 0xffff
+#define	GDMA_SHORT_DB_QID_MASK 0xfff
+
+#define GDMA_SHORT_DB_MAX_WQE	(0x10000 / GDMA_WQE_ALIGNMENT_UNIT_SIZE)
+
+#define TX_WQE_SHORT_DB_THRESHOLD \
+	(GDMA_SHORT_DB_MAX_WQE - (2 * MAX_TX_WQE_SIZE))
+#define RX_WQE_SHORT_DB_THRESHOLD \
+	(GDMA_SHORT_DB_MAX_WQE - (2 * MAX_RX_WQE_SIZE))
+#endif
+
 /* Values from the GDMA specification document, WQE format description */
 #define INLINE_OOB_SMALL_SIZE_IN_BYTES 8
 #define INLINE_OOB_LARGE_SIZE_IN_BYTES 24
@@ -375,6 +388,9 @@  struct mana_gdma_queue {
 	uint32_t id;
 	uint32_t head;
 	uint32_t tail;
+#ifdef RTE_ARCH_32
+	uint32_t head_incr_to_short_db;
+#endif
 };
 
 #define MANA_MR_BTREE_PER_QUEUE_N	64
@@ -425,6 +441,9 @@  struct mana_rxq {
 	 */
 	uint32_t desc_ring_head, desc_ring_tail;
 
+	/* For storing wqe increment count btw each short doorbell ring */
+	uint32_t wqe_cnt_to_short_db;
+
 	struct mana_gdma_queue gdma_rq;
 	struct mana_gdma_queue gdma_cq;
 	struct gdma_comp *gdma_comp_buf;
@@ -455,8 +474,14 @@  extern int mana_logtype_init;
 
 #define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
 
+#ifdef RTE_ARCH_32
+int mana_ring_short_doorbell(void *db_page, enum gdma_queue_types queue_type,
+			     uint32_t queue_id, uint32_t tail_incr,
+			     uint8_t arm);
+#else
 int mana_ring_doorbell(void *db_page, enum gdma_queue_types queue_type,
 		       uint32_t queue_id, uint32_t tail, uint8_t arm);
+#endif
 int mana_rq_ring_doorbell(struct mana_rxq *rxq);
 
 int gdma_post_work_request(struct mana_gdma_queue *queue,
diff --git a/drivers/net/mana/rx.c b/drivers/net/mana/rx.c
index 14d9085801..303d129e5b 100644
--- a/drivers/net/mana/rx.c
+++ b/drivers/net/mana/rx.c
@@ -39,10 +39,23 @@  mana_rq_ring_doorbell(struct mana_rxq *rxq)
 	/* Hardware Spec specifies that software client should set 0 for
 	 * wqe_cnt for Receive Queues.
 	 */
+#ifdef RTE_ARCH_32
+	if (rxq->wqe_cnt_to_short_db) {
+		ret = mana_ring_short_doorbell(db_page, GDMA_QUEUE_RECEIVE,
+				 rxq->gdma_rq.id,
+				 rxq->wqe_cnt_to_short_db *
+					GDMA_WQE_ALIGNMENT_UNIT_SIZE,
+				 0);
+	} else {
+		/* No need to ring, just return */
+		ret = 0;
+	}
+#else
 	ret = mana_ring_doorbell(db_page, GDMA_QUEUE_RECEIVE,
 			 rxq->gdma_rq.id,
 			 rxq->gdma_rq.head * GDMA_WQE_ALIGNMENT_UNIT_SIZE,
 			 0);
+#endif
 
 	if (ret)
 		DP_LOG(ERR, "failed to ring RX doorbell ret %d", ret);
@@ -97,6 +110,7 @@  mana_alloc_and_post_rx_wqe(struct mana_rxq *rxq)
 		/* update queue for tracking pending packets */
 		desc->pkt = mbuf;
 		desc->wqe_size_in_bu = wqe_size_in_bu;
+		rxq->wqe_cnt_to_short_db += wqe_size_in_bu;
 		rxq->desc_ring_head = (rxq->desc_ring_head + 1) % rxq->num_desc;
 	} else {
 		DP_LOG(DEBUG, "failed to post recv ret %d", ret);
@@ -115,12 +129,22 @@  mana_alloc_and_post_rx_wqes(struct mana_rxq *rxq)
 	int ret;
 	uint32_t i;
 
+#ifdef RTE_ARCH_32
+	rxq->wqe_cnt_to_short_db = 0;
+#endif
 	for (i = 0; i < rxq->num_desc; i++) {
 		ret = mana_alloc_and_post_rx_wqe(rxq);
 		if (ret) {
 			DP_LOG(ERR, "failed to post RX ret = %d", ret);
 			return ret;
 		}
+
+#ifdef RTE_ARCH_32
+		if (rxq->wqe_cnt_to_short_db > RX_WQE_SHORT_DB_THRESHOLD) {
+			mana_rq_ring_doorbell(rxq);
+			rxq->wqe_cnt_to_short_db = 0;
+		}
+#endif
 	}
 
 	mana_rq_ring_doorbell(rxq);
@@ -349,6 +373,9 @@  mana_start_rx_queues(struct rte_eth_dev *dev)
 
 		/* CQ head starts with count */
 		rxq->gdma_cq.head = rxq->gdma_cq.count;
+#ifdef RTE_ARCH_32
+		rxq->gdma_cq.head_incr_to_short_db = 0;
+#endif
 
 		DRV_LOG(INFO, "rxq cq id %u buf %p count %u size %u",
 			rxq->gdma_cq.id, rxq->gdma_cq.buffer,
@@ -397,6 +424,10 @@  mana_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 	uint32_t i;
 	int polled = 0;
 
+#ifdef RTE_ARCH_32
+	rxq->wqe_cnt_to_short_db = 0;
+#endif
+
 repoll:
 	/* Polling on new completions if we have no backlog */
 	if (rxq->comp_buf_idx == rxq->comp_buf_len) {
@@ -505,6 +536,16 @@  mana_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		wqe_posted++;
 		if (pkt_received == pkts_n)
 			break;
+
+#ifdef RTE_ARCH_32
+		/* Ring short doorbell if approaching the wqe increment
+		 * limit.
+		 */
+		if (rxq->wqe_cnt_to_short_db > RX_WQE_SHORT_DB_THRESHOLD) {
+			mana_rq_ring_doorbell(rxq);
+			rxq->wqe_cnt_to_short_db = 0;
+		}
+#endif
 	}
 
 	rxq->backlog_idx = pkt_idx;
@@ -529,6 +570,16 @@  static int
 mana_arm_cq(struct mana_rxq *rxq, uint8_t arm)
 {
 	struct mana_priv *priv = rxq->priv;
+#ifdef RTE_ARCH_32
+	uint16_t cqe_incr = (uint16_t)rxq->gdma_cq.head_incr_to_short_db;
+
+	rxq->gdma_cq.head_incr_to_short_db = 0;
+	DP_LOG(DEBUG, "Ringing completion queue ID %u incr %u arm %d",
+	       rxq->gdma_cq.id, cqe_incr, arm);
+
+	return mana_ring_short_doorbell(priv->db_page, GDMA_QUEUE_COMPLETION,
+				  rxq->gdma_cq.id, cqe_incr, arm);
+#else
 	uint32_t head = rxq->gdma_cq.head %
 		(rxq->gdma_cq.count << COMPLETION_QUEUE_ENTRY_OWNER_BITS_SIZE);
 
@@ -537,6 +588,7 @@  mana_arm_cq(struct mana_rxq *rxq, uint8_t arm)
 
 	return mana_ring_doorbell(priv->db_page, GDMA_QUEUE_COMPLETION,
 				  rxq->gdma_cq.id, head, arm);
+#endif
 }
 
 int
diff --git a/drivers/net/mana/tx.c b/drivers/net/mana/tx.c
index 11ba2ee1ac..8ff81bde09 100644
--- a/drivers/net/mana/tx.c
+++ b/drivers/net/mana/tx.c
@@ -137,6 +137,9 @@  mana_start_tx_queues(struct rte_eth_dev *dev)
 
 		/* CQ head starts with count (not 0) */
 		txq->gdma_cq.head = txq->gdma_cq.count;
+#ifdef RTE_ARCH_32
+		txq->gdma_cq.head_incr_to_short_db = 0;
+#endif
 
 		DRV_LOG(INFO, "txq cq id %u buf %p count %u size %u head %u",
 			txq->gdma_cq.id, txq->gdma_cq.buffer,
@@ -176,6 +179,9 @@  mana_tx_burst(void *dpdk_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	void *db_page;
 	uint16_t pkt_sent = 0;
 	uint32_t num_comp, i;
+#ifdef RTE_ARCH_32
+	uint32_t wqe_count = 0;
+#endif
 
 	/* Process send completions from GDMA */
 	num_comp = gdma_poll_completion_queue(&txq->gdma_cq,
@@ -418,6 +424,20 @@  mana_tx_burst(void *dpdk_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 			DP_LOG(DEBUG, "nb_pkts %u pkt[%d] sent",
 			       nb_pkts, pkt_idx);
+#ifdef RTE_ARCH_32
+			wqe_count += wqe_size_in_bu;
+			if (wqe_count > TX_WQE_SHORT_DB_THRESHOLD) {
+				/* wqe_count approaching to short doorbell
+				 * increment limit. Stop processing further
+				 * more packets and just ring short
+				 * doorbell.
+				 */
+				DP_LOG(DEBUG, "wqe_count %u reaching limit, "
+				       "pkt_sent %d",
+				       wqe_count, pkt_sent);
+				break;
+			}
+#endif
 		} else {
 			DP_LOG(DEBUG, "pkt[%d] failed to post send ret %d",
 			       pkt_idx, ret);
@@ -436,11 +456,19 @@  mana_tx_burst(void *dpdk_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	}
 
 	if (pkt_sent) {
+#ifdef RTE_ARCH_32
+		ret = mana_ring_short_doorbell(db_page, GDMA_QUEUE_SEND,
+					       txq->gdma_sq.id,
+					       wqe_count *
+						GDMA_WQE_ALIGNMENT_UNIT_SIZE,
+					       0);
+#else
 		ret = mana_ring_doorbell(db_page, GDMA_QUEUE_SEND,
 					 txq->gdma_sq.id,
 					 txq->gdma_sq.head *
 						GDMA_WQE_ALIGNMENT_UNIT_SIZE,
 					 0);
+#endif
 		if (ret)
 			DP_LOG(ERR, "mana_ring_doorbell failed ret %d", ret);
 	}