[v5,28/28] common/cnxk: add support for per-port RQ in inline device

Message ID 20220508074839.6965-28-ndabilpuram@marvell.com (mailing list archive)
State Accepted, archived
Delegated to: Jerin Jacob
Headers
Series [v5,01/28] common/cnxk: add multi channel support for SDP send queues |

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation fail Compilation issues
ci/intel-Testing success Testing PASS

Commit Message

Nithin Dabilpuram May 8, 2022, 7:48 a.m. UTC
Add support for a per-port RQ in the inline device, thereby
using the Aura/Pool attributes from that port's first RQ.
When the inline device is used with channel masking, it falls
back to a single RQ for all ethdev ports.

Also remove the clamping up of the CQ size for LBK ethdevs when
inline inbound is enabled, as backpressure is now supported
even on LBK ethdevs.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_nix.h             |   2 +-
 drivers/common/cnxk/roc_nix_debug.c       |   7 +-
 drivers/common/cnxk/roc_nix_inl.c         |  81 ++++++++--------
 drivers/common/cnxk/roc_nix_inl.h         |   5 +-
 drivers/common/cnxk/roc_nix_inl_dev.c     |  42 ++++++--
 drivers/common/cnxk/roc_nix_inl_dev_irq.c | 155 +++++++++++++++++++-----------
 drivers/common/cnxk/roc_nix_inl_priv.h    |  12 ++-
 drivers/common/cnxk/roc_npc.c             |  13 ++-
 drivers/common/cnxk/version.map           |   1 -
 drivers/net/cnxk/cnxk_ethdev.c            |  14 +--
 10 files changed, 202 insertions(+), 130 deletions(-)
  

Comments

Jerin Jacob May 10, 2022, 2:31 p.m. UTC | #1
On Sun, May 8, 2022 at 1:22 PM Nithin Dabilpuram
<ndabilpuram@marvell.com> wrote:
>
> Add support for per port RQ in inline device thereby using
> Aura/Pool attributes from that port specific first RQ.
> When inline device is used with channel masking, it will
> fallback to single RQ for all ethdev ports.
>
> Also remove clamping up of CQ size for LBK ethdev when
> inline inbound is enabled as now backpressure is supported
> even on LBK ethdevs.
>
> Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>

Series-Acked-by: Jerin Jacob <jerinj@marvell.com>

Updated the git commit as follows and applied to
dpdk-next-net-mrvl/for-next-net. Thanks

commit 1d129cd2b63cae425ae807e9e0a11084c2d4fad9
Author: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date:   Sun May 8 13:18:39 2022 +0530

    common/cnxk: add support for per-port RQ in inline device

    Add support for per port RQ in inline device thereby using
    Aura/Pool attributes from that port specific first RQ.
    When inline device is used with channel masking, it will
    fallback to single RQ for all ethdev ports.

    Also remove clamping up of CQ size for LBK ethdev when
    inline inbound is enabled as now backpressure is supported
    even on LBK ethdevs.

    Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit 69f03449231aea7da82ef171795a4d51d3cf4578
Author: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date:   Sun May 8 13:18:38 2022 +0530

    net/cnxk: fix hotplug detach sequence for first device

    Fix hotplug detach sequence to handle case where first PCI
    device that is hosting NPA LF is being destroyed while in use.

    Fixes: 5a4341c84979 ("net/cnxk: add platform specific probe and remove")
    Cc: stable@dpdk.org

    Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit 55130340c7e6ae76a075cd866e8d98408fd2a21a
Author: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date:   Sun May 8 13:18:37 2022 +0530

    net/cnxk: fix multi-seg extraction in vwqe path

    Fix multi-seg extraction in vwqe path to avoid updating mbuf[]
    array until it is used via cq0 path.

    Fixes: 7fbbc981d54f ("event/cnxk: support vectorized Rx event fast path")
    Cc: stable@dpdk.org

    Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
    Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit 61b14ebde440fb5ebdc27d5c72b1da58716c3209
Author: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date:   Sun May 8 13:18:36 2022 +0530

    net/cnxk: perform early MTU setup for eventmode

    Perform early MTU setup for event mode path in order
    to update the Rx/Tx offload flags before Rx adapter setup
    starts.

    Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit a21e134c7e34b1405a3ff5b5b67b23528e133ac8
Author: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date:   Sun May 8 13:18:35 2022 +0530

    net/cnxk: add support for flow control for outbound inline

    Add support for flow control in outbound inline path using
    FC updates from CPT.

    Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit f418ba07428e5536f34756bb6ad4125d74af5583
Author: Akhil Goyal <gakhil@marvell.com>
Date:   Sun May 8 13:18:34 2022 +0530

    net/cnxk: support security stats

    Enabled rte_security stats operation based on the configuration
    of SA options set while creating session.

    Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
    Signed-off-by: Akhil Goyal <gakhil@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit 8b300f9d766ca2dfad049d762239fb9aaa1d2880
Author: Akhil Goyal <gakhil@marvell.com>
Date:   Sun May 8 13:18:33 2022 +0530

    net/cnxk: add capabilities for IPsec options

    Added supported capabilities for various IPsec SA options.

    Signed-off-by: Akhil Goyal <gakhil@marvell.com>
    Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit 7245450d99c329f6cf53e0c636024278229331f1
Author: Akhil Goyal <gakhil@marvell.com>
Date:   Sun May 8 13:18:32 2022 +0530

    net/cnxk: add capabilities for IPsec crypto algos

    Added supported crypto algorithms for inline IPsec
    offload.

    Signed-off-by: Akhil Goyal <gakhil@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit 781dc9daa5e83e25cf9c222ce1bbc60c5dcf7a42
Author: Akhil Goyal <gakhil@marvell.com>
Date:   Sun May 8 13:18:31 2022 +0530

    net/cnxk: update olflags with L3/L4 csum offload

    When the packet is processed with inline IPsec offload,
    the ol_flags were updated only with RTE_MBUF_F_RX_SEC_OFFLOAD.

    But the hardware can also update the L3/L4 csum offload flags.
    Hence, ol_flags are updated with RTE_MBUF_F_RX_IP_CKSUM_GOOD,
    RTE_MBUF_F_RX_L4_CKSUM_GOOD, etc based on the microcode completion
    codes.

    Signed-off-by: Akhil Goyal <gakhil@marvell.com>
    Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit 67462f5dd216756c47d942bf45b5bf45c91051d4
Author: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date:   Sun May 8 13:18:30 2022 +0530

    net/cnxk: optimize Rx fast path for security offload

    Optimize Rx fast path for security packets by preprocessing
    most of the operations such as sa pointer compute,
    inner WQE pointer fetch and microcode completion translation
    before the pkt is characterized as inbound inline pkt.

    Preprocessed info will be discarded if the packet is not
    found to be a security packet. Also fix fetching of CQ word5
    for vector mode. Get the ucode completion code from the CPT
    parse header and RLEN from the IPv4/IPv6 decrypted packet, as
    it is in the same 64B cacheline as the CPT parse header in
    most cases. This avoids accessing an extra cacheline.

    Fixes: c062f5726f61 ("net/cnxk: support IP reassembly")
    Cc: stable@dpdk.org

    Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

Author: Akhil Goyal <gakhil@marvell.com>
Date:   Sun May 8 13:18:29 2022 +0530

    net/cnxk: support decrement TTL for inline IPsec

    Added support for decrementing TTL (IPv4)/hoplimit (IPv6)
    during inline IPsec processing if the security session's
    SA options are enabled with dec_ttl.

    Signed-off-by: Akhil Goyal <gakhil@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit 2bd680ab791eeebcfdf7f4535069723a523d5e6b
Author: Akhil Goyal <gakhil@marvell.com>
Date:   Sun May 8 13:18:28 2022 +0530

    net/cnxk: reset offload flag if reassembly is disabled

    The Rx offload flag needs to be reset if the IP reassembly
    flag is not set while calling reassembly_conf_set.

    Signed-off-by: Akhil Goyal <gakhil@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit 7016602ac842e6636594bfa6efdb783d141955ec
Author: Akhil Goyal <gakhil@marvell.com>
Date:   Sun May 8 13:18:27 2022 +0530

    net/cnxk: update environment variable for debug IV

    Changed environment variable name for specifying
    debug IV for unit testing of inline IPsec offload
    with known test vectors.

    Signed-off-by: Akhil Goyal <gakhil@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit 7ea50ff94f4d386d6e778eb72af508b1e542b22b
Author: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Date:   Sun May 8 13:18:26 2022 +0530

    net/cnxk: update inline device in ethdev telemetry

    Inline PF_FUNC is updated in ethdev_tel_handle_info()
    when an inline device is attached to any DPDK process.

    Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

Author: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date:   Sun May 8 13:18:25 2022 +0530

    net/cnxk: fix roundup size with transport mode

    For transport mode, roundup needs to be based on L4 data
    and shouldn't include L3 length.

    By including the L3 length, the rlen that is calculated and
    put in the send header would exceed the final length of the
    packet in some scenarios where padding is necessary.

    Also when outer and inner checksum offload flags are enabled,
    get the l2_len and l3_len from il3ptr and il4ptr.

    Fixes: 55bfac717c72 ("net/cnxk: support Tx security offload on cn10k")
    Cc: stable@dpdk.org

    Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit 284056bf01423bc4b710962c5aca93f2d70050b3
Author: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date:   Sun May 8 13:18:24 2022 +0530

    net/cnxk: disable default inner chksum for outbound inline

    Disable default inner L3/L4 checksum generation for outbound inline
    path and enable based on SA options or RTE_MBUF flags as per
    the spec. Though the checksum generation is not impacting much
    performance, it is overwriting zero checksum for UDP packets
    which is not always good.

    Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit 0ef9aeeb07baa42549777f2684e2144aff32b89e
Author: Nithin Dabilpuram <ndabilpuram@marvell.com>

Date:   Sun May 8 13:18:23 2022 +0530

    net/cnxk: add barrier after meta batch free in scalar

    Add a barrier after meta batch free in the scalar routine when
    LMT lines are exactly full, to make sure that the next LMT line
    user in Tx starts writing the lines only when the previous
    steorl operations are complete.

    Fixes: 4382a7ccf781 ("net/cnxk: support Rx security offload on cn10k")
    Cc: stable@dpdk.org

    Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit 828ba05186e7f2229d4d7797f89cecd14129897c
Author: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date:   Sun May 8 13:18:22 2022 +0530

    net/cnxk: update LBK ethdev link info

    Update link info of LBK ethdevs (i.e., AF's VFs) as always
    up at 100G. This is because there is no PHY for the LBK
    interfaces and the driver won't get a link update
    notification for them.

    Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit d5b6a3ea74f649fc9420300df7cf76ee2a96dd31
Author: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date:   Sun May 8 13:18:21 2022 +0530

    net/cnxk: support loopback mode on AF VF's

    Support internal loopback mode on AF VF's using ROC by setting
    Tx channel same as Rx channel.

    Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit ff471010e93bfb0fb70f7dbd7c0e51ffb8521eab
Author: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date:   Sun May 8 13:18:20 2022 +0530

    common/cnxk: use aggregate level RR priority from mbox

    Use aggregate level Round Robin Priority from mbox response instead of
    fixing it to single macro. This is useful when kernel AF driver
    changes the constant.

    Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit 91a1d1f76dd223b64f114ff4c1416b47adc6c443
Author: Akhil Goyal <gakhil@marvell.com>
Date:   Sun May 8 13:18:19 2022 +0530

    common/cnxk: convert warning to debug print

    If an inbound SA SPI is not in the min-max range specified
    in devargs, it was flagged with a warning print. This is now
    converted to a debug print, because if the entry is found to
    be a duplicate in the mask, another error print is given
    anyway, so the warning print is not needed.

    Signed-off-by: Akhil Goyal <gakhil@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit 65bd29b1be802a7de0cbca9be5288399ab3fcf84
Author: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date:   Sun May 8 13:18:18 2022 +0530

    common/cnxk: fix soft expiry disable path

    Fix issues in mode where soft expiry is disabled in ROC.
    When soft expiry support is not enabled in inline device,
    memory is not allocated for the ring base array and should
    not be accessed.

    Fixes: bea5d990a93b ("net/cnxk: support outbound soft expiry notification")
    Cc: stable@dpdk.org

    Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit fa5b48a53e22675e6c1749fa452ccb4dc617a3a3
Author: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Date:   Sun May 8 13:18:17 2022 +0530

    common/cnxk: skip probing SoC environment for CN9k

    The SoC run platform file is not present on CN9k, so probing
    is done only for CN10k devices.

    Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit fcc998399ee5bd8603ba039b4cfa0df7c7ec1f3f
Author: Satha Rao <skoteshwar@marvell.com>
Date:   Sun May 8 13:18:16 2022 +0530

    common/cnxk: fix SQ flush sequence

    Fix SQ flush sequence to issue NIX RX SW Sync after SMQ flush.
    This sync ensures that all the packets that were in-flight are
    flushed out of memory.

    This patch also fixes NULL return issues reported by
    static analysis tool in Traffic Manager and sync's mailbox
    to that of the kernel version.

    Fixes: 05d727e8b14a ("common/cnxk: support NIX traffic management")
    Fixes: 0b7e667ee303 ("common/cnxk: enable packet marking")
    Cc: stable@dpdk.org

    Signed-off-by: Satha Rao <skoteshwar@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>


commit b9d96f33cebe34687bad48b4c5841a9ac5662b63
Author: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Date:   Sun May 8 13:18:15 2022 +0530

    common/cnxk: support to configure the TS PKIND in CPT

    Add new API to configure the SA table entries with new CPT PKIND
    when timestamp is enabled.

    Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
    Acked-by: Ray Kinsella <mdr@ashroe.eu>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit c4ce408631f987348dcbf814af7abe0e210c791d
Author: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Date:   Sun May 8 13:18:14 2022 +0530

    common/cnxk: add new PKIND for CPT when ts is enabled

    With timestamp enabled, time stamp will be added to second pass packets
    from CPT. NPC needs different configuration to parse second pass packets
    with and without timestamp.
    New PKIND is defined for CPT when time stamp is enabled on NIX.
    CPT should use this PKIND for second pass packets when TS is enabled for
    corresponding ethdev port.

    Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit 57b167a2ef78249cf2abd37b9fba140229da3184
Author: Radha Mohan Chintakuntla <radhac@marvell.com>
Date:   Sun May 8 13:18:13 2022 +0530

    net/cnxk: add receive channel backpressure for SDP

    The SDP interfaces also need to be configured for NIX receive channel
    backpressure for packet receive.

    Signed-off-by: Radha Mohan Chintakuntla <radhac@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>

commit 271bdac1f287b6f5d8204319d6cf9ba3d3c918cc
Author: Subrahmanyam Nilla <snilla@marvell.com>
Date:   Sun May 8 13:18:12 2022 +0530

    common/cnxk: add multi channel support for SDP send queues

    Currently only base channel number is configured as default
    channel for all the SDP send queues. Due to this, packets
    sent on different SQ's are landing on the same output queue
    on the host. Channel number in the send queue should be
    configured according to the number of queues assigned to the
    SDP PF or VF device.

    Signed-off-by: Subrahmanyam Nilla <snilla@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>
> ---
>  drivers/common/cnxk/roc_nix.h             |   2 +-
>  drivers/common/cnxk/roc_nix_debug.c       |   7 +-
>  drivers/common/cnxk/roc_nix_inl.c         |  81 ++++++++--------
>  drivers/common/cnxk/roc_nix_inl.h         |   5 +-
>  drivers/common/cnxk/roc_nix_inl_dev.c     |  42 ++++++--
>  drivers/common/cnxk/roc_nix_inl_dev_irq.c | 155 +++++++++++++++++++-----------
>  drivers/common/cnxk/roc_nix_inl_priv.h    |  12 ++-
>  drivers/common/cnxk/roc_npc.c             |  13 ++-
>  drivers/common/cnxk/version.map           |   1 -
>  drivers/net/cnxk/cnxk_ethdev.c            |  14 +--
>  10 files changed, 202 insertions(+), 130 deletions(-)
>
> diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
> index 6483131..98ff513 100644
> --- a/drivers/common/cnxk/roc_nix.h
> +++ b/drivers/common/cnxk/roc_nix.h
> @@ -309,7 +309,7 @@ struct roc_nix_rq {
>         bool spb_drop_ena;
>         /* End of Input parameters */
>         struct roc_nix *roc_nix;
> -       bool inl_dev_ref;
> +       uint16_t inl_dev_refs;
>  };
>
>  struct roc_nix_cq {
> diff --git a/drivers/common/cnxk/roc_nix_debug.c b/drivers/common/cnxk/roc_nix_debug.c
> index 1ae0451..e05e60d 100644
> --- a/drivers/common/cnxk/roc_nix_debug.c
> +++ b/drivers/common/cnxk/roc_nix_debug.c
> @@ -826,7 +826,7 @@ roc_nix_rq_dump(struct roc_nix_rq *rq)
>         nix_dump("  vwqe_wait_tmo = %ld", rq->vwqe_wait_tmo);
>         nix_dump("  vwqe_aura_handle = %ld", rq->vwqe_aura_handle);
>         nix_dump("  roc_nix = %p", rq->roc_nix);
> -       nix_dump("  inl_dev_ref = %d", rq->inl_dev_ref);
> +       nix_dump("  inl_dev_refs = %d", rq->inl_dev_refs);
>  }
>
>  void
> @@ -1243,6 +1243,7 @@ roc_nix_inl_dev_dump(struct roc_nix_inl_dev *roc_inl_dev)
>         struct nix_inl_dev *inl_dev =
>                 (struct nix_inl_dev *)&roc_inl_dev->reserved;
>         struct dev *dev = &inl_dev->dev;
> +       int i;
>
>         nix_dump("nix_inl_dev@%p", inl_dev);
>         nix_dump("  pf = %d", dev_get_pf(dev->pf_func));
> @@ -1259,7 +1260,6 @@ roc_nix_inl_dev_dump(struct roc_nix_inl_dev *roc_inl_dev)
>         nix_dump("  \tssow_msixoff = %d", inl_dev->ssow_msixoff);
>         nix_dump("  \tnix_cints = %d", inl_dev->cints);
>         nix_dump("  \tnix_qints = %d", inl_dev->qints);
> -       nix_dump("  \trq_refs = %d", inl_dev->rq_refs);
>         nix_dump("  \tinb_sa_base = 0x%p", inl_dev->inb_sa_base);
>         nix_dump("  \tinb_sa_sz = %d", inl_dev->inb_sa_sz);
>         nix_dump("  \txaq_buf_size = %u", inl_dev->xaq_buf_size);
> @@ -1269,5 +1269,6 @@ roc_nix_inl_dev_dump(struct roc_nix_inl_dev *roc_inl_dev)
>         nix_dump("  \txaq_mem = 0x%p", inl_dev->xaq.mem);
>
>         nix_dump("  \tinl_dev_rq:");
> -       roc_nix_rq_dump(&inl_dev->rq);
> +       for (i = 0; i < inl_dev->nb_rqs; i++)
> +               roc_nix_rq_dump(&inl_dev->rqs[i]);
>  }
> diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
> index 9b8b6da..39b9bec 100644
> --- a/drivers/common/cnxk/roc_nix_inl.c
> +++ b/drivers/common/cnxk/roc_nix_inl.c
> @@ -588,8 +588,10 @@ int
>  roc_nix_inl_dev_rq_get(struct roc_nix_rq *rq)
>  {
>         struct idev_cfg *idev = idev_get_cfg();
> +       int port_id = rq->roc_nix->port_id;
>         struct nix_inl_dev *inl_dev;
>         struct roc_nix_rq *inl_rq;
> +       uint16_t inl_rq_id;
>         struct dev *dev;
>         int rc;
>
> @@ -601,19 +603,24 @@ roc_nix_inl_dev_rq_get(struct roc_nix_rq *rq)
>         if (!inl_dev)
>                 return 0;
>
> +       /* Check if this RQ is already holding reference */
> +       if (rq->inl_dev_refs)
> +               return 0;
> +
> +       inl_rq_id = inl_dev->nb_rqs > 1 ? port_id : 0;
> +       dev = &inl_dev->dev;
> +       inl_rq = &inl_dev->rqs[inl_rq_id];
> +
>         /* Just take reference if already inited */
> -       if (inl_dev->rq_refs) {
> -               inl_dev->rq_refs++;
> -               rq->inl_dev_ref = true;
> +       if (inl_rq->inl_dev_refs) {
> +               inl_rq->inl_dev_refs++;
> +               rq->inl_dev_refs = 1;
>                 return 0;
>         }
> -
> -       dev = &inl_dev->dev;
> -       inl_rq = &inl_dev->rq;
>         memset(inl_rq, 0, sizeof(struct roc_nix_rq));
>
>         /* Take RQ pool attributes from the first ethdev RQ */
> -       inl_rq->qid = 0;
> +       inl_rq->qid = inl_rq_id;
>         inl_rq->aura_handle = rq->aura_handle;
>         inl_rq->first_skip = rq->first_skip;
>         inl_rq->later_skip = rq->later_skip;
> @@ -691,8 +698,8 @@ roc_nix_inl_dev_rq_get(struct roc_nix_rq *rq)
>                 return rc;
>         }
>
> -       inl_dev->rq_refs++;
> -       rq->inl_dev_ref = true;
> +       inl_rq->inl_dev_refs++;
> +       rq->inl_dev_refs = 1;
>         return 0;
>  }
>
> @@ -700,15 +707,17 @@ int
>  roc_nix_inl_dev_rq_put(struct roc_nix_rq *rq)
>  {
>         struct idev_cfg *idev = idev_get_cfg();
> +       int port_id = rq->roc_nix->port_id;
>         struct nix_inl_dev *inl_dev;
>         struct roc_nix_rq *inl_rq;
> +       uint16_t inl_rq_id;
>         struct dev *dev;
>         int rc;
>
>         if (idev == NULL)
>                 return 0;
>
> -       if (!rq->inl_dev_ref)
> +       if (!rq->inl_dev_refs)
>                 return 0;
>
>         inl_dev = idev->nix_inl_dev;
> @@ -718,13 +727,15 @@ roc_nix_inl_dev_rq_put(struct roc_nix_rq *rq)
>                 return -EFAULT;
>         }
>
> -       rq->inl_dev_ref = false;
> -       inl_dev->rq_refs--;
> -       if (inl_dev->rq_refs)
> -               return 0;
> -
>         dev = &inl_dev->dev;
> -       inl_rq = &inl_dev->rq;
> +       inl_rq_id = inl_dev->nb_rqs > 1 ? port_id : 0;
> +       inl_rq = &inl_dev->rqs[inl_rq_id];
> +
> +       rq->inl_dev_refs = 0;
> +       inl_rq->inl_dev_refs--;
> +       if (inl_rq->inl_dev_refs)
> +               return 0;
> +
>         /* There are no more references, disable RQ */
>         rc = nix_rq_ena_dis(dev, inl_rq, false);
>         if (rc)
> @@ -740,25 +751,6 @@ roc_nix_inl_dev_rq_put(struct roc_nix_rq *rq)
>         return rc;
>  }
>
> -uint64_t
> -roc_nix_inl_dev_rq_limit_get(void)
> -{
> -       struct idev_cfg *idev = idev_get_cfg();
> -       struct nix_inl_dev *inl_dev;
> -       struct roc_nix_rq *inl_rq;
> -
> -       if (!idev || !idev->nix_inl_dev)
> -               return 0;
> -
> -       inl_dev = idev->nix_inl_dev;
> -       if (!inl_dev->rq_refs)
> -               return 0;
> -
> -       inl_rq = &inl_dev->rq;
> -
> -       return roc_npa_aura_op_limit_get(inl_rq->aura_handle);
> -}
> -
>  void
>  roc_nix_inb_mode_set(struct roc_nix *roc_nix, bool use_inl_dev)
>  {
> @@ -807,15 +799,22 @@ roc_nix_inb_is_with_inl_dev(struct roc_nix *roc_nix)
>  }
>
>  struct roc_nix_rq *
> -roc_nix_inl_dev_rq(void)
> +roc_nix_inl_dev_rq(struct roc_nix *roc_nix)
>  {
>         struct idev_cfg *idev = idev_get_cfg();
> +       int port_id = roc_nix->port_id;
>         struct nix_inl_dev *inl_dev;
> +       struct roc_nix_rq *inl_rq;
> +       uint16_t inl_rq_id;
>
>         if (idev != NULL) {
>                 inl_dev = idev->nix_inl_dev;
> -               if (inl_dev != NULL && inl_dev->rq_refs)
> -                       return &inl_dev->rq;
> +               if (inl_dev != NULL) {
> +                       inl_rq_id = inl_dev->nb_rqs > 1 ? port_id : 0;
> +                       inl_rq = &inl_dev->rqs[inl_rq_id];
> +                       if (inl_rq->inl_dev_refs)
> +                               return inl_rq;
> +               }
>         }
>
>         return NULL;
> @@ -1025,6 +1024,7 @@ roc_nix_inl_ts_pkind_set(struct roc_nix *roc_nix, bool ts_ena, bool inb_inl_dev)
>         void *sa, *sa_base = NULL;
>         struct nix *nix = NULL;
>         uint16_t max_spi = 0;
> +       uint32_t rq_refs = 0;
>         uint8_t pkind = 0;
>         int i;
>
> @@ -1047,7 +1047,10 @@ roc_nix_inl_ts_pkind_set(struct roc_nix *roc_nix, bool ts_ena, bool inb_inl_dev)
>         }
>
>         if (inl_dev) {
> -               if (inl_dev->rq_refs == 0) {
> +               for (i = 0; i < inl_dev->nb_rqs; i++)
> +                       rq_refs += inl_dev->rqs[i].inl_dev_refs;
> +
> +               if (rq_refs == 0) {
>                         inl_dev->ts_ena = ts_ena;
>                         max_spi = inl_dev->ipsec_in_max_spi;
>                         sa_base = inl_dev->inb_sa_base;
> diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
> index 633f090..7835ba3 100644
> --- a/drivers/common/cnxk/roc_nix_inl.h
> +++ b/drivers/common/cnxk/roc_nix_inl.h
> @@ -168,12 +168,11 @@ void __roc_api roc_nix_inb_mode_set(struct roc_nix *roc_nix, bool use_inl_dev);
>  int __roc_api roc_nix_inl_dev_rq_get(struct roc_nix_rq *rq);
>  int __roc_api roc_nix_inl_dev_rq_put(struct roc_nix_rq *rq);
>  bool __roc_api roc_nix_inb_is_with_inl_dev(struct roc_nix *roc_nix);
> -struct roc_nix_rq *__roc_api roc_nix_inl_dev_rq(void);
> +struct roc_nix_rq *__roc_api roc_nix_inl_dev_rq(struct roc_nix *roc_nix);
>  int __roc_api roc_nix_inl_inb_tag_update(struct roc_nix *roc_nix,
>                                          uint32_t tag_const, uint8_t tt);
> -uint64_t __roc_api roc_nix_inl_dev_rq_limit_get(void);
>  int __roc_api roc_nix_reassembly_configure(uint32_t max_wait_time,
> -                                       uint16_t max_frags);
> +                                          uint16_t max_frags);
>  int __roc_api roc_nix_inl_ts_pkind_set(struct roc_nix *roc_nix, bool ts_ena,
>                                        bool inb_inl_dev);
>
> diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
> index 786a6bc..3a96498 100644
> --- a/drivers/common/cnxk/roc_nix_inl_dev.c
> +++ b/drivers/common/cnxk/roc_nix_inl_dev.c
> @@ -334,6 +334,7 @@ nix_inl_nix_setup(struct nix_inl_dev *inl_dev)
>         struct nix_lf_alloc_rsp *rsp;
>         struct nix_lf_alloc_req *req;
>         struct nix_hw_info *hw_info;
> +       struct roc_nix_rq *rqs;
>         uint64_t max_sa, i;
>         size_t inb_sa_sz;
>         int rc = -ENOSPC;
> @@ -345,7 +346,8 @@ nix_inl_nix_setup(struct nix_inl_dev *inl_dev)
>         req = mbox_alloc_msg_nix_lf_alloc(mbox);
>         if (req == NULL)
>                 return rc;
> -       req->rq_cnt = 1;
> +       /* We will have per-port RQ if it is not with channel masking */
> +       req->rq_cnt = inl_dev->nb_rqs;
>         req->sq_cnt = 1;
>         req->cq_cnt = 1;
>         /* XQESZ is W16 */
> @@ -421,6 +423,14 @@ nix_inl_nix_setup(struct nix_inl_dev *inl_dev)
>                 goto free_mem;
>         }
>
> +       /* Allocate memory for RQ's */
> +       rqs = plt_zmalloc(sizeof(struct roc_nix_rq) * PLT_MAX_ETHPORTS, 0);
> +       if (!rqs) {
> +               plt_err("Failed to allocate memory for RQ's");
> +               goto free_mem;
> +       }
> +       inl_dev->rqs = rqs;
> +
>         return 0;
>  free_mem:
>         plt_free(inl_dev->inb_sa_base);
> @@ -464,7 +474,15 @@ nix_inl_nix_release(struct nix_inl_dev *inl_dev)
>         if (req == NULL)
>                 return -ENOSPC;
>
> -       return mbox_process(mbox);
> +       rc = mbox_process(mbox);
> +       if (rc)
> +               return rc;
> +
> +       plt_free(inl_dev->rqs);
> +       plt_free(inl_dev->inb_sa_base);
> +       inl_dev->rqs = NULL;
> +       inl_dev->inb_sa_base = NULL;
> +       return 0;
>  }
>
>  static int
> @@ -584,10 +602,13 @@ roc_nix_inl_dev_xaq_realloc(uint64_t aura_handle)
>
>  no_pool:
>         /* Disable RQ if enabled */
> -       if (inl_dev->rq_refs) {
> -               rc = nix_rq_ena_dis(&inl_dev->dev, &inl_dev->rq, false);
> +       for (i = 0; i < inl_dev->nb_rqs; i++) {
> +               if (!inl_dev->rqs[i].inl_dev_refs)
> +                       continue;
> +               rc = nix_rq_ena_dis(&inl_dev->dev, &inl_dev->rqs[i], false);
>                 if (rc) {
> -                       plt_err("Failed to disable inline dev RQ, rc=%d", rc);
> +                       plt_err("Failed to disable inline dev RQ %d, rc=%d", i,
> +                               rc);
>                         return rc;
>                 }
>         }
> @@ -633,10 +654,14 @@ roc_nix_inl_dev_xaq_realloc(uint64_t aura_handle)
>
>  exit:
>         /* Renable RQ */

Patch

diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 6483131..98ff513 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -309,7 +309,7 @@  struct roc_nix_rq {
 	bool spb_drop_ena;
 	/* End of Input parameters */
 	struct roc_nix *roc_nix;
-	bool inl_dev_ref;
+	uint16_t inl_dev_refs;
 };
 
 struct roc_nix_cq {
diff --git a/drivers/common/cnxk/roc_nix_debug.c b/drivers/common/cnxk/roc_nix_debug.c
index 1ae0451..e05e60d 100644
--- a/drivers/common/cnxk/roc_nix_debug.c
+++ b/drivers/common/cnxk/roc_nix_debug.c
@@ -826,7 +826,7 @@  roc_nix_rq_dump(struct roc_nix_rq *rq)
 	nix_dump("  vwqe_wait_tmo = %ld", rq->vwqe_wait_tmo);
 	nix_dump("  vwqe_aura_handle = %ld", rq->vwqe_aura_handle);
 	nix_dump("  roc_nix = %p", rq->roc_nix);
-	nix_dump("  inl_dev_ref = %d", rq->inl_dev_ref);
+	nix_dump("  inl_dev_refs = %d", rq->inl_dev_refs);
 }
 
 void
@@ -1243,6 +1243,7 @@  roc_nix_inl_dev_dump(struct roc_nix_inl_dev *roc_inl_dev)
 	struct nix_inl_dev *inl_dev =
 		(struct nix_inl_dev *)&roc_inl_dev->reserved;
 	struct dev *dev = &inl_dev->dev;
+	int i;
 
 	nix_dump("nix_inl_dev@%p", inl_dev);
 	nix_dump("  pf = %d", dev_get_pf(dev->pf_func));
@@ -1259,7 +1260,6 @@  roc_nix_inl_dev_dump(struct roc_nix_inl_dev *roc_inl_dev)
 	nix_dump("  \tssow_msixoff = %d", inl_dev->ssow_msixoff);
 	nix_dump("  \tnix_cints = %d", inl_dev->cints);
 	nix_dump("  \tnix_qints = %d", inl_dev->qints);
-	nix_dump("  \trq_refs = %d", inl_dev->rq_refs);
 	nix_dump("  \tinb_sa_base = 0x%p", inl_dev->inb_sa_base);
 	nix_dump("  \tinb_sa_sz = %d", inl_dev->inb_sa_sz);
 	nix_dump("  \txaq_buf_size = %u", inl_dev->xaq_buf_size);
@@ -1269,5 +1269,6 @@  roc_nix_inl_dev_dump(struct roc_nix_inl_dev *roc_inl_dev)
 	nix_dump("  \txaq_mem = 0x%p", inl_dev->xaq.mem);
 
 	nix_dump("  \tinl_dev_rq:");
-	roc_nix_rq_dump(&inl_dev->rq);
+	for (i = 0; i < inl_dev->nb_rqs; i++)
+		roc_nix_rq_dump(&inl_dev->rqs[i]);
 }
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
index 9b8b6da..39b9bec 100644
--- a/drivers/common/cnxk/roc_nix_inl.c
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -588,8 +588,10 @@  int
 roc_nix_inl_dev_rq_get(struct roc_nix_rq *rq)
 {
 	struct idev_cfg *idev = idev_get_cfg();
+	int port_id = rq->roc_nix->port_id;
 	struct nix_inl_dev *inl_dev;
 	struct roc_nix_rq *inl_rq;
+	uint16_t inl_rq_id;
 	struct dev *dev;
 	int rc;
 
@@ -601,19 +603,24 @@  roc_nix_inl_dev_rq_get(struct roc_nix_rq *rq)
 	if (!inl_dev)
 		return 0;
 
+	/* Check if this RQ is already holding reference */
+	if (rq->inl_dev_refs)
+		return 0;
+
+	inl_rq_id = inl_dev->nb_rqs > 1 ? port_id : 0;
+	dev = &inl_dev->dev;
+	inl_rq = &inl_dev->rqs[inl_rq_id];
+
 	/* Just take reference if already inited */
-	if (inl_dev->rq_refs) {
-		inl_dev->rq_refs++;
-		rq->inl_dev_ref = true;
+	if (inl_rq->inl_dev_refs) {
+		inl_rq->inl_dev_refs++;
+		rq->inl_dev_refs = 1;
 		return 0;
 	}
-
-	dev = &inl_dev->dev;
-	inl_rq = &inl_dev->rq;
 	memset(inl_rq, 0, sizeof(struct roc_nix_rq));
 
 	/* Take RQ pool attributes from the first ethdev RQ */
-	inl_rq->qid = 0;
+	inl_rq->qid = inl_rq_id;
 	inl_rq->aura_handle = rq->aura_handle;
 	inl_rq->first_skip = rq->first_skip;
 	inl_rq->later_skip = rq->later_skip;
@@ -691,8 +698,8 @@  roc_nix_inl_dev_rq_get(struct roc_nix_rq *rq)
 		return rc;
 	}
 
-	inl_dev->rq_refs++;
-	rq->inl_dev_ref = true;
+	inl_rq->inl_dev_refs++;
+	rq->inl_dev_refs = 1;
 	return 0;
 }
 
@@ -700,15 +707,17 @@  int
 roc_nix_inl_dev_rq_put(struct roc_nix_rq *rq)
 {
 	struct idev_cfg *idev = idev_get_cfg();
+	int port_id = rq->roc_nix->port_id;
 	struct nix_inl_dev *inl_dev;
 	struct roc_nix_rq *inl_rq;
+	uint16_t inl_rq_id;
 	struct dev *dev;
 	int rc;
 
 	if (idev == NULL)
 		return 0;
 
-	if (!rq->inl_dev_ref)
+	if (!rq->inl_dev_refs)
 		return 0;
 
 	inl_dev = idev->nix_inl_dev;
@@ -718,13 +727,15 @@  roc_nix_inl_dev_rq_put(struct roc_nix_rq *rq)
 		return -EFAULT;
 	}
 
-	rq->inl_dev_ref = false;
-	inl_dev->rq_refs--;
-	if (inl_dev->rq_refs)
-		return 0;
-
 	dev = &inl_dev->dev;
-	inl_rq = &inl_dev->rq;
+	inl_rq_id = inl_dev->nb_rqs > 1 ? port_id : 0;
+	inl_rq = &inl_dev->rqs[inl_rq_id];
+
+	rq->inl_dev_refs = 0;
+	inl_rq->inl_dev_refs--;
+	if (inl_rq->inl_dev_refs)
+		return 0;
+
 	/* There are no more references, disable RQ */
 	rc = nix_rq_ena_dis(dev, inl_rq, false);
 	if (rc)
@@ -740,25 +751,6 @@  roc_nix_inl_dev_rq_put(struct roc_nix_rq *rq)
 	return rc;
 }
 
-uint64_t
-roc_nix_inl_dev_rq_limit_get(void)
-{
-	struct idev_cfg *idev = idev_get_cfg();
-	struct nix_inl_dev *inl_dev;
-	struct roc_nix_rq *inl_rq;
-
-	if (!idev || !idev->nix_inl_dev)
-		return 0;
-
-	inl_dev = idev->nix_inl_dev;
-	if (!inl_dev->rq_refs)
-		return 0;
-
-	inl_rq = &inl_dev->rq;
-
-	return roc_npa_aura_op_limit_get(inl_rq->aura_handle);
-}
-
 void
 roc_nix_inb_mode_set(struct roc_nix *roc_nix, bool use_inl_dev)
 {
@@ -807,15 +799,22 @@  roc_nix_inb_is_with_inl_dev(struct roc_nix *roc_nix)
 }
 
 struct roc_nix_rq *
-roc_nix_inl_dev_rq(void)
+roc_nix_inl_dev_rq(struct roc_nix *roc_nix)
 {
 	struct idev_cfg *idev = idev_get_cfg();
+	int port_id = roc_nix->port_id;
 	struct nix_inl_dev *inl_dev;
+	struct roc_nix_rq *inl_rq;
+	uint16_t inl_rq_id;
 
 	if (idev != NULL) {
 		inl_dev = idev->nix_inl_dev;
-		if (inl_dev != NULL && inl_dev->rq_refs)
-			return &inl_dev->rq;
+		if (inl_dev != NULL) {
+			inl_rq_id = inl_dev->nb_rqs > 1 ? port_id : 0;
+			inl_rq = &inl_dev->rqs[inl_rq_id];
+			if (inl_rq->inl_dev_refs)
+				return inl_rq;
+		}
 	}
 
 	return NULL;
@@ -1025,6 +1024,7 @@  roc_nix_inl_ts_pkind_set(struct roc_nix *roc_nix, bool ts_ena, bool inb_inl_dev)
 	void *sa, *sa_base = NULL;
 	struct nix *nix = NULL;
 	uint16_t max_spi = 0;
+	uint32_t rq_refs = 0;
 	uint8_t pkind = 0;
 	int i;
 
@@ -1047,7 +1047,10 @@  roc_nix_inl_ts_pkind_set(struct roc_nix *roc_nix, bool ts_ena, bool inb_inl_dev)
 	}
 
 	if (inl_dev) {
-		if (inl_dev->rq_refs == 0) {
+		for (i = 0; i < inl_dev->nb_rqs; i++)
+			rq_refs += inl_dev->rqs[i].inl_dev_refs;
+
+		if (rq_refs == 0) {
 			inl_dev->ts_ena = ts_ena;
 			max_spi = inl_dev->ipsec_in_max_spi;
 			sa_base = inl_dev->inb_sa_base;
diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
index 633f090..7835ba3 100644
--- a/drivers/common/cnxk/roc_nix_inl.h
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -168,12 +168,11 @@  void __roc_api roc_nix_inb_mode_set(struct roc_nix *roc_nix, bool use_inl_dev);
 int __roc_api roc_nix_inl_dev_rq_get(struct roc_nix_rq *rq);
 int __roc_api roc_nix_inl_dev_rq_put(struct roc_nix_rq *rq);
 bool __roc_api roc_nix_inb_is_with_inl_dev(struct roc_nix *roc_nix);
-struct roc_nix_rq *__roc_api roc_nix_inl_dev_rq(void);
+struct roc_nix_rq *__roc_api roc_nix_inl_dev_rq(struct roc_nix *roc_nix);
 int __roc_api roc_nix_inl_inb_tag_update(struct roc_nix *roc_nix,
 					 uint32_t tag_const, uint8_t tt);
-uint64_t __roc_api roc_nix_inl_dev_rq_limit_get(void);
 int __roc_api roc_nix_reassembly_configure(uint32_t max_wait_time,
-					uint16_t max_frags);
+					   uint16_t max_frags);
 int __roc_api roc_nix_inl_ts_pkind_set(struct roc_nix *roc_nix, bool ts_ena,
 				       bool inb_inl_dev);
 
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
index 786a6bc..3a96498 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -334,6 +334,7 @@  nix_inl_nix_setup(struct nix_inl_dev *inl_dev)
 	struct nix_lf_alloc_rsp *rsp;
 	struct nix_lf_alloc_req *req;
 	struct nix_hw_info *hw_info;
+	struct roc_nix_rq *rqs;
 	uint64_t max_sa, i;
 	size_t inb_sa_sz;
 	int rc = -ENOSPC;
@@ -345,7 +346,8 @@  nix_inl_nix_setup(struct nix_inl_dev *inl_dev)
 	req = mbox_alloc_msg_nix_lf_alloc(mbox);
 	if (req == NULL)
 		return rc;
-	req->rq_cnt = 1;
+	/* We will have per-port RQ if it is not with channel masking */
+	req->rq_cnt = inl_dev->nb_rqs;
 	req->sq_cnt = 1;
 	req->cq_cnt = 1;
 	/* XQESZ is W16 */
@@ -421,6 +423,14 @@  nix_inl_nix_setup(struct nix_inl_dev *inl_dev)
 		goto free_mem;
 	}
 
+	/* Allocate memory for RQ's */
+	rqs = plt_zmalloc(sizeof(struct roc_nix_rq) * PLT_MAX_ETHPORTS, 0);
+	if (!rqs) {
+		plt_err("Failed to allocate memory for RQ's");
+		goto free_mem;
+	}
+	inl_dev->rqs = rqs;
+
 	return 0;
 free_mem:
 	plt_free(inl_dev->inb_sa_base);
@@ -464,7 +474,15 @@  nix_inl_nix_release(struct nix_inl_dev *inl_dev)
 	if (req == NULL)
 		return -ENOSPC;
 
-	return mbox_process(mbox);
+	rc = mbox_process(mbox);
+	if (rc)
+		return rc;
+
+	plt_free(inl_dev->rqs);
+	plt_free(inl_dev->inb_sa_base);
+	inl_dev->rqs = NULL;
+	inl_dev->inb_sa_base = NULL;
+	return 0;
 }
 
 static int
@@ -584,10 +602,13 @@  roc_nix_inl_dev_xaq_realloc(uint64_t aura_handle)
 
 no_pool:
 	/* Disable RQ if enabled */
-	if (inl_dev->rq_refs) {
-		rc = nix_rq_ena_dis(&inl_dev->dev, &inl_dev->rq, false);
+	for (i = 0; i < inl_dev->nb_rqs; i++) {
+		if (!inl_dev->rqs[i].inl_dev_refs)
+			continue;
+		rc = nix_rq_ena_dis(&inl_dev->dev, &inl_dev->rqs[i], false);
 		if (rc) {
-			plt_err("Failed to disable inline dev RQ, rc=%d", rc);
+			plt_err("Failed to disable inline dev RQ %d, rc=%d", i,
+				rc);
 			return rc;
 		}
 	}
@@ -633,10 +654,14 @@  roc_nix_inl_dev_xaq_realloc(uint64_t aura_handle)
 
 exit:
 	/* Renable RQ */
-	if (inl_dev->rq_refs) {
-		rc = nix_rq_ena_dis(&inl_dev->dev, &inl_dev->rq, true);
+	for (i = 0; i < inl_dev->nb_rqs; i++) {
+		if (!inl_dev->rqs[i].inl_dev_refs)
+			continue;
+
+		rc = nix_rq_ena_dis(&inl_dev->dev, &inl_dev->rqs[i], true);
 		if (rc)
-			plt_err("Failed to enable inline dev RQ, rc=%d", rc);
+			plt_err("Failed to enable inline dev RQ %d, rc=%d", i,
+				rc);
 	}
 
 	return rc;
@@ -815,6 +840,7 @@  roc_nix_inl_dev_init(struct roc_nix_inl_dev *roc_inl_dev)
 	inl_dev->spb_drop_pc = NIX_AURA_DROP_PC_DFLT;
 	inl_dev->lpb_drop_pc = NIX_AURA_DROP_PC_DFLT;
 	inl_dev->set_soft_exp_poll = roc_inl_dev->set_soft_exp_poll;
+	inl_dev->nb_rqs = inl_dev->is_multi_channel ? 1 : PLT_MAX_ETHPORTS;
 
 	if (roc_inl_dev->spb_drop_pc)
 		inl_dev->spb_drop_pc = roc_inl_dev->spb_drop_pc;
diff --git a/drivers/common/cnxk/roc_nix_inl_dev_irq.c b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
index 1855f36..5c19bc3 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev_irq.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
@@ -179,50 +179,59 @@  nix_inl_sso_unregister_irqs(struct nix_inl_dev *inl_dev)
 static void
 nix_inl_nix_q_irq(void *param)
 {
-	struct nix_inl_dev *inl_dev = (struct nix_inl_dev *)param;
+	struct nix_inl_qint *qints_mem = (struct nix_inl_qint *)param;
+	struct nix_inl_dev *inl_dev = qints_mem->inl_dev;
 	uintptr_t nix_base = inl_dev->nix_base;
 	struct dev *dev = &inl_dev->dev;
+	uint16_t qint = qints_mem->qint;
 	volatile void *ctx;
 	uint64_t reg, intr;
+	uint64_t wdata;
 	uint8_t irq;
-	int rc;
+	int rc, q;
 
-	intr = plt_read64(nix_base + NIX_LF_QINTX_INT(0));
+	intr = plt_read64(nix_base + NIX_LF_QINTX_INT(qint));
 	if (intr == 0)
 		return;
 
 	plt_err("Queue_intr=0x%" PRIx64 " qintx 0 pf=%d, vf=%d", intr, dev->pf,
 		dev->vf);
 
-	/* Get and clear RQ0 interrupt */
-	reg = roc_atomic64_add_nosync(0,
-				      (int64_t *)(nix_base + NIX_LF_RQ_OP_INT));
-	if (reg & BIT_ULL(42) /* OP_ERR */) {
-		plt_err("Failed to get rq_int");
-		return;
+	/* Handle RQ interrupts */
+	for (q = 0; q < inl_dev->nb_rqs; q++) {
+		/* Get and clear RQ interrupts */
+		wdata = (uint64_t)q << 44;
+		reg = roc_atomic64_add_nosync(wdata,
+					      (int64_t *)(nix_base + NIX_LF_RQ_OP_INT));
+		if (reg & BIT_ULL(42) /* OP_ERR */) {
+			plt_err("Failed to get rq_int");
+			return;
+		}
+		irq = reg & 0xff;
+		plt_write64(wdata | irq, nix_base + NIX_LF_RQ_OP_INT);
+
+		if (irq & BIT_ULL(NIX_RQINT_DROP))
+			plt_err("RQ=0 NIX_RQINT_DROP");
+
+		if (irq & BIT_ULL(NIX_RQINT_RED))
+			plt_err("RQ=0 NIX_RQINT_RED");
 	}
-	irq = reg & 0xff;
-	plt_write64(0 | irq, nix_base + NIX_LF_RQ_OP_INT);
-
-	if (irq & BIT_ULL(NIX_RQINT_DROP))
-		plt_err("RQ=0 NIX_RQINT_DROP");
-
-	if (irq & BIT_ULL(NIX_RQINT_RED))
-		plt_err("RQ=0 NIX_RQINT_RED");
 
 	/* Clear interrupt */
-	plt_write64(intr, nix_base + NIX_LF_QINTX_INT(0));
+	plt_write64(intr, nix_base + NIX_LF_QINTX_INT(qint));
 
 	/* Dump registers to std out */
 	nix_inl_nix_reg_dump(inl_dev);
 
-	/* Dump RQ 0 */
-	rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_RQ, 0, &ctx);
-	if (rc) {
-		plt_err("Failed to get rq context");
-		return;
+	/* Dump RQs */
+	for (q = 0; q < inl_dev->nb_rqs; q++) {
+		rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_RQ, q, &ctx);
+		if (rc) {
+			plt_err("Failed to get rq %d context, rc=%d", q, rc);
+			continue;
+		}
+		nix_lf_rq_dump(ctx);
 	}
-	nix_lf_rq_dump(ctx);
 }
 
 static void
@@ -233,7 +242,7 @@  nix_inl_nix_ras_irq(void *param)
 	struct dev *dev = &inl_dev->dev;
 	volatile void *ctx;
 	uint64_t intr;
-	int rc;
+	int rc, q;
 
 	intr = plt_read64(nix_base + NIX_LF_RAS);
 	if (intr == 0)
@@ -246,13 +255,15 @@  nix_inl_nix_ras_irq(void *param)
 	/* Dump registers to std out */
 	nix_inl_nix_reg_dump(inl_dev);
 
-	/* Dump RQ 0 */
-	rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_RQ, 0, &ctx);
-	if (rc) {
-		plt_err("Failed to get rq context");
-		return;
+	/* Dump RQs */
+	for (q = 0; q < inl_dev->nb_rqs; q++) {
+		rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_RQ, q, &ctx);
+		if (rc) {
+			plt_err("Failed to get rq %d context, rc=%d", q, rc);
+			continue;
+		}
+		nix_lf_rq_dump(ctx);
 	}
-	nix_lf_rq_dump(ctx);
 }
 
 static void
@@ -263,7 +274,7 @@  nix_inl_nix_err_irq(void *param)
 	struct dev *dev = &inl_dev->dev;
 	volatile void *ctx;
 	uint64_t intr;
-	int rc;
+	int rc, q;
 
 	intr = plt_read64(nix_base + NIX_LF_ERR_INT);
 	if (intr == 0)
@@ -277,13 +288,15 @@  nix_inl_nix_err_irq(void *param)
 	/* Dump registers to std out */
 	nix_inl_nix_reg_dump(inl_dev);
 
-	/* Dump RQ 0 */
-	rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_RQ, 0, &ctx);
-	if (rc) {
-		plt_err("Failed to get rq context");
-		return;
+	/* Dump RQs */
+	for (q = 0; q < inl_dev->nb_rqs; q++) {
+		rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_RQ, q, &ctx);
+		if (rc) {
+			plt_err("Failed to get rq %d context, rc=%d", q, rc);
+			continue;
+		}
+		nix_lf_rq_dump(ctx);
 	}
-	nix_lf_rq_dump(ctx);
 }
 
 int
@@ -291,8 +304,10 @@  nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev)
 {
 	struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
 	uintptr_t nix_base = inl_dev->nix_base;
+	struct nix_inl_qint *qints_mem;
+	int rc, q, ret = 0;
 	uint16_t msixoff;
-	int rc;
+	int qints;
 
 	msixoff = inl_dev->nix_msixoff;
 	if (msixoff == MSIX_VECTOR_INVALID) {
@@ -317,21 +332,38 @@  nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev)
 	/* Enable RAS interrupts */
 	plt_write64(~0ull, nix_base + NIX_LF_RAS_ENA_W1S);
 
-	/* Setup queue irq for RQ 0 */
+	/* Setup queue irq for RQ's */
+	qints = PLT_MIN(inl_dev->nb_rqs, inl_dev->qints);
+	qints_mem = plt_zmalloc(sizeof(struct nix_inl_qint) * qints, 0);
+	if (!qints_mem) {
+		plt_err("Failed to allocate memory for %u qints", qints);
+		return -ENOMEM;
+	}
 
-	/* Clear QINT CNT, interrupt */
-	plt_write64(0, nix_base + NIX_LF_QINTX_CNT(0));
-	plt_write64(~0ull, nix_base + NIX_LF_QINTX_ENA_W1C(0));
+	inl_dev->configured_qints = qints;
+	inl_dev->qints_mem = qints_mem;
 
-	/* Register queue irq vector */
-	rc |= dev_irq_register(handle, nix_inl_nix_q_irq, inl_dev,
-			       msixoff + NIX_LF_INT_VEC_QINT_START);
+	for (q = 0; q < qints; q++) {
+		/* Clear QINT CNT, interrupt */
+		plt_write64(0, nix_base + NIX_LF_QINTX_CNT(q));
+		plt_write64(~0ull, nix_base + NIX_LF_QINTX_ENA_W1C(q));
 
-	plt_write64(0, nix_base + NIX_LF_QINTX_CNT(0));
-	plt_write64(0, nix_base + NIX_LF_QINTX_INT(0));
-	/* Enable QINT interrupt */
-	plt_write64(~0ull, nix_base + NIX_LF_QINTX_ENA_W1S(0));
+		/* Register queue irq vector */
+		ret = dev_irq_register(handle, nix_inl_nix_q_irq, &qints_mem[q],
+				       msixoff + NIX_LF_INT_VEC_QINT_START + q);
+		if (ret)
+			break;
 
+		plt_write64(0, nix_base + NIX_LF_QINTX_CNT(q));
+		plt_write64(0, nix_base + NIX_LF_QINTX_INT(q));
+		/* Enable QINT interrupt */
+		plt_write64(~0ull, nix_base + NIX_LF_QINTX_ENA_W1S(q));
+
+		qints_mem[q].inl_dev = inl_dev;
+		qints_mem[q].qint = q;
+	}
+
+	rc |= ret;
 	return rc;
 }
 
@@ -339,8 +371,10 @@  void
 nix_inl_nix_unregister_irqs(struct nix_inl_dev *inl_dev)
 {
 	struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
+	struct nix_inl_qint *qints_mem = inl_dev->qints_mem;
 	uintptr_t nix_base = inl_dev->nix_base;
 	uint16_t msixoff;
+	int q;
 
 	msixoff = inl_dev->nix_msixoff;
 	/* Disable err interrupts */
@@ -353,14 +387,19 @@  nix_inl_nix_unregister_irqs(struct nix_inl_dev *inl_dev)
 	dev_irq_unregister(handle, nix_inl_nix_ras_irq, inl_dev,
 			   msixoff + NIX_LF_INT_VEC_POISON);
 
-	/* Clear QINT CNT */
-	plt_write64(0, nix_base + NIX_LF_QINTX_CNT(0));
-	plt_write64(0, nix_base + NIX_LF_QINTX_INT(0));
+	for (q = 0; q < inl_dev->configured_qints; q++) {
+		/* Clear QINT CNT */
+		plt_write64(0, nix_base + NIX_LF_QINTX_CNT(q));
+		plt_write64(0, nix_base + NIX_LF_QINTX_INT(q));
 
-	/* Disable QINT interrupt */
-	plt_write64(~0ull, nix_base + NIX_LF_QINTX_ENA_W1C(0));
+		/* Disable QINT interrupt */
+		plt_write64(~0ull, nix_base + NIX_LF_QINTX_ENA_W1C(q));
 
-	/* Unregister queue irq vector */
-	dev_irq_unregister(handle, nix_inl_nix_q_irq, inl_dev,
-			   msixoff + NIX_LF_INT_VEC_QINT_START);
+		/* Unregister queue irq vector */
+		dev_irq_unregister(handle, nix_inl_nix_q_irq, &qints_mem[q],
+				   msixoff + NIX_LF_INT_VEC_QINT_START + q);
+	}
+
+	plt_free(inl_dev->qints_mem);
+	inl_dev->qints_mem = NULL;
 }
diff --git a/drivers/common/cnxk/roc_nix_inl_priv.h b/drivers/common/cnxk/roc_nix_inl_priv.h
index 1ab8470..d61c7b2 100644
--- a/drivers/common/cnxk/roc_nix_inl_priv.h
+++ b/drivers/common/cnxk/roc_nix_inl_priv.h
@@ -6,6 +6,12 @@ 
 #include <pthread.h>
 #include <sys/types.h>
 
+struct nix_inl_dev;
+struct nix_inl_qint {
+	struct nix_inl_dev *inl_dev;
+	uint16_t qint;
+};
+
 struct nix_inl_dev {
 	/* Base device object */
 	struct dev dev;
@@ -42,8 +48,10 @@  struct nix_inl_dev {
 	uint16_t vwqe_interval;
 	uint16_t cints;
 	uint16_t qints;
-	struct roc_nix_rq rq;
-	uint16_t rq_refs;
+	uint16_t configured_qints;
+	struct roc_nix_rq *rqs;
+	struct nix_inl_qint *qints_mem;
+	uint16_t nb_rqs;
 	bool is_nix1;
 	uint8_t spb_drop_pc;
 	uint8_t lpb_drop_pc;
diff --git a/drivers/common/cnxk/roc_npc.c b/drivers/common/cnxk/roc_npc.c
index c8ada96..da5b962 100644
--- a/drivers/common/cnxk/roc_npc.c
+++ b/drivers/common/cnxk/roc_npc.c
@@ -350,6 +350,7 @@  npc_parse_actions(struct roc_npc *roc_npc, const struct roc_npc_attr *attr,
 	uint8_t has_msns_act = 0;
 	int sel_act, req_act = 0;
 	uint16_t pf_func, vf_id;
+	struct roc_nix *roc_nix;
 	int errcode = 0;
 	int mark = 0;
 	int rq = 0;
@@ -436,11 +437,19 @@  npc_parse_actions(struct roc_npc *roc_npc, const struct roc_npc_attr *attr,
 			 */
 			req_act |= ROC_NPC_ACTION_TYPE_SEC;
 			rq = 0;
+			roc_nix = roc_npc->roc_nix;
 
 			/* Special processing when with inline device */
-			if (roc_nix_inb_is_with_inl_dev(roc_npc->roc_nix) &&
+			if (roc_nix_inb_is_with_inl_dev(roc_nix) &&
 			    roc_nix_inl_dev_is_probed()) {
-				rq = 0;
+				struct roc_nix_rq *inl_rq;
+
+				inl_rq = roc_nix_inl_dev_rq(roc_nix);
+				if (!inl_rq) {
+					errcode = NPC_ERR_INTERNAL;
+					goto err_exit;
+				}
+				rq = inl_rq->qid;
 				pf_func = nix_inl_dev_pffunc_get();
 			}
 			rc = npc_parse_msns_action(roc_npc, actions, flow,
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 53586da..a77f3f6 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -138,7 +138,6 @@  INTERNAL {
 	roc_nix_inl_dev_rq;
 	roc_nix_inl_dev_rq_get;
 	roc_nix_inl_dev_rq_put;
-	roc_nix_inl_dev_rq_limit_get;
 	roc_nix_inl_dev_unlock;
 	roc_nix_inl_dev_xaq_realloc;
 	roc_nix_inl_inb_is_enabled;
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 3912c24..09e5736 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -546,19 +546,6 @@  cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 		eth_dev->data->rx_queues[qid] = NULL;
 	}
 
-	/* Clam up cq limit to size of packet pool aura for LBK
-	 * to avoid meta packet drop as LBK does not currently support
-	 * backpressure.
-	 */
-	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY && roc_nix_is_lbk(nix)) {
-		uint64_t pkt_pool_limit = roc_nix_inl_dev_rq_limit_get();
-
-		/* Use current RQ's aura limit if inl rq is not available */
-		if (!pkt_pool_limit)
-			pkt_pool_limit = roc_npa_aura_op_limit_get(mp->pool_id);
-		nb_desc = RTE_MAX(nb_desc, pkt_pool_limit);
-	}
-
 	/* Its a no-op when inline device is not used */
 	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY ||
 	    dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
@@ -1675,6 +1662,7 @@  cnxk_eth_dev_init(struct rte_eth_dev *eth_dev)
 	/* Initialize base roc nix */
 	nix->pci_dev = pci_dev;
 	nix->hw_vlan_ins = true;
+	nix->port_id = eth_dev->data->port_id;
 	rc = roc_nix_dev_init(nix);
 	if (rc) {
 		plt_err("Failed to initialize roc nix rc=%d", rc);
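Taken together, the changes replace the inline device's single `struct roc_nix_rq rq` with an `rqs` array indexed per ethdev port (hence `nix->port_id` being populated at init), while falling back to RQ 0 when channel masking is in use, as the commit message states. A minimal sketch of that lookup logic — the types and function names below are simplified illustrations, not the actual cnxk internals:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define MAX_PORTS 64

/* Simplified model of an inline-device RQ; the real struct roc_nix_rq
 * carries the Aura/Pool attributes taken from the port's first RQ.
 */
struct rq {
	uint16_t qid;
	int in_use;
};

/* Simplified inline device holding one RQ slot per ethdev port */
struct inl_dev {
	struct rq rqs[MAX_PORTS];
	uint16_t nb_rqs;
	int channel_mask_en; /* channel masking => shared RQ 0 for all ports */
};

/* Return the inline-device RQ for a port, mirroring the idea behind
 * roc_nix_inl_dev_rq(): per-port RQ normally, RQ 0 when channel masking
 * is enabled, NULL when no RQ is configured (the npc code above turns
 * that into NPC_ERR_INTERNAL).
 */
static struct rq *inl_dev_rq_get(struct inl_dev *dev, uint16_t port_id)
{
	uint16_t idx = dev->channel_mask_en ? 0 : port_id;

	if (idx >= dev->nb_rqs || !dev->rqs[idx].in_use)
		return NULL;
	return &dev->rqs[idx];
}
```

This also explains the `roc_npc.c` hunk: instead of hard-coding `rq = 0` for the security action, the flow code now asks the inline device for the port's RQ and programs `inl_rq->qid`, which is only guaranteed to be 0 in the channel-masking fallback case.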