From patchwork Fri Feb 25 06:54:45 2022
X-Patchwork-Submitter: Vamsi Krishna Attunuru
X-Patchwork-Id: 108347
X-Patchwork-Delegate: jerinj@marvell.com
From: Vamsi Attunuru
Subject: [PATCH 1/1] net/cnxk: make inline inbound device usage default
Date: Fri, 25 Feb 2022 12:24:45 +0530
Message-ID: <20220225065445.1551062-1-vattunuru@marvell.com>
X-Mailer: git-send-email 2.25.1

Currently, inline inbound device usage is not the default for eventdev.
This patch renames the force_inb_inl_dev devarg to no_inl_dev and enables
the inline inbound device by default.
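For example, to retain the previous behaviour (mode 2, i.e. without the
inline inbound device), the renamed devarg can be passed per port, as in
the documentation update below:

  -a 0002:02:00.0,no_inl_dev=1 -a 0002:03:00.0,no_inl_dev=1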
Signed-off-by: Vamsi Attunuru
Acked-by: Jerin Jacob
---
 doc/guides/nics/cnxk.rst                 | 10 +++++-----
 drivers/event/cnxk/cnxk_eventdev_adptr.c |  4 ++--
 drivers/net/cnxk/cn9k_ethdev.c           |  1 +
 drivers/net/cnxk/cnxk_ethdev.h           |  4 ++--
 drivers/net/cnxk/cnxk_ethdev_devargs.c   | 11 +++++------
 5 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index be51ca2146..31c801fa04 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -275,7 +275,7 @@ Runtime Config Options
    With the above configuration, two CPT LF's are setup and distributed among
    all the Tx queues for outbound processing.
 
-- ``Force using inline ipsec device for inbound`` (default ``0``)
+- ``Disable using inline ipsec device for inbound`` (default ``0``)
 
    In CN10K, in event mode, driver can work in two modes,
 
@@ -285,13 +285,13 @@ Runtime Config Options
    2. Both Inbound encrypted traffic and plain traffic post decryption are
       received by ethdev.
 
-   By default event mode works without using inline device i.e mode ``2``.
-   This behaviour can be changed to pick mode ``1`` by using
-   ``force_inb_inl_dev`` ``devargs`` parameter.
+   By default event mode works using inline device i.e mode ``1``.
+   This behaviour can be changed to pick mode ``2`` by using
+   ``no_inl_dev`` ``devargs`` parameter.
 
    For example::
 
-      -a 0002:02:00.0,force_inb_inl_dev=1 -a 0002:03:00.0,force_inb_inl_dev=1
+      -a 0002:02:00.0,no_inl_dev=1 -a 0002:03:00.0,no_inl_dev=1
 
    With the above configuration, inbound encrypted traffic from both the ports
    is received by ipsec inline device.
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
index 5ebd3340e7..42ac14064d 100644
--- a/drivers/event/cnxk/cnxk_eventdev_adptr.c
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -263,9 +263,9 @@ cnxk_sso_rx_adapter_queue_add(
 
 	/* Switch to use PF/VF's NIX LF instead of inline device for inbound
 	 * when all the RQ's are switched to event dev mode. We do this only
-	 * when using inline device is not forced by dev args.
+	 * when dev arg no_inl_dev=1 is selected.
 	 */
-	if (!cnxk_eth_dev->inb.force_inl_dev &&
+	if (cnxk_eth_dev->inb.no_inl_dev &&
 	    cnxk_eth_dev->nb_rxq_sso == cnxk_eth_dev->nb_rxq)
 		cnxk_nix_inb_mode_set(cnxk_eth_dev, false);
 
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 6b049b2897..ae42d76d6d 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -594,6 +594,7 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 	}
 
 	dev->hwcap = 0;
+	dev->inb.no_inl_dev = 1;
 
 	/* Register up msg callbacks for PTP information */
 	roc_nix_ptp_info_cb_register(&dev->nix, cn9k_nix_ptp_info_update_cb);
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 445b7abf69..9a9d3baf25 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -272,8 +272,8 @@ struct cnxk_eth_dev_sec_inb {
 	/* Using inbound with inline device */
 	bool inl_dev;
 
-	/* Device argument to force inline device for inb */
-	bool force_inl_dev;
+	/* Device argument to disable inline device usage for inb */
+	bool no_inl_dev;
 
 	/* Active sessions */
 	uint16_t nb_sess;
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index 8a71644899..9b2beb6743 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -241,7 +241,7 @@ parse_sdp_channel_mask(const char *key, const char *value, void *extra_args)
 #define CNXK_IPSEC_IN_MAX_SPI	"ipsec_in_max_spi"
 #define CNXK_IPSEC_OUT_MAX_SA	"ipsec_out_max_sa"
 #define CNXK_OUTB_NB_DESC	"outb_nb_desc"
-#define CNXK_FORCE_INB_INL_DEV	"force_inb_inl_dev"
+#define CNXK_NO_INL_DEV		"no_inl_dev"
 #define CNXK_OUTB_NB_CRYPTO_QS	"outb_nb_crypto_qs"
 #define CNXK_SDP_CHANNEL_MASK	"sdp_channel_mask"
 #define CNXK_FLOW_PRE_L2_INFO	"flow_pre_l2_info"
@@ -257,7 +257,6 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
 	uint16_t flow_prealloc_size = 1;
 	uint16_t switch_header_type = 0;
 	uint16_t flow_max_priority = 3;
-	uint16_t force_inb_inl_dev = 0;
 	uint16_t outb_nb_crypto_qs = 1;
 	uint32_t ipsec_in_min_spi = 0;
 	uint16_t outb_nb_desc = 8200;
@@ -266,6 +265,7 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
 	uint16_t scalar_enable = 0;
 	uint8_t lock_rx_ctx = 0;
 	struct rte_kvargs *kvlist;
+	uint16_t no_inl_dev = 0;
 
 	memset(&sdp_chan, 0, sizeof(sdp_chan));
 	memset(&pre_l2_info, 0, sizeof(struct flow_pre_l2_size_info));
@@ -302,8 +302,7 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
 			   &outb_nb_desc);
 	rte_kvargs_process(kvlist, CNXK_OUTB_NB_CRYPTO_QS,
 			   &parse_outb_nb_crypto_qs, &outb_nb_crypto_qs);
-	rte_kvargs_process(kvlist, CNXK_FORCE_INB_INL_DEV, &parse_flag,
-			   &force_inb_inl_dev);
+	rte_kvargs_process(kvlist, CNXK_NO_INL_DEV, &parse_flag, &no_inl_dev);
 	rte_kvargs_process(kvlist, CNXK_SDP_CHANNEL_MASK,
 			   &parse_sdp_channel_mask, &sdp_chan);
 	rte_kvargs_process(kvlist, CNXK_FLOW_PRE_L2_INFO,
@@ -312,7 +311,7 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
 
 null_devargs:
 	dev->scalar_ena = !!scalar_enable;
-	dev->inb.force_inl_dev = !!force_inb_inl_dev;
+	dev->inb.no_inl_dev = !!no_inl_dev;
 	dev->inb.max_spi = ipsec_in_max_spi;
 	dev->outb.max_sa = ipsec_out_max_sa;
 	dev->outb.nb_desc = outb_nb_desc;
@@ -350,5 +349,5 @@ RTE_PMD_REGISTER_PARAM_STRING(net_cnxk,
 			      CNXK_OUTB_NB_DESC "=<1-65535>"
 			      CNXK_FLOW_PRE_L2_INFO "=<0-255>/<1-255>/<0-1>"
 			      CNXK_OUTB_NB_CRYPTO_QS "=<1-64>"
-			      CNXK_FORCE_INB_INL_DEV "=1"
+			      CNXK_NO_INL_DEV "=0"
 			      CNXK_SDP_CHANNEL_MASK "=<1-4095>/<1-4095>");