Message ID | 20230209084541.2712723-1-mingxia.liu@intel.com (mailing list archive) |
---|---|
Headers |
Return-Path: <dev-bounces@dpdk.org>
From: Mingxia Liu <mingxia.liu@intel.com>
To: dev@dpdk.org, qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com
Cc: Mingxia Liu <mingxia.liu@intel.com>
Subject: [PATCH v5 00/21] add support for cpfl PMD in DPDK
Date: Thu, 9 Feb 2023 08:45:20 +0000
Message-Id: <20230209084541.2712723-1-mingxia.liu@intel.com>
In-Reply-To: <20230118075738.904616-1-mingxia.liu@intel.com>
References: <20230118075738.904616-1-mingxia.liu@intel.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>
Series | add support for cpfl PMD in DPDK |
Message
Liu, Mingxia
Feb. 9, 2023, 8:45 a.m. UTC
This patchset introduces the cpfl (Control Plane Function Library) PMD for the Intel® IPU E2100's Configure Physical Function (Device ID: 0x1453).

The cpfl PMD inherits all the features of the idpf PMD, which follows the ongoing standard data plane function spec:
https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=idpf
Beyond that, it will also support more device-specific hardware offloading features through DPDK's control path (e.g. hairpin, rte_flow, ...), which differ from the idpf PMD; that is why a separate cpfl PMD is needed.

This patchset mainly focuses on the idpf PMD's equivalent features. To avoid duplicated code, it depends on the patchsets below, which move the common part from net/idpf into common/idpf as a shared library.

v2 changes:
 - rebase to the new baseline.
 - fix RSS LUT config issue.
v3 changes:
 - rebase to the new baseline.
v4 changes:
 - resend v3, no code changed.
v5 changes:
 - rebase to the new baseline.
 - optimize some code.
 - print a "not supported" message when the user tries to configure an unsupported RSS hash type.
 - if stats reset fails at initialization time, do not roll back; just print an ERROR message.

Mingxia Liu (21):
  net/cpfl: support device initialization
  net/cpfl: add Tx queue setup
  net/cpfl: add Rx queue setup
  net/cpfl: support device start and stop
  net/cpfl: support queue start
  net/cpfl: support queue stop
  net/cpfl: support queue release
  net/cpfl: support MTU configuration
  net/cpfl: support basic Rx data path
  net/cpfl: support basic Tx data path
  net/cpfl: support write back based on ITR expire
  net/cpfl: support RSS
  net/cpfl: support Rx offloading
  net/cpfl: support Tx offloading
  net/cpfl: add AVX512 data path for single queue model
  net/cpfl: support timestamp offload
  net/cpfl: add AVX512 data path for split queue model
  net/cpfl: add HW statistics
  net/cpfl: add RSS set/get ops
  net/cpfl: support scalar scatter Rx datapath for single queue model
  net/cpfl: add xstats ops

 MAINTAINERS                             |    9 +
 doc/guides/nics/cpfl.rst                |   88 ++
 doc/guides/nics/features/cpfl.ini       |   17 +
 doc/guides/rel_notes/release_23_03.rst  |    6 +
 drivers/net/cpfl/cpfl_ethdev.c          | 1453 +++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h          |   95 ++
 drivers/net/cpfl/cpfl_logs.h            |   32 +
 drivers/net/cpfl/cpfl_rxtx.c            |  952 +++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h            |   44 +
 drivers/net/cpfl/cpfl_rxtx_vec_common.h |  116 ++
 drivers/net/cpfl/meson.build            |   38 +
 drivers/net/meson.build                 |    1 +
 12 files changed, 2851 insertions(+)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h
 create mode 100644 drivers/net/cpfl/meson.build
Comments
On Thu, 9 Feb 2023 08:45:20 +0000 Mingxia Liu <mingxia.liu@intel.com> wrote:

> [cover letter and diffstat snipped]

Overall, the driver looks good. One recommendation would be to not use rte_memcpy() for small fixed-size structures. Regular memcpy() will be as fast or faster, and gets more checking from analyzers.

Examples:

	rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
	rte_memcpy(ring_name, "cpfl Tx ring", sizeof("cpfl Tx ring"));
	rte_memcpy(ring_name, "cpfl Rx ring", sizeof("cpfl Rx ring"));
	rte_memcpy(ring_name, "cpfl Tx compl ring", sizeof("cpfl Tx compl ring"));
	rte_memcpy(ring_name, "cpfl Rx buf ring", sizeof("cpfl Rx buf ring"));
	rte_memcpy(vport->rss_key, rss_conf->rss_key,
	rte_memcpy(vport->rss_key, rss_conf->rss_key,
	rte_memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len);
ok, thanks, I'll update in next version.

> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Friday, February 10, 2023 12:47 AM
> To: Liu, Mingxia <mingxia.liu@intel.com>
> Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> Subject: Re: [PATCH v5 00/21] add support for cpfl PMD in DPDK
>
> [quoted cover letter and diffstat snipped]
>
> Overall, the driver looks good. One recommendation would be to not use
> rte_memcpy for small fixed size structure. Regular memcpy() will be as fast
> or faster and get more checking from analyzers.
>
> [rte_memcpy examples snipped]