From patchwork Wed Sep 9 13:56:56 2020
From: Hyong Youb Kim <hyonkim@cisco.com>
To: Ferruh Yigit
Cc: dev@dpdk.org, Hyong Youb Kim <hyonkim@cisco.com>, John Daley
Date: Wed, 9 Sep 2020 06:56:56 -0700
Message-Id: <20200909135656.18892-6-hyonkim@cisco.com>
In-Reply-To: <20200909135656.18892-1-hyonkim@cisco.com>
References: <20200909135656.18892-1-hyonkim@cisco.com>
X-Mailer: git-send-email 2.26.2
X-Patchwork-Submitter: "Hyong Youb Kim (hyonkim)" <hyonkim@cisco.com>
X-Patchwork-Id: 77054
X-Patchwork-Delegate: ferruh.yigit@amd.com
List-Id: DPDK patches and discussions
Subject: [dpdk-dev] [PATCH 5/5] net/enic: enable flow API for VF representor

Use Flow Manager (flowman) to support the flow API for representors.
The representor's flow handlers simply invoke the PF handlers and pass
the representor's flowman structure. The PF flowman handlers are aware
of representors and perform the appropriate devcmds to create flows on
the NIC.

Also use flowman to create internal flows for the implicit
VF-representor path. With that, representor Tx/Rx is now functional.

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley
---
 doc/guides/rel_notes/release_20_11.rst |   4 +
 drivers/net/enic/enic_vf_representor.c | 160 +++++++++++++++++++++++++
 2 files changed, 164 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index df227a177..180ab8fa0 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -134,3 +134,7 @@ Tested Platforms
    This section is a comment. Do not overwrite or remove it.
    Also, make sure to start the actual text at the margin.
    =======================================================
+
+* **Updated Cisco enic driver.**
+
+  * Added support for VF representors with single-queue Tx/Rx and flow API

diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
index cb41bb140..5d34e1b46 100644
--- a/drivers/net/enic/enic_vf_representor.c
+++ b/drivers/net/enic/enic_vf_representor.c
@@ -124,6 +124,33 @@ static int enic_vf_dev_configure(struct rte_eth_dev *eth_dev __rte_unused)
 	return 0;
 }
 
+static int
+setup_rep_vf_fwd(struct enic_vf_representor *vf)
+{
+	int ret;
+
+	ENICPMD_FUNC_TRACE();
+	/* Representor -> VF rule
+	 * Egress packets from this representor are on the representor's WQ.
+	 * So, loop back that WQ to VF.
+	 */
+	ret = enic_fm_add_rep2vf_flow(vf);
+	if (ret) {
+		ENICPMD_LOG(ERR, "Cannot create representor->VF flow");
+		return ret;
+	}
+	/* VF -> representor rule
+	 * Packets from VF loop back to the representor, unless they match
+	 * user-added flows.
+	 */
+	ret = enic_fm_add_vf2rep_flow(vf);
+	if (ret) {
+		ENICPMD_LOG(ERR, "Cannot create VF->representor flow");
+		return ret;
+	}
+	return 0;
+}
+
 static int enic_vf_dev_start(struct rte_eth_dev *eth_dev)
 {
 	struct enic_vf_representor *vf;
@@ -138,6 +165,16 @@ static int enic_vf_dev_start(struct rte_eth_dev *eth_dev)
 	vf = eth_dev->data->dev_private;
 	pf = vf->pf;
 
+	/* Get representor flowman for flow API and representor path */
+	ret = enic_fm_init(&vf->enic);
+	if (ret)
+		return ret;
+	/* Set up implicit flow rules to forward between representor and VF */
+	ret = setup_rep_vf_fwd(vf);
+	if (ret) {
+		ENICPMD_LOG(ERR, "Cannot set up representor-VF flows");
+		return ret;
+	}
 	/* Remove all packet filters so no ingress packets go to VF.
 	 * When PF enables switchdev, it will ensure packet filters
 	 * are removed. So, this is not technically needed.
@@ -232,6 +269,8 @@ static void enic_vf_dev_stop(struct rte_eth_dev *eth_dev)
 	vnic_cq_clean(&pf->cq[enic_cq_rq(vf->pf, vf->pf_rq_sop_idx)]);
 	eth_dev->data->tx_queue_state[0] = RTE_ETH_QUEUE_STATE_STOPPED;
 	eth_dev->data->rx_queue_state[0] = RTE_ETH_QUEUE_STATE_STOPPED;
+	/* Clean up representor flowman */
+	enic_fm_destroy(&vf->enic);
 }
 
 /*
@@ -245,6 +284,126 @@ static void enic_vf_dev_close(struct rte_eth_dev *eth_dev __rte_unused)
 	return;
 }
 
+static int
+adjust_flow_attr(const struct rte_flow_attr *attrs,
+		 struct rte_flow_attr *vf_attrs,
+		 struct rte_flow_error *error)
+{
+	if (!attrs) {
+		return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR,
+				NULL, "no attribute specified");
+	}
+	/*
+	 * Swap ingress and egress as the firmware view of direction
+	 * is the opposite of the representor.
+	 */
+	*vf_attrs = *attrs;
+	if (attrs->ingress && !attrs->egress) {
+		vf_attrs->ingress = 0;
+		vf_attrs->egress = 1;
+		return 0;
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+			RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
+			"representor only supports ingress");
+}
+
+static int
+enic_vf_flow_validate(struct rte_eth_dev *dev,
+		      const struct rte_flow_attr *attrs,
+		      const struct rte_flow_item pattern[],
+		      const struct rte_flow_action actions[],
+		      struct rte_flow_error *error)
+{
+	struct rte_flow_attr vf_attrs;
+	int ret;
+
+	ret = adjust_flow_attr(attrs, &vf_attrs, error);
+	if (ret)
+		return ret;
+	attrs = &vf_attrs;
+	return enic_fm_flow_ops.validate(dev, attrs, pattern, actions, error);
+}
+
+static struct rte_flow *
+enic_vf_flow_create(struct rte_eth_dev *dev,
+		    const struct rte_flow_attr *attrs,
+		    const struct rte_flow_item pattern[],
+		    const struct rte_flow_action actions[],
+		    struct rte_flow_error *error)
+{
+	struct rte_flow_attr vf_attrs;
+
+	if (adjust_flow_attr(attrs, &vf_attrs, error))
+		return NULL;
+	attrs = &vf_attrs;
+	return enic_fm_flow_ops.create(dev, attrs, pattern, actions, error);
+}
+
+static int
+enic_vf_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
+		     struct rte_flow_error *error)
+{
+	return enic_fm_flow_ops.destroy(dev, flow, error);
+}
+
+static int
+enic_vf_flow_query(struct rte_eth_dev *dev,
+		   struct rte_flow *flow,
+		   const struct rte_flow_action *actions,
+		   void *data,
+		   struct rte_flow_error *error)
+{
+	return enic_fm_flow_ops.query(dev, flow, actions, data, error);
+}
+
+static int
+enic_vf_flow_flush(struct rte_eth_dev *dev,
+		   struct rte_flow_error *error)
+{
+	return enic_fm_flow_ops.flush(dev, error);
+}
+
+static const struct rte_flow_ops enic_vf_flow_ops = {
+	.validate = enic_vf_flow_validate,
+	.create = enic_vf_flow_create,
+	.destroy = enic_vf_flow_destroy,
+	.flush = enic_vf_flow_flush,
+	.query = enic_vf_flow_query,
+};
+
+static int
+enic_vf_filter_ctrl(struct rte_eth_dev *eth_dev,
+		    enum rte_filter_type filter_type,
+		    enum rte_filter_op filter_op,
+		    void *arg)
+{
+	struct enic_vf_representor *vf;
+	int ret = 0;
+
+	ENICPMD_FUNC_TRACE();
+	vf = eth_dev->data->dev_private;
+	switch (filter_type) {
+	case RTE_ETH_FILTER_GENERIC:
+		if (filter_op != RTE_ETH_FILTER_GET)
+			return -EINVAL;
+		if (vf->enic.flow_filter_mode == FILTER_FLOWMAN) {
+			*(const void **)arg = &enic_vf_flow_ops;
+		} else {
+			ENICPMD_LOG(WARNING, "VF representors require flowman support for rte_flow API");
+			ret = -EINVAL;
+		}
+		break;
+	default:
+		ENICPMD_LOG(WARNING, "Filter type (%d) not supported",
+			    filter_type);
+		ret = -EINVAL;
+		break;
+	}
+	return ret;
+}
+
 static int enic_vf_link_update(struct rte_eth_dev *eth_dev,
 	int wait_to_complete __rte_unused)
 {
@@ -404,6 +563,7 @@ static const struct eth_dev_ops enic_vf_representor_dev_ops = {
 	.dev_start = enic_vf_dev_start,
 	.dev_stop = enic_vf_dev_stop,
 	.dev_close = enic_vf_dev_close,
+	.filter_ctrl = enic_vf_filter_ctrl,
 	.link_update = enic_vf_link_update,
 	.promiscuous_enable = enic_vf_promiscuous_enable,
 	.promiscuous_disable = enic_vf_promiscuous_disable,
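
For context, a minimal usage sketch (not part of the patch) showing how an
application would exercise these ops through the public rte_flow API once the
representor port is started. The helper name create_rep_drop_flow, the port
id, the drop action, and the ETH-only pattern are assumptions for the
example; the one firm constraint taken from the patch is that the representor
accepts only ingress attributes, which adjust_flow_attr() then swaps to
egress for the firmware's view of direction.

/*
 * Illustrative sketch only -- not part of the patch. Creates a drop
 * rule on a representor port via the generic flow API. Pattern,
 * action, and helper name are assumptions for the example.
 */
#include <stdint.h>
#include <rte_flow.h>

static struct rte_flow *
create_rep_drop_flow(uint16_t rep_port_id, struct rte_flow_error *error)
{
	/* Representors accept only ingress; egress is rejected with ENOTSUP */
	const struct rte_flow_attr attr = { .ingress = 1 };
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* Both calls reach enic_vf_flow_ops, which delegates to the PF
	 * flowman handlers (enic_fm_flow_ops) after adjust_flow_attr().
	 */
	if (rte_flow_validate(rep_port_id, &attr, pattern, actions, error) != 0)
		return NULL;
	return rte_flow_create(rep_port_id, &attr, pattern, actions, error);
}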