From patchwork Sun Jun 30 18:05:51 2019
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 55708
X-Patchwork-Delegate: jerinj@marvell.com
Subject: [dpdk-dev] [PATCH v2 39/57] net/octeontx2: add flow operations
Date: Sun, 30 Jun 2019 23:35:51 +0530
Message-ID: <20190630180609.36705-40-jerinj@marvell.com>
In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com>
References: <20190602152434.23996-1-jerinj@marvell.com>
 <20190630180609.36705-1-jerinj@marvell.com>
To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
CC: Vivek Sharma
List-Id: DPDK patches and discussions

From: Kiran Kumar K

Add the initial flow operations flow_create and flow_validate. These
are used to allocate and write a flow rule to the device, and to
validate a flow rule without programming it.
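
As context for reviewers, here is a minimal sketch of how an application
reaches these ops through the generic rte_flow API. The helper name and
the queue action below are illustrative only and not part of this patch;
it assumes the queue action is supported by the PMD's action parser:

#include <rte_flow.h>

static int
install_ipv4_to_queue(uint16_t port_id, uint16_t queue_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_action_queue queue = { .index = queue_id };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;
	struct rte_flow *flow;

	/* Dry run first; this lands in the PMD's otx2_flow_validate() */
	if (rte_flow_validate(port_id, &attr, pattern, actions, &error))
		return -1;

	/* Program the rule; this lands in otx2_flow_create() */
	flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
	return flow ? 0 : -1;
}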
Signed-off-by: Kiran Kumar K
Signed-off-by: Vivek Sharma
---
 drivers/net/octeontx2/Makefile    |   1 +
 drivers/net/octeontx2/meson.build |   1 +
 drivers/net/octeontx2/otx2_flow.c | 451 ++++++++++++++++++++++++++++++
 3 files changed, 453 insertions(+)
 create mode 100644 drivers/net/octeontx2/otx2_flow.c

diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 3eb4dba53..21559f631 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -32,6 +32,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
 	otx2_rss.c	\
 	otx2_mac.c	\
 	otx2_ptp.c	\
+	otx2_flow.c	\
 	otx2_link.c	\
 	otx2_stats.c	\
 	otx2_lookup.c	\
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index f608c4947..f0e03bffe 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -7,6 +7,7 @@ sources = files(
 		'otx2_rss.c',
 		'otx2_mac.c',
 		'otx2_ptp.c',
+		'otx2_flow.c',
 		'otx2_link.c',
 		'otx2_stats.c',
 		'otx2_lookup.c',
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
new file mode 100644
index 000000000..896aef00a
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -0,0 +1,451 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+#include "otx2_flow.h"
+
+static int
+flow_program_npc(struct otx2_parse_state *pst, struct otx2_mbox *mbox,
+		 struct otx2_npc_flow_info *flow_info)
+{
+	/* This is the non-LDATA part of the search key */
+	uint64_t key_data[2] = {0ULL, 0ULL};
+	uint64_t key_mask[2] = {0ULL, 0ULL};
+	int intf = pst->flow->nix_intf;
+	int key_len, bit = 0, index;
+	int off, idx, data_off = 0;
+	uint8_t lid, mask, data;
+	uint16_t layer_info;
+	uint64_t lt, flags;
+
+	/* Skip until the Layer A data start */
+	while (bit < NPC_PARSE_KEX_S_LA_OFFSET) {
+		if (flow_info->keyx_supp_nmask[intf] & (1 << bit))
+			data_off++;
+		bit++;
+	}
+
+	/* Each bit represents one nibble */
+	data_off *= 4;
+
+	index = 0;
+	for (lid = 0; lid < NPC_MAX_LID; lid++) {
+		/* Offset in key */
+		off = NPC_PARSE_KEX_S_LID_OFFSET(lid);
+		lt = pst->lt[lid] & 0xf;
+		flags = pst->flags[lid] & 0xff;
+
+		/* NPC_LAYER_KEX_S */
+		layer_info = ((flow_info->keyx_supp_nmask[intf] >> off) & 0x7);
+
+		if (layer_info) {
+			for (idx = 0; idx <= 2; idx++) {
+				if (layer_info & (1 << idx)) {
+					if (idx == 2)
+						data = lt;
+					else if (idx == 1)
+						data = ((flags >> 4) & 0xf);
+					else
+						data = (flags & 0xf);
+
+					if (data_off >= 64) {
+						data_off = 0;
+						index++;
+					}
+					key_data[index] |= ((uint64_t)data <<
+							    data_off);
+					mask = 0xf;
+					if (lt == 0)
+						mask = 0;
+					key_mask[index] |= ((uint64_t)mask <<
+							    data_off);
+					data_off += 4;
+				}
+			}
+		}
+	}
+
+	otx2_npc_dbg("Npc prog key data0: 0x%" PRIx64 ", data1: 0x%" PRIx64,
+		     key_data[0], key_data[1]);
+
+	/* Copy this into the mcam string */
+	key_len = (pst->npc->keyx_len[intf] + 7) / 8;
+	otx2_npc_dbg("Key_len = %d", key_len);
+	memcpy(pst->flow->mcam_data, key_data, key_len);
+	memcpy(pst->flow->mcam_mask, key_mask, key_len);
+
+	otx2_npc_dbg("Final flow data");
+	for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) {
+		otx2_npc_dbg("data[%d]: 0x%" PRIx64 ", mask[%d]: 0x%" PRIx64,
+			     idx, pst->flow->mcam_data[idx],
+			     idx, pst->flow->mcam_mask[idx]);
+	}
+
+	/*
+	 * Now we have the mcam data and mask formatted as
+	 * [Key_len/4 nibbles][0 or 1 nibble hole][data]
+	 * The hole is present if key_len is an odd number of nibbles.
+	 * The mcam data must be split into 64-bit + 48-bit segments
+	 * for each bank's W0, W1.
+	 */
+
+	return otx2_flow_mcam_alloc_and_write(pst->flow, mbox, pst, flow_info);
+}
+
+static int
+flow_parse_attr(struct rte_eth_dev *eth_dev,
+		const struct rte_flow_attr *attr,
+		struct rte_flow_error *error,
+		struct rte_flow *flow)
+{
+	struct otx2_eth_dev *dev = eth_dev->data->dev_private;
+	const char *errmsg = NULL;
+
+	if (attr == NULL)
+		errmsg = "Attribute can't be empty";
+	else if (attr->group)
+		errmsg = "Groups are not supported";
+	else if (attr->priority >= dev->npc_flow.flow_max_priority)
+		errmsg = "Priority should be within the specified range";
+	else if ((!attr->egress && !attr->ingress) ||
+		 (attr->egress && attr->ingress))
+		errmsg = "Exactly one of ingress or egress must be set";
+
+	if (errmsg != NULL) {
+		rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ATTR,
+				   attr, errmsg);
+		return -ENOTSUP;
+	}
+
+	if (attr->ingress)
+		flow->nix_intf = OTX2_INTF_RX;
+	else
+		flow->nix_intf = OTX2_INTF_TX;
+
+	flow->priority = attr->priority;
+	return 0;
+}
+
+static inline int
+flow_get_free_rss_grp(struct rte_bitmap *bmap,
+		      uint32_t size, uint32_t *pos)
+{
+	for (*pos = 0; *pos < size; ++*pos) {
+		if (!rte_bitmap_get(bmap, *pos))
+			break;
+	}
+
+	return *pos < size ? 0 : -1;
+}
+
+static int
+flow_configure_rss_action(struct otx2_eth_dev *dev,
+			  const struct rte_flow_action_rss *rss,
+			  uint8_t *alg_idx, uint32_t *rss_grp,
+			  int mcam_index)
+{
+	struct otx2_npc_flow_info *flow_info = &dev->npc_flow;
+	uint16_t reta[NIX_RSS_RETA_SIZE_MAX];
+	uint32_t flowkey_cfg, grp_aval, i;
+	uint16_t *ind_tbl = NULL;
+	uint8_t flowkey_algx;
+	int rc;
+
+	rc = flow_get_free_rss_grp(flow_info->rss_grp_entries,
+				   flow_info->rss_grps, &grp_aval);
+	/* RSS group 0 is not usable for the flow RSS action */
+	if (rc < 0 || grp_aval == 0)
+		return -ENOSPC;
+
+	*rss_grp = grp_aval;
+
+	otx2_nix_rss_set_key(dev, (uint8_t *)(uintptr_t)rss->key,
+			     rss->key_len);
+
+	/* If the queue count passed in the RSS action is less than
+	 * the HW-configured RETA size, replicate the RSS action's
+	 * queue list across the HW RETA table.
+	 */
+	if (dev->rss_info.rss_size > rss->queue_num) {
+		ind_tbl = reta;
+
+		for (i = 0; i < (dev->rss_info.rss_size / rss->queue_num); i++)
+			memcpy(reta + i * rss->queue_num, rss->queue,
+			       sizeof(uint16_t) * rss->queue_num);
+
+		/* Fill any remainder at the tail of the RETA */
+		i = dev->rss_info.rss_size % rss->queue_num;
+		if (i)
+			memcpy(&reta[dev->rss_info.rss_size - i],
+			       rss->queue, i * sizeof(uint16_t));
+	} else {
+		ind_tbl = (uint16_t *)(uintptr_t)rss->queue;
+	}
+
+	rc = otx2_nix_rss_tbl_init(dev, *rss_grp, ind_tbl);
+	if (rc) {
+		otx2_err("Failed to init rss table rc = %d", rc);
+		return rc;
+	}
+
+	flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss->types, rss->level);
+
+	rc = otx2_rss_set_hf(dev, flowkey_cfg, &flowkey_algx,
+			     *rss_grp, mcam_index);
+	if (rc) {
+		otx2_err("Failed to set rss hash function rc = %d", rc);
+		return rc;
+	}
+
+	*alg_idx = flowkey_algx;
+
+	rte_bitmap_set(flow_info->rss_grp_entries, *rss_grp);
+
+	return 0;
+}
+
+static int
+flow_program_rss_action(struct rte_eth_dev *eth_dev,
+			const struct rte_flow_action actions[],
+			struct rte_flow *flow)
+{
+	struct otx2_eth_dev *dev = eth_dev->data->dev_private;
+	const struct rte_flow_action_rss *rss;
+	uint32_t rss_grp;
+	uint8_t alg_idx;
+	int rc;
+
+	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+		if (actions->type == RTE_FLOW_ACTION_TYPE_RSS) {
+			rss = (const struct rte_flow_action_rss *)actions->conf;
+
+			rc = flow_configure_rss_action(dev,
+						       rss, &alg_idx, &rss_grp,
+						       flow->mcam_id);
+			if (rc)
+				return rc;
+
+			/* Merge the RSS hash-algorithm index and RSS
+			 * group into the MCAM entry's action word.
+			 */
+			flow->npc_action |=
+				((uint64_t)(alg_idx & NIX_RSS_ACT_ALG_MASK) <<
+				 NIX_RSS_ACT_ALG_OFFSET) |
+				((uint64_t)(rss_grp & NIX_RSS_ACT_GRP_MASK) <<
+				 NIX_RSS_ACT_GRP_OFFSET);
+		}
+	}
+	return 0;
+}
+
+static int
+flow_parse_meta_items(__rte_unused struct otx2_parse_state *pst)
+{
+	otx2_npc_dbg("Meta Item");
+	return 0;
+}
+
+/*
+ * Parse function for each layer:
+ * - Consume one or more patterns that are relevant.
+ * - Update parse_state.
+ * - Set parse_state.pattern to the last item consumed.
+ * - Set an appropriate error code/message when returning an error.
+ */
+typedef int (*flow_parse_stage_func_t)(struct otx2_parse_state *pst);
+
+static int
+flow_parse_pattern(struct rte_eth_dev *dev,
+		   const struct rte_flow_item pattern[],
+		   struct rte_flow_error *error,
+		   struct rte_flow *flow,
+		   struct otx2_parse_state *pst)
+{
+	flow_parse_stage_func_t parse_stage_funcs[] = {
+		flow_parse_meta_items,
+		otx2_flow_parse_la,
+		otx2_flow_parse_lb,
+		otx2_flow_parse_lc,
+		otx2_flow_parse_ld,
+		otx2_flow_parse_le,
+		otx2_flow_parse_lf,
+		otx2_flow_parse_lg,
+		otx2_flow_parse_lh,
+	};
+	struct otx2_eth_dev *hw = dev->data->dev_private;
+	uint8_t layer = 0;
+	int rc;
+
+	if (pattern == NULL) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ITEM_NUM, NULL,
+				   "pattern is NULL");
+		return -EINVAL;
+	}
+
+	memset(pst, 0, sizeof(*pst));
+	pst->npc = &hw->npc_flow;
+	pst->error = error;
+	pst->flow = flow;
+
+	/* Location where LDATA would begin */
+	pst->mcam_data = (uint8_t *)flow->mcam_data;
+	pst->mcam_mask = (uint8_t *)flow->mcam_mask;
+
+	while (pattern->type != RTE_FLOW_ITEM_TYPE_END &&
+	       layer < RTE_DIM(parse_stage_funcs)) {
+		otx2_npc_dbg("Pattern type = %d", pattern->type);
+
+		/* Skip place-holders */
+		pattern = otx2_flow_skip_void_and_any_items(pattern);
+
+		pst->pattern = pattern;
+		otx2_npc_dbg("Is tunnel = %d, layer = %d", pst->tunnel, layer);
+		rc = parse_stage_funcs[layer](pst);
+		if (rc != 0)
+			return -rte_errno;
+
+		layer++;
+
+		/*
+		 * The parse stage function sets pst->pattern to
+		 * one past the last item it consumed.
+		 */
+		pattern = pst->pattern;
+
+		if (pst->terminate)
+			break;
+	}
+
+	/* Skip trailing place-holders */
+	pattern = otx2_flow_skip_void_and_any_items(pattern);
+
+	/* Are there more items than we can handle? */
+	if (pattern->type != RTE_FLOW_ITEM_TYPE_END) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ITEM, pattern,
+				   "unsupported item in the sequence");
+		return -ENOTSUP;
+	}
+
+	return 0;
+}
+
+static int
+flow_parse_rule(struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error,
+		struct rte_flow *flow,
+		struct otx2_parse_state *pst)
+{
+	int err;
+
+	/* Check attributes */
+	err = flow_parse_attr(dev, attr, error, flow);
+	if (err)
+		return err;
+
+	/* Check actions */
+	err = otx2_flow_parse_actions(dev, attr, actions, error, flow);
+	if (err)
+		return err;
+
+	/* Check pattern */
+	err = flow_parse_pattern(dev, pattern, error, flow, pst);
+	if (err)
+		return err;
+
+	/* TODO: check for overlapping rules? */
+	return 0;
+}
+
+static int
+otx2_flow_validate(struct rte_eth_dev *dev,
+		   const struct rte_flow_attr *attr,
+		   const struct rte_flow_item pattern[],
+		   const struct rte_flow_action actions[],
+		   struct rte_flow_error *error)
+{
+	struct otx2_parse_state parse_state;
+	struct rte_flow flow;
+
+	memset(&flow, 0, sizeof(flow));
+	return flow_parse_rule(dev, attr, pattern, actions, error, &flow,
+			       &parse_state);
+}
+
+static struct rte_flow *
+otx2_flow_create(struct rte_eth_dev *dev,
+		 const struct rte_flow_attr *attr,
+		 const struct rte_flow_item pattern[],
+		 const struct rte_flow_action actions[],
+		 struct rte_flow_error *error)
+{
+	struct otx2_eth_dev *hw = dev->data->dev_private;
+	struct otx2_parse_state parse_state;
+	struct otx2_mbox *mbox = hw->mbox;
+	struct rte_flow *flow, *flow_iter;
+	struct otx2_flow_list *list;
+	int rc;
+
+	/* rte_zmalloc() returns zeroed memory; no memset() is needed */
+	flow = rte_zmalloc("otx2_rte_flow", sizeof(*flow), 0);
+	if (flow == NULL) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "Memory allocation failed");
+		return NULL;
+	}
+
+	rc = flow_parse_rule(dev, attr, pattern, actions, error, flow,
+			     &parse_state);
+	if (rc != 0)
+		goto err_exit;
+
+	rc = flow_program_npc(&parse_state, mbox, &hw->npc_flow);
+	if (rc != 0) {
+		rte_flow_error_set(error, EIO,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "Failed to insert filter");
+		goto err_exit;
+	}
+
+	rc = flow_program_rss_action(dev, actions, flow);
+	if (rc != 0) {
+		rte_flow_error_set(error, EIO,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "Failed to program rss action");
+		goto err_exit;
+	}
+
+	list = &hw->npc_flow.flow_list[flow->priority];
+	/* Insert in ascending order of mcam entries */
+	TAILQ_FOREACH(flow_iter, list, next) {
+		if (flow_iter->mcam_id > flow->mcam_id) {
+			TAILQ_INSERT_BEFORE(flow_iter, flow, next);
+			return flow;
+		}
+	}
+
+	TAILQ_INSERT_TAIL(list, flow, next);
+	return flow;
+
+err_exit:
+	rte_free(flow);
+	return NULL;
+}
+
+const struct rte_flow_ops otx2_flow_ops = {
+	.validate = otx2_flow_validate,
+	.create = otx2_flow_create,
+};
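
Note for reviewers: the ops table above only becomes reachable once the
PMD returns it to the ethdev layer. A minimal sketch of that glue,
assuming the RTE_ETH_FILTER_GENERIC mechanism DPDK 19.x uses to fetch
rte_flow_ops; the function name here is illustrative and the actual
hookup is not part of this patch:

/* Illustrative only: expose otx2_flow_ops through the generic
 * filter-control query that rte_flow issues in DPDK 19.x.
 */
static int
nix_dev_filter_ctrl(struct rte_eth_dev *eth_dev,
		    enum rte_filter_type filter_type,
		    enum rte_filter_op filter_op, void *arg)
{
	RTE_SET_USED(eth_dev);

	if (filter_type != RTE_ETH_FILTER_GENERIC)
		return -ENOTSUP;

	if (filter_op != RTE_ETH_FILTER_GET)
		return -EINVAL;

	/* Hand the rte_flow ops table back to the ethdev layer */
	*(const void **)arg = &otx2_flow_ops;

	return 0;
}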