From patchwork Tue Jul  7 09:22:39 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 73404
From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Jun Yang
Date: Tue, 7 Jul 2020 14:52:39 +0530
Message-Id: <20200707092244.12791-25-hemant.agrawal@nxp.com>
In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com>
References: <20200527132326.1382-1-hemant.agrawal@nxp.com>
 <20200707092244.12791-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 24/29] net/dpaa2: support index of queue action for flow
From: Jun Yang

It makes more sense to use the RXQ index for queue distribution instead
of the flow ID.

Signed-off-by: Jun Yang
---
 drivers/net/dpaa2/dpaa2_flow.c | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 6f3139f86..76f68b903 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -56,7 +56,6 @@ struct rte_flow {
 	uint8_t tc_id; /** Traffic Class ID. */
 	uint8_t tc_index; /** index within this Traffic Class. */
 	enum rte_flow_action_type action;
-	uint16_t flow_id;
 	/* Special for IP address to specify the offset
 	 * in key/mask.
 	 */
@@ -3141,6 +3140,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 	struct dpni_qos_tbl_cfg qos_cfg;
 	struct dpni_fs_action_cfg action;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	struct dpaa2_queue *rxq;
 	struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
 	size_t param;
 	struct rte_flow *curr = LIST_FIRST(&priv->flows);
@@ -3244,10 +3244,10 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 	case RTE_FLOW_ACTION_TYPE_QUEUE:
 		dest_queue =
 			(const struct rte_flow_action_queue *)(actions[j].conf);
-		flow->flow_id = dest_queue->index;
+		rxq = priv->rx_vq[dest_queue->index];
 		flow->action = RTE_FLOW_ACTION_TYPE_QUEUE;
 		memset(&action, 0, sizeof(struct dpni_fs_action_cfg));
-		action.flow_id = flow->flow_id;
+		action.flow_id = rxq->flow_id;
 		if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
 			dpaa2_flow_qos_table_extracts_log(priv);
 			if (dpkg_prepare_key_cfg(
@@ -3303,8 +3303,6 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 		}

 		/* Configure QoS table first */
-		action.flow_id = action.flow_id % priv->num_rx_tc;
-
 		qos_index = flow->tc_id * priv->fs_entries +
 			flow->tc_index;
@@ -3407,13 +3405,22 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 		break;
 	case RTE_FLOW_ACTION_TYPE_RSS:
 		rss_conf = (const struct rte_flow_action_rss *)(actions[j].conf);
+		if (rss_conf->queue_num > priv->dist_queues) {
+			DPAA2_PMD_ERR(
+				"RSS number exceeds the distribution size");
+			return -ENOTSUP;
+		}
+
 		for (i = 0; i < (int)rss_conf->queue_num; i++) {
-			if (rss_conf->queue[i] <
-			    (attr->group * priv->dist_queues) ||
-			    rss_conf->queue[i] >=
-			    ((attr->group + 1) * priv->dist_queues)) {
+			if (rss_conf->queue[i] >= priv->nb_rx_queues) {
+				DPAA2_PMD_ERR(
+					"RSS RXQ number exceeds the total number");
+				return -ENOTSUP;
+			}
+			rxq = priv->rx_vq[rss_conf->queue[i]];
+			if (rxq->tc_index != attr->group) {
 				DPAA2_PMD_ERR(
-					"Queue/Group combination are not supported\n");
+					"RSS RXQ distributed is not in current group");
 				return -ENOTSUP;
 			}
 		}