From patchwork Tue Apr 18 17:21:44 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 126248
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Gregory Etelson
To: dev@dpdk.org
CC: Ori Kam, Aman Singh, Yuying Zhang, "Ferruh Yigit", Thomas Monjalon, "Andrew Rybchenko"
Subject: [PATCH] ethdev: add indirect list flow action
Date: Tue, 18 Apr 2023 20:21:44 +0300
Message-ID: <20230418172144.24365-1-getelson@nvidia.com>
X-Mailer: git-send-email 2.34.1
An indirect flow action provides a handle to a hardware flow action object.
The handle is used in flow rules to share the hardware action object state.

The existing INDIRECT flow handle can reference only a single flow action type.
The new INDIRECT_LIST action extends that functionality:
an INDIRECT_LIST flow handle can reference one or more flow actions.

testpmd example:

set raw_encap 0 \
    eth src is 11:00:00:00:00:11 dst is aa:00:00:00:00:aa / \
    ipv4 src is 1.1.1.1 dst is 2.2.2.2 ttl is 64 proto is 17 / \
    udp src is 0x1234 dst is 4789 / vxlan vni is 0xabcd / end_set

set raw_encap 1 \
    eth src is 22:00:00:00:00:22 dst is bb:00:00:00:00:bb / \
    ipv6 src is 2001::1111 dst is 2001::2222 proto is 17 / \
    udp src is 0x1234 dst is 4789 / vxlan vni is 0xabcd / end_set

set sample_actions 0 \
    raw_encap index 0 / represented_port ethdev_port_id 0 / end

set sample_actions 1 \
    raw_encap index 1 / represented_port ethdev_port_id 0 / end

flow indirect_action 0 create transfer list actions \
    sample ratio 1 index 0 / \
    sample ratio 1 index 1 / \
    jump group 0xcaca / end

flow actions_template 0 create transfer actions_template_id 10 \
    template indirect_list 0 / end mask indirect_list / end

Signed-off-by: Gregory Etelson
---
 app/test-pmd/cmdline_flow.c            |  41 ++++++-
 app/test-pmd/config.c                  | 162 +++++++++++++++++++------
 app/test-pmd/testpmd.h                 |   7 +-
 doc/guides/nics/features/default.ini   |   1 +
 doc/guides/prog_guide/rte_flow.rst     |   6 +
 doc/guides/rel_notes/release_23_07.rst |   4 +
 lib/ethdev/rte_flow.c                  |  92 ++++++++++++++
 lib/ethdev/rte_flow.h                  | 149 +++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  27 ++++-
 lib/ethdev/version.map                 |   4 +
 10 files changed, 452 insertions(+), 41 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 58939ec321..956a39d167 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -145,6 +145,7 @@ enum index {
 	/* Queue indirect action arguments */
 	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_LIST_CREATE,
 	QUEUE_INDIRECT_ACTION_UPDATE,
 	QUEUE_INDIRECT_ACTION_DESTROY,
 	QUEUE_INDIRECT_ACTION_QUERY,
@@ -157,6 +158,7 @@ enum index {
 	QUEUE_INDIRECT_ACTION_TRANSFER,
 	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
 	QUEUE_INDIRECT_ACTION_SPEC,
+	QUEUE_INDIRECT_ACTION_LIST,

 	/* Queue indirect action update arguments */
 	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
@@ -242,6 +244,7 @@ enum index {
 	/* Indirect action arguments */
 	INDIRECT_ACTION_CREATE,
+	INDIRECT_ACTION_LIST_CREATE,
 	INDIRECT_ACTION_UPDATE,
 	INDIRECT_ACTION_DESTROY,
 	INDIRECT_ACTION_QUERY,
@@ -253,6 +256,7 @@ enum index {
 	INDIRECT_ACTION_EGRESS,
 	INDIRECT_ACTION_TRANSFER,
 	INDIRECT_ACTION_SPEC,
+	INDIRECT_ACTION_LIST,

 	/* Indirect action destroy arguments */
 	INDIRECT_ACTION_DESTROY_ID,
@@ -626,6 +630,7 @@ enum index {
 	ACTION_SAMPLE_INDEX,
 	ACTION_SAMPLE_INDEX_VALUE,
 	ACTION_INDIRECT,
+	ACTION_INDIRECT_LIST,
 	ACTION_SHARED_INDIRECT,
 	INDIRECT_ACTION_PORT,
 	INDIRECT_ACTION_ID2PTR,
@@ -1266,6 +1271,7 @@ static const enum index next_qia_create_attr[] = {
 	QUEUE_INDIRECT_ACTION_TRANSFER,
 	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
 	QUEUE_INDIRECT_ACTION_SPEC,
+	QUEUE_INDIRECT_ACTION_LIST,
 	ZERO,
 };
@@ -1294,6 +1300,7 @@ static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_EGRESS,
 	INDIRECT_ACTION_TRANSFER,
 	INDIRECT_ACTION_SPEC,
+	INDIRECT_ACTION_LIST,
 	ZERO,
 };
@@ -2013,6 +2020,7 @@ static const enum index next_action[] = {
 	ACTION_AGE_UPDATE,
 	ACTION_SAMPLE,
 	ACTION_INDIRECT,
+	ACTION_INDIRECT_LIST,
 	ACTION_SHARED_INDIRECT,
 	ACTION_MODIFY_FIELD,
 	ACTION_CONNTRACK,
@@ -2289,6 +2297,7 @@ static const enum index next_action_sample[] = {
 	ACTION_RAW_ENCAP,
 	ACTION_VXLAN_ENCAP,
 	ACTION_NVGRE_ENCAP,
+	ACTION_REPRESENTED_PORT,
 	ACTION_NEXT,
 	ZERO,
 };
@@ -3426,6 +3435,12 @@ static const struct token token_list[] = {
 		.help = "specify action to create indirect handle",
 		.next = NEXT(next_action),
 	},
+	[QUEUE_INDIRECT_ACTION_LIST] = {
+		.name = "list",
+		.help = "specify actions for indirect handle list",
+		.next = NEXT(NEXT_ENTRY(ACTIONS, END)),
+		.call = parse_qia,
+	},
 	/* Top-level command. */
 	[PUSH] = {
 		.name = "push",
@@ -6775,6 +6790,14 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY_ARB(0, sizeof(uint32_t))),
 		.call = parse_vc,
 	},
+	[ACTION_INDIRECT_LIST] = {
+		.name = "indirect_list",
+		.help = "apply indirect list action by id",
+		.priv = PRIV_ACTION(INDIRECT_LIST, 0),
+		.next = NEXT(next_ia),
+		.args = ARGS(ARGS_ENTRY_ARB(0, sizeof(uint32_t))),
+		.call = parse_vc,
+	},
 	[ACTION_SHARED_INDIRECT] = {
 		.name = "shared_indirect",
 		.help = "apply indirect action by id and port",
@@ -6823,6 +6846,12 @@ static const struct token token_list[] = {
 		.help = "specify action to create indirect handle",
 		.next = NEXT(next_action),
 	},
+	[INDIRECT_ACTION_LIST] = {
+		.name = "list",
+		.help = "specify actions for indirect handle list",
+		.next = NEXT(NEXT_ENTRY(ACTIONS, END)),
+		.call = parse_ia,
+	},
 	[ACTION_POL_G] = {
 		.name = "g_actions",
 		.help = "submit a list of associated actions for green",
@@ -7181,6 +7210,9 @@ parse_ia(struct context *ctx, const struct token *token,
 		return len;
 	case INDIRECT_ACTION_QU_MODE:
 		return len;
+	case INDIRECT_ACTION_LIST:
+		out->command = INDIRECT_ACTION_LIST_CREATE;
+		return len;
 	default:
 		return -1;
 	}
@@ -7278,6 +7310,9 @@ parse_qia(struct context *ctx, const struct token *token,
 		return len;
 	case QUEUE_INDIRECT_ACTION_QU_MODE:
 		return len;
+	case QUEUE_INDIRECT_ACTION_LIST:
+		out->command = QUEUE_INDIRECT_ACTION_LIST_CREATE;
+		return len;
 	default:
 		return -1;
 	}
@@ -7454,10 +7489,12 @@ parse_vc(struct context *ctx, const struct token *token,
 			return -1;
 		break;
 	case ACTIONS:
-		out->args.vc.actions =
+		out->args.vc.actions = out->args.vc.pattern ?
 			(void *)RTE_ALIGN_CEIL((uintptr_t)
 					       (out->args.vc.pattern +
 						out->args.vc.pattern_n),
+					       sizeof(double)) :
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
 					       sizeof(double));
 		ctx->object = out->args.vc.actions;
 		ctx->objmask = NULL;
@@ -11532,6 +11569,7 @@ cmd_flow_parsed(const struct buffer *in)
 				       in->args.aged.destroy);
 		break;
 	case QUEUE_INDIRECT_ACTION_CREATE:
+	case QUEUE_INDIRECT_ACTION_LIST_CREATE:
 		port_queue_action_handle_create(
 				in->port, in->queue, in->postpone,
 				in->args.vc.attr.group,
@@ -11567,6 +11605,7 @@ cmd_flow_parsed(const struct buffer *in)
 				in->args.vc.actions);
 		break;
 	case INDIRECT_ACTION_CREATE:
+	case INDIRECT_ACTION_LIST_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
 				&((const struct rte_flow_indir_action_conf) {
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 096c218c12..c220682ff9 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1764,6 +1764,44 @@ port_flow_configure(portid_t port_id,
 	return 0;
 }

+static int
+action_handle_create(portid_t port_id,
+		     struct port_indirect_action *pia,
+		     const struct rte_flow_indir_action_conf *conf,
+		     const struct rte_flow_action *action,
+		     struct rte_flow_error *error)
+{
+	if (action->type == RTE_FLOW_ACTION_TYPE_AGE) {
+		struct rte_flow_action_age *age =
+			(struct rte_flow_action_age *)(uintptr_t)(action->conf);
+
+		pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
+		age->context = &pia->age_type;
+	} else if (action->type == RTE_FLOW_ACTION_TYPE_CONNTRACK) {
+		struct rte_flow_action_conntrack *ct =
+			(struct rte_flow_action_conntrack *)(uintptr_t)(action->conf);
+
+		memcpy(ct, &conntrack_context, sizeof(*ct));
+	}
+	pia->type = action->type;
+	pia->handle = rte_flow_action_handle_create(port_id, conf, action,
+						    error);
+	return pia->handle ? 0 : -1;
+}
+
+static int
+action_list_handle_create(portid_t port_id,
+			  struct port_indirect_action *pia,
+			  const struct rte_flow_indir_action_conf *conf,
+			  const struct rte_flow_action *actions,
+			  struct rte_flow_error *error)
+{
+	pia->type = RTE_FLOW_ACTION_TYPE_INDIRECT_LIST;
+	pia->list_handle =
+		rte_flow_action_list_handle_create(port_id, conf,
+						   actions, error);
+	return pia->list_handle ? 0 : -1;
+}
+
 /** Create indirect action */
 int
 port_action_handle_create(portid_t port_id, uint32_t id,
@@ -1773,32 +1811,21 @@ port_action_handle_create(portid_t port_id, uint32_t id,
 	struct port_indirect_action *pia;
 	int ret;
 	struct rte_flow_error error;
+	bool is_indirect_list = action[1].type != RTE_FLOW_ACTION_TYPE_END;

 	ret = action_alloc(port_id, id, &pia);
 	if (ret)
 		return ret;
-	if (action->type == RTE_FLOW_ACTION_TYPE_AGE) {
-		struct rte_flow_action_age *age =
-			(struct rte_flow_action_age *)(uintptr_t)(action->conf);
-
-		pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
-		age->context = &pia->age_type;
-	} else if (action->type == RTE_FLOW_ACTION_TYPE_CONNTRACK) {
-		struct rte_flow_action_conntrack *ct =
-			(struct rte_flow_action_conntrack *)(uintptr_t)(action->conf);
-
-		memcpy(ct, &conntrack_context, sizeof(*ct));
-	}
 	/* Poisoning to make sure PMDs update it in case of error. */
 	memset(&error, 0x22, sizeof(error));
-	pia->handle = rte_flow_action_handle_create(port_id, conf, action,
-						    &error);
-	if (!pia->handle) {
+	ret = is_indirect_list ?
+	      action_list_handle_create(port_id, pia, conf, action, &error) :
+	      action_handle_create(port_id, pia, conf, action, &error);
+	if (ret) {
 		uint32_t destroy_id = pia->id;
 		port_action_handle_destroy(port_id, 1, &destroy_id);
 		return port_flow_complain(&error);
 	}
-	pia->type = action->type;
 	printf("Indirect action #%u created\n", pia->id);
 	return 0;
 }
@@ -1833,10 +1860,17 @@ port_action_handle_destroy(portid_t port_id,
 		 */
 		memset(&error, 0x33, sizeof(error));
-		if (pia->handle && rte_flow_action_handle_destroy(
-					port_id, pia->handle, &error)) {
-			ret = port_flow_complain(&error);
-			continue;
+		if (pia->handle) {
+			ret = pia->type ==
+			      RTE_FLOW_ACTION_TYPE_INDIRECT_LIST ?
+			      rte_flow_action_list_handle_destroy
+					(port_id, pia->list_handle, &error) :
+			      rte_flow_action_handle_destroy
+					(port_id, pia->handle, &error);
+			if (ret) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
 		}
 		*tmp = pia->next;
 		printf("Indirect action #%u destroyed\n", pia->id);
@@ -1867,11 +1901,18 @@ port_action_handle_flush(portid_t port_id)
 		/* Poisoning to make sure PMDs update it in case of error. */
 		memset(&error, 0x44, sizeof(error));
-		if (pia->handle != NULL &&
-		    rte_flow_action_handle_destroy
-				(port_id, pia->handle, &error) != 0) {
-			printf("Indirect action #%u not destroyed\n", pia->id);
-			ret = port_flow_complain(&error);
+		if (pia->handle != NULL) {
+			ret = pia->type ==
+			      RTE_FLOW_ACTION_TYPE_INDIRECT_LIST ?
+			      rte_flow_action_list_handle_destroy
+					(port_id, pia->list_handle, &error) :
+			      rte_flow_action_handle_destroy
+					(port_id, pia->handle, &error);
+			if (ret) {
+				printf("Indirect action #%u not destroyed\n",
+				       pia->id);
+				ret = port_flow_complain(&error);
+			}
 			tmp = &pia->next;
 		} else {
 			*tmp = pia->next;
@@ -2822,6 +2863,45 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }

+static void
+queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+			   struct port_indirect_action *pia,
+			   struct queue_job *job,
+			   const struct rte_flow_op_attr *attr,
+			   const struct rte_flow_indir_action_conf *conf,
+			   const struct rte_flow_action *action,
+			   struct rte_flow_error *error)
+{
+	if (action->type == RTE_FLOW_ACTION_TYPE_AGE) {
+		struct rte_flow_action_age *age =
+			(struct rte_flow_action_age *)(uintptr_t)(action->conf);
+
+		pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
+		age->context = &pia->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	pia->handle = rte_flow_async_action_handle_create(port_id, queue_id,
+							   attr, conf, action,
+							   job, error);
+	pia->type = action->type;
+}
+
+static void
+queue_action_list_handle_create(portid_t port_id, uint32_t queue_id,
+				struct port_indirect_action *pia,
+				struct queue_job *job,
+				const struct rte_flow_op_attr *attr,
+				const struct rte_flow_indir_action_conf *conf,
+				const struct rte_flow_action *action,
+				struct rte_flow_error *error)
+{
+	/* Poisoning to make sure PMDs update it in case of error. */
+	pia->type = RTE_FLOW_ACTION_TYPE_INDIRECT_LIST;
+	pia->list_handle = rte_flow_async_action_list_handle_create
+				(port_id, queue_id, attr, conf, action,
+				 job, error);
+}
+
 /** Enqueue indirect action create operation. */
 int
 port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
@@ -2835,6 +2915,8 @@ port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
 	int ret;
 	struct rte_flow_error error;
 	struct queue_job *job;
+	bool is_indirect_list = action[1].type != RTE_FLOW_ACTION_TYPE_END;
+
 	ret = action_alloc(port_id, id, &pia);
 	if (ret)
@@ -2853,17 +2935,16 @@ port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
 	job->type = QUEUE_JOB_TYPE_ACTION_CREATE;
 	job->pia = pia;
-	if (action->type == RTE_FLOW_ACTION_TYPE_AGE) {
-		struct rte_flow_action_age *age =
-			(struct rte_flow_action_age *)(uintptr_t)(action->conf);
-
-		pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
-		age->context = &pia->age_type;
-	}
 	/* Poisoning to make sure PMDs update it in case of error. */
 	memset(&error, 0x88, sizeof(error));
-	pia->handle = rte_flow_async_action_handle_create(port_id, queue_id,
-					&attr, conf, action, job, &error);
+
+	if (is_indirect_list)
+		queue_action_list_handle_create(port_id, queue_id, pia, job,
+						&attr, conf, action, &error);
+	else
+		queue_action_handle_create(port_id, queue_id, pia, job, &attr,
+					   conf, action, &error);
+
 	if (!pia->handle) {
 		uint32_t destroy_id = pia->id;
 		port_queue_action_handle_destroy(port_id, queue_id,
@@ -2871,7 +2952,6 @@ port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
 		free(job);
 		return port_flow_complain(&error);
 	}
-	pia->type = action->type;
 	printf("Indirect action #%u creation queued\n", pia->id);
 	return 0;
 }
@@ -2920,9 +3000,15 @@ port_queue_action_handle_destroy(portid_t port_id,
 		}
 		job->type = QUEUE_JOB_TYPE_ACTION_DESTROY;
 		job->pia = pia;
-
-		if (rte_flow_async_action_handle_destroy(port_id,
-				queue_id, &attr, pia->handle, job, &error)) {
+		ret = pia->type == RTE_FLOW_ACTION_TYPE_INDIRECT_LIST ?
+		      rte_flow_async_action_list_handle_destroy
+				(port_id, queue_id,
+				 &attr, pia->list_handle,
+				 job, &error) :
+		      rte_flow_async_action_handle_destroy
+				(port_id, queue_id, &attr, pia->handle,
+				 job, &error);
+		if (ret) {
 			free(job);
 			ret = port_flow_complain(&error);
 			continue;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index bdfbfd36d3..9786e62d28 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -228,7 +228,12 @@ struct port_indirect_action {
 	struct port_indirect_action *next; /**< Next flow in list. */
 	uint32_t id; /**< Indirect action ID. */
 	enum rte_flow_action_type type; /**< Action type. */
-	struct rte_flow_action_handle *handle; /**< Indirect action handle. */
+	union {
+		struct rte_flow_action_handle *handle;
+		/**< Indirect action handle. */
+		struct rte_flow_action_list_handle *list_handle;
+		/**< Indirect action list handle*/
+	};
 	enum age_action_context_type age_type; /**< Age action context type. */
 };
diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index 1a5087abad..10a1c1af77 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -158,6 +158,7 @@ drop =
 flag =
 inc_tcp_ack =
 inc_tcp_seq =
+indirect_list =
 jump =
 mac_swap =
 mark =
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 32fc45516a..ed67e86c58 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3300,6 +3300,12 @@ The ``quota`` value is reduced according to ``mode`` setting.
    | ``RTE_FLOW_QUOTA_MODE_L3`` | Count packet bytes starting from L3 |
    +------------------+----------------------------------------------------+

+Action: ``INDIRECT_LIST``
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The new ``INDIRECT_LIST`` flow action references one or many flow actions.
+Extends the ``INDIRECT`` flow action.
+
 Negative types
 ~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index a9b1293689..955493e445 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -55,6 +55,10 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================

+* **Added indirect list flow action.**
+
+  * ``RTE_FLOW_ACTION_TYPE_INDIRECT_LIST``
+
 Removed Items
 -------------
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 69e6e749f7..73b31fc69f 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -259,6 +259,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
 	MK_FLOW_ACTION(METER_MARK, sizeof(struct rte_flow_action_meter_mark)),
 	MK_FLOW_ACTION(SEND_TO_KERNEL, 0),
 	MK_FLOW_ACTION(QUOTA, sizeof(struct rte_flow_action_quota)),
+	MK_FLOW_ACTION(INDIRECT_LIST, 0),
 };

 int
@@ -2171,3 +2172,94 @@ rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id,
 					       user_data, error);
 	return flow_err(port_id, ret, error);
 }
+
+struct rte_flow_action_list_handle *
+rte_flow_action_list_handle_create(uint16_t port_id,
+				   const
+				   struct rte_flow_indir_action_conf *conf,
+				   const struct rte_flow_action *actions,
+				   struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev;
+	const struct rte_flow_ops *ops;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
+	ops = rte_flow_ops_get(port_id, error);
+	if (!ops || !ops->action_list_handle_create) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "action_list handle not supported");
+		return NULL;
+	}
+	dev = &rte_eth_devices[port_id];
+	return ops->action_list_handle_create(dev, conf, actions, error);
+}
+
+int
+rte_flow_action_list_handle_destroy(uint16_t port_id,
+				    struct rte_flow_action_list_handle *handle,
+				    struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev;
+	const struct rte_flow_ops *ops;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	ops = rte_flow_ops_get(port_id, error);
+	if (!ops || !ops->action_list_handle_destroy)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "action_list handle not supported");
+	dev = &rte_eth_devices[port_id];
+	return ops->action_list_handle_destroy(dev, handle, error);
+}
+
+struct rte_flow_action_list_handle *
+rte_flow_async_action_list_handle_create(uint16_t port_id, uint32_t queue_id,
+					  const struct rte_flow_op_attr *attr,
+					  const struct
+					  rte_flow_indir_action_conf *conf,
+					  const struct rte_flow_action *actions,
+					  void *user_data,
+					  struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev;
+	const struct rte_flow_ops *ops;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
+	ops = rte_flow_ops_get(port_id, error);
+	if (!ops || !ops->async_action_list_handle_create) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "action_list handle not supported");
+		return NULL;
+	}
+	dev = &rte_eth_devices[port_id];
+	return ops->async_action_list_handle_create(dev, queue_id, attr, conf,
+						    actions, user_data, error);
+}
+
+int
+rte_flow_async_action_list_handle_destroy(uint16_t port_id, uint32_t queue_id,
+					  const
+					  struct rte_flow_op_attr *op_attr,
+					  struct
+					  rte_flow_action_list_handle *handle,
+					  void *user_data,
+					  struct rte_flow_error *error)
+{
+	int ret;
+	struct rte_eth_dev *dev;
+	const struct rte_flow_ops *ops;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	ops = rte_flow_ops_get(port_id, error);
+	if (!ops || !ops->async_action_list_handle_destroy)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "async action_list handle not supported");
+	dev = &rte_eth_devices[port_id];
+	ret = ops->async_action_list_handle_destroy(dev, queue_id, op_attr,
+						    handle, user_data, error);
+	return flow_err(port_id, ret, error);
+}
+
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 713ba8b65c..deb5dc2f9d 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2912,6 +2912,11 @@ enum rte_flow_action_type {
 	 * applied to the given ethdev Rx queue.
 	 */
 	RTE_FLOW_ACTION_TYPE_SKIP_CMAN,
+
+	/**
+	 * RTE_FLOW_ACTION_TYPE_INDIRECT_LIST
+	 */
+	RTE_FLOW_ACTION_TYPE_INDIRECT_LIST,
 };

 /**
@@ -6118,6 +6123,150 @@ rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id,
 					  void *user_data,
 					  struct rte_flow_error *error);

+struct rte_flow_action_list_handle;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create an indirect flow action object from flow actions list.
+ * The object is identified by a unique handle.
+ * The handle has single state and configuration
+ * across all the flow rules using it.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] conf
+ *   Action configuration for the indirect action list creation.
+ * @param[in] actions
+ *   Specific configuration of the indirect action lists.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ * @return
+ *   A valid handle in case of success, NULL otherwise and rte_errno is set
+ *   to one of the error codes defined:
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-EINVAL) if *actions* list invalid.
+ *   - (-ENOTSUP) if *action* list element valid but unsupported.
+ *   - (-E2BIG) too many elements in *actions*
+ */
+__rte_experimental
+struct rte_flow_action_list_handle *
+rte_flow_action_list_handle_create(uint16_t port_id,
+				   const
+				   struct rte_flow_indir_action_conf *conf,
+				   const struct rte_flow_action *actions,
+				   struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Async function call to create an indirect flow action object
+ * from flow actions list.
+ * The object is identified by a unique handle.
+ * The handle has single state and configuration
+ * across all the flow rules using it.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to update the rule.
+ * @param[in] attr
+ *   Indirect action update operation attributes.
+ * @param[in] conf
+ *   Action configuration for the indirect action list creation.
+ * @param[in] actions
+ *   Specific configuration of the indirect action list.
+ * @param[in] user_data
+ *   The user data that will be returned on async completion event.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ * @return
+ *   A valid handle in case of success, NULL otherwise and rte_errno is set
+ *   to one of the error codes defined:
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-EINVAL) if *actions* list invalid.
+ *   - (-ENOTSUP) if *action* list element valid but unsupported.
+ *   - (-E2BIG) too many elements in *actions*
+ */
+__rte_experimental
+struct rte_flow_action_list_handle *
+rte_flow_async_action_list_handle_create(uint16_t port_id, uint32_t queue_id,
+					  const struct rte_flow_op_attr *attr,
+					  const struct
+					  rte_flow_indir_action_conf *conf,
+					  const struct rte_flow_action *actions,
+					  void *user_data,
+					  struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy indirect actions list by handle.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] handle
+ *   Handle for the indirect actions list to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if actions list pointed by *action* handle was not found.
+ *   - (-EBUSY) if actions list pointed by *action* handle still used
+ *   rte_errno is also set.
+ */
+__rte_experimental
+int
+rte_flow_action_list_handle_destroy(uint16_t port_id,
+				    struct rte_flow_action_list_handle *handle,
+				    struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action list destruction operation.
+ * The destroy queue must be the same
+ * as the queue on which the action was created.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to destroy the rule.
+ * @param[in] op_attr
+ *   Indirect action destruction operation attributes.
+ * @param[in] handle
+ *   Handle for the indirect action object to be destroyed.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_async_action_list_handle_destroy
+		(uint16_t port_id, uint32_t queue_id,
+		 const struct rte_flow_op_attr *op_attr,
+		 struct rte_flow_action_list_handle *handle,
+		 void *user_data, struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index a129a4605d..71d9b4b0a7 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -121,6 +121,17 @@ struct rte_flow_ops {
 		 const void *update, void *query,
 		 enum rte_flow_query_update_mode qu_mode,
 		 struct rte_flow_error *error);
+	/** @see rte_flow_action_list_handle_create() */
+	struct rte_flow_action_list_handle *(*action_list_handle_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_indir_action_conf *conf,
+		 const struct rte_flow_action actions[],
+		 struct rte_flow_error *error);
+	/** @see rte_flow_action_list_handle_destroy() */
+	int (*action_list_handle_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_action_list_handle *handle,
+		 struct rte_flow_error *error);
 	/** See rte_flow_tunnel_decap_set() */
 	int (*tunnel_decap_set)
 		(struct rte_eth_dev *dev,
@@ -294,7 +305,7 @@ struct rte_flow_ops {
 		 void *data, void *user_data,
 		 struct rte_flow_error *error);
-	/** See rte_flow_async_action_handle_query_update */
+	/** @see rte_flow_async_action_handle_query_update */
 	int (*async_action_handle_query_update)
 		(struct rte_eth_dev *dev, uint32_t queue_id,
 		 const struct rte_flow_op_attr *op_attr,
@@ -302,6 +313,20 @@ struct rte_flow_ops {
 		 const void *update, void *query,
 		 enum rte_flow_query_update_mode qu_mode,
 		 void *user_data, struct rte_flow_error *error);
+	/** @see rte_flow_async_action_list_handle_create() */
+	struct rte_flow_action_list_handle *
+	(*async_action_list_handle_create)
+		(struct rte_eth_dev *dev, uint32_t queue_id,
+		 const struct rte_flow_op_attr *attr,
+		 const struct rte_flow_indir_action_conf *conf,
+		 const struct rte_flow_action *actions,
+		 void *user_data, struct rte_flow_error *error);
+	/** @see rte_flow_async_action_list_handle_destroy() */
+	int (*async_action_list_handle_destroy)
+		(struct rte_eth_dev *dev, uint32_t queue_id,
+		 const struct rte_flow_op_attr *op_attr,
+		 struct rte_flow_action_list_handle *action_handle,
+		 void *user_data, struct rte_flow_error *error);
 };

 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 357d1a88c0..d6c0b927f1 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -299,6 +299,10 @@ EXPERIMENTAL {
 	rte_flow_action_handle_query_update;
 	rte_flow_async_action_handle_query_update;
 	rte_flow_async_create_by_index;
+	rte_flow_action_list_handle_create;
+	rte_flow_action_list_handle_destroy;
+	rte_flow_async_action_list_handle_create;
+	rte_flow_async_action_list_handle_destroy;
 };

 INTERNAL {
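
[Editor's note] For readers unfamiliar with the new synchronous calls, the sketch below (not part of the patch) shows how an application could create and destroy an indirect action list roughly equivalent to the testpmd commands in the commit message. The port id, the transfer attribute, the empty sample sub-action lists and group 0xcaca are illustrative assumptions; error handling is minimal.

    #include <stdint.h>
    #include <stdio.h>

    #include <rte_flow.h>

    static int
    indirect_list_demo(uint16_t port_id)
    {
    	struct rte_flow_error error;
    	const struct rte_flow_indir_action_conf conf = { .transfer = 1 };
    	/* Sub-actions of each sample stage are left empty for brevity;
    	 * a real application would reference e.g. RAW_ENCAP and
    	 * REPRESENTED_PORT actions here, as in the testpmd example.
    	 */
    	const struct rte_flow_action sample_sub_actions[] = {
    		{ .type = RTE_FLOW_ACTION_TYPE_END },
    	};
    	const struct rte_flow_action_sample sample = {
    		.ratio = 1,
    		.actions = sample_sub_actions,
    	};
    	const struct rte_flow_action_jump jump = { .group = 0xcaca };
    	/* Two mirror (sample) stages followed by a jump, then END. */
    	const struct rte_flow_action actions[] = {
    		{ .type = RTE_FLOW_ACTION_TYPE_SAMPLE, .conf = &sample },
    		{ .type = RTE_FLOW_ACTION_TYPE_SAMPLE, .conf = &sample },
    		{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
    		{ .type = RTE_FLOW_ACTION_TYPE_END },
    	};
    	struct rte_flow_action_list_handle *handle;

    	handle = rte_flow_action_list_handle_create(port_id, &conf,
    						    actions, &error);
    	if (handle == NULL) {
    		printf("indirect list create failed: %s\n",
    		       error.message ? error.message : "(no message)");
    		return -1;
    	}
    	/* The handle can now be referenced from flow rules through the
    	 * new RTE_FLOW_ACTION_TYPE_INDIRECT_LIST action type.
    	 */
    	return rte_flow_action_list_handle_destroy(port_id, handle, &error);
    }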