From patchwork Fri Oct 22 08:57:34 2021
X-Patchwork-Submitter: "Yu, DapengX"
X-Patchwork-Id: 102629
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: dapengx.yu@intel.com
To: Qiming Yang, Qi Zhang
Cc: dev@dpdk.org, haiyue.wang@intel.com, Dapeng Yu, stable@dpdk.org
Date: Fri, 22 Oct 2021 16:57:34 +0800
Message-Id: <20211022085734.712382-1-dapengx.yu@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20211021033527.177448-1-dapengx.yu@intel.com>
References: <20211021033527.177448-1-dapengx.yu@intel.com>
Subject: [dpdk-dev] [PATCH v2] net/ice: fix function pointer in multi-process

From: Dapeng Yu

Sharing a function pointer across processes may cause the secondary
process to crash, because the address stored by the primary process is
not valid in the secondary process's address space. This patch removes
the function pointer "rxd_to_pkt_fields" from "struct ice_rx_queue",
which is shared between the primary and secondary processes, and
replaces it with an index into a per-process function pointer array.
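To see why the original layout is fragile, consider a minimal sketch
(simplified stand-in names only, not the driver code): the queue
structure lives in memory shared by both processes, but the address of
a handler function is generally different in each process, so a pointer
written by the primary is meaningless when the secondary calls through
it. Keeping only a plain integer index in shared memory and resolving
it through a table that each process builds for itself avoids the
problem.

/* Illustrative sketch only; all names here are stand-ins. */
#include <stdint.h>
#include <stdio.h>

struct pkt { int dummy; };           /* stand-in for struct rte_mbuf */

struct rxq_shared {                  /* stand-in for struct ice_rx_queue */
	uint32_t rxdid;              /* plain data: valid in every process */
	/* void (*handler)(struct pkt *);  unsafe: address is per-process */
};

static void handle_generic(struct pkt *p) { (void)p; puts("generic handler"); }
static void handle_ovs(struct pkt *p)     { (void)p; puts("ovs handler"); }

/* Per-process table: built independently in each process, so every
 * entry is a valid address in the process that uses it. */
static void (*const handlers[])(struct pkt *) = {
	[0] = handle_generic,
	[1] = handle_ovs,
};

int main(void)
{
	struct rxq_shared rxq = { .rxdid = 1 };  /* index set by the primary */
	struct pkt p = { 0 };

	/* Any process resolves the handler locally from the shared index. */
	handlers[rxq.rxdid](&p);
	return 0;
}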
Fixes: 7a340b0b4e03 ("net/ice: refactor Rx FlexiMD handling")
Cc: stable@dpdk.org

Signed-off-by: Dapeng Yu
---
V2:
* Remove redundant code
---
 drivers/net/ice/ice_rxtx.c | 35 +++++++++++++++++------------------
 drivers/net/ice/ice_rxtx.h |  2 +-
 2 files changed, 18 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index ff362c21d9..667eae9f6d 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -205,51 +205,50 @@ ice_rxd_to_pkt_fields_by_comms_aux_v2(struct ice_rx_queue *rxq,
 #endif
 }
 
+static const ice_rxd_to_pkt_fields_t rxd_to_pkt_fields_ops[] = {
+	[ICE_RXDID_COMMS_AUX_VLAN] = ice_rxd_to_pkt_fields_by_comms_aux_v1,
+	[ICE_RXDID_COMMS_AUX_IPV4] = ice_rxd_to_pkt_fields_by_comms_aux_v1,
+	[ICE_RXDID_COMMS_AUX_IPV6] = ice_rxd_to_pkt_fields_by_comms_aux_v1,
+	[ICE_RXDID_COMMS_AUX_IPV6_FLOW] = ice_rxd_to_pkt_fields_by_comms_aux_v1,
+	[ICE_RXDID_COMMS_AUX_TCP] = ice_rxd_to_pkt_fields_by_comms_aux_v1,
+	[ICE_RXDID_COMMS_AUX_IP_OFFSET] = ice_rxd_to_pkt_fields_by_comms_aux_v2,
+	[ICE_RXDID_COMMS_GENERIC] = ice_rxd_to_pkt_fields_by_comms_generic,
+	[ICE_RXDID_COMMS_OVS] = ice_rxd_to_pkt_fields_by_comms_ovs,
+};
+
 void
 ice_select_rxd_to_pkt_fields_handler(struct ice_rx_queue *rxq, uint32_t rxdid)
 {
+	rxq->rxdid = rxdid;
+
 	switch (rxdid) {
 	case ICE_RXDID_COMMS_AUX_VLAN:
 		rxq->xtr_ol_flag = rte_net_ice_dynflag_proto_xtr_vlan_mask;
-		rxq->rxd_to_pkt_fields = ice_rxd_to_pkt_fields_by_comms_aux_v1;
 		break;
 
 	case ICE_RXDID_COMMS_AUX_IPV4:
 		rxq->xtr_ol_flag = rte_net_ice_dynflag_proto_xtr_ipv4_mask;
-		rxq->rxd_to_pkt_fields = ice_rxd_to_pkt_fields_by_comms_aux_v1;
 		break;
 
 	case ICE_RXDID_COMMS_AUX_IPV6:
 		rxq->xtr_ol_flag = rte_net_ice_dynflag_proto_xtr_ipv6_mask;
-		rxq->rxd_to_pkt_fields = ice_rxd_to_pkt_fields_by_comms_aux_v1;
 		break;
 
 	case ICE_RXDID_COMMS_AUX_IPV6_FLOW:
 		rxq->xtr_ol_flag = rte_net_ice_dynflag_proto_xtr_ipv6_flow_mask;
-		rxq->rxd_to_pkt_fields = ice_rxd_to_pkt_fields_by_comms_aux_v1;
 		break;
 
 	case ICE_RXDID_COMMS_AUX_TCP:
 		rxq->xtr_ol_flag = rte_net_ice_dynflag_proto_xtr_tcp_mask;
-		rxq->rxd_to_pkt_fields = ice_rxd_to_pkt_fields_by_comms_aux_v1;
 		break;
 
 	case ICE_RXDID_COMMS_AUX_IP_OFFSET:
 		rxq->xtr_ol_flag = rte_net_ice_dynflag_proto_xtr_ip_offset_mask;
-		rxq->rxd_to_pkt_fields = ice_rxd_to_pkt_fields_by_comms_aux_v2;
-		break;
-
-	case ICE_RXDID_COMMS_GENERIC:
-		rxq->rxd_to_pkt_fields = ice_rxd_to_pkt_fields_by_comms_generic;
-		break;
-
-	case ICE_RXDID_COMMS_OVS:
-		rxq->rxd_to_pkt_fields = ice_rxd_to_pkt_fields_by_comms_ovs;
 		break;
 
 	default:
 		/* update this according to the RXDID for PROTO_XTR_NONE */
-		rxq->rxd_to_pkt_fields = ice_rxd_to_pkt_fields_by_comms_ovs;
+		rxq->rxdid = ICE_RXDID_COMMS_OVS;
 		break;
 	}
 
@@ -1622,7 +1621,7 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq)
 			mb->packet_type = ptype_tbl[ICE_RX_FLEX_DESC_PTYPE_M &
 				rte_le_to_cpu_16(rxdp[j].wb.ptype_flex_flags0)];
 			ice_rxd_to_vlan_tci(mb, &rxdp[j]);
-			rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
+			rxd_to_pkt_fields_ops[rxq->rxdid](rxq, mb, &rxdp[j]);
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 			if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
 				ts_ns = ice_tstamp_convert_32b_64b(hw,
@@ -1939,7 +1938,7 @@ ice_recv_scattered_pkts(void *rx_queue,
 		first_seg->packet_type = ptype_tbl[ICE_RX_FLEX_DESC_PTYPE_M &
 			rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
 		ice_rxd_to_vlan_tci(first_seg, &rxd);
-		rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
+		rxd_to_pkt_fields_ops[rxq->rxdid](rxq, first_seg, &rxd);
 		pkt_flags = ice_rxd_error_to_pkt_flags(rx_stat_err0);
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 		if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
@@ -2370,7 +2369,7 @@ ice_recv_pkts(void *rx_queue,
 		rxm->packet_type = ptype_tbl[ICE_RX_FLEX_DESC_PTYPE_M &
 			rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
 		ice_rxd_to_vlan_tci(rxm, &rxd);
-		rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
+		rxd_to_pkt_fields_ops[rxq->rxdid](rxq, rxm, &rxd);
 		pkt_flags = ice_rxd_error_to_pkt_flags(rx_stat_err0);
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 		if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index e1c644fb63..146dc1f95d 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -89,7 +89,7 @@ struct ice_rx_queue {
 	bool rx_deferred_start; /* don't start this queue in dev start */
 	uint8_t proto_xtr; /* Protocol extraction from flexible descriptor */
 	uint64_t xtr_ol_flag; /* Protocol extraction offload flag */
-	ice_rxd_to_pkt_fields_t rxd_to_pkt_fields; /* handle FlexiMD by RXDID */
+	uint32_t rxdid; /* Receive Flex Descriptor profile ID */
 	ice_rx_release_mbufs_t rx_rel_mbufs;
 	uint64_t offloads;
 	uint32_t time_high;
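For context, the multi-process scenario that triggers the crash can be
sketched as a small secondary-process poller like the one below; the
application name, EAL arguments and port/queue numbers are assumptions,
and the primary process is assumed to have already configured the ice
device. The rte_eth_rx_burst() call is where the PMD's Rx path used to
dereference the per-queue handler pointer; with this patch the handler
is instead looked up locally through rxq->rxdid.

/* Hypothetical secondary-process poller; not part of this patch. */
#include <stdint.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

int main(int argc, char **argv)
{
	/* Launch as, e.g.: ./rx-secondary -l 1 --proc-type=secondary */
	if (rte_eal_init(argc, argv) < 0)
		return EXIT_FAILURE;

	struct rte_mbuf *bufs[32];

	for (;;) {
		/*
		 * Inside the ice PMD this burst resolves the FlexiMD handler.
		 * Before the fix: rxq->rxd_to_pkt_fields(...), a primary-process
		 * address that is invalid here. After the fix:
		 * rxd_to_pkt_fields_ops[rxq->rxdid](...), a lookup in this
		 * process's own table.
		 */
		uint16_t nb = rte_eth_rx_burst(0, 0, bufs, 32);

		for (uint16_t i = 0; i < nb; i++)
			rte_pktmbuf_free(bufs[i]);
	}
}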