From patchwork Thu Jan 11 08:12:37 2018
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 33564
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Andrew Rybchenko
To:
CC: Ivan Malov
Date: Thu, 11 Jan 2018 08:12:37 +0000
Message-ID: <1515658359-1041-5-git-send-email-arybchenko@solarflare.com>
X-Mailer: git-send-email 1.8.2.3
In-Reply-To: <1515658359-1041-1-git-send-email-arybchenko@solarflare.com>
References: <1515658359-1041-1-git-send-email-arybchenko@solarflare.com>
Subject: [dpdk-dev] [PATCH 4/6] net/sfc: convert to the new Rx offload API

From: Ivan Malov

Ethdev Rx offloads API has changed since:

commit ce17eddefc20 ("ethdev: introduce Rx queue offloads API")

This commit adds support for the new Rx offloads API.
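For reference, under the new API an application expresses Rx offloads as DEV_RX_OFFLOAD_* bits in rte_eth_conf.rxmode.offloads (port level) and rte_eth_rxconf.offloads (queue level) instead of the old rxmode bit-fields (jumbo_frame, enable_scatter, hw_strip_crc, ...). Below is a minimal application-side sketch for a DPDK release of this era; port 0, a single Rx queue, 512 descriptors and the "mb_pool" mempool are illustrative values only, not taken from this patch.

#include <string.h>

#include <rte_ethdev.h>
#include <rte_mempool.h>

/*
 * Sketch: request checksum and scatter Rx offloads via the new offload API.
 * Values (port, queue count, descriptor count) are illustrative.
 */
static int
configure_rx_offloads(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf port_conf;
	struct rte_eth_rxconf rx_conf;
	uint64_t wanted = DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCATTER;
	int rc;

	memset(&port_conf, 0, sizeof(port_conf));
	memset(&rx_conf, 0, sizeof(rx_conf));

	rte_eth_dev_info_get(port_id, &dev_info);

	/* Use the offloads field rather than the legacy rxmode bit-fields */
	port_conf.rxmode.ignore_offload_bitfield = 1;
	/* Request only what the port reports as supported */
	port_conf.rxmode.offloads = wanted & dev_info.rx_offload_capa;

	/* One Rx queue, no Tx queues needed for this sketch */
	rc = rte_eth_dev_configure(port_id, 1, 0, &port_conf);
	if (rc != 0)
		return rc;

	/*
	 * Queue-level offloads repeat the port-level ones and may add
	 * anything listed in dev_info.rx_queue_offload_capa.
	 */
	rx_conf.offloads = port_conf.rxmode.offloads;

	return rte_eth_rx_queue_setup(port_id, 0, 512,
				      rte_eth_dev_socket_id(port_id),
				      &rx_conf, mb_pool);
}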
Signed-off-by: Ivan Malov
Signed-off-by: Andrew Rybchenko
---
 drivers/net/sfc/sfc_ethdev.c |  27 +++++++++--
 drivers/net/sfc/sfc_port.c   |   5 +-
 drivers/net/sfc/sfc_rx.c     | 111 +++++++++++++++++++++++++++++--------------
 drivers/net/sfc/sfc_rx.h     |   1 +
 4 files changed, 103 insertions(+), 41 deletions(-)

diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 851b38b..0244a0f 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -104,7 +104,15 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	/* By default packets are dropped if no descriptors are available */
 	dev_info->default_rxconf.rx_drop_en = 1;
 
-	dev_info->rx_offload_capa = sfc_rx_get_dev_offload_caps(sa);
+	dev_info->rx_queue_offload_capa = sfc_rx_get_queue_offload_caps(sa);
+
+	/*
+	 * rx_offload_capa includes both device and queue offloads since
+	 * the latter may be requested on a per device basis which makes
+	 * sense when some offloads are needed to be set on all queues.
+	 */
+	dev_info->rx_offload_capa = sfc_rx_get_dev_offload_caps(sa) |
+				    dev_info->rx_queue_offload_capa;
 
 	dev_info->tx_offload_capa =
 		DEV_TX_OFFLOAD_IPV4_CKSUM |
@@ -882,7 +890,13 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 	 * The driver does not use it, but other PMDs update jumbo_frame
 	 * flag and max_rx_pkt_len when MTU is set.
 	 */
-	dev->data->dev_conf.rxmode.jumbo_frame = (mtu > ETHER_MAX_LEN);
+	if (mtu > ETHER_MAX_LEN) {
+		struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+
+		rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		rxmode->jumbo_frame = 1;
+	}
+
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = sa->port.pdu;
 
 	sfc_adapter_unlock(sa);
@@ -1045,8 +1059,13 @@ sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 	qinfo->conf.rx_free_thresh = rxq->refill_threshold;
 	qinfo->conf.rx_drop_en = 1;
 	qinfo->conf.rx_deferred_start = rxq_info->deferred_start;
-	qinfo->scattered_rx =
-		((rxq_info->type_flags & EFX_RXQ_FLAG_SCATTER) != 0);
+	qinfo->conf.offloads = DEV_RX_OFFLOAD_IPV4_CKSUM |
+			       DEV_RX_OFFLOAD_UDP_CKSUM |
+			       DEV_RX_OFFLOAD_TCP_CKSUM;
+	if (rxq_info->type_flags & EFX_RXQ_FLAG_SCATTER) {
+		qinfo->conf.offloads |= DEV_RX_OFFLOAD_SCATTER;
+		qinfo->scattered_rx = 1;
+	}
 	qinfo->nb_desc = rxq_info->entries;
 
 	sfc_adapter_unlock(sa);
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index a48388d..c423f52 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -299,11 +299,12 @@ sfc_port_configure(struct sfc_adapter *sa)
 {
 	const struct rte_eth_dev_data *dev_data = sa->eth_dev->data;
 	struct sfc_port *port = &sa->port;
+	const struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
 
 	sfc_log_init(sa, "entry");
 
-	if (dev_data->dev_conf.rxmode.jumbo_frame)
-		port->pdu = dev_data->dev_conf.rxmode.max_rx_pkt_len;
+	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+		port->pdu = rxmode->max_rx_pkt_len;
 	else
 		port->pdu = EFX_MAC_PDU(dev_data->mtu);
 
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index d35f4f7..abc53fb 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -768,6 +768,8 @@ sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa)
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
 	uint64_t caps = 0;
 
+	caps |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	caps |= DEV_RX_OFFLOAD_CRC_STRIP;
 	caps |= DEV_RX_OFFLOAD_IPV4_CKSUM;
 	caps |= DEV_RX_OFFLOAD_UDP_CKSUM;
 	caps |= DEV_RX_OFFLOAD_TCP_CKSUM;
@@ -779,10 +781,62 @@ sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa)
 	return caps;
 }
 
+uint64_t
+sfc_rx_get_queue_offload_caps(struct sfc_adapter *sa)
+{
+	uint64_t caps = 0;
+
+	if (sa->dp_rx->features & SFC_DP_RX_FEAT_SCATTER)
+		caps |= DEV_RX_OFFLOAD_SCATTER;
+
+	return caps;
+}
+
+static void
+sfc_rx_log_offloads(struct sfc_adapter *sa, const char *offload_group,
+		    const char *verdict, uint64_t offloads)
+{
+	unsigned long long bit;
+
+	while ((bit = __builtin_ffsll(offloads)) != 0) {
+		uint64_t flag = (1ULL << --bit);
+
+		sfc_err(sa, "Rx %s offload %s %s", offload_group,
+			rte_eth_dev_rx_offload_name(flag), verdict);
+
+		offloads &= ~flag;
+	}
+}
+
+static boolean_t
+sfc_rx_queue_offloads_mismatch(struct sfc_adapter *sa, uint64_t requested)
+{
+	uint64_t mandatory = sa->eth_dev->data->dev_conf.rxmode.offloads;
+	uint64_t supported = sfc_rx_get_dev_offload_caps(sa) |
+			     sfc_rx_get_queue_offload_caps(sa);
+	uint64_t rejected = requested & ~supported;
+	uint64_t missing = (requested & mandatory) ^ mandatory;
+	boolean_t mismatch = B_FALSE;
+
+	if (rejected) {
+		sfc_rx_log_offloads(sa, "queue", "is unsupported", rejected);
+		mismatch = B_TRUE;
+	}
+
+	if (missing) {
+		sfc_rx_log_offloads(sa, "queue", "must be set", missing);
+		mismatch = B_TRUE;
+	}
+
+	return mismatch;
+}
+
 static int
 sfc_rx_qcheck_conf(struct sfc_adapter *sa, unsigned int rxq_max_fill_level,
 		   const struct rte_eth_rxconf *rx_conf)
 {
+	uint64_t offloads_supported = sfc_rx_get_dev_offload_caps(sa) |
+				      sfc_rx_get_queue_offload_caps(sa);
 	int rc = 0;
 
 	if (rx_conf->rx_thresh.pthresh != 0 ||
@@ -804,6 +858,17 @@ sfc_rx_qcheck_conf(struct sfc_adapter *sa, unsigned int rxq_max_fill_level,
 		rc = EINVAL;
 	}
 
+	if ((rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM) !=
+	    DEV_RX_OFFLOAD_CHECKSUM)
+		sfc_warn(sa, "Rx checksum offloads cannot be disabled - always on (IPv4/TCP/UDP)");
+
+	if ((offloads_supported & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+	    (~rx_conf->offloads & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+		sfc_warn(sa, "Rx outer IPv4 checksum offload cannot be disabled - always on");
+
+	if (sfc_rx_queue_offloads_mismatch(sa, rx_conf->offloads))
+		rc = EINVAL;
+
 	return rc;
 }
 
@@ -946,7 +1011,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	}
 
 	if ((buf_size < sa->port.pdu + encp->enc_rx_prefix_size) &&
-	    !sa->eth_dev->data->dev_conf.rxmode.enable_scatter) {
+	    (~rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)) {
 		sfc_err(sa, "Rx scatter is disabled and RxQ %u mbuf pool "
 			"object size is too small", sw_index);
 		sfc_err(sa, "RxQ %u calculated Rx buffer size is %u vs "
@@ -964,7 +1029,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	rxq_info->entries = rxq_entries;
 	rxq_info->type = EFX_RXQ_TYPE_DEFAULT;
 	rxq_info->type_flags =
-		sa->eth_dev->data->dev_conf.rxmode.enable_scatter ?
+		(rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) ?
 		EFX_RXQ_FLAG_SCATTER : EFX_RXQ_FLAG_NONE;
 
 	if ((encp->enc_tunnel_encapsulations_supported != 0) &&
@@ -1227,6 +1292,9 @@ sfc_rx_qinit_info(struct sfc_adapter *sa, unsigned int sw_index)
 static int
 sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
 {
+	uint64_t offloads_supported = sfc_rx_get_dev_offload_caps(sa) |
+				      sfc_rx_get_queue_offload_caps(sa);
+	uint64_t offloads_rejected = rxmode->offloads & ~offloads_supported;
 	int rc = 0;
 
 	switch (rxmode->mq_mode) {
@@ -1247,45 +1315,18 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
 		rc = EINVAL;
 	}
 
-	if (rxmode->header_split) {
-		sfc_err(sa, "Header split on Rx not supported");
-		rc = EINVAL;
-	}
-
-	if (rxmode->hw_vlan_filter) {
-		sfc_err(sa, "HW VLAN filtering not supported");
-		rc = EINVAL;
-	}
-
-	if (rxmode->hw_vlan_strip) {
-		sfc_err(sa, "HW VLAN stripping not supported");
+	if (offloads_rejected) {
+		sfc_rx_log_offloads(sa, "device", "is unsupported",
+				    offloads_rejected);
 		rc = EINVAL;
 	}
 
-	if (rxmode->hw_vlan_extend) {
-		sfc_err(sa,
-			"Q-in-Q HW VLAN stripping not supported");
-		rc = EINVAL;
-	}
-
-	if (!rxmode->hw_strip_crc) {
-		sfc_warn(sa,
-			"FCS stripping control not supported - always stripped");
+	if (~rxmode->offloads & DEV_RX_OFFLOAD_CRC_STRIP) {
+		sfc_warn(sa, "FCS stripping cannot be disabled - always on");
+		rxmode->offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
 		rxmode->hw_strip_crc = 1;
 	}
 
-	if (rxmode->enable_scatter &&
-	    (~sa->dp_rx->features & SFC_DP_RX_FEAT_SCATTER)) {
-		sfc_err(sa, "Rx scatter not supported by %s datapath",
-			sa->dp_rx->dp.name);
-		rc = EINVAL;
-	}
-
-	if (rxmode->enable_lro) {
-		sfc_err(sa, "LRO not supported");
-		rc = EINVAL;
-	}
-
 	return rc;
 }
 
diff --git a/drivers/net/sfc/sfc_rx.h b/drivers/net/sfc/sfc_rx.h
index cc9245f..8c0fa71 100644
--- a/drivers/net/sfc/sfc_rx.h
+++ b/drivers/net/sfc/sfc_rx.h
@@ -143,6 +143,7 @@ int sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index);
 void sfc_rx_qstop(struct sfc_adapter *sa, unsigned int sw_index);
 
 uint64_t sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa);
+uint64_t sfc_rx_get_queue_offload_caps(struct sfc_adapter *sa);
 
 void sfc_rx_qflush_done(struct sfc_rxq *rxq);
 void sfc_rx_qflush_failed(struct sfc_rxq *rxq);
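The queue-level validation added in sfc_rx_queue_offloads_mismatch() above reduces to two bit-mask tests: bits requested but not supported are rejected, and port-level (device) offloads that the queue request fails to repeat are reported as missing. A standalone illustration of the same mask logic follows; the helper name and sample bit values are hypothetical, not part of the driver.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical helper mirroring the mask logic of
 * sfc_rx_queue_offloads_mismatch(): a queue request fails if it asks for
 * an unsupported offload or drops one fixed at the port (device) level.
 */
static bool
rx_queue_offloads_ok(uint64_t requested, uint64_t supported,
		     uint64_t port_level)
{
	uint64_t rejected = requested & ~supported;
	uint64_t missing = (requested & port_level) ^ port_level;

	return rejected == 0 && missing == 0;
}

int
main(void)
{
	/* Sample masks: bit 0 = checksum, bit 1 = scatter, bit 2 = LRO */
	uint64_t supported = 0x3;
	uint64_t port_level = 0x1;

	printf("checksum+scatter: %d\n",
	       rx_queue_offloads_ok(0x3, supported, port_level)); /* 1 */
	printf("LRO requested:    %d\n",
	       rx_queue_offloads_ok(0x4, supported, port_level)); /* 0 */
	printf("checksum dropped: %d\n",
	       rx_queue_offloads_ok(0x2, supported, port_level)); /* 0 */
	return 0;
}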