From patchwork Thu Nov 23 16:13:05 2017
X-Patchwork-Submitter: Nélio Laranjeiro
X-Patchwork-Id: 31597
From: Nelio Laranjeiro
To: dev@dpdk.org
Cc: Aviad Yehezkel, Yongseok Koh, Adrien Mazarguil
Date: Thu, 23 Nov 2017 17:13:05 +0100
Message-Id: <78bfa76414abebb8436c1c1000a180cb6b01e0ac.1511453340.git.nelio.laranjeiro@6wind.com>
Subject: [dpdk-dev] [PATCH v1 3/7] net/mlx5: add IPsec Tx/Rx offload support

From: Aviad Yehezkel

This feature is only supported by the ConnectX-4 Lx INNOVA NIC. Having such
support automatically enables or disables the crypto offload device arguments
to make the PMD IPsec capable.
Signed-off-by: Aviad Yehezkel
Signed-off-by: Nelio Laranjeiro
---
 drivers/net/mlx5/mlx5.c        |   8 ++++
 drivers/net/mlx5/mlx5.h        |   1 +
 drivers/net/mlx5/mlx5_ethdev.c |   4 ++
 drivers/net/mlx5/mlx5_prm.h    |  39 ++++++++++++++++
 drivers/net/mlx5/mlx5_rxtx.c   | 104 ++++++++++++++++++++++++++++++++++++++---
 drivers/net/mlx5/mlx5_rxtx.h   |   4 +-
 drivers/net/mlx5/mlx5_txq.c    |   1 +
 7 files changed, 154 insertions(+), 7 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index cd66fe162..00480cef0 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -106,6 +106,14 @@
 #define MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP (1 << 4)
 #endif
 
+#ifdef HAVE_IBV_IPSEC_SUPPORT
+#define MLX5_IPSEC_FLAGS \
+	(MLX5DV_CONTEXT_XFRM_FLAGS_ESP_AES_GCM_TX | \
+	 MLX5DV_CONTEXT_XFRM_FLAGS_ESP_AES_GCM_RX | \
+	 MLX5DV_CONTEXT_XFRM_FLAGS_ESP_AES_GCM_REQ_METADATA | \
+	 MLX5DV_CONTEXT_XFRM_FLAGS_ESP_AES_GCM_SPI_RSS_ONLY)
+#endif
+
 struct mlx5_args {
 	int cqe_comp;
 	int txq_inline;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e6a69b823..c6a01d972 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -117,6 +117,7 @@ struct priv {
 	unsigned int isolated:1; /* Whether isolated mode is enabled. */
 	unsigned int tx_vec_en:1; /* Whether Tx vector is enabled. */
 	unsigned int rx_vec_en:1; /* Whether Rx vector is enabled. */
+	unsigned int ipsec_en:1; /* Whether IPsec is enabled. */
 	unsigned int counter_set_supported:1; /* Counter set is supported. */
 	/* Whether Tx offloads for tunneled packets are supported. */
 	unsigned int max_tso_payload_sz; /* Maximum TCP payload for TSO. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index ca9ad0fef..f0c7fba43 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -680,6 +680,10 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 				      DEV_TX_OFFLOAD_TCP_CKSUM);
 	if (priv->tso)
 		info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+	if (priv->ipsec_en) {
+		info->tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+		info->rx_offload_capa |= DEV_RX_OFFLOAD_SECURITY;
+	}
 	if (priv->tunnel_en)
 		info->tx_offload_capa |= (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
 					  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index 2de310bcb..bd6270671 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -342,4 +342,43 @@ mlx5_flow_mark_get(uint32_t val)
 #endif
 }
 
+/* IPsec offloads elements. */
+
+/* IPsec Rx code. */
+#define MLX5_IPSEC_RX_DECRYPTED 0x11
+#define MLX5_IPSEC_RX_AUTH_FAIL 0x12
+
+/* IPsec Tx code. */
+#define MLX5_IPSEC_TX_OFFLOAD 0x8
+
+/* Metadata length. */
+#define MLX5_METADATA_LEN 8
+
+/* Packet IPsec Rx metadata. */
+struct mlx5_rx_pkt_ipsec_metadata {
+	uint8_t reserved;
+	rte_be32_t sa_handle;
+} __rte_packed;
+
+/* Packet IPsec Tx metadata. */
+struct mlx5_tx_pkt_ipsec_metadata {
+	rte_be16_t mss_inv; /* MSS fixed point, used only in LSO. */
+	rte_be16_t seq; /* LSBs of the first TCP seq, only in LSO. */
+	uint8_t esp_next_proto; /* Next protocol of ESP. */
+} __rte_packed;
+
+/* Packet Metadata. */
+struct mlx5_pkt_metadata {
+	uint8_t syndrome;
+	union {
+		uint8_t raw[5];
+		struct mlx5_rx_pkt_ipsec_metadata rx;
+		struct mlx5_tx_pkt_ipsec_metadata tx;
+	} __rte_packed;
+	rte_be16_t ethertype;
+} __rte_packed;
+
+static_assert(sizeof(struct mlx5_pkt_metadata) == MLX5_METADATA_LEN,
+	      "wrong metadata size detected.");
+
 #endif /* RTE_PMD_MLX5_PRM_H_ */
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 28c0ad8ab..91ceb3c55 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -344,6 +344,7 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 	unsigned int j = 0;
 	unsigned int k = 0;
 	uint16_t max_elts;
+	const unsigned int ipsec_en = txq->ipsec_en;
 	uint16_t max_wqe;
 	unsigned int comp;
 	volatile struct mlx5_wqe_ctrl *last_wqe = NULL;
@@ -417,14 +418,43 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			    rte_pktmbuf_mtod(*(pkts + 1), volatile void *));
 		cs_flags = txq_ol_cksum_to_cs(txq, buf);
 		raw = ((uint8_t *)(uintptr_t)wqe) + 2 * MLX5_WQE_DWORD_SIZE;
-		/* Replace the Ethernet type by the VLAN if necessary. */
+		addr += 2;
+		length -= 2;
+		/* Handle IPsec offload. */
+		if (ipsec_en && (buf->ol_flags & PKT_TX_SEC_OFFLOAD)) {
+			struct mlx5_pkt_metadata mdata = {
+				.syndrome = MLX5_IPSEC_TX_OFFLOAD,
+				.tx.esp_next_proto = buf->inner_esp_next_proto,
+			};
+			unsigned int len = 2 * ETHER_ADDR_LEN - 2;
+			rte_be16_t ethertype =
+				rte_cpu_to_be_16(ETHER_TYPE_MLNX);
+
+			if (buf->ol_flags & PKT_TX_TCP_CKSUM) {
+				txq->stats.oerrors++;
+				break;
+			}
+			/* Copy Destination and source mac address. */
+			memcpy((uint8_t *)raw, ((uint8_t *)addr), len);
+			raw += len;
+			addr += len;
+			length -= len;
+			/* Copy Metadata. */
+			memcpy((uint8_t *)raw, &ethertype, 2);
+			memcpy((uint8_t *)raw + 2, &mdata,
+			       MLX5_METADATA_LEN - 2);
+			memcpy((uint8_t *)raw + MLX5_METADATA_LEN,
+			       (uint8_t *)addr, 2);
+			addr += 2;
+			len -= 2;
+			raw += MLX5_METADATA_LEN + 2;
+			pkt_inline_sz += MLX5_METADATA_LEN;
+		}
 		if (buf->ol_flags & PKT_TX_VLAN_PKT) {
 			uint32_t vlan = rte_cpu_to_be_32(0x81000000 |
 							 buf->vlan_tci);
 			unsigned int len = 2 * ETHER_ADDR_LEN - 2;
 
-			addr += 2;
-			length -= 2;
 			/* Copy Destination and source mac address. */
 			memcpy((uint8_t *)raw, ((uint8_t *)addr), len);
 			/* Copy VLAN. */
@@ -435,10 +465,10 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			addr += len + 2;
 			length -= (len + 2);
 		} else {
-			memcpy((uint8_t *)raw, ((uint8_t *)addr) + 2,
+			memcpy((uint8_t *)raw, ((uint8_t *)addr),
 			       MLX5_WQE_DWORD_SIZE);
-			length -= pkt_inline_sz;
-			addr += pkt_inline_sz;
+			length -= MLX5_WQE_DWORD_SIZE;
+			addr += MLX5_WQE_DWORD_SIZE;
 		}
 		raw += MLX5_WQE_DWORD_SIZE;
 		if (txq->tso_en) {
@@ -1572,6 +1602,59 @@ mlx5_tx_burst_empw(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 }
 
 /**
+ * Process an IPsec encrypted packet.
+ *
+ * @param pkt
+ *   Pointer to the first segment of the packet.
+ * @param len
+ *   Already gathered packet length.
+ *
+ * @return
+ *   New packet length on success, 0 on failure.
+ */
+static __rte_noinline int
+mlx5_rx_handle_ipsec(struct rte_mbuf *pkt, int len)
+{
+	struct mlx5_pkt_metadata *mdata;
+	struct ether_hdr *eth;
+
+	if (len < ETHER_HDR_LEN + MLX5_METADATA_LEN)
+		return 0;
+	eth = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
+	if (eth->ether_type != rte_cpu_to_be_16(ETHER_TYPE_MLNX))
+		goto out;
+	/* Use the metadata. */
+	mdata = rte_pktmbuf_mtod_offset(pkt, struct mlx5_pkt_metadata *,
+					ETHER_HDR_LEN);
+	if (mdata->syndrome == MLX5_IPSEC_RX_DECRYPTED)
+		pkt->ol_flags |= PKT_RX_SEC_OFFLOAD;
+	else if (mdata->syndrome == MLX5_IPSEC_RX_AUTH_FAIL)
+		pkt->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+	else
+		return 0;
+	/*
+	 * Move the data from the buffer:
+	 *
+	 *    6B    6B     2B     6B    2B
+	 * +-----+-----+-------+----+-------+
+	 * | DST | SRC | MType | MD | EType |
+	 * +-----+-----+-------+----+-------+
+	 *
+	 * to:
+	 *      6B       6B    6B     2B
+	 * +---------+-----+-----+-------+
+	 * | Garbage | DST | SRC | EType |
+	 * +---------+-----+-----+-------+
+	 */
+	memmove((void *)((uintptr_t)mdata - 6), eth, 2 * ETHER_ADDR_LEN);
+	/* Ethertype is already in its new place. */
+	rte_pktmbuf_adj(pkt, MLX5_METADATA_LEN);
+	len -= MLX5_METADATA_LEN;
+out:
+	return len;
+}
+
+/**
  * Translate RX completion flags to packet type.
  *
  * @param[in] cqe
@@ -1772,6 +1855,7 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 	const unsigned int wqe_cnt = (1 << rxq->elts_n) - 1;
 	const unsigned int cqe_cnt = (1 << rxq->cqe_n) - 1;
 	const unsigned int sges_n = rxq->sges_n;
+	const unsigned int ipsec_en = rxq->ipsec_en;
 	struct rte_mbuf *pkt = NULL;
 	struct rte_mbuf *seg = NULL;
 	volatile struct mlx5_cqe *cqe =
@@ -1864,6 +1948,14 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			}
 			if (rxq->crc_present)
 				len -= ETHER_CRC_LEN;
+			if (ipsec_en) {
+				len = mlx5_rx_handle_ipsec(pkt, len);
+				if (unlikely(len == 0)) {
+					rte_mbuf_raw_free(rep);
+					++rxq->stats.idropped;
+					goto skip;
+				}
+			}
 			PKT_LEN(pkt) = len;
 		}
 		DATA_LEN(rep) = DATA_LEN(seg);
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index b8c7925a3..e219d01ee 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -115,7 +115,8 @@ struct mlx5_rxq_data {
 	unsigned int rss_hash:1; /* RSS hash result is enabled. */
 	unsigned int mark:1; /* Marked flow available on the queue. */
 	unsigned int pending_err:1; /* CQE error needs to be handled. */
-	unsigned int :14; /* Remaining bits. */
+	unsigned int ipsec_en:1; /* IPsec is enabled on this queue. */
+	unsigned int :13; /* Remaining bits. */
 	volatile uint32_t *rq_db;
 	volatile uint32_t *cq_db;
 	uint16_t port_id;
@@ -195,6 +196,7 @@ struct mlx5_txq_data {
 	uint16_t tunnel_en:1;
 	/* When set TX offload for tunneled packets are supported. */
 	uint16_t mpw_hdr_dseg:1; /* Enable DSEGs in the title WQEBB. */
+	uint16_t ipsec_en:1; /* Whether IPsec Tx offload is enabled. */
 	uint16_t max_inline; /* Multiple of RTE_CACHE_LINE_SIZE to inline. */
 	uint16_t inline_max_packet_sz; /* Max packet size for inlining. */
 	uint16_t mr_cache_idx; /* Index of last hit entry. */
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index a786a6b63..4d53c6da5 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -646,6 +646,7 @@ mlx5_priv_txq_new(struct priv *priv, uint16_t idx, uint16_t desc,
 	}
 	if (priv->tunnel_en)
 		tmpl->txq.tunnel_en = 1;
+	tmpl->txq.ipsec_en = priv->ipsec_en;
 	tmpl->txq.elts =
 		(struct rte_mbuf *(*)[1 << tmpl->txq.elts_n])(tmpl + 1);
 	tmpl->txq.stats.idx = idx;