From patchwork Tue Oct 31 10:31:04 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Adrien Mazarguil
X-Patchwork-Id: 31049
X-Patchwork-Delegate: ferruh.yigit@amd.com
Date: Tue, 31 Oct 2017 11:31:04 +0100
From: Adrien Mazarguil <adrien.mazarguil@6wind.com>
To: Ferruh Yigit
Cc: dev@dpdk.org
Message-ID: <1509445707-19349-1-git-send-email-adrien.mazarguil@6wind.com>
Content-Disposition: inline
X-Mailer: git-send-email 2.1.4
Subject: [dpdk-dev] [PATCH 1/2] net/mlx4: fix Rx after updating number of queues

When not in isolated mode, internal flow rules are automatically
maintained by the PMD to receive traffic according to global device
settings (MAC, VLAN, promiscuous mode and so on).

Since RSS support was added to the mix, it must also check whether the
Rx queue configuration has changed when refreshing flow rules, to
prevent the following from happening:

- With a smaller number of Rx queues, traffic is implicitly dropped
  since the existing RSS context cannot be re-applied.
- With a larger number of Rx queues, traffic remains balanced within
  the original (smaller) set of queues.

One workaround before this commit was to temporarily enter and leave
isolated mode to force the PMD to regenerate internal flow rules.

Fixes: 7d8675956f57 ("net/mlx4: add RSS support outside flow API")

Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
---
 drivers/net/mlx4/mlx4_flow.c | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 7a6097f..86bac1b 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -1342,6 +1342,7 @@ mlx4_flow_internal(struct priv *priv, struct rte_flow_error *error)
 			assert(flow->ibv_attr->type == IBV_FLOW_ATTR_NORMAL);
 			assert(flow->ibv_attr->num_of_specs == 1);
 			assert(eth->type == IBV_FLOW_SPEC_ETH);
+			assert(flow->rss);
 			if (rule_vlan &&
 			    (eth->val.vlan_tag != *rule_vlan ||
 			     eth->mask.vlan_tag != RTE_BE16(0x0fff)))
@@ -1354,8 +1355,13 @@ mlx4_flow_internal(struct priv *priv, struct rte_flow_error *error)
 				    eth->val.src_mac[j] != UINT8_C(0x00) ||
 				    eth->mask.src_mac[j] != UINT8_C(0x00))
 					break;
-			if (j == sizeof(mac->addr_bytes))
-				break;
+			if (j != sizeof(mac->addr_bytes))
+				continue;
+			if (flow->rss->queues != queues ||
+			    memcmp(flow->rss->queue_id, rss_conf->queue,
+				   queues * sizeof(flow->rss->queue_id[0])))
+				continue;
+			break;
 		}
 		if (!flow || !flow->internal) {
 			/* Not found, create a new flow rule. */
@@ -1389,6 +1395,13 @@ mlx4_flow_internal(struct priv *priv, struct rte_flow_error *error)
 					break;
 			}
 		}
+		if (flow && flow->internal) {
+			assert(flow->rss);
+			if (flow->rss->queues != queues ||
+			    memcmp(flow->rss->queue_id, rss_conf->queue,
+				   queues * sizeof(flow->rss->queue_id[0])))
+				flow = NULL;
+		}
 		if (!flow || !flow->internal) {
 			/* Not found, create a new flow rule. */
 			if (priv->dev->data->promiscuous) {
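
For readers following along, here is a small self-contained sketch of the
check the hunks above introduce: an existing internal flow rule is reused
only when the RSS queue set recorded in the rule still matches the queue
set rebuilt from the current device configuration. The struct and function
names below (rss_ctx, rule_rss_matches) are illustrative only and do not
exist in the mlx4 PMD; the real code compares flow->rss->queues and
flow->rss->queue_id against rss_conf->queue.

/* Illustrative sketch only: not part of the patch or of the mlx4 PMD. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for the per-rule RSS context. */
struct rss_ctx {
	uint32_t queues;       /* Number of Rx queues targeted by the rule. */
	uint16_t queue_id[16]; /* Target Rx queue indices. */
};

/*
 * Return true when an existing rule still targets exactly the requested
 * queue set and can be reused; otherwise the caller must discard it and
 * create a new rule for the updated configuration.
 */
static bool
rule_rss_matches(const struct rss_ctx *rule,
		 const uint16_t *queue_id, uint32_t queues)
{
	if (rule->queues != queues)
		return false; /* Queue count changed since the rule was created. */
	return memcmp(rule->queue_id, queue_id,
		      queues * sizeof(rule->queue_id[0])) == 0;
}

When either the queue count or any queue index differs, the rule is treated
as not found, so the refresh path discards it and creates a replacement
covering the new queue set, which is the same effect previously obtained by
toggling isolated mode.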