gjdwebserver-overlay/sys-kernel/pinephone-sources/files/PATCH-v1-08-14-mm-vmscan.c-refactor-shrink_node.patch

From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <linux-kernel-owner@kernel.org>
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
	aws-us-west-2-korg-lkml-1.web.codeaurora.org
X-Spam-Level:
X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED,
	DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,
	INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,
	USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no
	version=3.4.0
Received: from mail.kernel.org (mail.kernel.org [198.145.29.99])
	by smtp.lore.kernel.org (Postfix) with ESMTP id CBC4DC4332D
	for <linux-kernel@archiver.kernel.org>; Sat, 13 Mar 2021 07:59:06 +0000 (UTC)
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
	by mail.kernel.org (Postfix) with ESMTP id B8A3D64ECE
	for <linux-kernel@archiver.kernel.org>; Sat, 13 Mar 2021 07:59:06 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S233468AbhCMH6l (ORCPT <rfc822;linux-kernel@archiver.kernel.org>);
	Sat, 13 Mar 2021 02:58:41 -0500
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58990 "EHLO
	lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S232705AbhCMH6I (ORCPT
	<rfc822;linux-kernel@vger.kernel.org>);
	Sat, 13 Mar 2021 02:58:08 -0500
Received: from mail-qk1-x749.google.com (mail-qk1-x749.google.com [IPv6:2607:f8b0:4864:20::749])
	by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A5BF1C061761
	for <linux-kernel@vger.kernel.org>; Fri, 12 Mar 2021 23:58:07 -0800 (PST)
Received: by mail-qk1-x749.google.com with SMTP id i188so19867209qkd.7
	for <linux-kernel@vger.kernel.org>; Fri, 12 Mar 2021 23:58:07 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20161025;
	h=date:in-reply-to:message-id:mime-version:references:subject:from:to
	 :cc;
	bh=Y7HI6Iu8cEpN4/cKvfcV1g+GhNxN+RuODBCey3P9ub4=;
	b=Yxxt5mdzkudc9JV5Q09FDlj0cvp1kmfCXxaVjGD5K0eCTz/So1vsV5U0rgmXDgA+oo
	 I0A6T+8UZJmQiHjXDXECTBckUL0jnXMRgRm7XVaG53YB34iWjqO40F4CKKEh9+/OoVRc
	 G0AmzhscUbBJ/kam3+ejsn6XT8HmEmL1E1JV5ccYQd+ZntwFpRDTuwloc8JJwbzlOPz6
	 /TmQCN3DlJawauHo+hK8esqlLBqlUo0Hc3ctZf6lI6jfTJ6xP19kgHg147e6ddt/0Gru
	 4TRJ0LLQctTAhahAckLyUPLY8X0ofJQRXl5IFVx653ZZtEjcg5D9kIYZI+E7lddJGHTM
	 WCSQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20161025;
	h=x-gm-message-state:date:in-reply-to:message-id:mime-version
	 :references:subject:from:to:cc;
	bh=Y7HI6Iu8cEpN4/cKvfcV1g+GhNxN+RuODBCey3P9ub4=;
	b=IhPlACmRhDBTPdmCHr0Oo1gV9hWUVpXkJQAh9lb+wytYYH7q3Kq7e+gKEZ0WvXAxP5
	 HCkzZEXo0KeDSrSNfpNxh8YUrLoNxRyc67U6jB0GE7MI8jH4hpyANf1w/gGK8VfBJVP6
	 HEd0MgT957tjxC4w5s2bFihlDvVujIJLQPYS+KO24VxvtbXLbjOayAHo8fH37VLxQ9U2
	 nOkP5ADzrpG+7bfsI2JutdLL5J6KjOX/ix5YTdFiiS1XlDgF+UDcwUkwnju04+gULHCK
	 SWuoOrT0DOfqlcQ4ome3RlXCB1UU/RCIPUUA06Y1a7h09m5zX6C1pqv0hnFWEk1Ehj8t
	 sE+g==
X-Gm-Message-State: AOAM533FjF5wfsmOCevkWqM2Zy0aRGz/n1+9ldB7hstZ21MlGbAcnU9S
	iwofL/8D5KT2K6tTLoKvMx4do3+86Ss=
X-Google-Smtp-Source: ABdhPJwaHhrMfJqAPFj0pJIH6XbHZzHvvKGA6xcQa4GVhmvHaXMnKKa3+FEHfHSk+8gEmFOMqKskjMN6bjo=
X-Received: from yuzhao.bld.corp.google.com ([2620:15c:183:200:f931:d3e4:faa0:4f74])
	(user=yuzhao job=sendgmr) by 2002:ad4:50c7:: with SMTP id e7mr1747988qvq.58.1615622286760;
	Fri, 12 Mar 2021 23:58:06 -0800 (PST)
Date: Sat, 13 Mar 2021 00:57:41 -0700
In-Reply-To: <20210313075747.3781593-1-yuzhao@google.com>
Message-Id: <20210313075747.3781593-9-yuzhao@google.com>
Mime-Version: 1.0
References: <20210313075747.3781593-1-yuzhao@google.com>
X-Mailer: git-send-email 2.31.0.rc2.261.g7f71774620-goog
Subject: [PATCH v1 08/14] mm/vmscan.c: refactor shrink_node()
From: Yu Zhao <yuzhao@google.com>
To: linux-mm@kvack.org
Cc: Alex Shi <alex.shi@linux.alibaba.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Hillf Danton <hdanton@sina.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Matthew Wilcox <willy@infradead.org>,
	Mel Gorman <mgorman@suse.de>, Michal Hocko <mhocko@suse.com>,
	Roman Gushchin <guro@fb.com>, Vlastimil Babka <vbabka@suse.cz>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	Yang Shi <shy828301@gmail.com>,
	Ying Huang <ying.huang@intel.com>,
	linux-kernel@vger.kernel.org, page-reclaim@google.com,
	Yu Zhao <yuzhao@google.com>
Content-Type: text/plain; charset="UTF-8"
Precedence: bulk
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org
Archived-At: <https://lore.kernel.org/lkml/20210313075747.3781593-9-yuzhao@google.com/>
List-Archive: <https://lore.kernel.org/lkml/>
List-Post: <mailto:linux-kernel@vger.kernel.org>

The heuristics in shrink_node() that prepare the scan balance are
largely independent of the rest of the function. Move them into a
separate function, prepare_scan_count(), to improve readability.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
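Note for reviewers: the hunks below move three independent groups of
heuristics out of shrink_node() into the new prepare_scan_count():
sampling anon_cost/file_cost for the scan balance, deciding from
refaults whether the anon and file active lists may be deactivated,
and setting cache_trim_mode and file_is_tiny. The stand-alone C sketch
below illustrates the same refactoring pattern; the field set and the
helper's signature are deliberately simplified and do not match the
kernel's (the real helper takes pg_data_t * and struct scan_control *).

#include <stdbool.h>

/* Stubbed-down stand-in for the kernel's struct scan_control. */
struct scan_control {
	int priority;		/* reclaim priority, DEF_PRIORITY (12) down to 0 */
	bool cache_trim_mode;	/* prefer trimming plentiful inactive file pages */
};

/* The helper owns every "how should this node be scanned?" decision... */
static void prepare_scan_count(unsigned long inactive_file,
			       struct scan_control *sc)
{
	/* simplified form of the cache_trim_mode test moved by this patch */
	sc->cache_trim_mode = (inactive_file >> sc->priority) != 0;
}

/* ...so the caller reduces to a flat sequence of high-level steps. */
static void shrink_node(unsigned long inactive_file, struct scan_control *sc)
{
	prepare_scan_count(inactive_file, sc);
	/* reclaim would proceed here, consulting sc->cache_trim_mode */
}
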
 mm/vmscan.c | 186 +++++++++++++++++++++++++++-------------------------
 1 file changed, 98 insertions(+), 88 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 562e87cbd7a1..1a24d2e0a4cb 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2224,6 +2224,103 @@ enum scan_balance {
 	SCAN_FILE,
 };
 
+static void prepare_scan_count(pg_data_t *pgdat, struct scan_control *sc)
+{
+	unsigned long file;
+	struct lruvec *target_lruvec;
+
+	target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
+
+	/*
+	 * Determine the scan balance between anon and file LRUs.
+	 */
+	spin_lock_irq(&target_lruvec->lru_lock);
+	sc->anon_cost = target_lruvec->anon_cost;
+	sc->file_cost = target_lruvec->file_cost;
+	spin_unlock_irq(&target_lruvec->lru_lock);
+
+	/*
+	 * Target desirable inactive:active list ratios for the anon
+	 * and file LRU lists.
+	 */
+	if (!sc->force_deactivate) {
+		unsigned long refaults;
+
+		refaults = lruvec_page_state(target_lruvec,
+				WORKINGSET_ACTIVATE_ANON);
+		if (refaults != target_lruvec->refaults[0] ||
+		    inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
+			sc->may_deactivate |= DEACTIVATE_ANON;
+		else
+			sc->may_deactivate &= ~DEACTIVATE_ANON;
+
+		/*
+		 * When refaults are being observed, it means a new
+		 * workingset is being established. Deactivate to get
+		 * rid of any stale active pages quickly.
+		 */
+		refaults = lruvec_page_state(target_lruvec,
+				WORKINGSET_ACTIVATE_FILE);
+		if (refaults != target_lruvec->refaults[1] ||
+		    inactive_is_low(target_lruvec, LRU_INACTIVE_FILE))
+			sc->may_deactivate |= DEACTIVATE_FILE;
+		else
+			sc->may_deactivate &= ~DEACTIVATE_FILE;
+	} else
+		sc->may_deactivate = DEACTIVATE_ANON | DEACTIVATE_FILE;
+
+	/*
+	 * If we have plenty of inactive file pages that aren't
+	 * thrashing, try to reclaim those first before touching
+	 * anonymous pages.
+	 */
+	file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
+	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
+		sc->cache_trim_mode = 1;
+	else
+		sc->cache_trim_mode = 0;
+
+	/*
+	 * Prevent the reclaimer from falling into the cache trap: as
+	 * cache pages start out inactive, every cache fault will tip
+	 * the scan balance towards the file LRU. And as the file LRU
+	 * shrinks, so does the window for rotation from references.
+	 * This means we have a runaway feedback loop where a tiny
+	 * thrashing file LRU becomes infinitely more attractive than
+	 * anon pages. Try to detect this based on file LRU size.
+	 */
+	if (!cgroup_reclaim(sc)) {
+		unsigned long total_high_wmark = 0;
+		unsigned long free, anon;
+		int z;
+
+		free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
+		file = node_page_state(pgdat, NR_ACTIVE_FILE) +
+		       node_page_state(pgdat, NR_INACTIVE_FILE);
+
+		for (z = 0; z < MAX_NR_ZONES; z++) {
+			struct zone *zone = &pgdat->node_zones[z];
+
+			if (!managed_zone(zone))
+				continue;
+
+			total_high_wmark += high_wmark_pages(zone);
+		}
+
+		/*
+		 * Consider anon: if that's low too, this isn't a
+		 * runaway file reclaim problem, but rather just
+		 * extreme pressure. Reclaim as per usual then.
+		 */
+		anon = node_page_state(pgdat, NR_INACTIVE_ANON);
+
+		sc->file_is_tiny =
+			file + free <= total_high_wmark &&
+			!(sc->may_deactivate & DEACTIVATE_ANON) &&
+			anon >> sc->priority;
+	}
+}
+
 /*
  * Determine how aggressively the anon and file LRU lists should be
  * scanned. The relative value of each set of LRU lists is determined
@@ -2669,7 +2766,6 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	unsigned long nr_reclaimed, nr_scanned;
 	struct lruvec *target_lruvec;
 	bool reclaimable = false;
-	unsigned long file;
 
 	target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
 
@@ -2679,93 +2775,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	nr_reclaimed = sc->nr_reclaimed;
 	nr_scanned = sc->nr_scanned;
 
-	/*
-	 * Determine the scan balance between anon and file LRUs.
-	 */
-	spin_lock_irq(&target_lruvec->lru_lock);
-	sc->anon_cost = target_lruvec->anon_cost;
-	sc->file_cost = target_lruvec->file_cost;
-	spin_unlock_irq(&target_lruvec->lru_lock);
-
-	/*
-	 * Target desirable inactive:active list ratios for the anon
-	 * and file LRU lists.
-	 */
-	if (!sc->force_deactivate) {
-		unsigned long refaults;
-
-		refaults = lruvec_page_state(target_lruvec,
-				WORKINGSET_ACTIVATE_ANON);
-		if (refaults != target_lruvec->refaults[0] ||
-		    inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
-			sc->may_deactivate |= DEACTIVATE_ANON;
-		else
-			sc->may_deactivate &= ~DEACTIVATE_ANON;
-
-		/*
-		 * When refaults are being observed, it means a new
-		 * workingset is being established. Deactivate to get
-		 * rid of any stale active pages quickly.
-		 */
-		refaults = lruvec_page_state(target_lruvec,
-				WORKINGSET_ACTIVATE_FILE);
-		if (refaults != target_lruvec->refaults[1] ||
-		    inactive_is_low(target_lruvec, LRU_INACTIVE_FILE))
-			sc->may_deactivate |= DEACTIVATE_FILE;
-		else
-			sc->may_deactivate &= ~DEACTIVATE_FILE;
-	} else
-		sc->may_deactivate = DEACTIVATE_ANON | DEACTIVATE_FILE;
-
-	/*
-	 * If we have plenty of inactive file pages that aren't
-	 * thrashing, try to reclaim those first before touching
-	 * anonymous pages.
-	 */
-	file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
-	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
-		sc->cache_trim_mode = 1;
-	else
-		sc->cache_trim_mode = 0;
-
-	/*
-	 * Prevent the reclaimer from falling into the cache trap: as
-	 * cache pages start out inactive, every cache fault will tip
-	 * the scan balance towards the file LRU. And as the file LRU
-	 * shrinks, so does the window for rotation from references.
-	 * This means we have a runaway feedback loop where a tiny
-	 * thrashing file LRU becomes infinitely more attractive than
-	 * anon pages. Try to detect this based on file LRU size.
-	 */
-	if (!cgroup_reclaim(sc)) {
-		unsigned long total_high_wmark = 0;
-		unsigned long free, anon;
-		int z;
-
-		free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
-		file = node_page_state(pgdat, NR_ACTIVE_FILE) +
-		       node_page_state(pgdat, NR_INACTIVE_FILE);
-
-		for (z = 0; z < MAX_NR_ZONES; z++) {
-			struct zone *zone = &pgdat->node_zones[z];
-			if (!managed_zone(zone))
-				continue;
-
-			total_high_wmark += high_wmark_pages(zone);
-		}
-
-		/*
-		 * Consider anon: if that's low too, this isn't a
-		 * runaway file reclaim problem, but rather just
-		 * extreme pressure. Reclaim as per usual then.
-		 */
-		anon = node_page_state(pgdat, NR_INACTIVE_ANON);
-
-		sc->file_is_tiny =
-			file + free <= total_high_wmark &&
-			!(sc->may_deactivate & DEACTIVATE_ANON) &&
-			anon >> sc->priority;
-	}
+	prepare_scan_count(pgdat, sc);
 
 	shrink_node_memcgs(pgdat, sc);
 
--
2.31.0.rc2.261.g7f71774620-goog
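
A worked example of the two shift heuristics moved into
prepare_scan_count(), assuming 4 KiB pages (DEF_PRIORITY is 12 in the
kernel): "file >> sc->priority" is non-zero only when the inactive file
LRU holds at least 2^priority pages, so on the first reclaim pass
cache_trim_mode can be enabled only with at least 4096 pages (16 MiB)
of inactive file cache, and each harder pass halves that bar. The
"anon >> sc->priority" term in file_is_tiny gates on inactive anon the
same way. A stand-alone sketch that prints the thresholds:

#include <stdio.h>

int main(void)
{
	/* assumes 4 KiB pages; DEF_PRIORITY is 12 in include/linux/mmzone.h */
	for (int priority = 12; priority >= 9; priority--) {
		unsigned long pages = 1UL << priority;

		printf("priority %2d: file >> priority != 0 needs >= %5lu pages (%2lu MiB)\n",
		       priority, pages, pages * 4 / 1024);
	}
	return 0;
}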