gjdwebserver-overlay/sys-kernel/pinephone-sources/files/PATCH-v1-05-14-mm-swap.c-export-activate_page.patch
From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 13 Mar 2021 00:57:38 -0700
In-Reply-To: <20210313075747.3781593-1-yuzhao@google.com>
Message-Id: <20210313075747.3781593-6-yuzhao@google.com>
Mime-Version: 1.0
References: <20210313075747.3781593-1-yuzhao@google.com>
X-Mailer: git-send-email 2.31.0.rc2.261.g7f71774620-goog
Subject: [PATCH v1 05/14] mm/swap.c: export activate_page()
From: Yu Zhao <yuzhao@google.com>
To: linux-mm@kvack.org
Cc: Alex Shi <alex.shi@linux.alibaba.com>,
    Andrew Morton <akpm@linux-foundation.org>,
    Dave Hansen <dave.hansen@linux.intel.com>,
    Hillf Danton <hdanton@sina.com>,
    Johannes Weiner <hannes@cmpxchg.org>,
    Joonsoo Kim <iamjoonsoo.kim@lge.com>,
    Matthew Wilcox <willy@infradead.org>,
    Mel Gorman <mgorman@suse.de>, Michal Hocko <mhocko@suse.com>,
    Roman Gushchin <guro@fb.com>, Vlastimil Babka <vbabka@suse.cz>,
    Wei Yang <richard.weiyang@linux.alibaba.com>,
    Yang Shi <shy828301@gmail.com>,
    Ying Huang <ying.huang@intel.com>,
    linux-kernel@vger.kernel.org, page-reclaim@google.com,
    Yu Zhao <yuzhao@google.com>
Content-Type: text/plain; charset="UTF-8"
Precedence: bulk
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org
Archived-At: <https://lore.kernel.org/lkml/20210313075747.3781593-6-yuzhao@google.com/>
List-Archive: <https://lore.kernel.org/lkml/>
List-Post: <mailto:linux-kernel@vger.kernel.org>

Export activate_page(), which is a merger of the existing
activate_page() and __lru_cache_activate_page(), so it can be used to
activate pages that are either already on the LRU or still queued in
lru_pvecs.lru_add.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
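Note (illustrative only, not part of this series): a minimal, hypothetical
sketch of how a caller outside mm/swap.c could use activate_page() once this
patch exports it. example_mark_hot() and its PageUnevictable() guard are
assumptions made for the sketch; only activate_page() itself comes from the
patch.

/* Hypothetical in-kernel caller, for illustration only. */
#include <linux/mm.h>
#include <linux/swap.h>

static void example_mark_hot(struct page *page)
{
	/*
	 * activate_page() handles both cases internally:
	 *  - PageLRU(page): the activation is queued on the per-CPU
	 *    lru_pvecs.activate_page pagevec (or done directly when
	 *    CONFIG_SMP is off);
	 *  - otherwise: the page is searched for in this CPU's
	 *    lru_pvecs.lru_add pagevec and marked PageActive there.
	 */
	if (!PageUnevictable(page))
		activate_page(page);
}
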
 include/linux/swap.h |  1 +
 mm/swap.c            | 28 +++++++++++++++-------------
 2 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 4cc6ec3bf0ab..de2bbbf181ba 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -344,6 +344,7 @@ extern void lru_add_drain_cpu(int cpu);
 extern void lru_add_drain_cpu_zone(struct zone *zone);
 extern void lru_add_drain_all(void);
 extern void rotate_reclaimable_page(struct page *page);
+extern void activate_page(struct page *page);
 extern void deactivate_file_page(struct page *page);
 extern void deactivate_page(struct page *page);
 extern void mark_page_lazyfree(struct page *page);
diff --git a/mm/swap.c b/mm/swap.c
index 31b844d4ed94..f20ed56ebbbf 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -334,7 +334,7 @@ static bool need_activate_page_drain(int cpu)
 	return pagevec_count(&per_cpu(lru_pvecs.activate_page, cpu)) != 0;
 }
 
-static void activate_page(struct page *page)
+static void activate_page_on_lru(struct page *page)
 {
 	page = compound_head(page);
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
@@ -354,7 +354,7 @@ static inline void activate_page_drain(int cpu)
 {
 }
 
-static void activate_page(struct page *page)
+static void activate_page_on_lru(struct page *page)
 {
 	struct lruvec *lruvec;
 
@@ -368,11 +368,22 @@ static void activate_page(struct page *page)
 }
 #endif
 
-static void __lru_cache_activate_page(struct page *page)
+/*
+ * If the page is on the LRU, queue it for activation via
+ * lru_pvecs.activate_page. Otherwise, assume the page is on a
+ * pagevec, mark it active and it'll be moved to the active
+ * LRU on the next drain.
+ */
+void activate_page(struct page *page)
 {
 	struct pagevec *pvec;
 	int i;
 
+	if (PageLRU(page)) {
+		activate_page_on_lru(page);
+		return;
+	}
+
 	local_lock(&lru_pvecs.lock);
 	pvec = this_cpu_ptr(&lru_pvecs.lru_add);
 
@@ -421,16 +432,7 @@ void mark_page_accessed(struct page *page)
 		 * evictable page accessed has no effect.
 		 */
 	} else if (!PageActive(page)) {
-		/*
-		 * If the page is on the LRU, queue it for activation via
-		 * lru_pvecs.activate_page. Otherwise, assume the page is on a
-		 * pagevec, mark it active and it'll be moved to the active
-		 * LRU on the next drain.
-		 */
-		if (PageLRU(page))
-			activate_page(page);
-		else
-			__lru_cache_activate_page(page);
+		activate_page(page);
 		ClearPageReferenced(page);
 		workingset_activation(page);
 	}
--
2.31.0.rc2.261.g7f71774620-goog
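
For readability, a sketch of the merged activate_page() as it would read in
mm/swap.c after this patch is applied, assembled from the hunks above. The
pagevec-scan loop is the unchanged tail of the old __lru_cache_activate_page()
and is paraphrased from surrounding context rather than quoted from this diff,
so treat it as an approximation.

void activate_page(struct page *page)
{
	struct pagevec *pvec;
	int i;

	/*
	 * Already on an LRU list: activate it there (queued on
	 * lru_pvecs.activate_page under CONFIG_SMP, or moved directly
	 * otherwise).
	 */
	if (PageLRU(page)) {
		activate_page_on_lru(page);
		return;
	}

	/*
	 * Not on the LRU yet: it may still sit in this CPU's lru_add
	 * pagevec, so mark it active there and let the next drain move
	 * it straight to the active list.
	 */
	local_lock(&lru_pvecs.lock);
	pvec = this_cpu_ptr(&lru_pvecs.lru_add);
	for (i = pagevec_count(pvec) - 1; i >= 0; i--) {
		if (pvec->pages[i] == page) {
			SetPageActive(page);
			break;
		}
	}
	local_unlock(&lru_pvecs.lock);
}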