<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:58:03 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-13063] sanity test 411 times out for RHEL8.1</title>
                <link>https://jira.whamcloud.com/browse/LU-13063</link>
                <project id="10000" key="LU">Lustre</project>
<description>&lt;p&gt;The last thing seen in the suite_log for sanity test 411 is:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== sanity test 411: Slab allocation error with cgroup does not LBUG ================================== 04:54:33 (1575953673)
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 3.88888 s, 27.0 MB/s
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Normally, on successful runs, we would see a dd error reading the file just created, but the test hangs at this point. Looking at the console logs, it&#8217;s not clear why the test is hanging, but we do see hung lnet-selftest processes. Looking at the stack trace on the first client (vm10), we see that there is an lnet-selftest process stuck in D state:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[14127.185129] lst_t_00_00     S    0 14488      2 0x80000080
[14127.186075] Call Trace:
[14127.186561]  ? __schedule+0x253/0x830
[14127.187236]  ? sfw_test_unit_done.isra.14+0x9d/0x150 [lnet_selftest]
[14127.188348]  schedule+0x28/0x70
[14127.188929]  cfs_wi_scheduler+0x40d/0x420 [libcfs]
[14127.189783]  ? finish_wait+0x80/0x80
[14127.190466]  ? cfs_wi_sched_create+0x5a0/0x5a0 [libcfs]
[14127.191397]  kthread+0x112/0x130
[14127.191984]  ? kthread_flush_work_fn+0x10/0x10
[14127.192782]  ret_from_fork+0x35/0x40
[14127.193448] st_timer        D    0 14636      2 0x80000080
[14127.194413] Call Trace:
[14127.194882]  ? __schedule+0x253/0x830
[14127.195555]  schedule+0x28/0x70
[14127.196142]  schedule_timeout+0x16b/0x390
[14127.196859]  ? __next_timer_interrupt+0xc0/0xc0
[14127.197678]  ? prepare_to_wait_event+0xbb/0x140
[14127.198496]  stt_timer_main+0x215/0x230 [lnet_selftest]
[14127.199436]  ? finish_wait+0x80/0x80
[14127.200083]  ? sfw_startup+0x540/0x540 [lnet_selftest]
[14127.200989]  kthread+0x112/0x130
[14127.201595]  ? kthread_flush_work_fn+0x10/0x10
[14127.202393]  ret_from_fork+0x35/0x40
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Similarly, in the stack-trace log on the MDS (vm12), we see the lnet-selftest timer process:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[14034.774700] st_timer        D ffff9cb15b62a080     0 28114      2 0x00000080
[14034.776068] Call Trace:
[14034.776493]  [&amp;lt;ffffffffb0f6af19&amp;gt;] schedule+0x29/0x70
[14034.777425]  [&amp;lt;ffffffffb0f68968&amp;gt;] schedule_timeout+0x168/0x2d0
[14034.778391]  [&amp;lt;ffffffffb08cfeb4&amp;gt;] ? __wake_up+0x44/0x50
[14034.779358]  [&amp;lt;ffffffffb08aab30&amp;gt;] ? __internal_add_timer+0x130/0x130
[14034.780432]  [&amp;lt;ffffffffb08c3a46&amp;gt;] ? prepare_to_wait+0x56/0x90
[14034.781474]  [&amp;lt;ffffffffc1542a98&amp;gt;] stt_timer_main+0x168/0x220 [lnet_selftest]
[14034.782654]  [&amp;lt;ffffffffb08c3f50&amp;gt;] ? wake_up_atomic_t+0x30/0x30
[14034.783688]  [&amp;lt;ffffffffc1542930&amp;gt;] ? sfw_startup+0x580/0x580 [lnet_selftest]
[14034.784856]  [&amp;lt;ffffffffb08c2e81&amp;gt;] kthread+0xd1/0xe0
[14034.785787]  [&amp;lt;ffffffffb08c2db0&amp;gt;] ? insert_kthread_work+0x40/0x40
[14034.786818]  [&amp;lt;ffffffffb0f77c37&amp;gt;] ret_from_fork_nospec_begin+0x21/0x21
[14034.788077]  [&amp;lt;ffffffffb08c2db0&amp;gt;] ? insert_kthread_work+0x40/0x40
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;lnet-selftest did run and fail (&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10073&quot; title=&quot;lnet-selftest test_smoke: lst Error found&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10073&quot;&gt;&lt;del&gt;LU-10073&lt;/del&gt;&lt;/a&gt;) prior to the sanity run. It&#8217;s not clear whether lnet-selftest is a cause of this test hang.&lt;/p&gt;

&lt;p&gt;We&#8217;ve seen this test hang twice in RHEL 8.1 testing, both in December:&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/293b5216-1b13-11ea-a9d7-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/293b5216-1b13-11ea-a9d7-52540065bddc&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/133daa46-1b8a-11ea-b1e8-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/133daa46-1b8a-11ea-b1e8-52540065bddc&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In addition, we&apos;ve seen this once in the past 3 months in PPC testing for a patch for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11997&quot; title=&quot;Crash in lustre_swab_fiemap&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11997&quot;&gt;&lt;del&gt;LU-11997&lt;/del&gt;&lt;/a&gt; at &lt;a href=&quot;https://testing.whamcloud.com/test_sets/b4851392-f175-11e9-b62b-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/b4851392-f175-11e9-b62b-52540065bddc&lt;/a&gt;.&lt;/p&gt;</description>
                <environment>RHEL 8.1</environment>
        <key id="57609">LU-13063</key>
            <summary>sanity test 411 times out for RHEL8.1</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="dongyang">Dongyang Li</assignee>
                                    <reporter username="jamesanunez">James Nunez</reporter>
                        <labels>
                            <label>always_except</label>
                            <label>rhel8</label>
                    </labels>
                <created>Wed, 11 Dec 2019 17:13:01 +0000</created>
                <updated>Wed, 29 Mar 2023 18:04:02 +0000</updated>
                                            <version>Lustre 2.14.0</version>
                    <version>Lustre 2.12.4</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>10</watches>
                                                                            <comments>
                            <comment id="259621" author="simmonsja" created="Wed, 11 Dec 2019 17:38:55 +0000"  >&lt;p&gt;I have a patch that replaces cfs_wi_schedular with a workqueue. Give me a bit to get the patch working and I will push it.&lt;/p&gt;</comment>
                            <comment id="259692" author="adilger" created="Thu, 12 Dec 2019 15:16:35 +0000"  >&lt;p&gt;The patch looks like &lt;a href=&quot;https://review.whamcloud.com/36991&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/36991&lt;/a&gt; &quot;&lt;tt&gt;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9859&quot; title=&quot;libcfs simplification&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9859&quot;&gt;LU-9859&lt;/a&gt; lnet: convert selftest to use workqueues&lt;/tt&gt;&quot;&lt;/p&gt;</comment>
                            <comment id="260124" author="jamesanunez" created="Wed, 18 Dec 2019 22:48:09 +0000"  >&lt;p&gt;sanity test 411 still hangs for RHEL 8.1 when rebased on top of James&apos;s patch &lt;a href=&quot;https://review.whamcloud.com/#/c/36991/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/#/c/36991/&lt;/a&gt;. Results at &lt;a href=&quot;https://testing.whamcloud.com/test_sets/ca87349e-21d2-11ea-80b4-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/ca87349e-21d2-11ea-80b4-52540065bddc&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="260153" author="pjones" created="Thu, 19 Dec 2019 15:08:34 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/ViewProfile.jspa?name=dongyang&quot; class=&quot;user-hover&quot; rel=&quot;dongyang&quot;&gt;dongyang&lt;/a&gt; could you please advise here?&lt;/p&gt;</comment>
                            <comment id="260351" author="dongyang" created="Tue, 24 Dec 2019 00:36:01 +0000"  >&lt;p&gt;from one of the test logs:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
[14170.962056] dd              R  running task        0 27130  26934 0x00000080
[14170.963279] Call Trace:
[14170.963757]  ? finish_task_switch+0x76/0x2b0
[14170.964525]  ? __schedule+0x25b/0x830
[14170.965207]  ? shrink_page_list+0x48e/0xc50
[14170.965970]  ? page_evictable+0xe/0x40
[14170.966675]  ? putback_inactive_pages+0x1f8/0x4f0
[14170.967517]  ? shrink_inactive_list+0x207/0x570
[14170.968339]  ? inactive_list_is_low+0xe0/0x220
[14170.969148]  ? shrink_node_memcg+0x204/0x770
[14170.969920]  ? shrink_node+0xce/0x440
[14170.970589]  ? do_try_to_free_pages+0xc3/0x360
[14170.971396]  ? try_to_free_mem_cgroup_pages+0xf9/0x210
[14170.972316]  ? try_charge+0x192/0x780
[14170.972992]  ? mem_cgroup_commit_charge+0x7a/0x560
[14170.973856]  ? mem_cgroup_try_charge+0x8b/0x1a0
[14170.974677]  ? __add_to_page_cache_locked+0x64/0x240
[14170.975560]  ? add_to_page_cache_lru+0x4a/0xc0
[14170.976366]  ? pagecache_get_page+0xf2/0x2c0
[14170.977294]  ? ll_read_ahead_pages+0x1e8/0x8b0 [lustre]
[14170.978305]  ? osc_io_fini+0x10/0x10 [osc]
[14170.979072]  ? ll_readahead.constprop.32+0x641/0x9d0 [lustre]
[14170.980105]  ? ll_io_read_page+0x355/0x4c0 [lustre]
[14170.980992]  ? ll_readpage+0xe3/0x650 [lustre]
[14170.981801]  ? find_get_entry+0x19/0xf0
[14170.982508]  ? pagecache_get_page+0x30/0x2c0
[14170.983286]  ? generic_file_buffered_read+0x601/0xb10
[14170.984188]  ? atime_needs_update+0x77/0xe0
[14170.984960]  ? vvp_io_read_start+0x3ef/0x720 [lustre]
[14170.986080]  ? cl_lock_request+0x62/0x1b0 [obdclass]
[14170.986989]  ? cl_io_start+0x58/0x100 [obdclass]
[14170.987851]  ? cl_io_loop+0xdc/0x1b0 [obdclass]
[14170.988682]  ? ll_file_io_generic+0x23d/0x960 [lustre]
[14170.989611]  ? ll_file_read_iter+0x244/0x2e0 [lustre]
[14170.990510]  ? new_sync_read+0x121/0x170
[14170.991233]  ? vfs_read+0x91/0x140
[14170.991864]  ? ksys_read+0x4f/0xb0
[14170.992493]  ? do_syscall_64+0x5b/0x1b0
[14170.993198]  ? entry_SYSCALL_64_after_hwframe+0x65/0xca
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Looks like dd is not &lt;b&gt;stuck&lt;/b&gt;: we are reading a page and read-ahead kicks in, trying to allocate a page in the page cache; it hits the cgroup limit and then just keeps trying to reclaim in a loop?&lt;/p&gt;</comment>
                            <comment id="260659" author="adilger" created="Tue, 7 Jan 2020 01:07:21 +0000"  >&lt;p&gt;Dongyang, in the case of a cgroup limit being hit, is there an error returned to the caller, or where is the allocation looping?  It seems that &lt;tt&gt;ll_read_ahead_page()&lt;/tt&gt; is already calling &lt;tt&gt;grab_cache_page_nowait()&lt;/tt&gt; into &lt;tt&gt;pagecache_get_page(FGP_NOWAIT)&lt;/tt&gt; so it &lt;em&gt;shouldn&apos;t&lt;/em&gt; be blocking for pages that cannot be allocated.  Maybe this is a bug in the kernel?&lt;/p&gt;</comment>
                            <comment id="260733" author="dongyang" created="Wed, 8 Jan 2020 09:58:49 +0000"  >&lt;p&gt;Andreas, if we hit the cgroup limit in grab_cache_page_nowait(), it calls into mem_cgroup_try_charge() and eventually oom inside the cgroup.&lt;/p&gt;

&lt;p&gt;before doing that try_charge() will try to reclaim the pages, that&apos;s what we see here.&lt;/p&gt;

&lt;p&gt;yes ll_read_ahead_page() is using FGP_NOWAIT but that only means we won&apos;t wait for the&lt;/p&gt;

&lt;p&gt;page lock when getting the page ref. mapping-&amp;gt;gfp_mask is still used by the cgroup reclaim.&lt;/p&gt;

&lt;p&gt;It does feels like a bug in the kernel but I&apos;m having difficulty locating it, as well as reproducing it.&lt;/p&gt;</comment>
                            <comment id="260797" author="gerrit" created="Wed, 8 Jan 2020 21:10:28 +0000"  >&lt;p&gt;James Nunez (jnunez@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/37165&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/37165&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13063&quot; title=&quot;sanity test 411 times out for RHEL8.1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13063&quot;&gt;LU-13063&lt;/a&gt; testing: reproduce sanity 411 error&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: a0560ae8ebcc4559be7823ab0a5975e057979f4a&lt;/p&gt;</comment>
                            <comment id="260807" author="adilger" created="Thu, 9 Jan 2020 00:45:35 +0000"  >&lt;blockquote&gt;
&lt;p&gt;mapping-&amp;gt;gfp_mask is still used by the cgroup reclaim.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;I was wondering about that.  Are we sure that &lt;tt&gt;gfp&amp;#95;mask&lt;/tt&gt; excludes &lt;tt&gt;&amp;#95;&amp;#95;GFP&amp;#95;FS&lt;/tt&gt; at this point?  The comment says this, but it isn&apos;t clear to me how that is working since it doesn&apos;t actually clear &lt;tt&gt;&amp;#95;&amp;#95;GFP&amp;#95;FS&lt;/tt&gt; that I can see:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
 * Clear __GFP_FS when allocating the page to avoid recursion into the fs
 * and deadlock against the caller&apos;s locked page.
 */             
&lt;span class=&quot;code-keyword&quot;&gt;static&lt;/span&gt; inline struct page *grab_cache_page_nowait(struct address_space *mapping,
                                pgoff_t index)
{       
        &lt;span class=&quot;code-keyword&quot;&gt;return&lt;/span&gt; pagecache_get_page(mapping, index,
                        FGP_LOCK|FGP_CREAT|FGP_NOFS|FGP_NOWAIT,
                        mapping_gfp_mask(mapping));
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;If we can reproduce this failure fairly easily, we might consider using our own version of this function that explicitly clears &lt;tt&gt;&amp;#95;&amp;#95;GFP&amp;#95;FS&lt;/tt&gt;, though this is just speculation (I don&apos;t actually see recursion in the stack):&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
/* Clear __GFP_FS when allocating the page to avoid recursion into the fs
 * and deadlock against the caller&apos;s locked page.
 */             
&lt;span class=&quot;code-keyword&quot;&gt;static&lt;/span&gt; inline struct page *ll_grab_cache_page_nowait(struct address_space *mapping,
                                   pgoff_t index)
{       
        &lt;span class=&quot;code-keyword&quot;&gt;return&lt;/span&gt; pagecache_get_page(mapping, index,
                        FGP_LOCK|FGP_CREAT|FGP_NOFS|FGP_NOWAIT,
                        mapping_gfp_mask(mapping) &amp;amp; ~__GFP_FS);
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It would be worthwhile to see what has changed under &lt;tt&gt;linux/mm/&lt;/tt&gt; between the RHEL 8.0 and 8.1 kernels to see if this would identify what is causing the failures.&lt;/p&gt;</comment>
                            <comment id="260820" author="dongyang" created="Thu, 9 Jan 2020 06:02:13 +0000"  >&lt;p&gt;__GFP_FS is cleared in pagecache_get_page(), before we use the flag to alloc the page and add to lru,&lt;/p&gt;

&lt;p&gt;The comments from&#160;grab_cache_page_nowait() are misleading, see&#160;45f87de57f8fad59302fd263dd81ffa4843b5b24&lt;/p&gt;

&lt;p&gt;grab_cache_page_nowait() passes FGP_NOFS to pagecache_get_page(), so we should be good.&lt;/p&gt;</comment>
                            <comment id="260903" author="adilger" created="Thu, 9 Jan 2020 15:15:50 +0000"  >&lt;p&gt;Can you please compare RHEL8.0 and 8.1 &lt;tt&gt;mm/&lt;/tt&gt; tree and cgroups to see what has changed, since the test fails 100% on RHEL8.1 and never on 8.0, so there must be some code change in the kernel that is causing this. &lt;/p&gt;</comment>
                            <comment id="261001" author="dongyang" created="Fri, 10 Jan 2020 12:12:03 +0000"  >&lt;p&gt;The changes are significant and I&apos;m going through them. On the plus side, after enabling centos8-stream I&apos;m able to get&#160;4.18.0-147.3.1.el8_1.x86_64 and the newer&#160;4.18.0-151.el8.x86_64. I can reproduce it on my vm on both kernels. and you are right it&apos;s a regression on 8.1, 8.0 is fine. The issue actually has nothing to do with lustre, can be reproduced doing dd from a local file.&lt;/p&gt;</comment>
                            <comment id="261095" author="dongyang" created="Mon, 13 Jan 2020 05:16:04 +0000"  >&lt;p&gt;OK, turns out redhat is missing a patch for the 8.1 kernel.&lt;/p&gt;

&lt;p&gt;f9c645621a28e37813a1de96d9cbd89cde94a1e4&#160;memcg, oom: don&apos;t require __GFP_FS when invoking memcg OOM killer&lt;/p&gt;

&lt;p&gt;how do we proceed? I can create a new 4.18-rhel8.1.series under kernel_patches and add the patch there, but that&apos;s for the server side, we need the kernel patch on the clients...&lt;/p&gt;</comment>
                            <comment id="261097" author="adilger" created="Mon, 13 Jan 2020 06:29:41 +0000"  >&lt;p&gt;It makes sense to add the patch to the RHEL8.1 series and we can test with the server packages installed on the client. &lt;/p&gt;

&lt;p&gt;As for opening a ticket with Red Hat, I hope Oleg or Peter know the details on that.&lt;/p&gt;</comment>
                            <comment id="261098" author="gerrit" created="Mon, 13 Jan 2020 07:25:00 +0000"  >&lt;p&gt;Li Dongyang (dongyangli@ddn.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/37197&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/37197&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13063&quot; title=&quot;sanity test 411 times out for RHEL8.1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13063&quot;&gt;LU-13063&lt;/a&gt; kernel: fix try_charge retrying forever&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: d4eae90c1ec4676f775bf7c0582c999968edf408&lt;/p&gt;</comment>
                            <comment id="261369" author="dongyang" created="Thu, 16 Jan 2020 23:06:32 +0000"  >&lt;p&gt;How can we tell Maloo to use the server packages(where the kernel patch actually apples) on the clients?&lt;/p&gt;

&lt;p&gt;I looked into&#160;&lt;a href=&quot;https://wiki.whamcloud.com/display/PUB/Changing+Test+Parameters+with+Gerrit+Commit+Messages&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://wiki.whamcloud.com/display/PUB/Changing+Test+Parameters+with+Gerrit+Commit+Messages&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;but could not find anything related.&lt;/p&gt;</comment>
                            <comment id="261425" author="mdiep" created="Fri, 17 Jan 2020 15:01:34 +0000"  >&lt;p&gt;currently you can&apos;t install lustre server on client (unless you patched the client)&lt;/p&gt;</comment>
                            <comment id="261455" author="gerrit" created="Fri, 17 Jan 2020 19:28:33 +0000"  >&lt;p&gt;James Nunez (jnunez@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/37270&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/37270&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13063&quot; title=&quot;sanity test 411 times out for RHEL8.1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13063&quot;&gt;LU-13063&lt;/a&gt; tests: stop running sanity test 411&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 0681c9075cfb28745bdb90f660c36878a8b94679&lt;/p&gt;</comment>
                            <comment id="261464" author="gerrit" created="Fri, 17 Jan 2020 23:39:47 +0000"  >&lt;p&gt;Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/37272&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/37272&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13063&quot; title=&quot;sanity test 411 times out for RHEL8.1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13063&quot;&gt;LU-13063&lt;/a&gt; tests: remove checks for old RHEL versions&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 734072a4ff70ff044fe44c3ff018f28720f38617&lt;/p&gt;</comment>
                            <comment id="261969" author="gerrit" created="Tue, 28 Jan 2020 06:02:50 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/37270/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/37270/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13063&quot; title=&quot;sanity test 411 times out for RHEL8.1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13063&quot;&gt;LU-13063&lt;/a&gt; tests: stop running sanity test 411&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 34e4c37474b3d9328aac1dd9228a018ac7d4f47e&lt;/p&gt;</comment>
                            <comment id="262153" author="yujian" created="Wed, 29 Jan 2020 19:08:21 +0000"  >&lt;p&gt;The failure also occurred on Lustre b2_12 branch: &lt;a href=&quot;https://testing.whamcloud.com/test_sets/0e3d7c8e-3ea7-11ea-9543-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/0e3d7c8e-3ea7-11ea-9543-52540065bddc&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="262224" author="gerrit" created="Thu, 30 Jan 2020 19:43:50 +0000"  >&lt;p&gt;James Nunez (jnunez@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/37376&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/37376&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13063&quot; title=&quot;sanity test 411 times out for RHEL8.1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13063&quot;&gt;LU-13063&lt;/a&gt; tests: fortestonly RHEL8.0 vs 8.1&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: d20b45141a917549858dc443c18c7b8b4f108ca8&lt;/p&gt;</comment>
                            <comment id="262385" author="gerrit" created="Sat, 1 Feb 2020 08:10:48 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/37272/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/37272/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13063&quot; title=&quot;sanity test 411 times out for RHEL8.1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13063&quot;&gt;LU-13063&lt;/a&gt; tests: remove checks for old RHEL versions&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 4a8e06fb5af88f93a936ed8ba0718ff5d3554c9f&lt;/p&gt;</comment>
                            <comment id="267844" author="gerrit" created="Thu, 16 Apr 2020 18:56:47 +0000"  >&lt;p&gt;Jian Yu (yujian@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/38260&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/38260&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13063&quot; title=&quot;sanity test 411 times out for RHEL8.1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13063&quot;&gt;LU-13063&lt;/a&gt; tests: stop running sanity test 411&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: f857eb47ea816d4a74b0012049125ae8cbeb2149&lt;/p&gt;</comment>
                            <comment id="269073" author="gerrit" created="Fri, 1 May 2020 04:33:19 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/38260/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/38260/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13063&quot; title=&quot;sanity test 411 times out for RHEL8.1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13063&quot;&gt;LU-13063&lt;/a&gt; tests: stop running sanity test 411&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 495a6f01a30855c72aa0ec4fda12f961342a97a3&lt;/p&gt;</comment>
                            <comment id="367767" author="paf0186" created="Wed, 29 Mar 2023 18:04:02 +0000"  >&lt;p&gt;I don&apos;t think we&apos;re testing on RHEL 8.1 clients any more?&#160; Are we?&#160; I&apos;ve removed this ALWAYS_EXCEPT as part of &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/50460&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/50460&lt;/a&gt; ; hopefully I&apos;m right about testing&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="48588">LU-10073</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i00quf:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>