<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:52:14 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-12398] sanity test_255b: FAIL: Ladvise willread should use more memory than 76800 KiB</title>
                <link>https://jira.whamcloud.com/browse/LU-12398</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;sanity test 255b failed as follows:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== sanity test 255b: check &apos;lfs ladvise -a dontneed&apos; ================================================= 11:19:01 (1559845141)
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.586058 s, 179 MB/s
CMD: vm7 cat /proc/meminfo | grep ^MemTotal:
Total memory: 1877564 KiB
CMD: vm7 sync &amp;amp;&amp;amp; echo 3 &amp;gt; /proc/sys/vm/drop_caches
CMD: vm7 cat /proc/meminfo | grep ^Cached:
Cache used before read: 145664 KiB
CMD: vm7 cat /proc/meminfo | grep ^Cached:
Cache used after read: 145880 KiB
CMD: vm7 cat /proc/meminfo | grep ^Cached:
Cache used after dontneed ladvise: 145880 KiB
 sanity test_255b: @@@@@@ FAIL: Ladvise willread should use more memory than 76800 KiB
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Maloo report: &lt;a href=&quot;https://testing.whamcloud.com/test_sets/f180e12c-8889-11e9-be83-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/f180e12c-8889-11e9-be83-52540065bddc&lt;/a&gt;&lt;/p&gt;</description>
                <environment>Lustre build: &lt;a href=&quot;https://build.whamcloud.com/job/lustre-master/3904/&quot;&gt;https://build.whamcloud.com/job/lustre-master/3904/&lt;/a&gt; (tag 2.12.54)&lt;br/&gt;
Lustre client distro: RHEL 8.0&lt;br/&gt;
Lustre server distro: RHEL 7.6</environment>
        <key id="55877">LU-12398</key>
            <summary>sanity test_255b: FAIL: Ladvise willread should use more memory than 76800 KiB</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="5">Cannot Reproduce</resolution>
                                        <assignee username="dongyang">Dongyang Li</assignee>
                                    <reporter username="yujian">Jian Yu</reporter>
                        <labels>
                    </labels>
                <created>Thu, 6 Jun 2019 18:43:40 +0000</created>
                <updated>Tue, 7 Nov 2023 21:52:19 +0000</updated>
                            <resolved>Tue, 7 Nov 2023 21:52:08 +0000</resolved>
                <due></due>
                <votes>0</votes>
                <watches>5</watches>
                <comments>
                            <comment id="248596" author="pjones" created="Thu, 6 Jun 2019 21:58:50 +0000"  >&lt;p&gt;Dongyang can you please investigate?&lt;/p&gt;</comment>
                            <comment id="248969" author="dongyang" created="Tue, 11 Jun 2019 10:59:20 +0000"  >&lt;p&gt;from the Maloo debug log:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
00000080:00200000:0.0:1559845142.798658:0:4733:0:(file.c:3338:ll_file_ioctl()) VFS Op:inode=[0x2000059f3:0x13b7:0x0](000000004ccbd265), cmd=802066fa
00000080:00200000:0.0:1559845143.031197:0:4748:0:(file.c:3338:ll_file_ioctl()) VFS Op:inode=[0x2000059f3:0x13b7:0x0](000000004ccbd265), cmd=802066fa
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Looks like the ladvise ioctl was issued from the client, which results in an RPC to the server. Feels like the server didn&apos;t bring the file into memory?&lt;/p&gt;

&lt;p&gt;BTW, using the same set of packages on a RHEL 8 client and RHEL 7.6 server, the test case works fine on my local boxes.&lt;/p&gt;</comment>
                            <comment id="249425" author="lixi_wc" created="Tue, 18 Jun 2019 03:36:07 +0000"  >&lt;p&gt;I noticed that at the end of the previous test (255a), a disconnection happened:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[28154.168929] Lustre: DEBUG MARKER: /usr/sbin/lctl mark == sanity test 255a: check \&apos;lfs ladvise -a willread\&apos; ================================================= 11:12:37 \(1559844757\)
[28154.352785] Lustre: DEBUG MARKER: == sanity test 255a: check &apos;lfs ladvise -a willread&apos; ================================================= 11:12:37 (1559844757)
[28155.345118] Lustre: DEBUG MARKER: /usr/sbin/lctl set_param fail_val=4 fail_loc=0x237
[28155.459919] LustreError: 3276:0:(fail.c:129:__cfs_fail_timeout_set()) cfs_fail_timeout id 237 sleeping for 4000ms
[28159.462249] LustreError: 3276:0:(fail.c:140:__cfs_fail_timeout_set()) cfs_fail_timeout id 237 awake
[28159.487458] LustreError: 3276:0:(fail.c:129:__cfs_fail_timeout_set()) cfs_fail_timeout id 237 sleeping for 4000ms
[28159.487465] LustreError: 3276:0:(fail.c:129:__cfs_fail_timeout_set()) Skipped 1 previous similar message
[28159.586822] Lustre: DEBUG MARKER: /usr/sbin/lctl set_param fail_loc=0
[28159.688388] LustreError: 3276:0:(fail.c:135:__cfs_fail_timeout_set()) cfs_fail_timeout interrupted
[28179.112695] Lustre: lustre-OST0001: haven&apos;t heard from client fb3fe627-e4c7-4 (at 192.168.0.108@tcp) in 47 seconds. I think it&apos;s dead, and I am evicting it. exp ffff9f33c71e6c00, cur 1559827411 expire 1559827381 last 1559827364
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[28536.526341] Lustre: DEBUG MARKER: lctl set_param -n fail_loc=0 	    fail_val=0 2&amp;gt;/dev/null
[28536.922444] Lustre: DEBUG MARKER: rc=0;
			val=$(/usr/sbin/lctl get_param -n catastrophe 2&amp;gt;&amp;amp;1);
			if [[ $? -eq 0 &amp;amp;&amp;amp; $val -ne 0 ]]; then
				echo $(hostname -s): $val;
				rc=$val;
			fi;
			exit $rc
[28537.157475] Lustre: DEBUG MARKER: dmesg
[28537.714141] Lustre: DEBUG MARKER: /usr/sbin/lctl mark == sanity test 255b: check \&apos;lfs ladvise -a dontneed\&apos; 
...
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I am wondering whether this abnormal condition caused the test failure.&lt;/p&gt;

&lt;p&gt;Is there any way to check the failure rate of this test with this system configuration?&lt;/p&gt;</comment>
                            <comment id="392116" author="adilger" created="Tue, 7 Nov 2023 21:52:08 +0000"  >&lt;p&gt;Haven&apos;t seen this in months or years.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="72094">LU-16127</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="55582">LU-12269</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i00hrb:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>