<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:31:37 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-3175] recovery-mds-scale test_failover_mds: unlink ./clients/client1/~dmtmp/PWRPNT/PPTC112.TMP failed (Read-only file system)</title>
                <link>https://jira.whamcloud.com/browse/LU-3175</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;While running recovery-mds-scale test_failover_mds, dbench and iozone operations failed on client nodes as follows:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;copying /usr/share/dbench/client.txt to /mnt/lustre/d0.dbench-wtm-79/client.txt
running &apos;dbench 2&apos; on /mnt/lustre/d0.dbench-wtm-79 at Mon Apr 15 08:12:41 PDT 2013
dbench PID=11113
dbench version 4.00 - Copyright Andrew Tridgell 1999-2004

Running for 600 seconds with load &apos;client.txt&apos; and minimum warmup 120 secs
0 of 2 processes prepared for launch   0 sec
2 of 2 processes prepared for launch   0 sec
releasing clients
   2       241    18.48 MB/sec  warmup   1 sec  latency 20.124 ms
   2       496    17.34 MB/sec  warmup   2 sec  latency 16.341 ms
   2       664    14.10 MB/sec  warmup   3 sec  latency 608.471 ms
   2       666    10.58 MB/sec  warmup   4 sec  latency 1093.980 ms
   2       722     8.52 MB/sec  warmup   5 sec  latency 649.730 ms
   2       724     7.10 MB/sec  warmup   6 sec  latency 1189.957 ms
   2       724     6.09 MB/sec  warmup   7 sec  latency 1332.253 ms
   2       724     5.32 MB/sec  warmup   8 sec  latency 2332.481 ms
   2       727     4.73 MB/sec  warmup   9 sec  latency 3176.583 ms
   2       729     4.26 MB/sec  warmup  10 sec  latency 632.289 ms
   2       731     3.87 MB/sec  warmup  11 sec  latency 804.657 ms
   2       731     3.55 MB/sec  warmup  12 sec  latency 1804.771 ms
   2       761     3.29 MB/sec  warmup  13 sec  latency 2337.010 ms
   2       791     3.07 MB/sec  warmup  14 sec  latency 1105.492 ms
[811] unlink ./clients/client1/~dmtmp/PWRPNT/PPTC112.TMP failed (Read-only file system) - expected NT_STATUS_OK
ERROR: child 1 failed at line 811
[811] unlink ./clients/client0/~dmtmp/PWRPNT/PPTC112.TMP failed (Read-only file system) - expected NT_STATUS_OK
ERROR: child 0 failed at line 811
Child failed with status 1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;        Machine = Linux wtm-81 2.6.32-279.19.1.el6.x86_64 #1 SMP Wed Dec 19 07:05:20 U
        Excel chart generation enabled
        Verify Mode. Pattern 3a3a3a3a
        Performance measurements are invalid in this mode.
        Using maximum file size of 102400 kilobytes.
        Using Maximum Record Size 512 KB
        Command line used: iozone -a -M -R -V 0xab -g 100M -q 512k -i0 -i1 -f /mnt/lustre/d0.iozone-wtm-81/iozone-file
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd   record   stride                                   
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
              64       4
Can not open temp file: /mnt/lustre/d0.iozone-wtm-81/iozone-file
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Dmesg on the client nodes showed the following:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Lustre: DEBUG MARKER: Starting failover on mds1
LustreError: 11115:0:(llite_lib.c:1294:ll_md_setattr()) md_setattr fails: rc = -30
LustreError: 11114:0:(llite_lib.c:1294:ll_md_setattr()) md_setattr fails: rc = -30
LustreError: 11115:0:(file.c:158:ll_close_inode_openhandle()) inode 144115205289279635 mdc close failed: rc = -30
LustreError: 11115:0:(file.c:158:ll_close_inode_openhandle()) inode 144115205289279635 mdc close failed: rc = -30
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The test results are still in the Maloo import queue.&lt;/p&gt;</description>
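<!--
Editor's note: the rc = -30 in the dmesg excerpt above is -EROFS (read-only file system);
as the later comments show, the MDS backing ldiskfs target had been remounted read-only,
so client metadata operations failed. A minimal sketch, assuming shell access to a client
node, for spotting the same pattern in the kernel log (the grep pattern is illustrative):

    # look for EROFS (-30) returns from the llite/mdc layers
    dmesg | grep 'rc = -30'
-->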
                <environment>&lt;br/&gt;
Lustre Branch: master&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-master/1406/&quot;&gt;http://build.whamcloud.com/job/lustre-master/1406/&lt;/a&gt;&lt;br/&gt;
Distro/Arch: RHEL6.3/x86_64&lt;br/&gt;
Test Group: failover&lt;br/&gt;
FAILURE_MODE=HARD&lt;br/&gt;
</environment>
        <key id="18416">LU-3175</key>
            <summary>recovery-mds-scale test_failover_mds: unlink ./clients/client1/~dmtmp/PWRPNT/PPTC112.TMP failed (Read-only file system)</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="6" iconUrl="https://jira.whamcloud.com/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="6">Not a Bug</resolution>
                                        <assignee username="niu">Niu Yawei</assignee>
                                    <reporter username="yujian">Jian Yu</reporter>
                        <labels>
                    </labels>
                <created>Mon, 15 Apr 2013 16:24:57 +0000</created>
                <updated>Thu, 18 Apr 2013 14:46:15 +0000</updated>
                            <resolved>Thu, 18 Apr 2013 14:46:15 +0000</resolved>
                                    <version>Lustre 2.4.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>6</watches>
                                                                            <comments>
                            <comment id="56336" author="pjones" created="Mon, 15 Apr 2013 19:11:22 +0000"  >&lt;p&gt;Niu&lt;/p&gt;

&lt;p&gt;Could you please look into this one?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="56375" author="niu" created="Tue, 16 Apr 2013 03:18:28 +0000"  >&lt;p&gt;Yujian, is it possible to get the mds &amp;amp; client debug log? Looks b2_1 has the similar problem (see &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-536&quot; title=&quot;recovery-mds-scale: (llite_lib.c:1142:ll_md_setattr()) md_setattr fails: rc = -30&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-536&quot;&gt;&lt;del&gt;LU-536&lt;/del&gt;&lt;/a&gt;), I checked the attached logs in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-536&quot; title=&quot;recovery-mds-scale: (llite_lib.c:1142:ll_md_setattr()) md_setattr fails: rc = -30&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-536&quot;&gt;&lt;del&gt;LU-536&lt;/del&gt;&lt;/a&gt;, but didn&apos;t find logs for the client &amp;amp; active mds.&lt;/p&gt;</comment>
                            <comment id="56378" author="yujian" created="Tue, 16 Apr 2013 05:22:48 +0000"  >&lt;blockquote&gt;&lt;p&gt;Yujian, is it possible to get the mds &amp;amp; client debug log?&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Hi Niu, the debug logs are in /scratch/logs/2.4.0/recovery-mds-scale.test_failover_mds.debug_log.tar.bz2 on the brent node. The debug level is -1.&lt;/p&gt;

&lt;p&gt;The Maloo report is &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/3d09403c-a5f9-11e2-b0a9-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/3d09403c-a5f9-11e2-b0a9-52540035b04c&lt;/a&gt;.&lt;/p&gt;</comment>
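<!--
Editor's note: "the debug level is -1" above means all Lustre debug masks were enabled.
A hedged sketch of how such logs are typically collected with lctl (the dump path is
illustrative, not the archive path named in the comment):

    # enable full debugging, reproduce the failure, then dump the kernel debug buffer
    lctl set_param debug=-1
    lctl dk /tmp/recovery-mds-scale.debug_log
-->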
                            <comment id="56384" author="niu" created="Tue, 16 Apr 2013 08:59:38 +0000"  >&lt;p&gt;Thank you, Yujian. Unfortunately, looks the log for MDS is truncated, what we got is after testing log: mds (wtm-82) log starts from 1366038913, but the last -EROFS seen on client (wtm-79) log is at 1366038775.&lt;/p&gt;</comment>
                            <comment id="56389" author="yujian" created="Tue, 16 Apr 2013 13:34:01 +0000"  >&lt;blockquote&gt;&lt;p&gt;Unfortunately, looks the log for MDS is truncated, what we got is after testing log: mds (wtm-82) log starts from 1366038913, but the last -EROFS seen on client (wtm-79) log is at 1366038775.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;This is because of the hard failure mode, i.e., wtm-82 was powered off and on during the testing, so the debug log on wtm-82 was gathered after it came back up.&lt;/p&gt;</comment>
                            <comment id="56463" author="yujian" created="Wed, 17 Apr 2013 14:38:56 +0000"  >&lt;p&gt;Lustre Branch: master&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-master/1406/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-master/1406/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A new test result: &lt;a href=&quot;https://maloo.whamcloud.com/test_sessions/ec1c08ae-a737-11e2-b3cc-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sessions/ec1c08ae-a737-11e2-b3cc-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="56530" author="niu" created="Thu, 18 Apr 2013 03:26:09 +0000"  >&lt;p&gt;Thank you, Yujian!&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;Shutting down system logger: [  OK  ]

Stopping iscsi: sd 17:0:0:1: [sdm] Synchronizing SCSI cache
sd 9:0:0:1: [sdn] Synchronizing SCSI cache
sd 16:0:0:1: [sdg] Synchronizing SCSI cache
sd 12:0:0:1: [sdk] Synchronizing SCSI cache
sd 13:0:0:1: [sdf] Synchronizing SCSI cache
sd 14:0:0:1: [sdj] Synchronizing SCSI cache
sd 15:0:0:1: [sdh] Synchronizing SCSI cache
sd 10:0:0:1: [sdl] Synchronizing SCSI cache
sd 11:0:0:1: [sde] Synchronizing SCSI cache
Aborting journal on device sdi-8.
JBD2: I/O error detected when updating journal superblock for sdi-8.
LustreError: 9599:0:(osd_handler.c:636:osd_trans_commit_cb()) transaction @0xffff88061c03e780 commit error: 2
LDISKFS-fs error (device sdi): ldiskfs_journal_start_sb: Detected aborted journal
LDISKFS-fs (sdi): Remounting filesystem read-only
LustreError: 9599:0:(osd_handler.c:636:osd_trans_commit_cb()) transaction @0xffff880c18e70880 commit error: 2
journal commit I/O error
journal commit I/O error
LDISKFS-fs error (device sdi) in osd_trans_stop: IO failure
LustreError: 12393:0:(osd_handler.c:846:osd_trans_stop()) Failure to stop transaction: -5
LDISKFS-fs error (device sdi): ldiskfs_find_entry: 
sd 8:0:0:1: [sdi] Synchronizing SCSI cache
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;When rebooting the MDS, iscsi was stopped before the MDS target was unmounted; that&apos;s why the EROFS happened. Maybe there is something wrong in the (iscsi) shutdown script? I think all filesystems should be unmounted before iscsi is stopped.&lt;/p&gt;</comment>
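<!--
Editor's note: the comment above attributes the EROFS to shutdown ordering: stopping the
iscsi service while the ldiskfs target is still mounted aborts the journal and the device
is remounted read-only, as in the console log quoted in the same comment. A hedged sketch
of the ordering being suggested (the service name is the RHEL6 init script; the mount
point is illustrative):

    # unmount the Lustre target first, then tear down the iSCSI sessions
    umount /mnt/mds1
    service iscsi stop
-->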
                            <comment id="56542" author="yujian" created="Thu, 18 Apr 2013 14:46:15 +0000"  >&lt;p&gt;The &quot;pm -h powerman --off&quot; command on Rosso cluster did not power off the physical test node directly, it just gracefully brought down the test node in a safe way (like shutdown command). On vm nodes, the &quot;pm -h powerman --off&quot; command worked correctly, which was the reason why autotest did not hit the issue in this ticket.&lt;/p&gt;

&lt;p&gt;After I changed to the &quot;--reset&quot; option, which really powered off the test node, I did not hit the issue again.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvo47:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>7735</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>