<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:51:34 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-5447] MGS fails to mount after 1.8 to 2.4.3 upgrade: checking for existing Lustre data: not found</title>
                <link>https://jira.whamcloud.com/browse/LU-5447</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;After upgrading our lustre servers to lustre-2.4.3 (the exact branch can be seen at the github link below), we are not able to start the filesystem; it fails at the first mount of the MGT with the messages below. A debugfs &apos;stats&apos; output is attached.&lt;/p&gt;

&lt;p&gt;The only difference I am able to notice between this filesystem and another that was successfully upgraded to 2.4.3 from 1.8.9 is that this one has the flag &quot;update&quot; in the tunefs.lustre output. Is there a particular meaning to that?&lt;/p&gt;

&lt;p&gt;We mounted with e2fsprogs-1.42.9 first, and then downgraded to e2fsprogs-1.42.7 and still noticed the same result. The system was last mounted as a 1.8.9 filesystem and was cleanly unmounted. The multipath configuration would have changed slightly in the rhel5 to rhel6 transition, but the block device is still readable by debugfs.&lt;/p&gt;

&lt;p&gt;Is this related to the index not being assigned in 1.8? There were several related jira tickets, but they all appear to have been resolved in 2.4.0.&lt;/p&gt;

&lt;p&gt;We are working on a public repo of our branch. This should be it, but the one we are running has the patch for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5284&quot; title=&quot;GPF in radix_tree_lookup_slot on OSS&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5284&quot;&gt;&lt;del&gt;LU-5284&lt;/del&gt;&lt;/a&gt;:&lt;a href=&quot;http://review.whamcloud.com/#/c/11136/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/11136/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our repo:&lt;br/&gt;
&lt;a href=&quot;https://github.com/ORNL-TechInt/lustre/commits/master&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://github.com/ORNL-TechInt/lustre/commits/master&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@sns-mds1 ~&amp;#93;&lt;/span&gt;# mount -vt lustre /dev/mpath/snsfs-mgt /tmp/lustre/snsfs/sns-mgs&lt;br/&gt;
arg&lt;span class=&quot;error&quot;&gt;&amp;#91;0&amp;#93;&lt;/span&gt; = /sbin/mount.lustre&lt;br/&gt;
arg&lt;span class=&quot;error&quot;&gt;&amp;#91;1&amp;#93;&lt;/span&gt; = -v&lt;br/&gt;
arg&lt;span class=&quot;error&quot;&gt;&amp;#91;2&amp;#93;&lt;/span&gt; = -o&lt;br/&gt;
arg&lt;span class=&quot;error&quot;&gt;&amp;#91;3&amp;#93;&lt;/span&gt; = rw&lt;br/&gt;
arg&lt;span class=&quot;error&quot;&gt;&amp;#91;4&amp;#93;&lt;/span&gt; = /dev/mpath/snsfs-mgt&lt;br/&gt;
arg&lt;span class=&quot;error&quot;&gt;&amp;#91;5&amp;#93;&lt;/span&gt; = /tmp/lustre/snsfs/sns-mgs&lt;br/&gt;
source = /dev/mpath/snsfs-mgt (/dev/mpath/snsfs-mgt), target = /tmp/lustre/snsfs/sns-mgs&lt;br/&gt;
options = rw&lt;br/&gt;
checking for existing Lustre data: not found&lt;br/&gt;
mount.lustre: /dev/mpath/snsfs-mgt has not been formatted with mkfs.lustre or the backend filesystem type is not supported by this tool&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@sns-mds1 ~&amp;#93;&lt;/span&gt;# tunefs.lustre --dryrun /dev/mapper/snsfs-mgt&lt;br/&gt;
checking for existing Lustre data: found&lt;br/&gt;
Reading CONFIGS/mountdata&lt;/p&gt;

&lt;p&gt;   Read previous values:&lt;br/&gt;
Target:     MGS&lt;br/&gt;
Index:      unassigned&lt;br/&gt;
Lustre FS:  lustre&lt;br/&gt;
Mount type: ldiskfs&lt;br/&gt;
Flags:      0x54&lt;br/&gt;
              (MGS needs_index update )&lt;br/&gt;
Persistent mount opts: iopen_nopriv,user_xattr,errors=remount-ro&lt;br/&gt;
Parameters:&lt;/p&gt;


&lt;p&gt;   Permanent disk data:&lt;br/&gt;
Target:     MGS&lt;br/&gt;
Index:      unassigned&lt;br/&gt;
Lustre FS:  lustre&lt;br/&gt;
Mount type: ldiskfs&lt;br/&gt;
Flags:      0x44&lt;br/&gt;
              (MGS update )&lt;br/&gt;
Persistent mount opts: iopen_nopriv,user_xattr,errors=remount-ro&lt;br/&gt;
Parameters:&lt;/p&gt;

&lt;p&gt;exiting before disk write.&lt;/p&gt;
</description>
                <environment>RHEL6.5&lt;br/&gt;
kernel-2.6.32-358.23.2.el6&lt;br/&gt;
e2fsprogs-1.42.7.wc1-7.el6.x86_64</environment>
        <key id="25855">LU-5447</key>
            <summary>MGS fails to mount after 1.8 to 2.4.3 upgrade: checking for existing Lustre data: not found</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="green">Oleg Drokin</assignee>
                                    <reporter username="blakecaldwell">Blake Caldwell</reporter>
                        <labels>
                    </labels>
                <created>Mon, 4 Aug 2014 15:25:30 +0000</created>
                <updated>Fri, 8 Aug 2014 23:16:46 +0000</updated>
                            <resolved>Fri, 8 Aug 2014 23:16:45 +0000</resolved>
                                    <version>Lustre 2.4.3</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>6</watches>
                                                                            <comments>
                            <comment id="90694" author="jfc" created="Mon, 4 Aug 2014 15:36:51 +0000"  >&lt;p&gt;We are locating an engineer to take a look at this problem.&lt;br/&gt;
~ jfc.&lt;/p&gt;</comment>
                            <comment id="90706" author="green" created="Mon, 4 Aug 2014 16:17:00 +0000"  >&lt;p&gt;Looking at the mount util code, the decision behind what is printed after &quot;checking for existing Lustre data&quot; is:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;int ldiskfs_is_lustre(char *dev, unsigned *mount_type)
{
        int ret;

        ret = file_in_dev(MOUNT_DATA_FILE, dev);
        if (ret) {
                /* in the -1 case, &apos;extents&apos; means IS a lustre target */
                *mount_type = LDD_MT_LDISKFS;
                return 1;
        }

        ret = file_in_dev(LAST_RCVD, dev);
        if (ret) {
                *mount_type = LDD_MT_LDISKFS;
                return 1;
        }

        return 0;
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;But the strangest thing is that the same check works from one tool but not from the other.&lt;br/&gt;
I cannot help but notice that the mount command was given a different path: /dev/mpath/snsfs-mgt, whereas tunefs was given /dev/mapper/snsfs-mgt&lt;/p&gt;

&lt;p&gt;As a first step, can you please check whether /dev/mapper/snsfs-mgt also fails with mount?&lt;/p&gt;</comment>
                            <comment id="90708" author="jfc" created="Mon, 4 Aug 2014 16:20:24 +0000"  >&lt;p&gt;Thank you Oleg for jumping in.&lt;br/&gt;
~ jfc.&lt;/p&gt;</comment>
                            <comment id="90714" author="blakecaldwell" created="Mon, 4 Aug 2014 16:42:57 +0000"  >&lt;p&gt;Thank you! Forgive my blind omission... /dev/mapper allowed the MGT to mount! Except now we hit &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4743&quot; title=&quot;soft lockup in lustre 2.4.2&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4743&quot;&gt;&lt;del&gt;LU-4743&lt;/del&gt;&lt;/a&gt;. I am uploading log messages. Could you advise whether the patch that landed for 2.5.2 can be backported to 2.4.3?&lt;br/&gt;
&lt;a href=&quot;http://review.whamcloud.com/#/c/9574/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/9574/&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="90717" author="green" created="Mon, 4 Aug 2014 16:52:54 +0000"  >&lt;p&gt;The patch is trivial and cleanly cherry-picks into my b2_4 tree, so I expect it to work in your tree as well.&lt;/p&gt;</comment>
                            <comment id="90718" author="blakecaldwell" created="Mon, 4 Aug 2014 16:59:33 +0000"  >&lt;p&gt;Sounds good. Sorry to be pedantic, but is there any way to identify the obsoleted record type 10612401 that will be skipped?&lt;/p&gt;</comment>
                            <comment id="90720" author="green" created="Mon, 4 Aug 2014 17:12:25 +0000"  >&lt;p&gt;That used to be a setattr record that is now deprecated:&lt;/p&gt;

&lt;p&gt;        /* MDS_SETATTR_REC      = LLOG_OP_MAGIC | 0x12401, obsolete 1.8.0 */&lt;/p&gt;</comment>
                            <comment id="90722" author="jfc" created="Mon, 4 Aug 2014 17:18:59 +0000"  >&lt;p&gt;Blake,&lt;br/&gt;
May we mark this ticket as resolved?&lt;br/&gt;
Or, if you want us to keep it open a while longer, can I downgrade the severity level?&lt;/p&gt;

&lt;p&gt;Many thanks,&lt;br/&gt;
~ jfc.&lt;/p&gt;</comment>
                            <comment id="90726" author="blakecaldwell" created="Mon, 4 Aug 2014 17:24:53 +0000"  >&lt;p&gt;Thank you. Yes, please lower the severity level. I&apos;ll update this ticket once we&apos;ve applied the LU-4743 patch&lt;/p&gt;</comment>
                            <comment id="90728" author="jfc" created="Mon, 4 Aug 2014 17:29:39 +0000"  >&lt;p&gt;Done &amp;#8211; thanks Blake.&lt;br/&gt;
~ jfc.&lt;/p&gt;</comment>
                            <comment id="90733" author="blakecaldwell" created="Mon, 4 Aug 2014 19:01:13 +0000"  >&lt;p&gt;It mounted successfully! Thanks for your help. This ticket can be closed.&lt;/p&gt;</comment>
                            <comment id="90734" author="jfc" created="Mon, 4 Aug 2014 19:03:53 +0000"  >&lt;p&gt;Excellent! Thank you Blake, and thank you Oleg.&lt;/p&gt;

&lt;p&gt;Best regards,&lt;br/&gt;
~ jfc.&lt;/p&gt;</comment>
                            <comment id="90755" author="blakecaldwell" created="Mon, 4 Aug 2014 22:42:26 +0000"  >&lt;p&gt;However now we have an LBUG on MDT unmount that appears to be caused by this patch. This is a similar situation to &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5188&quot; title=&quot;nbp6-OST002f-osc-MDT0000: invalid setattr record, lsr_valid:0&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5188&quot;&gt;&lt;del&gt;LU-5188&lt;/del&gt;&lt;/a&gt; that caused &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5244&quot; title=&quot;conf-sanity test_32b: osp_sync_thread()) ASSERTION( count &amp;lt; 10 ) &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5244&quot;&gt;&lt;del&gt;LU-5244&lt;/del&gt;&lt;/a&gt; (the error below).&lt;/p&gt;

&lt;p&gt;Could this get attention as to the possibility of a backport? While it only happens on unmount during testing, the concern is that it could happen under load. We are aiming for a return to production tomorrow am.&lt;/p&gt;

&lt;p&gt;Aug  4 18:29:32 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10346.158586&amp;#93;&lt;/span&gt; Lustre: Failing over snsfs-MDT0000&lt;br/&gt;
Aug  4 18:29:38 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10352.174342&amp;#93;&lt;/span&gt; Lustre: 20277:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1407191372/real 1407191372&amp;#93;&lt;/span&gt;  req@ffff8808054b5000 x1475540343371484/t0(0) o9-&amp;gt;snsfs-OST0033-osc@128.219.249.38@tcp:28/4 lens 224/224 e 0 to 1 dl 1407191378 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1&lt;br/&gt;
Aug  4 18:29:38 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10352.202412&amp;#93;&lt;/span&gt; Lustre: 20277:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 58 previous similar messages&lt;br/&gt;
Aug  4 18:29:44 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10358.215880&amp;#93;&lt;/span&gt; Lustre: 20277:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1407191378/real 1407191378&amp;#93;&lt;/span&gt;  req@ffff880805bce400 x1475540343371524/t0(0) o9-&amp;gt;snsfs-OST0034-osc@128.219.249.35@tcp:28/4 lens 224/224 e 0 to 1 dl 1407191384 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1&lt;br/&gt;
Aug  4 18:29:45 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10359.063530&amp;#93;&lt;/span&gt; Lustre: 10969:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1407191373/real 1407191373&amp;#93;&lt;/span&gt;  req@ffff880415a31000 x1475540343371488/t0(0) o13-&amp;gt;snsfs-OST0038-osc@128.219.249.35@tcp:7/4 lens 224/368 e 0 to 1 dl 1407191385 ref 1 fl Rpc:X/0/ffffffff rc 0/-1&lt;br/&gt;
Aug  4 18:29:45 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10359.065516&amp;#93;&lt;/span&gt; Lustre: snsfs-OST003b-osc: Connection to snsfs-OST003b (at 128.219.249.38@tcp) was lost; in progress operations using this service will wait for recovery to complete&lt;br/&gt;
Aug  4 18:29:45 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10359.065521&amp;#93;&lt;/span&gt; Lustre: Skipped 12 previous similar messages&lt;br/&gt;
Aug  4 18:29:45 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10359.112859&amp;#93;&lt;/span&gt; Lustre: 10969:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 3 previous similar messages&lt;br/&gt;
Aug  4 18:29:50 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10364.247409&amp;#93;&lt;/span&gt; Lustre: 20277:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1407191384/real 1407191384&amp;#93;&lt;/span&gt;  req@ffff880805bce400 x1475540343371532/t0(0) o9-&amp;gt;snsfs-OST0035-osc@128.219.249.36@tcp:28/4 lens 224/224 e 0 to 1 dl 1407191390 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1&lt;br/&gt;
Aug  4 18:30:40 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10414.341707&amp;#93;&lt;/span&gt; LustreError: 11515:0:(osp_sync.c:885:osp_sync_thread()) ASSERTION( count &amp;lt; 10 ) failed: snsfs-OST0001-osc: 2 2 empty&lt;br/&gt;
Aug  4 18:30:40 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10414.353399&amp;#93;&lt;/span&gt; LustreError: 11515:0:(osp_sync.c:885:osp_sync_thread()) LBUG&lt;br/&gt;
Aug  4 18:30:40 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10414.360167&amp;#93;&lt;/span&gt; Pid: 11515, comm: osp-syn-1&lt;br/&gt;
Aug  4 18:30:40 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10414.364061&amp;#93;&lt;/span&gt; &lt;br/&gt;
Aug  4 18:30:40 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10414.364062&amp;#93;&lt;/span&gt; Call Trace:&lt;br/&gt;
Aug  4 18:30:40 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10414.368132&amp;#93;&lt;/span&gt;  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa04af895&amp;gt;&amp;#93;&lt;/span&gt; libcfs_debug_dumpstack+0x55/0x80 &lt;span class=&quot;error&quot;&gt;&amp;#91;libcfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
Aug  4 18:30:40 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10414.375168&amp;#93;&lt;/span&gt;  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa04afe97&amp;gt;&amp;#93;&lt;/span&gt; lbug_with_loc+0x47/0xb0 &lt;span class=&quot;error&quot;&gt;&amp;#91;libcfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
Aug  4 18:30:40 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10414.381423&amp;#93;&lt;/span&gt;  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0f81f04&amp;gt;&amp;#93;&lt;/span&gt; osp_sync_thread+0x6d4/0x7e0 &lt;span class=&quot;error&quot;&gt;&amp;#91;osp&amp;#93;&lt;/span&gt;&lt;br/&gt;
Aug  4 18:30:40 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10414.387758&amp;#93;&lt;/span&gt;  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff81063b80&amp;gt;&amp;#93;&lt;/span&gt; ? default_wake_function+0x0/0x20&lt;br/&gt;
Aug  4 18:30:40 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10414.394006&amp;#93;&lt;/span&gt;  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0f81830&amp;gt;&amp;#93;&lt;/span&gt; ? osp_sync_thread+0x0/0x7e0 &lt;span class=&quot;error&quot;&gt;&amp;#91;osp&amp;#93;&lt;/span&gt;&lt;br/&gt;
Aug  4 18:30:40 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10414.400343&amp;#93;&lt;/span&gt;  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8100c0ca&amp;gt;&amp;#93;&lt;/span&gt; child_rip+0xa/0x20&lt;br/&gt;
Aug  4 18:30:40 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10414.405377&amp;#93;&lt;/span&gt;  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0f81830&amp;gt;&amp;#93;&lt;/span&gt; ? osp_sync_thread+0x0/0x7e0 &lt;span class=&quot;error&quot;&gt;&amp;#91;osp&amp;#93;&lt;/span&gt;&lt;br/&gt;
Aug  4 18:30:40 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10414.411712&amp;#93;&lt;/span&gt;  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0f81830&amp;gt;&amp;#93;&lt;/span&gt; ? osp_sync_thread+0x0/0x7e0 &lt;span class=&quot;error&quot;&gt;&amp;#91;osp&amp;#93;&lt;/span&gt;&lt;br/&gt;
Aug  4 18:30:40 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10414.418043&amp;#93;&lt;/span&gt;  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8100c0c0&amp;gt;&amp;#93;&lt;/span&gt; ? child_rip+0x0/0x20&lt;br/&gt;
Aug  4 18:30:40 sns-mds1.ornl.gov kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;10414.423244&amp;#93;&lt;/span&gt; &lt;/p&gt;</comment>
                            <comment id="90758" author="jfc" created="Mon, 4 Aug 2014 23:00:16 +0000"  >&lt;p&gt;Reopened due to reported new LBUG.&lt;br/&gt;
~ jfc.&lt;/p&gt;</comment>
                            <comment id="90759" author="green" created="Mon, 4 Aug 2014 23:04:26 +0000"  >&lt;p&gt;Hm, indeed, it looks like this is the case.&lt;/p&gt;

&lt;p&gt;I tried the &lt;a href=&quot;http://review.whamcloud.com/#/c/10828/4&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/10828/4&lt;/a&gt; and it also cleanly applies to b2_4, so you should be able to apply it as is.&lt;/p&gt;</comment>
                            <comment id="90762" author="blakecaldwell" created="Tue, 5 Aug 2014 00:57:38 +0000"  >&lt;p&gt;The patch was successful. Several mount/unmount cycles were completed without a hitch. Thanks Oleg and John! All done with this ticket.&lt;/p&gt;</comment>
                            <comment id="90764" author="jfc" created="Tue, 5 Aug 2014 01:03:22 +0000"  >&lt;p&gt;Thank you for this update Blake &amp;#8211; glad to see that things are working well.&lt;/p&gt;

&lt;p&gt;I&apos;ll leave this ticket &apos;as is&apos; for a few days, and then we can decide to mark it resolved, if no further problems come along.&lt;/p&gt;

&lt;p&gt;Best regards,&lt;br/&gt;
~ jfc.&lt;/p&gt;</comment>
                            <comment id="91240" author="jamesanunez" created="Fri, 8 Aug 2014 23:16:46 +0000"  >&lt;p&gt;ORNL applied patches for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4743&quot; title=&quot;soft lockup in lustre 2.4.2&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4743&quot;&gt;&lt;del&gt;LU-4743&lt;/del&gt;&lt;/a&gt; (&lt;a href=&quot;http://review.whamcloud.com/#/c/10624/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/10624/&lt;/a&gt;) and one of the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5188&quot; title=&quot;nbp6-OST002f-osc-MDT0000: invalid setattr record, lsr_valid:0&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5188&quot;&gt;&lt;del&gt;LU-5188&lt;/del&gt;&lt;/a&gt; patches (&lt;a href=&quot;http://review.whamcloud.com/#/c/10828/4&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/10828/4&lt;/a&gt; ) and this fixed the issues they were seeing. &lt;/p&gt;

&lt;p&gt;Confirmed with ORNL that we can close this ticket.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                            <attachment id="15459" name="debugfs.snsfs-mgt" size="790498" author="blakecaldwell" created="Mon, 4 Aug 2014 15:25:30 +0000"/>
                            <attachment id="15461" name="lustre_2.4.3_upgrade_kernel_logs.gz" size="1040619" author="blakecaldwell" created="Mon, 4 Aug 2014 16:47:15 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwsvr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>15165</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10021"><![CDATA[2]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>