<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:11:29 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-7737] osd_handler.c:2777:osd_object_destroy()) ASSERTION( !lu_object_is_dying(dt-&gt;do_lu.lo_header) </title>
                <link>https://jira.whamcloud.com/browse/LU-7737</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;This error might be a duplicate of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6699&quot; title=&quot;LustreError: 7605:0:(osd_handler.c:2530:osd_object_destroy()) ASSERTION&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6699&quot;&gt;&lt;del&gt;LU-6699&lt;/del&gt;&lt;/a&gt;. Anyway, as the bug occurred in conjunction with llog errors, it might be related to the latest changes in DNE (change 16838).&lt;/p&gt;

&lt;p&gt;The error happens during soak testing of build &apos;20160203&apos; (see: &lt;a href=&quot;https://wiki.hpdd.intel.com/display/Releases/Soak+Testing+on+Lola#SoakTestingonLola-20160203&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://wiki.hpdd.intel.com/display/Releases/Soak+Testing+on+Lola#SoakTestingonLola-20160203&lt;/a&gt;). DNE is enabled. MDTs had been formatted with &lt;em&gt;ldiskfs&lt;/em&gt;, OSTs with &lt;em&gt;zfs&lt;/em&gt;. MDSes are configured in an active-active HA failover configuration.&lt;/p&gt;

&lt;p&gt;The configuration for the HA pair in question reads as:&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;&lt;tt&gt;lola-8&lt;/tt&gt; - mdt-0, 1 (primary resources)&lt;/li&gt;
	&lt;li&gt;&lt;tt&gt;lola-9&lt;/tt&gt; - mdt-2, 3 (primary resources)&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;During the umount (failback of resources) of mdt-3 on &lt;tt&gt;lola-8&lt;/tt&gt;, the &lt;br/&gt;
node crashed with an LBUG:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;&amp;lt;0&amp;gt;LustreError: 5861:0:(osd_handler.c:2777:osd_object_destroy()) ASSERTION( !lu_object_is_dying(dt-&amp;gt;do_lu.lo_header) ) fail
ed: 
&amp;lt;0&amp;gt;LustreError: 5861:0:(osd_handler.c:2777:osd_object_destroy()) LBUG
&amp;lt;4&amp;gt;Pid: 5861, comm: umount
&amp;lt;4&amp;gt;
&amp;lt;4&amp;gt;Call Trace:
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0772875&amp;gt;] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0772e77&amp;gt;] lbug_with_loc+0x47/0xb0 [libcfs]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa1060fbb&amp;gt;] osd_object_destroy+0x52b/0x5b0 [osd_ldiskfs]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa105e42d&amp;gt;] ? osd_object_ref_del+0x22d/0x4e0 [osd_ldiskfs]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0851dda&amp;gt;] llog_osd_destroy+0x1ba/0x9e0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa08417a6&amp;gt;] llog_destroy+0x2b6/0x470 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa08438cb&amp;gt;] llog_cat_close+0x17b/0x220 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa12b04e7&amp;gt;] lod_sub_fini_llog+0xb7/0x380 [lod]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8109ec20&amp;gt;] ? autoremove_wake_function+0x0/0x40
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa12b35c4&amp;gt;] lod_process_config+0xbc4/0x1830 [lod]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa111361f&amp;gt;] ? lfsck_stop+0x15f/0x4c0 [lfsck]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8117523c&amp;gt;] ? __kmalloc+0x21c/0x230
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8109ec20&amp;gt;] ? autoremove_wake_function+0x0/0x40
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa1331474&amp;gt;] mdd_process_config+0x114/0x5d0 [mdd]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa11db55e&amp;gt;] mdt_device_fini+0x3ee/0xf40 [mdt]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0860406&amp;gt;] ? class_disconnect_exports+0x116/0x2f0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa087a552&amp;gt;] class_cleanup+0x572/0xd20 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa085b0c6&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa087cbd6&amp;gt;] class_process_config+0x1ed6/0x2830 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa077dd01&amp;gt;] ? libcfs_debug_msg+0x41/0x50 [libcfs]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8117523c&amp;gt;] ? __kmalloc+0x21c/0x230
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa087d9ef&amp;gt;] class_manual_cleanup+0x4bf/0x8e0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa085b0c6&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa08b610c&amp;gt;] server_put_super+0xa0c/0xed0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff811ac776&amp;gt;] ? invalidate_inodes+0xf6/0x190
&amp;lt;4&amp;gt; [&amp;lt;ffffffff81190b7b&amp;gt;] generic_shutdown_super+0x5b/0xe0
&amp;lt;4&amp;gt; [&amp;lt;ffffffff81190c66&amp;gt;] kill_anon_super+0x16/0x60
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa08808a6&amp;gt;] lustre_kill_super+0x36/0x60 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff81191407&amp;gt;] deactivate_super+0x57/0x80
&amp;lt;4&amp;gt; [&amp;lt;ffffffff811b10df&amp;gt;] mntput_no_expire+0xbf/0x110
&amp;lt;4&amp;gt; [&amp;lt;ffffffff811b1c2b&amp;gt;] sys_umount+0x7b/0x3a0
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8100b0d2&amp;gt;] system_call_fastpath+0x16/0x1b
&amp;lt;4&amp;gt;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Also, immediately before the crash, the following errors were reported on &lt;tt&gt;lola-8&lt;/tt&gt;:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Feb  3 10:51:27 lola-8 kernel: LustreError: 5733:0:(llog.c:588:llog_process_thread()) soaked-MDT0006-osp-MDT0003 retry remo
te llog process
Feb  3 10:51:27 lola-8 kernel: LustreError: 5733:0:(lod_dev.c:419:lod_sub_recovery_thread()) soaked-MDT0006-osp-MDT0003 get
ting update log failed: rc = -11
Feb  3 10:51:27 lola-8 kernel: LustreError: 5727:0:(llog.c:595:llog_process_thread()) Local llog found corrupted
Feb  3 10:51:27 lola-8 kernel: LustreError: 5730:0:(osp_object.c:588:osp_attr_get()) soaked-MDT0002-osp-MDT0003:osp_attr_ge
t update error [0x200000009:0x2:0x0]: rc = -5
Feb  3 10:51:27 lola-8 kernel: LustreError: 5730:0:(lod_sub_object.c:959:lod_sub_prep_llog()) soaked-MDT0003-mdtlov: can&apos;t get id from catalogs: rc = -5
Feb  3 10:51:28 lola-8 kernel: LustreError: 5727:0:(llog.c:595:llog_process_thread()) Local llog found corrupted
Feb  3 10:51:28 lola-8 kernel: LustreError: 5727:0:(llog.c:595:llog_process_thread()) Skipped 1 previous similar message
Feb  3 10:51:29 lola-8 kernel: LustreError: 5727:0:(llog.c:595:llog_process_thread()) Local llog found corrupted
Feb  3 10:51:29 lola-8 kernel: LustreError: 5727:0:(llog.c:595:llog_process_thread()) Skipped 1 previous similar message
Feb  3 10:51:31 lola-8 kernel: Lustre: soaked-MDT0003: Not available for connect from 0@lo (stopping)
Feb  3 10:51:32 lola-8 kernel: LustreError: 5727:0:(llog.c:595:llog_process_thread()) Local llog found corrupted
Feb  3 10:51:32 lola-8 kernel: LustreError: 5727:0:(llog.c:595:llog_process_thread()) Skipped 2 previous similar messages
Feb  3 10:51:35 lola-8 kernel: LustreError: 5861:0:(osd_handler.c:3291:osd_object_ref_del()) soaked-MDT0003-osd: nlink == 0 on [0x2c00042a3:0x15ec5:0x0], maybe an upgraded file? (LU-3915)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;The sequence of events is:&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;2016-02-03 10:42:39  -  failover started for &lt;tt&gt;lola-9&lt;/tt&gt;&lt;/li&gt;
	&lt;li&gt;2016-02-03 10:42:39  -  &lt;tt&gt;lola-9&lt;/tt&gt; online again&lt;/li&gt;
	&lt;li&gt;2016-02-03 10:51:26  -  Failback of resources (umount mdt-3)&lt;/li&gt;
	&lt;li&gt;2016-02-03 10:51:35  -  &lt;tt&gt;lola-8&lt;/tt&gt; hit LBUG&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;Attached files: &lt;br/&gt;
&lt;tt&gt;lola-8&lt;/tt&gt; messages, console, vmcore-dmesg.txt&lt;br/&gt;
soak.log (for injected errors)&lt;br/&gt;
Note:&lt;br/&gt;
A crash dump has been created. I&apos;ll add the info about the storage location as soon as the ticket is created.&lt;/p&gt;

&lt;p&gt;Info required for matching: sanity-quota 7c&lt;/p&gt;</description>
                <environment>lola&lt;br/&gt;
build: &lt;a href=&quot;https://build.hpdd.intel.com/job/lustre-reviews/37226/&quot;&gt;https://build.hpdd.intel.com/job/lustre-reviews/37226/&lt;/a&gt;</environment>
        <key id="34484">LU-7737</key>
            <summary>osd_handler.c:2777:osd_object_destroy()) ASSERTION( !lu_object_is_dying(dt-&gt;do_lu.lo_header) </summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="di.wang">Di Wang</assignee>
                                    <reporter username="heckes">Frank Heckes</reporter>
                        <labels>
                            <label>dne2</label>
                            <label>soak</label>
                    </labels>
                <created>Thu, 4 Feb 2016 11:00:17 +0000</created>
                <updated>Fri, 25 Sep 2020 23:46:07 +0000</updated>
                            <resolved>Thu, 11 Feb 2016 14:53:09 +0000</resolved>
                                    <version>Lustre 2.8.0</version>
                                    <fixVersion>Lustre 2.8.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>10</watches>
                                                                            <comments>
                            <comment id="141148" author="heckes" created="Thu, 4 Feb 2016 11:10:58 +0000"  >&lt;p&gt;crash dump can be found at: &lt;tt&gt;lhn.lola.hpdd.intel.com:/scratch/crashdumps/lu-7737/lola-8/127.0.0.1-2016-02-03-10:51:51/&lt;/tt&gt;&lt;/p&gt;</comment>
                            <comment id="141220" author="jgmitter" created="Thu, 4 Feb 2016 18:35:57 +0000"  >&lt;p&gt;Hi Di,&lt;br/&gt;
Can you comment on this one?&lt;br/&gt;
Thanks.&lt;br/&gt;
Joe&lt;/p&gt;</comment>
                            <comment id="141231" author="gerrit" created="Thu, 4 Feb 2016 19:29:42 +0000"  >&lt;p&gt;wangdi (di.wang@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/18308&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/18308&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7737&quot; title=&quot;osd_handler.c:2777:osd_object_destroy()) ASSERTION( !lu_object_is_dying(dt-&amp;gt;do_lu.lo_header) &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7737&quot;&gt;&lt;del&gt;LU-7737&lt;/del&gt;&lt;/a&gt; lod: not return -EIO during process updates log&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 56050a1fd3e41b769a9b14ac8adfa128939215a3&lt;/p&gt;</comment>
                            <comment id="141232" author="di.wang" created="Thu, 4 Feb 2016 19:31:40 +0000"  >&lt;p&gt;According to the log, it seems the log records are incorrectly deleted during umount when the target is doing recovery. &lt;/p&gt;</comment>
                            <comment id="141375" author="jamesanunez" created="Fri, 5 Feb 2016 17:09:56 +0000"  >&lt;p&gt;We just started seeing this on master. replay-single test 102c is failing on unmount of MDS1 in review-dne-part-2.&lt;/p&gt;

&lt;p&gt;Here are the logs:&lt;br/&gt;
2016-02-04 08:17:35 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/c7aac802-cb65-11e5-a59a-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/c7aac802-cb65-11e5-a59a-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-02-04 09:26:36 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/1bfe48b2-cb74-11e5-be8d-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/1bfe48b2-cb74-11e5-be8d-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-02-06 11:49:35 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/8cca149e-cd0f-11e5-8b0e-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/8cca149e-cd0f-11e5-8b0e-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-02-07 21:58:41 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/107745d8-ce36-11e5-af15-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/107745d8-ce36-11e5-af15-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-02-07 21:59:13 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/1d39d220-ce33-11e5-876a-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/1d39d220-ce33-11e5-876a-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-02-07 22:23:26 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/3b006fae-ce37-11e5-90aa-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/3b006fae-ce37-11e5-90aa-5254006e85c2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And in sanity-quota test_7c in review-dne-part-2. Logs at:&lt;br/&gt;
2016-02-04 11:52:10 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/e1d55ec6-cb70-11e5-a59a-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/e1d55ec6-cb70-11e5-a59a-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-02-04 14:52:30 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/a5588724-cb8a-11e5-b49e-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/a5588724-cb8a-11e5-b49e-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-02-04 19:18:03 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/dc1de50a-cbae-11e5-be8d-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/dc1de50a-cbae-11e5-be8d-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-02-04 21:16:36 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/cc8b486c-cbbd-11e5-b2cb-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/cc8b486c-cbbd-11e5-b2cb-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-02-06 03:40:20 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/6d50e5cc-cccc-11e5-8b0e-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/6d50e5cc-cccc-11e5-8b0e-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-02-06 11:22:59 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/cf38a7e2-ccff-11e5-963e-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/cf38a7e2-ccff-11e5-963e-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-02-06 11:25:21 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/8b554926-cd00-11e5-963e-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/8b554926-cd00-11e5-963e-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-02-06 20:01:18 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/6f228e60-cd48-11e5-b1fa-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/6f228e60-cd48-11e5-b1fa-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-02-07 04:05:05 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/95528d4e-cd87-11e5-8c5d-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/95528d4e-cd87-11e5-8c5d-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-02-07 10:14:30 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/d1108a5c-cdbe-11e5-9bc0-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/d1108a5c-cdbe-11e5-9bc0-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="141482" author="bogl" created="Sat, 6 Feb 2016 16:42:42 +0000"  >&lt;p&gt;another on master:&lt;br/&gt;
2016-02-06 02:57:54 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/edc883e8-ccba-11e5-b80c-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/edc883e8-ccba-11e5-b80c-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="141505" author="rhenwood" created="Mon, 8 Feb 2016 15:13:39 +0000"  >&lt;p&gt;One more failure on master (from the canary patch) over the weekend:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/66f80856-cddd-11e5-9bc0-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/66f80856-cddd-11e5-9bc0-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="141506" author="rhenwood" created="Mon, 8 Feb 2016 15:20:27 +0000"  >&lt;p&gt;And another over the weekend:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/8c4963c2-cc3c-11e5-b2cb-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/8c4963c2-cc3c-11e5-b2cb-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="141511" author="bogl" created="Mon, 8 Feb 2016 15:58:42 +0000"  >&lt;p&gt;another on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/6f228e60-cd48-11e5-b1fa-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/6f228e60-cd48-11e5-b1fa-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="141582" author="bzzz" created="Tue, 9 Feb 2016 12:53:49 +0000"  >&lt;p&gt;I think we should also avoid the case where we try to destroy the llog twice: in llog_process() (due to cancels on an error) and in llog_cat_close(). The patch is coming.&lt;/p&gt;</comment>
                            <comment id="141583" author="gerrit" created="Tue, 9 Feb 2016 13:03:27 +0000"  >&lt;p&gt;Alex Zhuravlev (alexey.zhuravlev@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/18362&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/18362&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7737&quot; title=&quot;osd_handler.c:2777:osd_object_destroy()) ASSERTION( !lu_object_is_dying(dt-&amp;gt;do_lu.lo_header) &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7737&quot;&gt;&lt;del&gt;LU-7737&lt;/del&gt;&lt;/a&gt; llog: do not destroy llog twice&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 866689144744cae95116b69a992abbfaca806517&lt;/p&gt;</comment>
                            <comment id="141596" author="bogl" created="Tue, 9 Feb 2016 14:28:58 +0000"  >&lt;p&gt;another on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/f9c7e7e6-cee1-11e5-b578-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/f9c7e7e6-cee1-11e5-b578-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="141662" author="gerrit" created="Tue, 9 Feb 2016 19:03:04 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/18308/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/18308/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7737&quot; title=&quot;osd_handler.c:2777:osd_object_destroy()) ASSERTION( !lu_object_is_dying(dt-&amp;gt;do_lu.lo_header) &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7737&quot;&gt;&lt;del&gt;LU-7737&lt;/del&gt;&lt;/a&gt; lod: not return -EIO during process updates log&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 028e65b03dac9497256978d2266acb8c20b48a99&lt;/p&gt;</comment>
                            <comment id="141758" author="pjones" created="Wed, 10 Feb 2016 14:22:57 +0000"  >&lt;p&gt;Landed for 2.8&lt;/p&gt;</comment>
                            <comment id="141759" author="pjones" created="Wed, 10 Feb 2016 14:24:01 +0000"  >&lt;p&gt;Oh! There is a second patch - sorry&lt;/p&gt;</comment>
                            <comment id="141965" author="bzzz" created="Thu, 11 Feb 2016 14:25:33 +0000"  >&lt;p&gt;&lt;a href=&quot;http://review.whamcloud.com/18362&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/18362&lt;/a&gt; got another ticket - &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7772&quot; title=&quot;catalogs shouldn&amp;#39;t destroy plain llogs twice&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7772&quot;&gt;&lt;del&gt;LU-7772&lt;/del&gt;&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="141970" author="pjones" created="Thu, 11 Feb 2016 14:53:09 +0000"  >&lt;p&gt;Thanks, Alex, so I&apos;ll re-resolve this &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/smile.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10322">
                    <name>Gantt End to Start</name>
                                                                <inwardlinks description="has to be done after">
                                        <issuelink>
            <issuekey id="34151">LU-7680</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="22652">LU-4448</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="34151">LU-7680</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="20310" name="console-lola-8.log.bz2" size="118474" author="heckes" created="Thu, 4 Feb 2016 11:25:52 +0000"/>
                            <attachment id="20311" name="messages-lola-8.log.bz2" size="80072" author="heckes" created="Thu, 4 Feb 2016 11:25:52 +0000"/>
                            <attachment id="20312" name="soak.log.bz2" size="42982" author="heckes" created="Thu, 4 Feb 2016 11:25:52 +0000"/>
                            <attachment id="20313" name="vmcore-dmesg.txt.bz2" size="27409" author="heckes" created="Thu, 4 Feb 2016 11:25:52 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzy09r:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>