<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:26:09 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
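For instance, assuming the standard JIRA issue-XML view path (an assumption; the base URL of this request is not stated in the feed), a request restricted to the key and summary of this issue might look like:
https://jira.whamcloud.com/si/jira.issueviews:issue-xml/LU-9433/LU-9433.xml?field=key&field=summary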
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-9433] sanity-scrub test_6: Error in dmesg detected</title>
                <link>https://jira.whamcloud.com/browse/LU-9433</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/cb12c60c-613a-44b3-bfef-03c0651d2607&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/cb12c60c-613a-44b3-bfef-03c0651d2607&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: This was also seen in v2.8: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8279&quot; title=&quot;sanity-scrub test_4b: @@@@@@ FAIL: Error in dmesg detected&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8279&quot;&gt;&lt;del&gt;LU-8279&lt;/del&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This sanity-scrub subtest failure in test_6 was followed by the same failure in the next ~375 subtests (in sanity-scrub, sanity-benchmark, sanity-lfsck, sanityn, and sanity-hsm).&lt;/p&gt;

&lt;p&gt;From test_log:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;CMD: onyx-48vm1.onyx.hpdd.intel.com,onyx-48vm2,onyx-48vm3,onyx-48vm7,onyx-48vm8 dmesg
Kernel error detected: [11155.947772] VFS: Busy inodes after unmount of dm-1. Self-destruct in 5 seconds.  Have a nice day...
 sanity-scrub test_6: @@@@@@ FAIL: Error in dmesg detected 
  Trace dump:
  = /usr/lib64/lustre/tests/test-framework.sh:4931:error()
  = /usr/lib64/lustre/tests/test-framework.sh:5212:run_one()
  = /usr/lib64/lustre/tests/test-framework.sh:5246:run_one_logged()
  = /usr/lib64/lustre/tests/test-framework.sh:5093:run_test()
  = /usr/lib64/lustre/tests/sanity-scrub.sh:773:main()
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Note: The &quot;VFS: Busy inodes after unmount&quot; message is also present in the MDS2/MDS4 console log.&lt;/p&gt;</description>
                <environment>onyx-48, full, DNE/ldiskfs&lt;br/&gt;
&amp;nbsp;&amp;nbsp;EL7, master branch, v2.9.56.11, b3565</environment>
        <key id="45821">LU-9433</key>
            <summary>sanity-scrub test_6: Error in dmesg detected</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="yong.fan">nasf</assignee>
                                    <reporter username="jcasper">James Casper</reporter>
                        <labels>
                    </labels>
                <created>Tue, 2 May 2017 20:17:03 +0000</created>
                <updated>Sat, 10 Jun 2017 12:43:07 +0000</updated>
                            <resolved>Sat, 10 Jun 2017 12:43:07 +0000</resolved>
                                    <version>Lustre 2.10.0</version>
                                    <fixVersion>Lustre 2.10.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>9</watches>
                                                                            <comments>
                            <comment id="194586" author="yujian" created="Fri, 5 May 2017 00:14:56 +0000"  >&lt;p&gt;More failure instance on master branch:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/4215ab26-3090-11e7-8847-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/4215ab26-3090-11e7-8847-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="194704" author="gerrit" created="Fri, 5 May 2017 16:49:26 +0000"  >&lt;p&gt;Wei Liu (wei3.liu@intel.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/26967&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/26967&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9433&quot; title=&quot;sanity-scrub test_6: Error in dmesg detected&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9433&quot;&gt;&lt;del&gt;LU-9433&lt;/del&gt;&lt;/a&gt; test: rerun sanity-scrub to reproduce it&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 1889b2073c087be7328044cfa309996721717ff8&lt;/p&gt;</comment>
                            <comment id="195292" author="sarah" created="Wed, 10 May 2017 16:54:14 +0000"  >&lt;p&gt;After adding umount/remount between each subtest, cannot reproduce the problem, any suggestion?&lt;/p&gt;

&lt;p&gt;Maloo report&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/74dc4308-1a49-494d-9472-c75d12430ca3&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/74dc4308-1a49-494d-9472-c75d12430ca3&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="195347" author="casperjx" created="Wed, 10 May 2017 20:35:09 +0000"  >&lt;p&gt;FYI: In tag 56, this was only seen in the following config: DNE/ldiskfs.  In tag 57, sanity-scrub passed for DNE/ldiskfs.  &lt;/p&gt;</comment>
                            <comment id="195388" author="sarah" created="Wed, 10 May 2017 23:48:46 +0000"  >&lt;p&gt;Thank you Jim. The debug patch I pushed is using DNE/ldiskfs config. Please let me know if you see the bug in other config or tests. Thanks.&lt;/p&gt;</comment>
                            <comment id="195562" author="jamesanunez" created="Thu, 11 May 2017 20:08:53 +0000"  >&lt;p&gt;We had a sanity-scrub failure last night at &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/999eb6ae-3607-11e7-8847-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/999eb6ae-3607-11e7-8847-5254006e85c2&lt;/a&gt;; sanity-scrub test 9 timed out. When we run the same test suite multiple times in one patch (for example when we specify testlist=sanity-scrub,sanity-scrub), the logs get combined; see ATM-13. Thus, all the logs for all the sanity-scrub runs are combined and located at &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/9a81d60a-3607-11e7-8847-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/9a81d60a-3607-11e7-8847-5254006e85c2&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Looking at the test 9 MDS1/MDS3 console logs, about halfway through the log, we see&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;15:58:47:[15886.272894] Lustre: DEBUG MARKER: onyx-61vm3.onyx.hpdd.intel.com: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 4
15:58:47:[15887.708303] Lustre: DEBUG MARKER: mkdir -p /mnt/lustre-mds3
15:58:47:[15887.978941] Lustre: DEBUG MARKER: test -b /dev/lvm-Role_MDS/P3
16:58:33:********** Timeout by autotest system **********
18:10:53:[ 2558.290504] Lustre: DEBUG MARKER: == sanity-scrub test 9: OI scrub speed control 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Searching the MDS2/MDS4 console log, you will find&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;15:59:31:[15925.512212] Lustre: DEBUG MARKER: /usr/sbin/lctl set_param -n mdd.lustre-MDT0003.lfsck_speed_limit 300
15:59:31:[15936.070002] NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [OI_scrub:28076]
15:59:31:[15936.070002] Modules linked in: osp(OE) mdd(OE) lod(OE) mdt(OE) lfsck(OE) mgc(OE) osd_ldiskfs(OE) lquota(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) ldiskfs(OE) libcfs(OE) dm_mod rpcsec_gss_krb5 nfsv4 dns_resolver nfs fscache rpcrdma ib_isert iscsi_target_mod ib_iser libiscsi scsi_transport_iscsi ib_srpt target_core_mod crc_t10dif crct10dif_generic ib_srp scsi_transport_srp scsi_tgt ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm ib_core iosf_mbi crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd ppdev pcspkr virtio_balloon i2c_piix4 parport_pc parport nfsd auth_rpcgss nfs_acl lockd grace sunrpc ip_tables ext4 mbcache jbd2 ata_generic pata_acpi virtio_blk crct10dif_pclmul crct10dif_common cirrus crc32c_intel drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops 8139too ttm serio_raw virtio_pci virtio_ring virtio ata_piix drm libata i2c_core 8139cp mii floppy
15:59:31:[15936.070002] CPU: 1 PID: 28076 Comm: OI_scrub Tainted: G           OE  ------------   3.10.0-514.16.1.el7_lustre.x86_64 #1
15:59:31:[15936.070002] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2007
15:59:31:[15936.070002] task: ffff880063478fb0 ti: ffff88007a8cc000 task.ti: ffff88007a8cc000
15:59:31:[15936.070002] RIP: 0010:[&amp;lt;ffffffffa0d24a09&amp;gt;]  [&amp;lt;ffffffffa0d24a09&amp;gt;] osd_scrub_exec+0x79/0x5b0 [osd_ldiskfs]
15:59:31:[15936.070002] RSP: 0018:ffff88007a8cfc78  EFLAGS: 00000202
15:59:31:[15936.070002] RAX: 0000000000000000 RBX: ffff88007a8cfc37 RCX: 00000000555939a4
15:59:31:[15936.070002] RDX: ffff88007a8cfd78 RSI: ffff88005f5ce000 RDI: ffff880061399000
15:59:31:[15936.070002] RBP: ffff88007a8cfd08 R08: ffff88007a8cfd57 R09: 0000000000000004
15:59:31:[15936.070002] R10: ffff88007fd19a80 R11: ffffea0001e90800 R12: ffffffffa0d214d1
15:59:31:[15936.070002] R13: ffff88007a8cfc68 R14: ffff88005f5cf110 R15: ffff88007a420a00
15:59:31:[15936.070002] FS:  0000000000000000(0000) GS:ffff88007fd00000(0000) knlGS:0000000000000000
15:59:31:[15936.070002] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
15:59:31:[15936.070002] CR2: 00007f94144fa000 CR3: 00000000019be000 CR4: 00000000000406e0
15:59:31:[15936.070002] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
15:59:31:[15936.070002] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
15:59:31:[15936.070002] Stack:
15:59:31:[15936.070002]  0000000000000004 ffff88007a8cfd57 ffff88007a8cfdf0 0000000000000000
15:59:31:[15936.070002]  ffff88007a8cfd57 ffff88007fd19a80 0000000000000004 ffff88007a8cfd57
15:59:31:[15936.070002]  ffff880068c13800 ffff880068c13800 0000000000005404 ffff8800550dc000
15:59:31:[15936.070002] Call Trace:
15:59:31:[15936.070002]  [&amp;lt;ffffffffa0d261e9&amp;gt;] osd_inode_iteration+0x499/0xcc0 [osd_ldiskfs]
15:59:31:[15936.070002]  [&amp;lt;ffffffffa0d24990&amp;gt;] ? osd_ios_ROOT_scan+0x300/0x300 [osd_ldiskfs]
15:59:31:[15936.070002]  [&amp;lt;ffffffffa0d20a20&amp;gt;] ? osd_preload_next+0xb0/0xb0 [osd_ldiskfs]
15:59:31:[15936.070002]  [&amp;lt;ffffffffa0d27370&amp;gt;] osd_scrub_main+0x960/0xf30 [osd_ldiskfs]
15:59:31:[15936.070002]  [&amp;lt;ffffffff810c54c0&amp;gt;] ? wake_up_state+0x20/0x20
15:59:31:[15936.070002]  [&amp;lt;ffffffffa0d26a10&amp;gt;] ? osd_inode_iteration+0xcc0/0xcc0 [osd_ldiskfs]
15:59:31:[15936.070002]  [&amp;lt;ffffffff810b0a4f&amp;gt;] kthread+0xcf/0xe0
15:59:31:[15936.070002]  [&amp;lt;ffffffff810b0980&amp;gt;] ? kthread_create_on_node+0x140/0x140
15:59:31:[15936.070002]  [&amp;lt;ffffffff81697318&amp;gt;] ret_from_fork+0x58/0x90
15:59:31:[15936.070002]  [&amp;lt;ffffffff810b0980&amp;gt;] ? kthread_create_on_node+0x140/0x140
15:59:31:[15936.070002] Code: 07 74 4f 41 83 f9 02 74 78 48 89 ca 44 89 c9 e8 0e c6 ff ff 85 c0 41 89 c1 0f 84 a3 01 00 00 41 80 a7 74 14 00 00 fe 48 8b 4d d0 &amp;lt;65&amp;gt; 48 33 0c 25 28 00 00 00 0f 85 b1 03 00 00 48 83 c4 68 5b 41 
15:59:31:[15936.070002] Kernel panic - not syncing: softlockup: hung tasks
15:59:31:[15936.070002] CPU: 1 PID: 28076 Comm: OI_scrub Tainted: G           OEL ------------   3.10.0-514.16.1.el7_lustre.x86_64 #1
15:59:31:[15936.070002] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2007
15:59:31:[15936.070002]  ffffffff818da70d 00000000555939a4 ffff88007fd03e18 ffffffff81686d1f
15:59:31:[15936.070002]  ffff88007fd03e98 ffffffff8168014a 0000000000000008 ffff88007fd03ea8
15:59:31:[15936.070002]  ffff88007fd03e48 00000000555939a4 ffff88007fd03e67 0000000000000000
15:59:31:[15936.070002] Call Trace:
15:59:31:[15936.070002]  &amp;lt;IRQ&amp;gt;  [&amp;lt;ffffffff81686d1f&amp;gt;] dump_stack+0x19/0x1b
15:59:31:[15936.070002]  [&amp;lt;ffffffff8168014a&amp;gt;] panic+0xe3/0x1f2
15:59:31:[15936.070002]  [&amp;lt;ffffffff8112f3bc&amp;gt;] watchdog_timer_fn+0x20c/0x220
15:59:31:[15936.070002]  [&amp;lt;ffffffff8112f1b0&amp;gt;] ? watchdog+0x50/0x50
15:59:31:[15936.070002]  [&amp;lt;ffffffff810b4d72&amp;gt;] __hrtimer_run_queues+0xd2/0x260
15:59:31:[15936.070002]  [&amp;lt;ffffffff810b5310&amp;gt;] hrtimer_interrupt+0xb0/0x1e0
15:59:31:[15936.070002]  [&amp;lt;ffffffff81698e5c&amp;gt;] ? call_softirq+0x1c/0x30
15:59:31:[15936.070002]  [&amp;lt;ffffffff81050fd7&amp;gt;] local_apic_timer_interrupt+0x37/0x60
15:59:31:[15936.070002]  [&amp;lt;ffffffff81699acf&amp;gt;] smp_apic_timer_interrupt+0x3f/0x60
15:59:31:[15936.070002]  [&amp;lt;ffffffff8169801d&amp;gt;] apic_timer_interrupt+0x6d/0x80
15:59:31:[15936.070002]  &amp;lt;EOI&amp;gt;  [&amp;lt;ffffffffa0d24a09&amp;gt;] ? osd_scrub_exec+0x79/0x5b0 [osd_ldiskfs]
15:59:31:[15936.070002]  [&amp;lt;ffffffffa0d261e9&amp;gt;] osd_inode_iteration+0x499/0xcc0 [osd_ldiskfs]
15:59:31:[15936.070002]  [&amp;lt;ffffffffa0d24990&amp;gt;] ? osd_ios_ROOT_scan+0x300/0x300 [osd_ldiskfs]
15:59:31:[15936.070002]  [&amp;lt;ffffffffa0d20a20&amp;gt;] ? osd_preload_next+0xb0/0xb0 [osd_ldiskfs]
15:59:31:[15936.070002]  [&amp;lt;ffffffffa0d27370&amp;gt;] osd_scrub_main+0x960/0xf30 [osd_ldiskfs]
15:59:31:[15936.070002]  [&amp;lt;ffffffff810c54c0&amp;gt;] ? wake_up_state+0x20/0x20
15:59:31:[15936.070002]  [&amp;lt;ffffffffa0d26a10&amp;gt;] ? osd_inode_iteration+0xcc0/0xcc0 [osd_ldiskfs]
15:59:31:[15936.070002]  [&amp;lt;ffffffff810b0a4f&amp;gt;] kthread+0xcf/0xe0
15:59:31:[15936.070002]  [&amp;lt;ffffffff810b0980&amp;gt;] ? kthread_create_on_node+0x140/0x140
15:59:31:[15936.070002]  [&amp;lt;ffffffff81697318&amp;gt;] ret_from_fork+0x58/0x90
15:59:31:[15936.070002]  [&amp;lt;ffffffff810b0980&amp;gt;] ? kthread_create_on_node+0x140/0x140
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The stack trace is similar to &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9488&quot; title=&quot;soft lockup in osd_inode_iteration()&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9488&quot;&gt;&lt;del&gt;LU-9488&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="195567" author="pjones" created="Thu, 11 May 2017 20:43:34 +0000"  >&lt;p&gt;Fan Yong&lt;/p&gt;

&lt;p&gt;Can you please advise on this one?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="196453" author="gerrit" created="Fri, 19 May 2017 14:02:55 +0000"  >&lt;p&gt;Fan Yong (fan.yong@intel.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/27212&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/27212&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9433&quot; title=&quot;sanity-scrub test_6: Error in dmesg detected&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9433&quot;&gt;&lt;del&gt;LU-9433&lt;/del&gt;&lt;/a&gt; osd-ldiskfs: fix inode reference leak&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: d3204613fcc23ef97a632425d4ff49ce399b40ee&lt;/p&gt;</comment>
                            <comment id="196454" author="yong.fan" created="Fri, 19 May 2017 14:04:28 +0000"  >&lt;p&gt;The patch  &lt;a href=&quot;https://review.whamcloud.com/27212&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/27212&lt;/a&gt; is for fixing inode reference leak.&lt;br/&gt;
As for the OI scrub soft lookup, it is another failure instance of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9488&quot; title=&quot;soft lockup in osd_inode_iteration()&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9488&quot;&gt;&lt;del&gt;LU-9488&lt;/del&gt;&lt;/a&gt;, I will make another patch to fix it.&lt;/p&gt;</comment>
                            <comment id="196606" author="yong.fan" created="Mon, 22 May 2017 14:29:46 +0000"  >&lt;p&gt;The &lt;a href=&quot;https://review.whamcloud.com/27228&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/27228&lt;/a&gt; is used to fix the OI scrub soft lockup issue.&lt;/p&gt;</comment>
                            <comment id="198807" author="gerrit" created="Sat, 10 Jun 2017 02:49:18 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/27212/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/27212/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9433&quot; title=&quot;sanity-scrub test_6: Error in dmesg detected&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9433&quot;&gt;&lt;del&gt;LU-9433&lt;/del&gt;&lt;/a&gt; osd-ldiskfs: fix inode reference leak&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 1cdb212683824ff24f8366c4e32efb559c46aee3&lt;/p&gt;</comment>
                            <comment id="198825" author="pjones" created="Sat, 10 Jun 2017 12:43:07 +0000"  >&lt;p&gt;Landed for 2.10&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="37594">LU-8279</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="26976">LU-5729</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzzbpb:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>