<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:16:37 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-8332] sanity-hsm soft lock up in journal write</title>
                <link>https://jira.whamcloud.com/browse/LU-8332</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;sanity-hsm hangs before any tests are run. From the suite_stdout, we see a problem during the ost4 setup:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;06:11:37:Starting ost4:   /dev/lvm-Role_OSS/P4 /mnt/lustre-ost4
06:11:37:CMD: onyx-34vm8 mkdir -p /mnt/lustre-ost4; mount -t lustre   		                   /dev/lvm-Role_OSS/P4 /mnt/lustre-ost4
06:11:37:CMD: onyx-34vm8 /usr/sbin/lctl get_param -n health_check
06:11:37:CMD: onyx-34vm8 PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/usr/lib64/lustre/tests//usr/lib64/lustre/tests:/usr/lib64/lustre/tests:/usr/lib64/lustre/tests//usr/lib64/lustre/tests/utils:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/utils/gss:/usr/lib64/lustre/utils:/usr/lib64/qt-3.3/bin:/usr/lib64/compat-openmpi16/bin:/usr/bin:/bin:/usr/sbin:/sbin::/sbin:/usr/sbin:/sbin:/bin:/usr/sbin: NAME=autotest_config sh rpc.sh set_default_debug \&quot;vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck\&quot; \&quot;all -lnet -lnd -pinger\&quot; 4 
06:11:37:CMD: onyx-34vm8 e2label /dev/lvm-Role_OSS/P4 				2&amp;gt;/dev/null | grep -E &apos;:[a-zA-Z]{3}[0-9]{4}&apos;
06:11:37:CMD: onyx-34vm8 e2label /dev/lvm-Role_OSS/P4 				2&amp;gt;/dev/null | grep -E &apos;:[a-zA-Z]{3}[0-9]{4}&apos;
06:11:37:CMD: onyx-34vm8 sync; sync; sync
07:11:20:********** Timeout by autotest system **********
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;On OST4 (vm8), we see the following soft lockup:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;06:12:07:[16441.962399] Lustre: DEBUG MARKER: e2label /dev/lvm-Role_OSS/P4 				2&amp;gt;/dev/null | grep -E &apos;:[a-zA-Z]{3}[0-9]{4}&apos;
06:12:07:[16442.192158] Lustre: DEBUG MARKER: sync; sync; sync
06:12:07:[16456.204737] Lustre: lustre-OST0003: Export ffff88005981e800 already connecting from 10.2.4.131@tcp
06:12:07:[16461.204570] Lustre: lustre-OST0003: Export ffff88005981e800 already connecting from 10.2.4.131@tcp
06:12:07:[16461.207146] Lustre: Skipped 1 previous similar message
06:12:07:[16466.204701] Lustre: lustre-OST0003: Export ffff88005981e800 already connecting from 10.2.4.131@tcp
06:12:07:[16466.207151] Lustre: Skipped 1 previous similar message
06:12:07:[16468.053003] BUG: soft lockup - CPU#0 stuck for 23s! [ll_ost00_003:29743]
06:12:07:[16468.053003] Modules linked in: osp(OE) ofd(OE) lfsck(OE) ost(OE) mgc(OE) osd_ldiskfs(OE) lquota(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) sha512_generic crypto_null libcfs(OE) ldiskfs(OE) dm_mod rpcsec_gss_krb5 nfsv4 dns_resolver nfs fscache xprtrdma ib_isert iscsi_target_mod ib_iser libiscsi scsi_transport_iscsi ib_srpt target_core_mod crc_t10dif crct10dif_generic crct10dif_common ib_srp scsi_transport_srp scsi_tgt ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr ppdev parport_pc pcspkr virtio_balloon i2c_piix4 parport nfsd nfs_acl lockd grace auth_rpcgss sunrpc ip_tables ext4 mbcache jbd2 ata_generic pata_acpi cirrus syscopyarea sysfillrect sysimgblt drm_kms_helper ttm virtio_blk 8139too drm serio_raw ata_piix virtio_pci virtio_ring virtio i2c_core 8139cp mii floppy libata
06:12:07:[16468.053003] CPU: 0 PID: 29743 Comm: ll_ost00_003 Tainted: G           OE  ------------   3.10.0-327.18.2.el7_lustre.x86_64 #1
06:12:07:[16468.053003] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2007
06:12:07:[16468.053003] task: ffff88004b1a5c00 ti: ffff88004cd40000 task.ti: ffff88004cd40000
06:12:07:[16468.053003] RIP: 0010:[&amp;lt;ffffffff8163d5f2&amp;gt;]  [&amp;lt;ffffffff8163d5f2&amp;gt;] _raw_spin_lock+0x32/0x50
06:12:07:[16468.053003] RSP: 0018:ffff88004cd435d0  EFLAGS: 00000287
06:12:07:[16468.053003] RAX: 00000000000003e0 RBX: ffff88007b87e680 RCX: 0000000000000b42
06:12:07:[16468.053003] RDX: 0000000000000b38 RSI: 0000000000000b38 RDI: ffff880077768ba0
06:12:07:[16468.053003] RBP: ffff88004cd435d0 R08: c010000000000000 R09: 0035f32b60080000
06:12:07:[16468.053003] R10: ffac0ce21cc2d802 R11: 0000000000004000 R12: ffff880035f32958
06:12:07:[16468.053003] R13: ffff880035f32b60 R14: ffffffff8121255b R15: ffff88004cd435e0
06:12:07:[16468.053003] FS:  0000000000000000(0000) GS:ffff88007fc00000(0000) knlGS:0000000000000000
06:12:07:[16468.053003] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
06:12:07:[16468.053003] CR2: 00007f7f78dc54a9 CR3: 000000000194a000 CR4: 00000000000006f0
06:12:07:[16468.053003] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
06:12:07:[16468.053003] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
06:12:07:[16468.053003] Stack:
06:12:07:[16468.053003]  ffff88004cd43658 ffffffffa0155bfc ffff88004cd43600 ffffffff81212e6c
06:12:07:[16468.053003]  00000000811c11da ffff880077768800 0000000000000000 0000000000000025
06:12:07:[16468.053003]  0000000000000004 0000000000000c35 ffff88007b8ee4b0 000000007608c8d8
06:12:07:[16468.053003] Call Trace:
06:12:07:[16468.053003]  [&amp;lt;ffffffffa0155bfc&amp;gt;] do_get_write_access+0x32c/0x4e0 [jbd2]
06:12:07:[16468.053003]  [&amp;lt;ffffffff81212e6c&amp;gt;] ? __find_get_block+0xbc/0x120
06:12:07:[16468.053003]  [&amp;lt;ffffffffa0155dd7&amp;gt;] jbd2_journal_get_write_access+0x27/0x40 [jbd2]
06:12:07:[16468.053003]  [&amp;lt;ffffffffa05e6c0b&amp;gt;] __ldiskfs_journal_get_write_access+0x3b/0x80 [ldiskfs]
06:12:07:[16468.053003]  [&amp;lt;ffffffffa0620ec7&amp;gt;] __ldiskfs_new_inode+0x447/0x1300 [ldiskfs]
06:12:07:[16468.053003]  [&amp;lt;ffffffffa05f7038&amp;gt;] ldiskfs_mkdir+0x148/0x280 [ldiskfs]
06:12:07:[16468.053003]  [&amp;lt;ffffffff811ea557&amp;gt;] vfs_mkdir+0xb7/0x160
06:12:07:[16468.053003]  [&amp;lt;ffffffffa0c3a5c9&amp;gt;] simple_mkdir.isra.17.constprop.26+0x429/0x4c0 [osd_ldiskfs]
06:12:07:[16468.053003]  [&amp;lt;ffffffffa0c4ebfe&amp;gt;] osd_seq_load_locked.isra.19+0x19a/0x6a0 [osd_ldiskfs]
06:12:07:[16468.053003]  [&amp;lt;ffffffff811c13be&amp;gt;] ? kmem_cache_alloc_trace+0x1ce/0x1f0
06:12:07:[16468.053003]  [&amp;lt;ffffffffa0c3a9c1&amp;gt;] osd_seq_load+0x361/0x520 [osd_ldiskfs]
06:12:07:[16468.053003]  [&amp;lt;ffffffffa0c3eb96&amp;gt;] osd_obj_spec_lookup+0x66/0x300 [osd_ldiskfs]
06:12:07:[16468.053003]  [&amp;lt;ffffffffa0c2bf07&amp;gt;] osd_oi_lookup+0x47/0x150 [osd_ldiskfs]
06:12:07:[16468.053003]  [&amp;lt;ffffffffa0c2886b&amp;gt;] osd_fid_lookup+0x92b/0x1780 [osd_ldiskfs]
06:12:07:[16468.053003]  [&amp;lt;ffffffff8163d59b&amp;gt;] ? _raw_spin_unlock_bh+0x1b/0x40
06:12:07:[16468.053003]  [&amp;lt;ffffffffa096f0c2&amp;gt;] ? ksocknal_queue_tx_locked+0x132/0x4d0 [ksocklnd]
06:12:07:[16468.053003]  [&amp;lt;ffffffff812fc6a2&amp;gt;] ? put_dec+0x72/0x90
06:12:07:[16468.053003]  [&amp;lt;ffffffff811c13be&amp;gt;] ? kmem_cache_alloc_trace+0x1ce/0x1f0
06:12:07:[16468.053003]  [&amp;lt;ffffffffa0c29715&amp;gt;] osd_object_init+0x55/0xf0 [osd_ldiskfs]
06:12:07:[16468.053003]  [&amp;lt;ffffffffa07e42df&amp;gt;] lu_object_alloc+0xdf/0x310 [obdclass]
06:12:07:[16468.053003]  [&amp;lt;ffffffffa07e46dc&amp;gt;] lu_object_find_try+0x16c/0x2b0 [obdclass]
06:12:07:[16468.053003]  [&amp;lt;ffffffffa07e48cc&amp;gt;] lu_object_find_at+0xac/0xe0 [obdclass]
06:12:07:[16468.053003]  [&amp;lt;ffffffffa0a41b06&amp;gt;] ? null_alloc_rs+0x176/0x330 [ptlrpc]
06:12:07:[16468.053003]  [&amp;lt;ffffffffa07e5c58&amp;gt;] dt_locate_at+0x18/0xb0 [obdclass]
06:12:07:[16468.053003]  [&amp;lt;ffffffffa07e7c65&amp;gt;] dt_find_or_create+0x55/0x8d0 [obdclass]
06:12:07:[16468.053003]  [&amp;lt;ffffffff811c13be&amp;gt;] ? kmem_cache_alloc_trace+0x1ce/0x1f0
06:12:07:[16468.053003]  [&amp;lt;ffffffffa0d69ebc&amp;gt;] ofd_seq_load+0x2ac/0x9c0 [ofd]
06:12:07:[16468.053003]  [&amp;lt;ffffffffa0d61afa&amp;gt;] ofd_get_info_hdl+0x76a/0x14e0 [ofd]
06:12:07:[16468.053003]  [&amp;lt;ffffffffa0a66f25&amp;gt;] tgt_request_handle+0x915/0x1320 [ptlrpc]
06:12:07:[16468.053003]  [&amp;lt;ffffffffa0a134bb&amp;gt;] ptlrpc_server_handle_request+0x21b/0xa90 [ptlrpc]
06:12:07:[16468.053003]  [&amp;lt;ffffffffa0a11078&amp;gt;] ? ptlrpc_wait_event+0x98/0x340 [ptlrpc]
06:12:07:[16468.053003]  [&amp;lt;ffffffff810b88b2&amp;gt;] ? default_wake_function+0x12/0x20
06:12:07:[16468.053003]  [&amp;lt;ffffffff810af018&amp;gt;] ? __wake_up_common+0x58/0x90
06:12:07:[16468.053003]  [&amp;lt;ffffffffa0a17570&amp;gt;] ptlrpc_main+0xaa0/0x1dd0 [ptlrpc]
06:12:07:[16468.053003]  [&amp;lt;ffffffffa0a16ad0&amp;gt;] ? ptlrpc_register_service+0xe40/0xe40 [ptlrpc]
06:12:07:[16468.053003]  [&amp;lt;ffffffff810a5acf&amp;gt;] kthread+0xcf/0xe0
06:12:07:[16468.053003]  [&amp;lt;ffffffff810b47e6&amp;gt;] ? finish_task_switch+0x56/0x170
06:12:07:[16468.053003]  [&amp;lt;ffffffff810a5a00&amp;gt;] ? kthread_create_on_node+0x140/0x140
06:12:07:[16468.053003]  [&amp;lt;ffffffff81646318&amp;gt;] ret_from_fork+0x58/0x90
06:12:07:[16468.053003]  [&amp;lt;ffffffff810a5a00&amp;gt;] ? kthread_create_on_node+0x140/0x140
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Logs for this failure are at:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/e6bdb3d2-3bc0-11e6-a0ce-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/e6bdb3d2-3bc0-11e6-a0ce-5254006e85c2&lt;/a&gt;&lt;/p&gt;
</description>
                <environment>autotest review-dne</environment>
        <key id="37838">LU-8332</key>
            <summary>sanity-hsm soft lock up in journal write</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="5">Cannot Reproduce</resolution>
                                        <assignee username="bzzz">Alex Zhuravlev</assignee>
                                    <reporter username="jamesanunez">James Nunez</reporter>
                        <labels>
                    </labels>
                <created>Mon, 27 Jun 2016 17:49:12 +0000</created>
                <updated>Wed, 5 Aug 2020 21:00:50 +0000</updated>
                            <resolved>Wed, 5 Aug 2020 21:00:50 +0000</resolved>
                                    <version>Lustre 2.9.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>7</watches>
                                                                            <comments>
                            <comment id="157842" author="cheneva1" created="Wed, 6 Jul 2016 16:48:16 +0000"  >&lt;p&gt;Assigning this to Alex.&lt;/p&gt;

&lt;p&gt;James, do you have crash dumps to share? Thanks!&lt;/p&gt;</comment>
                            <comment id="157844" author="jamesanunez" created="Wed, 6 Jul 2016 16:53:31 +0000"  >&lt;p&gt;This test failed in our autotest clusters. All logs are at the link at the bottom of the ticket description: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/e6bdb3d2-3bc0-11e6-a0ce-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/e6bdb3d2-3bc0-11e6-a0ce-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="276755" author="adilger" created="Wed, 5 Aug 2020 21:00:50 +0000"  >&lt;p&gt;Closing old issue that has not been seen in a long time.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                        <issuelink>
            <issuekey id="39111">LU-8542</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="37823">LU-8327</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzyfvr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>