<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:19:17 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-8637] conf-sanity test_71c failed: class_export_put+0x18/0x310 [obdclass]</title>
                <link>https://jira.whamcloud.com/browse/LU-8637</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== conf-sanity test 71c: start OST0, OST1, MDT1, MDT0 ================================================ 08:11:30 (1473408690)
Loading modules from /usr/lib64/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: &apos;cpu_npartitions=2 cpu_npartitions=2&apos;
debug=-1
subsystem_debug=all -lnet -lnd -pinger
../lnet/lnet/lnet options: &apos;accept=all&apos;
../lnet/klnds/socklnd/ksocklnd options: &apos;sock_timeout=10&apos;
gss/krb5 is not supported
quota/lquota options: &apos;hash_lqs_cur_bits=3&apos;
start ost1 service on fre1318
Starting ost1: -o user_xattr  /dev/vdb /mnt/ost1
Started lustre-OST0000
start ost2 service on fre1318
Starting ost2: -o user_xattr  /dev/vdc /mnt/ost2
Started lustre-OST0001
start mds service on fre1317
Starting mds2: -o rw,user_xattr  /dev/vdd /mnt/mds2
Started lustre-MDT0001
start mds service on fre1317
Starting mds1: -o rw,user_xattr  /dev/vdc /mnt/mds1
Started lustre-MDT0000
mount lustre on /mnt/lustre.....
Starting client: fre1319:  -o user_xattr,flock fre1317@tcp:/lustre /mnt/lustre
umount lustre on /mnt/lustre.....
Stopping client fre1319 /mnt/lustre (opts:)
stop mds service on fre1317
Stopping /mnt/mds1 (opts:-f) on fre1317
stop mds service on fre1317
Stopping /mnt/mds2 (opts:-f) on fre1317
stop ost1 service on fre1318
Stopping /mnt/ost1 (opts:-f) on fre1318
stop ost2 service on fre1318
Stopping /mnt/ost2 (opts:-f) on fre1318
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Stack trace:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[ 3122.379501] Lustre: DEBUG MARKER: == conf-sanity test 71c: start OST0, OST1, MDT1, MDT0 ================================================ 08:11:30 (1473408690)
[ 3122.773761] LDISKFS-fs (vdb): mounted filesystem with ordered data mode. Opts: errors=remount-ro,user_xattr,no_mbcache
[ 3123.560675] LDISKFS-fs (vdc): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 3123.569798] LDISKFS-fs (vdc): mounted filesystem with ordered data mode. Opts: errors=remount-ro,user_xattr,no_mbcache
[ 3124.465480] LustreError: 10753:0:(fid_handler.c:283:__seq_server_alloc_meta()) srv-lustre-OST0001: Can&apos;t allocate super-sequence, rc -115
[ 3128.613157] Lustre: 7753:0:(client.c:2093:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1473408699/real 1473408699]  req@ffff88007ab1ad00 x1544977752527532/t0(0) o38-&amp;gt;lustre-MDT0001-lwp-OST0001@192.168.113.17@tcp:12/10 lens 520/544 e 0 to 1 dl 1473408704 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
[ 3135.465450] LustreError: 10753:0:(fid_handler.c:283:__seq_server_alloc_meta()) srv-lustre-OST0001: Can&apos;t allocate super-sequence, rc -115
[ 3138.614931] LustreError: 11-0: lustre-MDT0001-lwp-OST0001: operation obd_ping to node 192.168.113.17@tcp failed: rc = -107
[ 3138.621801] LustreError: Skipped 1 previous similar message
[ 3138.626747] Lustre: lustre-MDT0001-lwp-OST0001: Connection to lustre-MDT0001 (at 192.168.113.17@tcp) was lost; in progress operations using this service will wait for recovery to complete
[ 3138.634090] Lustre: Skipped 1 previous similar message
[ 3147.467503] LustreError: 10967:0:(ofd_fs.c:506:ofd_register_lwp_callback()) lustre-OST0001: cannot update controller: rc = -5
[ 3147.472893] general protection fault: 0000 [#1] SMP 
[ 3147.473831] Modules linked in: lod(OE) mdt(OE) mdd(OE) mgs(OE) obdecho(OE) osc(OE) osp(OE) ofd(OE) lfsck(OE) ost(OE) mgc(OE) osd_ldiskfs(OE) lquota(OE) ldiskfs(OE) lustre(OE) lmv(OE) mdc(OE) lov(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) sha512_generic crypto_null libcfs(OE) rpcsec_gss_krb5 nfsv4 dns_resolver nfs fscache ppdev parport_pc pcspkr virtio_balloon parport i2c_piix4 nfsd auth_rpcgss nfs_acl lockd grace sunrpc ip_tables ext4 mbcache jbd2 ata_generic pata_acpi virtio_net virtio_blk cirrus syscopyarea sysfillrect sysimgblt drm_kms_helper ttm ata_piix serio_raw drm virtio_pci virtio_ring i2c_core virtio libata floppy
[ 3147.477233] CPU: 1 PID: 10967 Comm: lwp_notify_lust Tainted: G           OE  ------------   3.10.0-327.13.1.x3.0.80.x86_64 #1
[ 3147.477233] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2007
[ 3147.477233] task: ffff88007948b980 ti: ffff880077e40000 task.ti: ffff880077e40000
[ 3147.477233] RIP: 0010:[&amp;lt;ffffffffa0522e18&amp;gt;]  [&amp;lt;ffffffffa0522e18&amp;gt;] class_export_put+0x18/0x310 [obdclass]
[ 3147.477233] RSP: 0018:ffff880077e43e40  EFLAGS: 00010206
[ 3147.477233] RAX: ffff8800551b2f10 RBX: 5a5a5a5a5a5a5a5a RCX: 000000018040002b
[ 3147.477233] RDX: 000000018040002c RSI: ffffea0001e59f40 RDI: 5a5a5a5a5a5a5a5a
[ 3147.477233] RBP: ffff880077e43e50 R08: ffff88007967d000 R09: 000000018040002b
[ 3147.477233] R10: ffffea0001e59f40 R11: ffffffffa0db8685 R12: ffff880077ea800c
[ 3147.477233] R13: ffff8800553b9000 R14: ffff8800551b2f10 R15: 0000000000000000
[ 3147.477233] FS:  0000000000000000(0000) GS:ffff88007fd00000(0000) knlGS:0000000000000000
[ 3147.477233] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 3147.477233] CR2: 0000000000448bb0 CR3: 0000000001946000 CR4: 00000000000006e0
[ 3147.477233] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 3147.477233] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 3147.477233] Stack:
[ 3147.477233]  ffff880054ded880 ffff880077ea800c ffff880077e43e68 ffffffffa057bc6a
[ 3147.477233]  ffff880054ded880 ffff880077e43e98 ffffffffa057c042 ffff8800553b9000
[ 3147.477233]  ffff880059b3a000 ffff880059b3a0b0 0000000000000000 ffff880077e43ec0
[ 3147.477233] Call Trace:
[ 3147.477233]  [&amp;lt;ffffffffa057bc6a&amp;gt;] lustre_put_lwp_item+0x3a/0x2b0 [obdclass]
[ 3147.477233]  [&amp;lt;ffffffffa057c042&amp;gt;] lustre_notify_lwp_list+0xc2/0x100 [obdclass]
[ 3147.477233]  [&amp;lt;ffffffffa0e38c64&amp;gt;] lwp_notify_main+0x54/0xb0 [osp]
[ 3147.477233]  [&amp;lt;ffffffffa0e38c10&amp;gt;] ? lwp_import_event+0xb0/0xb0 [osp]
[ 3147.477233]  [&amp;lt;ffffffff810a5acf&amp;gt;] kthread+0xcf/0xe0
[ 3147.477233]  [&amp;lt;ffffffff810a5a00&amp;gt;] ? kthread_create_on_node+0x140/0x140
[ 3147.477233]  [&amp;lt;ffffffff816442d8&amp;gt;] ret_from_fork+0x58/0x90
[ 3147.477233]  [&amp;lt;ffffffff810a5a00&amp;gt;] ? kthread_create_on_node+0x140/0x140
[ 3147.477233] Code: c7 c7 a0 37 5b a0 e8 58 f1 ec ff 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 55 48 85 ff 48 89 e5 41 54 53 48 89 fb 0f 84 27 02 00 00 &amp;lt;8b&amp;gt; 4f 40 8d 41 ff 3d 58 5a 5a 5a 0f 87 e4 01 00 00 f6 05 6c 26 
[ 3147.477233] RIP  [&amp;lt;ffffffffa0522e18&amp;gt;] class_export_put+0x18/0x310 [obdclass]
[ 3147.477233]  RSP &amp;lt;ffff880077e43e40&amp;gt;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment></environment>
        <key id="40038">LU-8637</key>
            <summary>conf-sanity test_71c failed: class_export_put+0x18/0x310 [obdclass]</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="yong.fan">nasf</assignee>
                                    <reporter username="yong.fan">nasf</reporter>
                        <labels>
                    </labels>
                <created>Sun, 25 Sep 2016 11:28:42 +0000</created>
                <updated>Fri, 13 Oct 2017 15:10:29 +0000</updated>
                            <resolved>Wed, 5 Oct 2016 11:58:13 +0000</resolved>
                                                    <fixVersion>Lustre 2.9.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>6</watches>
                                                                            <comments>
                            <comment id="167160" author="gerrit" created="Sun, 25 Sep 2016 11:45:17 +0000"  >&lt;p&gt;Fan Yong (fan.yong@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/22724&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/22724&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8637&quot; title=&quot;conf-sanity test_71c failed: class_export_put+0x18/0x310 [obdclass]&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8637&quot;&gt;&lt;del&gt;LU-8637&lt;/del&gt;&lt;/a&gt; obdclass: LWP callback hold export reference&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 073c2a5218a5bd90132607ad017823183f3b8a7e&lt;/p&gt;</comment>
                            <comment id="168299" author="gerrit" created="Wed, 5 Oct 2016 03:51:26 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/22724/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/22724/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8637&quot; title=&quot;conf-sanity test_71c failed: class_export_put+0x18/0x310 [obdclass]&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8637&quot;&gt;&lt;del&gt;LU-8637&lt;/del&gt;&lt;/a&gt; obdclass: LWP callback hold export reference&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: acf46c8846d6c3893a52f5caba1eabea67c1bdba&lt;/p&gt;</comment>
                            <comment id="168322" author="pjones" created="Wed, 5 Oct 2016 11:58:14 +0000"  >&lt;p&gt;Landed for 2.9&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                                        </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                                        </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzypj3:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>