<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:39:10 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-10898] conf-sanity test 32a and 32d fail with &#8216;rmmod: ERROR: Module zfs is in use&#8217;</title>
                <link>https://jira.whamcloud.com/browse/LU-10898</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Looking at logs at &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/6f53a458-3c92-11e8-8f8a-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/6f53a458-3c92-11e8-8f8a-52540065bddc&lt;/a&gt;, we see conf-sanity test_32a and test_32d fail with the following in the client test_log after trying to rmmod the ZFS module 19 times&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;trevis-49vm4: trevis-49vm4.trevis.hpdd.intel.com: executing /usr/sbin/lustre_rmmod zfs
trevis-49vm4: rmmod: ERROR: Module zfs is in use
CMD: trevis-49vm4 PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/usr/lib64/lustre/tests//usr/lib64/lustre/tests:/usr/lib64/lustre/tests:/usr/lib64/lustre/tests/../utils:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/utils/gss:/usr/lib64/lustre/utils:/usr/lib64/qt-3.3/bin:/usr/lib64/compat-openmpi16/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/sbin:/sbin:/bin::/sbin:/bin:/usr/sbin: NAME=autotest_config sh rpc.sh check_mem_leak
trevis-49vm4: trevis-49vm4.trevis.hpdd.intel.com: executing check_mem_leak
Unloading modules on trevis-49vm4: Attempt 19
CMD: trevis-49vm4 PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/usr/lib64/lustre/tests//usr/lib64/lustre/tests:/usr/lib64/lustre/tests:/usr/lib64/lustre/tests/../utils:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/utils/gss:/usr/lib64/lustre/utils:/usr/lib64/qt-3.3/bin:/usr/lib64/compat-openmpi16/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/sbin:/sbin:/bin::/sbin:/bin:/usr/sbin: NAME=autotest_config sh rpc.sh /usr/sbin/lustre_rmmod zfs
trevis-49vm4: trevis-49vm4.trevis.hpdd.intel.com: executing /usr/sbin/lustre_rmmod zfs
trevis-49vm4: rmmod: ERROR: Module zfs is in use
CMD: trevis-49vm4 PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/usr/lib64/lustre/tests//usr/lib64/lustre/tests:/usr/lib64/lustre/tests:/usr/lib64/lustre/tests/../utils:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/utils/gss:/usr/lib64/lustre/utils:/usr/lib64/qt-3.3/bin:/usr/lib64/compat-openmpi16/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/sbin:/sbin:/bin::/sbin:/bin:/usr/sbin: NAME=autotest_config sh rpc.sh check_mem_leak
trevis-49vm4: trevis-49vm4.trevis.hpdd.intel.com: executing check_mem_leak
Unloading modules on trevis-49vm4: Given up
&#160;conf-sanity test_32a: @@@@@@ FAIL: Reloading modules
&#160; Trace dump:
&#160; = /usr/lib64/lustre/tests/test-framework.sh:5726:error_noexit()
&#160; = /usr/lib64/lustre/tests/conf-sanity.sh:2292:t32_test()
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Looking at the console log on vm4, the MDS, we see some errors prior to trying to unload the ZFS module&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[ 7085.963959] Lustre: DEBUG MARKER: mount -t lustre -onomgs -omgsnode=10.9.6.58@tcp t32fs-ost1/ost1 /tmp/t32/mnt/ost
[ 7086.669656] Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n obdfilter.t32fs-OST0000.uuid
[ 7087.000523] Lustre: DEBUG MARKER: /usr/sbin/lctl conf_param t32fs-OST0000.osc.max_dirty_mb=15
[ 7087.335563] Lustre: DEBUG MARKER: /usr/sbin/lctl conf_param t32fs-OST0000.failover.node=10.9.6.58@tcp
[ 7087.663836] Lustre: DEBUG MARKER: /usr/sbin/lctl conf_param t32fs-MDT0000.mdc.max_rpcs_in_flight=9
[ 7087.993803] Lustre: DEBUG MARKER: /usr/sbin/lctl conf_param t32fs-MDT0000.failover.node=10.9.6.58@tcp
[ 7088.322614] Lustre: DEBUG MARKER: /usr/sbin/lctl pool_new t32fs.interop
[ 7093.175575] LustreError: 9067:0:(mgc_request.c:1576:mgc_apply_recover_logs()) mgc: cannot find uuid by nid 10.9.6.58@tcp
[ 7093.177860] Lustre: 9067:0:(mgc_request.c:1802:mgc_process_recover_nodemap_log()) MGC10.9.6.58@tcp: error processing recovery log t32fs-mdtir: rc = -2
[ 7093.181698] LustreError: 9067:0:(mgc_request.c:2132:mgc_process_log()) MGC10.9.6.58@tcp: recover log t32fs-mdtir failed, not fatal: rc= -2
[ 7093.187057] Lustre: 10864:0:(obd_mount.c:972:lustre_check_exclusion()) Excluding t32fs-OST0000-osc-MDT0000 (on exclusion list)
[ 7093.191117] LustreError: 10864:0:(obd_config.c:1501:class_process_proc_param()) t32fs-OST0000-osc-MDT0000: unknown config parameter &apos;osc.max_dirty_mb=15&apos;
[ 7094.656698] Lustre: DEBUG MARKER: /usr/sbin/lctl conf_param t32fs-MDT0000.lov.stripesize=4M
[ 7094.993137] Lustre: DEBUG MARKER: /usr/sbin/lctl conf_param t32fs-MDT0000.mdd.atime_diff=70
[ 7095.327475] Lustre: DEBUG MARKER: umount -d /tmp/t32/mnt/mdt
[ 7095.500065] Lustre: Failing over t32fs-MDT0000
[ 7095.809239] Lustre: DEBUG MARKER: umount -d /tmp/t32/mnt/ost
[ 7102.179414] Lustre: DEBUG MARKER: PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/usr/lib64/lustre/tests//usr/lib64/lustre/tests:/usr/lib64/lustre/tests:/usr/lib64/lustre/tests/../utils:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lust
[ 7102.796961] Lustre: DEBUG MARKER: /usr/sbin/lctl mark trevis-49vm4.trevis.hpdd.intel.com: executing \/usr\/sbin\/lustre_rmmod zfs
[ 7102.801643] Lustre: DEBUG MARKER: /usr/sbin/lctl mark trevis-49vm4.trevis.hpdd.intel.com: executing \/usr\/sbin\/lustre_rmmod zfs
[ 7102.998133] Lustre: DEBUG MARKER: trevis-49vm4.trevis.hpdd.intel.com: executing /usr/sbin/lustre_rmmod zfs
[ 7103.010598] Lustre: DEBUG MARKER: trevis-49vm4.trevis.hpdd.intel.com: executing /usr/sbin/lustre_rmmod zfs
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It looks like these tests started failing during testing for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8066&quot; title=&quot;Move lustre procfs handling to sysfs and debugfs.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8066&quot;&gt;LU-8066&lt;/a&gt; on 2018-04-10 01:04:07 UTC.&lt;/p&gt;


&lt;p&gt;Logs for these test failures are at&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/6f53a458-3c92-11e8-8f8a-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/6f53a458-3c92-11e8-8f8a-52540065bddc&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/32358bce-3cb3-11e8-960d-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/32358bce-3cb3-11e8-960d-52540065bddc&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/827118f8-3c8e-11e8-8f8a-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/827118f8-3c8e-11e8-8f8a-52540065bddc&lt;/a&gt;&lt;/p&gt;

</description>
                <environment></environment>
        <key id="51741">LU-10898</key>
            <summary>conf-sanity test 32a and 32d fail with &#8216;rmmod: ERROR: Module zfs is in use&#8217;</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="utopiabound">Nathaniel Clark</assignee>
                                    <reporter username="jamesanunez">James Nunez</reporter>
                        <labels>
                            <label>zfs</label>
                    </labels>
                <created>Tue, 10 Apr 2018 16:02:43 +0000</created>
                <updated>Sat, 15 Dec 2018 18:06:32 +0000</updated>
                            <resolved>Thu, 17 May 2018 03:06:21 +0000</resolved>
                                    <version>Lustre 2.12.0</version>
                    <version>Lustre 2.10.5</version>
                                    <fixVersion>Lustre 2.12.0</fixVersion>
                    <fixVersion>Lustre 2.10.5</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>9</watches>
                                                                            <comments>
                            <comment id="225636" author="simmonsja" created="Tue, 10 Apr 2018 16:16:15 +0000"  >&lt;p&gt;Let me take a look at this. Is it only ZFS? I haven&apos;t ported ZFS to sysfs/debugfs. I wonder if osd-zfs never cleaned up /procfs properly, and the move to sysfs/debugfs has now exposed a long-buried bug; procfs was more forgiving.&lt;/p&gt;</comment>
                            <comment id="225638" author="jamesanunez" created="Tue, 10 Apr 2018 16:17:48 +0000"  >&lt;p&gt;Yes. So far, only ZFS.&lt;/p&gt;</comment>
                            <comment id="225785" author="gerrit" created="Wed, 11 Apr 2018 17:50:27 +0000"  >&lt;p&gt;Andreas Dilger (andreas.dilger@intel.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/31960&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/31960&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10898&quot; title=&quot;conf-sanity test 32a and 32d fail with &#8216;rmmod: ERROR: Module zfs is in use&#8217;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10898&quot;&gt;&lt;del&gt;LU-10898&lt;/del&gt;&lt;/a&gt; tests: disable failing conf-sanity test&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: eec072312c894657cc863b6c0973094c3ac4ac6e&lt;/p&gt;</comment>
                            <comment id="225819" author="gerrit" created="Thu, 12 Apr 2018 00:39:09 +0000"  >&lt;p&gt;Andreas Dilger (andreas.dilger@intel.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/31960/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/31960/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10898&quot; title=&quot;conf-sanity test 32a and 32d fail with &#8216;rmmod: ERROR: Module zfs is in use&#8217;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10898&quot;&gt;&lt;del&gt;LU-10898&lt;/del&gt;&lt;/a&gt; tests: disable failing conf-sanity test&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 55bac2a58bd320ab675c724e925f6b58eafe757d&lt;/p&gt;</comment>
                            <comment id="225938" author="sarah" created="Thu, 12 Apr 2018 21:55:33 +0000"  >&lt;p&gt;It seems the problematic patch that may be causing this problem is from &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9551&quot; title=&quot;I/O errors when lustre uses multipath devices&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9551&quot;&gt;&lt;del&gt;LU-9551&lt;/del&gt;&lt;/a&gt; &lt;a href=&quot;https://review.whamcloud.com/#/c/31464/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/#/c/31464/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the bisect attempt: on the previous patch, &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10773&quot; title=&quot;soft lockup when remove changelog&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10773&quot;&gt;&lt;del&gt;LU-10773&lt;/del&gt;&lt;/a&gt;, conf-sanity passed:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://review.whamcloud.com/#/c/31974/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/#/c/31974/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, trying the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9551&quot; title=&quot;I/O errors when lustre uses multipath devices&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9551&quot;&gt;&lt;del&gt;LU-9551&lt;/del&gt;&lt;/a&gt; patch, it started failing:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://review.whamcloud.com/#/c/31968/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/#/c/31968/&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="225995" author="utopiabound" created="Fri, 13 Apr 2018 16:26:25 +0000"  >&lt;p&gt;I&apos;m confused why this worked at all (even before my patch), since it&apos;s trying to rmmod zfs w/o exporting the pools.&lt;/p&gt;</comment>
                            <comment id="225996" author="gerrit" created="Fri, 13 Apr 2018 16:30:30 +0000"  >&lt;p&gt;Nathaniel Clark (nathaniel.l.clark@intel.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/31991&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/31991&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10898&quot; title=&quot;conf-sanity test 32a and 32d fail with &#8216;rmmod: ERROR: Module zfs is in use&#8217;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10898&quot;&gt;&lt;del&gt;LU-10898&lt;/del&gt;&lt;/a&gt; tests: enable conf-sanity 32a/32d&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 14761b794bb3f3f6ff2485c0ffa25a22f4f288ff&lt;/p&gt;</comment>
                            <comment id="227011" author="utopiabound" created="Tue, 1 May 2018 21:22:08 +0000"  >&lt;p&gt;ZED is holding the zfs module open, which is why it&apos;s not rmmod&apos;ing correctly.&lt;/p&gt;</comment>
                            <comment id="227018" author="jamesanunez" created="Tue, 1 May 2018 22:02:46 +0000"  >&lt;p&gt;Assigning this ticket to Nathaniel since he has a patch for this issue. Thanks, Nathaniel.&lt;/p&gt;</comment>
                            <comment id="228051" author="gerrit" created="Thu, 17 May 2018 02:30:36 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/31991/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/31991/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10898&quot; title=&quot;conf-sanity test 32a and 32d fail with &#8216;rmmod: ERROR: Module zfs is in use&#8217;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10898&quot;&gt;&lt;del&gt;LU-10898&lt;/del&gt;&lt;/a&gt; tests: enable conf-sanity 32a/32d&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: e8e6e3210e5eae78eefcc3f05e078a60c04dc80e&lt;/p&gt;</comment>
                            <comment id="228065" author="pjones" created="Thu, 17 May 2018 03:06:21 +0000"  >&lt;p&gt;Landed for 2.12&lt;/p&gt;</comment>
                            <comment id="228444" author="gerrit" created="Wed, 23 May 2018 16:08:17 +0000"  >&lt;p&gt;Minh Diep (minh.diep@intel.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/32520&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/32520&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10898&quot; title=&quot;conf-sanity test 32a and 32d fail with &#8216;rmmod: ERROR: Module zfs is in use&#8217;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10898&quot;&gt;&lt;del&gt;LU-10898&lt;/del&gt;&lt;/a&gt; tests: enable conf-sanity 32a/32d&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_10&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 57e82001d462efe3001570a4f11210c6e44509a3&lt;/p&gt;</comment>
                            <comment id="231446" author="yujian" created="Sat, 4 Aug 2018 18:23:39 +0000"  >&lt;p&gt;The same failure occurred on the Lustre b2_10 branch:&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/2defd962-97b6-11e8-87f3-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/2defd962-97b6-11e8-87f3-52540065bddc&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above patch needs to land.&lt;/p&gt;</comment>
                            <comment id="231605" author="gerrit" created="Tue, 7 Aug 2018 20:07:15 +0000"  >&lt;p&gt;Andreas Dilger (adilger@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/32520/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/32520/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10898&quot; title=&quot;conf-sanity test 32a and 32d fail with &#8216;rmmod: ERROR: Module zfs is in use&#8217;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10898&quot;&gt;&lt;del&gt;LU-10898&lt;/del&gt;&lt;/a&gt; tests: enable conf-sanity 32a/32d&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_10&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 574e63fc86553510d87d02cd6d72785f341e48dc&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="46284">LU-9551</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="51914">LU-10933</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzzvl3:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>