<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:30:39 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-9940] posix no sub tests failed: </title>
                <link>https://jira.whamcloud.com/browse/LU-9940</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;&lt;a href=&quot;https://testing.whamcloud.com/test_sessions/0cca9fbf-af0d-4ad7-ad37-7797a1864e19&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sessions/0cca9fbf-af0d-4ad7-ad37-7797a1864e19&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From suite_log:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Setup mgs, mdt, osts
CMD: trevis-10vm4 mkdir -p /mnt/lustre-mds1
CMD: trevis-10vm4 test -b /dev/lvm-Role_MDS/P1
CMD: trevis-10vm4 e2label /dev/lvm-Role_MDS/P1
Starting mds1:   /dev/lvm-Role_MDS/P1 /mnt/lustre-mds1
CMD: trevis-10vm4 mkdir -p /mnt/lustre-mds1; mount -t lustre   		                   /dev/lvm-Role_MDS/P1 /mnt/lustre-mds1
trevis-10vm4: mount.lustre: according to /etc/mtab /dev/mapper/lvm--Role_MDS-P1 is already mounted on /mnt/lustre-mds1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Note: The &quot;already mounted&quot; message was also present in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9487&quot; title=&quot;mmp test_2: test_2 failed with 22&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9487&quot;&gt;&lt;del&gt;LU-9487&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</description>
                <environment>Trevis, full&lt;br/&gt;
server: RHEL 7.3, ldiskfs, branch b2_10, v2.10.0.38, b12&lt;br/&gt;
client: RHEL 7.4, branch master, v2.10.52, b3631&lt;br/&gt;
</environment>
        <key id="48102">LU-9940</key>
            <summary>posix no sub tests failed: </summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="6" iconUrl="https://jira.whamcloud.com/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="2">Won&apos;t Fix</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="jcasper">James Casper</reporter>
                        <labels>
                    </labels>
                <created>Fri, 1 Sep 2017 19:48:09 +0000</created>
                <updated>Thu, 19 Nov 2020 18:31:17 +0000</updated>
                            <resolved>Mon, 18 May 2020 17:11:24 +0000</resolved>
                                    <version>Lustre 2.11.0</version>
                    <version>Lustre 2.10.2</version>
                    <version>Lustre 2.12.0</version>
                    <version>Lustre 2.10.3</version>
                    <version>Lustre 2.10.5</version>
                    <version>Lustre 2.13.0</version>
                    <version>Lustre 2.10.7</version>
                    <version>Lustre 2.12.1</version>
                    <version>Lustre 2.12.4</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>2</watches>
                                                                            <comments>
                            <comment id="207283" author="jamesanunez" created="Fri, 1 Sep 2017 20:03:56 +0000"  >&lt;p&gt;If you look at the posix.test_complete.stack_trace.trevis-10vm1.log for this test suite, you&apos;ll see multiple &quot;errors&quot;&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;23:16:40:[10995.167125] nfs: server trevis-10vm4 not responding, timed out
23:16:40:[10995.168097] nfs: server trevis-10vm4 not responding, timed out
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;So, something is wrong with the NFS server upon entering this test suite. &lt;/p&gt;

&lt;p&gt;If you look at the preceding test suite, parallel-scale-nfsv4, in parallel-scale-nfsv4.suite_stdout.trevis-10vm1.log, we see an issue unmounting the Lustre NFS mount&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;22:15:35:PASS racer_on_nfs (307s)
22:15:35:== parallel-scale-nfsv4 test complete, duration 2509 sec ============================================= 22:15:26 (1503526526)
22:15:35:
22:15:35:Unmounting NFS clients...
22:15:35:CMD: trevis-10vm1.trevis.hpdd.intel.com,trevis-10vm2 umount -f /mnt/lustre
22:15:35:trevis-10vm1: umount.nfs4: /mnt/lustre: device is busy
22:15:35:
22:15:35:Unexporting Lustre filesystem...
22:15:35:CMD: trevis-10vm1.trevis.hpdd.intel.com,trevis-10vm2 chkconfig --list rpcidmapd 2&amp;gt;/dev/null |
22:15:35:			       grep -q rpcidmapd &amp;amp;&amp;amp; service rpcidmapd stop ||
22:15:35:			       true
22:15:35:CMD: trevis-10vm4 chkconfig --list nfsserver &amp;gt; /dev/null 2&amp;gt;&amp;amp;1 &amp;amp;&amp;amp;
22:15:35:				 service nfsserver stop || service nfs stop
22:15:35:trevis-10vm4: Redirecting to /bin/systemctl stop nfs.service
22:15:35:CMD: trevis-10vm4 exportfs -u *:/mnt/lustre
22:15:35:trevis-10vm4: exportfs: Could not find &apos;*:/mnt/lustre&apos; to unexport.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
</comment>
                            <comment id="246356" author="jamesanunez" created="Thu, 25 Apr 2019 15:43:46 +0000"  >&lt;p&gt;We&#8217;re seeing a similar issue for ARM architectures with Lustre 2.12.1 RC1; &lt;a href=&quot;https://testing.whamcloud.com/test_sets/4fdace66-66c7-11e9-bd0e-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/4fdace66-66c7-11e9-bd0e-52540065bddc&lt;/a&gt; .&lt;/p&gt;

&lt;p&gt;In the console log for client 2 (vm26), we see&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;======================================= 18:16:30 \(1556129790\)
[79692.698703] Lustre: DEBUG MARKER: == parallel-scale-nfsv4 test racer_on_nfs: racer on NFS client ======================================= 18:16:30 (1556129790)
[79693.576177] Lustre: DEBUG MARKER: MDSCOUNT=4 OSTCOUNT=8 LFS=/usr/bin/lfs /usr/lib64/lustre/tests/racer/racer.sh /mnt/lustre/d0.parallel-scale-nfs
[79796.948911] 16[32169]: unhandled level 3 translation fault (11) at 0x00000008, esr 0x92000007, in ld-2.17.so[ffffa3810000+20000]
[79796.964022] CPU: 1 PID: 32169 Comm: 16 Kdump: loaded Tainted: G           OE  ------------   4.14.0-115.2.2.el7a.aarch64 #1
[79796.970552] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
[79796.974656] task: ffff8000389dcc00 task.stack: ffff000010320000
[79796.978150] PC is at 0xffffa381ab3c
[79796.980218] LR is at 0xffffa3813ce4
[79796.982295] pc : [&amp;lt;0000ffffa381ab3c&amp;gt;] lr : [&amp;lt;0000ffffa3813ce4&amp;gt;] pstate: 60000000
[79796.986812] sp : 0000ffffd4d2c3c0
[79796.988760] x29: 0000ffffd4d2c3c0 x28: 0000ffffa3840ff8 
[79796.991864] x27: 0000000000000000 x26: 0000000000000000 
[79796.995014] x25: 0000ffffa3840000 x24: 0000ffffa3840a20 
[79796.998168] x23: 0000000000000000 x22: 0000ffffa3841168 
[79797.001282] x21: 0000ffffa383f000 x20: 0000000000000001 
[79797.004501] x19: 0000000000000001 x18: 0000000000000000 
[79797.007655] x17: 0000ffffa382587c x16: 0000ffffa383ff80 
[79797.010772] x15: 0000ffffa38252e4 x14: 0000ffffa3840000 
[79797.013994] x13: 0000000000010000 x12: 0000000400000006 
[79797.017088] x11: 756e694c00000000 x10: 00000078756e694c 
[79797.020238] x9 : 0000000000000000 x8 : 0000ffffa3840000 
[79797.023462] x7 : 000000000000001c x6 : 0000ffffa383fc70 
[79797.026778] x5 : 0000ffffa3842260 x4 : 0000ffffa3840000 
[79797.029896] x3 : 0000000000000000 x2 : 0000ffffa383f000 
[79797.033018] x1 : 0000000000000000 x0 : 0000000000000000 
[79994.698005] NFS: server trevis-54vm11 error: fileid changed
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;for racer_on_nfs in both parallel-scale-nfsv3 and parallel-scale-nfsv4. We see a similar message in the client 1 console log.&lt;/p&gt;</comment>
                            <comment id="270491" author="jamesanunez" created="Mon, 18 May 2020 17:11:24 +0000"  >&lt;p&gt;We will not fix this issue because we&#8217;ve replaced the POSIX test suite with pjdfstest.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="61681">LU-14137</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzzjgn:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>