<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:10:48 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-7656] replay-single_70c test failed tar: Exiting with failure status due to previous errors</title>
                <link>https://jira.whamcloud.com/browse/LU-7656</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== replay-single test 70c: tar 1mdts recovery == 02:32:52 (1441506772)
Starting client fre1211,fre1212:  -o user_xattr,flock fre1209@tcp:/lustre /mnt/lustre
Started clients fre1211,fre1212: 
fre1209@tcp:/lustre on /mnt/lustre type lustre (rw,user_xattr,flock)
fre1209@tcp:/lustre on /mnt/lustre type lustre (rw,user_xattr,flock)
Started tar 8730
tar: Removing leading `/&apos; from member names
tar: Removing leading `/&apos; from member names
tar: Removing leading `/&apos; from member names
tar: Removing leading `/&apos; from member names
tar: Removing leading `/&apos; from member names
tar: Removing leading `/&apos; from member names
Filesystem          1K-blocks  Used Available Use% Mounted on
fre1209@tcp:/lustre   1377952 68056   1233908   6% /mnt/lustre
tar: Removing leading `/&apos; from member names
test_70c fail mds1 1 times
Failing mds1 on fre1209
Stopping /mnt/mds1 (opts:) on fre1209
pdsh@fre1211: fre1209: ssh exited with exit code 1
reboot facets: mds1
Failover mds1 to fre1209
02:35:20 (1441506920) waiting for fre1209 network 900 secs ...
02:35:20 (1441506920) network interface is UP
mount facets: mds1
Starting mds1: -o rw,user_xattr  /dev/vdb /mnt/mds1
fre1209: mount.lustre: set /sys/block/vdb/queue/max_sectors_kb to 2147483647
fre1209: 
Started lustre-MDT0000
fre1212: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 11 sec
fre1211: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 11 sec
tar: Removing leading `/&apos; from member names
tar: Removing leading `/&apos; from member names
tar: Removing leading `/&apos; from member names
tar: Removing leading `/&apos; from member names
tar: Removing leading `/&apos; from member names
Filesystem          1K-blocks  Used Available Use% Mounted on
fre1209@tcp:/lustre   1377952 68056   1237060   6% /mnt/lustre
test_70c fail mds1 2 times
Failing mds1 on fre1209
Stopping /mnt/mds1 (opts:) on fre1209
pdsh@fre1211: fre1209: ssh exited with exit code 1
reboot facets: mds1
Failover mds1 to fre1209
02:38:01 (1441507081) waiting for fre1209 network 900 secs ...
02:38:01 (1441507081) network interface is UP
mount facets: mds1
Starting mds1: -o rw,user_xattr  /dev/vdb /mnt/mds1
fre1209: mount.lustre: set /sys/block/vdb/queue/max_sectors_kb to 2147483647
fre1209: 
Started lustre-MDT0000
fre1212: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 9 sec
fre1211: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 9 sec
Resetting fail_loc on all nodes.../usr/lib64/lustre/tests/test-framework.sh: line 2976:  8730 Killed                  ( while true; do
    test_mkdir -p -c$MDSCOUNT $DIR/$tdir || break; if [ $MDSCOUNT -ge 2 ]; then
        $LFS setdirstripe -D -c$MDSCOUNT $DIR/$tdir || error &quot;set default dirstripe failed&quot;;
    fi; cd $DIR/$tdir || break; tar cf - /etc | tar xf - || error &quot;tar failed&quot;; cd $DIR || break; rm -rf $DIR/$tdir || break;
done )
done.
tar: etc/ssl: Cannot stat: No such file or directory
tar: etc/sysconfig/network-scripts: Cannot stat: No such file or directory
tar: etc/sysconfig: Cannot stat: No such file or directory
tar: etc/pam.d: Cannot stat: No such file or directory
tar: etc/rc.d/rc0.d: Cannot stat: No such file or directory
tar: etc/rc.d/rc5.d: Cannot stat: No such file or directory
tar: etc/rc.d/rc2.d: Cannot stat: No such file or directory
tar: etc/rc.d/rc4.d: Cannot stat: No such file or directory
tar: etc/rc.d/rc6.d: Cannot stat: No such file or directory
tar: etc/rc.d/rc3.d: Cannot stat: No such file or directory
tar: etc/rc.d/rc1.d: Cannot stat: No such file or directory
tar: etc/rc.d: Cannot stat: No such file or directory
tar: etc/profile.d: Cannot stat: No such file or directory
tar: etc/alternatives: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors

&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment>Configuration: 4 nodes (1 MDS, 1 OSS, 2 Clients)&lt;br/&gt;
Release&lt;br/&gt;
191_2.6.32_431.17.1.x2.0.62.x86_64_gb0424d1 Build Date: Thu 03 Sep 2015 12:25:48 AM UTC&lt;br/&gt;
2.6.32_431.29.2.el6.x86_64_g01ca899 Build Date: Sat 05 Sep 2015 05:39:37 PM UTC&lt;br/&gt;
Server 2.5.1.x6&lt;br/&gt;
Client 2.7.59</environment>
        <key id="34069">LU-7656</key>
            <summary>replay-single_70c test failed tar: Exiting with failure status due to previous errors</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="jamesanunez">James Nunez</assignee>
                                    <reporter username="noopur.maheshwari">Noopur Maheshwari</reporter>
                        <labels>
                            <label>patch</label>
                    </labels>
                <created>Tue, 12 Jan 2016 20:03:23 +0000</created>
                <updated>Fri, 13 May 2016 18:25:31 +0000</updated>
                            <resolved>Fri, 13 May 2016 18:25:31 +0000</resolved>
                                    <version>Lustre 2.8.0</version>
                                    <fixVersion>Lustre 2.9.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                                                                            <comments>
                            <comment id="138729" author="gerrit" created="Tue, 12 Jan 2016 20:19:26 +0000"  >&lt;p&gt;Noopur Maheshwari (noopur.maheshwari@seagate.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/17959&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/17959&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7656&quot; title=&quot;replay-single_70c test failed tar: Exiting with failure status due to previous errors&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7656&quot;&gt;&lt;del&gt;LU-7656&lt;/del&gt;&lt;/a&gt; tests: tar a temporary folder&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 76c60777cc75f9d1d6c870ce986d793517e21969&lt;/p&gt;</comment>
                            <comment id="138952" author="jgmitter" created="Thu, 14 Jan 2016 18:51:13 +0000"  >&lt;p&gt;James,&lt;br/&gt;
Can you have a look at the patch?&lt;br/&gt;
Thanks.&lt;br/&gt;
Joe&lt;/p&gt;</comment>
                            <comment id="138953" author="adilger" created="Thu, 14 Jan 2016 18:57:26 +0000"  >&lt;p&gt;Have you verified that this is related to trying to archive dangling symlinks from the source /etc folder, or what is the source of the error? Have you tried using &quot;tar -cf --ignore-failed-read&quot; to avoid an error on tar during read?  It may also be that these errors are generated at restore time because the files are being deleted during cleanup while tar is still running.&lt;/p&gt;</comment>
                            <comment id="140949" author="noopur.maheshwari" created="Wed, 3 Feb 2016 05:16:21 +0000"  >&lt;p&gt;Hello Andreas,&lt;/p&gt;

&lt;p&gt;Dangling symlinks do not cause tar to fail: I created a dangling symlink in a temporary folder and ran tar on that folder, and tar did not fail.&lt;br/&gt;
I tried using &quot;tar -cf --ignore-failed-read&quot;; it reports a warning instead of an error on failed reads, so it does avoid a tar error during read.&lt;/p&gt;</comment>
                            <comment id="143176" author="jamesanunez" created="Mon, 22 Feb 2016 15:25:21 +0000"  >&lt;p&gt;Noopur - In the patch, you stated &quot;Changing directory to /tmp does not help in this case. We see these tar failures without Lustre mounted as well. There is a problem with the tar utility, OS or VM (kvm or vmware). This isn&apos;t a lustre problem. Abandoning.&quot; &lt;/p&gt;

&lt;p&gt;So, I am closing this ticket as &quot;Not a Bug&quot;.&lt;/p&gt;</comment>
                            <comment id="144090" author="noopur.maheshwari" created="Mon, 29 Feb 2016 03:08:16 +0000"  >&lt;p&gt;Hello James,&lt;/p&gt;

&lt;p&gt;I figured out that it isn&apos;t a tar utility issue; it is a test case issue.&lt;/p&gt;

&lt;p&gt;kill -0, as used in the test case, only checks whether one has permission to send signals to a running process via kill.&lt;br/&gt;
kill -0 neither kills tar nor waits for it to complete.&lt;/p&gt;

&lt;p&gt;The tar process runs in an infinite loop, and the removal/cleanup of files interferes with it and causes tar to fail.&lt;br/&gt;
The main process should wait for the tar process to complete before cleanup, and then exit gracefully. I&apos;ll push a patch for this.&lt;/p&gt;

&lt;p&gt;Could you please reopen the ticket?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;</comment>
                            <comment id="144259" author="gerrit" created="Tue, 1 Mar 2016 08:38:42 +0000"  >&lt;p&gt;Noopur Maheshwari (noopur.maheshwari@seagate.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/18732&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/18732&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7656&quot; title=&quot;replay-single_70c test failed tar: Exiting with failure status due to previous errors&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7656&quot;&gt;&lt;del&gt;LU-7656&lt;/del&gt;&lt;/a&gt; tests: tar fix for replay-single/70c&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: b346da54ead50afc6f72615a33f4ed0e1f27b41e&lt;/p&gt;</comment>
                            <comment id="151876" author="gerrit" created="Wed, 11 May 2016 16:37:03 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/18732/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/18732/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7656&quot; title=&quot;replay-single_70c test failed tar: Exiting with failure status due to previous errors&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7656&quot;&gt;&lt;del&gt;LU-7656&lt;/del&gt;&lt;/a&gt; tests: tar fix for replay-single/70c&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 13f4d2a5ab81b479fcc1cd2263c2cd8db8b616c5&lt;/p&gt;</comment>
                            <comment id="152263" author="jgmitter" created="Fri, 13 May 2016 18:25:31 +0000"  >&lt;p&gt;Landed to master for 2.9.0&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                            <attachment id="20091" name="70c.lctl.tgz" size="692898" author="noopur.maheshwari" created="Tue, 12 Jan 2016 20:03:23 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzxxxz:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>