<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:06:57 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-430] Issues with mount.lustre and automounter</title>
                <link>https://jira.whamcloud.com/browse/LU-430</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;We are using the automounter to mount some of our Lustre filesystems on worker nodes around the cluster.&lt;br/&gt;
These filesystems get unmounted after a long period of inactivity.&lt;/p&gt;

&lt;p&gt;The problem I&apos;d like to point out is not frequent, but it may affect other users as well.&lt;br/&gt;
In some cases, when the filesystem gets unmounted, the related entry is not removed from the /etc/mtab file.&lt;br/&gt;
This leads to a situation where the automounter is unable to mount Lustre again.&lt;/p&gt;

&lt;p&gt;Part of strace log from automount daemon:&lt;/p&gt;

&lt;p&gt;...&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;pid 19154&amp;#93;&lt;/span&gt; execve(&quot;/sbin/mount.lustre&quot;, &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;quot;/sbin/mount.lustre&amp;quot;, &amp;quot;10.8.1.101:/scratch&amp;quot;, &amp;quot;/mnt/auto/scratch-lustre&amp;quot;, &amp;quot;-f&amp;quot;, &amp;quot;-o&amp;quot;, &amp;quot;rw,nosuid,nodev,localflock&amp;quot;&amp;#93;&lt;/span&gt;, &lt;span class=&quot;error&quot;&gt;&amp;#91;/* 14 vars */&amp;#93;&lt;/span&gt;) = 0&lt;/p&gt;

&lt;p&gt;...&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;pid 19154&amp;#93;&lt;/span&gt; write(2, &quot;mount.lustre: according to /etc/mtab 10.8.1.101:/scratch is already mounted on /mnt/auto/scratch-lustre\n&quot;, 104) = 104&lt;br/&gt;
...&lt;/p&gt;

&lt;p&gt;To make mounting possible again, the related entry needs to be removed from /etc/mtab.&lt;br/&gt;
I am not sure which part of the lustre-automounter pair is misbehaving here.&lt;br/&gt;
Is it the automounter not removing the entry from /etc/mtab, or mount.lustre itself not checking the&lt;br/&gt;
mount status in /proc/mounts?&lt;/p&gt;

&lt;p&gt;More details:&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@n2-1-1 ~&amp;#93;&lt;/span&gt;# grep -e lustre  /etc/mtab&lt;br/&gt;
10.8.1.101:/scratch /mnt/auto/scratch-lustre lustre rw,nosuid,nodev,localflock 0 0&lt;br/&gt;
10.8.1.101:/storage /mnt/auto/storage-lustre lustre rw,nosuid,nodev,localflock 0 0&lt;br/&gt;
172.16.193.1@o2ib:/scratch /mnt/lustre/scratch lustre rw,nosuid,nodev,user_xattr,flock,acl,user_xattr,flock,acl 0 0&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@n2-1-1 ~&amp;#93;&lt;/span&gt;# grep -e lustre  /proc/mounts &lt;br/&gt;
10.8.1.101@tcp:/storage /mnt/auto/storage-lustre lustre rw,nosuid,nodev,localflock,acl 0 0&lt;br/&gt;
172.16.193.1@o2ib:/scratch /mnt/lustre/scratch lustre rw,nosuid,nodev,flock,acl 0 0&lt;/p&gt;
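&lt;p&gt;A minimal sketch of the check being asked for (assumed POSIX sh; the sample entries below are copied from the tables quoted above rather than read from a live system): treat an mtab entry as stale when its mount point does not appear in /proc/mounts. Matching is done on the mount point, not the device, since the device spelling differs between the two tables (10.8.1.101: vs 10.8.1.101@tcp:).&lt;/p&gt;

```shell
# Hypothetical stale-entry check. Sample data mirrors the tables quoted
# in this report; on a real node you would read /etc/mtab and
# /proc/mounts instead of these variables.
mtab='10.8.1.101:/scratch /mnt/auto/scratch-lustre lustre rw,nosuid,nodev,localflock 0 0
10.8.1.101:/storage /mnt/auto/storage-lustre lustre rw,nosuid,nodev,localflock 0 0'

mounts='10.8.1.101@tcp:/storage /mnt/auto/storage-lustre lustre rw,nosuid,nodev,localflock,acl 0 0'

# An entry is stale if its mount point never shows up in /proc/mounts.
stale=$(printf '%s\n' "$mtab" | while read -r dev mnt rest; do
    printf '%s\n' "$mounts" | grep -q " $mnt " || echo "stale: $dev on $mnt"
done)
echo "$stale"
```

&lt;p&gt;With the sample data this reports /mnt/auto/scratch-lustre as stale, which is exactly the entry blocking the automounter in the strace log above.&lt;/p&gt;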


&lt;p&gt;Best Regards&lt;br/&gt;
&amp;#8211;&lt;br/&gt;
Lukasz Flis&lt;br/&gt;
ACC Cyfronet&lt;/p&gt;


</description>
                <environment>Lustre, RHEL5.6, Automounter</environment>
        <key id="11192">LU-430</key>
            <summary>Issues with mount.lustre and automounter</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="5" iconUrl="https://jira.whamcloud.com/images/icons/priorities/trivial.svg">Trivial</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="lflis">Lukasz Flis</reporter>
                        <labels>
                    </labels>
                <created>Sat, 18 Jun 2011 05:11:14 +0000</created>
                <updated>Wed, 4 Feb 2015 22:25:28 +0000</updated>
                            <resolved>Wed, 4 Feb 2015 22:25:28 +0000</resolved>
                                    <version>Lustre 1.8.6</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                                                                            <comments>
                            <comment id="16560" author="pjones" created="Sun, 19 Jun 2011 00:09:06 +0000"  >&lt;p&gt;Lukasz&lt;/p&gt;

&lt;p&gt;Thanks for your submission. Could you please clarify which release you are running? You have selected 1.8.6 as the Lustre version this occurs on. What is your source for this release? Also, did this issue also occur on earlier 1.8.x releases, or is it a regression since you upgraded?&lt;/p&gt;

&lt;p&gt;Regards&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="16561" author="lflis" created="Sun, 19 Jun 2011 04:40:26 +0000"  >&lt;p&gt;Peter, &lt;/p&gt;

&lt;p&gt;Thank you for the quick reply.&lt;br/&gt;
The problem I&apos;ve described has been known to us for as long as we have been using Lustre.&lt;/p&gt;

&lt;p&gt;All the 1.8.x versions we have run are known to have this issue, so it does not look like a regression to me.&lt;br/&gt;
We are not sure about 1.6.x, as we have never used it with the automounter.&lt;/p&gt;

&lt;p&gt;The most recent version we have right now is:&lt;br/&gt;
1.8.5.56-2.cyfronet.2.6.18_238.12.1.el5&lt;br/&gt;
This is our internal RPM package, compiled from the b1_8 git branch with the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-376&quot; title=&quot;Client hangs when listing big directory with ls -la &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-376&quot;&gt;&lt;del&gt;LU-376&lt;/del&gt;&lt;/a&gt; patches applied just before they landed in the branch.&lt;/p&gt;

&lt;p&gt;Regards&lt;br/&gt;
&amp;#8211;&lt;br/&gt;
Lukasz&lt;/p&gt;</comment>
                            <comment id="16563" author="pjones" created="Sun, 19 Jun 2011 11:40:11 +0000"  >&lt;p&gt;Lukasz&lt;/p&gt;

&lt;p&gt;Ah, thanks for clarifying which code you are running. Based on the information you have supplied, I do not think this warrants being a blocker for 1.8.6-wc (which is in release testing), but it is something we could consider fixing in a future release.&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="105745" author="adilger" created="Wed, 4 Feb 2015 22:25:28 +0000"  >&lt;p&gt;Apparently there was a bug in automount for RHEL5 that was fixed in RHEL5.7:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href=&quot;https://bugzilla.redhat.com/show_bug.cgi?id=520745&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://bugzilla.redhat.com/show_bug.cgi?id=520745&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://bugzilla.redhat.com/show_bug.cgi?id=632006&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://bugzilla.redhat.com/show_bug.cgi?id=632006&lt;/a&gt;&lt;br/&gt;
    The autofs utility failed to mount Lustre metadata target (MDT) failover mounts because it could not understand the mount point syntax. With this update, the mount point syntax is processed correctly and the failover is mounted as expected.&lt;/p&gt;&lt;/blockquote&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzw22n:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>10417</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>