<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:19:36 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-1778] Root Squash is not always properly enforced</title>
                <link>https://jira.whamcloud.com/browse/LU-1778</link>
                <project id="10000" key="LU">Lustre</project>
<description>&lt;p&gt;On a node with root_squash activated, if root tries to access the attributes of a file (fstat) that has not previously been accessed, the operation returns EACCES (Permission denied).&lt;br/&gt;
If the file&apos;s attributes were already accessed by an authorized user, then root can access the attributes without trouble.&lt;/p&gt;

&lt;p&gt;As root:&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@clientae ~&amp;#93;&lt;/span&gt;# mount -t lustre 192.168.1.100:/scratch /scratch&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@clientae ~&amp;#93;&lt;/span&gt;# cd /scratch/&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@clientae scratch&amp;#93;&lt;/span&gt;# ls -la&lt;br/&gt;
total 16&lt;br/&gt;
drwxrwxrwx   4 root  root  4096 Aug 21 18:03 .&lt;br/&gt;
dr-xr-xr-x. 28 root  root  4096 Aug 22 15:53 ..&lt;br/&gt;
drwxr-xr-x   2 root  root  4096 Jun 21 18:42 .lustre&lt;br/&gt;
drwx------   2 slurm users 4096 Aug 21 18:03 test_dir&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@clientae scratch&amp;#93;&lt;/span&gt;# cd test_dir/&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@clientae test_dir&amp;#93;&lt;/span&gt;# ls -la&lt;br/&gt;
ls: cannot open directory .: Permission denied&lt;/p&gt;

&lt;p&gt;Then, as user &apos;slurm&apos;:&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;slurm@clientae ~&amp;#93;&lt;/span&gt;$ cd /scratch/test_dir&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;slurm@clientae test_dir&amp;#93;&lt;/span&gt;# ls -la&lt;br/&gt;
total 16&lt;br/&gt;
drwx------ 2 slurm users 4096 Aug 21 18:03 .&lt;br/&gt;
drwxrwxrwx 4 root  root  4096 Aug 22 16:47 ..&lt;br/&gt;
-rw-r--r-- 1 slurm users 7007 Aug 22 15:58 afile&lt;/p&gt;

&lt;p&gt;Now, come back as user root and replay the &apos;ls&apos; command:&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@clientae test_dir&amp;#93;&lt;/span&gt;# ls -la&lt;br/&gt;
total 16&lt;br/&gt;
drwx------ 2 slurm users 4096 Aug 21 18:03 .&lt;br/&gt;
drwxrwxrwx 4 root  root  4096 Aug 22 16:47 ..&lt;br/&gt;
-rw-r--r-- 1 slurm users 7007 Aug 22 15:58 afile&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@clientae test_dir&amp;#93;&lt;/span&gt;# stat afile&lt;br/&gt;
  File: `afile&apos;&lt;br/&gt;
  Size: 7007            Blocks: 16         IO Block: 2097152 regular file&lt;br/&gt;
Device: d61f715ah/3592384858d   Inode: 144115238826934275  Links: 1&lt;br/&gt;
Access: (0644/-rw-r--r--)  Uid: (  500/   slurm)   Gid: (  100/   users)&lt;br/&gt;
Access: 2012-08-22 15:59:26.000000000 +0200&lt;br/&gt;
Modify: 2012-08-22 15:58:55.000000000 +0200&lt;br/&gt;
Change: 2012-08-22 15:58:55.000000000 +0200&lt;/p&gt;

&lt;p&gt;At this point, if you try to look into the file as root, you get EACCES:&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@clientae test_dir&amp;#93;&lt;/span&gt;# cat afile&lt;br/&gt;
cat: afile: Permission denied&lt;br/&gt;
even though the content was already accessed by the authorized user.&lt;/p&gt;

&lt;p&gt;But if the file is held open by the user (&apos;tail -f afile&apos; for example), root gets access to the content of the file as well:&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@clientae test_dir&amp;#93;&lt;/span&gt;# tail afile&lt;br/&gt;
coucou&lt;br/&gt;
coucou&lt;br/&gt;
coucou&lt;br/&gt;
coucou&lt;br/&gt;
coucou&lt;br/&gt;
coucou&lt;br/&gt;
coucou&lt;br/&gt;
coucou&lt;br/&gt;
coucou&lt;br/&gt;
coucou&lt;/p&gt;

&lt;p&gt;As soon as the file is closed by the user, root loses access to the content (at least it can&apos;t open the file any more).&lt;/p&gt;

&lt;p&gt;Alex.&lt;/p&gt;</description>
                <environment></environment>
        <key id="15565">LU-1778</key>
            <summary>Root Squash is not always properly enforced</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="6" iconUrl="https://jira.whamcloud.com/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="niu">Niu Yawei</assignee>
                                    <reporter username="louveta">Alexandre Louvet</reporter>
                        <labels>
                    </labels>
                <created>Wed, 22 Aug 2012 11:11:22 +0000</created>
                <updated>Tue, 28 Feb 2023 11:53:27 +0000</updated>
                            <resolved>Fri, 9 May 2014 15:05:47 +0000</resolved>
                                    <version>Lustre 2.1.1</version>
                    <version>Lustre 2.1.2</version>
                                    <fixVersion>Lustre 2.6.0</fixVersion>
                    <fixVersion>Lustre 2.5.4</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>15</watches>
                                                                            <comments>
                            <comment id="43698" author="pjones" created="Thu, 23 Aug 2012 12:00:47 +0000"  >&lt;p&gt;Bob&lt;/p&gt;

&lt;p&gt;Could you please look into this one?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="45557" author="dmoreno" created="Wed, 26 Sep 2012 03:53:25 +0000"  >&lt;p&gt;Hi, any news on this ticket? Do you need some more information?&lt;/p&gt;</comment>
                            <comment id="45809" author="bogl" created="Mon, 1 Oct 2012 12:24:35 +0000"  >&lt;p&gt;I haven&apos;t been able to reproduce this failure in the current b2_1:&lt;/p&gt;


&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@centos53 ~&amp;#93;&lt;/span&gt;# mount -t lustre centos53:/lustre /mnt/lustre&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@centos53 ~&amp;#93;&lt;/span&gt;# lctl get_param mdt/*/root_squash&lt;br/&gt;
mdt.lustre-MDT0000.root_squash=500:500&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@centos53 ~&amp;#93;&lt;/span&gt;# cd /mnt/lustre&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@centos53 lustre&amp;#93;&lt;/span&gt;# ls -la&lt;br/&gt;
total 16&lt;br/&gt;
drwxrwxrwx  4 root root 4096 Oct  1 09:06 .&lt;br/&gt;
drwxr-xr-x. 6 root root 4096 Oct  1 09:05 ..&lt;br/&gt;
drwx------  2 bogl bogl 4096 Oct  1 09:07 bogl&lt;br/&gt;
drwxr-xr-x  2 root root 4096 Oct  1 09:05 .lustre&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@centos53 lustre&amp;#93;&lt;/span&gt;# cd bogl&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@centos53 bogl&amp;#93;&lt;/span&gt;# ls -la&lt;br/&gt;
total 12&lt;br/&gt;
drwx------ 2 bogl bogl 4096 Oct  1 09:07 .&lt;br/&gt;
drwxrwxrwx 4 root root 4096 Oct  1 09:06 ..&lt;br/&gt;
-rw------- 1 bogl bogl    4 Oct  1 09:07 f1&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@centos53 bogl&amp;#93;&lt;/span&gt;# cat f1&lt;br/&gt;
foo&lt;/p&gt;

&lt;p&gt;Am I doing something incorrect in my reproduction attempt?  Is there some other precondition to making this happen?&lt;/p&gt;
</comment>
                            <comment id="45840" author="louveta" created="Tue, 2 Oct 2012 05:52:14 +0000"  >&lt;p&gt;This is worse than in my case ...&lt;/p&gt;

&lt;p&gt;1/ as root has been remapped to something different from 0:0, I would expect that you would not be able to enter the bogl directory&lt;br/&gt;
2/ for the same reason, root shouldn&apos;t be able to view the content of f1&lt;/p&gt;

&lt;p&gt;That said, I ran a new test on a vanilla 2.1.3 (i.e. the rpm downloaded from whamcloud, without recompilation) on top of an up-to-date CentOS 6.x, to confirm that it still fails with the latest available version.&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@server ~&amp;#93;&lt;/span&gt;# lctl get_param mdt/*/root_squash&lt;br/&gt;
mdt.scratch1-MDT0000.root_squash=0:0&lt;br/&gt;
mdt.scratch2-MDT0000.root_squash=0:0&lt;br/&gt;
mdt.scratch3-MDT0000.root_squash=0:0&lt;/p&gt;

&lt;p&gt;=&amp;gt; set root_squash to an id which doesn&apos;t match my user id&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@server ~&amp;#93;&lt;/span&gt;# lctl conf_param scratch1.mdt.root_squash=&quot;65535:65535&quot;&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@server ~&amp;#93;&lt;/span&gt;# lctl get_param mdt/*/root_squash&lt;br/&gt;
mdt.scratch1-MDT0000.root_squash=0:0&lt;br/&gt;
mdt.scratch2-MDT0000.root_squash=0:0&lt;br/&gt;
mdt.scratch3-MDT0000.root_squash=0:0&lt;/p&gt;

&lt;p&gt;On the client, running as a regular user:&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;test@client scratch1&amp;#93;&lt;/span&gt;$ id&lt;br/&gt;
uid=500(test) gid=100(users) groups=100(users)&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;test@client scratch1&amp;#93;&lt;/span&gt;$ pwd&lt;br/&gt;
/scratch1&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;test@client scratch1&amp;#93;&lt;/span&gt;$ mkdir test&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;test@client scratch1&amp;#93;&lt;/span&gt;$ chmod 700 test&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;test@client scratch1&amp;#93;&lt;/span&gt;$ ls -la&lt;br/&gt;
total 16&lt;br/&gt;
drwxrwxrwx   3 root root  4096 Sep 11 22:15 .&lt;br/&gt;
dr-xr-xr-x. 29 root root  4096 Oct  2 09:22 ..&lt;br/&gt;
drwxr-xr-x   2 root root  4096 Sep 11 22:15 .lustre&lt;br/&gt;
drwx------   2 test users 4096 Oct  2 10:37 test&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;test@client scratch1&amp;#93;&lt;/span&gt;$ cd test/&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;test@client test&amp;#93;&lt;/span&gt;$ echo coucou &amp;gt; afile&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;test@client test&amp;#93;&lt;/span&gt;$ ls -la&lt;br/&gt;
total 9&lt;br/&gt;
drwx------ 2 test users 4096 Oct  2 10:37 .&lt;br/&gt;
drwxrwxrwx 3 root root  4096 Sep 11 22:15 ..&lt;br/&gt;
-rw-r--r-- 1 test users    7 Oct  2 10:37 afile&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;test@client test&amp;#93;&lt;/span&gt;$ cat afile&lt;br/&gt;
coucou&lt;/p&gt;

&lt;p&gt;Now log in as root on the client:&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@client scratch1&amp;#93;&lt;/span&gt;# pwd&lt;br/&gt;
/scratch1&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@client scratch1&amp;#93;&lt;/span&gt;# ls -la&lt;br/&gt;
total 16&lt;br/&gt;
drwxrwxrwx   3 root root  4096 Sep 11 22:15 .&lt;br/&gt;
dr-xr-xr-x. 29 root root  4096 Oct  2 09:22 ..&lt;br/&gt;
drwxr-xr-x   2 root root  4096 Sep 11 22:15 .lustre&lt;br/&gt;
drwx------   2 test users 4096 Oct  2 10:37 test&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@client scratch1&amp;#93;&lt;/span&gt;# cd test/&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@client test&amp;#93;&lt;/span&gt;# ls -la&lt;br/&gt;
total 12&lt;br/&gt;
drwx------ 2 test users 4096 Oct  2 10:37 .&lt;br/&gt;
drwxrwxrwx 3 root root  4096 Sep 11 22:15 ..&lt;br/&gt;
-rw-r--r-- 1 test users    7 Oct  2 10:37 afile&lt;/p&gt;

&lt;p&gt;=&amp;gt; There is already something funny at this point. As root was mapped to 65535:65535, I would expect not to be able to enter this directory (700) &lt;span class=&quot;error&quot;&gt;&amp;#91;it was also shown in your test&amp;#93;&lt;/span&gt;. Flushing the cache on the client (i.e. echo 3 &amp;gt; /proc/sys/vm/drop_caches) changes the situation: root can enter the &apos;test&apos; directory, but can&apos;t stat files:&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@client test&amp;#93;&lt;/span&gt;# ls -la&lt;br/&gt;
ls: cannot access afile: Permission denied&lt;br/&gt;
total 8&lt;br/&gt;
drwx------ 2 test users 4096 Oct  2 10:37 .&lt;br/&gt;
drwxrwxrwx 3 root root  4096 Sep 11 22:15 ..&lt;br/&gt;
-????????? ? ?    ?        ?            ? afile&lt;/p&gt;

&lt;p&gt;I imagine this is due to the fact that the uid:gid translation is only made on the MDT side and not on the client side, letting root access attributes in the client-side cache without problem. Am I right?&lt;/p&gt;

&lt;p&gt;Anyway, return as the test user and stat &apos;afile&apos; again:&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;test@client test&amp;#93;&lt;/span&gt;$ ls -la&lt;br/&gt;
total 12&lt;br/&gt;
drwx------ 2 test users 4096 Oct  2 10:37 .&lt;br/&gt;
drwxrwxrwx 3 root root  4096 Sep 11 22:15 ..&lt;br/&gt;
-rw-r--r-- 1 test users    7 Oct  2 10:37 afile&lt;/p&gt;

&lt;p&gt;Switching back to root and running &apos;ls&apos; once again gives root access to the attributes again:&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@client test&amp;#93;&lt;/span&gt;# ls -la&lt;br/&gt;
total 12&lt;br/&gt;
drwx------ 2 test users 4096 Oct  2 10:37 .&lt;br/&gt;
drwxrwxrwx 3 root root  4096 Sep 11 22:15 ..&lt;br/&gt;
-rw-r--r-- 1 test users    7 Oct  2 10:37 afile&lt;/p&gt;

&lt;p&gt;At this point root can&apos;t access the content of &apos;afile&apos;:&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@client test&amp;#93;&lt;/span&gt;# cat afile&lt;br/&gt;
cat: afile: Permission denied&lt;/p&gt;

&lt;p&gt;unless an authorized user runs &apos;tail -f afile&apos; and keeps it running:&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;test@client test&amp;#93;&lt;/span&gt;$ tail -f afile&lt;br/&gt;
coucou&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@client test&amp;#93;&lt;/span&gt;# cat afile &lt;br/&gt;
coucou&lt;/p&gt;

</comment>
                            <comment id="45867" author="bogl" created="Tue, 2 Oct 2012 11:13:15 +0000"  >&lt;p&gt;In my case id 500 == bogl. With root squash set to 500 (bogl), root should be able to see into the bogl-owned dir and file, and it can.&lt;/p&gt;

&lt;p&gt;I will retry with setting root squash to some other id.&lt;/p&gt;</comment>
                            <comment id="45871" author="bogl" created="Tue, 2 Oct 2012 11:44:19 +0000"  >&lt;p&gt;Have set root_squash to 65535:65535, shown by:&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@centos54 bogl&amp;#93;&lt;/span&gt;# lctl set_param mdt/*/root_squash=65535:65535&lt;br/&gt;
mdt.lustre-MDT0000.root_squash=65535:65535&lt;/p&gt;

&lt;p&gt;On client accessing as bogl, tree looks like:&lt;/p&gt;


&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;bogl@centos53 lustre-release&amp;#93;&lt;/span&gt;$ ll -R /mnt/lustre&lt;br/&gt;
/mnt/lustre:&lt;br/&gt;
total 4&lt;br/&gt;
drwx------ 2 bogl bogl 4096 Oct  1 15:18 bogl&lt;/p&gt;

&lt;p&gt;/mnt/lustre/bogl:&lt;br/&gt;
total 4&lt;br/&gt;
-rw------- 1 bogl bogl 4 Oct  1 15:18 file&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;bogl@centos53 lustre-release&amp;#93;&lt;/span&gt;$ &lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;bogl@centos53 lustre-release&amp;#93;&lt;/span&gt;$ cat /mnt/lustre/bogl/file&lt;br/&gt;
foo&lt;/p&gt;

&lt;p&gt;Note permissions on dir and file only for bogl (id==500).&lt;/p&gt;

&lt;p&gt;Accessing as root, I consistently see no access for ls or file content to dir or file:&lt;/p&gt;


&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@centos53 ~&amp;#93;&lt;/span&gt;# ll -R /mnt/lustre&lt;br/&gt;
/mnt/lustre:&lt;br/&gt;
total 4&lt;br/&gt;
drwx------ 2 bogl bogl 4096 Oct  1 15:18 bogl&lt;br/&gt;
ls: cannot open directory /mnt/lustre/bogl: Permission denied&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@centos53 bogl&amp;#93;&lt;/span&gt;# cat /mnt/lustre/bogl/file&lt;br/&gt;
cat: /mnt/lustre/bogl/file: Permission denied&lt;/p&gt;

&lt;p&gt;I do see access being allowed for cd into the bogl-owned dir. A stat of the file is initially refused:&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@centos53 bogl&amp;#93;&lt;/span&gt;# cd /mnt/lustre/bogl&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@centos53 bogl&amp;#93;&lt;/span&gt;# stat file&lt;br/&gt;
stat: cannot stat `file&apos;: Permission denied&lt;/p&gt;

&lt;p&gt;Then after doing a stat of the file as bogl:&lt;/p&gt;


&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;bogl@centos53 lustre-release&amp;#93;&lt;/span&gt;$ stat /mnt/lustre/bogl/file&lt;br/&gt;
  File: `/mnt/lustre/bogl/file&apos;&lt;br/&gt;
  Size: 4         	Blocks: 8          IO Block: 2097152 regular file&lt;br/&gt;
Device: 2c54f966h/743766374d	Inode: 144115205255725058  Links: 1&lt;br/&gt;
Access: (0600/-rw-------)  Uid: (  500/    bogl)   Gid: (  500/    bogl)&lt;br/&gt;
Access: 2012-10-02 08:20:40.000000000 -0700&lt;br/&gt;
Modify: 2012-10-01 15:18:07.000000000 -0700&lt;br/&gt;
Change: 2012-10-01 15:18:46.000000000 -0700&lt;/p&gt;

&lt;p&gt;A later stat of the file as root is allowed:&lt;/p&gt;


&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@centos53 bogl&amp;#93;&lt;/span&gt;# stat file&lt;br/&gt;
  File: `file&apos;&lt;br/&gt;
  Size: 4         	Blocks: 8          IO Block: 2097152 regular file&lt;br/&gt;
Device: 2c54f966h/743766374d	Inode: 144115205255725058  Links: 1&lt;br/&gt;
Access: (0600/-rw-------)  Uid: (  500/    bogl)   Gid: (  500/    bogl)&lt;br/&gt;
Access: 2012-10-02 08:20:40.000000000 -0700&lt;br/&gt;
Modify: 2012-10-01 15:18:07.000000000 -0700&lt;br/&gt;
Change: 2012-10-01 15:18:46.000000000 -0700&lt;/p&gt;

&lt;p&gt;I see no case where access to the file content as root is allowed:&lt;/p&gt;


&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@centos53 bogl&amp;#93;&lt;/span&gt;# cat file&lt;br/&gt;
cat: file: Permission denied&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@centos53 bogl&amp;#93;&lt;/span&gt;# cat /mnt/lustre/bogl/file&lt;br/&gt;
cat: /mnt/lustre/bogl/file: Permission denied&lt;/p&gt;

&lt;p&gt;This behavior looks consistent in all versions of 2.X right up to master.&lt;/p&gt;</comment>
                            <comment id="45872" author="bogl" created="Tue, 2 Oct 2012 11:53:27 +0000"  >&lt;p&gt;Correction: on another retry I do see incorrect access to the file content being allowed. If I do a stat and then a persistent access as bogl:&lt;/p&gt;


&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;bogl@centos53 lustre-release&amp;#93;&lt;/span&gt;$ stat /mnt/lustre/bogl/file&lt;br/&gt;
  File: `/mnt/lustre/bogl/file&apos;&lt;br/&gt;
  Size: 4         	Blocks: 8          IO Block: 2097152 regular file&lt;br/&gt;
Device: 2c54f966h/743766374d	Inode: 144115205255725058  Links: 1&lt;br/&gt;
Access: (0600/-rw-------)  Uid: (  500/    bogl)   Gid: (  500/    bogl)&lt;br/&gt;
Access: 2012-10-02 08:35:56.000000000 -0700&lt;br/&gt;
Modify: 2012-10-01 15:18:07.000000000 -0700&lt;br/&gt;
Change: 2012-10-01 15:18:46.000000000 -0700&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;bogl@centos53 lustre-release&amp;#93;&lt;/span&gt;$ tail -f /mnt/lustre/bogl/file&lt;br/&gt;
foo&lt;/p&gt;

&lt;p&gt;After that a stat and access as root is allowed, at least for a while:&lt;/p&gt;


&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@centos53 bogl&amp;#93;&lt;/span&gt;# stat file&lt;br/&gt;
  File: `file&apos;&lt;br/&gt;
  Size: 4         	Blocks: 8          IO Block: 2097152 regular file&lt;br/&gt;
Device: 2c54f966h/743766374d	Inode: 144115205255725058  Links: 1&lt;br/&gt;
Access: (0600/-rw-------)  Uid: (  500/    bogl)   Gid: (  500/    bogl)&lt;br/&gt;
Access: 2012-10-02 08:50:02.000000000 -0700&lt;br/&gt;
Modify: 2012-10-01 15:18:07.000000000 -0700&lt;br/&gt;
Change: 2012-10-01 15:18:46.000000000 -0700&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@centos53 bogl&amp;#93;&lt;/span&gt;# cat file&lt;br/&gt;
foo&lt;/p&gt;

&lt;p&gt;It seems to require both a (permitted) stat and file access as bogl before the access that should be forbidden as root gets allowed.&lt;/p&gt;</comment>
                            <comment id="47075" author="pjones" created="Tue, 30 Oct 2012 01:28:37 +0000"  >&lt;p&gt;Niu is going to look into this one&lt;/p&gt;</comment>
                            <comment id="47077" author="niu" created="Tue, 30 Oct 2012 04:43:56 +0000"  >&lt;p&gt;Hi, Alex&lt;/p&gt;

&lt;p&gt;As you mentioned, root_squash is just a server-side id remapping (like NFS root_squash); it doesn&apos;t affect the client cache, so this looks like expected behaviour to me. You need to make sure the cache is cleared before you expect root_squash to be enforced. (I think it&apos;s the same for NFS, isn&apos;t it?) Thanks.&lt;/p&gt;</comment>
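For reference, the cache clearing suggested here can be done as follows on a client node. The drop_caches write is standard Linux; the lru_size line is an assumption based on common Lustre administration practice (it cancels the client's cached DLM locks so attributes are re-fetched from the MDS):

```shell
# Run as root on the Lustre client before re-testing root_squash
# enforcement (commands assumed; adjust to your setup):
sync                                              # flush dirty pages first
echo 3 > /proc/sys/vm/drop_caches                 # drop page/dentry/inode caches
lctl set_param ldlm.namespaces.*.lru_size=clear   # cancel cached Lustre DLM locks
```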
                            <comment id="47928" author="louveta" created="Fri, 16 Nov 2012 08:58:33 +0000"  >&lt;p&gt;Hi Niu,&lt;/p&gt;

&lt;p&gt;I made some tests with NFS and it works as expected: root (under root_squash) never gets access to user data if the rights for &apos;others&apos; are not set. It does not depend on the activity of an authorized user on the same client.&lt;br/&gt;
I should add that I can&apos;t make sure the cache is cleared before root_squash needs to be enforced: root_squash is expected to be enforced all the time, and the cache content depends on the activity of the authorized user.&lt;/p&gt;</comment>
                            <comment id="48517" author="niu" created="Wed, 28 Nov 2012 23:32:14 +0000"  >&lt;blockquote&gt;
&lt;p&gt;I made some tests with NFS and it works as expected: root (under root_squash) never gets access to user data if the rights for &apos;others&apos; are not set. It does not depend on the activity of an authorized user on the same client.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;I think the NFS client does not know whether the server is squashing root either, but there are several reasons I can think of that could make NFS root_squash mostly unaffected by the client cache:&lt;/p&gt;

&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;NFS is a WCC (Weak Cache Consistency) filesystem; it revalidates the client cache every few seconds (the default is 3?), and with some mount options the cache can be revalidated before every operation.&lt;/li&gt;
	&lt;li&gt;The NFS client sends an ACCESS RPC for the root user before operations.&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;Lustre is a strong-cache-consistency filesystem; we can&apos;t afford the extra ACCESS RPC or cache revalidation like NFS does. Maybe we can make the client aware of the root_squash setting on the server, and let users configure whether they always want the access check done on the server side (sacrificing performance), but I&apos;m not sure we have enough resources to implement that at the moment. In any case, I think we should state the root_squash caching problem clearly in the manual.&lt;/p&gt;

&lt;p&gt;Alex, what do you think? Is this feature (enforcing root_squash regardless of caching) very important for you, or is just improving the manual OK for you? Thanks.&lt;/p&gt;</comment>
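To make the server-side mechanism discussed in this thread concrete, here is a small illustrative sketch (plain shell, not Lustre code; the nid values and the map_creds helper are invented for illustration) of what the MDS-side squash mapping amounts to, including the nosquash_nids exemption:

```shell
#!/bin/bash
# Toy model of MDS-side root squash: root (0:0) is remapped to the
# configured squash ids unless the request comes from a nosquash nid.
squash_uid=65535
squash_gid=65535
nosquash_nids="192.168.1.5@tcp"   # hypothetical trusted client nid

map_creds() {   # args: uid gid client_nid -> prints effective uid:gid
  local uid=$1 gid=$2 nid=$3
  for n in $nosquash_nids; do
    if [ "$n" = "$nid" ]; then echo "$uid:$gid"; return; fi
  done
  if [ "$uid" -eq 0 ]; then uid=$squash_uid; fi
  if [ "$gid" -eq 0 ]; then gid=$squash_gid; fi
  echo "$uid:$gid"
}

map_creds 0 0 10.0.0.1@tcp       # root from an ordinary client -> 65535:65535
map_creds 500 100 10.0.0.1@tcp   # regular user is untouched    -> 500:100
map_creds 0 0 192.168.1.5@tcp    # root from a nosquash nid     -> 0:0
```

The point of the thread is that this mapping only runs when an RPC actually reaches the MDS; anything served from the client cache never goes through it.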
                            <comment id="48527" author="sebastien.buisson" created="Thu, 29 Nov 2012 04:07:18 +0000"  >&lt;p&gt;Hi Niu,&lt;/p&gt;

&lt;p&gt;We are not asking that setting or unsetting the root_squash parameter be taken into account across the whole Lustre cluster in real time; we could easily live with unmounting and then remounting the clients if the root_squash parameter has changed on the server.&lt;br/&gt;
Our real issue is that a root user accessing a file from a client where the same file has already been accessed by a legitimate user will gain access to this file, whatever the root_squash parameter is, because the data will be read from the client cache.&lt;br/&gt;
I think it should be possible to store the root_squash information on the client at mount time. There would then be no need to verify this on the server for every request, and no impact on performance.&lt;/p&gt;

&lt;p&gt;What do you think?&lt;/p&gt;

&lt;p&gt;Sebastien.&lt;/p&gt;</comment>
                            <comment id="48528" author="niu" created="Thu, 29 Nov 2012 04:48:19 +0000"  >&lt;p&gt;Hi, Sebastien&lt;/p&gt;

&lt;p&gt;Yes, I agree with you on this. Adding a permission-checking hook for llite (and checking the squash setting there), and making llite aware of the root_squash setting, could save the RPCs to the server.&lt;/p&gt;

&lt;p&gt;In my opinion, this could be a feature enhancement rather than a bug. I&apos;m glad to implement this when time is available, and if you want to propose a patch for it, I&apos;m glad to help with the review. Thank you.&lt;/p&gt;</comment>
                            <comment id="48808" author="sebastien.buisson" created="Wed, 5 Dec 2012 10:05:25 +0000"  >&lt;p&gt;Hi,&lt;/p&gt;

&lt;p&gt;I would like to propose a patch to address this issue, so I carried out some tests to try to understand which functions are involved in getting file permissions and granting or not file access.&lt;br/&gt;
Unfortunately, my tests left me a little bit confused...&lt;/p&gt;

&lt;p&gt;Here is what I did.&lt;br/&gt;
File owner is user buisso1s. root_squash is enforced.&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;accessing as user pichong:&lt;br/&gt;
  -EACCES in ll_file_open() (file-&amp;gt;private_data is not NULL)&lt;/li&gt;
	&lt;li&gt;accessing as root:&lt;br/&gt;
  -EACCES in ll_file_open() (file-&amp;gt;private_data is not NULL)&lt;/li&gt;
	&lt;li&gt;accessing as user pichong, while buisso1s runs &apos;tail -f file&apos;:&lt;br/&gt;
  -EACCES in ll_inode_permission()&lt;/li&gt;
	&lt;li&gt;accessing as root, while buisso1s runs &apos;tail -f file&apos;:&lt;br/&gt;
  Access granted, both ll_file_open() and ll_inode_permission() return 0 (file-&amp;gt;private_data is NULL)&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;In the end, I could not figure out who is in charge of checking file permissions.&lt;br/&gt;
Can you shed some light on this?&lt;/p&gt;

&lt;p&gt;TIA,&lt;br/&gt;
Sebastien.&lt;/p&gt;</comment>
                            <comment id="48838" author="niu" created="Wed, 5 Dec 2012 20:34:13 +0000"  >&lt;p&gt;Hi,&lt;/p&gt;

&lt;p&gt;When there is no cache on the client, permission checking is done on the server side on the open RPC (the first two cases). When there is cache on the client (no open RPC is needed), the permission check is done on the client by the permission-checking hook (ll_inode_permission, which is invoked by the kernel; see may_open()).&lt;/p&gt;</comment>
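This two-path explanation can be summarized with a small toy model (plain shell, not Lustre code; the helper names and the 0600/uid-500 file are invented for illustration): the server-side check sees squashed credentials, while the cached client-side check (the ll_inode_permission path) sees root's real uid, which is exactly the inconsistency reported above.

```shell
#!/bin/bash
# Toy model of the two permission-check paths for a 0600 file owned by uid 500.
squash_uid=65535
file_owner=500

server_check() {   # no client cache: open RPC, MDS checks the squashed uid
  local uid=$1
  if [ "$uid" -eq 0 ]; then uid=$squash_uid; fi   # root squashed on the MDS
  if [ "$uid" -eq "$file_owner" ]; then echo granted; else echo EACCES; fi
}

client_check() {   # cached inode: kernel calls the client permission hook
  local uid=$1                                    # ...with the caller's real uid
  if [ "$uid" -eq 0 ]; then echo granted; return; fi   # root bypasses mode bits
  if [ "$uid" -eq "$file_owner" ]; then echo granted; else echo EACCES; fi
}

server_check 0   # -> EACCES  (squash enforced when the MDS is consulted)
client_check 0   # -> granted (squash bypassed once the cache is warm)
```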
                            <comment id="48846" author="sebastien.buisson" created="Thu, 6 Dec 2012 03:56:47 +0000"  >&lt;p&gt;So, is it OK if I propose to modify ll_inode_permission() to add a check for some kind of root_squash parameter that would be fetched by the client at mount time and stored somewhere?&lt;/p&gt;</comment>
                            <comment id="48851" author="niu" created="Thu, 6 Dec 2012 06:14:07 +0000"  >&lt;p&gt;Hi, Sebastien, I think it&apos;s doable. The current root_squash option is stored in mdt config log (because it&apos;s a mds only option), we could probably populate this option into the client config log as well, then the client can be notified whenever the option is changed. Thanks.&lt;/p&gt;</comment>
                            <comment id="49383" author="sebastien.buisson" created="Tue, 18 Dec 2012 10:22:33 +0000"  >&lt;p&gt;Hi,&lt;br/&gt;
I think I need some help regarding the way to store the root_squash option in the client config log. At the moment this is stored in the mdt config log, so how is it possible to pass it to the client config log? Via the mgs config log? What are the functions involved in that case?&lt;br/&gt;
TIA,&lt;br/&gt;
Sebastien.&lt;/p&gt;</comment>
                            <comment id="49417" author="niu" created="Tue, 18 Dec 2012 22:15:07 +0000"  >&lt;p&gt;Hi, Sebastien&lt;/p&gt;

&lt;p&gt;Please look at mgs_write_log_param(). root_squash is currently a PARAM_MDT param, which is stored in the $FSNAME-mdt0001 log; you might want it stored in the client log as well ($FSNAME-client). I think a simple way is to have the administrator run two configure commands:&lt;br/&gt;
1. lctl conf_param $FSNAME.mdt.rootsquash=$ID:$ID&lt;br/&gt;
2. lctl conf_param $FSNAME.llite.rootsquash=$ID:$ID&lt;/p&gt;

&lt;p&gt;And the other options related to root_squash should be treated carefully as well, such as nosquash_nids. Thanks.&lt;/p&gt;</comment>
                            <comment id="51464" author="pichong" created="Wed, 30 Jan 2013 11:14:02 +0000"  >&lt;p&gt;I have posted a patch for b2_1 on gerrit: &lt;a href=&quot;http://review.whamcloud.com/5212&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/5212&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="51515" author="pichong" created="Thu, 31 Jan 2013 03:00:56 +0000"  >&lt;p&gt;For information, here is the note from Andreas Dilger in the gerrit.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Patch Set 1: I would prefer that you didn&apos;t submit this&lt;/p&gt;

&lt;p&gt;I don&apos;t think this patch introduces any useful security to the system. If the user is root on the client, then it is trivial to &quot;su&quot; to another user and bypass the client-side root squash entirely.&lt;/p&gt;&lt;/blockquote&gt;</comment>
                            <comment id="51520" author="louveta" created="Thu, 31 Jan 2013 04:15:32 +0000"  >&lt;p&gt;This is also true for NFS, but that is not the problem. Lustre claims to support root_squash (at least there is a chapter in the documentation about this feature), and customers expect this functionality to prevent root from accessing files to which the root user does not have access. I agree that root can modify its credentials and access the file, but that is another story.&lt;/p&gt;

&lt;p&gt;The only real interest of this feature is to prevent root from making careless mistakes that would damage the content of the filesystem, but the behaviour of the feature should be consistent over time and not change with the client state. Currently the root_squash behaviour is confusing, and the request is simply to make it clean.&lt;/p&gt;
</comment>
                            <comment id="51954" author="pichong" created="Thu, 7 Feb 2013 05:45:05 +0000"  >&lt;p&gt;Excerpt from Andreas&apos;s comment in the gerrit:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;...&lt;br/&gt;
In summary, there is absolutely nothing to be gained except code complexity if the user already has root access on the client. This has to be enforced at the server, and at most root squash can only prevent the user from accessing files owned by root in the filesystem, or other root-only operations.&lt;/p&gt;

&lt;p&gt;Only with Kerberos and/or the upcoming UID/GID mapping could root be denied access to &lt;em&gt;new&lt;/em&gt; files from that client, and I can&apos;t think of any way that root could be denied access to cached files on the client. Even if the user&apos;s keys were only in memory and the kernel itself blocked access from root locally (in an irrevocable manner), the root user could replace the lustre kernel modules with an insecure version and reboot, and then wait until the user accessed secure data again.&lt;/p&gt;

&lt;p&gt;The only way to avoid this is to never allow root access on the client in the first place.&lt;/p&gt;&lt;/blockquote&gt;


&lt;p&gt;Andreas,&lt;/p&gt;

&lt;p&gt;The current implementation of the root squash feature in Lustre does not work as the customer expects, nor as specified in the &quot;Using Root Squash&quot; section of the Lustre Operations Manual.&lt;/p&gt;

&lt;p&gt;What do you propose to make progress on this issue?&lt;/p&gt;

&lt;p&gt;If you think this feature is pointless, then why not reduce its scope to secure configurations only (MDT sec-level), or even remove it completely?&lt;/p&gt;

&lt;p&gt;My feeling is that we should be able to make it work properly. We could perform the root squashing on the client by overwriting the fsuid and fsgid of the task with the root_squash uid:gid specified on the MDS. These settings could be transmitted to the client either at mount time or each time file attributes are retrieved from the MDS (LDLM_INTENT_OPEN or LDLM_INTENT_GETATTR RPCs, for instance). The patch I proposed last week is not suitable. OK, let&apos;s find a better solution.&lt;/p&gt;</comment>
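The fsuid/fsgid override proposed in the comment above maps onto the Linux `setfsuid(2)`/`setfsgid(2)` credential mechanism. The sketch below is illustrative only, not Lustre code: it calls the real `setfsuid` symbol in glibc via `ctypes` merely to show that, absent any squashing, a task's filesystem uid simply tracks its effective uid (the filename `libc.so.6` assumes a glibc system).

```python
# Illustrative sketch (not Lustre code): read the current fsuid via the real
# glibc setfsuid() and confirm it tracks the effective uid when no squash
# has been applied. Assumes a Linux system with glibc (libc.so.6).
import ctypes
import os

libc = ctypes.CDLL("libc.so.6", use_errno=True)

# setfsuid() returns the *previous* fsuid on success and failure alike;
# passing -1 (an invalid uid) is the idiom for reading it without changing it.
fsuid = libc.setfsuid(ctypes.c_int(-1))

# By default the filesystem uid follows the effective uid; a client-side
# root squash would be exactly the point where the two diverge for uid 0.
print("fsuid matches euid:", fsuid == os.geteuid())
```

The fsuid exists precisely for file servers that act on behalf of remote users, which is why it is the natural credential for a client-side squash to override.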
                            <comment id="52070" author="adilger" created="Fri, 8 Feb 2013 16:56:50 +0000"  >&lt;p&gt;Actually, the description in the user manual correctly describes how the code functions:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;http://build.whamcloud.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.xhtml#dbdoclet.50438221_64726&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.xhtml#dbdoclet.50438221_64726&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Root squash is a security feature which restricts super-user access rights to a Lustre file system. Without the root squash feature enabled, Lustre users on untrusted clients could access or modify files owned by root on the filesystem, including deleting them. Using the root squash feature restricts file access/modifications as the root user to only the specified clients.  Note, however, that &lt;em&gt;this does not prevent users on insecure clients from accessing files owned by other users&lt;/em&gt;.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Like I wrote in the Gerrit comment, there is nothing root squash can do to prevent access to files when someone has root access on the client.  In that case, the root user could &quot;&lt;tt&gt;su - other_user&lt;/tt&gt;&quot; and immediately circumvent all of the checking that was added to squash root&apos;s access.  The root squash feature only prevents &quot;root&quot; on clients from accessing and/or modifying files &lt;em&gt;owned by root&lt;/em&gt; on the filesystem.  The same &quot;&lt;tt&gt;su - other_user&lt;/tt&gt;&quot; hole is present for NFS, and the fact that &quot;root&quot; is denied direct access on NFS is like a sheet of paper protecting a bank vault.&lt;/p&gt;

&lt;p&gt;The OpenSFS UID/GID mapping and shared-key authentication features being developed by IU could allow for much more robust protection in the future.  This would allow mapping users from specific nodes to one set of UIDs that don&apos;t overlap with UIDs from other nodes, and with shared-key node authentication it would be impossible for even root to access files for UIDs that are not mapped to that cluster.  If you are interested in following this design and development, please email me and I will provide meeting and list details.&lt;/p&gt;</comment>
                            <comment id="52089" author="louveta" created="Sat, 9 Feb 2013 03:27:10 +0000"  >&lt;p&gt;Andreas, I think we are moving away from the objective of this ticket. I agree with all of the points about the security limitations of the root_squash feature, but that is not the problem here. The problem is that the manual says access is granted to the root user only for objects it is allowed to access, and this is not &lt;b&gt;always&lt;/b&gt; true.&lt;/p&gt;

&lt;p&gt;When root tries to get read access to an object whose inode is already in the client cache, root_squash is not applied. The client code has no knowledge of root_squash and applies only the traditional permission checks. The result is that root is granted or denied access depending on the content of the cache, which is very confusing for users. That inconsistency is the sole subject of this JIRA ticket.&lt;/p&gt;</comment>
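The cache-dependent behavior described above can be pictured with a short toy model. This is a sketch under stated assumptions, not Lustre code: the class names, the squash uid 99, and the dict-based cache are invented for illustration. The only point it demonstrates is that a permission check enforced solely on the server is silently skipped on a client cache hit.

```python
# Toy model (not Lustre code) of the inconsistency reported in this ticket:
# the squash check lives only on the server, so a root lookup served from
# the client's inode cache never reaches it.

class MDS:
    """Server side: the only place root_squash is enforced."""
    def __init__(self, squash_uid=99):          # squash target is illustrative
        self.squash_uid = squash_uid

    def getattr(self, inode, uid):
        if uid == 0:                            # squash root to the configured uid
            uid = self.squash_uid
        if uid not in inode["allowed"]:
            raise PermissionError("EACCES")
        return inode

class Client:
    """Client side: serves cached inodes without re-checking with the MDS."""
    def __init__(self, mds):
        self.mds = mds
        self.cache = {}

    def stat(self, name, inode, uid):
        if name in self.cache:                  # cache hit: no squash applied!
            return self.cache[name]
        attrs = self.mds.getattr(inode, uid)    # cache miss: MDS enforces squash
        self.cache[name] = attrs
        return attrs

client = Client(MDS())
private = {"allowed": {1000}}                   # file only uid 1000 may access

try:                                            # cold cache: root is squashed and denied
    client.stat("secret", private, uid=0)
except PermissionError:
    print("root denied (cold cache)")

client.stat("secret", private, uid=1000)        # uid 1000 populates the cache...
hit = client.stat("secret", private, uid=0)     # ...and root now slips through
print("root allowed via cache hit:", hit is private)
```

The fix that eventually landed makes the client aware of the squash settings, so both paths apply the same check.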
                            <comment id="53895" author="pichong" created="Wed, 13 Mar 2013 07:54:24 +0000"  >&lt;p&gt;I have posted a patch for master on gerrit: &lt;a href=&quot;http://review.whamcloud.com/#change,5700&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,5700&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="62971" author="pichong" created="Thu, 25 Jul 2013 14:39:28 +0000"  >&lt;p&gt;Tests on patch sets 7 and 8 made the client hang after conf-sanity test_43 (the root squash test). I was able to reproduce the hang (after 16 successful runs) and took a dump.&lt;/p&gt;

&lt;p&gt;It is available on ftp.whamcloud.com in /uploads/&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1778&quot; title=&quot;Root Squash is not always properly enforced&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1778&quot;&gt;&lt;del&gt;LU-1778&lt;/del&gt;&lt;/a&gt;&lt;br/&gt;
ftp&amp;gt; dir&lt;br/&gt;
227 Entering Passive Mode (72,18,218,227,205,178).&lt;br/&gt;
150 Here comes the directory listing.&lt;br/&gt;
-rw-r--r--    1 123      114       3387608 Jul 25 07:22 lustre-2.4.51-2.6.32_358.el6.x86_64_g4c66dbd.x86_64.rpm&lt;br/&gt;
-rw-r--r--    1 123      114      45824206 Jul 25 07:23 lustre-debuginfo-2.4.51-2.6.32_358.el6.x86_64_g4c66dbd.x86_64.rpm&lt;br/&gt;
-rw-r--r--    1 123      114        181316 Jul 25 07:23 lustre-ldiskfs-4.1.0-2.6.32_358.el6.x86_64_g4c66dbd.x86_64.rpm&lt;br/&gt;
-rw-r--r--    1 123      114       1674715 Jul 25 07:23 lustre-ldiskfs-debuginfo-4.1.0-2.6.32_358.el6.x86_64_g4c66dbd.x86_64.rpm&lt;br/&gt;
-rw-r--r--    1 123      114       3312152 Jul 25 07:23 lustre-modules-2.4.51-2.6.32_358.el6.x86_64_g4c66dbd.x86_64.rpm&lt;br/&gt;
-rw-r--r--    1 123      114        165060 Jul 25 07:24 lustre-osd-ldiskfs-2.4.51-2.6.32_358.el6.x86_64_g4c66dbd.x86_64.rpm&lt;br/&gt;
-rw-r--r--    1 123      114       5067172 Jul 25 07:24 lustre-source-2.4.51-2.6.32_358.el6.x86_64_g4c66dbd.x86_64.rpm&lt;br/&gt;
-rw-r--r--    1 123      114       4757320 Jul 25 07:24 lustre-tests-2.4.51-2.6.32_358.el6.x86_64_g4c66dbd.x86_64.rpm&lt;br/&gt;
-rw-r--r--    1 123      114      100181834 Jul 25 07:27 vmcore&lt;/p&gt;

&lt;p&gt;Here is the information I extracted from the dump.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;
The umount command appears to be hung. The upper frames of the stack are due to the dump signal.

crash&amp;gt; bt 2723
PID: 2723   TASK: ffff88046ab98040  CPU: 4   COMMAND: &quot;umount&quot;
 #0 [ffff880028307e90] crash_nmi_callback at ffffffff8102d2c6
 #1 [ffff880028307ea0] notifier_call_chain at ffffffff815131d5
 #2 [ffff880028307ee0] atomic_notifier_call_chain at ffffffff8151323a
 #3 [ffff880028307ef0] notify_die at ffffffff8109cbfe
 #4 [ffff880028307f20] do_nmi at ffffffff81510e9b
 #5 [ffff880028307f50] nmi at ffffffff81510760
    [exception RIP: page_fault]
    RIP: ffffffff815104b0  RSP: ffff880472c13bc0  RFLAGS: 00000082
    RAX: ffffc9001dd57008  RBX: ffff880470b27e40  RCX: 000000000000000f
    RDX: ffffc9001dd1d000  RSI: ffff880472c13c08  RDI: ffff880470b27e40
    RBP: ffff880472c13c48   R8: 0000000000000000   R9: 00000000fffffffe
    R10: 0000000000000001  R11: 5a5a5a5a5a5a5a5a  R12: ffff880472c13c08 = struct cl_site *
    R13: 00000000000000c4  R14: 0000000000000000  R15: 0000000000000000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
--- &amp;lt;NMI exception stack&amp;gt; ---
 #6 [ffff880472c13bc0] page_fault at ffffffff815104b0
 #7 [ffff880472c13bc8] cfs_hash_putref at ffffffffa04305c1 [libcfs]
 #8 [ffff880472c13c50] lu_site_fini at ffffffffa0588841 [obdclass]
 #9 [ffff880472c13c70] cl_site_fini at ffffffffa0591d0e [obdclass]
#10 [ffff880472c13c80] ccc_device_free at ffffffffa0e6c16a [lustre]
#11 [ffff880472c13cb0] lu_stack_fini at ffffffffa058b22e [obdclass]
#12 [ffff880472c13cf0] cl_stack_fini at ffffffffa059132e [obdclass]
#13 [ffff880472c13d00] cl_sb_fini at ffffffffa0e703bd [lustre]
#14 [ffff880472c13d40] client_common_put_super at ffffffffa0e353d4 [lustre]
#15 [ffff880472c13d70] ll_put_super at ffffffffa0e35ef9 [lustre]
#16 [ffff880472c13e30] generic_shutdown_super at ffffffff8118326b
#17 [ffff880472c13e50] kill_anon_super at ffffffff81183356
#18 [ffff880472c13e70] lustre_kill_super at ffffffffa057d37a [obdclass]
#19 [ffff880472c13e90] deactivate_super at ffffffff81183af7
#20 [ffff880472c13eb0] mntput_no_expire at ffffffff811a1b6f
#21 [ffff880472c13ee0] sys_umount at ffffffff811a25db
#22 [ffff880472c13f80] system_call_fastpath at ffffffff8100b072
    RIP: 00007f0e6a971717  RSP: 00007fff17919878  RFLAGS: 00010206
    RAX: 00000000000000a6  RBX: ffffffff8100b072  RCX: 0000000000000010
    RDX: 0000000000000000  RSI: 0000000000000000  RDI: 00007f0e6c3cfb90
    RBP: 00007f0e6c3cfb70   R8: 00007f0e6c3cfbb0   R9: 0000000000000000
    R10: 00007fff179196a0  R11: 0000000000000246  R12: 0000000000000000
    R13: 0000000000000000  R14: 0000000000000000  R15: 00007f0e6c3cfbf0
    ORIG_RAX: 00000000000000a6  CS: 0033  SS: 002b

ccc_device_free() is called on lu_device 0xffff880475ad06c0

crash&amp;gt; struct lu_device ffff880475ad06c0
struct lu_device {
  ld_ref = {
    counter = 1
  }, 
  ld_type = 0xffffffffa0ea22e0, 
  ld_ops = 0xffffffffa0e787a0, 
  ld_site = 0xffff880472cf05c0, 
  ld_proc_entry = 0x0, 
  ld_obd = 0x0, 
  ld_reference = {&amp;lt;No data fields&amp;gt;}, 
  ld_linkage = {
    next = 0xffff880472cf05f0, 
    prev = 0xffff880472cf05f0
  }
}

ld_type-&amp;gt;ldt_tags
crash&amp;gt; rd -8 ffffffffa0ea22e0
ffffffffa0ea22e0:  04 = LU_DEVICE_CL

ld_type-&amp;gt;ldt_name
crash&amp;gt; rd  ffffffffa0ea22e8
ffffffffa0ea22e8:  ffffffffa0e7d09f = &quot;vvp&quot;


lu_site=ffff880472cf05c0
crash&amp;gt; struct lu_site ffff880472cf05c0
struct lu_site {
  ls_obj_hash = 0xffff880470b27e40, 
  ls_purge_start = 0, 
  ls_top_dev = 0xffff880475ad06c0, 
  ls_bottom_dev = 0x0, 
  ls_linkage = {
    next = 0xffff880472cf05e0, 
    prev = 0xffff880472cf05e0
  }, 
  ls_ld_linkage = {
    next = 0xffff880475ad06f0, 
    prev = 0xffff880475ad06f0
  }, 
  ls_ld_lock = {
    raw_lock = {
      slock = 65537
    }
  }, 
  ls_stats = 0xffff880470b279c0, 
  ld_seq_site = 0x0
}

crash&amp;gt; struct cfs_hash 0xffff880470b27e40
struct cfs_hash {
  hs_lock = {
    rw = {
      raw_lock = {
        lock = 0
      }
    }, 
    spin = {
      raw_lock = {
        slock = 0
      }
    }
  }, 
  hs_ops = 0xffffffffa05edee0, 
  hs_lops = 0xffffffffa044e320, 
  hs_hops = 0xffffffffa044e400,
  hs_buckets = 0xffff880471e4f000, 
  hs_count = {
    counter = 0
  }, 
  hs_flags = 6184, = 0x1828 = CFS_HASH_SPIN_BKTLOCK | CFS_HASH_NO_ITEMREF | CFS_HASH_ASSERT_EMPTY | CFS_HASH_DEPTH 
  hs_extra_bytes = 48, 
  hs_iterating = 0 &apos;\000&apos;, 
  hs_exiting = 1 &apos;\001&apos;, 
  hs_cur_bits = 23 &apos;\027&apos;, 
  hs_min_bits = 23 &apos;\027&apos;, 
  hs_max_bits = 23 &apos;\027&apos;, 
  hs_rehash_bits = 0 &apos;\000&apos;, 
  hs_bkt_bits = 15 &apos;\017&apos;, 
  hs_min_theta = 0, 
  hs_max_theta = 0, 
  hs_rehash_count = 0, 
  hs_iterators = 0, 
  hs_rehash_wi = {
    wi_list = {
      next = 0xffff880470b27e88, 
      prev = 0xffff880470b27e88
    }, 
    wi_action = 0xffffffffa04310f0 &amp;lt;cfs_hash_rehash_worker&amp;gt;, 
    wi_data = 0xffff880470b27e40, 
    wi_running = 0, 
    wi_scheduled = 0
  }, 
  hs_refcount = {
    counter = 0
  }, 
  hs_rehash_buckets = 0x0, 
  hs_name = 0xffff880470b27ec0 &quot;lu_site_vvp&quot;
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I am going to attach the log of the Maloo test that hung (Jul 19 10:12 PM).&lt;/p&gt;</comment>
                            <comment id="62972" author="pichong" created="Thu, 25 Jul 2013 14:41:14 +0000"  >&lt;p&gt;client log from Maloo test on patchset 7 (Jul 19 10:12 PM)&lt;/p&gt;</comment>
                            <comment id="72801" author="pichong" created="Wed, 4 Dec 2013 14:32:47 +0000"  >&lt;p&gt;I have posted another patch that adds a service to print a nidlist: &lt;a href=&quot;http://review.whamcloud.com/#/c/8479/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/8479/&lt;/a&gt;. After the review of patch set 11 of patch #5700, it appears to be a requirement.&lt;/p&gt;</comment>
                            <comment id="73508" author="cliffw" created="Fri, 13 Dec 2013 20:19:33 +0000"  >&lt;p&gt;Thank you. Would it be possible for you to rebase this on current master? There are a few conflicts preventing merge. &lt;/p&gt;</comment>
                            <comment id="76722" author="pichong" created="Tue, 11 Feb 2014 15:02:28 +0000"  >&lt;p&gt;Patch #8479 was landed and then reverted due to a conflict with the GNIIPLND patch.&lt;/p&gt;

&lt;p&gt;I have posted a new version of the patch: &lt;a href=&quot;http://review.whamcloud.com/9221&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/9221&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="82170" author="jlevi" created="Tue, 22 Apr 2014 17:39:22 +0000"  >&lt;p&gt;Patch landed to Master&lt;/p&gt;</comment>
                            <comment id="82243" author="pichong" created="Wed, 23 Apr 2014 07:29:36 +0000"  >&lt;p&gt;This ticket has not been fixed yet.&lt;br/&gt;
The main patch &lt;a href=&quot;http://review.whamcloud.com/#change,5700&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,5700&lt;/a&gt; is still in progress.&lt;/p&gt;</comment>
                            <comment id="83634" author="pjones" created="Fri, 9 May 2014 15:05:47 +0000"  >&lt;p&gt;Now really landed for 2.6. &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/smile.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;</comment>
                            <comment id="86902" author="pichong" created="Wed, 18 Jun 2014 10:40:11 +0000"  >&lt;p&gt;I have backported the two patches for integration into the 2.5 maintenance release.&lt;br/&gt;
&lt;a href=&quot;http://review.whamcloud.com/10743&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/10743&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;http://review.whamcloud.com/10744&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/10744&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="92917" author="pichong" created="Mon, 1 Sep 2014 13:50:59 +0000"  >&lt;p&gt;The two patches above, #10743 and #10744, have been posted and ready for review since the end of June.&lt;br/&gt;
Would it be possible to include them in the next 2.5 maintenance release, 2.5.3?&lt;/p&gt;</comment>
                            <comment id="100262" author="gerrit" created="Mon, 1 Dec 2014 04:15:56 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/10743/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/10743/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1778&quot; title=&quot;Root Squash is not always properly enforced&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1778&quot;&gt;&lt;del&gt;LU-1778&lt;/del&gt;&lt;/a&gt; libcfs: add a service that prints a nidlist&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_5&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 57a8a6bec4dc965388b5bba48e7501f79bdab44b&lt;/p&gt;</comment>
                            <comment id="100263" author="gerrit" created="Mon, 1 Dec 2014 04:16:01 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/10744/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/10744/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1778&quot; title=&quot;Root Squash is not always properly enforced&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1778&quot;&gt;&lt;del&gt;LU-1778&lt;/del&gt;&lt;/a&gt; llite: fix inconsistencies of root squash feature&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_5&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: d82b4f54cbbe269519330e88639dd8e197636496&lt;/p&gt;</comment>
                            <comment id="162974" author="pichong" created="Wed, 24 Aug 2016 07:22:46 +0000"  >&lt;p&gt;Closing, as the issue was fixed (several months ago) in master and in the 2.5 maintenance release.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10120">
                    <name>Blocker</name>
                                                                <inwardlinks description="is blocked by">
                                                        </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="25017">LU-5142</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="31445">LU-6990</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="13246" name="conf-sanity.test_43.console.wtm-14vm2.log" size="206352" author="pichong" created="Thu, 25 Jul 2013 14:41:14 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvskf:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>8532</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>