
LU-14121: EACCES Permission denied for slurm log files when using nodemap with admin=0

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Fix Version/s: Lustre 2.14.0, Lustre 2.12.7
    • Affects Version/s: Lustre 2.12.5
    • Severity: 2

    Description

      Hello,
      We have recently deployed new filesystems running 2.12.5 and our users are hitting a problem with Slurm jobs which write the slurm log files onto the new filesystems.

      A bit of context on our environment:

      Our main HPC cluster is mounting a subdirectory mount of the wider filesystem via a nodemap. The nodemap settings are as follows:

      [root@rds-mds9 ~]# lctl get_param nodemap.csd3.*
      nodemap.csd3.admin_nodemap=0
      nodemap.csd3.audit_mode=1
      nodemap.csd3.deny_unknown=0
      nodemap.csd3.exports=
      [
      ...
      ]
      nodemap.csd3.fileset=/tenant/csd3
      nodemap.csd3.id=2
      nodemap.csd3.idmap=[
      
      ]
      nodemap.csd3.map_mode=both
      nodemap.csd3.ranges=
      [
      ...
      ]
      nodemap.csd3.sepol=
      
      nodemap.csd3.squash_gid=99
      nodemap.csd3.squash_uid=99
      nodemap.csd3.trusted_nodemap=1
      

      Of special note, we are setting the 'trusted' property to '1' as we do not do any id-mapping on the main cluster, whose nodes are all bound into our central LDAP domain. We are also setting 'admin' to '0', which I understand to be equivalent to enabling root_squash across all nodes in the nodemap.
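
      For context, a nodemap like this is normally created on the MGS with the standard lctl nodemap_* commands. The sketch below is illustrative rather than our exact provisioning steps, and the NID range is a placeholder:

      # on the MGS: create the nodemap and declare which client NIDs it covers
      lctl nodemap_add csd3
      lctl nodemap_add_range --name csd3 --range 10.44.0.[1-255]@o2ib
      # confine the nodemap to a sub-tree of the filesystem (subdirectory mount)
      lctl nodemap_set_fileset --name csd3 --fileset /tenant/csd3
      # trust client-supplied IDs, but squash root (admin=0)
      lctl nodemap_modify --name csd3 --property trusted --value 1
      lctl nodemap_modify --name csd3 --property admin --value 0
      lctl nodemap_modify --name csd3 --property squash_uid --value 99
      lctl nodemap_modify --name csd3 --property squash_gid --value 99
      lctl nodemap_activate 1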

      Consequently we aren't setting the usual MDT parameters to configure root_squash, relying entirely on the nodemap for this feature:

      [root@rds-mds9 ~]# lctl get_param mdt.*.root_squash
      mdt.rds-d7-MDT0000.root_squash=0:0
      mdt.rds-d7-MDT0002.root_squash=0:0
      [root@rds-mds9 ~]# lctl get_param mdt.*.nosquash_nids
      mdt.rds-d7-MDT0000.nosquash_nids=NONE
      mdt.rds-d7-MDT0002.nosquash_nids=NONE
      

      The problem we are seeing is that when a user launches a Slurm job that writes its output log file to a directory on this filesystem, the job will sometimes fail to create the log file and abort.

      I have included the full strace capture from the slurmd daemon when it hits this error, along with a simple slurm job file showing the job definition:

      • slurm.FAIL.strace
      example job script
      #!/bin/bash
      #SBATCH -A SUPPORT-CPU
      #SBATCH --output=/rds-d7/project/mjr208/testdir_groups/slurm.out.5
      #SBATCH --nodes=1
      #SBATCH --ntasks=1
      #SBATCH --time=1:00
      
      echo "STARTING JOB"
      pwd
      exit 0
      

      The key portion of the strace is here:

      $ grep 72268 slurm.FAIL.strace | grep -v -E '(futex|tgkill|TKILL)'
      ...
      72268 20:44:49.224728 geteuid()         = 0 <0.000005>                                                                                                                                                              
      72268 20:44:49.224747 umask(002)        = 022 <0.000005>                                                                                                                                                            
      72268 20:44:49.224764 getuid()          = 0 <0.000004>                             
      72268 20:44:49.224780 getgid()          = 0 <0.000005>                                                                                                                                                              
      72268 20:44:49.224796 getcwd("/var/spool/slurm/slurmd", 4096) = 24 <0.000005>
      72268 20:44:49.224814 getgroups(0, NULL) = 51 <0.000005>                                                                                                                                                            
      72268 20:44:49.224831 getgroups(51, [815, 901, 902, 904, 905, 910, 1099, 1100, 1101, 8053, 8054, 10573, 14500, 14998, 17501, 41000, 41029, 41037, 41357, 42000, 42006, 42042, 42080, 42100, 42112, 42113, 43000, 43001, 43003, 43005, 43007, 43009, ...]) = 51 <0.000005>
      72268 20:44:49.224870 getuid()          = 0 <0.000005>                           
      72268 20:44:49.225163 setresgid(-1, 10573, -1 <unfinished ...>                                                                                                                                                      
      72268 20:44:49.225175 <... setresgid resumed>) = 0 <0.000008>                                                                                                                                                       
      72268 20:44:49.225574 setgroups(51, [10573, 50045, 80001, 80004, 80007, 80016, 80019, 43003, 43001, 43005, 43009, 43007, 41357, 80017, 43010, 42042, 42006, 41029, 42080, 41037, 17501, 901, 8054, 905, 14998, 904, 14500, 90022, 815, 902, 1100, 8053, ...] <unfinished ...>
      72268 20:44:49.225608 <... setgroups resumed>) = 0 <0.000008>                                                                                                                                                       
      72268 20:44:49.225895 setresuid(-1, 10573, -1 <unfinished ...>                                                                                                                                                      
      72268 20:44:49.225907 <... setresuid resumed>) = 0 <0.000008>                                      
      72268 20:44:49.225919 open("/dev/null", O_RDONLY) = 10 <0.000006>
      72268 20:44:49.225938 fcntl(10, F_SETFD, FD_CLOEXEC) = 0 <0.000005>                                                                                                                                                 
      72268 20:44:49.225955 open("/rds-d7/project/mjr208/testdir_groups/slurm.out.5", O_WRONLY|O_CREAT|O_TRUNC|O_APPEND, 0666) = -1 EACCES (Permission denied) <0.000785>
      

      So this is the slurmstepd process, forked as root by slurmd when starting the job; it sets its effective UID and GID to those of the job's user, sets its supplementary groups to the user's groups, and then attempts to open the Slurm output file, at which point it fails.
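
      For reference, a capture in the format above (per-PID prefix, wall-clock timestamps and per-call durations) would typically be produced with something along the lines below; attaching to the running slurmd PID is an assumption on my part rather than a record of the exact command used:

      # follow forks (-f), print timestamps (-tt) and syscall durations (-T),
      # writing everything to one file that can later be filtered by child PID
      strace -f -tt -T -o slurm.FAIL.strace -p $(pidof slurmd)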

      The permissions on this directory clearly make it writable by the user:

      [mjr208@cpu-p-129]:~/ $ ls -ld /rds-d7/project/mjr208/testdir_groups
      drwxrws--- 2 mjr208 rds-kwtZ8ccHIQg-managers 4.0K Nov  5 20:30 /rds-d7/project/mjr208/testdir_groups
      

      I have also captured debug traces from the MDS when this happens, which are attached in the file:

      • slurm.FAIL.mds.llog
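
      For anyone trying to reproduce this, the MDS traces here were taken with only the 'sec' debug mask enabled. A minimal way to take such a capture is sketched below; the output file name is illustrative, and a long-running capture can use lctl debug_daemon instead:

      # on the MDS: restrict debugging to the security mask and clear the buffer
      lctl set_param debug=sec
      lctl clear
      lctl mark "STARTING TEST"
      # ... reproduce the failure from the client ...
      # dump the kernel debug buffer to a text file
      lctl dk > /tmp/mds.sec.llog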

      I have noticed in this file the following line which I think might be significant:

      00000004:08000000:35.0:1604606961.472081:0:150250:0:(mdd_permission.c:315:__mdd_permission_internal()) permission denied, mode 45f8, fsuid 99, uid 0
      

      This made me suspect that the nodemap's root-squash behaviour could be to blame here, and sure enough, if I set the 'admin' property to 1 and re-run the Slurm job, it runs successfully without error.
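
      For reference, toggling that property is done on the MGS with lctl nodemap_modify; a minimal sketch is below (the change takes a short while to propagate to the server-side nodemap caches):

      # disable root squash for this nodemap while testing...
      lctl nodemap_modify --name csd3 --property admin --value 1
      # ...and revert to the squashed configuration afterwards
      lctl nodemap_modify --name csd3 --property admin --value 0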

      To test this I added an extra debug print to the __mdd_permission_internal() function, to print the values of fsuid and uid before it does the permission check:

      --- a/lustre/mdd/mdd_permission.c
      +++ b/lustre/mdd/mdd_permission.c
      @@ -276,6 +276,8 @@ int __mdd_permission_internal(const struct lu_env *env, struct mdd_object *obj,
              LASSERT(la != NULL);
      
              mode = la->la_mode;
      +       CDEBUG(D_SEC, "Checking access: mode %x, fsuid %u, uid %u\n",
      +              la->la_mode, uc->uc_fsuid, la->la_uid);
              if (uc->uc_fsuid == la->la_uid) {
                      mode >>= 6;
              } else {
      

      and I captured another debug trace from the MDS as before, which is attached in:

      • slurm.no-root-squash.SUCCESS.mds.llog

      The debug line I added above shows:

      00000004:08000000:33.0:1604607230.414011:0:150250:0:(mdd_permission.c:280:__mdd_permission_internal()) Checking access: mode 45f8, fsuid 10573, uid 0
      

      So it appears that the 'admin' property on this nodemap is having an effect here that causes problems for Slurm. Are we using this wrong, or is this possibly a bug?

      To help isolate this further, I've written a small reproducer C program that copies the sequence of system calls that slurmstepd makes before attempting to open the file. Also attached are the strace output and debug traces from both the client and the MDS for this program, with and without the nodemap 'admin' property set:

      • rds-d7-eacces.c
      • reproducer.FAIL.client.llog
      • reproducer.FAIL.client.strace
      • reproducer.FAIL.mds.llog
      • reproducer.no-root-squash.SUCCESS.client.llog
      • reproducer.no-root-squash.SUCCESS.mds.llog

      At the moment we are having to ask our users to redirect their Slurm output files to another filesystem, as we don't see this behaviour on our (much) older filesystems. We may also just enable the 'admin' property for now, but I would really like to get to the bottom of why this is failing.

      Cheers,
      Matt

      Attachments

        1. rds-d7-eacces.c
          4 kB
        2. reproducer.FAIL.client.llog
          103 kB
        3. reproducer.FAIL.client.strace
          139 kB
        4. reproducer.FAIL.mds.llog
          13.65 MB
        5. reproducer.no-root-squash.SUCCESS.client.llog
          1.55 MB
        6. reproducer.no-root-squash.SUCCESS.mds.llog
          20.93 MB
        7. slurm.FAIL.mds.llog
          18.13 MB
        8. slurm.FAIL.strace
          17.85 MB
        9. slurm.no-root-squash.SUCCESS.mds.llog
          37.69 MB


          Activity

            [LU-14121] EACCES Permission denied for slurm log files when using nodemap with admin=0

            mrb Matt Rásó-Barnett (Inactive) added a comment -

            Thanks for the update Sebastien.

            I will do as you suggest and use the traditional root_squash mechanism. However, it would be good if it were possible to change this to use the effective uid in future - we are moving to a 'multi-tenant' model with Lustre, using nodemaps for all sub-clusters, and they are a very convenient administrative grouping, so it would be nice to be able to use them to enable root squash in future.

            Thanks again,
            Matt


            sebastien Sebastien Buisson added a comment -

            Hi,

            After more investigation, it turns out the problem stems from the way the nodemap feature interprets the squash concept. In the current implementation, if the real uid is squashed, then the fsuid is squashed as well, regardless of the value of the effective uid. Maybe this squashing is a little too strict and should take the effective uid into account when making a decision, but the behaviour needs to be changed with caution...

            In your case, the fsuid is changed from 10573 to 99 on the MDS side, which prevents access to the testdir_groups directory. When you run ls as user mjr208, no effective uid is involved, and this user is not squashed, so it "loads" the information about the testdir_groups directory into the client's cache. This is why the subsequent access by the reproducer succeeds.

            I will try to see if it is possible to change the way squashing is currently implemented in nodemap. Meanwhile, what you can do is set the admin property to 1 on your nodemap and use the legacy root_squash mechanism to prevent root access. I checked on my local test system, and it works as expected: the reproducer runs fine, and root cannot access non-world-readable/writable files and directories.

            Cheers,
            Sebastien.
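
            For reference, the legacy root_squash mechanism suggested above is configured on the MGS roughly as follows. This is a sketch that reuses the rds-d7 filesystem and csd3 nodemap names from this ticket; the squash IDs and the exempt NID are illustrative:

            # squash root to uid/gid 99 on the MDTs of filesystem rds-d7
            lctl conf_param rds-d7.mdt.root_squash=99:99
            # optionally exempt trusted admin nodes from squashing
            lctl conf_param rds-d7.mdt.nosquash_nids="10.44.0.1@o2ib"
            # and allow root back through the nodemap (admin=1), as suggested above
            lctl nodemap_modify --name csd3 --property admin --value 1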


            mrb Matt Rásó-Barnett (Inactive) added a comment -

            Sure, for the moment we are just asking users to redirect their slurm output files to their home directories as a workaround, which is good enough for now. We may also temporarily set the 'admin' property to '1' if it becomes a big hassle.

            Cheers,

            Matt


            sebastien Sebastien Buisson added a comment -

            In my case, I cannot get the reproducer to pass, even on the first iteration, unless I do ls testdir_groups/.. beforehand.

            Maybe you could use this workaround in Slurm until I am able to provide you with a fix?


            mrb Matt Rásó-Barnett (Inactive) added a comment -

            Ok that's good news that it's reproducible - I was starting to get worried that we had something weird going on at our end!

            Indeed, I was checking the intermediate directory just now and noticed that creating the output file at that level didn't hit the problem at all, only at the level below.

            Anyway, thanks for your time looking at this,
            Matt


            sebastien Sebastien Buisson added a comment -

            Hi Matt,

            Thanks for all this additional information, and for re-running tests on your side with the debug traces.

            I created a tree hierarchy similar to yours, with the same permissions and mode bits, as I was initially missing an intermediate directory in the path.

            "Good news" is that I can now reproduce your issue! The first iteration works fine, but all subsequent ones hit the problem, no matter we specify the same file or a different one to the reproducer, as long as it is in the same directory.
            I also confirm that doing a simple ls in the parent directory of testdir_groups "resets" the behaviour for just one run of the reproducer, and then it is back to failure.

            It definitely points to something that would get cached somewhere... but I have no clue yet. I need to investigate further now that I can reproduce.

            Thanks,
            Sebastien.


            mrb Matt Rásó-Barnett (Inactive) added a comment -

            I've re-run the above with the additional CDEBUG line you suggested added to the code. There are three log captures:

            1) From shell as user mjr208 on the same client, running:

            [mjr208@cpu-p-231]:mjr208/ $ ls -ltr
            total 12K
            drwxrws--- 2 mjr208 mjr208                   4.0K Nov  4 19:05 testdir
            drwxrws--- 2 mjr208 rds-kwtZ8ccHIQg-managers 4.0K Nov  4 19:09 testdir_acls
            drwxrws--- 2 mjr208 rds-kwtZ8ccHIQg-managers 4.0K Nov  9 13:44 testdir_groups
            
            MDS logs
            [root@rds-mds9 ~]# cat mds.rds-d7.eacess.llog.daemon.20201109-144336
            00000001:02000400:3.0F:1604933000.317642:0:5706:0:(debug.c:510:libcfs_debug_mark_buffer()) DEBUG MARKER: STARTING TEST
            00000004:08000000:1.0F:1604933011.669748:0:286340:0:(mdd_permission.c:280:__mdd_permission_internal()) Checking access: mode 45f8, fsuid 10573, uid 0
            00000004:08000000:1.0:1604933011.670507:0:286340:0:(mdd_dir.c:94:__mdd_lookup()) calling mdd_permission_internal_locked for testdir_groups
            00000004:08000000:1.0:1604933011.670508:0:286340:0:(mdd_permission.c:280:__mdd_permission_internal()) Checking access: mode 45f8, fsuid 10573, uid 0
            Debug log: 4 lines, 4 kept, 0 dropped, 0 bad.
            

            2) From shell as user root on the same client, running the reproducer - no error:

            [root@cpu-p-231 ~]# ./rds-d7-eacces mjr208 /rds-d7/project/mjr208/testdir_groups/slurm.out.7
            [INFO] Real UID: 0 | Effective UID: 0
            [INFO] Real GID: 0 | Effective GID: 0
            [INFO] Username of ruid: root
            [INFO] Number of groups: '2'
                 0 - root,    988 - sfcb, 
            [INFO] Looking up user: mjr208
            [INFO] 'mjr208' uid: 10573, gid: 10573
            [INFO] Setting egid to: 10573
            [INFO] Real UID: 0 | Effective UID: 0
            [INFO] Real GID: 0 | Effective GID: 10573
            [INFO] Number of groups for user: '51'
             ... 
            [INFO] Setting groups of calling process to groups of mjr208
            [INFO] Number of groups for calling process: '51'
            [INFO] Printing process groups
             ...
            [INFO] set euid to : 10573
            [INFO] Real UID: 0 | Effective UID: 10573
            [INFO] Real GID: 0 | Effective GID: 10573
            [INFO] open rds file: /rds-d7/project/mjr208/testdir_groups/slurm.out.7
            [INFO] rds directory fd: 7
            
            MDS logs
            [root@rds-mds9 ~]# cat mds.rds-d7.eacess.llog.daemon.20201109-144344
            00000001:02000400:3.0F:1604933016.182371:0:5860:0:(debug.c:510:libcfs_debug_mark_buffer()) DEBUG MARKER: STARTING TEST
            00000004:08000000:1.0F:1604933021.006260:0:285135:0:(mdd_dir.c:94:__mdd_lookup()) calling mdd_permission_internal_locked for slurm.out.7
            00000004:08000000:1.0:1604933021.006262:0:285135:0:(mdd_permission.c:280:__mdd_permission_internal()) Checking access: mode 45f8, fsuid 10573, uid 10573
            00000004:08000000:35.0F:1604933021.006391:0:285135:0:(mdd_permission.c:280:__mdd_permission_internal()) Checking access: mode 81a4, fsuid 10573, uid 10573
            Debug log: 4 lines, 4 kept, 0 dropped, 0 bad.
            

            3) From shell as user root on the same client, re-running the reproducer - error:

            [root@cpu-p-231 ~]# ./rds-d7-eacces mjr208 /rds-d7/project/mjr208/testdir_groups/slurm.out.7
            [INFO] Real UID: 0 | Effective UID: 0
            [INFO] Real GID: 0 | Effective GID: 0
            [INFO] Username of ruid: root
            [INFO] Number of groups: '2'
                 0 - root,    988 - sfcb, 
            [INFO] Looking up user: mjr208
            [INFO] 'mjr208' uid: 10573, gid: 10573
            [INFO] Setting egid to: 10573
            [INFO] Real UID: 0 | Effective UID: 0
            [INFO] Real GID: 0 | Effective GID: 10573
            [INFO] Number of groups for user: '51'
             ...
            [INFO] Setting groups of calling process to groups of mjr208
            [INFO] Number of groups for calling process: '51'
            [INFO] Printing process groups
             ...
            [INFO] set euid to : 10573
            [INFO] Real UID: 0 | Effective UID: 10573
            [INFO] Real GID: 0 | Effective GID: 10573
            [INFO] open rds file: /rds-d7/project/mjr208/testdir_groups/slurm.out.7
            open error : Permission denied
            
            MDS logs
            [root@rds-mds9 ~]# cat mds.rds-d7.eacess.llog.daemon.20201109-144350
            00000001:02000400:3.0F:1604933024.533435:0:5905:0:(debug.c:510:libcfs_debug_mark_buffer()) DEBUG MARKER: STARTING TEST
            00000004:08000000:35.0F:1604933026.143219:0:285136:0:(mdd_dir.c:94:__mdd_lookup()) calling mdd_permission_internal_locked for testdir_groups
            00000004:08000000:35.0:1604933026.143221:0:285136:0:(mdd_permission.c:280:__mdd_permission_internal()) Checking access: mode 45f8, fsuid 99, uid 0
            00000004:08000000:35.0:1604933026.143225:0:285136:0:(mdd_permission.c:315:__mdd_permission_internal()) permission denied, mode 45f8, fsuid 99, uid 0
            Debug log: 4 lines, 4 kept, 0 dropped, 0 bad.
            

            mrb Matt Rásó-Barnett (Inactive) added a comment -

            Just a further comment on the above: I have split the MDS logs from the no-error case into two captures:
            1) what is produced when I 'cd' to the directory as my user, and run 'ls -ltr'

            [root@rds-mds9 ~]# cat mds.rds-d7.eacess.llog.daemon.20201109-141610
            00000001:02000400:2.0F:1604931356.286982:0:206961:0:(debug.c:510:libcfs_debug_mark_buffer()) DEBUG MARKER: STARTING TEST
            00000004:08000000:49.0F:1604931366.341767:0:210612:0:(mdd_permission.c:280:__mdd_permission_internal()) Checking access: mode 45f8, fsuid 10573, uid 0
            00000004:08000000:49.0:1604931366.342936:0:210612:0:(mdd_permission.c:280:__mdd_permission_internal()) Checking access: mode 45f8, fsuid 10573, uid 0
            Debug log: 3 lines, 3 kept, 0 dropped, 0 bad.
            

            2) when I run the reproducer program

            [root@rds-mds9 ~]# cat mds.rds-d7.eacess.llog.daemon.20201109-141627
            00000001:02000400:2.0F:1604931370.254125:0:207029:0:(debug.c:510:libcfs_debug_mark_buffer()) DEBUG MARKER: STARTING TEST
            Debug log: 1 lines, 1 kept, 0 dropped, 0 bad.
            

            So it seems the reproducer isn't actually triggering that code path? Is it possible that there is a cached lookup of some access permissions on the directory in question?
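
            As an aside, one way to test the cached-permission theory is to drop the client's LDLM locks between runs and see whether the next run goes back to the MDS. This is only a sketch, and a fairly blunt instrument, since it flushes every lock namespace on the client:

            # on the client: drop all cached LDLM locks, then re-run the reproducer
            lctl set_param ldlm.namespaces.*.lru_size=clear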


            mrb Matt Rásó-Barnett (Inactive) added a comment -

            Hi Sebastien, thanks for looking into this for me.

            Here is some of the information you requested.

            The directory I am writing to has the following permissions; I also include its parent directory, as that is the first directory in our hierarchy that is not world-readable (755):

            # Directory being written to
            [mjr208@cpu-p-231]:~/ $ ls -ld /rds-d7/project/mjr208/testdir_groups
            drwxrws--- 2 mjr208 rds-kwtZ8ccHIQg-managers 4.0K Nov  9 13:13 /rds-d7/project/mjr208/testdir_groups
            
            # Parent of this directory
            [mjr208@cpu-p-231]:~/ $ ls -ld /rds-d7/project/mjr208
            drwxrws--- 5 nobody mjr208 4.0K Nov  4 19:13 /rds-d7/project/mjr208
            

            I wonder whether the 'fsuid 99, uid 0' seen in the debug line I added in __mdd_permission_internal() could be from this directory instead?

            I'm just recompiling with the added debug line and will re-run with the extra information as you've suggested, and will report back shortly. You are running the reproducer correctly; however, one thing that I didn't mention before is that I have had inconsistent results with it.

            For example, I just re-ran the reproducer on a fresh client in the nodemap, exactly as before, using the same output file and the same directory. The first iteration succeeded (no error), and then, re-running the exact same command, the second iteration failed:

            ########### ITERATION 1
            [root@cpu-p-231 ~]# lctl clear; > client.rds-d7.eacess.llog.daemon; lctl mark "STARTING TEST"; strace -o client.rds-d7.eacess.strace ./rds-d7-eacces mjr208 /rds-d7/project/mjr208/testdir_groups/slurm.out.7
            [INFO] Real UID: 0 | Effective UID: 0
            [INFO] Real GID: 0 | Effective GID: 0
            [INFO] Username of ruid: root
            [INFO] Number of groups: '2'
                 0 - root,    988 - sfcb, 
            [INFO] Looking up user: mjr208
            [INFO] 'mjr208' uid: 10573, gid: 10573
            [INFO] Setting egid to: 10573
            [INFO] Real UID: 0 | Effective UID: 0
            [INFO] Real GID: 0 | Effective GID: 10573
            [INFO] Number of groups for user: '51'
             ... 
            [INFO] Setting groups of calling process to groups of mjr208
            [INFO] Number of groups for calling process: '51'
            [INFO] Printing process groups
             ...
            [INFO] set euid to : 10573
            [INFO] Real UID: 0 | Effective UID: 10573
            [INFO] Real GID: 0 | Effective GID: 10573
            [INFO] open rds file: /rds-d7/project/mjr208/testdir_groups/slurm.out.7
            [INFO] rds directory fd: 7
            
            ######## ITERATION 2
            [root@cpu-p-231 ~]# lctl clear; > client.rds-d7.eacess.llog.daemon; lctl mark "STARTING TEST"; strace -o client.rds-d7.eacess.strace ./rds-d7-eacces mjr208 /rds-d7/project/mjr208/testdir_groups/slurm.out.7
            [INFO] Real UID: 0 | Effective UID: 0
            [INFO] Real GID: 0 | Effective GID: 0
            [INFO] Username of ruid: root
            [INFO] Number of groups: '2'
                 0 - root,    988 - sfcb, 
            [INFO] Looking up user: mjr208
            [INFO] 'mjr208' uid: 10573, gid: 10573
            [INFO] Setting egid to: 10573
            [INFO] Real UID: 0 | Effective UID: 0
            [INFO] Real GID: 0 | Effective GID: 10573
            [INFO] Number of groups for user: '51'
             ... 
            [INFO] Setting groups of calling process to groups of mjr208
            [INFO] Number of groups for calling process: '51'
            [INFO] Printing process groups
             ... 
            [INFO] set euid to : 10573
            [INFO] Real UID: 0 | Effective UID: 10573
            [INFO] Real GID: 0 | Effective GID: 10573
            [INFO] open rds file: /rds-d7/project/mjr208/testdir_groups/slurm.out.7
            open error : Permission denied
            

            Re-running 10 more times after this, they all consistently fail.

            However, I've found that I can reset this behaviour so that the reproducer doesn't fail again simply by cd'ing to this directory as my user from another shell on the node, e.g.:

            Last login: Mon Nov  9 13:31:06 GMT 2020 on pts/3
            [mjr208@cpu-p-231]:~/ $ cd /rds-d7/project/mjr208                                                        
            [mjr208@cpu-p-231]:mjr208/ $ ls -ltr
            total 12K
            drwxrws--- 2 mjr208 mjr208                   4.0K Nov  4 19:05 testdir
            drwxrws--- 2 mjr208 rds-kwtZ8ccHIQg-managers 4.0K Nov  4 19:09 testdir_acls
            drwxrws--- 2 mjr208 rds-kwtZ8ccHIQg-managers 4.0K Nov  9 13:13 testdir_groups 

            Then I re-run the reproducer:

            [root@cpu-p-231 ~]# lctl clear; > client.rds-d7.eacess.llog.daemon; lctl mark "STARTING TEST"; strace -o client.rds-d7.eacess.strace ./rds-d7-eacces mjr208 /rds-d7/project/mjr208/testdir_groups/slurm.out.7
            [INFO] Real UID: 0 | Effective UID: 0
            [INFO] Real GID: 0 | Effective GID: 0
            [INFO] Username of ruid: root
            [INFO] Number of groups: '2'
                 0 - root,    988 - sfcb, 
            [INFO] Looking up user: mjr208
            [INFO] 'mjr208' uid: 10573, gid: 10573
            [INFO] Setting egid to: 10573
            [INFO] Real UID: 0 | Effective UID: 0
            [INFO] Real GID: 0 | Effective GID: 10573
            [INFO] Number of groups for user: '51'
            ... 
            [INFO] Setting groups of calling process to groups of mjr208
            [INFO] Number of groups for calling process: '51'
            [INFO] Printing process groups
            ...
            [INFO] set euid to : 10573
            [INFO] Real UID: 0 | Effective UID: 10573
            [INFO] Real GID: 0 | Effective GID: 10573
            [INFO] open rds file: /rds-d7/project/mjr208/testdir_groups/slurm.out.7
            [INFO] rds directory fd: 7 

            This runs successfully. Re-running it again fails as before.

            Here are the MDS logs for the above test with 'debug=sec':

            MDS logs when no error
            # MDS logs when no error is seen.
            # First mjr208 user in another shell cd to /rds-d7/project/mjr208 and runs 'ls -ltr'
            # Then reproducer program is run
            [root@rds-mds9 ~]# cat mds.rds-d7.eacess.llog.daemon.20201109-135320
            00000001:02000400:2.0F:1604929978.610749:0:183864:0:(debug.c:510:libcfs_debug_mark_buffer()) DEBUG MARKER: STARTING TEST
            00000004:08000000:15.0F:1604929984.842158:0:210588:0:(mdd_permission.c:280:__mdd_permission_internal()) Checking access: mode 45f8, fsuid 10573, uid 0
            00000004:08000000:15.0:1604929984.842963:0:210588:0:(mdd_permission.c:280:__mdd_permission_internal()) Checking access: mode 45f8, fsuid 10573, uid 0
            00000004:08000000:15.0F:1604929990.902950:0:220614:0:(mdd_permission.c:280:__mdd_permission_internal()) Checking access: mode 45f8, fsuid 10573, uid 10573
            00000004:08000000:15.0:1604929990.902981:0:220614:0:(mdd_permission.c:280:__mdd_permission_internal()) Checking access: mode 81a4, fsuid 10573, uid 10573
            Debug log: 5 lines, 5 kept, 0 dropped, 0 bad.
            
            MDS logs when error is seen immediately after
            # Re-running the reproducer immediately after the above
            [root@rds-mds9 ~]# cat mds.rds-d7.eacess.llog.daemon.20201109-140338
            00000001:02000400:2.0F:1604930614.679069:0:194884:0:(debug.c:510:libcfs_debug_mark_buffer()) DEBUG MARKER: STARTING TEST
            00000004:08000000:23.0F:1604930616.962154:0:210578:0:(mdd_permission.c:280:__mdd_permission_internal()) Checking access: mode 45f8, fsuid 99, uid 0
            00000004:08000000:23.0:1604930616.962160:0:210578:0:(mdd_permission.c:315:__mdd_permission_internal()) permission denied, mode 45f8, fsuid 99, uid 0
            Debug log: 3 lines, 3 kept, 0 dropped, 0 bad.
            

            Sorry for the long reply again, I hope what I'm doing here is clear.
            Can you think of any explanation for this behaviour at all?

            Cheers,
            Matt

            sebastien Sebastien Buisson added a comment - - edited

            Hi Matt,

            Thanks for this very well documented ticket. In order to reproduce the issue, I installed Lustre 2.12.5 on my client and server nodes. I started the Lustre file system, and created a directory similar to yours (including the setgid bit), with the uids as shown below:

            # id user0
            uid=500(user0) gid=500(user0) groups=500(user0),5001(user0g1),5002(user0g2)
            
            # grep user0 /etc/passwd
            user00:x:501:501::/home/user00:/bin/bash
            user0:x:500:500::/home/user0:/bin/bash
            # grep user0 /etc/group
            user00:x:501:
            user0:x:500:
            user0g1:x:5001:user0
            user0g2:x:5002:user0
            
            # ls -ld /mnt/lustre/subdir
            drwxr-xr-x 5 root root 4096 Nov  9 20:48 /mnt/lustre/subdir
            
            # ls -ld /mnt/lustre/subdir/testdir_groups_1
            drwxrws--- 2 user0 user0g1 4096 Nov  9 18:34 /mnt/lustre/subdir/testdir_groups_1
            

            Then I unmounted my client, and created the following nodemap definition, similar to what you did:

            # lctl get_param nodemap.c0.*
            nodemap.c0.admin_nodemap=0
            nodemap.c0.audit_mode=1
            nodemap.c0.deny_unknown=0
            nodemap.c0.exports=[
            
            ]
            nodemap.c0.fileset=/subdir
            nodemap.c0.id=1
            nodemap.c0.idmap=[
            
            ]
            nodemap.c0.map_mode=both
            nodemap.c0.ranges=
            [
             { id: 1, start_nid: 10.128.11.159@tcp, end_nid:10.128.11.159@tcp }
            ]
            nodemap.c0.sepol=
            
            nodemap.c0.squash_gid=99
            nodemap.c0.squash_uid=99
            nodemap.c0.trusted_nodemap=1
            

            Then I remounted the client, so that the nodemap is taken into account:

            # mount -t lustre
            10.128.11.155@tcp:/lustre on /mnt/lustre type lustre (rw,flock,user_xattr,lazystatfs)
            
            # ls -l /mnt/seb
            drwxrws--- 2 user0  user0g1 4096 Nov  9 21:07 testdir_groups_1
            

            The line above shows that the subdirectory mount is being enforced.

            So I tried your reproducer:

            # id
            uid=0(root) gid=0(root) groups=0(root)
            # /tmp/rds-d7-eacces user0 /mnt/lustre/testdir_groups_1/slurm.out
            [INFO] Real UID: 0 | Effective UID: 0
            [INFO] Real GID: 0 | Effective GID: 0
            [INFO] Username of ruid: root
            [INFO] Number of groups: '1'
                 0 - root,
            [INFO] Looking up user: user0
            [INFO] 'user0' uid: 500, gid: 500
            [INFO] Setting egid to: 500
            [INFO] Real UID: 0 | Effective UID: 0
            [INFO] Real GID: 0 | Effective GID: 500
            [INFO] Number of groups for user: '3'
               500 - user0,   5001 - user0g1,   5002 - user0g2,
            [INFO] Setting groups of calling process to groups of user0
            [INFO] Number of groups for calling process: '3'
            [INFO] Printing process groups
               500 - user0,   5001 - user0g1,   5002 - user0g2,
            [INFO] set euid to : 500
            [INFO] Real UID: 0 | Effective UID: 500
            [INFO] Real GID: 0 | Effective GID: 500
            [INFO] open rds file: /mnt/lustre/testdir_groups_1/slurm.out
            [INFO] rds directory fd: 3
            

            As you can see, no permission denied in my case. I also added the debug trace you inserted in __mdd_permission_internal; it shows:

            00000004:08000000:5.0:1604915212.229442:0:19824:0:(mdd_permission.c:280:__mdd_permission_internal()) Checking access: mode 45f8, fsuid 500, uid 500
            

            Looking at the result with root:

            # ls -ld /mnt/lustre/testdir_groups_1
            drwxrws--- 2 user0 user0g1 4096 Nov  9 18:36 /mnt/lustre/testdir_groups_1
            # ls -l /mnt/lustre/testdir_groups_1
            ls: cannot open directory /mnt/lustre/testdir_groups_1: Permission denied
            

            The permission denied above is expected, as the admin property is set to 0. But when accessing as user user0:

            $ id
            uid=500(user0) gid=500(user0) groups=500(user0),5001(user0g1),5002(user0g2)
            -bash-4.2$ ls -l /mnt/lustre/testdir_groups_1
            total 0
            -rw-r--r-- 1 user0 user0g1 0 Nov  9 18:36 slurm.out
            

            Maybe I am not using the reproducer properly?

            What is weird in your case is that the debug line you added says uid 0. This is the uid of the parent directory in which the file is being created, so I am wondering if you are accessing the right directory. Could you please show the access rights of the target directory, from a client node that is part of the trusted nodemap? Also, could you please show the mount command you use on the client side?

            In order to ease debugging, would you mind applying the following patch to add more information, and then collecting Lustre debug logs on the client and server (MDS) side with just the sec debug mask when you launch the reproducer program?

            diff --git a/lustre/mdd/mdd_dir.c b/lustre/mdd/mdd_dir.c
            index ddb3062..1b6b677 100644
            --- a/lustre/mdd/mdd_dir.c
            +++ b/lustre/mdd/mdd_dir.c
            @@ -91,6 +91,7 @@ __mdd_lookup(const struct lu_env *env, struct md_object *pobj,
                                   mdd2obd_dev(m)->obd_name, PFID(mdo2fid(mdd_obj)));
                    }
            
            +       CDEBUG(D_SEC, "calling mdd_permission_internal_locked for %s\n", name);
                    rc = mdd_permission_internal_locked(env, mdd_obj, pattr, mask,
                                                        MOR_TGT_PARENT);
                    if (rc)
            

            The exact command used for the reproducer, and its standard output, would be helpful too.

            Thanks,
            Sebastien.

            pjones Peter Jones added a comment -

            Sebastien

            Could you please advise?

            Thanks

            Peter


            People

              Assignee: Sebastien Buisson
              Reporter: Matt Rásó-Barnett (Inactive)
              Votes: 0
              Watchers: 4
