[LU-5150] NULL pointer dereference in posix_acl_valid() under mdc_get_lustre_md() Created: 05/Jun/14  Updated: 26/Feb/15  Resolved: 25/Aug/14

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.4.2, Lustre 2.5.3
Fix Version/s: Lustre 2.6.0, Lustre 2.7.0, Lustre 2.5.4

Type: Bug Priority: Blocker
Reporter: Christopher Morrone Assignee: Lai Siyao
Resolution: Fixed Votes: 0
Labels: llnl, mn4, prz

Attachments: Text File vmcore-dmesg.txt    
Issue Links:
Related
is related to LU-4680 Invalid system.posix_acl_default brea... Resolved
is related to LU-4787 OOPS from null pointer dereference in... Resolved
Severity: 2
Rank (Obsolete): 14213

 Description   

After upgrading our servers from Lustre 2.4.0-28chaos to Lustre 2.4.2-11chaos (see github.com/chaos/lustre), we are seeing many client crashes with a NULL pointer dereference in posix_acl_valid() under mdc_get_lustre_md(). Note that both 2.4.0-19chaos client nodes and 2.4.2-11chaos client nodes are exhibiting this behavior.

The backtrace looks like:

PID: 3690   TASK: ffff880338d69540  CPU: 7   COMMAND: "ll_sa_3689"
 #0 [ffff8802ddf51800] machine_kexec+0x18b at ffffffff810391ab
 #1 [ffff8802ddf51860] crash_kexec+0x72 at ffffffff810c5d52
 #2 [ffff8802ddf51930] oops_end+0xc0 at ffffffff8152e630
 #3 [ffff8802ddf51960] no_context+0xfb at ffffffff8104a00b
 #4 [ffff8802ddf519b0] __bad_area_nosemaphore+0x125 at ffffffff8104a295
 #5 [ffff8802ddf51a00] bad_area_nosemaphore+0x13 at ffffffff8104a363
 #6 [ffff8802ddf51a10] __do_page_fault+0x32f at ffffffff8104aacf
 #7 [ffff8802ddf51b30] do_page_fault+0x3e at ffffffff8153057e
 #8 [ffff8802ddf51b60] page_fault+0x25 at ffffffff8152d935
    [exception RIP: posix_acl_valid+9]
    RIP: ffffffff811ea9b9  RSP: ffff8802ddf51c10  RFLAGS: 00010207
    RAX: 0000000000000000  RBX: ffff8805ec607000  RCX: ffff8805ebddda00
    RDX: 0000000000000004  RSI: 0000000000000004  RDI: 0000000000000000
    RBP: ffff8802ddf51c10   R8: 0000000000000000   R9: ffff8805ec46dc40
    R10: 0000000000000000  R11: 0000000000000000  R12: ffff88063987bc00
    R13: ffff8802ddf51cf0  R14: 0000000000000000  R15: 0000000000000050
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #9 [ffff8802ddf51c18] mdc_get_lustre_md+0xc5a at ffffffffa0af4faa [mdc]
#10 [ffff8802ddf51c78] lmv_get_lustre_md+0x153 at ffffffffa0d668d3 [lmv]
#11 [ffff8802ddf51cc8] ll_prep_inode+0x3f7 at ffffffffa0c7e217 [lustre]
#12 [ffff8802ddf51da8] ll_post_statahead+0x2f7 at ffffffffa0ca0577 [lustre]
#13 [ffff8802ddf51e18] ll_statahead_thread+0xd38 at ffffffffa0ca4ff8 [lustre]
#14 [ffff8802ddf51f48] child_rip+0xa at ffffffff8100c24a

The crash is on this line in posix_acl_valid():

crash> gdb list *(posix_acl_valid+9)
0xffffffff811ea9b9 is in posix_acl_valid (fs/posix_acl.c:88).
83              const struct posix_acl_entry *pa, *pe;
84              int state = ACL_USER_OBJ;
85              unsigned int id = 0;  /* keep gcc happy */
86              int needs_mask = 0;
87     
88              FOREACH_ACL_ENTRY(pa, acl, pe) {
89                      if (pa->e_perm & ~(ACL_READ|ACL_WRITE|ACL_EXECUTE))
90                              return -EINVAL;
91                      switch (pa->e_tag) {
92                              case ACL_USER_OBJ:
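
For reference, FOREACH_ACL_ENTRY in kernels of this vintage is defined in fs/posix_acl.c as:

        #define FOREACH_ACL_ENTRY(pa, acl, pe) \
                for(pa=(acl)->a_entries, pe=pa+(acl)->a_count; pa<pe; pa++)

struct posix_acl begins with a 4-byte a_refcount followed by a_count at offset 4, so iterating over a NULL acl faults on the read of (NULL)->a_count at address 0x4 -- exactly the "NULL pointer dereference at 0000000000000004" shown in the second oops below.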

The problem is not particular to the statahead thread. That was just one example. Here is another where I ran getfattr on the problem file:

2014-06-05 14:51:29 BUG: unable to handle kernel NULL pointer dereference at 0000000000000004
2014-06-05 14:51:29 IP: [<ffffffff811ea9b9>] posix_acl_valid+0x9/0x120
2014-06-05 14:51:29 PGD 638f93067 PUD 5f1873067 PMD 0 
2014-06-05 14:51:29 Oops: 0000 [#1] SMP 
2014-06-05 14:51:29 last sysfs file: /sys/devices/system/edac/pci/pci_parity_count
2014-06-05 14:51:29 CPU 10 
2014-06-05 14:51:29 Modules linked in: lmv(U) mgc(U) zfs(P)(U) zcommon(P)(U) znvpair(P)(U) zavl(P)(U) zunicode(P)(U) spl(U) zlib_deflate lustre(U) lov(U) osc(U) mdc(U
2014-06-05 14:51:29 
2014-06-05 14:51:29 Pid: 6114, comm: getfattr Tainted: P           ---------------    2.6.32-431.17.2.1chaos.ch5.2.x86_64 #1 Dell     XS23-TY35   /0GW08P
2014-06-05 14:51:29 RIP: 0010:[<ffffffff811ea9b9>]  [<ffffffff811ea9b9>] posix_acl_valid+0x9/0x120
2014-06-05 14:51:29 RSP: 0018:ffff8805f0e698d8  EFLAGS: 00010207
2014-06-05 14:51:29 RAX: 0000000000000000 RBX: ffff880639212000 RCX: ffff8805f1870a00
2014-06-05 14:51:29 RDX: 0000000000000004 RSI: 0000000000000004 RDI: 0000000000000000
2014-06-05 14:51:29 RBP: ffff8805f0e698d8 R08: 0000000000000000 R09: 0000000000000040
2014-06-05 14:51:29 R10: 0000000000000000 R11: 0000000000000000 R12: ffff8805f3b22c00
2014-06-05 14:51:29 R13: ffff8805f0e699b8 R14: 0000000000000000 R15: 0000000000000050
2014-06-05 14:51:29 FS:  00002aaaab266fa0(0000) GS:ffff88034ac80000(0000) knlGS:0000000000000000
2014-06-05 14:51:29 CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
2014-06-05 14:51:29 CR2: 0000000000000004 CR3: 0000000638355000 CR4: 00000000000007e0
2014-06-05 14:51:29 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
2014-06-05 14:51:29 DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
2014-06-05 14:51:29 Process getfattr (pid: 6114, threadinfo ffff8805f0e68000, task ffff88063a007500)
2014-06-05 14:51:29 Stack:
2014-06-05 14:51:29  ffff8805f0e69938 ffffffffa091dfaa ffff8805f1870a00 ffff8806389acc00
2014-06-05 14:51:29 <d> 0000000000000000 0000000000000000 6c0db3630c468718 ffff8806389acc00
2014-06-05 14:51:29 <d> ffff880639212000 ffff880639212400 ffff8805f0e699b8 ffff8805f3b22c00
2014-06-05 14:51:29 Call Trace:
2014-06-05 14:51:29  [<ffffffffa091dfaa>] mdc_get_lustre_md+0xc5a/0x1420 [mdc]
2014-06-05 14:51:29  [<ffffffffa0d6c8d3>] lmv_get_lustre_md+0x153/0x3d0 [lmv]
2014-06-05 14:51:29  [<ffffffffa0a99217>] ll_prep_inode+0x3f7/0xf60 [lustre]
2014-06-05 14:51:29  [<ffffffffa079faa8>] ? req_capsule_server_get+0x18/0x20 [ptlrpc]
2014-06-05 14:51:29  [<ffffffffa0d85e4a>] ? lmv_intent_lookup+0x25a/0x770 [lmv]
2014-06-05 14:51:29  [<ffffffffa0aa53b0>] ? ll_md_blocking_ast+0x0/0x740 [lustre]
2014-06-05 14:51:29  [<ffffffffa0aa89aa>] ll_lookup_it_finish+0x1da/0xe80 [lustre]
2014-06-05 14:51:29  [<ffffffffa0d86fca>] ? lmv_intent_lock+0x32a/0x380 [lmv]
2014-06-05 14:51:29  [<ffffffffa0aa53b0>] ? ll_md_blocking_ast+0x0/0x740 [lustre]
2014-06-05 14:51:29  [<ffffffffa0aa9a3d>] ll_lookup_it+0x3ed/0xbd0 [lustre]
2014-06-05 14:51:29  [<ffffffffa0aa53b0>] ? ll_md_blocking_ast+0x0/0x740 [lustre]
2014-06-05 14:51:29  [<ffffffffa0aaa2ac>] ll_lookup_nd+0x8c/0x430 [lustre]
2014-06-05 14:51:29  [<ffffffff811a457e>] ? d_alloc+0x13e/0x1b0
2014-06-05 14:51:29  [<ffffffff811998a5>] do_lookup+0x1a5/0x230
2014-06-05 14:51:29  [<ffffffff81199fb7>] __link_path_walk+0x587/0x850
2014-06-05 14:51:29  [<ffffffff811680ea>] ? alloc_pages_current+0xaa/0x110
2014-06-05 14:51:29  [<ffffffff8119a97a>] path_walk+0x6a/0xe0
2014-06-05 14:51:29  [<ffffffff8119ab8b>] filename_lookup+0x6b/0xc0
2014-06-05 14:51:29  [<ffffffff8119bcb7>] user_path_at+0x57/0xa0
2014-06-05 14:51:29  [<ffffffff8104a9a4>] ? __do_page_fault+0x204/0x490
2014-06-05 14:51:29  [<ffffffff8128ae05>] ? rb_insert_color+0x125/0x160
2014-06-05 14:51:29  [<ffffffff8114f020>] ? __vma_link_rb+0x30/0x40
2014-06-05 14:51:29  [<ffffffff8118f7a0>] vfs_fstatat+0x50/0xa0
2014-06-05 14:51:29  [<ffffffff8118f85e>] vfs_lstat+0x1e/0x20
2014-06-05 14:51:29  [<ffffffff8118f884>] sys_newlstat+0x24/0x50
2014-06-05 14:51:29  [<ffffffff8153057e>] ? do_page_fault+0x3e/0xa0
2014-06-05 14:51:29  [<ffffffff8152d935>] ? page_fault+0x25/0x30
2014-06-05 14:51:29  [<ffffffff8100b0b2>] system_call_fastpath+0x16/0x1b


 Comments   
Comment by Peter Jones [ 06/Jun/14 ]

Lai

Could you please look into this?

Thanks

Peter

Comment by Christopher Morrone [ 06/Jun/14 ]

We have had some trouble in this area before. When we do something like this on Lustre:

$ mkdir x
$ cp -rp x y

you wind up with the following xattr set:

$ getfattr -d -m. y
# file: y
system.posix_acl_default=0sAgAAAA==
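
For clarity, that value decodes as follows ("0s" is getfattr's base64 prefix):

        AgAAAA==  (base64)
        -> 02 00 00 00  (bytes, hex)
        -> little-endian POSIX_ACL_XATTR_VERSION (2) header with zero entries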

Now, if we create a file under that directory with our Lustre 2.4.0-28chaos and an older version of ZFS, the newly created file has...nothing unusual.

But if we create a file in that directory under Lustre 2.4.0-28chaos with the latest ZFS, we see that it winds up with this acl:

system.posix_acl_access = \002\000\000\000

We needed to use zdb on the POSIX mount of the MDT to retrieve that, because both 2.4.0 and 2.4.2 clients crash when trying to access that xattr.

In mdc_unpack_acl(), we have this code:

        acl = posix_acl_from_xattr(buf, body->aclsize);
        if (IS_ERR(acl)) {
                rc = PTR_ERR(acl);
                CERROR("convert xattr to acl: %d\n", rc);
                RETURN(rc);
        }

        rc = posix_acl_valid(acl);
        if (rc) {
                CERROR("validate acl: %d\n", rc);
                posix_acl_release(acl);
                RETURN(rc);
        }

My belief is that body->aclsize is nonzero, but posix_acl_from_xattr() is returning NULL. We check only whether the returned pointer has an error code set, not whether it is NULL. We then pass the NULL pointer to posix_acl_valid(), which in turn crashes.

I have a patch to handle a NULL pointer that I am about to test, and will push to gerrit if it fixes the client.

However, this still leaves the question of what has changed to allow that almost-empty xattr to be created in the first place, and what we should do about it.

Comment by Christopher Morrone [ 06/Jun/14 ]

Also, upgrading our clients is going to be tough, and this particular problem isn't easy to fix with systemtap on the clients. Is there an easy way to filter out this xattr on the MDS side when the value is bad, so that the client never sees it? A server-side patch would get us up and working again quickly.

Comment by Christopher Morrone [ 06/Jun/14 ]

Yes, the client side fix works:

http://review.whamcloud.com/10620

Note that in the patch I added an extra check for NULL rather than just changing IS_ERR to IS_ERR_OR_NULL because I don't want to see those CERROR messages constantly in the NULL case.
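
For illustration, a minimal sketch of that shape (the authoritative change is the Gerrit patch above; the surrounding context here is assumed):

        acl = posix_acl_from_xattr(buf, body->aclsize);
        if (acl == NULL)
                /* header-only xattr: no entries and no error, so return
                 * quietly rather than logging a CERROR on every lookup */
                RETURN(0);
        if (IS_ERR(acl)) {
                rc = PTR_ERR(acl);
                CERROR("convert xattr to acl: %d\n", rc);
                RETURN(rc);
        }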

Comment by Christopher Morrone [ 06/Jun/14 ]

This is not to say that the ticket is done. We would still love to have a server fix ASAP. Even a server-side workaround to hide the problem xattr would be good until we decide on the longer-term server fix.

Comment by nasf (Inactive) [ 06/Jun/14 ]

Hi Chris,

Given the urgency, there are two temporary solutions that may be helpful for your system:

1) I am not sure how much your system depends on ACLs for permissions. If not too much, you can mount the MDT device with "-o noacl"; all ACL-related code should then be disabled automatically.

2) If option 1) is not suitable for you, and we cannot figure out in a short time why the bad ACL is generated, then we can filter out the bad ACL on the MDS before it is returned to the client, by checking the ACL size and the ACL entry count (the "0 case" will also cause the client side to interpret it as a NULL ACL). There are only two entry points that need such a check: mdt_finish_open() and mdt_getattr_internal(); alternatively, we could add the filter inside mdd_xattr_get(). Either way, the patch is relatively easy to make.
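
For illustration, a sketch of the size/entry-count check such a filter might perform (the function name and placement here are hypothetical; posix_acl_xattr_count() is the kernel helper that derives the entry count from the xattr size):

        #include <linux/posix_acl_xattr.h>

        /* Hypothetical predicate: true if the ACL xattr body is bogus and
         * should be dropped before being returned to the client. */
        static bool mdt_acl_is_bogus(const void *buf, size_t size)
        {
                int count = posix_acl_xattr_count(size);

                /* count < 0: shorter than the header, or not a whole number
                 * of entries; count == 0: the "0 case", a bare header with
                 * no entries. */
                return count <= 0;
        }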

Comment by Christopher Morrone [ 06/Jun/14 ]

I'll ask if they are OK with disabling acl, and test if that works.

Option 2 is probably the preferable one for us. What do you mean by the "0 case"?

Comment by nasf (Inactive) [ 06/Jun/14 ]

"0" case means that there is only ACL header, then the ACL size is not zero, but the ACL entries count in the header in zero. Such case should be an invalid ACL.

Comment by Christopher Morrone [ 06/Jun/14 ]

Mounting the MDT device with "-o noacl" does not seem to help. Clients still crash.

Comment by nasf (Inactive) [ 06/Jun/14 ]

Would you please check what the connect_flags (/proc/fs/lustre/mdc/xxx/connect_flags) are on the crashed client? Most users will have "acl" enabled by default, so I want to make sure whether the "-o acl/noacl" switch really works or not. Thanks!

BTW, will a newly mounted client also crash after the MDT is mounted with "-o noacl"?

Comment by Christopher Morrone [ 06/Jun/14 ]

Oh, I didn't mount the client afresh. That is not a useful approach for us if it is required.

Most users will have "acl" enabled by default, so I want to make sure whether the "-o acl/noacl" switch really works or not.

"acl" is in the list of mdc connect_flags, yes.

BTW, will a newly mounted client also crash after the MDT is mounted with "-o noacl"?

If we have to remount the client, we might as well just install my client fix. That isn't going to be useful.

Comment by nasf (Inactive) [ 06/Jun/14 ]

I do not want you to remount all the old clients; I just want to make sure whether the "-o noacl" option takes effect or not. As expected, after the MDS is remounted with "-o noacl", the old clients should reconnect and detect that the MDS claims "noacl", and the MDC connect_flags should then no longer show "acl". But according to your feedback, "acl" is still there, which means either the old client is not aware of the "noacl", or the MDS does not handle "noacl" correctly; the latter can be verified via a newly mounted client.

If it is the former case, we need to consider option 2); if it is the latter case, we can fix the MDS to make "noacl" work as expected.

Comment by Christopher Morrone [ 06/Jun/14 ]
But according to your feedback

I wouldn't read anything into my feedback except that setting "-o noacl" on the MDT mount line did not fix the problem. I didn't check the mdc flags while it was in that reconnected state; I had already moved on and set the filesystem back to normal mode.

The details there aren't important. It didn't work. Time for option 2.

Comment by Christopher Morrone [ 06/Jun/14 ]

But actually, we are about ready to call it a night. The sysadmin decided he would selectively reboot some clients that are talking to a filesystem (let's call it A) that we know now has problem xattrs set. Those clients will get the patch that I shared.

Another filesystem (call it B) never finished the upgrade process, partly because they were waiting to see what happened with A. We don't need to reboot B's clients.

So I think we're in a state to make it through the night. We are going to head home.

It would be nice to have "option 2" to install some time tomorrow.

Comment by nasf (Inactive) [ 06/Jun/14 ]

Since Lai is on vacation, I will work on the patch temporarily. Here it is: http://review.whamcloud.com/10623

Comment by Christopher Morrone [ 06/Jun/14 ]

Thanks! That patch should be a good work-around for us. I'll give it a try.

As I mentioned, the xattr posix_acl_access has the contents

system.posix_acl_access = \002\000\000\000

which is, in octal, just the little-endian form of:

 #define POSIX_ACL_XATTR_VERSION	0x0002

The version is the first, and only, field of the ACL xattr's header. So we know that this is a case of an xattr being written out for an empty ACL list.
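
Spelling that out with the struct sizes from include/linux/posix_acl_xattr.h:

        /* xattr size  = 4 bytes = sizeof(posix_acl_xattr_header)
         * entry count = (size - sizeof(header)) / sizeof(entry)
         *             = (4 - 4) / 8 = 0 entries
         * posix_acl_from_xattr() maps a zero-entry xattr to a NULL acl,
         * which is the pointer that posix_acl_valid() then dereferences. */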

Why did the Lustre server start doing that? If at all possible, we don't want this to happen, because it is going to have negative performance implications.

Comment by Andreas Dilger [ 06/Jun/14 ]

I have the feeling (though nothing to support it yet) that the presence of posix_acl_access on all of the files may be a side-effect of how the clients or user tools are behaving. I recall seeing elsewhere that "cp" will set the ACL for every file, even if there is no ACL on the source file, and even if the ACL is exactly the same as the UGO mode bits.

I don't know if that is because the kernel is providing a synthetic xattr with just the UGO mode bits when asked for one by userspace, or if userspace is always setting one even when one does not exist. Running "strace cp -a /etc/hosts /myth/tmp/hosts" on my somewhat older test system (FC12 userspace and 2.6.32-175.fc12 kernel), I see:

stat("/myth/tmp/hosts", {st_mode=S_IFREG|0644, st_size=2940, ...}) = 0
lstat("/etc/hosts", {st_mode=S_IFREG|0644, st_size=2940, ...}) = 0
stat("/myth/tmp/hosts", {st_mode=S_IFREG|0644, st_size=2940, ...}) = 0
open("/etc/hosts", O_RDONLY|O_NOFOLLOW) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=2940, ...}) = 0
open("/myth/tmp/hosts", O_WRONLY|O_TRUNC) = 4
fstat(4, {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
mmap(NULL, 4202496, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fe5b58c8000
read(3, "##\n# Host Database\n#\n# Do not re"..., 4194304) = 2940
write(4, "##\n# Host Database\n#\n# Do not re"..., 2940) = 2940
read(3, "", 4194304)                    = 0
utimensat(4, NULL, {{1393396068, 346810283}, {1393396068, 347809548}}, 0) = 0
flistxattr(3, (nil), 0)                 = 0
flistxattr(3, 0x7fffb3618eb0, 0)        = 0
fsetxattr(4, "system.posix_acl_access", "\x02\x00\x00\x00\x01\x00\x06\x00\xff\xff\xff\xff\x04\x00\x00\x00\xff\xff\xff\xff \x00\x00\x00\xff\xff\xff\xff", 28, 0) = 0
fchown(4, 0, 0)                         = -1 EPERM (Operation not permitted)
fchown(4, 4294967295, 0)                = -1 EPERM (Operation not permitted)
fgetxattr(3, "system.posix_acl_access", 0x7fffb3618da0, 132) = -1 ENODATA (No data available)
fstat(3, {st_mode=S_IFREG|0644, st_size=2940, ...}) = 0
fsetxattr(4, "system.posix_acl_access", "\x02\x00\x00\x00\x01\x00\x06\x00\xff\xff\xff\xff\x04\x00\x04\x00\xff\xff\xff\xff \x00\x04\x00\xff\xff\xff\xff", 28, 0) = 0

So it is definitely setting an ACL on the target file even though none exists on the source file. However, checking with getfattr -d -m.* /myth/tmp/hosts shows no ACL was actually created in this case, though I'm not sure where that xattr is filtered out. It might be different on newer kernels.

The "empty" posix_acl_access above "\x02\x00\x00\x00\x01\x00\x06\x00\xff\xff\xff\xff\x04\x00\x04\x00\xff\xff\xff\xff \x00\x04\x00\xff\xff\xff\xff" (note space in there is hex 0x20) decodes to be:

typedef struct {                
        __le16                  e_tag;
        __le16                  e_perm;
        __le32                  e_id;
} posix_acl_xattr_entry; 

typedef struct {
        __le32                  a_version;
        posix_acl_xattr_entry   a_entries[3];
} posix_acl_xattr_header = {
        .a_version = 0x00000002 = POSIX_ACL_XATTR_VERSION;
        .a_entries[0] = { .e_tag = 0x0001 = ACL_USER_OBJ, .e_perm = 0x0006 = 06, .e_id = 0xffffffff = -1 },
        .a_entries[1] = { .e_tag = 0x0004 = ACL_GROUP_OBJ, .e_perm = 0x0004 = 04, .e_id = 0xffffffff = -1 },
        .a_entries[2] = { .e_tag = 0x0020 = ACL_OTHER, .e_perm = 0x0004 = 04, .e_id = 0xffffffff = -1 }
}

so it seems this is kind of a no-op ACL with the e_id = -1?

I also straced getfacl on a file that does have an ACL (/myth/tmp/foo2), and it appears that getfacl is also using getxattr("system.posix_acl_access") internally:

getxattr("/myth/tmp/foo2", "system.posix_acl_access", "\x02\x00\x00\x00\x01\x00\x06\x00\xff\xff\xff\xff\x02\x00\x06\x00\xe8\x03\x00\x00\x04\x00\x04\x00\xff\xff\xff\xff\x10\x00\x06\x00\xff\xff\xff\xff \x00\x04\x00\xff\xff\xff\xff", 132) = 44
open("/usr/share/locale/locale.alias", O_RDONLY) = 3

to fetch the ACL from the file, instead of some ACL-specific syscall, so it isn't just a case of cp blindly copying xattrs around that it shouldn't be.

Comment by Christopher Morrone [ 06/Jun/14 ]

I have the feeling (though nothing to support it yet) that the presence of posix_acl_access on all of the files may be a side-effect of how the clients or user tools are behaving.

Not quite correct in this case. I gave the example above of how this is happening now.

You are correct that "cp -a" or "cp -pr" is going to make empty ACL xattrs in some situations. We have found (due to other Lustre bugs in the past) that when you do such a copy of a directory from a filesystem that is ACL-enabled, but has no ACLs set on the source, the newly created directory will have the xattr named "posix_acl_default" set.

That happened with us in the past, and we probably have directories with posix_acl_default set sprinkled all over our filesystems.

Now, any newly created file in one of those directories will automatically get a posix_acl_access xattr. It will be just a header, and contain no actual ACLs.

Granted, the "cp -a" may also result in the same situation, but that is not the creation path at the moment.

We rather suspect that they are getting suppressed elsewhere, at the ext4/ldiskfs layer. These POSIX ACL xattrs have no special meaning to ZFS; POSIX ACL support on ZFS is done through ZFS System Attributes instead. So ZFS is just happily storing and recalling whatever Lustre gives it.

Comment by Leon Kos [ 09/Jun/14 ]

I am having the same problem with Lustre 2.5.1+ZFS servers. When I asked the user how the affected directory was produced, he remembered copying files from a remote filesystem back to Lustre. I also tried 2.6 clients, and they crash too. Crash debug gives the same output as initially reported by LLNL.

Comment by Leon Kos [ 09/Jun/14 ]

2.5.1 client crash debug output

crash /boot/vmlinux-2.6.32.431.17.1.el6_lustre.2.5.1.el6.bz2 /var/crash/127.0.0.1-2014-05-31-18:49:50/vmcore
crash>  gdb list *(posix_acl_valid+9)
0xffffffff811e8ea9 is in posix_acl_valid (fs/posix_acl.c:88).
83              const struct posix_acl_entry *pa, *pe;
84              int state = ACL_USER_OBJ;
85              unsigned int id = 0;  /* keep gcc happy */
86              int needs_mask = 0;
87
88              FOREACH_ACL_ENTRY(pa, acl, pe) {
89                      if (pa->e_perm & ~(ACL_READ|ACL_WRITE|ACL_EXECUTE))
90                              return -EINVAL;
91                      switch (pa->e_tag) {
92                              case ACL_USER_OBJ:

Comment by Leon Kos [ 09/Jun/14 ]

The temporary patch http://review.whamcloud.com/#/c/10623/ resolved my issue on 2.5.1.

Comment by Christopher Morrone [ 12/Jun/14 ]

A related issue exists in LU-4680.

Comment by Lai Siyao [ 26/Jun/14 ]

There is a patch, http://review.whamcloud.com/#/c/10850/, for LU-4680, which should handle this issue when the MDS is mounted with "noacl".

I'll continue looking into the empty default ACL created by `cp`.

Comment by Christopher Morrone [ 26/Jun/14 ]

Patch 10850 should be for LU-3660.

Comment by Lai Siyao [ 27/Jun/14 ]

`man acl_get_file` shows:

If type is ACL_TYPE_DEFAULT and no default ACL is associated with the directory path_p, then an ACL containing zero ACL entries is returned.

so `cp -rp ...` will set this empty ACL on the target, and ldiskfs_xattr_set_acl() will verify this ACL with posix_acl_from_xattr(), which will convert the empty ACL to NULL; finally, ->setxattr(name, NULL) will remove the specified ACL if it exists. I'll commit a patch for osd-zfs to follow this semantic too.
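
For illustration, a sketch of that semantic (the osd_* helper names are hypothetical; posix_acl_from_xattr() returning NULL for a zero-entry xattr is the kernel behaviour being relied on):

        acl = posix_acl_from_xattr(value, size);        /* 0 entries -> NULL */
        if (IS_ERR(acl))
                return PTR_ERR(acl);
        if (acl == NULL)
                rc = osd_acl_remove(obj, name);         /* hypothetical: remove the stored ACL */
        else
                rc = osd_acl_store(obj, name, acl);     /* hypothetical: store it */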

Comment by Peter Jones [ 30/Jun/14 ]

Lai

How does http://review.whamcloud.com/#/c/10895/ relate to this ticket?

Peter

Comment by Lai Siyao [ 02/Jul/14 ]

Yes, Peter, that's the fix for this issue.

Comment by Li Wei (Inactive) [ 14/Jul/14 ]

Chris,

Now, if we create a file under that directory with our Lustre 2.4.0-28chaos and an older version of ZFS, the newly created file has...nothing unusual.
But if we create a file in that directory under Lustre 2.4.0-28chaos with the latest ZFS, we see that it winds up with this acl:

Are you sure it was "2.4.0-28chaos", and not "2.4.2-11chaos", that wound up with the problematic access ACL? Also, the majority (if not all) of the problematic files were created after the server upgrade, weren't they?

I think the reason the crashes started to happen only after the 2.4.2 upgrade is b181565, which landed in 2.4.1. Without that patch, when creating a file in a directory with an empty default ACL, mdd would skip setting the access ACL for the file, because the ACL would not provide any additional information beyond the mode bits. See "reset_acl" in mdd_create(). With the patch, the access ACL is set unconditionally in this case. This explains why 2.4.0 did not wind up with the empty system.posix_acl_access while 2.4.2 did. The patch also adds an assertion in mdd_acl_init() on the default ACL size, which essentially comes from disk.
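
For illustration, the pre-b181565 behaviour can be expressed with the kernel's posix_acl_equiv_mode() (whether mdd uses this exact helper is an assumption; "reset_acl" is the flag named above):

        /* posix_acl_equiv_mode() returns 0 when the ACL is exactly
         * representable by the UGO mode bits (no named users or groups,
         * no mask), 1 when it carries extra information, negative on
         * error. */
        if (posix_acl_equiv_mode(acl, &mode) == 0)
                reset_acl = 1;  /* old behaviour: skip storing a redundant
                                 * access ACL; b181565 removed this skip */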

Comment by Hans Henrik Happe [ 14/Jul/14 ]

I've experienced the same bug on 2.4.2 and 2.5.2 (NULL pointer client crash in posix_acl_valid). It only happens with the zfs backfstype; I haven't seen it with ldiskfs. I found it by moving an empty directory from an outside filesystem, i.e.:

mkdir /tmp/foo
mv /tmp/foo /lustrefs
touch /lustrefs/foo/bar

Comment by Christopher Morrone [ 14/Jul/14 ]

Are you sure it was the "2.4.0-28chaos", but not "2.4.2-11chaos", that wound up with the problematic access ACL? Also, majority (if not all) of the problematic files were created after the server upgrade, weren't they?

You are correct, sorry for the confusion.

Comment by Peter Jones [ 25/Aug/14 ]

Patch http://review.whamcloud.com/11158 landed for 2.7.0

Comment by Jian Yu [ 19/Sep/14 ]

Here is the back-ported patch for the Lustre b2_5 branch: http://review.whamcloud.com/11989

Comment by Roland Fehrenbacher [ 31/Oct/14 ]

This issue is a duplicate of LU-4787. Strange that I added a patch for that, nobody noticed the issue, and then a duplicate was created here. Don't you think it's worthwhile to know about such NULL pointer occurrences, and hence to use the patch from LU-4787?
In the situations where I saw the bug occur, it pointed to data corruption, so I think it's worthwhile to see this in the logs.

Comment by Peter Jones [ 31/Oct/14 ]

Roland

I'm sorry that your patch was missed, but patches have to be submitted via Gerrit. Details are in https://wiki.hpdd.intel.com/display/PUB/Submitting+Changes

Peter

Comment by Roland Fehrenbacher [ 31/Oct/14 ]

Thanks, I will follow that next time. Anyway, what do you think about my question? At least on the installations we manage, we want to know when this happens. In a case that occurred just yesterday, the logs were related to an application problem a user had on the cluster. While I didn't have time to dig into it deeper, it helped to know that something strange must have happened with the directory he was working on.

Comment by Peter Jones [ 31/Oct/14 ]

Well, the LU-5150 patch has landed on both master and b2_5. If you have concerns about this work, then I think the clearest thing is to open a new ticket outlining them.

Comment by Roland Fehrenbacher [ 31/Oct/14 ]

Well, LU-4787 is still open. Doesn't it make sense to work from there?

Comment by Peter Jones [ 31/Oct/14 ]

It is just easier for our triage process to notice new incoming tickets than new comments added to older open tickets.

Comment by Roland Fehrenbacher [ 31/Oct/14 ]

OK, I will add a new one.
