[LU-5565] (osd_handler.c:1959:osd_attr_set()) ASSERTION( dt_object_exists(dt) && !dt_object_remote(dt) ) failed Created: 29/Aug/14  Updated: 03/Feb/16  Resolved: 03/Feb/16

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.7.0
Fix Version/s: Lustre 2.7.0

Type: Bug Priority: Blocker
Reporter: John Hammond Assignee: John Hammond
Resolution: Fixed Votes: 0
Labels: MB, dne2, mdt, out

Attachments: File console-lola-2.log.bz2     File lustre-log-lu-5565.bz2     File lustre-log.1454409077.7931.bz2     File lustre-log.1454409090.7841.bz2     File messages-lola-2.log.bz2    
Issue Links:
Duplicate
duplicates LU-6100 LustreError: 4256:0:(osd_handler.c:19... Closed
Related
is related to LU-2821 osd_handler.c:2482:osd_declare_object... Resolved
Severity: 3
Rank (Obsolete): 15517

 Description   

I see this running racer with MDSCOUNT=4. I simplified it a bit.

# export MDSCOUNT=4
# export MOUNT_2=y
# llmount.sh
...
# cd /mnt/lustre
# while true; do lfs mkdir -c4 d0; sys_open d0/f0 cw; rm -rf d0; done &
# cd /mnt/lustre2
# while true; do setfattr -n user.0 -v 0 d0; done
u login: [ 6081.349963] LustreError: 5860:0:(osd_handler.c:1959:osd_attr_set()) ASSERTION( dt_object_exists(dt) && !dt_object_remote(dt) ) failed:
[ 6081.354491] LustreError: 5860:0:(osd_handler.c:1959:osd_attr_set()) LBUG
[ 6081.356572] Pid: 5860, comm: ll_ost_out01_00
[ 6081.357962]
[ 6081.357964] Call Trace:
[ 6081.359351]  [<ffffffffa02be8c5>] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
[ 6081.361546]  [<ffffffffa02beec7>] lbug_with_loc+0x47/0xb0 [libcfs]
[ 6081.363579]  [<ffffffffa0b2d351>] osd_attr_set+0x1f1/0x540 [osd_ldiskfs]
[ 6081.365686]  [<ffffffffa0b1f07f>] ? osd_object_write_lock+0x9f/0x130 [osd_ldiskfs]
[ 6081.367802]  [<ffffffffa06fb838>] out_tx_attr_set_exec+0x208/0x370 [ptlrpc]
[ 6081.369162]  [<ffffffffa06f758a>] out_tx_end+0xda/0x5c0 [ptlrpc]
[ 6081.370361]  [<ffffffffa06ff0b9>] out_handle+0x5e9/0xdf0 [ptlrpc]
[ 6081.371583]  [<ffffffffa069207c>] ? lustre_msg_get_opc+0x9c/0x110 [ptlrpc]
[ 6081.372926]  [<ffffffffa06f461e>] tgt_request_handle+0x71e/0xb10 [ptlrpc]
[ 6081.374258]  [<ffffffffa06a4a67>] ptlrpc_main+0xe27/0x1980 [ptlrpc]
[ 6081.375509]  [<ffffffffa06a3c40>] ? ptlrpc_main+0x0/0x1980 [ptlrpc]
[ 6081.376702]  [<ffffffff8109eab6>] kthread+0x96/0xa0
[ 6081.377643]  [<ffffffff8100c30a>] child_rip+0xa/0x20
[ 6081.378592]  [<ffffffff81554710>] ? _spin_unlock_irq+0x30/0x40
[ 6081.379716]  [<ffffffff8100bb10>] ? restore_args+0x0/0x30
[ 6081.380747]  [<ffffffff8109ea20>] ? kthread+0x0/0xa0
[ 6081.381702]  [<ffffffff8100c300>] ? child_rip+0x0/0x20
[ 6081.382681]
[ 6081.383251] Kernel panic - not syncing: LBUG

Here are some stack traces from a different occurrence.

PID: 9780   TASK: ffff8801d69708c0  CPU: 0   COMMAND: "setfattr"
 #0 [ffff8801d6bcf6f8] schedule at ffffffff81551457
 #1 [ffff8801d6bcf7c0] schedule_timeout at ffffffff81552373
 #2 [ffff8801d6bcf890] ptlrpc_set_wait at ffffffffa0688c4a [ptlrpc]
 #3 [ffff8801d6bcf930] ptlrpc_queue_wait at ffffffffa0689217 [ptlrpc]
 #4 [ffff8801d6bcf950] mdc_xattr_common at ffffffffa0912d53 [mdc]
 #5 [ffff8801d6bcf9e0] mdc_setxattr at ffffffffa0913146 [mdc]
 #6 [ffff8801d6bcfa30] lmv_setxattr at ffffffffa08ca50b [lmv]
 #7 [ffff8801d6bcfac0] ll_setxattr_common at ffffffffa0e746f0 [lustre]
 #8 [ffff8801d6bcfb80] ll_setxattr at ffffffffa0e75054 [lustre]
 #9 [ffff8801d6bcfd10] __vfs_setxattr_noperm at ffffffff811caeee
#10 [ffff8801d6bcfd70] vfs_setxattr at ffffffff811cb0c4
#11 [ffff8801d6bcfdc0] setxattr at ffffffff811cb1a0
#12 [ffff8801d6bcff10] sys_setxattr at ffffffff811cb488
#13 [ffff8801d6bcff80] system_call_fastpath at ffffffff8100b072
    RIP: 000000377fee53c9  RSP: 00007fff08cfd300  RFLAGS: 00010206
    RAX: 00000000000000bc  RBX: ffffffff8100b072  RCX: 00000000014f0100
    RDX: 00000000014f0000  RSI: 00007fff08cff703  RDI: 00007fff08cff70f
    RBP: 00007fff08cff703   R8: 0000000000000000   R9: 0000000000000000
    R10: 0000000000000001  R11: 0000000000000246  R12: 0000000000000000
    R13: 0000000000000000  R14: 00007fff08cfd580  R15: 00000000014f0000
    ORIG_RAX: 00000000000000bc  CS: 0033  SS: 002b

PID: 3693   TASK: ffff880202176500  CPU: 0   COMMAND: "mdt00_000"
 #0 [ffff880202179790] schedule at ffffffff81551457
 #1 [ffff880202179858] schedule_timeout at ffffffff81552373
 #2 [ffff880202179928] ptlrpc_set_wait at ffffffffa0688c4a [ptlrpc]
 #3 [ffff8802021799c8] ptlrpc_queue_wait at ffffffffa0689217 [ptlrpc]
 #4 [ffff8802021799e8] out_remote_sync at ffffffffa07001b1 [ptlrpc]
 #5 [ffff880202179a38] osp_trans_trigger at ffffffffa0d9e4aa [osp]
 #6 [ffff880202179a78] osp_trans_start at ffffffffa0d9f914 [osp]
 #7 [ffff880202179aa8] lod_trans_start at ffffffffa0d27c11 [lod]
 #8 [ffff880202179ae8] mdd_trans_start at ffffffffa0c1c067 [mdd]
 #9 [ffff880202179af8] mdd_unlink at ffffffffa0c0175a [mdd]
#10 [ffff880202179bb8] mdo_unlink at ffffffffa0c6a688 [mdt]
#11 [ffff880202179bc8] mdt_reint_unlink at ffffffffa0c74468 [mdt]
#12 [ffff880202179c78] mdt_reint_rec at ffffffffa0c6a421 [mdt]
#13 [ffff880202179c98] mdt_reint_internal at ffffffffa0c4fc63 [mdt]
#14 [ffff880202179cd8] mdt_reint at ffffffffa0c504cb [mdt]
#15 [ffff880202179d18] tgt_request_handle at ffffffffa06f461e [ptlrpc]
#16 [ffff880202179d78] ptlrpc_main at ffffffffa06a4a67 [ptlrpc]
#17 [ffff880202179eb8] kthread at ffffffff8109eab6
#18 [ffff880202179f48] kernel_thread at ffffffff8100c30a

PID: 4642   TASK: ffff8801e59707c0  CPU: 1   COMMAND: "mdt00_004"
 #0 [ffff8801e856f7c0] schedule at ffffffff81551457
 #1 [ffff8801e856f888] schedule_timeout at ffffffff81552373
 #2 [ffff8801e856f958] ptlrpc_set_wait at ffffffffa0688c4a [ptlrpc]
 #3 [ffff8801e856f9f8] ptlrpc_queue_wait at ffffffffa0689217 [ptlrpc]
 #4 [ffff8801e856fa18] out_remote_sync at ffffffffa07001b1 [ptlrpc]
 #5 [ffff8801e856fa68] osp_trans_trigger at ffffffffa0d9e4aa [osp]
 #6 [ffff8801e856faa8] osp_trans_start at ffffffffa0d9f914 [osp]
 #7 [ffff8801e856fad8] lod_trans_start at ffffffffa0d27c11 [lod]
 #8 [ffff8801e856fb18] mdd_trans_start at ffffffffa0c1c067 [mdd]
 #9 [ffff8801e856fb28] mdd_attr_set at ffffffffa0c14985 [mdd]
#10 [ffff8801e856fba8] mo_attr_set at ffffffffa0c74bc8 [mdt]
#11 [ffff8801e856fbb8] mdt_reint_setxattr at ffffffffa0c75740 [mdt]
#12 [ffff8801e856fc78] mdt_reint_rec at ffffffffa0c6a421 [mdt]
#13 [ffff8801e856fc98] mdt_reint_internal at ffffffffa0c4fc63 [mdt]
#14 [ffff8801e856fcd8] mdt_reint at ffffffffa0c504cb [mdt]
#15 [ffff8801e856fd18] tgt_request_handle at ffffffffa06f461e [ptlrpc]
#16 [ffff8801e856fd78] ptlrpc_main at ffffffffa06a4a67 [ptlrpc]
#17 [ffff8801e856feb8] kthread at ffffffff8109eab6
#18 [ffff8801e856ff48] kernel_thread at ffffffff8100c30a

PID: 4225   TASK: ffff8801ed6c87c0  CPU: 1   COMMAND: "ll_ost_out00_00"
 #0 [ffff8801ed6cb998] machine_kexec at ffffffff81039950
 #1 [ffff8801ed6cb9f8] crash_kexec at ffffffff810d4372
 #2 [ffff8801ed6cbac8] panic at ffffffff81550d83
 #3 [ffff8801ed6cbb48] lbug_with_loc at ffffffffa02bef1b [libcfs]
 #4 [ffff8801ed6cbb68] osd_attr_set at ffffffffa0b2d351 [osd_ldiskfs]
 #5 [ffff8801ed6cbbc8] out_tx_attr_set_exec at ffffffffa06fb838 [ptlrpc]
 #6 [ffff8801ed6cbc08] out_tx_end at ffffffffa06f758a [ptlrpc]
 #7 [ffff8801ed6cbc58] out_handle at ffffffffa06ff0b9 [ptlrpc]
 #8 [ffff8801ed6cbd18] tgt_request_handle at ffffffffa06f461e [ptlrpc]
 #9 [ffff8801ed6cbd78] ptlrpc_main at ffffffffa06a4a67 [ptlrpc]
#10 [ffff8801ed6cbeb8] kthread at ffffffff8109eab6
#11 [ffff8801ed6cbf48] kernel_thread at ffffffff8100c30a

I'm not sure why the handler is ll_ost_out00_00, but I checked and that task is handling an OUT_UPDATE RPC from lustre-MDT0000 to lustre-MDT0002.

Note that the setattr is for the ctime on d0. Should this ctime update really go to each stripe? If so then shouldn't we lock the stripes as we do from mdt_attr_set()?



 Comments   
Comment by Andreas Dilger [ 22/Oct/14 ]

I hit this same LASSERT running racer on my single-node test system (2x1GB MDT, 3x4GB OST) as part of acceptance-small.sh using v2_6_54_0-16-g0024956:

LustreError: 1492:0:(osd_handler.c:1963:osd_attr_set()) ASSERTION( dt_object_exists(dt) && !dt_object_remote(dt) ) failed: 
Pid: 1492, comm: ll_ost_out00_00
libcfs_debug_dumpstack+0x55/0x80 [libcfs]
lbug_with_loc+0x47/0xb0 [libcfs]
osd_attr_set+0x197/0x4e0 [osd_ldiskfs]
out_tx_attr_set_exec+0x260/0x3f0 [ptlrpc]
out_tx_end+0xda/0x5c0 [ptlrpc]
out_handle+0x7c0/0xe50 [ptlrpc]
tgt_request_handle+0x71e/0xb10 [ptlrpc]
ptlrpc_main+0xe64/0x1990 [ptlrpc]
kthread+0x96/0xa0
Comment by Andreas Dilger [ 22/Oct/14 ]

I've pushed a cleanup/debug patch http://review.whamcloud.com/12398 to at least split this LASSERT() into two separate ones, so that it is easier to see which one is failing. This is not expected to fix the problem here.
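
For illustration, the split in question would look roughly like this at the top of osd_attr_set(); this is a sketch of the idea only, not the verbatim patch 12398:

    /* Separate assertions so the console message identifies which of
     * the two conditions actually failed. */
    LASSERT(dt_object_exists(dt));
    LASSERT(!dt_object_remote(dt));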

Comment by John Hammond [ 23/Oct/14 ]

dt_object_exists() is failing.

Comment by Andreas Dilger [ 27/Oct/14 ]

I hit this problem again, with the same stack in out_handle() as my previous occurrence.

Comment by John Hammond [ 06/Nov/14 ]

Can we replace this with error handling?

Comment by Alex Zhuravlev [ 06/Nov/14 ]

Yes, in out_tx_attr_set_exec(), once the lock is taken, we should check whether the object still exists.
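
A minimal sketch of that idea; the surrounding OUT-layer signatures and variable names are assumed here rather than taken from the actual code, and the dt_attr_set() arguments are simplified:

    /* Sketch only: check existence after the write lock is taken in
     * out_tx_attr_set_exec(), so a concurrent unlink becomes -ENOENT
     * instead of tripping the LASSERT in osd_attr_set(). */
    dt_write_lock(env, dt_obj, MOR_TGT_CHILD);
    if (!dt_object_exists(dt_obj))
            rc = -ENOENT;           /* object unlinked under us */
    else
            rc = dt_attr_set(env, dt_obj, attr, th);
    dt_write_unlock(env, dt_obj);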

Comment by John Hammond [ 06/Nov/14 ]

Please see http://review.whamcloud.com/12608.

Comment by Di Wang [ 07/Nov/14 ]

"Note that the setattr is for the ctime on d0. Should this ctime update really go to each stripe? If so then shouldn't we lock the stripes as we do from mdt_attr_set()?"

John, I think you are right: we should lock the object both for setting xattrs and attrs. And the ctime does not need to go to every stripe, but:

1. lod_attr_set needs to be fixed, i.e. the time attribute update should only go to the master object.
2. lmv_merge_attr needs to be fixed, i.e. it needs to take the master object's times into account as well; right now it only merges the time attributes from all of the sub-stripes.

Comment by Di Wang [ 07/Nov/14 ]

> Can we replace this with error handling?

Please do it in OSD, so that local and remote objects get equal treatment.
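
In other words, the check would live in the OSD method itself, roughly along these lines (a hedged sketch of the approach, not the code from patch 12608):

    /* At the top of osd_attr_set() (and the other affected methods),
     * degrade the fatal assertion into an error return.
     * Surrounding function body omitted. */
    if (!dt_object_exists(dt))
            return -ENOENT;         /* lost the race with unlink/destroy */
    LASSERT(!dt_object_remote(dt));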

Comment by Di Wang [ 14/Jan/15 ]

The real reason for this LBUG is a race between close (unlink orphan) and setxattr (or setattr), because close is not protected by an LDLM lock. In the current implementation (mdd_close()), we use a local lock (mdd_write_lock) to protect the unlink-orphan process in the MDD layer, which is fine if the directory is a local directory. But if the directory is a striped directory, the local lock might not be enough, because the unlink orphan (triggered by close) will delete the stripes on the other MDTs as well. If another thread does setxattr (or setattr) on this striped directory at the same time, it causes this LBUG.
There are a few options to fix this problem:

1. Protect the close process with an LDLM lock.
2. "Open" the file before setxattr (or setattr), of course only on the server side in the MDD layer, then "close" the file after setxattr (or setattr), which might trigger the unlink orphan as well.

Either of them would need non-trivial changes, so probably not a good choice for 2.7. Any other suggestions?

Comment by John Hammond [ 14/Jan/15 ]

Isn't there an (already broken) rule against taking LDLM locks in close? From mdc_close():

        /* To avoid a livelock (bug 7034), we need to send CLOSE RPCs to a                
         * portal whose threads are not taking any DLM locks and are therefore            
         * always progressing */
        req->rq_request_portal = MDS_READPAGE_PORTAL;
        ptlrpc_at_set_req_timeout(req);

Why not fix this by handling the race?

Comment by Alex Zhuravlev [ 14/Jan/15 ]

Using an LDLM lock has additional benefits, as the object can be accessed by another node and LDLM is the natural mechanism to control access.

Comment by Andreas Dilger [ 14/Jan/15 ]

Couldn't we just return -ENOENT in this case? The setxattr() call is path-based, and not file descriptor based, so there is no expectation that it is free of races. If the object is deleted then it is perfectly fine to return an error to the caller.

Comment by Di Wang [ 14/Jan/15 ]

I had thought the same thing, but then I saw LU-6027 and discussed it with Li Wei. The problem is: if the application opens the file, holds the file handle, and then does setxattr, should we expect success or failure? It does succeed on ext4. I do not know whether POSIX requires this or not. Also, if we return -ENOENT here, Li Wei needs to change his new "racer" program and disable sanityn.sh test 82.

Comment by John Hammond [ 14/Jan/15 ]

> I had thought the same thing, but then I saw LU-6027 and discussed it with Li Wei. The problem is: if the application opens the file, holds the file handle, and then does setxattr, should we expect success or failure? It does succeed on ext4. I do not know whether POSIX requires this or not. Also, if we return -ENOENT here, Li Wei needs to change his new "racer" program and disable sanityn.sh test 82.

setxattr() or fsetxattr()?

Comment by Di Wang [ 14/Jan/15 ]

I meant that Li Wei used this file-handle approach to set xattrs in his new "racer" test program. I am not sure which call he used. But if we return ENOENT here, it will definitely cause sanityn.sh test 82 to fail.

Comment by John Hammond [ 14/Jan/15 ]

We should be clear about that before we add LDLM locks to close because of it.

Note that http://review.whamcloud.com/#/c/12608/ does not cause -ENOENT to be returned in the following situation (a minimal C sketch of the sequence is shown after the steps).

  1. lfs mkdir -c4 d0
  2. fd = open("d0", O_RDONLY)
  3. rmdir("d0")
  4. fsetxattr(fd, "user.foo", "bar", 3, 0)
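
Below is a minimal userspace sketch of steps 2-4, assuming the striped directory d0 was already created with "lfs mkdir -c4 d0" (step 1). It is not the actual racer helper, just the same open/rmdir/fsetxattr ordering expressed with plain POSIX calls:

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/xattr.h>
  #include <unistd.h>

  int main(void)
  {
          int fd = open("d0", O_RDONLY);          /* step 2: hold a handle */
          if (fd < 0) {
                  perror("open");
                  return 1;
          }
          if (rmdir("d0") < 0)                    /* step 3: remove the directory */
                  perror("rmdir");
          /* step 4: set an xattr through the still-open handle */
          if (fsetxattr(fd, "user.foo", "bar", 3, 0) < 0)
                  perror("fsetxattr");
          close(fd);
          return 0;
  }
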
Comment by Di Wang [ 15/Jan/15 ]

Yes, right now it does not return ENOENT, because the file is still open, and we allow this in the current implementation. So would it be bad if we just returned ENOENT for fsetxattr in this case?

Comment by John Hammond [ 15/Jan/15 ]

> Yes, right now it does not return ENOENT, because the file is still open, and we allow this in the current implementation. So would it be bad if we just returned ENOENT for fsetxattr in this case?

Why would we do that?

Comment by Di Wang [ 15/Jan/15 ]

I mean, if we return ENOENT for an unlinked object during setattr or setxattr, then we do not need to worry about orphans.

Comment by John Hammond [ 15/Jan/15 ]

Why not use http://review.whamcloud.com/#/c/12608/?

Comment by Di Wang [ 15/Jan/15 ]

It is a different problem. I thought we were talking about resolving the race, and http://review.whamcloud.com/12608 is to avoid the panic, which actually just covers up the problem somewhat, IMHO.

Comment by John Hammond [ 15/Jan/15 ]

It is the same problem. http://review.whamcloud.com/#/c/12608 handles the race.

Comment by Di Wang [ 15/Jan/15 ]

Hmm, I thought we were supposed to use an LDLM lock to protect against a race like this, or simply not get into setattr or setxattr once the object is being unlinked.

I also thought the object should exist once the modification gets into the OSD/OSP layer, because the MDT layer is supposed to do the sanity checks and hold the LDLM lock. There are some exceptions for now, like close. I thought we should resolve this in the MDT layer, instead of "covering" it in OSD/OSP.

Comment by Gerrit Updater [ 16/Jan/15 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/12608/
Subject: LU-5565 osd-ldiskfs: handle non-existing objects
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 30b0b011dfd53faf8f903401fa0539a8d625b0e5

Comment by John Hammond [ 16/Jan/15 ]

Note that osd-zfs still needs to be fixed.

Comment by Jodi Levi (Inactive) [ 16/Jan/15 ]

Patch landed to Master.

Comment by Gerrit Updater [ 22/Jan/15 ]

John L. Hammond (john.hammond@intel.com) uploaded a new patch: http://review.whamcloud.com/13496
Subject: LU-5565 osd-ldiskfs: handle nonexisting objects
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 0b4fd760b34b1ded5e8fee6db07a5a0f0291389d

Comment by nasf (Inactive) [ 28/Jan/16 ]

We hit the same problem with the ZFS backend on lola. We need to enhance the patch http://review.whamcloud.com/#/c/12608/ for the ZFS case.

Jan 27 05:49:12 lola-5 kernel: LustreError: 22690:0:(osd_object.c:925:osd_attr_set()) ASSERTION( dt_object_exists(dt) ) failed:
Jan 27 05:49:12 lola-5 kernel: LustreError: 22690:0:(osd_object.c:925:osd_attr_set()) LBUG
Comment by Alex Zhuravlev [ 28/Jan/16 ]

This is done in http://review.whamcloud.com/#/c/18024/ and http://review.whamcloud.com/#/c/18155/.

Comment by Frank Heckes (Inactive) [ 02/Feb/16 ]

For build '20160201' (master), executed during soak testing (see: https://wiki.hpdd.intel.com/display/Releases/Soak+Testing+on+Lola#SoakTestingonLola-20160201),
the error happened again and persists even after a reboot of the node, so the affected
OSS node and its OSTs become unusable.
The MDTs were formatted with ldiskfs, the OSTs with ZFS.

Sequence of events:

  • Feb 2 01:32:18 2016: lfsck started on mds-0 (lola-8) using the command 'lctl lfsck_start -M soaked-MDT0000 -A'.
    OSS node lola-2 crashed with the LBUG (see the error message after the 'Feb 2 01:30:01' time stamp in the console log).
  • Feb 2 01:50:01 2016: lola-2 rebooted; after the OSTs were remounted the node crashed with the same LBUG
    (see the error message after 'Feb 2 01:50:01 lola-2 TIME: Time stamp for console' in the console log).
  • Several attempts to simply reboot the node and remount again.
  • Feb 2 02:13:35 PST 2016: Execution of 'lctl lfsck_stop -M soaked-MDT0000 -A' completed successfully. No OSTs mounted on lola-2.
  • Feb 2 02:15:01 2016: Mounted the OSTs on lola-2. The node crashed almost immediately with the same LBUG again. Forced creation of a debug log with 'lfsck' enabled and a 512M buffer size
    (see attached file: lustre-log-lu-5565).
  • Feb 2 02:31 2016: Rebooted and re-mounted the OSTs once more with 'panic_on_lbug', lfsck enabled, and a 512M debug buffer.
    Explicitly forced a stack trace.
    Debug logs were also written after the LBUG appeared again (almost immediately). See:
    lustre-log.1454409077.7931
    lustre-log.1454409090.7841
  • The OSS node can't be used as an FS resource anymore.

Attached files:

  • lola-2 messages and console log
  • debug log taken after lfsck was stopped and the OSTs remounted
  • lustre-log.1454409077.7931, lustre-log.1454409090.7841 - debug logs of the server in its current state
    when mounting the OSTs
Comment by Peter Jones [ 03/Feb/16 ]

Please open a new ticket to track any further work that is needed in this area for 2.8.

Generated at Sat Feb 10 01:52:35 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.