[LU-534] (mds_open.c:1323:mds_open()) ASSERTION(!mds_inode_is_orphan(dchild->d_inode)) failed: -> LBUG Created: 26/Jul/11  Updated: 09/May/12  Resolved: 26/Jan/12

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 1.8.6
Fix Version/s: Lustre 1.8.8

Type: Bug Priority: Critical
Reporter: Frederik Ferner (Inactive) Assignee: Zhenyu Xu
Resolution: Fixed Votes: 0
Labels: None
Environment:

RHEL5 on all affected machines, Lustre exported via NFS


Attachments: Text File lustre-log.1311606830.6854.txt     File lustre-log.1313079467.7339.txt.bz2     File racer-dls.tar.gz    
Severity: 3
Bugzilla ID: 17764
Rank (Obsolete): 6577

 Description   

We hit this LBUG frequently on one of our production file systems and have now managed to reproduce it reliably on our test file system by exporting the Lustre file system via NFS from one Lustre client and running a version of racer on an NFS client inside the exported file system. After a few minutes the LBUG occurs on the MDS. We initially saw this on Lustre 1.6.7.2, then on 1.8.3-ddn3.3, and have now been able to reproduce it on the test file system after upgrading the MDS to 1.8.6-wc1, leaving the OSSes and clients at 1.8.3-ddn3.3 for now.

Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: LustreError: 6854:0:(mds_open.c:1323:mds_open()) ASSERTION(!mds_inode_is_orphan(dchild->d_inode)) failed: dchild 1d2764:0e4d3640 (ffff810429fc2b70) inode ffff81042aabfc30/1910628/239941184
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: LustreError: 6854:0:(mds_open.c:1323:mds_open()) LBUG
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: Pid: 6854, comm: ll_mdt_03
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel:
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: Call Trace:
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff887aa6a1>] libcfs_debug_dumpstack+0x51/0x60 [libcfs]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff887aabda>] lbug_with_loc+0x7a/0xd0 [libcfs]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff88c1d33d>] mds_open+0x26ad/0x38eb [mds]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff889a3461>] ksocknal_launch_packet+0x2b1/0x3a0 [ksocklnd]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff889a4f65>] ksocknal_alloc_tx+0x1f5/0x2a0 [ksocklnd]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff88917491>] lustre_swab_buf+0x81/0x170 [ptlrpc]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff8000d567>] dput+0x2c/0x113
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff88bf40b5>] mds_reint_rec+0x365/0x550 [mds]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff88c1eb3e>] mds_update_unpack+0x1fe/0x280 [mds]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff88be6eca>] mds_reint+0x35a/0x420 [mds]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff88be5dda>] fixup_handle_for_resent_req+0x5a/0x2c0 [mds]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff88bf0bfc>] mds_intent_policy+0x4ac/0xc20 [mds]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff888d8270>] ldlm_resource_putref_internal+0x230/0x460 [ptlrpc]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff888d5eb6>] ldlm_lock_enqueue+0x186/0xb20 [ptlrpc]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff888d27fd>] ldlm_lock_create+0x9bd/0x9f0 [ptlrpc]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff888fa870>] ldlm_server_blocking_ast+0x0/0x83d [ptlrpc]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff888f7b29>] ldlm_handle_enqueue+0xbf9/0x1210 [ptlrpc]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff88befb20>] mds_handle+0x40e0/0x4d10 [mds]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff8008ddcd>] enqueue_task+0x41/0x56
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff8008de38>] __activate_task+0x56/0x6d
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff8891bd55>] lustre_msg_get_conn_cnt+0x35/0xf0 [ptlrpc]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff889256d9>] ptlrpc_server_handle_request+0x989/0xe00 [ptlrpc]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff88925e35>] ptlrpc_wait_event+0x2e5/0x310 [ptlrpc]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff8008c85d>] __wake_up_common+0x3e/0x68
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff88926dc6>] ptlrpc_main+0xf66/0x1120 [ptlrpc]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff8005dfb1>] child_rip+0xa/0x11
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff88925e60>] ptlrpc_main+0x0/0x1120 [ptlrpc]
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: [<ffffffff8005dfa7>] child_rip+0x0/0x11
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel:
Jul 25 16:13:50 cs04r-sc-mds02-03 kernel: LustreError: dumping log to /tmp/lustre-log.1311606830.6854
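
For reference, the check that fires is the orphan-inode assertion in the MDS open path. Below is a hedged reconstruction of roughly what mds_open.c:1323 looks like, inferred from the console message above (the format string and variable names are assumptions, not the verbatim 1.8.6 source). Note that the hex pair 1d2764:0e4d3640 and the decimal pair 1910628/239941184 in the message are the same inode number and generation.

    /* mds_open(), lustre/mds/mds_open.c -- hedged reconstruction from
     * the console output; the real 1.8 source may differ in detail.
     * The message prints the child's ino:generation in hex, the dentry
     * pointer, then the inode pointer and ino/generation in decimal. */
    LASSERTF(!mds_inode_is_orphan(dchild->d_inode),
             "dchild %lx:%08x (%p) inode %p/%lu/%u\n",
             dchild->d_inode->i_ino, dchild->d_inode->i_generation,
             dchild, dchild->d_inode, dchild->d_inode->i_ino,
             dchild->d_inode->i_generation);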

I'll attach the racer scripts and lustre-log.

I'm not sure, but at least the earlier traces looked like they might have been this bug; I'm reporting it here now because I can still reproduce it with 1.8.6-wc1: https://bugzilla.lustre.org/show_bug.cgi?id=17764

[MDS]# cat /proc/fs/lustre/version
lustre: 1.8.6
kernel: patchless_client
build: jenkins-wc1--PRISTINE-2.6.18-238.12.1.el5_lustre.g266a955



 Comments   
Comment by Zhenyu Xu [ 26/Jul/11 ]

Patch tracking at http://review.whamcloud.com/1141

Comment by Frederik Ferner (Inactive) [ 01/Aug/11 ]

I've upgraded the MDS to the kernel/lustre version with the patch. I can still reproduce the problem.

The call trace looks slightly different this time; I'm not sure if that is relevant:

Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: LustreError: 7210:0:(mds_open.c:1323:mds_open()) ASSERTION(!mds_inode_is_orphan(dchild->d_inode)) failed: dchild 1d2773:5b45ac47 (ffff81023a3a8078) inode ffff81023a8f9c30/1910643/1531292743
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: LustreError: 7210:0:(mds_open.c:1323:mds_open()) LBUG
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: Pid: 7210, comm: ll_mdt_31
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel:
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: Call Trace:
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff887aa6a1>] libcfs_debug_dumpstack+0x51/0x60 [libcfs]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff887aabda>] lbug_with_loc+0x7a/0xd0 [libcfs]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff88c1f30d>] mds_open+0x26ad/0x38eb [mds]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff889a3461>] ksocknal_launch_packet+0x2b1/0x3a0 [ksocklnd]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff889a4f65>] ksocknal_alloc_tx+0x1f5/0x2a0 [ksocklnd]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff88917491>] lustre_swab_buf+0x81/0x170 [ptlrpc]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff8000d567>] dput+0x2c/0x113
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff88bf6135>] mds_reint_rec+0x365/0x550 [mds]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff88c20b0e>] mds_update_unpack+0x1fe/0x280 [mds]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff88be8f4a>] mds_reint+0x35a/0x420 [mds]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff88be7e5a>] fixup_handle_for_resent_req+0x5a/0x2c0 [mds]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff88bf2c7c>] mds_intent_policy+0x4ac/0xc20 [mds]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff888d8270>] ldlm_resource_putref_internal+0x230/0x460 [ptlrpc]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff888d5eb6>] ldlm_lock_enqueue+0x186/0xb20 [ptlrpc]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff888d27fd>] ldlm_lock_create+0x9bd/0x9f0 [ptlrpc]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff888fa870>] ldlm_server_blocking_ast+0x0/0x83d [ptlrpc]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff888f7b29>] ldlm_handle_enqueue+0xbf9/0x1210 [ptlrpc]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff88bf1ba0>] mds_handle+0x40e0/0x4d10 [mds]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff800774ed>] smp_send_reschedule+0x4e/0x53
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff8008ddcd>] enqueue_task+0x41/0x56
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff8891bd55>] lustre_msg_get_conn_cnt+0x35/0xf0 [ptlrpc]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff889256d9>] ptlrpc_server_handle_request+0x989/0xe00 [ptlrpc]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff88925e35>] ptlrpc_wait_event+0x2e5/0x310 [ptlrpc]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff8008c85d>] __wake_up_common+0x3e/0x68
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff88926dc6>] ptlrpc_main+0xf66/0x1120 [ptlrpc]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff8005dfb1>] child_rip+0xa/0x11
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff88925e60>] ptlrpc_main+0x0/0x1120 [ptlrpc]
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel: [<ffffffff8005dfa7>] child_rip+0x0/0x11
Jul 29 14:51:28 cs04r-sc-mds02-03 kernel:

I've got the lustre-log file available and can upload it if required.

Comment by Cory Spitz [ 04/Aug/11 ]

When we've seen this bug at Cray, it has always been related to re-exporting over NFS, as described here.

Comment by Frederik Ferner (Inactive) [ 11/Aug/11 ]

I've now reproduced it (using the patched version, build: jenkins-g1b1f5ae-PRISTINE-2.6.18-238.12.1.el5_lustre.gd70e443) after enabling full debugging on the MDS:

lnet.debug = trace inode super ext2 malloc cache info ioctl neterror net warning buffs other dentry nettrace page dlmtrace error emerg ha rpctrace vfstrace reada mmap config console quota sec

The call trace looks slightly different again:

Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: LustreError: 7339:0:(mds_open.c:1323:mds_open()) ASSERTION(!mds_inode_is_orphan(dchild->d_inode)) failed: dchild 1d276d:bf466b4d (ffff810224c7f9c0) inode ffff810425970a70/1910637/3209063245
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: LustreError: 7339:0:(mds_open.c:1323:mds_open()) LBUG
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: Pid: 7339, comm: ll_mdt_11
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel:
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: Call Trace:
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff887aa6a1>] libcfs_debug_dumpstack+0x51/0x60 [libcfs]
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff887aabda>] lbug_with_loc+0x7a/0xd0 [libcfs]
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff88aef30d>] mds_open+0x26ad/0x38eb [mds]
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff88917491>] lustre_swab_buf+0x81/0x170 [ptlrpc]
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff8000d567>] dput+0x2c/0x113
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff88ac6135>] mds_reint_rec+0x365/0x550 [mds]
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff88af0b74>] mds_update_unpack+0x264/0x280 [mds]
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff88ab8f4a>] mds_reint+0x35a/0x420 [mds]
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff88ac2c7c>] mds_intent_policy+0x4ac/0xc20 [mds]
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff888d82e4>] ldlm_resource_putref_internal+0x2a4/0x460 [ptlrpc]
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff888d5eb6>] ldlm_lock_enqueue+0x186/0xb20 [ptlrpc]
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff888fa870>] ldlm_server_blocking_ast+0x0/0x83d [ptlrpc]
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff888f7b29>] ldlm_handle_enqueue+0xbf9/0x1210 [ptlrpc]
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff88ac1ba0>] mds_handle+0x40e0/0x4d10 [mds]
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff887afc7e>] libcfs_nid2str+0xbe/0x110 [libcfs]
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff88922955>] ptlrpc_server_log_handling_request+0x105/0x130 [ptlrpc]
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff889256d9>] ptlrpc_server_handle_request+0x989/0xe00 [ptlrpc]
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff8008e421>] default_wake_function+0x0/0xe
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff88926dc6>] ptlrpc_main+0xf66/0x1120 [ptlrpc]
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff8005dfb1>] child_rip+0xa/0x11
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff88925e60>] ptlrpc_main+0x0/0x1120 [ptlrpc]
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: [<ffffffff8005dfa7>] child_rip+0x0/0x11
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel:
Aug 11 17:17:47 cs04r-sc-mds02-03 kernel: LustreError: dumping log to /tmp/lustre-log.1313079467.7339

I'll attach the decoded lustre-log in the hope that it might be useful.

Comment by Frederik Ferner (Inactive) [ 11/Aug/11 ]

lustre log after LBUG with full debugging enabled.

Comment by Cory Spitz [ 10/Oct/11 ]

FYI, Vladimir S. has reviewed the logs from Frederik and posted an update in bz 17764.

Comment by Vladimir V. Saveliev [ 13/Oct/11 ]

I can reproduce the problem with a set of open()s, read()s, unlink()s and close()s. Details are in https://bugzilla.lustre.org/show_bug.cgi?id=17764#c109
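
To illustrate the pattern (a hedged sketch only -- the actual reproducer is in the bugzilla comment above, and the path, buffer size and loop structure here are invented): several concurrent copies of a loop like the following, run against a file on the NFS re-export of the Lustre file system, exercise the open-unlinked path that leaves an orphan inode on the MDS.

    /* Hedged reproducer sketch; run several instances concurrently
     * on the NFS client against the same file. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
            const char *path = argc > 1 ? argv[1] : "/mnt/nfs/victim";
            char buf[4096];

            for (;;) {
                    int fd = open(path, O_CREAT | O_RDWR, 0644);
                    if (fd < 0) {
                            perror("open");
                            exit(1);
                    }
                    (void)write(fd, "data", 4);
                    (void)pread(fd, buf, sizeof(buf), 0);
                    /* unlink while the fd is still open: the MDS keeps
                     * the file as an orphan until the last close */
                    (void)unlink(path);
                    /* a further read can make the NFS client re-open
                     * the now-orphaned file by filehandle, which is
                     * what trips the assertion on the MDS */
                    (void)pread(fd, buf, sizeof(buf), 0);
                    close(fd);
            }
            return 0;
    }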

Comment by Cory Spitz [ 21/Nov/11 ]

Vladimir has a patch available for inspection at https://bugzilla.lustre.org/attachment.cgi?id=33079 that simply removes what he feels is a bogus assert.

Comment by Frederik Ferner (Inactive) [ 28/Nov/11 ]

Vladimir has an updated patch available for inspection at https://bugzilla.lustre.org/attachment.cgi?id=33110.

Comment by Cory Spitz [ 05/Dec/11 ]

bz 17764 is marked RESOLVED-FIXED with https://bugzilla.lustre.org/attachment.cgi?id=33110 and https://bugzilla.lustre.org/attachment.cgi?id=33121.

Comment by Zhenyu Xu [ 19/Dec/11 ]

Included the bz 17764 patches at http://review.whamcloud.com/1894 (fix patch) and http://review.whamcloud.com/1895 (test patch).
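
For context, the direction of the fix (a hedged sketch, not the verbatim landed diff -- see the gerrit links above for the real change): instead of asserting that the child can never be an orphan, the open path treats an orphaned child as an ordinary error, since an NFS open-by-filehandle can legitimately race with an unlink.

    /* Hedged sketch of the shape of the fix; the landed patch at
     * http://review.whamcloud.com/1894 may differ.  "cleanup" is
     * assumed to be the usual error-unwinding label in mds_open(). */
    if (mds_inode_is_orphan(dchild->d_inode)) {
            CDEBUG(D_INODE, "opening orphan inode %lu/%u\n",
                   dchild->d_inode->i_ino,
                   dchild->d_inode->i_generation);
            GOTO(cleanup, rc = -ENOENT);
    }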

Comment by Frederik Ferner (Inactive) [ 09/Jan/12 ]

In view of Johann's comment on the patch, is it worth testing the patch on our side? If so, is there an rpm with the patch available for RHEL5 anywhere? The link to the autobuilt rpm on the review page is returning a 404 for me.

Comment by Zhenyu Xu [ 09/Jan/12 ]

I've pushed it for another build, and I will also try to reproduce it.

Comment by Build Master (Inactive) [ 16/Jan/12 ]

Integrated in lustre-b1_8 » x86_64,client,ubuntu1004,inkernel #166
LU-534 mds: correct assertion (Revision 069d0b6393841bf2adbef7e834919fa52310b664)
LU-534 test: nfsread_orphan_file test (Revision 66cd9a73abc2f075abf7ce78215a1d0cb5038a62)

Result = SUCCESS
Johann Lombardi : 069d0b6393841bf2adbef7e834919fa52310b664
Files :

  • lustre/mds/mds_open.c

Johann Lombardi : 66cd9a73abc2f075abf7ce78215a1d0cb5038a62
Files :

  • lustre/tests/test-framework.sh
  • lustre/tests/parallel-scale.sh
  • lustre/tests/replay-vbr.sh

Comment by Build Master (Inactive) [ 16/Jan/12 ]

The same two changes, 069d0b6393841bf2adbef7e834919fa52310b664 (LU-534 mds: correct assertion) and 66cd9a73abc2f075abf7ce78215a1d0cb5038a62 (LU-534 test: nfsread_orphan_file test), touching the same files, were also integrated with Result = SUCCESS in the remaining lustre-b1_8 #166 build configurations: i686,client,el6,inkernel; x86_64,client,el5,ofa; x86_64,client,el5,inkernel; x86_64,client,el6,inkernel; i686,client,el5,inkernel; i686,client,el5,ofa; x86_64,server,el5,inkernel; x86_64,server,el5,ofa; i686,server,el5,inkernel; i686,server,el5,ofa.
Comment by Peter Jones [ 16/Jan/12 ]

Frederik

It looks like you can now go ahead and test the fix. The RPMs can be obtained at http://build.whamcloud.com/job/lustre-reviews/4163/

Regards

Peter

Comment by Peter Jones [ 26/Jan/12 ]

Frederik

Have you had a chance to test out this fix yet? If not, when do you expect to have an opportunity to do so?

Please advise

Peter

Comment by Frederik Ferner (Inactive) [ 26/Jan/12 ]

Peter,

apologies for my late reply.

I've been trying to reproduce this bug on my test system using the unpatched version of Lustre, and it seems I have lost the ability to reproduce it. I'm not sure what has changed on our side, though. I'll keep trying, and I've downloaded the RPMs with the fix so I'll have them available locally once I can reproduce it.

Kind regards,
Frederik

Comment by Peter Jones [ 26/Jan/12 ]

OK Frederik, then let's close this ticket for now and reopen it if you find that the problem recurs in the future and the patch does not address it.
