Lustre / LU-9068

Hardware problem resulting in bad blocks

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Blocker
    • Fix Version/s: None
    • Affects Version/s: Lustre 2.5.5
    • Labels: None
    • Severity: 1

    Description

      We encountered a hardware problem on the MDT storage device (DDN 7700) that resulted in bad blocks. The file system continued to operate, but yesterday it went read-only when it stumbled over a bad sector.

      We ran fsck against the file system with the most current e2fsprogs, which repaired the file system but dumped 90 objects/files into lost+found. All but two belonged to one user, but one of the files/objects belongs to root and has a low inode number (#5749) that appears to be a data file.

      We are very concerned that this particular file may be Lustre-relevant and would like your guidance on what we should do. (Obviously, we are able to mount the file system as ldiskfs.)
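
      For reference, a minimal sketch of how such a recovered object can be examined while the MDT is mounted as ldiskfs. The device path and mount point are hypothetical; the #5749 name is the inode number mentioned above (e2fsck names lost+found entries by inode number):

        # mount the MDT read-only as ldiskfs to avoid further changes
        mount -t ldiskfs -o ro /dev/mapper/mdt0 /mnt/mdt
        # list the recovered objects by inode number
        ls -li /mnt/mdt/lost+found
        # dump the trusted.lma xattr, which holds the object's Lustre FID
        getfattr -e hex -n trusted.lma '/mnt/mdt/lost+found/#5749'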

          Activity

            [LU-9068] Hardware problem resulting in bad blocks
            pjones Peter Jones added a comment -

            ok - thanks Ruth


            ruth.klundt@gmail.com Ruth Klundt (Inactive) added a comment -

            As Joe mentioned, we had already replaced the file and moved on to errors on some other log files.

            We finally removed the CATALOGS file and rebooted. This got us past the LBUG and the file system is back, minus a few items lost due to hardware.

            Ok to close.
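
            For the record, a rough sketch of the CATALOGS removal described above, with hypothetical device and mount paths (the llog catalog is moved aside rather than deleted outright):

            # with Lustre stopped on the MDS, mount the MDT as ldiskfs
            mount -t ldiskfs /dev/mapper/mdt0 /mnt/mdt
            mv /mnt/mdt/CATALOGS /mnt/mdt/CATALOGS.bad
            umount /mnt/mdt
            # on the next Lustre mount, a fresh llog catalog should be recreated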

            adilger Andreas Dilger added a comment - edited

            Sorry, I didn't see your reply until now. Applying the patch to return the error from osp_sync_thread() is the proper fix. You may be able to work around this by creating an empty O/1/105729 file on the MDT (using decimal object ID based on error messages), but it may be that this will also return an error message if the content is bad, instead of just a missing file.
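
            A hedged sketch of that workaround (device and mount paths are examples). Note that 0x19d01 from the console log is 105729 in decimal, and, following the O/1/d11/105579 naming in the comment further down, llog objects appear to live in a dN subdirectory where N is the object ID modulo 32, i.e. d1 here:

            # with Lustre stopped, mount the MDT as ldiskfs
            mount -t ldiskfs /dev/mapper/mdt0 /mnt/mdt
            # create an empty placeholder for the missing llog object
            touch /mnt/mdt/O/1/d1/105729
            umount /mnt/mdt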

            jamervi Joe Mervini added a comment -

            We're looking for a workaround to get the file system back up again.

            jamervi Joe Mervini added a comment -

            We're running the TOSS version of 2.5.5.

            adilger Andreas Dilger added a comment - edited

            The error being reported on the MDS is for a different llog file 0x19cee than the one in lost+found, which is 0x19c6b.

            What version of Lustre are you running? It looks like this LBUG (ASSERTION( rc == 0 || rc == LLOG_PROC_BREAK )) is a duplicate of LU-6696, which was fixed in Lustre 2.9.0 (patch http://review.whamcloud.com/19856).
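
            For reference, a quick way to confirm the running server version:

            lctl get_param version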

            jamervi Joe Mervini added a comment - edited

            We brought down all the OSSs and OSTs and rebooted them. We brought up the MDS and MDT, which looked happy. When I started bringing the OSTs back online, it LBUGed against another OST.

            [ 1549.368141] Lustre: gscratch-MDT0000: trigger OI scrub by RPC for [0x1:0x19d01:0x0], rc = 0 [1]
            [ 1549.463828] LustreError: 5476:0:(llog_cat.c:195:llog_cat_id2handle()) gscratch-OST0036-osc-MDT0000: error opening log id 0x19d01:1:0: rc = -115
            [ 1549.614586] LustreError: 5476:0:(llog_cat.c:586:llog_cat_process_cb()) gscratch-OST0036-osc-MDT0000: cannot find handle for llog 0x19d01:1: -115
            [ 1549.765765] LustreError: 5476:0:(osp_sync.c:872:osp_sync_thread()) ASSERTION( rc == 0 || rc == LLOG_PROC_BREAK ) failed: 19 changes, 7 in progress, 0 in flight: -115
            [ 1549.919365] LustreError: 5476:0:(osp_sync.c:872:osp_sync_thread()) LBUG
            [ 1549.996451] Pid: 5476, comm: osp-syn-54-0
            

            By the way, this is Lustre 2.5.5, not 2.7.

            jamervi Joe Mervini added a comment -

            Note: All the OSSs have remained up with all OSTs mounted.

            jamervi Joe Mervini added a comment - edited

            We rebooted the node and started Lustre. The file system mounted and was in the process of connecting clients when it panicked with an LBUG. (This was before trying to rename the file.)

            Jan 31 11:51:57 gmds1 kernel: [944595.780330] LustreError: 86284:0:(ldlm_lib.c:2324:target_queue_recovery_request()) @@@ dropping resent queued req  req@ffff880bd2005800 x1557048164247136/t0(177528794042) o35->0e269f32-c7c8-9037-e29a-c1f762282b43@10.1.9.69@o2ib:0/0 lens 392/0 e 0 to 0 dl 1485888778 ref 2 fl Interpret:/6/ffffffff rc 0/-1
            Jan 31 11:51:57 gmds1 kernel: [944596.013130] LustreError: 86284:0:(ldlm_lib.c:2324:target_queue_recovery_request()) Skipped 202 previous similar messages
            Jan 31 11:52:12 gmds1 kernel: [944610.729259] Lustre: gscratch-MDT0000: Client 5a2a05db-5f33-aecf-9cce-44f4bfe225ab (at 10.1.6.230@o2ib) reconnecting, waiting for 7326 clients in recovery for 3:28
            Jan 31 11:52:12 gmds1 kernel: [944610.887017] Lustre: Skipped 431 previous similar messages
            Jan 31 11:52:14 gmds1 kernel: [944612.182223] LustreError: 85847:0:(ldlm_lib.c:2324:target_queue_recovery_request()) @@@ dropping resent queued req  req@ffff880c1c85c850 x1556994744167148/t0(177528744813) o35->ea447cc5-9dd6-7b57-077b-df9f767a42e4@10.1.1.156@o2ib:0/0 lens 392/0 e 0 to 0 dl 1485888795 ref 2 fl Interpret:/6/ffffffff rc 0/-1
            Jan 31 11:52:14 gmds1 kernel: [944612.415138] LustreError: 85847:0:(ldlm_lib.c:2324:target_queue_recovery_request()) Skipped 127 previous similar messages
            Jan 31 11:52:14 gmds1 kernel: [944612.589958] Lustre: gscratch-MDT0000: Denying connection for new client 4bb27966-87a7-7a95-6e3e-1007a28f70bd (at 10.1.7.205@o2ib), waiting for all 7326 known clients (2152 recovered, 1857 in progress, and 0 evicted) to recover in 3:26
            Jan 31 11:52:14 gmds1 kernel: [944612.756886] Lustre: Skipped 12 previous similar messages
            Jan 31 11:52:14 gmds1 kernel: [944612.896489] LustreError: 86016:0:(llog_cat.c:195:llog_cat_id2handle()) gscratch-OST004a-osc-MDT0000: error opening log id 0x19cee:1:0: rc = -2
            Jan 31 11:52:15 gmds1 kernel: [944613.046320] LustreError: 86016:0:(llog_cat.c:586:llog_cat_process_cb()) gscratch-OST004a-osc-MDT0000: cannot find handle for llog 0x19cee:1: -2
            Jan 31 11:52:24 gmds1 kernel: [944622.532283] LustreError: 86182:0:(llog_osd.c:757:llog_osd_next_block()) gscratch-MDT0000-osd: missed desired record? 45953 > 39475
            Jan 31 11:52:24 gmds1 kernel: [944622.681106] LustreError: 86182:0:(osp_sync.c:872:osp_sync_thread()) ASSERTION( rc == 0 || rc == LLOG_PROC_BREAK ) failed: 0 changes, 6 in progress, 0 in flight: -2
            Jan 31 11:52:24 gmds1 kernel: [944622.833222] LustreError: 86182:0:(osp_sync.c:872:osp_sync_thread()) LBUG
            Jan 31 11:52:24 gmds1 kernel: [944622.908388] Pid: 86182, comm: osp-syn-89-0
            Jan 31 11:52:24 gmds1 kernel: [944622.979766]
            Jan 31 11:52:24 gmds1 kernel: [944622.979768] Call Trace:
            Jan 31 11:52:25 gmds1 kernel: [944623.113187]  [<ffffffffa05ed8f5>] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
            Jan 31 11:52:25 gmds1 kernel: [944623.185016]  [<ffffffffa05edef7>] lbug_with_loc+0x47/0xb0 [libcfs]
            Jan 31 11:52:25 gmds1 kernel: [944623.255067]  [<ffffffffa15e6e03>] osp_sync_thread+0x753/0x7d0 [osp]
            Jan 31 11:52:25 gmds1 kernel: [944623.325388]  [<ffffffff8154b79e>] ? schedule+0x3ee/0xb70
            Jan 31 11:52:25 gmds1 kernel: [944623.395727]  [<ffffffffa15e66b0>] ? osp_sync_thread+0x0/0x7d0 [osp]
            Jan 31 11:52:25 gmds1 kernel: [944623.466450]  [<ffffffff810a6d1e>] kthread+0x9e/0xc0
            Jan 31 11:52:25 gmds1 kernel: [944623.534286]  [<ffffffff8100c2ca>] child_rip+0xa/0x20
            Jan 31 11:52:25 gmds1 kernel: [944623.601056]  [<ffffffff810a6c80>] ? kthread+0x0/0xc0
            Jan 31 11:52:25 gmds1 kernel: [944623.666502]  [<ffffffff8100c2c0>] ? child_rip+0x0/0x20
            Jan 31 11:52:25 gmds1 kernel: [944623.731052]
            Jan 31 11:52:26 gmds1 kernel: [944623.790477] Kernel panic - not syncing: LBUG
            

            adilger Andreas Dilger added a comment -

            It looks like this is indeed a Lustre internal file, based on the FID reported in the stat output:

            lma: fid=[0x1:0x19c6b:0x0]
            

            This object is an llog file, and can be renamed to O/1/d11/105579 (the FID number in decimal). However, it may be that the MDS has moved on once it detected this log file was missing (there may be some error messages on the MDS console to that effect). At worst this may mean a few OST objects are orphaned, but they will be cleaned up when LFSCK is run with Lustre 2.7 or later (it isn't mentioned in this ticket what version you are running).

            So, in summary, it is possible to repair this file, which may be helpful, but isn't critical if this would take the system offline.
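
            A minimal sketch of that repair, assuming the MDT is mounted as ldiskfs at a hypothetical /mnt/mdt and the recovered object is the #5749 file from the description:

            # 0x19c6b in hex is 105579 in decimal
            printf '%d\n' 0x19c6b
            # move the recovered object back into place (d11 = 105579 mod 32)
            mv '/mnt/mdt/lost+found/#5749' /mnt/mdt/O/1/d11/105579
            umount /mnt/mdt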


            People

              Assignee: pjones Peter Jones
              Reporter: jamervi Joe Mervini
              Votes: 0
              Watchers: 4
