[LU-1442] File corrupt with 1MiB-aligned 4k regions of zeros


    Description

      A data integrity test run periodically by our storage group found two occurrences of corrupt files written to Lustre. The original files contain 300 MB of random data. The corrupt copies contain several 4096 B regions of zeros aligned on 1 MiB boundaries. The two corrupt files were written to the same filesystem from two different login nodes on the same cluster within five minutes of each other. The stripe count is 100.
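      As an aside, the corruption pattern described above can be checked for with a small scanner that reads 4096 B at each 1 MiB boundary and reports all-zero regions. This is only a rough, hedged sketch (the argument handling and output format are illustrative, not the actual integrity test):

      /* Rough sketch: scan a file for 4 KiB all-zero regions at 1 MiB
       * boundaries.  Illustrative only; this is not the actual test program. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>

      #define MIB   (1024 * 1024)
      #define BLOCK 4096

      int main(int argc, char **argv)
      {
          static char buf[BLOCK], zeros[BLOCK];   /* zeros[] is zero-initialized */
          off_t off;
          int fd;

          if (argc != 2) {
              fprintf(stderr, "usage: %s FILE\n", argv[0]);
              return 1;
          }
          fd = open(argv[1], O_RDONLY);
          if (fd < 0) {
              perror("open");
              return 1;
          }
          for (off = 0; ; off += MIB) {
              ssize_t n = pread(fd, buf, BLOCK, off);
              if (n <= 0)
                  break;                          /* EOF or read error */
              if (n == BLOCK && memcmp(buf, zeros, BLOCK) == 0)
                  printf("all-zero 4 KiB block at offset %lld\n", (long long)off);
          }
          close(fd);
          return 0;
      }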

      The client application is a parallel ftp client reading data out of our storage archive into Lustre. The test checks for differences between the restored files and the original copies. For a 300 MB file it uses 4 threads, which issue four 64 MB pwrite() calls and one 44 MB pwrite() call. It is possible that a pwrite() gets restarted due to SIGUSR2 from a master process, though we don't know whether this occurred in the corrupting case. This test has seen years of widespread use on all of our clusters, and this is the first reported incident of this type of corruption, so we can characterize the frequency as rare.
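      A writer thread in such a test might look roughly like the following. This is a hypothetical sketch (the function name and chunk layout are assumptions, not the actual ftp client code); the point is that pwrite() can fail with EINTR or return a short count when a signal such as SIGUSR2 arrives, and the caller has to resume from where it left off:

      /* Hypothetical sketch of one writer thread's chunk write: issue a
       * pwrite() at a fixed file offset and resume after EINTR or a short
       * write.  Not the actual parallel ftp client code. */
      #include <errno.h>
      #include <sys/types.h>
      #include <unistd.h>

      static int write_chunk(int fd, const char *buf, size_t len, off_t off)
      {
          while (len > 0) {
              ssize_t n = pwrite(fd, buf, len, off);
              if (n < 0) {
                  if (errno == EINTR)             /* e.g. SIGUSR2 from the master */
                      continue;
                  return -1;                      /* real I/O error */
              }
              buf += n;                           /* advance past what was written */
              len -= n;
              off += n;
          }
          return 0;
      }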

      When I examine an OST object containing a corrupt region, I see there is no block allocated for the corrupt region (in this case, logical block 256 is missing).

      # pigs58 /root > debugfs -c -R "dump_extents /O/0/d$((30205348 % 32))/30205348" /dev/sdb
      debugfs 1.41.12 (17-May-2010)
      /dev/sdb: catastrophic mode - not reading inode or group bitmaps
      Level Entries       Logical              Physical Length Flags
       0/ 0   1/  3     0 -   255 813140480 - 813140735    256
       0/ 0   2/  3   257 -   511 813142528 - 813142782    255
       0/ 0   3/  3   512 -   767 813143040 - 813143295    256
      

      Finally, the following server-side console messages appeared at the same time one of the corrupted files was written, and they mention the NID of the implicated client. The consoles of the OSTs containing the corrupt objects were quiet at the time.

      May 17 01:06:08 pigs-mds1 kernel: LustreError: 20418:0:(mdt_recovery.c:1011:mdt_steal_ack_locks()) Resent req xid 1402165306385077 has mismatched opc: new 101 old 0
      May 17 01:06:08 pigs-mds1 kernel: Lustre: 20418:0:(mdt_recovery.c:1022:mdt_steal_ack_locks()) Stealing 1 locks from rs ffff880410f62000 x1402165306385077.t125822723745 o0 NID 192.168.114.155@o2ib5
      May 17 01:06:08 pigs-mds1 kernel: Lustre: All locks stolen from rs ffff880410f62000 x1402165306385077.t125822723745 o0 NID 192.168.114.155@o2ib5
      


          Activity


            jay Jinshan Xiong (Inactive) added a comment -

            I can't think of what kind of problem could cause this symptom; there isn't even any lock contention. One more question: is the client ppc64?

            nedbass Ned Bass (Inactive) added a comment -

            The stripe patterns are not always the same. The first case involved a 300 MB file with a stripe count of 100. This most recent case involved a 10 GB file with a stripe count of 2. The only consistent pattern so far is that it always seems to be logical block 256 that is missing from an OST object.

            I'll check the rpc_stats tomorrow.
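            For context on why block 256 keeps turning up: object logical block 256 is byte offset 256 * 4096 = 1 MiB within the OST object, i.e. the start of the object's second stripe unit. A minimal sketch of the usual round-robin striping arithmetic, assuming a 1 MiB stripe size (an assumption; the stripe size is not stated in this ticket), shows how a 1 MiB-aligned file offset maps there:

            /* Minimal sketch of RAID-0 style stripe mapping, assuming a 1 MiB
             * stripe size (not stated in this ticket).  Maps a file offset to
             * (stripe index, offset within the OST object). */
            #include <stdio.h>

            #define STRIPE_SIZE (1024 * 1024ULL)

            static void map_offset(unsigned long long file_off, unsigned stripe_count)
            {
                unsigned long long unit = file_off / STRIPE_SIZE;   /* stripe unit number */
                unsigned stripe = unit % stripe_count;              /* which OST object */
                unsigned long long obj_off = (unit / stripe_count) * STRIPE_SIZE
                                             + file_off % STRIPE_SIZE;

                printf("file offset %llu -> stripe %u, object offset %llu (4 KiB block %llu)\n",
                       file_off, stripe, obj_off, obj_off / 4096);
            }

            int main(void)
            {
                /* With a stripe count of 2, file offset 3 MiB lands at object
                 * offset 1 MiB on the second OST object, i.e. logical block 256. */
                map_offset(3 * STRIPE_SIZE, 2);
                return 0;
            }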
            bobijam Zhenyu Xu added a comment -

            Ned Bass,

            Would you mind checking whether the stripe patterns are always the same for the error cases? Also, please collect and upload the clients' rpc_stats (e.g. /proc/fs/lustre/osc/lustre-OST0000-osc-ffff88003de72800/rpc_stats).

            We'd like to know whether the clients have always written 1 MB RPCs to the OST without partial writes, so that we can tell whether this is a client issue or an OST issue.

            nedbass Ned Bass (Inactive) added a comment -

            Looking at the on-disk data, it is again logical block 256 that is unallocated in the OST object.

            nedbass Ned Bass (Inactive) added a comment -

            Our test program detected this type of corruption again, so we should re-open this issue. The client was running 2.1.2-4chaos, which includes the patch "LU-1442 llite: cleanup if a page failed to add into cache".

            https://github.com/chaos/lustre/commits/2.1.2-4chaos

            Note that this test runs regularly on our production systems, so this bug is reproduced only very rarely.
            cyberchip Yusong Wang added a comment -

            Hi,

            We occasionally hit a similar issue (the written file has a big hole with a lot of zeros), which has caused wrong simulation results.

            Since we are running Lustre 1.8.8, I wonder whether our filesystem could be affected by this bug.

            Thanks,
            Yusong

            jaylan Jay Lan (Inactive) added a comment -

            Thanks, Jinshan!

            jay Jinshan Xiong (Inactive) added a comment -

            Hi Jay Lan, LU-1717 was just created to address/understand this MDS console error message.

            jay Jinshan Xiong (Inactive) added a comment -

            This message may not indicate any real problem; it is probably just overly aggressive console output. I think it can be ignored if you didn't see any actual problem.

            jaylan Jay Lan (Inactive) added a comment -

            Hi Jinshan, but our console log messages look the same as the server console log messages shown in the Description section of this ticket. Are you suggesting that other problems can produce the same log messages, and thus that the messages alone are not sufficient evidence? Please advise. Thanks!

            jay Jinshan Xiong (Inactive) added a comment -

            I'm going to set this issue as fixed; otherwise we can't release 2.3.

            Please reopen this issue if it occurs again.

            People

              Assignee: jay Jinshan Xiong (Inactive)
              Reporter: nedbass Ned Bass (Inactive)
              Votes: 0
              Watchers: 17
