
[LU-1442] File corrupt with 1MiB-aligned 4k regions of zeros

Details


    Description

      A data integrity test run periodically by our storage group found two occurrences of corrupt files written to Lustre. The original files contain 300 MB of random data. The corrupt copies contain several 4096-byte regions of zeros aligned on 1MiB boundaries. The two corrupt files were written to the same filesystem from two different login nodes on the same cluster within five minutes of each other. The stripe count is 100.
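
      For reference, here is a minimal sketch (in C, not the actual test program, whose source is not shown here) of how such 1MiB-aligned 4 KiB zero regions could be detected in a suspect file:

      /*
       * Hypothetical sketch: scan a file for 4096-byte regions of zeros that
       * start on 1 MiB boundaries, the corruption signature described above.
       * Note that it will also flag regions that are legitimately zero.
       */
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/types.h>
      #include <unistd.h>

      #define MIB       (1024 * 1024)
      #define CHECK_LEN 4096

      int main(int argc, char **argv)
      {
              char buf[CHECK_LEN], zeros[CHECK_LEN] = { 0 };
              off_t off;
              int fd;

              if (argc != 2) {
                      fprintf(stderr, "usage: %s FILE\n", argv[0]);
                      return 1;
              }
              fd = open(argv[1], O_RDONLY);
              if (fd < 0) {
                      perror("open");
                      return 1;
              }
              /* Check the first 4 KiB page of every 1 MiB-aligned chunk. */
              for (off = 0; pread(fd, buf, CHECK_LEN, off) == CHECK_LEN; off += MIB)
                      if (memcmp(buf, zeros, CHECK_LEN) == 0)
                              printf("zero region at offset %lld\n", (long long)off);
              close(fd);
              return 0;
      }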

      The client application is a parallel ftp client reading data out of our storage archive into Lustre. The test checks for differences between the restored files and the original copies. For a 300 MB file it uses 4 threads, which issue four 64 MB pwrite() calls and one 44 MB pwrite() call. It is possible that a pwrite() gets restarted due to SIGUSR2 from a master process, though we don't know whether this occurred in the corrupting case. This test has seen years of widespread use on all of our clusters, and this is the first reported incident of this type of corruption, so we can characterize the frequency as rare.
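
      A rough sketch of the write pattern described above (an illustration only, not the actual ftp client; thread setup and the 44 MB tail write are omitted):

      /*
       * Illustration only, NOT the actual ftp client: each thread writes its
       * 64 MB chunk with pwrite(), restarting after EINTR (e.g. if a signal
       * such as SIGUSR2 arrives) and continuing after short writes.
       */
      #include <errno.h>
      #include <stdio.h>
      #include <sys/types.h>
      #include <unistd.h>

      #define CHUNK (64 * 1024 * 1024)

      /* e.g. write_chunk(fd, buf, CHUNK, thread_index * (off_t)CHUNK) */
      static int write_chunk(int fd, const char *buf, size_t len, off_t off)
      {
              while (len > 0) {
                      ssize_t n = pwrite(fd, buf, len, off);

                      if (n < 0) {
                              if (errno == EINTR)     /* interrupted; retry */
                                      continue;
                              perror("pwrite");
                              return -1;
                      }
                      buf += n;
                      off += n;
                      len -= n;
              }
              return 0;
      }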

      When I examine an OST object containing a corrupt region, I see there is no block allocated for the corrupt region (in this case, logical block 256 is missing).

      # pigs58 /root > debugfs -c -R "dump_extents /O/0/d$((30205348 % 32))/30205348" /dev/sdb
      debugfs 1.41.12 (17-May-2010)
      /dev/sdb: catastrophic mode - not reading inode or group bitmaps
      Level Entries       Logical              Physical Length Flags
       0/ 0   1/  3     0 -   255 813140480 - 813140735    256
       0/ 0   2/  3   257 -   511 813142528 - 813142782    255
       0/ 0   3/  3   512 -   767 813143040 - 813143295    256
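
      As an aside, mapping a file offset to a logical block within an OST object follows plain RAID0 stripe arithmetic. A sketch, assuming a 1 MiB stripe size (the stripe size is not stated above; 1 MiB is the common default):

      /*
       * Sketch of plain RAID0 stripe arithmetic, assuming a 1 MiB stripe size:
       * maps a file offset to the stripe index within the layout and to the
       * offset / 4 KiB logical block within that OST object.
       */
      #include <stdio.h>

      #define STRIPE_SIZE (1024 * 1024)       /* assumed default */
      #define BLOCK_SIZE  4096

      static void map_offset(unsigned long long off, unsigned int stripe_count)
      {
              unsigned long long stripe_no = off / STRIPE_SIZE;
              unsigned int stripe_idx = stripe_no % stripe_count;
              unsigned long long obj_off = (stripe_no / stripe_count) * STRIPE_SIZE
                                           + off % STRIPE_SIZE;

              printf("file offset %llu -> stripe %u, object offset %llu (block %llu)\n",
                     off, stripe_idx, obj_off, obj_off / BLOCK_SIZE);
      }

      int main(void)
      {
              /* e.g. with stripe count 100: offset 105 MiB maps to stripe 5, block 256 */
              map_offset(105ULL * 1024 * 1024, 100);
              return 0;
      }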
      

      Finally, the following server-side console messages appeared at the same time one of the corrupted files was written, and mention the NID of the implicated client. The consoles of the OSTs containing the corrupt objects were quiet at the time.

      May 17 01:06:08 pigs-mds1 kernel: LustreError: 20418:0:(mdt_recovery.c:1011:mdt_steal_ack_locks()) Resent req xid 1402165306385077 has mismatched opc: new 101 old 0
      May 17 01:06:08 pigs-mds1 kernel: Lustre: 20418:0:(mdt_recovery.c:1022:mdt_steal_ack_locks()) Stealing 1 locks from rs ffff880410f62000 x1402165306385077.t125822723745 o0 NID 192.168.114.155@o2ib5
      May 17 01:06:08 pigs-mds1 kernel: Lustre: All locks stolen from rs ffff880410f62000 x1402165306385077.t125822723745 o0 NID 192.168.114.155@o2ib5
      


          Activity


            jay Jinshan Xiong (Inactive) added a comment -

            Have you run any other jobs in this cluster? I ask because I saw some non-1M RPCs in the rpc_stats.

            From the symptom, I guess that somehow the first page of a stripe was not included in the RPC. If that is true, then that RPC must have contained only 255 pages and we should see a tally for it in the stats. Of course, this only works if that job is the only one running in the cluster.

            We'll make a debug patch for this purpose.

            nedbass Ned Bass (Inactive) added a comment -

            The rpc_stats for the involved OST looked like this:

                                        read                    write
            pages per rpc           rpcs   %   cum % |   rpcs  %  cum %
            1:                        0    0     0   |      3  0    0   
            2:                        0    0     0   |      0  0    0   
            4:                        2    0     0   |      0  0    0   
            8:                        0    0     0   |      0  0    0   
            16:                       0    0     0   |      0  0    0   
            32:                       0    0     0   |      1  0    0   
            64:                       0    0     0   |      3  0    0   
            128:                      0    0     0   |      3  0    0   
            256:                   4771   99   100   |   4768 99  100 
            
                                        read                    write
            offset                  rpcs   %   cum % |   rpcs  %  cum %
            1:                     4772   99    99   |   4760 99   99  
            2:                        0    0    99   |      4  0   99  
            4:                        2    0    99   |      0  0   99  
            8:                        0    0    99   |      0  0   99  
            16:                       0    0    99   |      0  0   99  
            32:                       0    0    99   |      1  0   99  
            64:                       1    0   100   |      6  0   99  
            128:                      0    0   100   |      7  0  100 
            

            morrone Christopher Morrone (Inactive) added a comment -

            No, both client and server are x86_64.


            jay Jinshan Xiong (Inactive) added a comment -

            I can't think of what kind of problem could cause this symptom; there is not even any lock contention. One more question: is the client ppc64?


            nedbass Ned Bass (Inactive) added a comment -

            The stripe patterns are not always the same. The first case involved a 300 MB file with a stripe count of 100. The most recent case involved a 10 GB file with a stripe count of 2. The only consistent pattern so far is that it always seems to be logical block 256 that is missing from an OST object.

            I'll check the rpc_stats tomorrow.

            bobijam Zhenyu Xu added a comment -

            Ned Bass,

            Would you mind checking whether the stripe patterns are always the same for the error cases? Also, please collect and upload the clients' rpc_stats (e.g. the data in /proc/fs/lustre/osc/lustre-OST0000-osc-ffff88003de72800/rpc_stats).

            We'd like to know whether the clients have always written full 1M RPCs to the OST without partial writes, so that we can tell whether this is a client issue or an OST issue.


            nedbass Ned Bass (Inactive) added a comment -

            Looking at the on-disk data, it is again logical block 256 that is unallocated in the OST object.


            nedbass Ned Bass (Inactive) added a comment -

            Our test program detected this type of corruption again, so we should re-open this issue. The client was running 2.1.2-4chaos, which includes the patch "LU-1442 llite: cleanup if a page failed to add into cache".

            https://github.com/chaos/lustre/commits/2.1.2-4chaos

            Note that this test runs regularly on our production systems, so this bug reproduces only very rarely.

            cyberchip Yusong Wang added a comment -

            Hi,

            We occasionally see a similar issue (a written file has a big hole containing a lot of zeros), and it has caused incorrect simulation results.

            As we run Lustre 1.8.8, I wonder if our file system could be affected by this bug.

            Thanks,
            Yusong


            jaylan Jay Lan (Inactive) added a comment -

            Thanks, Jinshan!


            jay Jinshan Xiong (Inactive) added a comment -

            Hi Jay Lan, LU-1717 was just created to address/understand this MDS console error message.


            People

              Assignee: jay Jinshan Xiong (Inactive)
              Reporter: nedbass Ned Bass (Inactive)
              Votes: 0
              Watchers: 17
