Lustre - LU-1442

File corrupt with 1MiB-aligned 4k regions of zeros

Details

    Description

      A data integrity test run periodically by our storage group found two occurrences of corrupt files written to Lustre. The original files contain 300 MB of random data. The corrupt copies contain several 4096-byte regions of zeros aligned on 1 MiB boundaries. The two corrupt files were written to the same filesystem from two different login nodes on the same cluster within five minutes of each other. The stripe count is 100.
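As a point of reference, the corruption pattern described above can be confirmed mechanically by scanning for all-zero 4096-byte pages at 1 MiB offsets. A minimal sketch (`find_zero_pages` is a hypothetical helper for illustration, not the storage group's actual integrity test):

```python
# Sketch: report 1 MiB-aligned 4096-byte regions that read as all zeros.
# Hypothetical checker illustrating the corruption pattern; not the
# site's actual test tooling.
ZERO_LEN = 4096
MIB = 1 << 20

def find_zero_pages(path):
    """Return file offsets of 1 MiB-aligned 4 KiB regions that are all zeros."""
    hits = []
    with open(path, "rb") as f:
        offset = 0
        while True:
            f.seek(offset)
            page = f.read(ZERO_LEN)
            if not page:
                break  # past end of file
            if page.count(0) == len(page):
                hits.append(offset)
            offset += MIB
    return hits
```

Note this only samples the first page of each megabyte, which is exactly where the observed corruption lands; a full-file comparison against the original copy is still the authoritative check.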

      The client application is a parallel FTP client reading data out of our storage archive into Lustre. The test checks for differences between the restored files and the original copies. For a 300 MB file it uses 4 threads, which issue four 64 MB pwrite() calls and one 44 MB pwrite(). It is possible that a pwrite() gets restarted due to SIGUSR2 from a master process, though we don't know if this occurred in the corrupting case. This test has seen years of widespread use on all of our clusters, and this is the first reported incident of this type of corruption, so we can characterize the frequency as rare.
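For background on the restart concern: POSIX allows a pwrite() interrupted by a signal to return a short byte count (or fail with EINTR before writing anything), so a correct caller must resume from the returned count rather than skipping ahead. A minimal sketch of such a loop, assuming POSIX semantics (illustrative only, not the FTP client's actual code; CPython already retries EINTR internally per PEP 475, so the except branch is shown for clarity):

```python
import os

def pwrite_full(fd, data, offset):
    """Write all of data at offset, resuming correctly after short or
    signal-interrupted writes instead of leaving a hole."""
    view = memoryview(data)
    done = 0
    while done < len(view):
        try:
            n = os.pwrite(fd, view[done:], offset + done)
        except InterruptedError:
            continue  # EINTR: nothing was written, retry from the same spot
        done += n  # short write: advance by what actually landed
    return done
```

A caller that instead re-issued the full-size pwrite() at a wrong offset, or treated a short write as complete, could leave exactly the kind of unwritten gap seen here, though nothing in this ticket confirms the client does that.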

      When I examine an OST object containing a corrupt region, I see there is no block allocated for the corrupt region (in this case, logical block 256 is missing).

      # pigs58 /root > debugfs -c -R "dump_extents /O/0/d$((30205348 % 32))/30205348" /dev/sdb
      debugfs 1.41.12 (17-May-2010)
      /dev/sdb: catastrophic mode - not reading inode or group bitmaps
      Level Entries       Logical              Physical Length Flags
       0/ 0   1/  3     0 -   255 813140480 - 813140735    256
       0/ 0   2/  3   257 -   511 813142528 - 813142782    255
       0/ 0   3/  3   512 -   767 813143040 - 813143295    256
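The missing logical block can be spotted mechanically by checking that each extent's logical start follows the previous extent's end. A sketch that parses dump_extents output like the above (assuming the e2fsprogs column layout shown; `extent_gaps` is a hypothetical helper):

```python
import re

def extent_gaps(dump_lines):
    """Return (start, end) logical-block gaps between consecutive extents
    in `debugfs -R dump_extents` output."""
    extents = []
    for line in dump_lines:
        # Data rows look like:
        #  0/ 0   2/  3   257 -   511 813142528 - 813142782    255
        m = re.match(r"\s*\d+/\s*\d+\s+\d+/\s*\d+\s+(\d+)\s*-\s*(\d+)", line)
        if m:
            extents.append((int(m.group(1)), int(m.group(2))))
    gaps = []
    for (_, prev_end), (next_start, _) in zip(extents, extents[1:]):
        if next_start > prev_end + 1:
            gaps.append((prev_end + 1, next_start - 1))
    return gaps
```

Against the dump above this reports a single-block gap at logical block 256, i.e. the unallocated 4 KiB region at the 1 MiB boundary of that stripe. A gap is not corruption by itself (sparse files are legal); it is suspicious here only because the source data was fully populated.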
      

      Finally, the following server-side console messages appeared at the same time one of the corrupted files was written, and mention the NID of the implicated client. The consoles of the OSTs containing the corrupt objects were quiet at the time.

      May 17 01:06:08 pigs-mds1 kernel: LustreError: 20418:0:(mdt_recovery.c:1011:mdt_steal_ack_locks()) Resent req xid 1402165306385077 has mismatched opc: new 101 old 0
      May 17 01:06:08 pigs-mds1 kernel: Lustre: 20418:0:(mdt_recovery.c:1022:mdt_steal_ack_locks()) Stealing 1 locks from rs ffff880410f62000 x1402165306385077.t125822723745 o0 NID 192.168.114.155@o2ib5
      May 17 01:06:08 pigs-mds1 kernel: Lustre: All locks stolen from rs ffff880410f62000 x1402165306385077.t125822723745 o0 NID 192.168.114.155@o2ib5
      

      Activity

            simmonsja James A Simmons added a comment -

            Old ticket for unsupported version
            pjones Peter Jones added a comment -

            Sorry for the confusion, Ned - things just got a little muddled due to two somewhat similar issues becoming intertwined. Yes, this is still on Jinshan's plate to dig into further at some point.

            nedbass Ned Bass (Inactive) added a comment -

            Jodi, afaik this issue is not resolved, just rare. Are there still any plans to make a debug patch?

            jlevi Jodi Levi (Inactive) added a comment -

            Reclosing this ticket. The new issue mentioned is tracked in LU-3219.

            artem_blagodarenko Artem Blagodarenko (Inactive) added a comment - edited

            Moved unrelated comments to new bug LU-3219.

            jay Jinshan Xiong (Inactive) added a comment -

            I'm lowering the prio to major and will continue working on this after getting new clues.

            nedbass Ned Bass (Inactive) added a comment -

            Hi Jinshan,

            This node has been up for about 70 days and has run many other jobs, so we can't say for sure whether the missing page is reflected in the stats.

            I like the idea of a debug patch to catch this case. We discussed that approach during our last meeting but I couldn't easily figure out where to put the debugging.

            jay Jinshan Xiong (Inactive) added a comment -

            Have you ever run other jobs in this cluster? I ask because I saw some non-1M RPCs in the rpc_stats.

            From the symptom, I guess that somehow the first page of a stripe was not included in the RPC. If that is true, then that RPC must have had only 255 pages, and we should see a tally in the stats. Of course, this approach only works if that job is the only one in the cluster.

            We'll make a debug patch for this purpose.

            nedbass Ned Bass (Inactive) added a comment -

            The rpc_stats for the involved OST looked like this:

                                        read                    write
            pages per rpc           rpcs   %   cum % |   rpcs  %  cum %
            1:                        0    0     0   |      3  0    0   
            2:                        0    0     0   |      0  0    0   
            4:                        2    0     0   |      0  0    0   
            8:                        0    0     0   |      0  0    0   
            16:                       0    0     0   |      0  0    0   
            32:                       0    0     0   |      1  0    0   
            64:                       0    0     0   |      3  0    0   
            128:                      0    0     0   |      3  0    0   
            256:                   4771   99   100   |   4768 99  100 
            
                                        read                    write
            offset                  rpcs   %   cum % |   rpcs  %  cum %
            1:                     4772   99    99   |   4760 99   99  
            2:                        0    0    99   |      4  0   99  
            4:                        2    0    99   |      0  0   99  
            8:                        0    0    99   |      0  0   99  
            16:                       0    0    99   |      0  0   99  
            32:                       0    0    99   |      1  0   99  
            64:                       1    0   100   |      6  0   99  
            128:                      0    0   100   |      7  0  100 
            
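The tally Jinshan suggests looking for, write RPCs with fewer than 256 pages, can be pulled out of a dump like the one above with a short parser (a sketch assuming the column layout shown; `short_write_rpcs` is a hypothetical helper, not a Lustre tool):

```python
def short_write_rpcs(stats_text, full_pages=256):
    """Sum write RPCs smaller than a full-sized RPC (full_pages pages)
    from the 'pages per rpc' histogram of an osc rpc_stats dump."""
    total = 0
    in_hist = False
    for line in stats_text.splitlines():
        if line.strip().startswith("pages per rpc"):
            in_hist = True
            continue
        if in_hist:
            parts = line.split()
            if not parts or not parts[0].rstrip(":").isdigit():
                break  # histogram section ended
            pages = int(parts[0].rstrip(":"))
            # columns: pages: read_rpcs read% read_cum% | write_rpcs write% write_cum%
            writes = int(parts[5])
            if pages < full_pages:
                total += writes
    return total
```

Against the dump above this counts 10 sub-256-page write RPCs (3+0+0+0+0+1+3+3), which is why the histogram alone cannot isolate a single suspect 255-page RPC on a shared node.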

            People

              jay Jinshan Xiong (Inactive)
              nedbass Ned Bass (Inactive)
              Votes: 0
              Watchers: 17
