
LU-11663: corrupt data after page-unaligned write with zfs backend lustre 2.10

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Blocker
    • Fix Version/s: Lustre 2.12.0, Lustre 2.10.6
    • Affects Version/s: Lustre 2.12.0, Lustre 2.10.5, Lustre 2.10.6
    • Environment: client catalyst: lustre-2.8.2_5.chaos-1.ch6.x86_64
      server: porter lustre-2.10.5_2.chaos-3.ch6.x86_64
      kernel-3.10.0-862.14.4.1chaos.ch6.x86_64 (RHEL 7.5 derivative)
    • Severity: 2

    Description

      The apparent contents of a file change after dropping caches:

      [root@catalyst110:toss-4371.umm1t]# ./proc6.olaf
      + dd if=/dev/urandom of=testfile20K.in bs=10240 count=2
      2+0 records in
      2+0 records out
      20480 bytes (20 kB) copied, 0.024565 s, 834 kB/s
      + dd if=testfile20K.in of=testfile20K.out bs=10240 count=2
      2+0 records in
      2+0 records out
      20480 bytes (20 kB) copied, 0.0451045 s, 454 kB/s
      ++ md5sum testfile20K.out
      + original_md5sum='1060a4c01a415d7c38bdd00dcf09dd22  testfile20K.out'
      + echo 3
      ++ md5sum testfile20K.out
      + echo after drop_caches 1060a4c01a415d7c38bdd00dcf09dd22 testfile20K.out 717122f4dd25f2e75834a8b21c79ce50 testfile20K.out
      after drop_caches 1060a4c01a415d7c38bdd00dcf09dd22 testfile20K.out 717122f4dd25f2e75834a8b21c79ce50 testfile20K.out                                                                        
      
      [root@catalyst110:toss-4371.umm1t]# cat proc6.olaf
      #!/bin/bash
      
      set -x
      
      dd if=/dev/urandom of=testfile.in bs=10240 count=2
      dd if=testfile.in of=testfile.out bs=10240 count=2
      
      #dd if=/dev/urandom of=testfile.in bs=102400 count=2
      #dd if=testfile.in of=testfile.out bs=102400 count=2
      original_md5sum=$(md5sum testfile.out)
      echo 3 >/proc/sys/vm/drop_caches
      
      echo after drop_caches $original_md5sum $(md5sum testfile.out)
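
      For reference, the reason bs=10240 exercises the page-unaligned path; a minimal sketch of the arithmetic, assuming the usual 4 KiB x86_64 page size (the page size is not stated in this report):

      # 10240 is not a multiple of the assumed 4 KiB page size:
      echo $((10240 % 4096))   # -> 2048: each record ends in the middle of a page
      # so the second dd record starts at byte 10240, i.e. at offset 2048
      # within the third page -- a partial-page (page-unaligned) write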
      

      Activity

            bzzz Alex Zhuravlev added a comment -

            Well, from the filesystem's point of view, there is no requirement to use the same page offset. Moreover, the client and server may have different page sizes, which makes it impossible to match offsets, right?
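
            As a worked example of why offsets may be impossible to match (assuming, hypothetically, a 64 KiB-page client such as ppc64/aarch64 talking to a 4 KiB-page x86_64 server):

            # the same file offset lands at different in-page offsets:
            echo $((10240 % 65536))   # -> 10240 on a 64 KiB-page client
            echo $((10240 % 4096))    # ->  2048 on a 4 KiB-page server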
            lixi_wc Li Xi added a comment -

            I feel that, of the two options Alex pointed out, reverting the patch of LU-10683 is not a good one. The lnb_page_offset should be the same as the client-side page offset in 'struct brw_page', shouldn't it? It doesn't feel right to move the data to offset 0 of a page when the data has a non-zero offset within the page.
            bzzz Alex Zhuravlev added a comment - edited

            There are two options here: 1) revert LU-10683 (but potentially get the bad RPC checksum messages back), or 2) apply the https://review.whamcloud.com/#/c/33726/ patch, which is still under testing.
            Both options have worked against our reproducer (see the reproducer in the option #2 patch) on b2_10.
            We are still investigating the root cause of LU-10683 (the bad checksums).

            gerrit Gerrit Updater added a comment -

            Oleg Drokin (green@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/33748
            Subject: LU-11663 osd-zfs: write partial pages with correct offset
            Project: fs/lustre-release
            Branch: b2_10
            Current Patch Set: 1
            Commit: 6f9a0292eacb0d603b14cc03290a574cb7f0c846
            pjones Peter Jones added a comment -

            Strange. Alex's patch did not get an auto comment - https://review.whamcloud.com/#/c/33726/. As I understand it, this patch seems to be holding up well against the reproducer, but the test cases need some refinement. Are we now at the point where LLNL can use a b2_10 port of this patch on their affected filesystem?

            paf Patrick Farrell (Inactive) added a comment -

            Olaf,

            If a bug is opened for the grant issue, could you tag me on it? Thanks.
            green Oleg Drokin added a comment -

            BTW, since we are concentrating this ticket on the data corruption: if you want to pursue why some nodes are stuck with no grant and do not appear to be getting any more grant until remount, you should probably open another ticket for that.
            pjones Peter Jones added a comment -

            Alex

            Can you please investigate?

            Peter
            green Oleg Drokin added a comment -

            Shortest reproducer:

            lctl set_param fail_loc=0x411
            dd if=/dev/urandom of=testfile.in bs=10240 count=2
            md5sum testfile.in
            lctl set_param ldlm.namespaces.*osc*.lru_size=clear
            md5sum testfile.in
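
            For convenience, a self-checking wrapper around this reproducer (a sketch; it assumes a mounted Lustre client directory at /mnt/lustre and clears fail_loc afterwards):

            #!/bin/bash
            # run the shortest reproducer and report PASS/FAIL
            cd /mnt/lustre                                  # assumed client mountpoint
            lctl set_param fail_loc=0x411                   # force the no-grant sync-write path
            dd if=/dev/urandom of=testfile.in bs=10240 count=2
            before=$(md5sum testfile.in | awk '{print $1}')
            lctl set_param ldlm.namespaces.*osc*.lru_size=clear  # drop cached pages/locks
            after=$(md5sum testfile.in | awk '{print $1}')
            lctl set_param fail_loc=0                       # restore normal grant behaviour
            [ "$before" = "$after" ] && echo PASS || echo "FAIL: $before != $after"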
            green Oleg Drokin added a comment - edited

            OK, I can reproduce this on master now too. There are two requirements: sync writes (due to lack of grant/quota) and ZFS; ldiskfs works fine.

            To force the no-grant/quota codepath we can use the 0x411 fail_loc on the client, like this: lctl set_param fail_loc=0x411

            Then run this script, inspired by the original, in a Lustre dir:

            dd if=/dev/urandom of=testfile.in bs=10240 count=2
            dd if=testfile.in of=testfile.out bs=10240 count=2
            original_md5sum=$(md5sum testfile.in)
            echo 3 | sudo tee /proc/sys/vm/drop_caches ; sleep 2
            md5sum=$(md5sum testfile.out)
            echo after drop_caches $md5sum before $original_md5sum

            With the fail_loc set you'll see the problem 100% of the time. What's interesting is that passing oflag=sync to dd does not help, as it still results in full-page writes in the RPC for partial-page writes on the VFS side.

            It appears that the problem is either in ZFS or, more likely, in osd-zfs: when a partial-page write happens, the previous content of the page is not read from disk, so we not only update the page with the partial content we got in the RPC but also overwrite whatever was supposed to be there in the part that we are not writing.

            Comparing osd_write_prep, we can see it's a noop in osd-zfs, but in osd-ldiskfs it actually prereads all partial pages. On the other hand, osd_write in osd-zfs uses dmu_write(_by_dnode) with an offset, so perhaps ZFS is expected to do this?

            Either way, at least it's clear what's going on now, hence this update.
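
            To illustrate the suspected failure mode outside Lustre (purely a sketch of read-modify-write semantics on a local file, not osd-zfs code):

            # a 4 KiB "page" with known random contents
            dd if=/dev/urandom of=page.img bs=4096 count=1
            md5sum page.img
            # a correct partial write touches only bytes 2048..2559 and preserves the rest:
            dd if=/dev/zero of=page.img bs=1 seek=2048 count=512 conv=notrunc
            # the suspected buggy path instead writes a full page from a buffer that
            # holds only the new 512 bytes, clobbering everything outside 2048..2559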
            adilger Andreas Dilger added a comment - edited

            Olaf, I think Oleg was referring to the space grant, which can be seen on the OSS with "lctl get_param obdfilter.*.tot_granted", and the amount granted to the client with "lctl get_param osc.*.cur_grant_bytes" (probably only relevant for the OST the file was striped over). Also useful would be "lctl get_param osc.*.max_dirty_mb".
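
            Collected for convenience, the parameters Andreas mentions (the first is read on the OSS, the other two on the client):

            lctl get_param obdfilter.*.tot_granted   # OSS: total space granted to clients
            lctl get_param osc.*.cur_grant_bytes     # client: grant remaining per OST
            lctl get_param osc.*.max_dirty_mb        # client: max dirty data per OSC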

            People

              Assignee: bzzz Alex Zhuravlev
              Reporter: ofaaland Olaf Faaland
              Votes: 0
              Watchers: 16