[LU-11663] corrupt data after page-unaligned write with zfs backend lustre 2.10

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Blocker
    • Fix Version/s: Lustre 2.12.0, Lustre 2.10.6
    • Affects Version/s: Lustre 2.12.0, Lustre 2.10.5, Lustre 2.10.6
    • Environment: client catalyst: lustre-2.8.2_5.chaos-1.ch6.x86_64
      server: porter lustre-2.10.5_2.chaos-3.ch6.x86_64
      kernel-3.10.0-862.14.4.1chaos.ch6.x86_64 (RHEL 7.5 derivative)
    • Severity: 2

    Description

      The apparent contents of a file change after dropping caches:

      [root@catalyst110:toss-4371.umm1t]# ./proc6.olaf
      + dd if=/dev/urandom of=testfile20K.in bs=10240 count=2
      2+0 records in
      2+0 records out
      20480 bytes (20 kB) copied, 0.024565 s, 834 kB/s
      + dd if=testfile20K.in of=testfile20K.out bs=10240 count=2
      2+0 records in
      2+0 records out
      20480 bytes (20 kB) copied, 0.0451045 s, 454 kB/s
      ++ md5sum testfile20K.out
      + original_md5sum='1060a4c01a415d7c38bdd00dcf09dd22  testfile20K.out'
      + echo 3
      ++ md5sum testfile20K.out
      + echo after drop_caches 1060a4c01a415d7c38bdd00dcf09dd22 testfile20K.out 717122f4dd25f2e75834a8b21c79ce50 testfile20K.out
      after drop_caches 1060a4c01a415d7c38bdd00dcf09dd22 testfile20K.out 717122f4dd25f2e75834a8b21c79ce50 testfile20K.out                                                                        
      
      [root@catalyst110:toss-4371.umm1t]# cat proc6.olaf
      #!/bin/bash
      
      set -x
      
      dd if=/dev/urandom of=testfile.in bs=10240 count=2
      dd if=testfile.in of=testfile.out bs=10240 count=2
      
      #dd if=/dev/urandom of=testfile.in bs=102400 count=2
      #dd if=testfile.in of=testfile.out bs=102400 count=2
      original_md5sum=$(md5sum testfile.out)
      echo 3 >/proc/sys/vm/drop_caches
      
      echo after drop_caches $original_md5sum $(md5sum testfile.out)
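
      For reference, a slightly expanded variant of the reproducer (a sketch; the
      /mnt/lustre3 mount point and the cmp step are additions for illustration, not
      part of the original script) that prints both checksums and, on a mismatch,
      the first differing byte offsets:

      #!/bin/bash
      # Sketch of an expanded reproducer; /mnt/lustre3 is a placeholder mount point.
      set -ex
      cd /mnt/lustre3

      # bs=10240 (10 KiB) is deliberately not a multiple of the 4 KiB page size,
      # so the second record starts and ends at page-unaligned file offsets.
      dd if=/dev/urandom of=testfile.in bs=10240 count=2
      dd if=testfile.in of=testfile.out bs=10240 count=2

      before=$(md5sum testfile.out | awk '{print $1}')
      echo 3 >/proc/sys/vm/drop_caches
      after=$(md5sum testfile.out | awk '{print $1}')

      echo "before drop_caches: $before"
      echo "after  drop_caches: $after"
      if [ "$before" != "$after" ]; then
          # cmp -l lists the differing byte offsets, hinting at which page went bad
          cmp -l testfile.in testfile.out | head
          exit 1
      fi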
      

          Activity

            ofaaland Olaf Faaland added a comment -

            Is there some pattern around which nodes can hit this issue vs those that don't?

            Not that I've been able to find.

            pjones Peter Jones added a comment -

            Olaf

            Is there some pattern around which nodes can hit this issue vs those that don't?

            Peter

            ofaaland Olaf Faaland added a comment -

            I've had issues getting other versions of the client to work due to changes in IB with map_on_demand, peer_credits, etc. in recent versions, but I think I'm past that.

            Today I reproduced the issue with client version 2.8.2_2.chaos.  I'll try with earlier and later clients tomorrow.
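
            For reference, the ko2iblnd tunables mentioned above can be compared across
            client versions with something like the following sketch (the parameter list
            is illustrative, not exhaustive):

            # Print the ko2iblnd module parameters that commonly differ between versions.
            for p in map_on_demand peer_credits peer_credits_hiw concurrent_sends; do
                printf '%s=%s\n' "$p" \
                    "$(cat /sys/module/ko2iblnd/parameters/$p 2>/dev/null || echo n/a)"
            done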

            ofaaland Olaf Faaland added a comment -

            Thanks for trying those, Sarah. Can you suggest information to capture from my clients where the problem is reproducing? As I mentioned, even on my cluster during testing over the weekend, some nodes reproduced reliably, but many never did.

            sarah Sarah Liu added a comment - - edited

            I downloaded lustre-2.10.5_2.chaos.tar.gz and lustre-2.8.2_1.chaos.tar.gz (cannot find 2.8.2_5) from https://github.com/LLNL/lustre/releases, compiled them, and cannot reproduce the issue.

            Server: compiled lustre-2.10.5_2.chaos on kernel 3.10.0-862.14.4.el7_lustre.x86_64
            1 MDT, 1 OST on a single node

            [root@trevis-60vm1 utils]# uname -a
            Linux trevis-60vm1.trevis.whamcloud.com 3.10.0-862.14.4.el7_lustre.x86_64 #1 SMP Thu Nov 8 07:41:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
            [root@trevis-60vm1 utils]#
            [root@trevis-60vm1 ~]# lu-11663/lustre-2.10.5_2.chaos/lustre/utils/lctl get_param version
            version=2.10.5
            [root@trevis-60vm1 ~]#
            
            [root@trevis-60vm1 ~]# rpm -qa|grep zfs
            libzfs2-0.7.9-1.el7.x86_64
            kmod-lustre-osd-zfs-2.11.56_140_g2339e1b-1.el7.x86_64
            libzfs2-devel-0.7.9-1.el7.x86_64
            lustre-osd-zfs-mount-2.11.56_140_g2339e1b-1.el7.x86_64
            zfs-0.7.9-1.el7.x86_64
            ...
            

            Single client: compiled lustre-2.8.2_1.chaos on kernel 3.10.0-327.3.1.el7.x86_64 (the 2.8.0 kernel)

            [root@trevis-60vm4 ~]# lu-11663/lustre-2.8.2_1.chaos/lustre/utils/lctl get_param version
            version=lustre: 2.8.2
            kernel: patchless_client
            build:  2.8.2
            [root@trevis-60vm4 ~]#
            [root@trevis-60vm4 ~]# uname -a
            Linux trevis-60vm4.trevis.whamcloud.com 3.10.0-327.3.1.el7.x86_64 #1 SMP Fri Nov 20 05:40:26 EST 2015 x86_64 x86_64 x86_64 GNU/Linux
            [root@trevis-60vm4 ~]#
            

            Result:

            [root@trevis-60vm4 ~]# cd /mnt/lustre/
            [root@trevis-60vm4 lustre]# sh foo.sh 
            + dd if=/dev/urandom of=testfile.in bs=10240 count=2
            2+0 records in
            2+0 records out
            20480 bytes (20 kB) copied, 0.00236147 s, 8.7 MB/s
            + dd if=testfile.in of=testfile.out bs=10240 count=2
            2+0 records in
            2+0 records out
            20480 bytes (20 kB) copied, 0.00111802 s, 18.3 MB/s
            ++ md5sum testfile.out
            + original_md5sum='20dd24fb015feb7de67bbdc12f2c16bf  testfile.out'
            + echo 3
            ++ md5sum testfile.out
            + echo after drop_caches 20dd24fb015feb7de67bbdc12f2c16bf testfile.out 20dd24fb015feb7de67bbdc12f2c16bf testfile.out
            after drop_caches 20dd24fb015feb7de67bbdc12f2c16bf testfile.out 20dd24fb015feb7de67bbdc12f2c16bf testfile.out
            [root@trevis-60vm4 lustre]#
            

            Also tried a client built from branch origin/2.8.2-llnl of fs/lustre-release-fe-llnl,
            top commit 8356dd88e2e59edd1462bb4647f61d5a210d4262.
            Ran the reproducer 10 times; could not reproduce.
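
            (For reference, a single-node ZFS-backed setup like the one described can be
            stood up roughly as follows; this is a sketch, and the pool/device names,
            fsname, and MGS NID are placeholders, not values taken from the test above.)

            MGSNID=10.0.0.1@tcp   # placeholder NID of the MGS/MDS node
            mkfs.lustre --backfstype=zfs --fsname=testfs --mgs --mdt --index=0 \
                mdt0pool/mdt0 /dev/sdb
            mkfs.lustre --backfstype=zfs --fsname=testfs --ost --index=0 \
                --mgsnode=$MGSNID ost0pool/ost0 /dev/sdc
            mkdir -p /mnt/mdt0 /mnt/ost0 /mnt/lustre
            mount -t lustre mdt0pool/mdt0 /mnt/mdt0
            mount -t lustre ost0pool/ost0 /mnt/ost0
            # client mount, from the same node or the test client
            mount -t lustre $MGSNID:/testfs /mnt/lustre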


            ofaaland Olaf Faaland added a comment -

            I'm still working on trying previous client versions. I should have at least one other version tested today.

            For context, this issue has been observed on client cluster catalyst, which mounts three lustre file systems.

            • lustre3 hosted on porter. This is lustre 2.10.5 based.
            • lustre1 hosted on copper. This is lustre 2.8.2 based.
            • lscratchh hosted on zinc. This is lustre 2.8.2 based.

            Connections are through routers. The routers in catalyst are the same version as the clients. All nodes are x86_64. I don't recall the IB-to-IP router nodes' Lustre or kernel versions, but I can find out.

            catalyst-compute <> catalyst-router <> lustre3
            catalyst-compute <> catalyst-router <> IB-to-IP-router <> IP-to-IB-router <> (lustre1 and lscratchh)

            We have observed this issue only on lustre3 so far.

            During testing this weekend I ran two 1000-iteration test sets on 20 dedicated catalyst nodes. During both sets:

            • one node, catalyst110, reproduced the problem > 95% of the time
            • a different node reproduced the problem about 15% of the time
            • fifteen nodes never reproduced the problem

            In the first test set, I only ran the reproducer against lustre3, where the issue was first identified last week.
            In the second test set, I ran the reproducer first against lustre3 and then against lustre1. The problem was reproduced only with lustre3, the 2.10 file system. It was never reproduced with lustre1.
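
            A loop of this sort might look like the following sketch (the mount points,
            working directory, and iteration count are placeholders, not taken from the
            actual test harness):

            #!/bin/bash
            # Sketch: run the reproducer repeatedly against each file system and
            # count how often the checksum changes after drop_caches.
            ITER=1000
            for fs in /p/lustre3 /p/lustre1; do    # placeholder mount points
                bad=0
                cd "$fs/$USER" || exit 1
                for i in $(seq 1 "$ITER"); do
                    dd if=/dev/urandom of=testfile.in bs=10240 count=2 2>/dev/null
                    dd if=testfile.in of=testfile.out bs=10240 count=2 2>/dev/null
                    before=$(md5sum testfile.out | awk '{print $1}')
                    echo 3 >/proc/sys/vm/drop_caches
                    after=$(md5sum testfile.out | awk '{print $1}')
                    [ "$before" != "$after" ] && bad=$((bad + 1))
                done
                echo "$(hostname) $fs: $bad of $ITER iterations showed corruption"
            done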

            ofaaland Olaf Faaland added a comment -

            I guess the other question is whether you tried running the reproducer on some previous version on the client? Is it possible that this is a newly introduced problem? It seems a bit strange that there would be a problem like this going unnoticed since 2.8 was released.

            I agree.  I'll try that.

            ofaaland Olaf Faaland added a comment -

            In testing since yesterday I'm sometimes finding the corruption does not occur - that is, if I run the same reproducer 60 times in a row on the same client, for example, it may show corruption 50 times in a row and then show no corruption for the last 10.

            I attached for-upload-lu-11663.tar.bz2, which has -1 debug logs for 3 attempts, along with the terminal output from running the reproducer and an index matching the results to the log files. I ran lctl dk before and after each attempt, so there are 6 log files.

            After the first attempt, which shows the corruption, I unmount all the lustre file systems and then mount them again. I then run the same reproducer twice and no corruption occurs. I'm not sure whether that's due to the umount/remount or not.
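
            For reference, a capture like the one described might be driven roughly as
            follows (a sketch; the log directory and debug_mb size are placeholders, and
            proc6.olaf is the reproducer from the description):

            #!/bin/bash
            # Sketch: capture full (-1) Lustre debug logs around one reproducer run.
            LOGDIR=/tmp/lu-11663-logs   # placeholder
            mkdir -p "$LOGDIR"

            lctl set_param debug=-1        # enable all debug flags
            lctl set_param debug_mb=1024   # enlarge the debug buffer (illustrative size)
            lctl dk >"$LOGDIR/before.$(date +%s).dk"   # dump and clear what is already buffered

            ./proc6.olaf 2>&1 | tee "$LOGDIR/reproducer.out"

            lctl dk >"$LOGDIR/after.$(date +%s).dk"    # dump the log covering the run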


            adilger Andreas Dilger added a comment -

            I guess the other question is whether you tried running the reproducer on some previous version on the client? Is it possible that this is a newly introduced problem? It seems a bit strange that there would be a problem like this going unnoticed since 2.8 was released.

            adilger Andreas Dilger added a comment -

            Olaf, as Sarah is having trouble reproducing this, can you please run a test with -1 debug on the client? My first guess is that this is somehow related to the client IO stack. Given that there would only be a handful of operations in the log, it shouldn't be too bad to look through.

            ofaaland Olaf Faaland added a comment -

            Sarah,
            If there's any information I can provide, let me know. Thanks.

            People

              Assignee: bzzz Alex Zhuravlev
              Reporter: ofaaland Olaf Faaland