Details

    • Technical task
    • Resolution: Duplicate
    • Blocker
    • Lustre 2.6.0
    • Lustre 2.5.0
    • Patches submitted to autotest
    • 9548

    Description

      from https://maloo.whamcloud.com/test_sets/0afc2c56-fc86-11e2-8ce2-52540035b04c

      sanity-hsm test 21 seems to be failing a lot right now.
      "Wrong block number" is one of the errors seen.

      test_21 	
      
          Error: 'wrong block number'
          Failure Rate: 33.00% of last 100 executions [all branches] 
      
      == sanity-hsm test 21: Simple release tests == 23:18:20 (1375510700)
      2+0 records in
      2+0 records out
      2097152 bytes (2.1 MB) copied, 0.353933 s, 5.9 MB/s
       sanity-hsm test_21: @@@@@@ FAIL: wrong block number 
        Trace dump:
        = /usr/lib64/lustre/tests/test-framework.sh:4202:error_noexit()
      

          Activity

            [LU-3700] sanity-hsm test_21 Error: 'wrong block number'

            Looks like two different bugs.

            The conv=fsync option should indeed result in a single OST_SYNC RPC sent at the end of the writes. I suspect that this was previously skipped because OST writes were always synchronous (so fsync() was a no-op), and the async journal commit feature was developed on b1_8 and this wasn't fixed in the CLIO code when it landed. It should be noted that the Lustre OST_SYNC allows syncing a range of data on a single object, so the VFS sync_page_range() method should map its range into the RPC, and the server side should extract that range from the RPC (it might already do this).

            The second problem about the client not updating the blocks count based on reply values should also be investigated. I expect that the ZFS block count is not updated by the time the write is submitted, so it doesn't reply with the new block count to the client. However, the subsequent OST_SYNC should result in the right blocks count being returned to the client and then being cached under the DLM lock.

            adilger Andreas Dilger added a comment
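
            For reference, a minimal sketch of the syscall-level difference between the two dd modes discussed here, assuming a client mount at /mnt/lustre (path hypothetical): conv=fsync issues one fsync() after all of the buffered writes, while oflag=sync opens the output file O_SYNC so each write is synchronous on its own.

            # conv=fsync: buffered write()s followed by a single fsync() at the end,
            # which is the call that should map to one OST_SYNC over the written range
            strace -e trace=write,fsync dd if=/dev/zero of=/mnt/lustre/f0 bs=1M count=2 conv=fsync

            # oflag=sync: the output is opened with O_SYNC, so every write() is synchronous on its own
            strace -e trace=open,openat,write dd if=/dev/zero of=/mnt/lustre/f0 bs=1M count=2 oflag=sync
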
            utopiabound Nathaniel Clark added a comment - edited

            conv=fsync causes an MDS_SYNC to be sent to the MDT (but no syncs are sent to the OSTs); it does not cause the OST_BRW_ASYNC flag to be cleared in the OST_WRITE, so the OST does not think it needs to sync the data to disk.

            So even after changing conv=fsync to oflag=sync (which causes OST_SYNCs to be sent), here is the wire traffic for ZFS:
            OST_WRITE sent (OST_BRW_ASYNC is set), returns 1 block
            OST_SYNC sent, returns 2053 blocks
            OST_WRITE sent (OST_BRW_ASYNC is set), returns 2053 blocks
            OST_SYNC sent, returns 4101 blocks (the correct amount)

            A stat of the file at this point only shows 2053 (the last OST_WRITE amount).

            For ldiskfs the wire traffic is:
            OST_WRITE sent (OST_BRW_ASYNC is set), returns 2048 blocks
            OST_SYNC sent, returns 2048 blocks
            OST_WRITE sent (OST_BRW_ASYNC is set), returns 4096 blocks (the correct amount)
            OST_SYNC sent, returns 4096 blocks

            Looking at the client code, when the OST_SYNC reply is processed the oa is ignored.

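            A rough way to check the stale client-side blocks value by hand, assuming a client mount at /mnt/lustre (path hypothetical); the lctl line is roughly what the test-framework "cancel_lru_locks osc" helper does:

            f=/mnt/lustre/blocktest
            dd if=/dev/zero of=$f bs=1M count=2 oflag=sync
            stat -c %b $f       # client-cached blocks count; may still show the pre-sync value
            # drop the client's osc DLM locks so the next getattr goes back to the OST
            lctl set_param -n "ldlm.namespaces.*osc*.lru_size=clear"
            stat -c %b $f       # blocks count refetched from the OST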

            BTW, since my change #8467 also enables full debug logs and has already landed, we may now be able to use logs gathered during recent failures of sanity-hsm/test_20 showing the "wrong block number after archive: …" symptom to troubleshoot.

            bfaccini Bruno Faccini (Inactive) added a comment

            I didn't see this comment before I inspected the patch. However, this implies a different bug in ZFS. Namely, the "small_file" helper in sanity-hsm is creating files with conv=fsync, so the fsync on close should flush all the blocks from the client cache and onto disk on the OST. After that point the blocks count for the file should not change. Likewise, if the locks are cancelled at the client, it should flush all the dirty blocks from cache to disk on the OST.

            This makes me wonder if ZFS is implementing fsync properly at the server.

            adilger Andreas Dilger added a comment

            My previous patch actually skipped the wrong test.

            http://review.whamcloud.com/8575

            utopiabound Nathaniel Clark added a comment

            ZFS reports the number of blocks actually written to disk, not the number of blocks the file would use once fully written to disk, so the number actually changes after the write completes in ZFS. It's not a locking or consistency issue in Lustre.

            utopiabound Nathaniel Clark added a comment
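            The same delayed block accounting can be seen on a plain local ZFS dataset, independent of Lustre; a small illustration with a hypothetical dataset mounted at /tank/test:

            dd if=/dev/zero of=/tank/test/f bs=1M count=2
            stat -c %b /tank/test/f   # often still small: blocks not yet committed to disk
            sleep 10                  # give the ZFS transaction group time to commit
            stat -c %b /tank/test/f   # now reflects blocks actually allocated on disk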

            The OSS is definitely returning a different number of blocks after doing hsm_archive than it did before. I checked this on my local setup with wireshark, and OST_GETATTR is replying with a different number of blocks than before the hsm_archive.

            It may be that the passing tests are the ones hitting the cached value on the client and the failure is the one getting the value from the OST.

            OST debug log:

            1386629604.515513:0:1344:0:(ldlm_lockd.c:1167:ldlm_handle_enqueue0()) ### server-side enqueue handler START
            1386629604.515525:0:1344:0:(ldlm_lockd.c:1253:ldlm_handle_enqueue0()) ### server-side enqueue handler, new lock created ns: filter-lustre-OST0001_UUID lock: ffff880072cb7740/0xb2c4c9585d128715 lrc: 2/0,0 mode: --/PR res: [0x21aa:0x0:0x0].0 rrc: 2 type: EXT [0->0] (req 0->0) flags: 0x40000000000000 nid: local remote: 0x59a0a522abcc8a58 expref: -99 pid: 1344 timeout: 0 lvb_type: 0
            1386629604.515559:0:1344:0:(client.c:1473:ptlrpc_send_new_req()) Sending RPC pname:cluuid:pid:xid:nid:opc ll_ost00_002:lustre-OST0001_UUID:1344:1453973284418480:10.10.16.236@tcp:106
            1386629604.515582:0:1344:0:(client.c:2123:ptlrpc_set_wait()) set ffff880072921ec0 going to sleep for 6 seconds
            1386629604.516784:0:1344:0:(ofd_lvb.c:195:ofd_lvbo_update()) res: [0x21aa:0x0:0x0] updating lvb size: 0 -> 2097152
            1386629604.516789:0:1344:0:(ofd_lvb.c:207:ofd_lvbo_update()) res: [0x21aa:0x0:0x0] updating lvb atime: 0 -> 1386629600
            1386629604.516799:0:1344:0:(ofd_lvb.c:265:ofd_lvbo_update()) res: [0x100000000:0x21aa:0x0] updating lvb blocks from disk: 1 -> 4103
            1386629604.516813:0:1344:0:(client.c:1840:ptlrpc_check_set()) Completed RPC pname:cluuid:pid:xid:nid:opc ll_ost00_002:lustre-OST0001_UUID:1344:1453973284418480:10.10.16.236@tcp:106
            
            utopiabound Nathaniel Clark added a comment

            Andreas, I am not sure I understand you. Do you mean that on ZFS, to get the correct st_blocks from "stat -c %b" just after make_small(), we need to run "cancel_lru_locks osc" in between?

            bfaccini Bruno Faccini (Inactive) added a comment

            Hmm, looking at the test more closely now that we have a test failure, it seems the test itself is defective. The blocks count == 1 should not be true after ARCHIVE (as was checked before the 8467 patch), only after RELEASE. The failed test reported:

            # error "$f: wrong size after archive: $size != $orig_size"
            
            wrong block number after archive:  4103 != 1
            

            so it is setting "orig_size" == 1.

            That the test passes on ldiskfs is likely just a race, because it is a bit faster and the client does not have the actual on-disk blocks count, so it will return "st_blocks=1" to userspace to avoid bugs in tar/rsync that think st_blocks=0 means the file has no data in it. The zfs test is a bit slower to archive, and the client gets the actual st_blocks value back, and (incorrectly) fails this check.

            I suspect that even with make_small using conv=fsync, the client still has the original blocks count cached, so what is needed to fix this bug is to flush the locks on the client before getting orig_blocks:

            cancel_lru_locks osc

            That way, it returns an accurate value for st_blocks for later testing.

            Nathaniel, can you please submit a patch ASAP so that maybe we can land this by tomorrow?

            adilger Andreas Dilger added a comment
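            A minimal sketch of what such a change to test_21 in sanity-hsm.sh could look like, using the usual test-framework variables and the make_small helper mentioned above (a sketch, not the actual patch):

            local f=$DIR/$tdir/$tfile
            make_small $f
            # flush the client's osc locks so stat returns the on-disk blocks count
            cancel_lru_locks osc
            local orig_size=$(stat -c "%s" $f)
            local orig_blocks=$(stat -c "%b" $f)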

            There was a failure (https://maloo.whamcloud.com/test_sets/8d7a2b66-6134-11e3-bd66-52540035b04c) after #8467. It is, so far, a unique occurrence (and does not appear to be patch related).

            utopiabound Nathaniel Clark added a comment

            In case 8467 doesn't fix 21 for ZFS, here is a patch to EXCEPT it: http://review.whamcloud.com/8503

            utopiabound Nathaniel Clark added a comment

            People

              bfaccini Bruno Faccini (Inactive)
              keith Keith Mannthey (Inactive)
              Votes: 0
              Watchers: 9
