Lustre / LU-4345

failed to update accounting ZAP for user

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Critical
    • Fix Version/s: Lustre 2.6.0, Lustre 2.5.3
    • Affects Version/s: None
    • Environment: Lustre 2.4.0-19chaos
    • Severity: 3
    • 11907

    Description

      We are using lustre 2.4.0-19chaos on our servers running with the ZFS OSD. On some of the OSS nodes we are seeing messages like this:

      Nov  6 00:06:29 stout8 kernel: LustreError: 14909:0:(osd_object.c:973:osd_attr_set()) fsrzb-OST0007: failed to update accounting ZAP for user 132245 (-2)
      Nov  6 00:06:29 stout8 kernel: LustreError: 14909:0:(osd_object.c:973:osd_attr_set()) Skipped 5 previous similar messages
      Nov  6 00:06:38 stout16 kernel: LustreError: 15266:0:(osd_object.c:973:osd_attr_set()) fsrzb-OST000f: failed to update accounting ZAP for user 122392 (-2)
      Nov  6 00:06:38 stout16 kernel: LustreError: 15266:0:(osd_object.c:973:osd_attr_set()) Skipped 3 previous similar messages
      Nov  6 00:06:40 stout12 kernel: LustreError: 15801:0:(osd_object.c:973:osd_attr_set()) fsrzb-OST000b: failed to update accounting ZAP for user 122708 (-2)
      Nov  6 00:06:40 stout12 kernel: LustreError: 15801:0:(osd_object.c:973:osd_attr_set()) Skipped 4 previous similar messages
      
      Nov  7 00:31:36 porter31 kernel: LustreError: 7704:0:(osd_object.c:973:osd_attr_set()) lse-OST001f: failed to update accounting ZAP for user 54916 (-2)
      Nov  7 02:53:05 porter19 kernel: LustreError: 9380:0:(osd_object.c:973:osd_attr_set()) lse-OST0013: failed to update accounting ZAP for user 7230 (-2)
      
      Dec  3 12:01:21 stout7 kernel: Lustre: Skipped 3 previous similar messages
      Dec  3 13:52:30 stout4 kernel: LustreError: 15806:0:(osd_object.c:967:osd_attr_set()) fsrzb-OST0003: failed to update accounting ZAP for user 1752876224 (-2)
      Dec  3 13:52:30 stout4 kernel: LustreError: 15806:0:(osd_object.c:967:osd_attr_set()) Skipped 3 previous similar messages
      Dec  3 13:52:30 stout1 kernel: LustreError: 15324:0:(osd_object.c:967:osd_attr_set()) fsrzb-OST0000: failed to update accounting ZAP for user 1752876224 (-2)
      Dec  3 13:52:30 stout1 kernel: LustreError: 15784:0:(osd_object.c:967:osd_attr_set()) fsrzb-OST0000: failed to update accounting ZAP for user 1752876224 (-2)
      Dec  3 13:52:30 stout14 kernel: LustreError: 16345:0:(osd_object.c:967:osd_attr_set()) fsrzb-OST000d: failed to update accounting ZAP for user 1752876224 (-2)
      Dec  3 13:52:30 stout12 kernel: LustreError: 32355:0:(osd_object.c:967:osd_attr_set()) fsrzb-OST000b: failed to update accounting ZAP for user 1752876224 (-2)
      Dec  3 13:52:30 stout2 kernel: LustreError: 15145:0:(osd_object.c:967:osd_attr_set()) fsrzb-OST0001: failed to update accounting ZAP for user 1752876224 (-2)
      Dec  3 13:52:30 stout10 kernel: LustreError: 14570:0:(osd_object.c:967:osd_attr_set()) fsrzb-OST0009: failed to update accounting ZAP for user 1752876224 (-2)
      

      First of all, these messages are terrible. If you look at osd_attr_set(), there are four exactly identical messages that can be printed. OK, granted, we can look them up by line number, but it would be even better to make them unique.

      So looking them up by line numbers 967 and 973, it would appear that we have hit at least the first two of the "failed to update accounting ZAP for user" messages.

      Note that the UID numbers do not look correct to me. Many of them are clearly not in the valid UID range. But then I don't completely understand what is going on here yet.
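      For illustration, here is a minimal sketch of one way the four identical messages could be made unique, by saying which accounting update failed. This is not the actual osd_attr_set() code: the function, its parameters, and "acct_obj" are hypothetical, and only CERROR() and ZFS's zap_increment_int() are existing interfaces.

      /* Hedged sketch, not the real osd-zfs code: give each accounting ZAP
       * failure its own message so the failing call site is obvious without
       * a source line number. */
      static int osd_acct_id_change(const char *dev, objset_t *os,
                                    uint64_t acct_obj, uint64_t old_id,
                                    uint64_t new_id, dmu_tx_t *tx)
      {
              int rc;

              /* drop the old owner's usage from the accounting ZAP */
              rc = -zap_increment_int(os, acct_obj, old_id, -1, tx);
              if (rc)
                      CERROR("%s: failed to decrement accounting ZAP for old id %llu: rc = %d\n",
                             dev, (unsigned long long)old_id, rc);

              /* charge the new owner in the accounting ZAP */
              rc = -zap_increment_int(os, acct_obj, new_id, 1, tx);
              if (rc)
                      CERROR("%s: failed to increment accounting ZAP for new id %llu: rc = %d\n",
                             dev, (unsigned long long)new_id, rc);

              return rc;
      }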

    Attachments

    Issue Links

    Activity

            [LU-4345] failed to update accounting ZAP for user
            chunteraa Chris Hunter (Inactive) made changes -
            Link New: This issue is related to DDN-111 [ DDN-111 ]
            morrone Christopher Morrone (Inactive) made changes -
            Labels Original: mn4 New: llnl mn4
            pjones Peter Jones made changes -
            Fix Version/s New: Lustre 2.5.3 [ 11100 ]
            Labels Original: mn4 mq314 New: mn4
            yujian Jian Yu added a comment -

            Here is the back-ported patch for Lustre b2_5 branch: http://review.whamcloud.com/11435

            pjones Peter Jones made changes -
            Link New: This issue is related to ATP-4 [ ATP-4 ]
            adilger Andreas Dilger made changes -
            Link New: This issue is related to LU-5188 [ LU-5188 ]

            behlendorf Brian Behlendorf added a comment -

            > spa_history_log() and spa_history_log_sync() don't call dmu_tx_create_assigned()

            They don't call it because there aren't any atomicity requirements for the history. But if you need to make sure all the tx's end up being atomic, you'll need dmu_tx_create_assigned(), which is what the quota code uses. Even better, just use the tx passed to the synctask. This is created in dsl_pool_sync() and will be tied to the txg being synced.
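            A minimal sketch of the second option above, i.e. reusing the tx handed to the sync callback instead of creating one. struct acct_update, its fields, and the callback name are hypothetical; the dsl_syncfunc_t callback shape, zap_increment_int(), and VERIFY0() are existing ZFS interfaces.

            /* Hedged sketch: a dsl_syncfunc_t-style callback that applies an
             * accounting delta using the tx it is handed.  That tx is created
             * in dsl_pool_sync() for the txg being synced, so no separate
             * dmu_tx_create_assigned() call is needed. */
            struct acct_update {
                    objset_t        *au_os;         /* objset holding the accounting ZAP */
                    uint64_t        au_acct_obj;    /* accounting ZAP object number */
                    uint64_t        au_id;          /* uid or gid being adjusted */
                    int64_t         au_delta;       /* amount to add or subtract */
            };

            static void
            acct_update_sync(void *arg, dmu_tx_t *tx)
            {
                    struct acct_update *au = arg;

                    /* the tx is already assigned to the syncing txg, so this
                     * update is guaranteed to land in that txg */
                    VERIFY0(zap_increment_int(au->au_os, au->au_acct_obj,
                                              au->au_id, au->au_delta, tx));
            }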

            bzzz Alex Zhuravlev added a comment -

            OK, I'll have a closer look at the debug options, but notice that spa_history_log() and spa_history_log_sync() don't call dmu_tx_create_assigned(), and this still works?

            I'll also try to use dmu_tx_create_assigned(), thanks.

            behlendorf Brian Behlendorf added a comment -

            Perhaps it's just me, but I find it hard to walk the code and verify that the tx is being constructed correctly. Are we perhaps taking extra holds now? One thing you could try is to build ZFS with the --enable-debug-dmu-tx option. This will enable a variety of checks to ensure that tx's are constructed and managed properly. If they're not, you'll hit an ASSERT. The checks are somewhat expensive, so they're disabled by default.

            As for doing things the way the internal ZFS code does: quota is managed more like I described above. If you look at dsl_pool_sync(), which is called in the syncing context like a synctask, you'll see that it creates a new tx for the correct txg with dmu_tx_create_assigned(). It then goes through the list of dirty dnodes per dataset and updates the ZAPs accordingly. I don't see why Lustre couldn't do something similar.
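            A corresponding sketch of the dsl_pool_sync()-style pattern described above: create a tx that is pre-assigned to the txg being synced, apply the pending accounting updates, then commit. Everything except dmu_tx_create_assigned(), zap_increment_int(), VERIFY0(), and dmu_tx_commit() is illustrative.

            /* Hedged sketch of the pattern above, not actual Lustre or ZFS code.
             * "struct acct_delta" and the surrounding loop are illustrative. */
            struct acct_delta {
                    uint64_t        ad_id;          /* uid or gid */
                    int64_t         ad_delta;       /* accounting change for this id */
            };

            static void
            acct_sync(dsl_pool_t *dp, objset_t *os, uint64_t acct_obj,
                      uint64_t txg, struct acct_delta *deltas, int ndeltas)
            {
                    /* like dsl_pool_sync(): the tx is created pre-assigned to
                     * the txg being synced, so dmu_tx_assign() is never needed
                     * and the updates cannot slip into a later txg */
                    dmu_tx_t *tx = dmu_tx_create_assigned(dp, txg);
                    int i;

                    for (i = 0; i < ndeltas; i++)
                            VERIFY0(zap_increment_int(os, acct_obj, deltas[i].ad_id,
                                                      deltas[i].ad_delta, tx));

                    dmu_tx_commit(tx);
            }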

            bzzz Alex Zhuravlev added a comment -

            Hmm, osd_trans_stop() does this, otherwise we would block in the very first txg? The sort-of-interesting thing is that the patch gets stuck quite rarely. It actually passed many Maloo runs, and it was hard to hit the issue locally as well.

    People

      Assignee: niu Niu Yawei (Inactive)
      Reporter: morrone Christopher Morrone (Inactive)
      Votes: 0
      Watchers: 14

    Dates

      Created:
      Updated:
      Resolved: