Lustre / LU-7926

MDS sits idle with extreme slow response to clients

Details

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Blocker
    • Affects Version/s: Lustre 2.5.3

    Description

      We had our MDS become idle with no errors on the console or in the logs. It responds extremely slowly (it takes minutes just to do an ls). This has happened twice within 24 hours. We took a crash dump in both cases. The crash dumps show all mdt threads sitting in 'qsd_op_begin'
      like this

      PID: 14754  TASK: ffff88400a83c040  CPU: 7   COMMAND: "mdt01_002"
       #0 [ffff883f634cd500] schedule at ffffffff81565692
       #1 [ffff883f634cd5d8] schedule_timeout at ffffffff81566572
       #2 [ffff883f634cd688] qsd_op_begin at ffffffffa0d04909 [lquota]
       #3 [ffff883f634cd738] osd_declare_qid at ffffffffa0d88449 [osd_ldiskfs]
       #4 [ffff883f634cd798] osd_declare_inode_qid at ffffffffa0d88702 [osd_ldiskfs]
       #5 [ffff883f634cd7f8] osd_declare_object_create at ffffffffa0d65d53 [osd_ldiskfs]
       #6 [ffff883f634cd858] lod_declare_object_create at ffffffffa0f4b482 [lod]
       #7 [ffff883f634cd8b8] mdd_declare_object_create_internal at ffffffffa0fa78cf [mdd]
       #8 [ffff883f634cd918] mdd_declare_create at ffffffffa0fbb4ce [mdd]
       #9 [ffff883f634cd988] mdd_create at ffffffffa0fbc631 [mdd]
      #10 [ffff883f634cda88] mdo_create at ffffffffa0e88058 [mdt]
      #11 [ffff883f634cda98] mdt_reint_open at ffffffffa0e923f4 [mdt]
      #12 [ffff883f634cdb78] mdt_reint_rec at ffffffffa0e7a481 [mdt]
      #13 [ffff883f634cdb98] mdt_reint_internal at ffffffffa0e5fed3 [mdt]
      #14 [ffff883f634cdbd8] mdt_intent_reint at ffffffffa0e6045e [mdt]
      #15 [ffff883f634cdc28] mdt_intent_policy at ffffffffa0e5dc3e [mdt]
      #16 [ffff883f634cdc68] ldlm_lock_enqueue at ffffffffa075e2c5 [ptlrpc]
      #17 [ffff883f634cdcd8] ldlm_handle_enqueue0 at ffffffffa0787ebb [ptlrpc]
      #18 [ffff883f634cdd48] mdt_enqueue at ffffffffa0e5e106 [mdt]
      #19 [ffff883f634cdd68] mdt_handle_common at ffffffffa0e62ada [mdt]
      #20 [ffff883f634cddb8] mds_regular_handle at ffffffffa0e9f505 [mdt]
      #21 [ffff883f634cddc8] ptlrpc_server_handle_request at ffffffffa07b70c5 [ptlrpc]
      #22 [ffff883f634cdea8] ptlrpc_main at ffffffffa07b989d [ptlrpc]
      #23 [ffff883f634cdf48] kernel_thread at ffffffff8100c28a
      

      I am attaching the backtraces from the two crash dumps and also the Lustre debug dumps.
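
      For anyone reading the attached traces, the blocked state above boils down to a service thread parking on a timed wait because a quota acquire for the same ID is already in flight. Below is a minimal, self-contained userspace sketch of that wait pattern; all names (quota_entry, quota_acquire, ...) are hypothetical, and it only models the behaviour shown in the backtrace, it is not the actual lquota code.

      #include <pthread.h>
      #include <stdbool.h>
      #include <stdio.h>
      #include <time.h>

      /* Hypothetical model of one quota entry on the MDT (illustration only). */
      struct quota_entry {
              pthread_mutex_t lock;
              pthread_cond_t  cond;
              bool            dqacq_in_flight;  /* an acquire RPC is pending   */
              bool            have_quota;       /* local quota space available */
      };

      /* Model of the blocking path seen in qsd_op_begin(): if another thread
       * already sent a DQACQ for this ID, sleep until its reply arrives or
       * we hit the timeout. */
      static int quota_acquire(struct quota_entry *qe, int timeout_sec)
      {
              struct timespec deadline;
              int rc = 0;

              clock_gettime(CLOCK_REALTIME, &deadline);
              deadline.tv_sec += timeout_sec;

              pthread_mutex_lock(&qe->lock);
              while (!qe->have_quota && qe->dqacq_in_flight && rc == 0)
                      rc = pthread_cond_timedwait(&qe->cond, &qe->lock, &deadline);
              pthread_mutex_unlock(&qe->lock);

              /* ETIMEDOUT here corresponds to the thread finally giving up,
               * which is when the client would see a retryable error. */
              return qe->have_quota ? 0 : rc;
      }

      int main(void)
      {
              struct quota_entry qe = {
                      .lock            = PTHREAD_MUTEX_INITIALIZER,
                      .cond            = PTHREAD_COND_INITIALIZER,
                      .dqacq_in_flight = true,   /* nobody ever answers it */
                      .have_quota      = false,
              };

              printf("quota_acquire() returned %d (ETIMEDOUT expected)\n",
                     quota_acquire(&qe, 2));
              return 0;
      }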

      Attachments

        1. bt.all.Mar26.gz
          109 kB
        2. bt.all.Mar27.05.25.28.gz
          104 kB
        3. debug.out.1.gz
          0.2 kB
        4. debug.out.2.gz
          0.2 kB


          Activity

            [LU-7926] MDS sits idle with extreme slow response to clients
            bzzz Alex Zhuravlev added a comment - - edited

            Well, exactly. The fact that we send an RPC to get another quota unit could be the point at which we interrupt and return -EINPROGRESS to the client? I guess a potential problem is avoiding a livelock where a single unit is ping-ponged among 2+ servers because nobody consumes it right away.

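            To make that suggestion concrete, here is a small sketch of the fail-fast variant described above: if granting the operation would require sending a DQACQ RPC, the service thread returns -EINPROGRESS immediately instead of sleeping on the reply, and the client retries later. All names are hypothetical; this is a model of the idea, not Lustre code.

            #include <errno.h>
            #include <stdbool.h>
            #include <stdio.h>

            /* Hypothetical per-ID quota state on the MDT (illustration only). */
            struct qentry {
                    long granted;   /* space already granted locally        */
                    long used;      /* space consumed so far                */
                    bool acquiring; /* a DQACQ RPC to the master is pending */
            };

            /* Non-blocking policy: consume local quota if we can, otherwise
             * kick off (or join) an acquire and fail fast with -EINPROGRESS
             * so the service thread never sleeps on the RPC. */
            static int quota_reserve_nowait(struct qentry *qe, long need)
            {
                    if (qe->used + need <= qe->granted) {
                            qe->used += need;     /* enough local quota, proceed */
                            return 0;
                    }

                    if (!qe->acquiring)
                            qe->acquiring = true; /* would send DQACQ asynchronously */

                    /* Tell the client to retry instead of parking this thread.
                     * The ping-pong concern above would be addressed by holding
                     * on to the acquired unit for a while once it arrives. */
                    return -EINPROGRESS;
            }

            int main(void)
            {
                    struct qentry qe = { .granted = 10, .used = 10, .acquiring = false };

                    printf("reserve -> %d (expect %d)\n",
                           quota_reserve_nowait(&qe, 1), -EINPROGRESS);
                    return 0;
            }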

            niu Niu Yawei (Inactive) added a comment -

            Alex, the problem is that the thread sending the DQACQ has to wait for the RPC timeout; once it hits the timeout, it will reply -EINPROGRESS to the client. So literally it is not a deadlock but a livelock problem, and our solution is to use a different set of threads for sending and handling the DQACQ requests.

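            For readers who have not followed the referenced work: the essence of the fix described above is that the threads that sleep waiting for a DQACQ reply must not be the only threads able to process DQACQ requests. The sketch below is a minimal userspace model of that separation, with hypothetical names; it is not the actual patch.

            #include <pthread.h>
            #include <stdbool.h>
            #include <stdio.h>
            #include <unistd.h>

            /* Shared state: service threads wait for quota, and a dedicated
             * quota thread (outside the service pool) grants it.  This only
             * models the "separate set of threads" idea, not Lustre code. */
            static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
            static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
            static bool quota_granted;

            /* Service thread: stands in for an MDT thread handling a create;
             * it blocks until the quota thread has processed the acquire. */
            static void *service_thread(void *arg)
            {
                    pthread_mutex_lock(&lock);
                    while (!quota_granted)
                            pthread_cond_wait(&cond, &lock);
                    pthread_mutex_unlock(&lock);
                    printf("service thread %ld: create proceeds\n", (long)arg);
                    return NULL;
            }

            /* Dedicated quota thread: handles the acquire even though every
             * service thread is blocked, which is what breaks the livelock. */
            static void *quota_thread(void *arg)
            {
                    (void)arg;
                    sleep(1);               /* pretend to talk to the master */
                    pthread_mutex_lock(&lock);
                    quota_granted = true;
                    pthread_cond_broadcast(&cond);
                    pthread_mutex_unlock(&lock);
                    return NULL;
            }

            int main(void)
            {
                    pthread_t svc[4], qt;
                    long i;

                    for (i = 0; i < 4; i++) /* the whole service pool blocks */
                            pthread_create(&svc[i], NULL, service_thread, (void *)i);
                    pthread_create(&qt, NULL, quota_thread, NULL);

                    for (i = 0; i < 4; i++)
                            pthread_join(svc[i], NULL);
                    pthread_join(qt, NULL);
                    return 0;
            }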

            bzzz Alex Zhuravlev added a comment -

            I'd think that the MDT shouldn't get blocked in this case at all - just return to the client with -EINPROGRESS or something?

            niu Niu Yawei (Inactive) added a comment - - edited

            Port to b2_5_fe: http://review.whamcloud.com/#/c/19250/

            niu Niu Yawei (Inactive) added a comment -

            Ah, I think it's a livelock problem: from the stack trace we can see that all mdt service threads were busy sending DQACQ requests and there were no threads available to handle those requests. This has been fixed in master by LU-6433.

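            A rough way to see why this presents as an idle but unresponsive MDS: while the livelock holds, each service thread is tied up for roughly one RPC timeout per request, so the MDT can only turn over about nthreads / timeout requests per second. The numbers in the little calculation below are illustrative assumptions, not values measured on this system.

            #include <stdio.h>

            int main(void)
            {
                    /* Illustrative assumptions, not measured values. */
                    double rpc_timeout_sec = 100.0;  /* time a blocked DQACQ waits  */
                    double nthreads        = 512.0;  /* MDT service threads in pool */

                    /* Each thread turns over roughly once per RPC timeout, so even
                     * trivial requests queue behind this tiny effective rate. */
                    printf("requests turned over per second: ~%.1f\n",
                           nthreads / rpc_timeout_sec);
                    return 0;
            }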

            mhanafi Mahmoud Hanafi added a comment -

            Going through the logs, there is nothing to indicate that there was a loss of connections between the OSTs, MDT, or clients. There was a high load on a number of OSSes at 9:27:00. Are there any quota allocation adjustments once you are over your inode soft limit?


            niu Niu Yawei (Inactive) added a comment -

            So it sounds like the user who was over their inode soft quota was creating a lot of files. Could this be a scaling issue in inode quota acquire?

            Right, from the log we can see the user was creating lots of files. I don't think it's a scaling problem, because the root cause is that an earlier DQACQ from the MDT somehow never got a reply, but unfortunately that part of the log is missing.

            My speculation is: there was a network hiccup that caused all OSTs to lose their connections and then reconnect, because there were tons of quota lock acquires from the OSTs (a pure quota lock acquire should only happen when a quota slave starts up or reconnects to the master). And probably that hiccup caused the DQACQ reply to be lost?


            mhanafi Mahmoud Hanafi added a comment -

            The user created 90K files during their 30K core run.


            mhanafi Mahmoud Hanafi added a comment -

            The OSTs/MDT and networks were not having any issues. The user with UID 11361 was running a 30K core job and they have an inode quota of 600K. They currently have over 530K files, which is over their soft quota of 500K.

            I have sent a request to the user to get additional info about their job.

            So it sounds like the user who was over their inode soft quota was creating a lot of files. Could this be a scaling issue in inode quota acquire?


            niu Niu Yawei (Inactive) added a comment -

            It looks like lots of MDT threads (creating objects) were trying to acquire quota for id 11361, but they were all waiting on an in-flight DQACQ. I also found that lots of OSTs were acquiring quota locks from the quota master; probably the MDT was busy with these acquire requests and the DQACQ from the MDT wasn't processed in time?

            Were the OSTs just rebooted (or had the network to the MDT just recovered) when this happened? I'm wondering why there are so many requests from different OSTs to acquire quota locks.

            pjones Peter Jones added a comment -

            Niu

            Could you please advise here?

            Thanks

            Peter


            People

              Assignee: niu Niu Yawei (Inactive)
              Reporter: mhanafi Mahmoud Hanafi
              Votes: 0
              Watchers: 8
