Lustre LU-6643

write hang up with small max_cached_mb

Details

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Major

    Description

      Running multiple WRITEs at the same time with a small max_cached_mb results in a hang-up.
      According to my investigation, this happens because the WRITEs consume the LRU slots all at once, but none of them gets enough slots to start its I/O. To make matters worse, none of them releases the LRU slots it has already reserved until its own I/O completes. That is why all the WRITEs end up waiting for each other forever.

      PID: 4896   TASK: ffff880c35f1e040  CPU: 3   COMMAND: "dd"
       #0 [ffff880bb9a33788] schedule at ffffffff81528162
       #1 [ffff880bb9a33850] osc_page_init at ffffffffa0b1392d [osc]
       #2 [ffff880bb9a338f0] lov_page_init_raid0 at ffffffffa0b7d481 [lov]
       #3 [ffff880bb9a33960] lov_page_init at ffffffffa0b740c1 [lov]
       #4 [ffff880bb9a33970] cl_page_alloc at ffffffffa0f9b40a [obdclass]
       #5 [ffff880bb9a339d0] cl_page_find at ffffffffa0f9b79e [obdclass]
       #6 [ffff880bb9a33a30] ll_write_begin at ffffffffa18229ec [lustre]
       #7 [ffff880bb9a33ab0] generic_file_buffered_write at ffffffff81120703
       #8 [ffff880bb9a33b80] __generic_file_aio_write at ffffffff81122160
       #9 [ffff880bb9a33c40] vvp_io_write_start at ffffffffa1833f3e [lustre]
      #10 [ffff880bb9a33ca0] cl_io_start at ffffffffa0f9d63a [obdclass]
      #11 [ffff880bb9a33cd0] cl_io_loop at ffffffffa0fa11c4 [obdclass]
      #12 [ffff880bb9a33d00] ll_file_io_generic at ffffffffa17d75c4 [lustre]
      #13 [ffff880bb9a33e20] ll_file_aio_write at ffffffffa17d7d13 [lustre]
      #14 [ffff880bb9a33e80] ll_file_write at ffffffffa17d83a9 [lustre]
      #15 [ffff880bb9a33ef0] vfs_write at ffffffff811893a8
      #16 [ffff880bb9a33f30] sys_write at ffffffff81189ca1
      #17 [ffff880bb9a33f80] tracesys at ffffffff8100b288 (via system_call)
          RIP: 0000003c990db790  RSP: 00007fffdc56e778  RFLAGS: 00000246
          RAX: ffffffffffffffda  RBX: ffffffff8100b288  RCX: ffffffffffffffff
          RDX: 0000000000400000  RSI: 00007f4ed96db000  RDI: 0000000000000001
          RBP: 00007f4ed96db000   R8: 00000000ffffffff   R9: 0000000000000000
          R10: 0000000000402003  R11: 0000000000000246  R12: 00007f4ed96dafff
          R13: 0000000000000000  R14: 0000000000400000  R15: 0000000000400000
          ORIG_RAX: 0000000000000001  CS: 0033  SS: 002b
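
      To illustrate the cycle, here is a minimal stand-alone sketch (plain pthreads; all names are made up, this is not Lustre code) in which each writer must collect a fixed number of slots from a small pool before it can "do I/O", and slots only come back when a writer's whole I/O finishes. With two writers and a pool that cannot satisfy both, every writer ends up asleep in the reservation step, just like the dd task in the backtrace above:

      /*
       * Toy model of the deadlock (NOT Lustre code): a small pool of "LRU
       * slots" stands in for max_cached_mb, every writer must reserve one
       * slot per page before it can start its I/O, and slots are only
       * returned when a writer's whole I/O has completed.
       */
      #include <pthread.h>
      #include <stdio.h>

      #define POOL_SLOTS   4   /* tiny cache, like max_cached_mb=4        */
      #define PAGES_PER_IO 3   /* slots each writer needs before its I/O  */
      #define NR_WRITERS   2

      static int slots_free = POOL_SLOTS;
      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t  slot_freed = PTHREAD_COND_INITIALIZER;
      static pthread_barrier_t barrier;

      /* Reserve one slot, sleeping until somebody frees one - the analogue
       * of a writer blocking in osc_page_init() in the backtrace above. */
      static void reserve_slot(void)
      {
              pthread_mutex_lock(&lock);
              while (slots_free == 0)
                      pthread_cond_wait(&slot_freed, &lock);
              slots_free--;
              pthread_mutex_unlock(&lock);
      }

      static void release_slots(int n)
      {
              pthread_mutex_lock(&lock);
              slots_free += n;
              pthread_cond_broadcast(&slot_freed);
              pthread_mutex_unlock(&lock);
      }

      static void *writer(void *arg)
      {
              int i;

              /* Step 1: every writer grabs part of the pool... */
              for (i = 0; i < POOL_SLOTS / NR_WRITERS; i++)
                      reserve_slot();
              pthread_barrier_wait(&barrier);     /* pool is now empty */

              /* Step 2: ...but still needs more slots before it can start
               * its I/O, and nobody gives anything back until its own I/O
               * completes, so every writer sleeps here forever. */
              for (; i < PAGES_PER_IO; i++)
                      reserve_slot();

              release_slots(PAGES_PER_IO);        /* never reached */
              printf("writer %ld finished\n", (long)arg);
              return NULL;
      }

      int main(void)
      {
              pthread_t t[NR_WRITERS];
              long i;

              pthread_barrier_init(&barrier, NULL, NR_WRITERS);
              for (i = 0; i < NR_WRITERS; i++)
                      pthread_create(&t[i], NULL, writer, (void *)i);
              for (i = 0; i < NR_WRITERS; i++)
                      pthread_join(t[i], NULL);   /* hangs, like the dd tasks */
              return 0;
      }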
      

      My solution is to let exactly one WRITE ignore the LRU limitation. That single "privileged" WRITE can make progress, and once it releases some LRU slots the next WRITE can go ahead (or become the next "privileged" WRITE itself).

      I know this is something of a dirty fix, but I think it is better than having all the WRITEs hang. The "privileged" WRITE exceeds max_cached_mb only by its own I/O size, which is a smaller problem than a hang-up.
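
      To make the workaround concrete, here is how the "privileged WRITE" idea could be bolted onto the toy reserve_slot()/release_slots() pair above (again, made-up names and a simplification, not the actual patch): the first writer that finds the pool empty is allowed to overshoot the limit instead of sleeping, so it eventually completes and hands its slots back.

      /* Toy "privileged writer" workaround: when the pool is empty, exactly
       * one writer may overshoot the limit instead of sleeping, so it can
       * finish its I/O and refill the pool. */
      static int privileged_in_use;              /* protected by 'lock' */

      static void reserve_slot_privileged(int *am_privileged)
      {
              pthread_mutex_lock(&lock);
              /* <= 0 because the privileged writer drives the count negative */
              while (slots_free <= 0) {
                      if (*am_privileged || !privileged_in_use) {
                              privileged_in_use = 1;
                              *am_privileged = 1;
                              slots_free--;      /* the bounded overshoot */
                              pthread_mutex_unlock(&lock);
                              return;
                      }
                      pthread_cond_wait(&slot_freed, &lock);
              }
              slots_free--;
              pthread_mutex_unlock(&lock);
      }

      static void release_slots_privileged(int n, int *am_privileged)
      {
              pthread_mutex_lock(&lock);
              slots_free += n;
              if (*am_privileged) {
                      privileged_in_use = 0;     /* next stuck writer may take over */
                      *am_privileged = 0;
              }
              pthread_cond_broadcast(&slot_freed);
              pthread_mutex_unlock(&lock);
      }

      Each writer keeps a local int am_privileged = 0 and passes it to both calls. The overshoot is bounded by that single writer's I/O size, which corresponds to the "exceeds max_cached_mb by its I/O size only" remark above.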

      By the way, you can reproduce the situation easily by setting a small max_cached_mb (e.g. 4) and running many dd commands (or similar) at the same time.
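
      For reference, a reproducer along those lines could look like the sketch below. The mount point /mnt/lustre, file names, and writer count are assumptions; shrinking the cache first with lctl set_param llite.*.max_cached_mb=4 is the setup step implied above.

      /*
       * Reproducer sketch: each child keeps appending to its own file on a
       * Lustre client (assumed mounted at /mnt/lustre), mimicking the
       * parallel dd processes. With a tiny max_cached_mb they soon all
       * block inside write().
       */
      #include <fcntl.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <sys/wait.h>
      #include <unistd.h>

      #define NR_WRITERS 16
      #define CHUNK      (4 << 20)      /* 4 MiB per write(), like dd bs=4M */

      int main(void)
      {
              static char buf[CHUNK];
              char path[64];
              int i;

              memset(buf, 'x', sizeof(buf));

              for (i = 0; i < NR_WRITERS; i++) {
                      if (fork() != 0)
                              continue;

                      /* child: write its own file until it blocks or fails */
                      snprintf(path, sizeof(path), "/mnt/lustre/lu6643.%d", i);
                      int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
                      if (fd < 0) {
                              perror("open");
                              exit(1);
                      }
                      for (;;)
                              if (write(fd, buf, sizeof(buf)) < 0) {
                                      perror("write");
                                      exit(1);
                              }
              }

              for (i = 0; i < NR_WRITERS; i++)
                      wait(NULL);       /* parent never returns once writers hang */
              return 0;
      }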

      I attached the backtrace taken in this situation.

      Attachments

        Issue Links

          Activity


            Hiroya Nozaki (Inactive) added a comment:

            OK, I abandoned the patch here.

            Andreas Dilger added a comment:

            Could you please retest this with the latest master, since it appears this was fixed by "LU-5108 osc: Performance tune for LRU". If the problem is gone, please abandon your patch.
            Hiroya Nozaki (Inactive) added a comment (edited):

            > if extra slots are allowed under some situation, why do you set that pathological max_cached_mb in the first place?

            Precisely. 4 MiB is an extreme case, but the situation can also be reproduced with 64 MiB, 128 MiB, and larger values if multiple writes are running. I developed a feature at my company (single-I/O performance improvement with multiple worker threads in the llite layer), which is why I sometimes have to face and deal with this problem; this patch was a kind of detour for that work.

            > the same policy can be applied to write with non-empty write queue on the LLITE layer.

            OK, I'll check it, thanks!

            Jinshan Xiong (Inactive) added a comment:

            Hmm... if extra slots are allowed under some situations, why do you set that pathological max_cached_mb in the first place?

            Anyway, if this really needs fixing, coo_page_init() should take a parameter (or an extra flag in cl_page) to tell OSC how to handle the situation when there are no LRU slots. For readahead, it isn't necessary to sleep waiting for LRU slots if they run out; the same policy can be applied to a write with a non-empty write queue at the LLITE layer.
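
            For illustration only, that flag-based idea could look roughly like the sketch below, using the same kind of toy slot pool as in the description (the enum, the helper, and the -EAGAIN convention are hypothetical; this is not the real coo_page_init()/cl_page interface): callers that can afford to give up simply fail the reservation instead of sleeping.

            /* Hypothetical per-caller LRU policy: callers that can afford to
             * give up (readahead, or a write whose write queue is already
             * non-empty) get -EAGAIN instead of sleeping for a slot. */
            #include <errno.h>
            #include <pthread.h>

            static int slots_free = 4;             /* toy LRU pool again */
            static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
            static pthread_cond_t  slot_freed = PTHREAD_COND_INITIALIZER;

            enum lru_reserve_policy {
                    LRU_RESERVE_BLOCK,      /* ordinary write: sleep for a slot  */
                    LRU_RESERVE_NONBLOCK,   /* readahead / queued write: bail out */
            };

            static int reserve_slot_policy(enum lru_reserve_policy policy)
            {
                    pthread_mutex_lock(&lock);
                    while (slots_free == 0) {
                            if (policy == LRU_RESERVE_NONBLOCK) {
                                    pthread_mutex_unlock(&lock);
                                    return -EAGAIN;  /* caller defers this page */
                            }
                            pthread_cond_wait(&slot_freed, &lock);
                    }
                    slots_free--;
                    pthread_mutex_unlock(&lock);
                    return 0;
            }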

            Gerrit Updater added a comment:

            Hiroya Nozaki (nozaki.hiroya@jp.fujitsu.com) uploaded a new patch: http://review.whamcloud.com/14932
            Subject: LU-6643 llite: write hang up with small max_cached_mb
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: 29997fd8157dc5b293db17f460583fc76b63c361

            People

              Assignee: WC Triage (wc-triage)
              Reporter: Hiroya Nozaki (Inactive) (nozaki)
              Votes: 0
              Watchers: 4
