  Lustre / LU-6643

write hang up with small max_cached_mb


Details

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Major
    • Severity: 3

    Description

      Running multiple WRITEs at the same time with a small max_cached_mb results in a hang-up.
      According to my survey, this happens because the WRITEs eat up the LRU slots all at once, yet none of them has enough LRU slots to start its I/O. To make matters worse, none of them will release the slots it has already reserved until its I/O completes. That is why all the WRITEs end up waiting for each other forever.

      PID: 4896   TASK: ffff880c35f1e040  CPU: 3   COMMAND: "dd"
       #0 [ffff880bb9a33788] schedule at ffffffff81528162
       #1 [ffff880bb9a33850] osc_page_init at ffffffffa0b1392d [osc]
       #2 [ffff880bb9a338f0] lov_page_init_raid0 at ffffffffa0b7d481 [lov]
       #3 [ffff880bb9a33960] lov_page_init at ffffffffa0b740c1 [lov]
       #4 [ffff880bb9a33970] cl_page_alloc at ffffffffa0f9b40a [obdclass]
       #5 [ffff880bb9a339d0] cl_page_find at ffffffffa0f9b79e [obdclass]
       #6 [ffff880bb9a33a30] ll_write_begin at ffffffffa18229ec [lustre]
       #7 [ffff880bb9a33ab0] generic_file_buffered_write at ffffffff81120703
       #8 [ffff880bb9a33b80] __generic_file_aio_write at ffffffff81122160
       #9 [ffff880bb9a33c40] vvp_io_write_start at ffffffffa1833f3e [lustre]
      #10 [ffff880bb9a33ca0] cl_io_start at ffffffffa0f9d63a [obdclass]
      #11 [ffff880bb9a33cd0] cl_io_loop at ffffffffa0fa11c4 [obdclass]
      #12 [ffff880bb9a33d00] ll_file_io_generic at ffffffffa17d75c4 [lustre]
      #13 [ffff880bb9a33e20] ll_file_aio_write at ffffffffa17d7d13 [lustre]
      #14 [ffff880bb9a33e80] ll_file_write at ffffffffa17d83a9 [lustre]
      #15 [ffff880bb9a33ef0] vfs_write at ffffffff811893a8
      #16 [ffff880bb9a33f30] sys_write at ffffffff81189ca1
      #17 [ffff880bb9a33f80] tracesys at ffffffff8100b288 (via system_call)
          RIP: 0000003c990db790  RSP: 00007fffdc56e778  RFLAGS: 00000246
          RAX: ffffffffffffffda  RBX: ffffffff8100b288  RCX: ffffffffffffffff
          RDX: 0000000000400000  RSI: 00007f4ed96db000  RDI: 0000000000000001
          RBP: 00007f4ed96db000   R8: 00000000ffffffff   R9: 0000000000000000
          R10: 0000000000402003  R11: 0000000000000246  R12: 00007f4ed96dafff
          R13: 0000000000000000  R14: 0000000000400000  R15: 0000000000400000
          ORIG_RAX: 0000000000000001  CS: 0033  SS: 002b
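
      To show the pattern more concretely, here is a minimal pthread sketch of what I believe is happening (purely illustrative, not Lustre code; the names reserve_slot, release_slots, writer and the constants are all made up). Each writer grabs its first slot, and once the pool is empty they all block waiting for slots, just like the dd process stuck in osc_page_init() above:

      #include <pthread.h>
      #include <stdio.h>

      #define TOTAL_SLOTS     4   /* tiny pool, like a very small max_cached_mb  */
      #define NWRITERS        4   /* concurrent WRITEs                           */
      #define PAGES_PER_WRITE 4   /* slots each WRITE needs before its I/O runs  */

      static int available = TOTAL_SLOTS;
      static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t  freed = PTHREAD_COND_INITIALIZER;
      static pthread_barrier_t all_started;

      static void reserve_slot(void)
      {
              pthread_mutex_lock(&lock);
              while (available == 0)                 /* wait, still holding earlier slots */
                      pthread_cond_wait(&freed, &lock);
              available--;
              pthread_mutex_unlock(&lock);
      }

      static void release_slots(int n)
      {
              pthread_mutex_lock(&lock);
              available += n;
              pthread_cond_broadcast(&freed);
              pthread_mutex_unlock(&lock);
      }

      static void *writer(void *arg)
      {
              int held = 1;

              reserve_slot();                        /* every WRITE grabs one slot ...   */
              pthread_barrier_wait(&all_started);    /* ... "all at once"                */
              while (held < PAGES_PER_WRITE) {       /* pool is now empty: waits forever */
                      reserve_slot();
                      held++;
              }
              printf("writer %ld: starting I/O\n", (long)arg);
              release_slots(held);                   /* slots only come back after I/O   */
              return NULL;
      }

      int main(void)
      {
              pthread_t tid[NWRITERS];
              long i;

              pthread_barrier_init(&all_started, NULL, NWRITERS);
              for (i = 0; i < NWRITERS; i++)
                      pthread_create(&tid[i], NULL, writer, (void *)i);
              for (i = 0; i < NWRITERS; i++)
                      pthread_join(tid[i], NULL);    /* never returns: mutual wait       */
              return 0;
      }

      With TOTAL_SLOTS equal to NWRITERS, each writer ends up holding one slot and waiting for three more, so the pool never refills; that is exactly the mutual wait described above.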
      

      My solution to this situation is to let exactly one WRITE ignore the LRU limitation. That single WRITE can go ahead, and once it makes progress we can expect it to release some LRU slots so the next WRITE can proceed (or another WRITE becomes the next "privileged" one).

      I know it is a kind of dirty fix, but I think it is better than having all the WRITEs hang. In practice the "privileged" WRITE exceeds max_cached_mb by at most its own I/O size, which is a much smaller problem than a hang-up.
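
      To make the idea concrete, here is how the "privileged" WRITE could look in the toy sketch above (again only an illustration of the approach, not the actual patch; privileged and have_privileged are made-up names). The two functions below would replace reserve_slot() and release_slots() from the previous sketch: when the pool is empty, exactly one writer is allowed to keep reserving past the limit, completes its I/O, and then gives everything back.

      static pthread_t privileged;
      static int       have_privileged;

      static void reserve_slot(void)
      {
              pthread_mutex_lock(&lock);
              while (available <= 0) {
                      if (!have_privileged) {        /* pool empty: claim the privilege  */
                              have_privileged = 1;
                              privileged = pthread_self();
                      }
                      if (pthread_equal(privileged, pthread_self()))
                              break;                 /* the one privileged WRITE goes on */
                      pthread_cond_wait(&freed, &lock);
              }
              available--;                           /* may go negative for that WRITE   */
              pthread_mutex_unlock(&lock);
      }

      static void release_slots(int n)
      {
              pthread_mutex_lock(&lock);
              available += n;
              if (have_privileged && pthread_equal(privileged, pthread_self()))
                      have_privileged = 0;           /* let the next WRITE take over     */
              pthread_cond_broadcast(&freed);
              pthread_mutex_unlock(&lock);
      }

      Since only one WRITE at a time holds the privilege, the overshoot is bounded by a single I/O size, which is the trade-off described above.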

      BTW, you can reproduce the situation easily by setting a small max_cached_mb (e.g. lctl set_param llite.*.max_cached_mb=4) and running many dd commands (or something similar) at the same time.

      I attached the backtrace taken in this situation.


            People

              Assignee: WC Triage (wc-triage)
              Reporter: Hiroya Nozaki (nozaki) (Inactive)
              Votes: 0
              Watchers: 4
