  Lustre / LU-416

Many processes hung consuming a lot of CPU in Lustre-Client page-cache lookups

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major

    Description

      Hi,

      At CEA, a problem is seen quite often on Lustre clients where processes are stuck consuming a lot of CPU time in the Lustre layers. Unfortunately, the only way to really recover for now is to reboot the impacted nodes (after waiting for them for several hours), since the involved processes are not killable.

      Crash dump analysis shows processes stuck with the following stack traces (the crash dumps can only be analyzed on the customer site):

      =========================================================
      _spin_lock()
      cl_page_gang_lookup()
      cl_lock_page_out()
      osc_lock_flush()
      osc_lock_cancel()
      cl_lock_cancel0()
      .....
      =========================================================

      and/or
      =========================================================
      __cond_resched()
      _cond_resched()
      cfs_cond_resched()
      cl_lock_page_out()
      osc_lock_flush()
      osc_lock_cancel()
      cl_lock_cancel0()
      .....
      =========================================================
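
      For completeness, stacks like these can also be sampled on a live node without taking a crash dump. This is only a sketch (sysrq must be enabled, and the grep pattern is just an example):

      echo 1 > /proc/sys/kernel/sysrq                 # make sure sysrq is enabled
      echo t > /proc/sysrq-trigger                    # dump all task stacks to the kernel log
      dmesg | grep -B 5 -A 20 cl_page_gang_lookup     # pick out threads stuck in the page-cache lookup path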

      Attached you will find three files:

      • node1330_dmesg is the dmesg of the faulty client;
      • node1330_lctl_dk is the 'lctl dk' output from the faulty client;
      • cmds.txt is the sequence of commands played to get the 'lctl dk' output.
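
      The exact command sequence is in cmds.txt; for readers without the attachment, a typical capture looks roughly like this (illustrative only, not the attached file):

      lctl clear                          # empty the Lustre debug buffer
      lctl set_param debug=-1             # enable all debug flags (very verbose)
      # ... reproduce the hang ...
      lctl dk > /tmp/node1330_lctl_dk     # dump the kernel debug log to a file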

      There are also "ll_imp_inval" threads stuck because of this problem, leaving OSCs in "IN"active state for far too long, which finally causes time-outs and EIOs for client processes.
      The data structures involved are cl_object_header.coh_page_guard and cl_object_header.coh_tree, respectively the spin lock and the radix tree used to manage the page cache associated with a Lustre-Client object.

      It seems to be a race around the OSC object pages lock/radix-tree when concurrent accesses occur (OOM, flush, invalidation, concurrent I/O). The problem seems to occur when, on the same
      Lustre client, there are concurrent accesses to the same Lustre objects, inducing competition for the associated lock and radix tree from multiple CPUs.
      To reproduce this issue, CEA uses one of their proprietary benchmarks. Basically, on a single node there are as many processes as cores, each process mapping a lot of memory. The processes write this memory to Lustre, preferably to the same OST, to reproduce the problem (see the sketch after the commands below). CEA noticed that the OSC inactivation performed during client eviction can be
      involved in reproducing the issue, so part of the reproducer can be to manually force client eviction on the OSS side by using either:
      lctl set_param obdfilter.<fs_name>-<OST_name>.evict_client=nid:<ipoib_clnt_addr>@<portal_name>
      or:
      echo 'nid:<ipoib_clnt_addr>@<portal_name>' > /proc/fs/lustre/obdfilter/<fs_name>/<OST_name>/evict_client
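
      A minimal sketch of such a reproducer and of the eviction command with the placeholders filled in is given below. The file-system name, OST index, mount point and client NID are purely hypothetical, and dd stands in for CEA's proprietary benchmark (which writes mmap'ed memory):

      # hypothetical reproducer sketch: one writer per core, all files pinned to a single OST
      mkdir -p /mnt/lustre/repro                       # mount point and directory are illustrative
      lfs setstripe -c 1 -i 3 /mnt/lustre/repro        # stripe count 1, OST index 3 (illustrative)
      for i in $(seq $(nproc)); do
          dd if=/dev/zero of=/mnt/lustre/repro/f$i bs=1M count=8192 &
      done
      wait

      # hypothetical eviction of the client from the OSS side (all values are examples only)
      lctl set_param obdfilter.testfs-OST0003.evict_client=nid:10.1.0.30@o2ib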

      In order to cope with production imperatives, CEA has set up a work-around that consists of freeing the page cache with "echo 1 > /proc/sys/vm/drop_caches". Doing so, clients are able to reconnect. By contrast, and it is interesting to note, clearing the LRU with "lctl set_param ldlm.namespaces.*.lru_size=clear" will hang the node!
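
      In shell form, the work-around and the contrasting command that hangs the node are, straight from the description above:

      # work-around that lets the clients reconnect: free the client page cache
      echo 1 > /proc/sys/vm/drop_caches

      # by contrast, clearing the LDLM LRU in this situation hangs the node
      lctl set_param ldlm.namespaces.*.lru_size=clear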

      Does this issue sound familiar?
      Of course, CEA really needs a fix for this as soon as possible.

      Sebastien.

      Attachments

        1. cmds.txt
          0.4 kB
        2. node1330_dmesg
          247 kB
        3. node1330_lctl_dk
          475 kB
        4. radix-intro.pdf
          43 kB

        Activity


          louveta Alexandre Louvet (Inactive) added a comment -

          About the cache size on the client side, I have a trace where the client 'only' has 3 GB of cached data (not that much).
          The CPU time consumed by the ldlm_bl and ll_imp_inval threads indicates that this situation had lasted for more than 13 hours, which also matches the difference between now and the time the recalls were issued. I did not get traces to check whether the 'cached' size was changing over time, but at least after 30 minutes the cleaning was not complete.

          Regarding the time spent in various kernel code, we had time to run oprofile: 96% was spent in cl_page_gang_lookup, 0.7% in radix_tree_gang_lookup.
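
          For reference, a profile like the one quoted above can be collected with the legacy opcontrol front-end. This is only a sketch; the vmlinux debuginfo path is an assumption and the workload must be left running while sampling:

          opcontrol --vmlinux=/usr/lib/debug/lib/modules/$(uname -r)/vmlinux   # debuginfo path is an assumption
          opcontrol --start
          # ... let the stuck threads run for a few minutes ...
          opcontrol --stop
          opreport --symbols | head -20    # top kernel symbols, e.g. cl_page_gang_lookup vs radix_tree_gang_lookup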
          jay Jinshan Xiong (Inactive) added a comment - - edited

          Indeed. From the dmesg in the attachment, though only a few CPUs (cpu 3, 4 and 10) were busy discarding pages, they were stuck grabbing the spin_lock. This is why I think contention on the object's radix tree lock would be a problem. Another thing I'm quite sure of is that there must be tons of pages cached at the client side (because there is no cache limit like the one we had in b18), so the client had to take a lot of time to drain them.

          It seems that it would be a lot of work to use a lockless pagecache in clio as the Linux kernel does. Maybe we can limit the number of cached pages at the client side so that a fast recovery is possible after an OST runs into a problem.

          Also, it may be interesting to see what is going on at the OST side. The client lost connections to a couple of OSTs in a short time.
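
          For reference only: on releases where a client-side cache limit is functional in CLIO (per the comment above, it was not at the time of this report), the amount of cached data can be capped through the llite max_cached_mb tunable. The 4096 value below is only an example:

          lctl set_param llite.*.max_cached_mb=4096   # cap the client page cache at ~4 GB (example value)
          lctl get_param llite.*.max_cached_mb        # check the current limit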


          bfaccini Bruno Faccini (Inactive) added a comment -

          I would like to clarify/correct Sebastien's "CEA confirms that processes are waiting for ages on this global spin lock." comment.

          In fact, all involved threads (as the evolving spin-lock counter indicates!) acquire the spin-lock one at a time, then go through the radix-tree, and finally release the spin-lock.

          And this pseudo-hang situation could be aggravated by the race on the spin-lock, the radix-tree search, and maybe also "false/unnecessary" trips for the same pages ...

          jay Jinshan Xiong (Inactive) added a comment -

          Hi Seba,

          If you have a test system, you may try the patch at http://review.whamcloud.com/#change,911. That patch is for LU-394, but I think it can mitigate the contention on coh_page_guard a little bit.

          I'm working on using an RCU radix tree to solve the problem.

          Jinshan
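
          If it helps, a change from review.whamcloud.com can be pulled into a lustre-release checkout roughly as follows. This is a sketch only; the Gerrit project name (fs/lustre-release) and the patch-set number (/1) are assumptions, so check the change page for the exact refspec:

          git fetch http://review.whamcloud.com/fs/lustre-release refs/changes/11/911/1   # patch set 1 is assumed
          git cherry-pick FETCH_HEAD                                                      # apply the fetched change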
          pjones Peter Jones added a comment -

          Jinshan

          Yes they are using RHEL6. Do you need anything more precise than that?

          Peter


          jay Jinshan Xiong (Inactive) added a comment -

          What kernel are you using? If you're using RHEL6, we can use a lockless radix tree to fix this problem; otherwise, I will try to work out a workaround to mitigate it.

          sebastien.buisson Sebastien Buisson (Inactive) added a comment -

          CEA confirms that processes are waiting for ages on this global spin lock.

          jay Jinshan Xiong (Inactive) added a comment -

          It looks like those processes are busy discarding pages, and they all need to grab a global spin lock to do so. If possible, it would be interesting to verify this by running oprofile.
          pjones Peter Jones added a comment -

          Oleg

          Could you please advise on this one?

          Thanks

          Peter


          People

            Assignee: jay Jinshan Xiong (Inactive)
            Reporter: sebastien.buisson Sebastien Buisson (Inactive)
            Votes: 0
            Watchers: 6
