
LU-6607: MDS (2-node DNE) running out of memory and crash

Details

    • Type: Bug
    • Resolution: Won't Fix
    • Priority: Blocker
    • Fix Version/s: None
    • Affects Version/s: Lustre 2.7.0
    • Severity: 4

    Description

      2 node DNE MDS
      16 OSS
      2K clients

      An MDS node randomly runs out of memory and hangs.
      We watch the MDS drain its memory in a matter of minutes, many times right after recovery from the previous hang.

      Clients are generating a huge number of Lustre errors containing the string "ptlrpc_expire_one_request", from several hundred thousand to several million per node. Here are error counts from some nodes:

      comet-12-31 662616
      comet-10-06 690764
      comet-12-24 720396
      comet-12-25 735659
      comet-12-14 778073
      comet-12-33 840302
      comet-10-10 928322
      comet-12-33 945614
      comet-12-25 992288
      comet-10-15 1131711
      comet-12-25 1147043
      comet-10-07 1160876
      comet-12-30 1180270
      comet-10-03 1387072
      comet-10-02 2515764
      comet-10-02 3371128

      I am attaching logs from both client and server for one such incident.
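Per-node tallies like those above can be produced with a grep one-liner; the log path here is an assumption (client syslog location varies per site):

```shell
# Count "ptlrpc_expire_one_request" errors in a client syslog.
# LOG defaults to /var/log/messages -- an assumed path; adjust per site.
LOG="${LOG:-/var/log/messages}"
count=$(grep -c 'ptlrpc_expire_one_request' "$LOG" 2>/dev/null || true)
echo "${HOSTNAME:-$(hostname)} ${count:-0}"
```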

      Attachments

        1. clients_log.gz
          622 kB
        2. dmesg_mds.gz
          21 kB
        3. dmesg.out
          396 kB
        4. lustre-log.tgz
          9.35 MB
        5. messages-19-6.gz
          92 kB
        6. slabinfo.txt
          27 kB

        Activity

          [LU-6607] MDS (2-node DNE) running out of memory and crash

          Ah, it is. You can use that build. Thanks.

          di.wang Di Wang (Inactive) added a comment

          Hi Wang Di,

          I understand LU-6584 is a different problem: it affects the OSS, not the MDS memory issue here.

          What I said earlier was that, to work on the LU-6584 problem, we have to apply a patch soon, because they are the same
          file system. That patch is built from http://review.whamcloud.com/#/c/14926/

          Was that the 2.7.58 equivalent?

          Haisong

          haisong Haisong Cai (Inactive) added a comment

          Hmm, I think LU-6584 is a different issue. This ticket is about MDS OOM during failover? Do you happen to know any easy way to reproduce this problem?
          Btw: is it possible for you to add "log_buf_len=10M" to your boot command? The dmesg you posted here only has half a stack trace. Thanks.

          di.wang Di Wang (Inactive) added a comment
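On CentOS 6 with legacy GRUB, the suggested parameter is appended to the kernel line in grub.conf. A sketch of the change, assuming the ELRepo kernel version quoted later in this ticket (the root device shown is a placeholder):

```
# /boot/grub/grub.conf -- append log_buf_len=10M to the kernel line
title CentOS (3.10.73-1.el6.elrepo.x86_64)
        root (hd0,0)
        kernel /vmlinuz-3.10.73-1.el6.elrepo.x86_64 ro root=/dev/sda1 log_buf_len=10M
        initrd /initramfs-3.10.73-1.el6.elrepo.x86_64.img
```

The larger ring buffer takes effect on the next boot, so full stack traces survive in dmesg instead of being truncated.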

          LU-6584 is about an OSS crashing problem. The OSS servers are part of the same deployment as these MDS servers; they form one file system.

          We are about to apply a new patch related to LU-6584. It is built from http://review.whamcloud.com/#/c/14926/

          Will it satisfy your recommendation?

          Haisong

          haisong Haisong Cai (Inactive) added a comment

          Is it possible for you to upgrade the MDS to 2.7.58? There have been quite a few fixes in this area since 2.7.51.

          Btw: we are currently testing ZFS on DNE in LU-7009; please follow there.

          di.wang Di Wang (Inactive) added a comment

          On one of the 2 MDS servers:

          [root@panda-mds-19-6 panda-mds-19-6]# sysctl -a | grep slab
          kernel.spl.kmem.slab_kmem_alloc = 92736
          kernel.spl.kmem.slab_kmem_max = 92736
          kernel.spl.kmem.slab_kmem_total = 172032
          kernel.spl.kmem.slab_vmem_alloc = 407675904
          kernel.spl.kmem.slab_vmem_max = 490480640
          kernel.spl.kmem.slab_vmem_total = 485459072
          vm.min_slab_ratio = 5

          haisong Haisong Cai (Inactive) added a comment

          Hi WangDi,

          We are running CentOS 6.6 with Linux kernel 3.10.73 from ELRepo.
          Lustre and ZFS are built as DKMS modules.

          The file system has 16 OSSs, and each has 6 OSTs.

          Haisong

          haisong Haisong Cai (Inactive) added a comment

          Ah, it is a ZFS environment (ZFS + DNE)? A few questions here:

          1. I saw this in your MDS console messages (dmesg_mds.gz); the kernel version is definitely not EL6. EL7? But we do not support EL7 servers on the MDS yet. Could you please confirm which kernel you used on the MDS?

          Linux version 3.10.73-1.el6.elrepo.x86_64 (mockbuild@Build64R6) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) ) #1 SMP Thu Mar 26 16:28:30 EDT 2015

          2. In the slab info:

          kmalloc-8192      9033431 9033431   8192    1    2 : tunables    8    4    0 : slabdata 9033431 9033431      0

          The 8192-byte slab costs too much memory, about 74 GB (9,033,431 objects x 8 KiB)! That is far too much. Btw: how many OSTs per OSS?

          di.wang Di Wang (Inactive) added a comment
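The slab figure can be sanity-checked directly from the slabinfo line: total bytes is approximately num_objs x objsize, which here works out to about 74 GB (roughly 69 GiB). A minimal sketch of the arithmetic, with the slabinfo line copied from the comment above:

```python
# Compute memory consumed by one slab cache from a /proc/slabinfo line.
# Field layout: name, active_objs, num_objs, objsize, objperslab, pagesperslab, ...
line = ("kmalloc-8192 9033431 9033431 8192 1 2 : "
        "tunables 8 4 0 : slabdata 9033431 9033431 0")

fields = line.split()
name = fields[0]
num_objs = int(fields[2])
objsize = int(fields[3])  # bytes per object

total_bytes = num_objs * objsize  # every allocated slot counts
print(f"{name}: {total_bytes / 2**30:.1f} GiB")  # → kmalloc-8192: 68.9 GiB
```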

          Files collected between two MDS crashes.

          haisong Haisong Cai (Inactive) added a comment

          WangDi,

          We ran into this problem on one of the MDS nodes (mdt0, the master) again today.
          I have collected the information you asked for by issuing the following commands:

          echo t > /proc/sysrq-trigger
          dmesg > /state/partition1/tmp/dmesg.out
          cat /proc/slabinfo > /state/partition1/tmp/slabinfo.txt

          dmesg.out & slabinfo.txt will be uploaded separately.

          Haisong

          haisong Haisong Cai (Inactive) added a comment

          WangDi,

          We had two incidents recently, and both times I failed to collect the needed info.
          One time I simply forgot, and the other time we had no chance since the MDS node was hung.

          Haisong

          haisong Haisong Cai (Inactive) added a comment

          People

            Assignee: laisiyao Lai Siyao
            Reporter: haisong Haisong Cai (Inactive)
            Votes: 1
            Watchers: 6
