Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Critical
    • Fix Version/s: None
    • Affects Version/s: Lustre 2.2.0

    Description

      We recently experienced two MDS crashes on our Lustre installation.

      I've attached the netconsole output of both crashes (that's all I got: there is nothing in the syslog, and I wasn't able to take a screenshot of the console output because the crashed MDS had already been power-cycled by its failover partner).

      Attachments

        1. llog.txt
          47 kB
        2. mds08.txt
          15 kB
        3. mds14.txt
          15 kB
        4. osd_ldiskfs.ko
          4.18 MB

        Activity

          [LU-2323] MDS crash
          bobijam Zhenyu Xu added a comment -

          Yes, even if it's a 1.8.x client problem we should fix it. The purpose of the question is to help narrow down which area to search for the root cause.

          I'm still investigating the llog part of the issue.

          adrian Adrian Ulrich (Inactive) added a comment -

          Well, the problem is that I cannot reproduce the crash, and I have not seen any new crashes since 14 November.

          (The crash was probably caused by a user job: there are roughly 800 users on our cluster and I have no way to figure out which job triggered it.)

          But in any case: even if the crash was triggered by a 1.8.x client, it should get fixed, shouldn't it?

          And is there any news about the llog_write_rec error? (Did the debugfs output help?)

          bobijam Zhenyu Xu added a comment -

          Adrian,

          Have you had an opportunity to try excluding the 1.8.x clients, to check whether the MDS still crashes with only 2.x clients accessing it?

          adrian Adrian Ulrich (Inactive) added a comment -

          I did this using a snapshot of the MDS (taken on 5 November).
          The output of llog_reader is attached to the case (llog.txt).

          Output of CONFIGS/ via debugfs:

          $ debugfs mds.dump
          debugfs 1.41.12 (17-May-2010)
          debugfs: ls -l CONFIGS
          467550721 40777 (2) 0 0 4096 9-Oct-2012 07:05 .
          2 40755 (2) 0 0 4096 3-May-2012 14:58 ..
          467550722 100644 (1) 0 0 12288 9-May-2012 09:21 mountdata
          467550723 100644 (1) 0 0 0 3-May-2012 14:58 _mgs-sptlrpc
          467550724 100644 (1) 0 0 89128 3-May-2012 14:58 nero-client
          467550725 100644 (1) 0 0 0 3-May-2012 14:58 nero-sptlrpc
          467550726 100644 (1) 0 0 89000 3-May-2012 14:58 nero-MDT0000
          467550727 100644 (1) 0 0 0 3-May-2012 14:58 changelog_catalog
          467550728 100644 (1) 0 0 0 3-May-2012 14:58 changelog_users
          467550730 100644 (1) 0 0 9432 3-May-2012 15:05 nero-OST0000
          467550729 100644 (1) 0 0 0 9-Oct-2012 07:05 sptlrpc
          467550731 100644 (1) 0 0 9432 3-May-2012 15:08 nero-OST0008
          467550732 100644 (1) 0 0 9432 3-May-2012 15:09 nero-OST0010
          467550733 100644 (1) 0 0 9432 3-May-2012 15:09 nero-OST0018
          467550734 100644 (1) 0 0 9432 3-May-2012 15:58 nero-OST0001
          467550735 100644 (1) 0 0 9432 3-May-2012 15:58 nero-OST0009
          467550736 100644 (1) 0 0 9432 3-May-2012 15:59 nero-OST0011
          467550737 100644 (1) 0 0 9432 3-May-2012 15:59 nero-OST0019
          467550738 100644 (1) 0 0 9432 3-May-2012 16:00 nero-OST0002
          467550739 100644 (1) 0 0 9432 3-May-2012 16:01 nero-OST000a
          467550740 100644 (1) 0 0 9432 3-May-2012 16:04 nero-OST0012
          467550741 100644 (1) 0 0 9432 3-May-2012 16:05 nero-OST001a
          467550742 100644 (1) 0 0 9432 3-May-2012 16:06 nero-OST0003
          467550743 100644 (1) 0 0 9432 3-May-2012 16:06 nero-OST000b
          467550744 100644 (1) 0 0 9432 3-May-2012 16:06 nero-OST0013
          467550745 100644 (1) 0 0 9432 3-May-2012 16:07 nero-OST001b
          467550746 100644 (1) 0 0 9432 3-May-2012 16:11 nero-OST0004
          467550747 100644 (1) 0 0 9432 3-May-2012 16:11 nero-OST000c
          467550748 100644 (1) 0 0 9432 3-May-2012 16:12 nero-OST0014
          467550749 100644 (1) 0 0 9432 3-May-2012 16:12 nero-OST001c
          467550750 100644 (1) 0 0 9432 3-May-2012 16:14 nero-OST0005
          467550751 100644 (1) 0 0 9432 3-May-2012 16:14 nero-OST000d
          467550752 100644 (1) 0 0 9432 3-May-2012 16:14 nero-OST0015
          467550753 100644 (1) 0 0 9432 3-May-2012 16:15 nero-OST001d
          467550754 100644 (1) 0 0 9432 3-May-2012 16:18 nero-OST0006
          467550755 100644 (1) 0 0 9432 3-May-2012 16:18 nero-OST000e
          467550756 100644 (1) 0 0 9432 3-May-2012 16:18 nero-OST0016
          467550757 100644 (1) 0 0 9432 3-May-2012 16:18 nero-OST001e
          467550758 100644 (1) 0 0 9432 3-May-2012 16:21 nero-OST0007
          467550759 100644 (1) 0 0 9432 3-May-2012 16:21 nero-OST000f
          467550760 100644 (1) 0 0 9432 3-May-2012 16:22 nero-OST0017
          467550761 100644 (1) 0 0 9432 3-May-2012 16:22 nero-OST001f

          ethz.support ETHz Support (Inactive) added a comment -

          Adrian,
          if I remember correctly you can use debugfs with the device mounted. Try:

          debugfs -c -R 'dump CONFIGS/ /tmp/config' /dev/<device>
          llog_reader /tmp/config

          bzzz Alex Zhuravlev added a comment -

          No, I've got what I need, thanks.

          A bit of analysis here:

          0000000000004d26 <osd_trans_stop+0x56> mov 0x50(%rbx),%r12
          0000000000004d2a <osd_trans_stop+0x5a> test %r12,%r12
          0000000000004d2d <osd_trans_stop+0x5d> je 0000000000004e82 <osd_trans_stop+0x1b2>
          0000000000004d33 <osd_trans_stop+0x63> movzbl 0x28(%r12),%eax
          0000000000004d39 <osd_trans_stop+0x69> movzbl 0x4c(%rbx),%edx
          0000000000004d3d <osd_trans_stop+0x6d> and $0xfffffffe,%eax
          0000000000004d40 <osd_trans_stop+0x70> and $0x1,%edx
          0000000000004d43 <osd_trans_stop+0x73> or %edx,%eax
          0000000000004d45 <osd_trans_stop+0x75> mov %al,0x28(%r12)
          0000000000004d4a <osd_trans_stop+0x7a> mov (%r12),%rax

          So %rbx contains a pointer to oh (the struct osd_thandle):

          (gdb) p/x sizeof(struct thandle)
          $2 = 0x50
          struct osd_thandle {
                  struct thandle ot_super;
                  handle_t      *ot_handle;
                  /* ... */

          0000000000004db3 <osd_trans_stop+0xe3> mov (%rbx),%rax
          0000000000004db6 <osd_trans_stop+0xe6> test %rax,%rax
          0000000000004db9 <osd_trans_stop+0xe9> je 0000000000004dc4 <osd_trans_stop+0xf4>
          0000000000004dbb <osd_trans_stop+0xeb> mov 0x8(%rax),%rax
          0000000000004dbf <osd_trans_stop+0xef> testb $0x1,(%rax)

          These lines implement:
          if (lu_device_is_md(&th->th_dev->dd_lu_dev)) {

          RAX: 0006000100000002 is supposed to be ld_type (and 0x8(%rax) is ld_type->ldt_tags)

          In other words, the thandle was broken and pointing to garbage instead of a device.

          Now the question is what broke that...
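
          For readers following along, here is a minimal C sketch of the structures this disassembly walks. The layouts are chosen to match the offsets above and the field names follow the Lustre source, but treat it as an illustration rather than the actual Lustre definitions:

          /* Sketch only: layouts chosen to reproduce the disassembly offsets. */
          typedef struct journal_handle handle_t;          /* opaque here */

          struct lu_device_type {
                  unsigned int ldt_tags;         /* tag bits: testb $0x1,(%rax) */
          };

          struct lu_device {
                  int                    ld_ref;     /* bytes 0..7 incl. padding */
                  struct lu_device_type *ld_type;    /* offset 0x8 */
          };

          struct dt_device {
                  struct lu_device dd_lu_dev;        /* embedded at offset 0 */
          };

          struct thandle {
                  struct dt_device *th_dev;          /* offset 0x0: mov (%rbx),%rax */
                  char th_pad[0x48];                 /* remaining fields elided so
                                                      * that sizeof == 0x50, per
                                                      * the gdb output above */
          };

          struct osd_thandle {
                  struct thandle ot_super;           /* embedded at offset 0 */
                  handle_t      *ot_handle;          /* offset 0x50:
                                                      * mov 0x50(%rbx),%r12 */
          };

          /* lu_device_is_md() reduces to a tag test along these lines; with
           * th_dev pointing at garbage, the ld_type load yields the bogus
           * 0006000100000002 and the ldt_tags dereference oopses. */
          static inline int lu_device_is_md(const struct lu_device *d)
          {
                  return d && (d->ld_type->ldt_tags & 0x1 /* LU_DEVICE_MD */);
          }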

          adrian Adrian Ulrich (Inactive) added a comment -

          Alex: I attached the requested kernel module (or did you mean by 'attach' that I should insmod it?)

          adrian Adrian Ulrich (Inactive) added a comment -

          md5sum = 6a07cbbb49f63ea0f6a5e6bc067bc7c9

          requested kernel module

          bzzz Alex Zhuravlev added a comment -

          OK, good to know. llog_write_rec() getting -28 (-ENOSPC, "no space left on device") means the MDS was not able to write (transactionally) a record needed to make some update on an OST (remove an OST object, update OST object attributes).

          Then could you attach osd_ldiskfs.ko please?
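
          For reference, the -28 above is the standard Linux errno value for ENOSPC; a trivial, self-contained check (illustrative only):

          #include <errno.h>
          #include <stdio.h>
          #include <string.h>

          /* Kernel functions return negated errno values, so the logged
           * rc = -28 is -ENOSPC: "No space left on device". */
          int main(void)
          {
                  printf("-%d => %s\n", ENOSPC, strerror(ENOSPC));
                  return 0;
          }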

          adrian Adrian Ulrich (Inactive) added a comment -

          Yong Fan:

          • We need the 1.8.x client to copy data from our old Lustre 1.8 installation to the 2.2 installation.
            I will temporarily disable it after the next crash (there are still users moving data around).
          • We didn't do any upgrades recently: we haven't touched the servers in months.
          • We have 2.3 clients because the 2.2 client is too unstable for us and I don't have enough time to backport all the crash fixes from 2.3 to our 2.2 client.

          Alex: We have been getting these llog_write_rec errors for months (since June 2012), while our MDS only started to crash recently (also note that there are ~5 hours between the error and the crash).

          What does the llog_write_rec error mean, anyway?

          I'll post the output of 'ls CONFIGS/' as soon as I have a chance to remount the volume via ldiskfs (= on the next crash, or on Monday evening).

          Adrian, if you cannot abandon the 1.8.4 clients entirely, then please unmount them temporarily and try to reproduce the failure with the other clients. If you cannot reproduce it, then it is quite possibly related to interoperability issues.

          People

            Assignee: niu Niu Yawei (Inactive)
            Reporter: ethz.support ETHz Support (Inactive)
            Votes: 0
            Watchers: 10
