Lustre / LU-5585

MDS became unresponsive, clients hanging until MDS fail over

Details

    • Type: Bug
    • Resolution: Cannot Reproduce
    • Priority: Minor
    • None
    • Affects Version/s: Lustre 2.5.2
    • None
    • 3
    • 15580

    Description

      This morning some of our clients were hanging (others had not been checked at that time), the active MDS was unresponsive and flooding the console with stack traces. We had to fail over to the second MDS to get the file system back.

      Looking at the system logs, we see a large number of these messages:
      kernel: socknal_sd00_02: page allocation failure. order:2, mode:0x20
      all followed by many stack traces; full log attached. Our monitoring shows that the memory was mainly used by buffers, but this had been the case for all of last week already and was stable, only slowly increasing. After the restart, the memory used by buffers quickly increased to about 60% and currently seems to be stable around there.

      Just before these page allocation failure messages we noticed a few client reconnect messages, but have not been able to find any network problems so far. Since the restart of the MDT, no unexpected client reconnects have been seen.

      We are running lustre 2.5.2 + 4 patches as recommended in LU-5529 and LU-5514.

      We've been hammering the MDS a bit since the upgrade: creating files, stat'ing many files and directories from many clients, and removing many files. Still, I would expect the MDS not to fall over like this.

      Is this a problem/memory leak in Lustre, or something else? Could it be related to different compile options when compiling Lustre? We did compile the version on the MDS in house with these patches, and there is always a chance we didn't use quite the same compile-time options that the automatic build process would use...

      What can we do to debug this further and avoid it in the future?

      Attachments

        Issue Links

          Activity


            Depends on your view, we've got just under 300 clients on this file system.

            We'll try limiting the lru_size and will continue to monitor, looking at LU-5727, I'm not sure how much this will give us.

            Considering that we have been cleaning the file system, it is also entirely possible that we hit something similar to LU-5726, i.e. we almost certainly have run 'rm -rf' or similar in parallel on multiple clients. I will try to reproduce this tomorrow.
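
            A reproduction attempt along those lines might be sketched as below (hypothetical: the 'clients' hostgroup, paths, and file counts are made up; pdsh is assumed to be available):

            ```shell
            # Hypothetical sketch: recreate a parallel-removal workload from
            # several clients at once. Hostgroup, paths, and counts are made up.
            pdsh -g clients 'mkdir -p /lustre/scratch/stress/$(hostname) &&
              for i in $(seq 1 100000); do
                touch /lustre/scratch/stress/$(hostname)/f$i
              done'
            # Then remove everything in parallel from all clients:
            pdsh -g clients 'rm -rf /lustre/scratch/stress/$(hostname)'
            ```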

            ferner Frederik Ferner (Inactive) added a comment -
            pjones Peter Jones added a comment -

            Bobijam

            Could this be related to the issue reported in LU-5727?

            Peter

            green Oleg Drokin added a comment -

            Do you have many clients on this system?

            It's been a known problem in the past that if you let client lock LRUs grow uncontrollably, servers become somewhat memory starved.

            One possible workaround is to set lru_size on the clients to something conservative like 100 or 200. Also, if you have mostly non-intersecting jobs on the clients that don't reuse the same files between different jobs, some sites forcefully drop lock LRUs (and other caches) between job runs.
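
            A minimal sketch of that workaround, run on each client (the namespace glob is illustrative and may need adjusting for your setup):

            ```shell
            # Cap the DLM lock LRU at a conservative fixed size on this client.
            lctl set_param ldlm.namespaces.*.lru_size=200

            # Between unrelated job runs that don't share files, forcefully
            # cancel cached DLM locks and drop VM caches:
            lctl set_param ldlm.namespaces.*.lru_size=clear
            echo 3 > /proc/sys/vm/drop_caches   # pagecache + dentries/inodes
            ```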


            Unfortunately this started to severely affect file system performance so we had to fail over. I was nearly in time to do a clean unmount but not quite. By the time I started typing the umount command, the MDS froze completely and I was not able to collect any debug_log.

            Since this is now a recurring feature of this file system, any idea how we could prevent it from re-occurring would be much appreciated. If there is anything we can do to help debug this, let us know; we'll do what we can.

            Frederik

            ferner Frederik Ferner (Inactive) added a comment -

            I've done that, unfortunately it didn't seem to free up much memory.

            During the initial sweep of lctl get_param ldlm.namespaces.*MDT*.lru_size for this file system, adding up the numbers for all reachable clients (a few are currently unresponsive and are being looked at; we assume this is unrelated), we seem to have about 5.3M locks on clients (corresponding to the most recent slabinfo snapshot of 5.4M ldlm_locks).

            After the lru_size=clear, both numbers dropped; now, about 20 minutes later, they are back at about 1.5M each.

            Fresh meminfo/slabinfo from about 20 minutes after clearing the locks are attached.

            ferner Frederik Ferner (Inactive) added a comment -

            What is a bit odd here is that there are 5.5M in-use ldlm_locks on 2.9M ldlm_resources, yet there are only 190K inodes in memory (166K objects). This implies there is something kind of strange happening in the DLM, since there should only be a single resource per MDT object. There should be at least one ldlm_resource for each ldlm_lock, though having more locks than resources is OK as multiple clients may lock the same resource, or a single client may lock different parts of the same resource.

            One experiment you might do is to run lctl get_param ldlm.namespaces.*MDT*.lru_size to get the count of locks held by all the clients, and then lctl set_param ldlm.namespaces.*MDT*.lru_size=clear on the clients to drop all their DLM locks. The set_param will cancel the corresponding locks on the server and flush the client metadata cache as a result, which may have a short-term negative impact on metadata performance, in case that is unacceptable.

            The cancellation of locks on the clients should result in all of the ldlm_locks structures being freed on the MDS (or at least the sum of the locks on the clients should match the number of ACTIVE ldlm_locks allocated on the MDS). If that isn't the case, it seems we have some kind of leak in the DLM.
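
            As a rough helper for that comparison, a sketch that totals the client-side lock counts from get_param output (the sample namespace names are made up; real ones depend on the filesystem name and MDT index):

            ```python
            # Sketch: total client-held DLM locks from `lctl get_param` output
            # lines, to compare against ACTIVE ldlm_locks in the MDS slabinfo.
            def total_client_locks(param_lines):
                """Sum lru_size values from 'name=value' lines from get_param."""
                total = 0
                for line in param_lines:
                    if "lru_size=" in line:
                        total += int(line.rsplit("=", 1)[1])
                return total

            # Illustrative sample (hypothetical namespace names):
            sample = [
                "ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a8cc400.lru_size=1800",
                "ldlm.namespaces.lustre-MDT0000-mdc-ffff88012b000000.lru_size=3500",
            ]
            print(total_client_locks(sample))  # 5300
            ```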

            adilger Andreas Dilger added a comment -

            This might have come back, this time no LVM snapshot involved.

            Over the last few days buffer memory usage has steadily gone up again, without going down as it used to; currently about 88% of the memory appears to be buffers, and it has been there for over a day now. So far clients don't appear to be affected.

            Most recent /proc/meminfo and /proc/slabinfo are attached. Below are the first few lines of slabtop output as well.

            [bnh65367@cs04r-sc-mds03-01 ~]$ slabtop -o | head -20
             Active / Total Objects (% used)    : 40400048 / 44245203 (91.3%)
             Active / Total Slabs (% used)      : 2464035 / 2464417 (100.0%)
             Active / Total Caches (% used)     : 137 / 253 (54.2%)
             Active / Total Size (% used)       : 8144191.17K / 9226241.88K (88.3%)
             Minimum / Average / Maximum Object : 0.02K / 0.21K / 4096.00K
            
              OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME                   
            29618648 28952513  97%    0.10K 800504       37   3202016K buffer_head            
            6944364 5580655  80%    0.50K 992052        7   3968208K ldlm_locks             
            3200268 2900338  90%    0.31K 266689       12   1066756K ldlm_resources         
            1036350 444641  42%    0.12K  34545       30    138180K size-128               
            904113 851578  94%    0.55K 129159        7    516636K radix_tree_node        
            574240 331937  57%    0.19K  28712       20    114848K size-192               
            360192 166594  46%    0.08K   7504       48     30016K mdd_obj                
            311202 166140  53%    0.11K   9153       34     36612K lod_obj                
            282840 165746  58%    0.25K  18856       15     75424K mdt_obj                
            260792 257126  98%    0.50K  32599        8    130396K size-512               
            202257 190027  93%    1.02K  67419        3    269676K ldiskfs_inode_cache    
            175761  83015  47%    0.06K   2979       59     11916K size-64                
             78288  62865  80%    0.03K    699      112      2796K size-32                
            [bnh65367@cs04r-sc-mds03-01 ~]$ 
            
            ferner Frederik Ferner (Inactive) added a comment -
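
            For readers triaging similar output, a small sketch that picks out the largest slab caches from slabtop-style lines (the field layout follows the `slabtop -o` output above; adjust the indices if your kernel's output differs):

            ```python
            # Sketch: parse slabtop-style data lines (cache NAME in the last
            # column, CACHE SIZE like "3202016K" in the one before) and report
            # the largest caches by total size.
            def top_caches(lines, n=3):
                """Return [(name, size_kib)] for the n largest slab caches."""
                caches = []
                for line in lines:
                    parts = line.split()
                    # Expect: OBJS ACTIVE USE OBJSIZE SLABS OBJ/SLAB SIZE NAME
                    if len(parts) == 8 and parts[6].endswith("K"):
                        caches.append((parts[7], int(parts[6].rstrip("K"))))
                caches.sort(key=lambda c: c[1], reverse=True)
                return caches[:n]

            # Sample lines taken from the slabtop output in this ticket:
            sample = [
                "29618648 28952513  97%    0.10K 800504  37   3202016K buffer_head",
                "6944364 5580655  80%    0.50K 992052    7   3968208K ldlm_locks",
                "3200268 2900338  90%    0.31K 266689   12   1066756K ldlm_resources",
            ]
            print(top_caches(sample))  # ldlm_locks is the largest cache here
            ```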

            Ah, I think I may now have fixed this immediate problem.

            We created an LVM snapshot just before extending the file system earlier this week. We had kept this snapshot around and wanted to keep it a little longer while we performed tests on the extended file system. However, while searching for information on the kcopyd process, I came across a post to dm-devel about the performance impact of kcopyd. Even though the post was from May 2007, we decided to remove the snapshot. Load average immediately started to drop and is now down to around 8; client metadata performance has also recovered, nicely noticeable as a jump in file open rates on the MDT at the time of disabling the snapshot...

            I guess the lesson is that snapshots can still have a very large performance impact.

            ferner Frederik Ferner (Inactive) added a comment -

            Two dump files collected while I suspect the problem was ongoing; manual_dump.txt was collected while I was experiencing the problem.

            ferner Frederik Ferner (Inactive) added a comment -

            MDT log from the time after failing over the MDT to this machine until now.

            ferner Frederik Ferner (Inactive) added a comment -

            People

              Assignee: bobijam Zhenyu Xu
              Reporter: ferner Frederik Ferner (Inactive)
              Votes:
              0
              Watchers:
              7

              Dates

                Created:
                Updated:
                Resolved: