[LU-7836] MDSes crashed with oom-killer Created: 02/Mar/16  Updated: 05/Aug/16  Resolved: 05/Aug/16

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.8.0
Fix Version/s: Lustre 2.9.0

Type: Bug Priority: Blocker
Reporter: Frank Heckes (Inactive) Assignee: Di Wang
Resolution: Fixed Votes: 0
Labels: soak
Environment:

lola
build: https://build.hpdd.intel.com/job/lustre-b2_8/11/


Attachments: File console-lola-10.bz2     File console-lola-11.bz2     File console-lola-11.log-20160608.bz2     File console-lola-8.bz2     File console-lola-8.log-20160318.bz2     File dmesg-lola-11-20160609-0811.bz2     File lola-10-lustre-log.20160302-0008.bz2     File lola-10-memory-counter-2214-0130.dat.bz2     File lola-10-slab-detailed-2214-0130.dat.bz2     File lola-11-lustre-log.20160302-0008.bz2     Text File lola-11-lustre-log.20160608-0656.bz2     File lola-11-mem-counter-20160311_1729-1910.dat.bz2     Text File lola-11-memory-debug-messages.20160311-183459.bz2     File lola-8-lustre-log-20160318-0134.bz2     File lola-8-lustre-log-20160318-0145.bz2     File lola-8-lustre-log-20160318-0147.bz2     File lola-8-lustre-log-20160318-0150.bz2     File lola-8-lustre-log-20160318-0153.bz2     File lola-8-lustre-log-20160318-0156.bz2     File lola-8-lustre-log-20160318-0200.bz2     File lola-8-lustre-log-20160318-0204.bz2     File lola-8-lustre-log.20160302-0155.bz2     File lola-8-slab-detailed-counter-20160318.tar.bz2     Text File lustre-log.1456893718.5074.bz2     File messages-lola-10.bz2     File messages-lola-11.bz2     File messages-lola-11.log-20160608.bz2     File messages-lola-8.bz2     File messages-lola-8.log-20160318.bz2     HTML File recovery-status-20160302     File slab-details-2041-2208-one-file-per-slab.tar.bz2     File slab-details-2214-0130-one-file-per-slab.tar.bz2     File slab-sorted-alloaction-2041-2208.dat.bz2     File slab-sorted-alloaction-2214-0130.dat.bz2    
Issue Links:
Related
is related to LU-7780 MDS crashed with oom-killer Resolved
is related to LU-8070 sanity-scrub test_5 oom-killer and ti... Resolved
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

Error occurred during soak testing of build '20160302' (b2_8 RC4) (see also: https://wiki.hpdd.intel.com/display/Releases/Soak+Testing+on+Lola#SoakTestingonLola-20160302). DNE is enabled. MDTs had been formatted using ldiskfs, OSTs using zfs. MDS nodes are configured in an active-active HA failover configuration. (For the test set-up configuration see https://wiki.hpdd.intel.com/display/Releases/Soak+Testing+on+Lola#SoakTestingonLola-Configuration)

Note: This might be a duplicate of LU-7780

Sequence of events:

  • 2016-03-01 20:38:40 triggering fault mds_failover (lola-10 --> lola-11)
  • 2016-03-01 20:41:58 LNet service thread hang on lola-8
    LNet: Service thread pid 5074 was inactive for 200.00s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes to /tmp/lustre-log.1456893718.5074
    

    (see attached file lustre-log.1456893718.5074)

  • Lustre recovery never completed (until the next failover)
  • All SLURM jobs stalled and began to time out after ~20:40.
    Slabs continuously allocated memory. The top 10 consumers before the next
    failover were (see the /proc/slabinfo sketch after the description):
    slab-details/size-1048576.dat:20160301 22:09:20 size-1048576 13215 13856931840 13215 13856931840 13215 13856931840 13215 13856931840 67108864 0
    slab-details/size-262144.dat:20160301 22:09:20 size-262144 385 100925440 385 100925440 385 100925440 385 100925440 0 0
    slab-details/size-192.dat:20160301 22:09:20 size-192 268155 51485760 268320 51517440 13416 54951936 13416 54951936 24576 0
    slab-details/size-8192.dat:20160301 22:09:20 size-8192 5508 45121536 5508 45121536 5508 45121536 5508 45121536 0 0
    slab-details/size-1024.dat:20160301 22:09:20 size-1024 41857 42861568 41864 42868736 10466 42868736 10466 42868736 110592 0
    slab-details/ptlrpc_cache.dat:20160301 22:09:20 ptlrpc_cache 35590 27333120 35605 27344640 7121 29167616 7121 29167616 122880 0
    slab-details/size-65536.dat:20160301 22:09:20 size-65536 361 23658496 361 23658496 361 23658496 361 23658496 0 0
    slab-details/size-512.dat:20160301 22:09:20 size-512 37639 19271168 38920 19927040 4865 19927040 4865 19927040 45056 0
    slab-details/kmem_cache.dat:20160301 22:09:20 kmem_cache 289 9506944 289 9506944 289 18939904 289 18939904 0 0
    slab-details/size-4096.dat:20160301 22:09:20 size-4096 2970 12165120 2971 12169216 2970 12165120 2971 12169216 4096 0
    slab-details/inode_cache.dat:20160301 22:09:20 inode_cache 15612 9242304 15678 9281376 2613 10702848 2613 10702848 0 0
    
  • 2016-03-01 22:09:46,199:fsmgmt.fsmgmt:INFO triggering fault mds_failover (lola-10 --> lola-11)
  • 2016-03-01 22:16:19,483:fsmgmt.fsmgmt:INFO mds_failover just completed
  • Recovery of MDTs (mdt-4,5) never completed
  • 2016-03-02 00:15-30 Created stack traces and debug log files on lola-[8,10,11]
  • 2016-03-02 01:30:01 approximately at this time the oom-killer started on MDS lola-10.
    Top 10 consumers are:
    slab-details/size-1048576.dat:20160302 01:32:00 size-1048576 29882 31333548032 29882 31333548032 29882 31333548032 29882 31333548032 1048576 0
    slab-details/size-262144.dat:20160302 01:32:00 size-262144 641 168034304 641 168034304 641 168034304 641 168034304 0 0
    slab-details/size-1024.dat:20160302 01:32:00 size-1024 84449 86475776 84484 86511616 21121 86511616 21121 86511616 106496 0
    slab-details/ptlrpc_cache.dat:20160302 01:32:00 ptlrpc_cache 95492 73337856 95510 73351680 19102 78241792 19102 78241792 114688 0
    slab-details/size-192.dat:20160302 01:32:00 size-192 298834 57376128 299260 57457920 14963 61288448 14963 61288448 0 0
    slab-details/size-8192.dat:20160302 01:32:00 size-8192 5862 48021504 5862 48021504 5862 48021504 5862 48021504 -8192 0
    slab-details/size-512.dat:20160302 01:32:00 size-512 77790 39828480 79096 40497152 9879 40464384 9887 40497152 28672 0
    slab-details/size-65536.dat:20160302 01:32:00 size-65536 361 23658496 361 23658496 361 23658496 361 23658496 0 0
    slab-details/kmem_cache.dat:20160302 01:32:00 kmem_cache 289 9506944 289 9506944 289 18939904 289 18939904 0 0
    slab-details/size-128.dat:20160302 01:32:00 size-128 78707 10074496 100920 12917760 3364 13778944 3364 13778944 0 0
    

  • 2016-03-03 03:00 MDS lola-8 also crashed with oom-killer

Attached: messages, console and debug logs of nodes lola-8, 10, 11;
file recovery-status-20160302 showing recovery and process status around 2016-03-02 00:15; memory and detailed slab counters of lola-10.
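The per-slab listings above match the attached slab-detail counters. For a quick cross-check directly on a node, a rough approximation of the same "top consumers" view can be pulled from /proc/slabinfo (a sketch only, not part of the soak tooling; bytes are computed as active objects times object size):

    # top 10 slab caches by bytes currently in use
    awk 'NR > 2 { printf "%-20s %15d\n", $1, $2 * $4 }' /proc/slabinfo | sort -k2 -nr | head -10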



 Comments   
Comment by Frank Heckes (Inactive) [ 02/Mar/16 ]

slab counters of lola-8 can be uploaded on demand.

Comment by Oleg Drokin [ 04/Mar/16 ]

Just as I asked in LU-7780 - we really need the same debug log from Lustre with memory allocation tracing enabled, so that we can see who it is that does the allocations, not just their sizes.

Do you think you can collect something like that?

Comment by Oleg Drokin [ 04/Mar/16 ]

you do this by adding the "malloc" debug mask to the run before the problem starts.
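For reference, a minimal sketch of how that mask is typically added on the MDS and how the buffer is dumped afterwards (standard lctl parameters; the output path is just an example):

    # show the currently enabled debug flags
    lctl get_param debug
    # add memory-allocation tracing on top of the existing mask
    lctl set_param debug=+malloc
    # later, dump (and clear) the in-kernel debug buffer to a file
    lctl dk /tmp/lustre-debug-$(date +%Y%m%d-%H%M).log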

Comment by Frank Heckes (Inactive) [ 07/Mar/16 ]

Debug mask has been extended with '+malloc'

Comment by Frank Heckes (Inactive) [ 07/Mar/16 ]

The error hasn't happened again so far.

Comment by Frank Heckes (Inactive) [ 14/Mar/16 ]

The issue happened again during soak testing of b2_8 RC5 (see https://wiki.hpdd.intel.com/display/Releases/Soak+Testing+on+Lola#SoakTestingonLola-20160309).
Unfortunately the error occurred over the weekend (at Mar 12 01:05:02) on MDS node lola-11.
Therefore the latest debug log written on the affected node lola-11 only contains memory allocation messages from 20160311 18:08:06 till 20160311 18:34:59.

The MDS nodes, especially lola-11 had been restarted (randomly) at the following times:

  • mds_restart : 2016-03-11 17:13:36,111 - 2016-03-11 17:24:02,736 lola-11
  • mds_restart : 2016-03-11 17:29:11,738 - 2016-03-11 17:35:40,652 lola-11
  • mds_restart : 2016-03-11 19:10:02,531 - 2016-03-11 19:17:28,133 lola-11
  • mds_restart : 2016-03-11 19:28:55,134 - 2016-03-11 19:39:27,661 lola-11
  • mds_restart : 2016-03-11 19:45:13,663 - 2016-03-11 19:58:16,686 lola-9
  • mds_restart : 2016-03-11 20:24:31,687 - 2016-03-11 20:32:39,547 lola-11

The oom-killer on lola-11 ran at 2016-03-12 01:05, after the last restart of lola-11 finished at 2016-03-11 20:32:39 (line 6 in the list above).
The memory debug information is from the restarts between 17:35 and 19:10 (lines 2 and 3).
I attached the extracted allocation messages, the original debug file and the collectl memory counters to the
ticket. I hope they contain a pointer to the code section causing the memory leak, as the slab memory counters show a continuously increasing amount of allocated memory. In any case, the error might not have been triggered before the restart at 20:24 (line 6).
Please let me know if any other log files are needed.
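For what it's worth, a rough sketch of how the extracted allocation messages could be boiled down to the busiest allocation sites (file names are placeholders, and the 'kmalloced' keyword and line layout are assumptions about the D_MALLOC message format; adjust as needed):

    # decode the binary debug dump into readable text
    lctl debug_file lustre-log.bin /tmp/mem-debug.txt
    # count allocation records per source file:line and list the most frequent sites
    grep kmalloced /tmp/mem-debug.txt | grep -o '[a-z_0-9]*\.c:[0-9]*' | sort | uniq -c | sort -rn | head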

Comment by Frank Heckes (Inactive) [ 14/Mar/16 ]

Uploaded files are

  • lola-11-lustre-loglog.20160311-183459.bz2 - full debug log
  • lola-11-mem-counter-20160311_1729-1910.dat.bz2 - collectl memory counters
  • lola-11-memory-debug-messages.20160311-183459.bz2 - memory allocation debug log
Comment by Frank Heckes (Inactive) [ 14/Mar/16 ]

The upload of file lola-11-lustre-loglog.20160311-183459.bz2 stalled every time about halfway to completion.

The effect of a continuously increasing amount of slabs occurred immediately after the clean-up (remount of the MDTs on lola-10) and the restart
of soak, during the first MDS restart of lola-10.
I uploaded the debug log files to directory lhn:/scratch/lu-7836

  • Until 2016-03-14 07:25 slab allocation continued to grow to ~9 GB
  • Recovery completed at:
     lola-10.log:Mar 14 07:24:35 lola-10 kernel: Lustre: soaked-MDT0005: Recovery over after 66:57, of 16 clients 16 recovered and 0 were evicted.
    lola-10.log:Mar 14 07:25:38 lola-10 kernel: Lustre: soaked-MDT0004: Recovery over after 67:00, of 16 clients 15 recovered and 1 was evicted.
    
  • After that, slab memory allocation remained constant at ~9 GB
  • During 06:50 - 07:27 the following debug logs were created and saved to lhn:/scratch/lu-7638
    lustre-log.20160314-0650, lustre-log.20160314-0656, lustre-log.20160314-0714, lustre-log.20160314-0727
  • umount / mount of the MDTs --> recovery went to state INACTIVE (see the recovery_status sketch after this list)
  • Reboot and mount of the MDTs brought MDS lola-10 back into a state with 'wild' slab allocation, see:
    lustre-log.20160314-0801, lustre-log.20160314-0820
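A minimal sketch of how the recovery state referenced above can be read on the MDS (standard parameter; device names will differ):

    # per-MDT recovery state: RECOVERING / COMPLETE / INACTIVE, plus client counts and time remaining
    lctl get_param mdt.*.recovery_status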
Comment by Peter Jones [ 14/Mar/16 ]

Di

Could you please look into this?

Thanks

Peter

Comment by Di Wang [ 15/Mar/16 ]

After some investigation, it looks like the MDT is blocked on update recovery and then queues too many final ping requests there. I will try to make a patch.

Comment by Gerrit Updater [ 15/Mar/16 ]

wangdi (di.wang@intel.com) uploaded a new patch: http://review.whamcloud.com/18915
Subject: LU-7836 ptlrpc: No new signal complete request
Project: fs/lustre-release
Branch: b2_8
Current Patch Set: 1
Commit: c80d98b7c1d741e239656be30b80ce723f3aaab5

Comment by Peter Jones [ 16/Mar/16 ]

Moving to 2.9 because it seems that this issue only occurs with multiple MDTs per MDS and does not happen with the more common configuration of a single MDT per MDS. Is this a duplicate of LU-7780?

Comment by Frank Heckes (Inactive) [ 17/Mar/16 ]

Soak has continued executing the b2_8 RC5 build with a reformatted Lustre FS.
Now there is only 1 MDT per MDS and 5 OSTs per OSS (unchanged). The MDT had
been formatted with ldiskfs and the OSTs using zfs.
The soak session has been running for ~2.5 days without this bug appearing.
As LU-7848 is most likely related, it's worth mentioning here that all recovery
times are below 2 mins now.

Comment by Frank Heckes (Inactive) [ 18/Mar/16 ]

The error happened after the soak test had been executing for approximately 73 hours.

Sequence of events

  • 2016-03-18 00:34:30,557:fsmgmt.fsmgmt:INFO triggering fault mds_restart
  • 2016-03-18 00:40:49,655:fsmgmt.fsmgmt:INFO lola-8 is up!!
  • 2016-03-18 00:42:25,091:fsmgmt.fsmgmt:INFO ... soaked-MDT0000 mounted successfully on lola-8
  • from then on the recovery process stalled and slab allocation increased continuously.
  • 2016-03-18 01:34 - 02:04 created debug logs at ~4 min intervals

The distribution of the biggest consumers is similar to the 2 MDTs per MDS configuration listed above:

   #Date Time SlabName ObjInUse ObjInUseB ObjAll ObjAllB SlabInUse SlabInUseB SlabAll SlabAllB SlabChg SlabPct
size-1048576.dat:20160318 03:27:00 size-1048576 10115 10606346240 10115 10606346240 10115 10606346240 10115 10606346240 0 0
size-262144.dat:20160318 03:27:00 size-262144 449 117702656 449 117702656 449 117702656 449 117702656 0 0
size-8192.dat:20160318 03:27:00 size-8192 4663 38199296 4663 38199296 4663 38199296 4663 38199296 0 0
size-1024.dat:20160318 03:27:00 size-1024 35610 36464640 35628 36483072 8903 36466688 8907 36483072 106496 0
ptlrpc_cache.dat:20160318 03:27:00 ptlrpc_cache 41048 31524864 41080 31549440 8216 33652736 8216 33652736 53248 0
size-65536.dat:20160318 03:27:00 size-65536 360 23592960 360 23592960 360 23592960 360 23592960 0 0
size-512.dat:20160318 03:27:00 size-512 45384 23236608 45472 23281664 5684 23281664 5684 23281664 8192 0
kmem_cache.dat:20160318 03:27:00 kmem_cache 289 9506944 289 9506944 289 18939904 289 18939904 0 0
inode_cache.dat:20160318 03:27:00 inode_cache 15638 9257696 15684 9284928 2614 10706944 2614 10706944 0 0
Acpi-Operand.dat:20160318 03:27:00 Acpi-Operand 133270 9595440 135468 9753696 2556 10469376 2556 10469376 0 0

After the occurrence of the error, debug logs with filter '+malloc +trace' were taken for ~30 minutes at 4-minute intervals. The debug buffer size was increased from the initial 128 MB to 1024 MB.
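For reference, a minimal sketch of the knobs involved, using the values from above (the dump loop and paths are only examples, not the soak tooling):

    # grow the in-kernel debug buffer to 1024 MB
    lctl set_param debug_mb=1024
    # enable allocation and trace debugging on top of the current mask
    lctl set_param debug="+malloc +trace"
    # dump (and clear) the buffer roughly every 4 minutes
    while true; do lctl dk "/tmp/lustre-log.$(date +%Y%m%d-%H%M)"; sleep 240; done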
Uploaded files:

  • messages, console and collectl slab performance counters of lola-8
  • I'll try to attach the kernel debug logs to the ticket, or will upload them to an Intel cluster for further investigation if I face size limits or transfer problems.
Comment by Frank Heckes (Inactive) [ 18/Mar/16 ]

Also, it is impossible to abort the recovery process (see https://jira.hpdd.intel.com/browse/LU-7848?focusedCommentId=146077&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-146077)
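For context, aborting recovery is normally requested per target roughly as follows (device name is only an example); per the linked comment this had no effect here:

    # ask the MDT to abort recovery and evict clients that have not yet recovered
    lctl --device soaked-MDT0000 abort_recovery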

Comment by Frank Heckes (Inactive) [ 18/Mar/16 ]

Debug files have been uploaded. Oleg: I kept them in binary form as I was unsure what should be extracted,
since Andreas requested '+trace' as well.

Comment by Gerrit Updater [ 21/Apr/16 ]

wangdi (di.wang@intel.com) uploaded a new patch: http://review.whamcloud.com/19693
Subject: LU-7836 ptlrpc: remove duplicate final ping req
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: b6f85ba66403a30f135037ae38fb88b35d9b86aa

Comment by Frank Heckes (Inactive) [ 09/May/16 ]

The oom-killer was also active for build '20160427' (see https://wiki.hpdd.intel.com/pages/viewpage.action?title=Soak+Testing+on+Lola&spaceKey=Releases#SoakTestingonLola-20160427). The patch above wasn't applied. Crash dump files are saved in:

lhn.hpdd.intel.com:/var/crashdumps/lu-7836/lola-11/127.0.0.1-2016-05-07-10:48:57
lhn.hpdd.intel.com:/var/crashdumps/lu-7836/lola-11/127.0.0.1-2016-05-07-17:34:05
Comment by Frank Heckes (Inactive) [ 08/Jun/16 ]

The error also occurred while soak testing build '20160601' (see: https://wiki.hpdd.intel.com/display/Releases/Soak+Testing+on+Lola#SoakTestingonLola-20160601), after the recovery process stalled for an MDT that had failed over to the secondary node. Although LU-7848 is part of the build, the recovery process still stalls.
Configuration: DNE enabled; 1 MDT per MDS; nodes lola-10, 11 form an active-active failover cluster.

1st Event:

  • 2016-06-03 11:31:10 - failover resource of lola-10 (MDT-2) --> lola-11
  • 2016-06-03 11:36:37 - ... soaked-MDT0002 mounted successfully on lola-11
  • until 2016-06-04 00:44 - soaked-MDT0002 remained in status 'RECOVERING'
  • 2016-06-04 00:44:52 - lola-11 crashed with oom-killer

2nd Event:

  • 2016-06-07 08:34:06,621 triggering fault mds_failover lola-10 (MDT-2) --> lola-11
  • 2016-06-07 08:38:42 - Mounting soaked-MDT0002 on lola-11
  • since 2016-06-07 08:39:32,155 Wait for recovery to complete
  • memory resources are nearly exhausted:
    [root@lola-11 ~]# date
    Wed Jun  8 07:59:49 PDT 2016
    [root@lola-11 ~]# collectl -sm --verbose
    waiting for 1 second sample...
    
    # MEMORY SUMMARY
    #<-------------------------------Physical Memory--------------------------------------><-----------Swap------------><-------Paging------>
    #   Total    Used    Free    Buff  Cached    Slab  Mapped    Anon  Commit  Locked Inact Total  Used  Free   In  Out Fault MajFt   In  Out
       32006M  30564M   1441M 127144K 676256K  28701M  16196K  69072K 201740K   5008K  509M   15G     0   15G    0    0    28     0    0    8
       32006M  30565M   1441M 127144K 676256K  28701M  16196K  69072K 201740K   5008K  509M   15G     0   15G    0    0    63     0    0    4
       32006M  30565M   1441M 127144K 676256K  28701M  16196K  69072K 201740K   5008K  509M   15G     0   15G    0    0     1     0    0    0
       32006M  30564M   1441M 127144K 676256K  28701M  16196K  69072K 201740K   5008K  509M   15G     0   15G    0    0    17     0    0    0
    

    Attached files:
  • 1st event only: saved crash dump file to lhn.hpdd.intel.com:/var/crashdumps/lu-7836/lola-11/127.0.0.1-2016-06-04-00:44:52

  • 2nd event only: kernel debug log of lola-11
  • Both events: messages, console logs
Comment by Frank Heckes (Inactive) [ 08/Jun/16 ]

I double checked the server node lola-11 and found no HW related errors.

Comment by Frank Heckes (Inactive) [ 20/Jul/16 ]

The error didn't occur during the soak test of build https://build.hpdd.intel.com/job/lustre-master/3406 (see https://wiki.hpdd.intel.com/display/Releases/Soak+Testing+on+Lola#SoakTestingonLola-20160713) in a test session that is ongoing and has already lasted 7 days.

Comment by Gerrit Updater [ 20/Jul/16 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/19693/
Subject: LU-7836 ptlrpc: remove duplicate final ping req
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 1f2bb415543f8801f3b4a7c55e29b715b09f327b

Comment by Joseph Gmitter (Inactive) [ 26/Jul/16 ]

Is this issue resolved with the landing of the above patch?

Comment by Frank Heckes (Inactive) [ 27/Jul/16 ]

A new build (see https://wiki.hpdd.intel.com/display/Releases/Soak+Testing+on+Lola#SoakTestingonLola-20160727) including the fix above was started 1 hour ago. The previous test session for '20160713' ran until yesterday (Jul 26th) without this bug occurring.
If possible I'd like to run build '20160727' until Friday before closing the ticket. Anyway, feel free to close the ticket.

Comment by Joseph Gmitter (Inactive) [ 27/Jul/16 ]

Thanks Frank. We can wait until Friday to validate that the issue is resolved.

Comment by Joseph Gmitter (Inactive) [ 01/Aug/16 ]

Hi Frank,
Are you comfortable that the issue is resolved in the soak run from the end of last week?
Thanks.
Joe

Comment by Peter Jones [ 05/Aug/16 ]

As this fix has landed and is intended to fix this issue, let's mark this as resolved and reopen if it reoccurs.
