Details

    • Type: Bug
    • Resolution: Cannot Reproduce
    • Priority: Blocker
    • Fix Version/s: None
    • Affects Version/s: Lustre 2.12.0
    • Labels: None
    • Environment: CentOS 7.6, servers and clients 2.12.0 + patches
    • Severity: 2

    Description

      I recently commented in several tickets regarding OSS issues. I think this is some kind of deadlock like we had for Oak in LU-12162 or LU-12018. But we run 2.12.0 plus the patch for LU-12018 (quota: do not start a thread under memory pressure), so I don't think this is the same issue. I tried a SysRq-t and I see a high number of threads blocked in ldiskfs. Because the dump took a SUPER long time and we couldn't wait any longer, I tried a crash dump, but it failed. So I have logs and a partial sysrq-t.
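      For reference, a minimal sketch of how such a task dump can be captured and collected on an OSS (run as root; the output file name is just an example):

      # Enable all SysRq functions, dump every task's stack to the kernel log,
      # then save the ring buffer (a very large dump may wrap the buffer).
      echo 1 > /proc/sys/kernel/sysrq
      echo t > /proc/sysrq-trigger
      dmesg -T > /tmp/fir-io2-s2_sysrq_t.log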

      What makes me think of a new deadlock is that the load keeps increasing if the server doesn't crash:

      fir-io2-s2:  19:09:23 up  8:45,  1 user,  load average: 797.66, 750.86, 519.06
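      Since the load average on these kernels is driven largely by threads in uninterruptible sleep, a quick sanity check of that interpretation is to count D-state threads by command name (a minimal sketch):

      # Count threads in uninterruptible sleep (state D), grouped by command name;
      # hundreds of D-state ll_ost* threads would match the load numbers above.
      ps -eLo state,comm | awk '$1 == "D" {print $2}' | sort | uniq -c | sort -rn | head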
      

      Symptoms can be either this trace, like the one reported by NASA in LU-11644:

      Apr 22 17:12:27 fir-io2-s2 kernel: Pid: 83769, comm: ll_ost01_036 3.10.0-957.1.3.el7_lustre.x86_64 #1 SMP Fri Dec 7 14:50:35 PST 2018
      Apr 22 17:12:27 fir-io2-s2 kernel: Call Trace:
      Apr 22 17:12:27 fir-io2-s2 kernel: [<ffffffffc1232640>] ptlrpc_set_wait+0x500/0x8d0 [ptlrpc]
      Apr 22 17:12:27 fir-io2-s2 kernel: [<ffffffffc11effe5>] ldlm_run_ast_work+0xd5/0x3a0 [ptlrpc]
      Apr 22 17:12:27 fir-io2-s2 kernel: [<ffffffffc121169b>] ldlm_glimpse_locks+0x3b/0x100 [ptlrpc]
      Apr 22 17:12:27 fir-io2-s2 kernel: [<ffffffffc181310b>] ofd_intent_policy+0x69b/0x920 [ofd]
      Apr 22 17:12:27 fir-io2-s2 kernel: [<ffffffffc11f0d26>] ldlm_lock_enqueue+0x366/0xa60 [ptlrpc]
      Apr 22 17:12:27 fir-io2-s2 kernel: [<ffffffffc12196d7>] ldlm_handle_enqueue0+0xa47/0x15a0 [ptlrpc]
      Apr 22 17:12:27 fir-io2-s2 kernel: [<ffffffffc12a00b2>] tgt_enqueue+0x62/0x210 [ptlrpc]
      Apr 22 17:12:27 fir-io2-s2 kernel: [<ffffffffc12a710a>] tgt_request_handle+0xaea/0x1580 [ptlrpc]
      Apr 22 17:12:27 fir-io2-s2 kernel: [<ffffffffc124b6db>] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
      Apr 22 17:12:27 fir-io2-s2 kernel: [<ffffffffc124f00c>] ptlrpc_main+0xafc/0x1fc0 [ptlrpc]
      Apr 22 17:12:27 fir-io2-s2 kernel: [<ffffffffa90c1c31>] kthread+0xd1/0xe0
      Apr 22 17:12:27 fir-io2-s2 kernel: [<ffffffffa9774c24>] ret_from_fork_nospec_begin+0xe/0x21
      Apr 22 17:12:27 fir-io2-s2 kernel: [<ffffffffffffffff>] 0xffffffffffffffff
      Apr 22 17:12:27 fir-io2-s2 kernel: LustreError: dumping log to /tmp/lustre-log.1555978347.83769
      

      or these network-related messages:

      Apr 22 17:12:47 fir-io2-s2 kernel: LustreError: 81272:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff90da74f1d200
      Apr 22 17:12:53 fir-io2-s2 kernel: LustreError: 81276:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff90fedd133c00
      Apr 22 17:12:53 fir-io2-s2 kernel: LustreError: 81276:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff90fedd133c00
      
      Apr 22 18:48:53 fir-io2-s2 kernel: LustreError: 81277:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff90e22fd35000
      Apr 22 18:48:53 fir-io2-s2 kernel: LustreError: 38519:0:(ldlm_lib.c:3264:target_bulk_io()) @@@ network error on bulk WRITE  req@ffff90e109b76850 x1631329926812992/t0(0) o4->95e6fd6a-706d-ff18-fa02-0b0e9d53d014@10.8.19.8@o2ib6:301/0 lens 488/448 e 0 to 0 dl 1555984331 ref 1 fl Interpret:/0/0 rc 0/0
      

      but clearly it's not a network issue, just a server deadlock. Again, very much like the issue on Oak from LU-12162. I'm not sure the list of tasks provided here is enough to troubleshoot this, but it would be great if you could take a look. This should be considered at least a Sev 2, as we have been pretty much down lately due to this. At the moment, I'm trying to restart with OST quota disabled to see if that's better. Thanks much.
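      For context, a sketch of what disabling OST quota enforcement looks like, assuming the filesystem name "fir" seen in the logs:

      # Run on the MGS: "none" disables space-quota enforcement for all OSTs of "fir".
      lctl conf_param fir.quota.ost=none
      # Verify the quota slave state on a server afterwards.
      lctl get_param osd-*.*.quota_slave.info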

      NOTE: we tried running with the net debug flag enabled but this doesn't seem to help. Also, we see a spike in msgs_alloc like NASA, but I think it's just a consequence of the deadlock.
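      A simple way to watch the msgs_alloc counter mentioned above across the servers (sketch, using the same clush "@oss" group as elsewhere in this ticket):

      # lnetctl reports the LNet message counters the graphs are built from;
      # a steadily growing msgs_alloc means messages are piling up, not completing.
      clush -w @oss -b 'lnetctl stats show | grep -E "msgs_alloc|msgs_max|errors"'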

      Attaching kernel logs + sysrq-t (PARTIAL) as fir-io2-s2_kernel+sysrq_PARTIAL_20190422.log

      Attachments

        1. fir-io1-s1-previous_bulk.log
          108 kB
        2. fir-io2-s2_kernel+sysrq_PARTIAL_20190422.log
          2.70 MB
        3. fir-io3-s2_foreach_bt_20190422_204054.txt
          1.10 MB
        4. fir-io3-s2-vmcore-dmesg-20190422_204054.log
          825 kB
        5. fir-lnet-msgs_alloc-20190423.png
          258 kB
        6. fir-ossbw-20190423.png
          302 kB
        7. lnet-fir-oss.txt
          1 kB
        8. lnet-sh-1-fdr.txt
          1 kB
        9. lnet-sh-2-edr.txt
          1 kB
        10. lnet-sh-rtr-fir-1.txt
          2 kB
        11. lnet-sh-rtr-fir-2.txt
          2 kB

        Activity

          [LU-12215] OSS network errors 2.12

          Yes, I can see many of them actually. An example is attached in fir-io1-s1-previous_bulk.log; just scroll a bit and you'll see. We can also see a few backtraces like the ones NASA is seeing. Looking at the I/O graph during that time, I see a small drop in I/O at the time of the event, and then it recovers.

          sthiell Stephane Thiell added a comment

          That's possible.  To clarify an earlier question:
          Do you see the "-125" error code associated with those messages in earlier instances?

          By the way, if you are rebooting clients, is the dip in activity perhaps from the MDS-related hang on client loss we've discussed previously, which you recently reminded me is still an issue? (Are you still not unmounting on reboot?)

          [Edit:]

          Actually, the length of the gap mostly rules that out.

          pfarrell Patrick Farrell (Inactive) added a comment - edited

          Oops, sorry about that, missing legend... that's bad! Overall read I/O from all OSSes is in blue (positive) and write I/O is in red (using negative values so it's more readable). They are taken from:

          /proc/fs/lustre/obdfilter/*-OST*/stats
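          For illustration, the same counters can also be pulled with lctl; bandwidth is then the delta of the cumulative byte counts between samples:

          # Cumulative read/write byte counters per OST; sample twice and diff
          # the sums to get bandwidth.
          lctl get_param obdfilter.*.stats | grep -E "read_bytes|write_bytes"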

          Curious to know more about the server_bulk_callback errors too.

          And you said there was a node "having an enormous amount of trouble doing network communication"; are you able to tell me which one, or is this information missing from the logs?

          Also just to add more context, we're currently completing a client upgrade to include the patch from "LU-11359 mdt: fix mdt_dom_discard_data() timeouts", so we're rebooting a lot of clients. Perhaps this was a contributing factor too...

          sthiell Stephane Thiell added a comment

          Can you help interpret the bandwidth graph?  I see positive and negative numbers but I'm not sure what they mean.  Is one read and the other write?

          About the msg_allocs: While it would be good for Amir to weigh in, I think those are associated with resends due to message failures.

          About the -5:
          Yes, -5 isn't that rare, but I think the -125 error code is?  (It would be great to know if I'm wrong about that and it is in fact common.)
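          For reference, the status values in these messages are negated kernel errno codes; a quick way to map them on any of the servers (assuming the usual header locations):

          # -5 and -125 map to EIO and ECANCELED respectively.
          grep -w -e 5 -e 125 /usr/include/asm-generic/errno-base.h \
                              /usr/include/asm-generic/errno.h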

          pfarrell Patrick Farrell (Inactive) added a comment

          Hi Patrick,

          Thanks SO much for having a look at this so quickly! Ok, I may have been confused with the previous deadlock that we have seen. And duly noted for osd_ldiskfs/ldiskfs, thanks!

          I've looked a bit further at the logs from last night, and I see one occurrence of event type 5 (multiple messages on each OSS, but nothing like before), and it happened on all OSSes at the same time. But it looks like it's just a client timing out:

          [root@fir-hn01 sthiell.root]# clush -w@oss journalctl -kn 10000 \| grep server_bulk_callback \| tail -1
          fir-io3-s2: Apr 23 00:07:24 fir-io3-s2 kernel: LustreError: 95122:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff8ef1f90a4e00
          fir-io3-s1: Apr 23 00:07:24 fir-io3-s1 kernel: LustreError: 108580:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff8969ed6f0200
          fir-io4-s2: Apr 23 00:07:25 fir-io4-s2 kernel: LustreError: 74909:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff9e65a0062600
          fir-io2-s2: Apr 23 00:07:24 fir-io2-s2 kernel: LustreError: 100670:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff9c2062801e00
          fir-io4-s1: Apr 23 00:07:25 fir-io4-s1 kernel: LustreError: 76336:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff9b56df6bca00
          fir-io1-s2: Apr 23 00:07:24 fir-io1-s2 kernel: LustreError: 108542:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff98ba981c0200
          fir-io2-s1: Apr 23 00:07:24 fir-io2-s1 kernel: LustreError: 99911:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff9a85ce362800
          fir-io1-s1: Apr 23 00:07:24 fir-io1-s1 kernel: LustreError: 102249:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff9416e9788800
          
          
          
          Logs on one OSS:
          
          Apr 23 00:06:20 fir-io3-s2 kernel: Lustre: fir-OST001d: Client 6ea99810-c4ef-751c-68b4-b60bb649210c (at 10.8.8.35@o2ib6) reconnecting
          Apr 23 00:06:20 fir-io3-s2 kernel: Lustre: Skipped 1 previous similar message
          Apr 23 00:06:21 fir-io3-s2 kernel: Lustre: 123046:0:(client.c:2132:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1556003174/real 0]  req@ffff8f14d87f3c00 x1631583327882336/t0(0) o104->fir-OST0023@10.8.1.26@o2ib6:15/16 lens 296/224 e 0 to 1 dl 1556003181 ref 2 fl Rpc:X/0/ffffffff r
          Apr 23 00:06:21 fir-io3-s2 kernel: Lustre: 123046:0:(client.c:2132:ptlrpc_expire_one_request()) Skipped 1 previous similar message
          Apr 23 00:06:22 fir-io3-s2 kernel: LNetError: 95119:0:(lib-msg.c:811:lnet_is_health_check()) Msg is in inconsistent state, don't perform health checking (-125, 0)
          Apr 23 00:06:22 fir-io3-s2 kernel: LustreError: 95120:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff8ef2ceb77800
          Apr 23 00:06:22 fir-io3-s2 kernel: LustreError: 95120:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff8ef2ceb77800
          Apr 23 00:06:22 fir-io3-s2 kernel: LustreError: 95120:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff8ef2ceb77800
          Apr 23 00:06:22 fir-io3-s2 kernel: LustreError: 95120:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff8ed50ad20600
          Apr 23 00:06:22 fir-io3-s2 kernel: LustreError: 95120:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff8ed50ad20600
          Apr 23 00:06:22 fir-io3-s2 kernel: Lustre: fir-OST001d: Bulk IO write error with f2a4d35e-05ff-fe02-d6c7-6c183d27b8a1 (at 10.8.8.29@o2ib6), client will retry: rc = -110
          Apr 23 00:06:22 fir-io3-s2 kernel: LNetError: 95119:0:(lib-msg.c:811:lnet_is_health_check()) Skipped 5 previous similar messages
          Apr 23 00:06:22 fir-io3-s2 kernel: LustreError: 95119:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff8ef2ceb77800
          Apr 23 00:06:23 fir-io3-s2 kernel: Lustre: fir-OST0019: Client 057a35ee-7eb3-cd47-7142-9e6ee9c8aa59 (at 10.8.13.7@o2ib6) reconnecting
          Apr 23 00:06:23 fir-io3-s2 kernel: Lustre: Skipped 4 previous similar messages
          Apr 23 00:06:27 fir-io3-s2 kernel: LNetError: 95120:0:(lib-msg.c:811:lnet_is_health_check()) Msg is in inconsistent state, don't perform health checking (-125, 0)
          Apr 23 00:06:27 fir-io3-s2 kernel: LustreError: 95117:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff8ef08388e800
          Apr 23 00:06:27 fir-io3-s2 kernel: LustreError: 95117:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff8eeaf3f75800
          Apr 23 00:06:27 fir-io3-s2 kernel: LustreError: 95117:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff8eeaf3f75800
          Apr 23 00:06:27 fir-io3-s2 kernel: LustreError: 95117:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff8eeaf3f75800
          Apr 23 00:06:27 fir-io3-s2 kernel: LustreError: 95117:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff8eeaf3f75800
          Apr 23 00:06:27 fir-io3-s2 kernel: Lustre: fir-OST001d: Bulk IO write error with f2a4d35e-05ff-fe02-d6c7-6c183d27b8a1 (at 10.8.8.29@o2ib6), client will retry: rc = -110
          Apr 23 00:06:27 fir-io3-s2 kernel: Lustre: Skipped 1 previous similar message
          Apr 23 00:06:27 fir-io3-s2 kernel: LNetError: 95120:0:(lib-msg.c:811:lnet_is_health_check()) Skipped 5 previous similar messages
          Apr 23 00:06:27 fir-io3-s2 kernel: LustreError: 95120:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff8ef08388e800
          Apr 23 00:06:27 fir-io3-s2 kernel: Lustre: fir-OST001d: Client 43d6af20-5e3c-1ef3-577c-0e2086a05c21 (at 10.8.10.14@o2ib6) reconnecting
          Apr 23 00:06:27 fir-io3-s2 kernel: Lustre: Skipped 2 previous similar messages
          Apr 23 00:06:40 fir-io3-s2 kernel: Lustre: fir-OST0023: Client 38fc721f-2581-5cc7-2331-7b71af28244a (at 10.8.7.30@o2ib6) reconnecting
          Apr 23 00:06:40 fir-io3-s2 kernel: Lustre: Skipped 4 previous similar messages
          

           

          Also, I have looked at past OSS logs, and event type 5 is actually present pretty often when a client times out.
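          One way to check that correlation systematically is to bucket both messages per minute of the kernel log (illustrative sketch, run on one OSS):

          # Count bulk-callback errors and client reconnects per minute; if the
          # event type 5 bursts always coincide with reconnects, they are a
          # symptom of the timeout rather than a separate network fault.
          journalctl -k -o short-iso | awk '
            /server_bulk_callback/ { cb[substr($1,1,16)]++ }
            /reconnecting/         { rc[substr($1,1,16)]++ }
            END { for (m in cb) printf "%s  bulk_cb=%d  reconnects=%d\n", m, cb[m], rc[m] }'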

           

          Lastly, I'm attaching two graphs, just because I wanted to show you the mess of last evening. The first one is the overall OSS I/O bandwidth and the second one is the rate of msgs_alloc (all servers). The small peak around midnight actually matches the event type 5 errors seen above.

          sthiell Stephane Thiell added a comment - edited

          Amir,

          Are you able to take a look at this one?  The main messages of interest seem to be the bulk callback error and the lnet_is_health_check errors I highlighted.

          pfarrell Patrick Farrell (Inactive) added a comment

          Hmm, after looking over the logs, there's no evidence of deadlock - it looks like a network issue.  Possibly the same as reported by NASA, but the signature is a bit different.

          It's good to distinguish between osd_ldiskfs and ldiskfs itself, by the way.  You're seeing a lot of threads stuck in osd_ldiskfs, which is a layer up from ldiskfs itself.

          Having dug through the traces, etc, I don't see any evidence of deadlock.  I see a node that is having an enormous amount of trouble doing network communication, and suffering because of it.

          We see thousands of this message:

          [45514.433468] LustreError: 74198:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff980a1b566600 

          And these:

          [45513.596412] LNetError: 74206:0:(lib-msg.c:811:lnet_is_health_check()) Msg is in inconsistent state, don't perform health checking (-125, 0)
          [45513.608927] LNetError: 74206:0:(lib-msg.c:811:lnet_is_health_check()) Skipped 148 previous similar messages 

          We're seeing a lot of evictions, apparently from the same cause.  All of this chaos is affecting various worker threads, which over time have become stuck (this is your increasing load average, which reports waiting threads).  They're all stuck in pretty "normal" places, waiting for I/O or communication, so if they are indeed stuck (I suspect some are truly stuck and some would make progress given time), it's because they've become confused by repeated communication failures and evictions.

          At first glance, these messages are not the same as those in the NASA bug.  I'll look around and see what I can find, but I'm also going to see if Amir can take a look.
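          As a quick sanity check of that reading, the attached sysrq-t log can be bucketed by the functions appearing in the call traces (a rough sketch):

          # Count how often each function appears in the dumped call traces;
          # a handful of wait/IO functions dominating is consistent with threads
          # parked on communication rather than a classic lock-ordering deadlock.
          grep -oE '\] [a-zA-Z_][a-zA-Z0-9_.]+\+0x' fir-io2-s2_kernel+sysrq_PARTIAL_20190422.log \
            | awk '{sub(/\+0x$/, "", $2); print $2}' | sort | uniq -c | sort -rn | head -25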

          pfarrell Patrick Farrell (Inactive) added a comment

          The issue seems to be resolved now after we decided to mount all targets with abort_recov...; we probably lost most of the jobs on the cluster, but the filesystem has run all night. The only remaining suspicious messages I can see now are these:

          fir-io3-s2: Apr 23 05:32:41 fir-io3-s2 kernel: LustreError: 123049:0:(tgt_grant.c:742:tgt_grant_check()) fir-OST0019: cli 0cb811d4-9df8-f3cc-1525-eefdbc079d76 claims 3481600 GRANT, real grant 0
          fir-io3-s2: Apr 23 05:32:41 fir-io3-s2 kernel: LustreError: 123049:0:(tgt_grant.c:742:tgt_grant_check()) Skipped 87 previous similar messages
          fir-io2-s2: Apr 23 05:33:03 fir-io2-s2 kernel: LustreError: 3155:0:(tgt_grant.c:742:tgt_grant_check()) fir-OST0015: cli 7ed43e69-e1c8-0e51-0bfe-f44bf39fd025 claims 36864 GRANT, real grant 0
          fir-io2-s2: Apr 23 05:33:03 fir-io2-s2 kernel: LustreError: 3155:0:(tgt_grant.c:742:tgt_grant_check()) Skipped 30 previous similar messages
          fir-io2-s1: Apr 23 05:33:47 fir-io2-s1 kernel: LustreError: 2374:0:(tgt_grant.c:742:tgt_grant_check()) fir-OST0016: cli 5f0dd240-53c1-516b-7224-272e9211f8ae claims 32768 GRANT, real grant 0
          fir-io2-s1: Apr 23 05:33:47 fir-io2-s1 kernel: LustreError: 2374:0:(tgt_grant.c:742:tgt_grant_check()) Skipped 17 previous similar messages
          fir-io4-s2: Apr 23 05:34:04 fir-io4-s2 kernel: LustreError: 102956:0:(tgt_grant.c:742:tgt_grant_check()) fir-OST0025: cli a0307bc6-c839-435b-9342-1c622269d753 claims 53248 GRANT, real grant 0
          fir-io4-s2: Apr 23 05:34:04 fir-io4-s2 kernel: LustreError: 102956:0:(tgt_grant.c:742:tgt_grant_check()) Skipped 124 previous similar messages
          fir-io3-s1: Apr 23 05:34:33 fir-io3-s1 kernel: LustreError: 6026:0:(tgt_grant.c:742:tgt_grant_check()) fir-OST0022: cli a7e75d11-51ad-9d9c-ce73-18ed393f55b8 claims 2203648 GRANT, real grant 0
          fir-io3-s1: Apr 23 05:34:33 fir-io3-s1 kernel: LustreError: 6026:0:(tgt_grant.c:742:tgt_grant_check()) Skipped 47 previous similar messages
          

          We also saw these messages shortly after bringing the filesystem back up:

          fir-io4-s1: Apr 22 22:59:34 fir-io4-s1 kernel: LustreError: 79918:0:(tgt_grant.c:248:tgt_grant_sanity_check()) ofd_obd_disconnect: tot_granted 58720256 != fo_tot_granted 67108864
          

          I added another, more complete "foreach bt" output from a voluntary crash dump we took shortly after seeing the problem (lots of "event type 5" messages). Please see fir-io3-s2_foreach_bt_20190422_204054.txt with full logs in fir-io3-s2-vmcore-dmesg-20190422_204054.log. To me, this looks very much like another instance of LU-12162 or LU-12018, as I said originally. So perhaps the fix doesn't really work (since we have it applied)? bzzz what do you think?
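          For completeness, a sketch of how such a "foreach bt" listing and the vmcore dmesg can be produced with crash(8); the vmlinux and vmcore paths below are assumptions for this environment, not taken from the ticket:

          # Build a crash(8) command file, then run it against the dump.
          printf '%s\n' 'foreach bt > /tmp/foreach_bt.txt' \
                        'log > /tmp/vmcore-dmesg.log' \
                        'exit' > /tmp/crash.cmds
          # vmlinux comes from the matching kernel-debuginfo package; the vmcore
          # path depends on the local kdump configuration.
          crash -i /tmp/crash.cmds \
            /usr/lib/debug/lib/modules/3.10.0-957.1.3.el7_lustre.x86_64/vmlinux \
            /var/crash/<dump-dir>/vmcore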

          Those backtraces in the crash dump look familiar:

          PID: 77085  TASK: ffff97fdb4fd30c0  CPU: 29  COMMAND: "ll_ost_io01_001"
           #0 [ffff982d2a1af748] __schedule at ffffffff8bf67747
           #1 [ffff982d2a1af7d8] schedule at ffffffff8bf67c49
           #2 [ffff982d2a1af7e8] schedule_timeout at ffffffff8bf65721
           #3 [ffff982d2a1af898] io_schedule_timeout at ffffffff8bf672ed
           #4 [ffff982d2a1af8c8] io_schedule at ffffffff8bf67388
           #5 [ffff982d2a1af8d8] bit_wait_io at ffffffff8bf65d71
           #6 [ffff982d2a1af8f0] __wait_on_bit_lock at ffffffff8bf65921
           #7 [ffff982d2a1af930] __lock_page at ffffffff8b9b5b44
           #8 [ffff982d2a1af988] __find_lock_page at ffffffff8b9b6844
           #9 [ffff982d2a1af9b0] find_or_create_page at ffffffff8b9b74b4
          #10 [ffff982d2a1af9f0] osd_bufs_get at ffffffffc14d7ea7 [osd_ldiskfs]
          #11 [ffff982d2a1afa48] ofd_preprw at ffffffffc1624e67 [ofd]
          #12 [ffff982d2a1afae8] tgt_brw_read at ffffffffc101157b [ptlrpc]
          #13 [ffff982d2a1afcc8] tgt_request_handle at ffffffffc101010a [ptlrpc]
          #14 [ffff982d2a1afd50] ptlrpc_server_handle_request at ffffffffc0fb46db [ptlrpc]
          #15 [ffff982d2a1afdf0] ptlrpc_main at ffffffffc0fb800c [ptlrpc]
          #16 [ffff982d2a1afec8] kthread at ffffffff8b8c1c31
          #17 [ffff982d2a1aff50] ret_from_fork_nospec_begin at ffffffff8bf74c24
          

          Before performing the global remount with abort_recov, we tried to bring the filesystem back up many times (over several hours), but either we hit these event type 5 messages plus spikes in msgs_alloc, etc. (which to me means an OSS deadlock), or a clear LBUG like this one (looks like LU-12120):

          [ 5351.683710] Lustre: 70393:0:(ldlm_lib.c:1771:extend_recovery_timer()) fir-OST002e: extended recovery timer reaching hard limit: 900, extend: 1
          [ 5351.696503] Lustre: 70393:0:(ldlm_lib.c:1771:extend_recovery_timer()) Skipped 1 previous similar message
          [ 5351.706002] Lustre: 70393:0:(ldlm_lib.c:2048:target_recovery_overseer()) fir-OST002e recovery is aborted by hard timeout
          [ 5351.716900] Lustre: 70393:0:(ldlm_lib.c:2058:target_recovery_overseer()) recovery is aborted, evict exports in recovery
          [ 5352.056860] Lustre: fir-OST002e: deleting orphan objects from 0x0:2804316 to 0x0:2804385
          [ 5352.077834] Lustre: 70393:0:(ldlm_lib.c:2554:target_recovery_thread()) too long recovery - read logs
          [ 5352.082207] LustreError: 84286:0:(tgt_grant.c:563:tgt_grant_incoming()) fir-OST002e: cli ecaf3ea1-3d24-ab9f-6856-0bf7294c7cf4/ffff8dca263c2400 dirty 0 pend 0 grant -18752512
          [ 5352.082210] LustreError: 84286:0:(tgt_grant.c:565:tgt_grant_incoming()) LBUG
          [ 5352.082212] Pid: 84286, comm: ll_ost_io00_063 3.10.0-957.1.3.el7_lustre.x86_64 #1 SMP Fri Dec 7 14:50:35 PST 2018
          [ 5352.082212] Call Trace:
          [ 5352.082235]  [<ffffffffc0a097cc>] libcfs_call_trace+0x8c/0xc0 [libcfs]
          [ 5352.082243]  [<ffffffffc0a0987c>] lbug_with_loc+0x4c/0xa0 [libcfs]
          [ 5352.082304]  [<ffffffffc0e5b6f0>] tgt_grant_prepare_read+0x0/0x3b0 [ptlrpc]
          [ 5352.082352]  [<ffffffffc0e5b7fb>] tgt_grant_prepare_read+0x10b/0x3b0 [ptlrpc]
          [ 5352.082372]  [<ffffffffc1413c00>] ofd_preprw+0x450/0x1160 [ofd]
          [ 5352.082419]  [<ffffffffc0e3f57b>] tgt_brw_read+0x9db/0x1e50 [ptlrpc]
          [ 5352.082465]  [<ffffffffc0e3e10a>] tgt_request_handle+0xaea/0x1580 [ptlrpc]
          [ 5352.082509]  [<ffffffffc0de26db>] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
          [ 5352.082548]  [<ffffffffc0de600c>] ptlrpc_main+0xafc/0x1fc0 [ptlrpc]
          [ 5352.082553]  [<ffffffffbc8c1c31>] kthread+0xd1/0xe0
          [ 5352.082558]  [<ffffffffbcf74c24>] ret_from_fork_nospec_begin+0xe/0x21
          [ 5352.082581]  [<ffffffffffffffff>] 0xffffffffffffffff
          
          sthiell Stephane Thiell added a comment - edited

          Sorry, I forgot to mention an important thing regarding another symptom: when this happens, after some time it is sometimes followed by LU-12120 (LBUG in tgt_grant_incoming). I have several crash dumps of these, so maybe the answer is in the crash dumps, not sure.
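          For anyone chasing the grant accounting, the counters that tgt_grant_check()/tgt_grant_incoming() complain about are exported per OST and can be watched directly (sketch, using the same clush group as above):

          # Per-OST grant accounting; tot_granted going inconsistent (or a client
          # grant going negative, as in the LBUG) is what these messages report.
          clush -w @oss -b 'lctl get_param obdfilter.*.tot_granted obdfilter.*.tot_dirty obdfilter.*.tot_pending'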

          sthiell Stephane Thiell added a comment

          People

            Assignee: pfarrell Patrick Farrell (Inactive)
            Reporter: sthiell Stephane Thiell
            Votes: 0
            Watchers: 4
