
lustre client lockup when under memory pressure

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version/s: Lustre 1.8.6
    • Affects Version/s: None
    • Labels: None
    • Environment: Client is running 2.6.27.45-lustre-1.8.3.ddn3.3. Connectivity is 10GigE
    • Severity: 3
    • Rank (Obsolete): 10103

    Description

      A customer is seeing a problem where a client loses access to Lustre when the node is subjected to memory pressure from an errant application.

      Lustre starts reporting -113 (No route to host) errors for certain NIDs in the filesystem despite the TCP/IP network being functional. After the memory pressure is relieved, the Lustre errors remain. I am collecting logs currently.

      From the customer report:

      LNET is reporting no-route-to-host errors for a significant number of OSSs/MDSs (client log attached).

      Mar 29 09:23:27 cgp-bigmem kernel: [589295.826095] LustreError: 4980:0:(events.c:66:request_out_callback()) @@@ type 4, status 113 req@ffff881d2e995400 x1363985318437337/t0 o8->lus03-OST0000_UUID@172.17.128.130@tcp:28/4 lens 368/584 e 0 to 1 dl 1301387122 ref 2 fl Rpc:N/0/0 rc 0/0

      but from user-space on the client, all those nodes are pingable:

      cgp-bigmem:/var/log# ping 172.17.128.130
      PING 172.17.128.130 (172.17.128.130) 56(84) bytes of data.
      64 bytes from 172.17.128.130: icmp_seq=1 ttl=62 time=0.102 ms
      64 bytes from 172.17.128.130: icmp_seq=2 ttl=62 time=0.091 ms
      64 bytes from 172.17.128.130: icmp_seq=3 ttl=62 time=0.091 ms
      64 bytes from 172.17.128.130: icmp_seq=4 ttl=62 time=0.090 ms

      However, an LNET ping hangs:
      cgp-bigmem:~# lctl ping 172.17.128.130@tcp

      From another client, the ping works as expected:

      farm2-head1:# lctl ping 172.17.128.130@tcp
      12345-0@lo
      12345-172.17.128.130@tcp

      cgp-bigmem:~# lfs check servers | grep -v active
      error: check 'lus01-OST0007-osc-ffff88205bd52000' Resource temporarily unavailable
      error: check 'lus01-OST0008-osc-ffff88205bd52000' Resource temporarily unavailable
      error: check 'lus01-OST0009-osc-ffff88205bd52000' Resource temporarily unavailable
      error: check 'lus01-OST000a-osc-ffff88205bd52000' Resource temporarily unavailable
      error: check 'lus01-OST000b-osc-ffff88205bd52000' Resource temporarily unavailable
      error: check 'lus01-OST000c-osc-ffff88205bd52000' Resource temporarily unavailable
      error: check 'lus01-OST000d-osc-ffff88205bd52000' Resource temporarily unavailable
      error: check 'lus01-OST000e-osc-ffff88205bd52000' Resource temporarily unavailable
      error: check 'lus02-MDT0000-mdc-ffff8880735ea000' Resource temporarily unavailable
      error: check 'lus03-OST0000-osc-ffff8840730a1400' Resource temporarily unavailable
      error: check 'lus03-OST0001-osc-ffff8840730a1400' Resource temporarily unavailable
      error: check 'lus03-OST0002-osc-ffff8840730a1400' Resource temporarily unavailable
      error: check 'lus03-OST0003-osc-ffff8840730a1400' Resource temporarily unavailable
      error: check 'lus03-OST0004-osc-ffff8840730a1400' Resource temporarily unavailable
      error: check 'lus03-OST0005-osc-ffff8840730a1400' Resource temporarily unavailable
      error: check 'lus03-OST0006-osc-ffff8840730a1400' Resource temporarily unavailable
      error: check 'lus03-OST0007-osc-ffff8840730a1400' Resource temporarily unavailable
      error: check 'lus03-OST0008-osc-ffff8840730a1400' Resource temporarily unavailable
      error: check 'lus03-OST0009-osc-ffff8840730a1400' Resource temporarily unavailable
      error: check 'lus03-OST000a-osc-ffff8840730a1400' Resource temporarily unavailable
      error: check 'lus03-OST000b-osc-ffff8840730a1400' Resource temporarily unavailable
      error: check 'lus03-OST000c-osc-ffff8840730a1400' Resource temporarily unavailable
      error: check 'lus03-OST0019-osc-ffff8840730a1400' Resource temporarily unavailable
      error: check 'lus03-OST001a-osc-ffff8840730a1400' Resource temporarily unavailable
      error: check 'lus05-OST0010-osc-ffff886070dab800' Resource temporarily unavailable
      error: check 'lus05-OST0012-osc-ffff886070dab800' Resource temporarily unavailable
      error: check 'lus05-OST0014-osc-ffff886070dab800' Resource temporarily unavailable
      error: check 'lus05-OST0016-osc-ffff886070dab800' Resource temporarily unavailable
      error: check 'lus05-OST0018-osc-ffff886070dab800' Resource temporarily unavailable
      error: check 'lus05-OST001a-osc-ffff886070dab800' Resource temporarily unavailable
      error: check 'lus05-OST001c-osc-ffff886070dab800' Resource temporarily unavailable
      error: check 'lus05-OST000f-osc-ffff886070dab800' Resource temporarily unavailable
      error: check 'lus05-OST0011-osc-ffff886070dab800' Resource temporarily unavailable
      error: check 'lus05-OST0013-osc-ffff886070dab800' Resource temporarily unavailable
      error: check 'lus05-OST0015-osc-ffff886070dab800' Resource temporarily unavailable
      error: check 'lus05-OST0017-osc-ffff886070dab800' Resource temporarily unavailable
      error: check 'lus05-OST0019-osc-ffff886070dab800' Resource temporarily unavailable
      error: check 'lus05-OST001b-osc-ffff886070dab800' Resource temporarily unavailable
      error: check 'lus05-OST001d-osc-ffff886070dab800' Resource temporarily unavailable
      error: check 'lus04-OST0001-osc-ffff88806e9d8c00' Resource temporarily unavailable
      error: check 'lus04-OST0003-osc-ffff88806e9d8c00' Resource temporarily unavailable
      error: check 'lus04-OST0005-osc-ffff88806e9d8c00' Resource temporarily unavailable
      error: check 'lus04-OST0007-osc-ffff88806e9d8c00' Resource temporarily unavailable
      error: check 'lus04-OST0009-osc-ffff88806e9d8c00' Resource temporarily unavailable
      error: check 'lus04-OST000b-osc-ffff88806e9d8c00' Resource temporarily unavailable
      error: check 'lus04-OST000d-osc-ffff88806e9d8c00' Resource temporarily unavailable

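      One way to check whether the stall is at the LNET layer rather than in TCP (a sketch, assuming the standard 1.8.x /proc/sys/lnet interface is present on the client) is to dump the peer and network-interface state and look for peers with exhausted credits or a growing send queue:

      cgp-bigmem:~# cat /proc/sys/lnet/peers
      cgp-bigmem:~# cat /proc/sys/lnet/nis

      Peers stuck without tx credits while ordinary ICMP ping still works would point at ksocklnd failing to get buffers under memory pressure rather than at a genuine routing problem.
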
      Attachments

        Activity

          [LU-179] lustre client lockup when under memory pressure
          pjones Peter Jones made changes -
          Reporter Original: Ashley Pittman [ apittman ] New: Shuichi Ihara [ ihara ]
          bobijam Zhenyu Xu made changes -
          Resolution New: Fixed [ 1 ]
          Status Original: Reopened [ 4 ] New: Resolved [ 5 ]
          bobijam Zhenyu Xu added a comment -

          Closing the ticket per Guy Coates' update.

          gmpc@sanger.ac.uk Guy Coates added a comment -

          We upgraded this machine from a 2.6.27/SLES11 kernel + 1.8.5.56 Lustre client to a 2.6.32 kernel + 1.8.5.56 Lustre client, and the problems seem to have stopped.

          You can close this issue.

          Thanks,

          Guy

          bobijam Zhenyu Xu added a comment - - edited

          Can you help check what the Lustre threads were doing during this hang? (It would be better to have thread stacks; one way to capture them is sketched below.)

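          If sysrq is enabled on the client, something like the following (a sketch; adjust the output paths as needed) should capture the stacks of all tasks plus the Lustre debug log while the hang is in progress:

          cgp-bigmem:~# echo 1 > /proc/sys/kernel/sysrq      # enable sysrq if it is not already on
          cgp-bigmem:~# echo t > /proc/sysrq-trigger         # dump every task's stack to the kernel log
          cgp-bigmem:~# dmesg > /tmp/task-stacks.txt
          cgp-bigmem:~# lctl dk > /tmp/lustre-debug.txt      # flush the Lustre kernel debug buffer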

          apittman Ashley Pittman (Inactive) added a comment -

          Yes.

          The system is a NUMA system, currently with 512 GB of RAM. The problem seems to happen under memory pressure; a figure of 70% has been quoted, but it is worth saying that the application is single-threaded, so it is quite likely that some NUMA regions are experiencing 100% memory usage.

          One thing I've suggested is pinning the application to a different NUMA region from the Lustre kernel threads (if this is even possible), so the application wouldn't starve Lustre of memory so easily; a possible invocation is sketched below.

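          For example (assuming numactl is installed and the Lustre/LNET threads mostly allocate from NUMA node 0; ./errant-application is a placeholder for the real program):

          cgp-bigmem:~# numactl --cpunodebind=1 --membind=1 ./errant-application

          That would at least stop a single process from exhausting memory on the node the Lustre service threads allocate from, though it does not address the underlying allocation failure.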
          bobijam Zhenyu Xu added a comment -

          Is it the same pattern as before, i.e. an LNET ping hangs while an ordinary ping works OK?

          bobijam Zhenyu Xu made changes -
          Resolution Original: Fixed [ 1 ]
          Status Original: Resolved [ 5 ] New: Reopened [ 4 ]

          apittman Ashley Pittman (Inactive) added a comment -

          As above, the customer was still observing this problem with the latest code on the 10th of June; could you reopen this bug accordingly?

          bobijam Zhenyu Xu made changes -
          Comment [ to Sebastien Piechurski (I didn't see your comment here but I did on my mail notification),

          Try bz 21776 attachment 29521 first, which is a port for 1.8.x. ]

          People

            Assignee: bobijam Zhenyu Xu
            Reporter: ihara Shuichi Ihara (Inactive)
            Votes: 0
            Watchers: 3
