LU-17323

fork() leaks ERESTARTNOINTR (errno 513) to user application

Details

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Critical
    • Fix Version/s: None
    • Affects Version/s: Lustre 2.11.0, Lustre 2.12.5, Lustre 2.12.6, Lustre 2.12.9
    • Labels: None
    • Environment: RHEL6, RHEL7/CentOS7 (various kernels)
    • Severity: 3

    Description

      When using file locks on a Lustre mount with the 'flock' mount option, fork()
      can leak ERESTARTNOINTR to a user application.  The fork() system call checks
      whether a signal is pending and, if so, cleans up everything it did and returns
      ERESTARTNOINTR.  The kernel is then supposed to transparently restart the fork()
      from scratch; the user application should never see the ERESTARTNOINTR errno.

      The fork() cleanup code calls exit_files(), which calls into Lustre code.  I'm
      not positive what the problem is at a low level.  It may be that the Lustre code
      clears the TIF_SIGPENDING flag, which prevents the kernel from restarting the
      fork() and leaks the ERESTARTNOINTR errno to the user application.
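
      As a rough illustration (a much-simplified paraphrase of the generic kernel
      fork()/signal-delivery behaviour, not verbatim kernel source and not Lustre
      code), the transparent restart only works if TIF_SIGPENDING is still set
      when the syscall heads back toward userspace:

      /* Simplified sketch, not verbatim kernel source. */

      /* In copy_process() (kernel/fork.c): a signal arrived while the new child
       * was being set up, so undo everything and ask for a transparent restart. */
      recalc_sigpending();
      if (signal_pending(current)) {
              retval = -ERESTARTNOINTR;   /* kernel-internal value 513 */
              goto bad_fork_cleanup;      /* teardown path that reaches exit_files() */
      }

      /* On the way back to userspace, -ERESTARTNOINTR is rewritten into
       * "re-issue the fork()" only inside the signal-delivery path, which is
       * entered because TIF_SIGPENDING is set.  If code reached from the
       * teardown (e.g. a filesystem wait loop) clears TIF_SIGPENDING, that path
       * is skipped and the raw value reaches the application as fork() == -1
       * with errno 513. */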

      It seems there have to be multiple threads involved.  My reproducer has two
      threads.  Thread 1 calls fork() in an infinite loop, spawning children that
      exit after a random number of seconds.  Thread 2 sleeps for a random number
      of seconds in an infinite loop.  There is a SIGCHLD handler set up and both
      threads can handle SIGCHLD signals.  The fork() gets interrupted by pending
      SIGCHLD signals from exiting children.  I think thread 2 has to handle the
      SIGCHLD signal for the problem to happen.  If thread 2 has SIGCHLD signals
      blocked, the problem never happens.
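
      The actual repro.c is attached to the ticket; the following is only a minimal
      sketch reconstructed from the description above (it hardcodes a BSD-style
      flock() read lock, while the real reproducer takes a posix/flock/none
      argument), so treat the names and details as illustrative:

      /* repro_sketch.c -- illustrative only, reconstructed from the description.
       * Build: gcc -o repro_sketch repro_sketch.c -lpthread */
      #include <errno.h>
      #include <fcntl.h>
      #include <pthread.h>
      #include <signal.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/file.h>
      #include <sys/wait.h>
      #include <unistd.h>

      static void sigchld_handler(int sig)
      {
              (void)sig;
              while (waitpid(-1, NULL, WNOHANG) > 0)   /* reap exited children */
                      ;
      }

      /* Thread 2: sleeps in an infinite loop; it is eligible to handle SIGCHLD. */
      static void *sleeper(void *arg)
      {
              (void)arg;
              for (;;)
                      sleep((rand() % 5) + 1);
              return NULL;
      }

      int main(int argc, char **argv)
      {
              if (argc < 2) {
                      fprintf(stderr, "usage: %s <file-on-lustre-mount>\n", argv[0]);
                      return 1;
              }

              struct sigaction sa = { 0 };
              sa.sa_handler = sigchld_handler;
              sigemptyset(&sa.sa_mask);
              sigaction(SIGCHLD, &sa, NULL);

              int fd = open(argv[1], O_RDONLY);
              if (fd < 0) {
                      perror("open");
                      return 1;
              }
              flock(fd, LOCK_SH);   /* BSD-style read lock; the attached repro.c can
                                     * also take a POSIX fcntl() lock or no lock */

              pthread_t tid;
              pthread_create(&tid, NULL, sleeper, NULL);

              /* Thread 1: fork() in an infinite loop; each child exits after a random
               * delay, so SIGCHLD keeps arriving while later fork() calls are running. */
              for (;;) {
                      pid_t pid = fork();
                      if (pid == 0) {
                              sleep((rand() % 5) + 1);
                              _exit(0);
                      }
                      if (pid < 0) {
                              printf("Fork returned -1, errno = %d, exiting...\n", errno);
                              return 1;
                      }
              }
      }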

      The problem doesn't reproduce with the 'localflock' mount option, so we
      believe 'localflock' is safe from this issue.

      We've seen this on RHEL6 and RHEL7/CentOS7 kernels with Lustre 2.11.0,
      2.12.5, and 2.12.6.  Lustre 2.12.0 does not reproduce the issue.

      Steps to reproduce:

      1) The Lustre mount must be using the 'flock' mount option.
      2) gcc -o repro ./repro.c -lpthread
      3) Run the reproducer:

      The problem usually reproduces within 5-60 seconds.  The reproducer runs
      indefinitely or until the issue occurs; press Ctrl-C to quit.

      > touch /lustre_mnt/testfile.txt
      > ./repro /lustre_mnt/testfile.txt
      Fork returned -1, errno = 513, exiting...

      Use POSIX style read lock
      > ./repro /lustre_mnt/testfile.txt posix
      Fork returned -1, errno = 513, exiting...

      Use BSD style read lock
      > ./repro /lustre_mnt/testfile.txt flock
      Fork returned -1, errno = 513, exiting...

      Don't lock at all (this won't reproduce and will run indefinitely)
      > ./repro /lustre_mnt/testfile.txt none

      NOTE: be aware the reproducer can exhaust your maxprocs limit
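
      For reference, the 'posix' and 'flock' modes above correspond to the two
      standard read-lock interfaces.  A minimal illustration (not taken from the
      attached repro.c; the helper name is made up):

      #include <fcntl.h>
      #include <sys/file.h>
      #include <unistd.h>

      /* Illustrative helper: take a shared (read) lock on fd in one of the two
       * styles exercised by the reproducer. */
      static int take_read_lock(int fd, int use_posix)
      {
              if (use_posix) {
                      /* "posix": POSIX advisory record lock via fcntl() */
                      struct flock fl = { .l_type = F_RDLCK, .l_whence = SEEK_SET };
                      return fcntl(fd, F_SETLKW, &fl);
              }
              /* "flock": BSD-style shared lock via flock() */
              return flock(fd, LOCK_SH);
      }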

      Attachments

        Issue Links

          Activity

            [LU-17323] fork() leaks ERESTARTNOINTR (errno 513) to user application

            adilger Andreas Dilger added a comment -

            Mike, please file your gcc issue in a separate Jira ticket, or it will be lost here. There should be proper interop between 2.15 clients and 2.12 servers.
            mikedoo4 Mike D added a comment -

            I tried the latest Lustre 2.15 client and have not been able to reproduce the issue on CentOS7.  However, I did notice a problem (I haven't investigated it much yet):

            > gcc hello.c

            > ./a.out

            ./a.out: Command not found.

            > /bin/ls a.out

            a.out

            > ./a.out

            hello world

             

            The file isn't there until I do the ls.  This is reproducible every time.

             

            Is it recommended to use a Lustre 2.15 client with 2.12.x servers?

            mikedoo4 Mike D added a comment -

            I plan to try Lustre 2.15 client (assuming that will connect to the 2.12.x server) but it will probably be several weeks before I can try it and report back.  I don't know if the problem occurs with RHEL8/9 as I don't have an easy way to test that.


            paf0186 Patrick Farrell added a comment -

            Interesting that the signal clearing is in those macros.  Those are the ones neilb ported in to Lustre to replace our hand rolled stuff, so maybe the issue isn't fixed.  Or perhaps Neil knows - should've tagged him earlier.

            adilger Andreas Dilger added a comment -

            Mike, thank you for the detailed analysis (including a reproducer!).

            Since RHEL6/7 are basically EOL at this point, this issue would only be of interest if the problem persists in RHEL8/9, since we've run the full lifetime of EL6/7 without hitting this problem in actual production usage (or at least nothing has been reported to us up to this point).

            I don't see ERESTARTNOINTR used or returned anywhere in the Lustre code, so this error code is definitely coming from the kernel fork() handling. There is indeed code in libcfs/include/libcfs/linux/linux-wait.h that clears TIF_SIGPENDING in the RPC completion wait routines (__wait_event_idle() or __wait_event_lifo()), which are conditionally used depending on the kernel version in use. I suspect these routines are clones of similar code from newer kernels, carried for compatibility with older kernels, so there may be some variations.

            If the problem still persists with newer kernels and Lustre releases then it would be useful to continue the investigation and add the repro.c test case to our regression test suite.

            paf0186 Patrick Farrell added a comment -

            Hi Mike,

            I think I know the issue you're hitting here and its root cause.  Historically, Lustre had to do some nasty things with signal handling due to the lack of the ability to express "waiting" without contributing to load, and we also rolled our own for some waiting primitives that didn't exist at the time.  This results in some weird behavior with certain signals in some cases.  I saw this with ptrace, but this problem has a very similar feel to it.

            Neil Brown did a thorough rework of signal handling and task waiting in Lustre, spread over a number of patches (if it were just one, I would link it), which I believe landed for 2.13 but I don't think was ported to 2.12 (it was seen as code cleanup rather than fixing specific bugs).  (I think your not hitting the problem with 2.12 is probably a coincidence/timing change.)

            2.15 is the current maintenance release, so it would be good to see if you can reproduce this with 2.15, which has the full bevy of wait and signal handling changes.

            People

              Assignee: wc-triage WC Triage
              Reporter: mikedoo4 Mike D
              Votes: 0
              Watchers: 5
