LU-428

Lustre: 16290:0:(quota_interface.c:460:quota_chk_acq_common()) still haven't managed to acquire quota space from the quota master after 10 retries (err=0, rc=0)

Details

    • Type: Bug
    • Resolution: Cannot Reproduce
    • Priority: Minor
    • None
    • Affects Version/s: Lustre 1.8.6
    • None
    • Environment: x86_64, CentOS5, 2.6.18-194.17.1.el5_lustre.1.8.5, OFED 1.5.2, 4 OSS nodes, 4 8TB OSTs/OSS, 700 clients (some o2ib, some tcp)
    • 3
    • 10141

    Description

      We've recently deployed a new filesystem and enabled quotas. We've gotten over 1200 of these messages in the couple of weeks we've been in production:

      Lustre: 16290:0:(quota_interface.c:460:quota_chk_acq_common()) still haven't managed to acquire quota space from the quota master after 10 retries (err=0, rc=0)

      Some days we get none, or very few, and some days we might get 50-100. The MDS has very little load on it. We're not aware of an operational problem associated with the above messages - no one has complained to us about I/O or quota problems. But we'd like to solve whatever issue is causing these messages.

      One strange thing is that when we get one of the above messages, it is always on the 10th retry, and err is always zero and rc is always zero in that case - it seems funny to me that the 10th call to acquire() is always successful even if it failed 9 times in a row prior to this.
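
      A quick way to gauge how often these warnings appear is to count them in the OSS logs; a minimal sketch, assuming the consoles log to the default /var/log/messages:

          # Count quota_chk_acq_common warnings on an OSS (log path is an assumption)
          grep -c 'quota_chk_acq_common' /var/log/messages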


          Activity


            niu Niu Yawei (Inactive) added a comment -

            Closing per the customer's suggestion.

            niu Niu Yawei (Inactive) added a comment -

            Yes, sync_acq_req is the relevant statistic for this issue, and the 40ms/20ms averages look reasonable to me.

            prescott@hpc.ufl.edu Craig Prescott (Inactive) added a comment -

            I think the sync_acq_req times in the OST stats are probably the relevant timing statistics for this issue - is that correct? I've been watching the average sync_acq_req times, and they have remained constant over the last six weeks or so - roughly 40ms on average, though one OST is averaging 20ms. Are these times reasonable?

            In any case, since we can't seem to trigger these messages, they don't happen (much) anymore, and they don't seem to cause any problem anyway, I think we can just close this report.
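
            For reference, the sync_acq_req averages discussed here come from the per-OST quota stats on the OSS; a rough sketch of pulling them, where the /proc/fs/lustre/lquota path is an assumption that may differ between 1.8 builds:

                # On each OSS: show the sync_acq_req line from every quota stats file (path assumed)
                for f in /proc/fs/lustre/lquota/*/stats; do
                    echo "== $f"
                    grep sync_acq_req "$f"
                done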

            prescott@hpc.ufl.edu Craig Prescott (Inactive) added a comment -

            I used 10000 (10GB) for the size - i.e., 'lctl debug_daemon start /var/tmp/debug.log 10000'. With smaller sizes, I found that the window of time captured in the log just wasn't very long (probably because D_TRACE produces so much output). I'll turn off D_TRACE and wait for the messages to reappear.

            niu Niu Yawei (Inactive) added a comment -

            Hi, Craig

            What file size did you specify in 'lctl debug_daemon'? I think the problem might be that there are too many messages with D_TRACE turned on, so the useful messages had already been overwritten by the time the log was processed.

            Given that there isn't any way to trigger the messages, I suggest you turn on D_QUOTA only this time. Thank you.
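
            A quota-only debug mask can be set through the same /proc interface used elsewhere in this thread; a minimal sketch (run on the OSS, exact behaviour may vary by release):

                # Replace the whole debug mask with D_QUOTA only
                echo quota > /proc/sys/lnet/debug

                # Or adjust incrementally: drop the verbose trace flag, keep quota debugging
                echo -trace > /proc/sys/lnet/debug
                echo +quota > /proc/sys/lnet/debug

                # Confirm the resulting mask
                cat /proc/sys/lnet/debug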

            prescott@hpc.ufl.edu Craig Prescott (Inactive) added a comment -

            Our quota_chk_acq_common() messages seem to have stopped for the time being - none since Saturday.

            Before they stopped, I did some tracing with the Lustre debug_daemon, as you suggested. Since we can't trigger the problem, I had to wait for the quota_chk_acq_common() messages to happen "naturally". I turned on the debug_daemon to start tracing, and when I saw the quota_chk_acq_common() message in syslog, I stopped the debug_daemon and looked at the decoded debug file. A 10GB debug file captured anywhere from 15 seconds to a couple of minutes of output.

            Is there a timing issue between when something is logged in syslog and when the corresponding debug info gets logged by the Lustre debug daemon? Even though I'd sit and watch the syslog, I never saw a trace message that corresponded to the syslog "10 retries" messages. But I did see traces that showed several retries.

            I could not really learn anything new by looking at the debug_daemon output, though. Is there something you can suggest that I should look for?

            niu Niu Yawei (Inactive) added a comment -

            Hi, Craig

            I think you can collect more information in the following way:

            • enable D_TRACE & D_QUOTA on the OSS by 'echo +trace > /proc/sys/lnet/debug' & 'echo +quota > /proc/sys/lnet/debug'
            • clear the current debug log buffer on the OSS by 'lctl clear'
            • start the debug daemon on the OSS by 'lctl debug_daemon start debuglog 100'
            • trigger the messages by some operation, for instance, writing to a file on a client until it exceeds the user or group quota limit
            • stop the debug daemon on the OSS by 'lctl debug_daemon stop'
            • convert the binary debug log to an ASCII file by 'lctl debug_file debuglog debugfile'

            Hopefully we can find the source of these messages in the 'debugfile'.

            Thanks
            Niu
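
            Collapsed into a single command sequence on the OSS, the steps above look roughly like this; the /var/tmp log paths and the 100MB size are placeholders, adjust as needed:

                # Enable trace and quota debugging
                echo +trace > /proc/sys/lnet/debug
                echo +quota > /proc/sys/lnet/debug

                # Clear the in-memory debug buffer and start the debug daemon (100MB binary log)
                lctl clear
                lctl debug_daemon start /var/tmp/debuglog 100

                # ... on a client, write to a file until it exceeds a user/group quota limit ...

                # Stop the daemon and convert the binary log to ASCII for inspection
                lctl debug_daemon stop
                lctl debug_file /var/tmp/debuglog /var/tmp/debugfile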

            prescott@hpc.ufl.edu Craig Prescott (Inactive) added a comment -

            Thanks, Niu. Sorry, I assumed a return value of zero from acquire() meant success.

            We set block hard and soft limits for each user. We don't have any inode-based quotas. 'lfs quota' works properly, and the block hard-limit quota is enforced. When the soft limit is exceeded, we see an asterisk in the 'lfs quota' output.

            The quota_chk_acq_common() messages on the OSS nodes aren't triggered by an 'lfs quota' command - they just happen periodically during the day. There are no corresponding messages on the MDS.

            Is there a way to enable additional logging to see more information about the source of these messages?
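
            For context, block-only limits like those described above are set and checked with lfs; a minimal sketch, where the username, limits, and mount point are placeholders:

                # Block soft/hard limits in KB; inode limits left at 0 (unlimited)
                lfs setquota -u someuser -b 1000000 -B 1100000 -i 0 -I 0 /mnt/lustre

                # Check usage; an asterisk next to the block count marks an exceeded soft limit
                lfs quota -u someuser /mnt/lustre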

            niu Niu Yawei (Inactive) added a comment -

            Hi, Craig

            quota_chk_acq_common() only prints the warning message when (retry_count % 10 == 0); that's why you always see it on the 10th retry. And "rc/err == 0" doesn't indicate a successful quota acquire - I think the acquire() in quota_chk_acq_common() must be returning zero for some reason (a successful acquire() should return 1).

            When do the messages show up? On server start, or during some special operation (quota check, for instance)? Does 'lfs quota' work properly on your system? I suspect the quota on the server isn't initialized correctly.

            Thanks
            Niu
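
            If the server-side quota state is suspected of not being initialized correctly, it can be rebuilt from a client on a 1.8 filesystem; a hedged sketch, with the mount point and username as placeholders (quotacheck scans the whole filesystem, so expect it to take a while on a large system):

                # Rebuild user and group quota files, then (re)enable enforcement
                lfs quotacheck -ug /mnt/lustre
                lfs quotaon -ug /mnt/lustre

                # Spot-check a user afterwards
                lfs quota -u someuser /mnt/lustre
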
            pjones Peter Jones added a comment -

            This does sound strange and I have not heard of any problems of this nature elsewhere. Niu, do you have something to suggest here?


            People

              Assignee: niu Niu Yawei (Inactive)
              Reporter: prescott@hpc.ufl.edu Craig Prescott (Inactive)
              Votes: 0
              Watchers: 2
