[LU-7842] ACLs applied over NFS are not consistent when looping file operations Created: 03/Mar/16 Updated: 14/Jun/18 Resolved: 24/Jul/17 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.7.0 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Minor |
| Reporter: | Dave Bond (Inactive) | Assignee: | Lai Siyao |
| Resolution: | Cannot Reproduce | Votes: | 0 |
| Labels: | None | ||
| Attachments: | |
| Issue Links: | |
| Severity: | 3 |
| Rank (Obsolete): | 9223372036854775807 |
| Description |
|
We are experiencing an issue that does not appear on a Lustre client directly, but does appear when the file system is exported over NFS, on one of our three production Lustre file systems. We cannot reproduce it on any other system, but it is causing production problems on our oldest Lustre instance.

Running the attached script over NFS, after a few iterations we hit the following:

[joe59240@vws250 joe59240]$ /dls/tmp/joe59240/stresstest

Each "." is an iteration of the loop, as you will see in the script. The failure persists for perhaps as long as five seconds before files can be written to the folder again and the script continues. So far we have never had the script run to completion on this file system, whereas on our other Lustre file systems we can run it hundreds of times to completion.

After several weeks of looking at the issue, we think it is not the NFS exporter itself, since the behaviour is the same across all servers that export Lustre; it appears to come down to the interaction between the file system and NFS. We have added a few sleeps to the script to check whether there is a buffering issue where we modify or delete before a flush to disk, but this has not improved the symptoms.

Would it be possible to advise on further debugging? |
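The attached stresstest script is not reproduced in this ticket. Purely as an illustration of the access pattern described above, a loop of roughly this shape (hypothetical paths, user name and ACL entry, not the actual attachment) exercises create / setfacl / read / unlink in the NFS-mounted directory and prints one "." per iteration:

    #!/bin/bash
    # Illustrative sketch only -- NOT the attached stresstest script.
    # Loops create / setfacl / read / unlink in an NFS-mounted Lustre
    # directory, printing one "." per iteration and stopping on first failure.
    DIR=/dls/tmp/joe59240/acl-test        # hypothetical directory on the NFS mount
    mkdir -p "$DIR" || exit 1
    for i in $(seq 1 500); do
        f="$DIR/file.$i"
        touch "$f"                    || { echo "touch failed at iteration $i"; exit 1; }
        setfacl -m u:joe59240:rw "$f" || { echo "setfacl failed at iteration $i"; exit 1; }
        cat "$f" > /dev/null          || { echo "read failed at iteration $i"; exit 1; }
        rm -f "$f"                    || { echo "unlink failed at iteration $i"; exit 1; }
        printf "."
    done
    echo " done"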
| Comments |
| Comment by Peter Jones [ 04/Mar/16 ] |
|
Lai, could you please advise? Thanks, Peter |
| Comment by Lai Siyao [ 08/Mar/16 ] |
|
I couldn't reproduce locally, but I suspect this is a duplicate of |
| Comment by Dave Bond (Inactive) [ 08/Mar/16 ] |
|
After testing this, I can no longer reproduce the issue on a client where I once could. |
| Comment by Frederik Ferner (Inactive) [ 08/Mar/16 ] |
|
I was the one who originally reported the linked issue. So yes, we would very much appreciate versions of all relevant patches that we can apply to our 2.7-based clients. As we have a maintenance period approaching at the end of this week, I would also very much appreciate it if we could have these patches before Friday. Thanks, |
| Comment by Peter Jones [ 09/Mar/16 ] |
|
Dave/Frederik, the relevant patches have been ported and are going through testing and reviews at the moment. Peter |
| Comment by Dave Bond (Inactive) [ 16/Mar/16 ] |
|
Hi all, could we possibly have an update on the progress of the patch testing? We would like to get the latest 2.7 release including this fix to test in our production environment ASAP. |
| Comment by Jian Yu [ 17/Mar/16 ] |
|
Hi Dave, |
| Comment by Dave Bond (Inactive) [ 02/Jun/16 ] |
|
Hello, it would appear that the issue got a lot better but never went away. The latest client version is running Lustre 2.7.2. From an NFS client mounting this area:

[joe59240@vws250 mx-scratch]$ ~/dls-science-user-area/benchmarking/stresstest

Would you have expected this version to include the fix? |
| Comment by Dave Bond (Inactive) [ 06/Jun/16 ] |
|
We are approaching the end of our maintenance period. Would it be possible to get an update on this? |
| Comment by Lai Siyao [ 06/Jun/16 ] |
|
|
| Comment by Dave Bond (Inactive) [ 07/Jun/16 ] |
|
I have just attached the logs you requested. Let me know if there are any more details I can give you. |
| Comment by Lai Siyao [ 08/Jun/16 ] |
|
I do see the -13 error code in the NFS logs, but I'm afraid the Lustre debug log was not dumped in time, and the related entries were discarded (Lustre only keeps a limited amount of debug logs in memory). Could you modify your test script a bit to check the error of each command, and dump logs immediately upon error? |
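A minimal sketch of what is being asked for, assuming the script runs on a node where lctl is available (the NFS server, which is also the Lustre client): wrap each command in a helper that checks its exit status and dumps the in-memory Lustre debug log the moment anything fails. The helper name and the dump path below are made up.

    # Hypothetical helper: run a command, and on failure dump the Lustre
    # debug log before the in-memory ring buffer wraps and discards it.
    # Optionally enlarge the buffer and enable full debugging beforehand:
    #   lctl set_param debug=-1 debug_mb=1024
    dump_on_error() {
        "$@"
        local rc=$?
        if [ $rc -ne 0 ]; then
            echo "command failed (rc=$rc): $*" >&2
            lctl dk "/tmp/lustre-debug.$(date +%s)"   # dump the kernel debug log now
            exit $rc
        fi
    }

    # Example usage inside the stress loop:
    dump_on_error touch "$DIR/file.$i"
    dump_on_error setfacl -m u:joe59240:rw "$DIR/file.$i"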
| Comment by Dave Bond (Inactive) [ 14/Jun/16 ] |
|
Just uploaded a new dump file. This was collected by: sudo lctl debug_daemon start /tmp/lustre-dump-14-05-16 |
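For reference, a typical debug_daemon collection cycle looks roughly like the following (the file name is just the one used above; the size argument and the text-conversion step are optional):

    sudo lctl set_param debug=-1                                  # enable all debug flags
    sudo lctl debug_daemon start /tmp/lustre-dump-14-05-16 1024   # cap the dump at ~1024 MB
    # ... reproduce the failure ...
    sudo lctl debug_daemon stop
    sudo lctl debug_file /tmp/lustre-dump-14-05-16 /tmp/lustre-dump-14-05-16.txt   # binary -> text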
| Comment by Dave Bond (Inactive) [ 21/Jun/16 ] |
|
Going by the timestamps, it has been 6 days since the last update, when I uploaded the latest logs. Any chance of an update, even just to say you are still looking? We have shared this ticket number with the developers for whom this is causing pain, and I would like to provide them with an update. |
| Comment by Lai Siyao [ 22/Jun/16 ] |
|
I can't find any clue in the debug logs, so the -13 might be generated by the NFS code (though it may still be caused by Lustre code, for example a wrong attribute fetched from the MDS by the NFS server). I'll see whether I can make a patch to add some debug messages. |
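For what it's worth, -13 is -EACCES ("Permission denied"). Once a dump has been converted to text with lctl debug_file, a simple search (example file name only) can help check whether the error ever appears on the Lustre side at all:

    # Look for EACCES returns in the converted Lustre debug log; if nothing
    # turns up, the -13 is more likely being generated in the NFS layer.
    grep -n -- '-13' /tmp/lustre-dump-14-05-16.txt | less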
| Comment by Gerrit Updater [ 22/Jun/16 ] |
|
Lai Siyao (lai.siyao@intel.com) uploaded a new patch: http://review.whamcloud.com/20920 |
| Comment by Lai Siyao [ 22/Jun/16 ] |
|
Hi Dave, I just pushed a patch that changes MDS code only; could you apply it and test again? |
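A sketch of one way to pull the change onto a build tree, assuming a git checkout of lustre-release and that the MDS is rebuilt from it; the Gerrit project URL and the patchset suffix "/1" are assumptions, and the exact download command should be taken from the change page at http://review.whamcloud.com/20920:

    git clone git://git.whamcloud.com/fs/lustre-release.git
    cd lustre-release
    # change 20920, patchset 1 (placeholder -- use the current patchset shown on Gerrit)
    git fetch http://review.whamcloud.com/fs/lustre-release refs/changes/20/20920/1
    git cherry-pick FETCH_HEAD
    # then rebuild the server packages and install them on the MDS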
| Comment by Frederik Ferner (Inactive) [ 23/Jun/16 ] |
|
Lai, I've looked at the patch; it has received a '-1' from Maloo, but I can't work out if this is a failure that is seen elsewhere. I think it might be, but I would like to double-check before considering applying this patch for a test. Since we are unfortunately only seeing this on a production file system, and MDS changes require a full file system outage, we will need to schedule this, and I currently can't promise when it will happen. Hopefully early next week if everything else looks good. Thanks, |
| Comment by Lai Siyao [ 24/Jun/16 ] |
|
The autotest failure looks to be caused by |
| Comment by Peter Jones [ 24/Jun/16 ] |
|
Lai, while that is being looked into, could you also port the patch to b2_7_fe? Thanks, Peter |
| Comment by Lai Siyao [ 27/Jun/16 ] |
|
okay, I'll do it now. |
| Comment by Lai Siyao [ 28/Jun/16 ] |
|
The patch for b2_7_fe is at: http://review.whamcloud.com/#/c/20992/ |
| Comment by Peter Jones [ 04/Jul/16 ] |
|
Dave/Frederik, have you applied the supplied diagnostic patch? Peter |
| Comment by Frederik Ferner (Inactive) [ 05/Jul/16 ] |
|
Peter, the patch arrived a bit late last week for that maintenance window, so we had to wait until this week. We applied the patch on the MDS this morning, and so far we have not been able to reproduce the issue, though if I remember rightly, that has sometimes been the case immediately after rebooting the NFS server. We did have to reboot the NFS server, as it suffered an LBUG after finishing recovery. We're looking into that, and if we can't find anything in Jira, we'll open another ticket for it. Thanks, |
| Comment by Peter Jones [ 18/Mar/17 ] |
|
Frederik, any news? Peter |
| Comment by Peter Jones [ 24/Jul/17 ] |
|
OK, so either this is no longer happening or you are no longer concerned about it. Either way, I'll close out the ticket. |