[LU-7842] ACLs applied over NFS are not consistent when looping file operations Created: 03/Mar/16  Updated: 14/Jun/18  Resolved: 24/Jul/17

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.7.0
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: Dave Bond (Inactive) Assignee: Lai Siyao
Resolution: Cannot Reproduce Votes: 0
Labels: None

Attachments: File lustre-dump.tar.gz     File lustre-logs.tar.gz     HTML File stresstest    
Issue Links:
Related
is related to LU-7630 permission denied over NFS Resolved
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

We are experiencing an issue on one of our three production Lustre file systems: on a Lustre client we do not see the problem, but when the same file system is exported over NFS we do. We cannot reproduce this on any other system, but it is causing production problems on our oldest Lustre instance.

Running the attached script over NFS, after a few iterations we hit the following issue:

[joe59240@vws250 joe59240]$ /dls/tmp/joe59240/stresstest
....mkdir: cannot create directory `5': Permission denied

Each "." is an iteration of the loop as you will see in the script

The failure persists for up to roughly five seconds before files can be written to the folder again and the script continues.

So far the script has never run to completion on this file system, but on other Lustre file systems we can run it to completion hundreds of times.
The file system has many different NFS exporters, each exporting a different folder in the root of the file system, as is common practice on all other systems at Diamond. We can reproduce this on all exporters attached to this particular file system.

After a number of weeks looking at the issue, we believe it is not the exporter itself, since the problem appears across all servers that export Lustre, but rather the interaction between Lustre and NFS.

We have put a few sleeps into the script to try to identify whether there is a buffering issue where we modify or delete before a flush to disk, but this has not improved the symptoms.

Would it be possible to advise further debugging?



 Comments   
Comment by Peter Jones [ 04/Mar/16 ]

Lai

Could you please advise?

Thanks

Peter

Comment by Lai Siyao [ 08/Mar/16 ]

I couldn't reproduce this locally, but I suspect it is a duplicate of LU-6528; http://review.whamcloud.com/14978 and http://review.whamcloud.com/17815 are the fixes for it. Could you test with the latest master build (on the Lustre client only) to verify whether that fixes it? If so, I can backport these two patches to 2.7.

Comment by Dave Bond (Inactive) [ 08/Mar/16 ]

After testing this, I can no longer reproduce the issue on a client where I once could.
Could this please be pushed into 2.7?

Comment by Frederik Ferner (Inactive) [ 08/Mar/16 ]

I was the one who originally reported LU-6528. We already have (a version of) http://review.whamcloud.com/14978 included, since it was referenced in that bug. However, it seems we completely missed the later updates, and most likely http://review.whamcloud.com/17815 as well, as there wasn't any reference to them on any ticket we were monitoring.

So yes, we would very much appreciate versions of all relevant patches that we can apply to our 2.7-based clients. As we have a maintenance period approaching at the end of this week, I would also very much appreciate it if we could have these patches before Friday.

Thanks,
Frederik

Comment by Peter Jones [ 09/Mar/16 ]

Dave/Frederik

The relevant patches have been ported and are going through testing and review at the moment.

Peter

Comment by Dave Bond (Inactive) [ 16/Mar/16 ]

Hi All,

Could we possibly have an update on the progress of the patch testing? We would like to get the latest 2.7 including this fix to test in our production environment ASAP.

Comment by Jian Yu [ 17/Mar/16 ]

Hi Dave,
The back-ported patch for LU-7630 in http://review.whamcloud.com/18828 is now ready to land.

Comment by Dave Bond (Inactive) [ 02/Jun/16 ]

Hello,

It would appear that the issue got a lot better but never went away. The latest client version we are running is:

lustre: 2.7.2
kernel: patchless_client
build: v2_7_1_DLS_20160330-gf4709ff-CHANGED-2.6.32-573.22.1.el6.x86_64

From an NFS client mounting this area:

[joe59240@vws250 mx-scratch]$ ~/dls-science-user-area/benchmarking/stresstest
......touch: cannot touch `5/somefile': Permission denied
[joe59240@vws250 mx-scratch]$

Would you have expected this to include the fix?

Comment by Dave Bond (Inactive) [ 06/Jun/16 ]

We are approaching the end of our maintenance period. Would it be possible to get an update on this?

Comment by Lai Siyao [ 06/Jun/16 ]

LU-6528 and LU-7630 are the known permission-denied issues; your test failure looks to be a new one. We need more information to triage it. Could you collect the Lustre debug log on the NFS server and the MDS? And could you also collect NFS client and server logs (enabled with `echo 2047 > /proc/sys/sunrpc/nfs_debug` on the NFS client and `echo 2047 > /proc/sys/sunrpc/nfsd_debug` on the NFS server)?
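
For reference, one possible sequence for collecting these logs is sketched below; the Lustre debug mask, file paths and exactly where each command is run are assumptions.

# On the NFS server (a Lustre client) and on the MDS -- Lustre debug log:
lctl set_param debug=-1           # assumption: enable the full debug mask
lctl clear                        # clear the in-memory debug buffer
# ... reproduce the failure ...
lctl dk /tmp/lustre-debug.log     # dump the in-memory buffer to a file

# NFS rpc debugging, as requested above:
echo 2047 > /proc/sys/sunrpc/nfs_debug    # on the NFS client
echo 2047 > /proc/sys/sunrpc/nfsd_debug   # on the NFS server
# the resulting messages appear in the kernel log (dmesg / /var/log/messages)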

Comment by Dave Bond (Inactive) [ 07/Jun/16 ]

I have just attached the logs you requested. Let me know if there are any more details I can give you.

Comment by Lai Siyao [ 08/Jun/16 ]

I do see the -13 (EACCES) error code in the NFS logs, but I'm afraid the Lustre debug log was not dumped in time and the related entries were discarded (Lustre only keeps a limited amount of debug logs in memory).

Could you modify your test script a bit to check the result of each command, and dump the logs immediately upon error?
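
One minimal way to do that might be a wrapper like the sketch below; the helper name, the NFS server hostname and the dump path are assumptions.

# Hypothetical wrapper: run a command and dump the Lustre debug buffer on failure.
# "nfs-server" is an assumed hostname for the node exporting Lustre over NFS.
run_or_dump() {
    "$@"
    local rc=$?
    if [ $rc -ne 0 ]; then
        echo "FAILED ($rc): $*" >&2
        ssh nfs-server "lctl dk /tmp/lustre-dump-\$(date +%s)"   # dump before the buffer wraps
        exit $rc
    fi
}

run_or_dump mkdir 5
run_or_dump touch 5/somefile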

Comment by Dave Bond (Inactive) [ 14/Jun/16 ]

Just uploaded new dump file.

This was collected by running

sudo lctl debug_daemon start /tmp/lustre-dump-14-05-16

and then stopping the daemon after the error had shown up on the NFS client.
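
For completeness, the full sequence presumably also included stopping the daemon and converting the binary dump to text; the exact commands below are an assumption based on standard lctl usage, not taken from the comment above.

sudo lctl debug_daemon start /tmp/lustre-dump-14-05-16
# ... reproduce the error from the NFS client ...
sudo lctl debug_daemon stop
sudo lctl debug_file /tmp/lustre-dump-14-05-16 /tmp/lustre-dump-14-05-16.txt   # convert the binary dump to text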

Comment by Dave Bond (Inactive) [ 21/Jun/16 ]

From the time stamp, it has been six days since I uploaded the latest logs. Any chance of an update, even just to say you are still looking? We have shared this ticket number with the developers for whom this is causing pain, and I would like to give them an update.

Comment by Lai Siyao [ 22/Jun/16 ]

I can't find any clue in the debug logs, so the -13 might be generated by the NFS code (though it may still be caused by Lustre code, for example a wrong attribute fetched by the NFS server from the MDS).

I'll see whether I can make a patch to add some debug messages.

Comment by Gerrit Updater [ 22/Jun/16 ]

Lai Siyao (lai.siyao@intel.com) uploaded a new patch: http://review.whamcloud.com/20920
Subject: LU-7842 nfs: don't drop cap for getattr too
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 8f76d6583ebb377200f14c4d547dca73d6581d1b

Comment by Lai Siyao [ 22/Jun/16 ]

Hi Dave, I just pushed a patch which changes MDS code only. Could you apply it and test again?

Comment by Frederik Ferner (Inactive) [ 23/Jun/16 ]

Lai,

I've looked at the patch; it has received a '-1' from Maloo, but I can't work out whether this is a failure seen elsewhere. I think it might be, but I would like to double-check before considering applying this patch for a test. Since we are unfortunately only seeing this on a production file system, and MDS changes require a full file system outage, we will need to schedule this, and I currently can't promise when it will happen. Hopefully early next week if everything else looks good.

Thanks,
Frederik

Comment by Lai Siyao [ 24/Jun/16 ]

The autotest failure looks to be caused by LU-8305. I'll watch the progress of that ticket.

Comment by Peter Jones [ 24/Jun/16 ]

Lai

While LU-8305 may prevent this change from completing testing on master, it should have no relevance to Diamond, who are running on the 2.7 FE branch, so could you please port the patch there for them to try?

Thanks

Peter

Comment by Lai Siyao [ 27/Jun/16 ]

okay, I'll do it now.

Comment by Lai Siyao [ 28/Jun/16 ]

The patch for b2_7_fe is at: http://review.whamcloud.com/#/c/20992/

Comment by Peter Jones [ 04/Jul/16 ]

Dave/Frederik

Have you applied the supplied diagnostic patch?

Peter

Comment by Frederik Ferner (Inactive) [ 05/Jul/16 ]

Peter,

The patch last week came a bit too late for that maintenance window, so we had to wait until this week.

We applied the patch on the MDS this morning and so far we've not been able to reproduce the issue, though if I remember correctly, that has sometimes been the case immediately after rebooting the NFS server. We did have to reboot the NFS server, as it suffered an LBUG after finishing recovery. We're looking into this, and if we can't find anything in Jira, we'll open another ticket for it.

Thanks,
Frederik

Comment by Peter Jones [ 18/Mar/17 ]

Frederik

Any news?

Peter

Comment by Peter Jones [ 24/Jul/17 ]

OK, so either this is no longer happening or you are no longer concerned about it. Either way, I'll close out the ticket.
