[LU-4765] Failure on test suite sanity test_39l: atime is not updated Created: 13/Mar/14 Updated: 25/Feb/21 Resolved: 25/Feb/21 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.6.0, Lustre 2.9.0, Lustre 2.10.0, Lustre 2.11.0 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Minor |
| Reporter: | Maloo | Assignee: | Bob Glossman (Inactive) |
| Resolution: | Duplicate | Votes: | 0 |
| Labels: | None |
| Environment: | server: lustre-master build # 1937 |
| Issue Links: | |
| Severity: | 3 |
| Rank (Obsolete): | 13108 |
| Description |
|
This issue was created by maloo for sarah <sarah@whamcloud.com>. This issue relates to the following test suite run: http://maloo.whamcloud.com/test_sets/5b796592-aa08-11e3-b4b1-52540035b04c. The sub-test test_39l failed with the following error:
== sanity test 39l: directory atime update =========================================================== 10:15:24 (1394558124)
CMD: shadow-8vm12 lctl get_param -n mdd.*MDT0000*.atime_diff
sanity test_39l: @@@@@@ FAIL: atime is not updated from future: 1426094124, 1394558125<atime<1394558125 |
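For context, the failed assertion boils down to the following shape of check (a minimal sketch, not the actual sanity.sh code; the mount point, directory name, and one-year offset are illustrative):

# set a directory's atime one year into the future, read the directory,
# and expect the kernel to pull the atime back to "now"; this only
# happens when the client is mounted with strictatime semantics
mkdir -p /mnt/lustre/d39l
future=$(( $(date +%s) + 365*86400 ))
touch -a -d "@$future" /mnt/lustre/d39l
before=$(date +%s)
ls /mnt/lustre/d39l > /dev/null
after=$(date +%s)
atime=$(stat -c %X /mnt/lustre/d39l)
if [ "$atime" -lt "$before" ] || [ "$atime" -gt "$after" ]; then
        echo "FAIL: atime is not updated from future: $future, $before<atime<$after"
fi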
| Comments |
| Comment by Bob Glossman (Inactive) [ 13/Mar/14 ] |
|
Seems to be something wrong with the client/server combo being run here. The server CentOS rpms are from a recent build, lustre-master #1937, but the sles11sp3 client rpms appear to be much older. Is this a version interop run of some sort? If not, why aren't the client and server rpms from the same build? |
| Comment by Andreas Dilger [ 17/Mar/14 ] |
|
This may relate to LU-1783 which was also a problem on newer kernels with atime updates from the future. Yang Sheng had made a patch for upstream, but it wasn't accepted for a variety of minor reasons. It should probably be refreshed and tried again. |
| Comment by Bob Glossman (Inactive) [ 28/Apr/14 ] |
|
another: |
| Comment by Bob Glossman (Inactive) [ 28/Apr/14 ] |
|
Seeing a lot of these in autotest test runs with sles11sp3 clients. I have been able to reproduce this bug manually, but only if I explicitly mount a client with the -o relatime option. As far as I can tell this option isn't typically used; mount.lustre sets MS_STRICTATIME by default. Is there something in autotest that is forcing the relatime option in client mounts? I haven't found anything in the logs that would indicate that is happening. |
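For reference, the manual reproduction described above is roughly the following (a sketch; the MGS NID, filesystem name, and mount point are placeholders):

# mount the client with relatime instead of the strictatime default;
# test 39l is expected to fail in this configuration
client# mount -t lustre -o relatime mgs@tcp:/lustre /mnt/lustre
client# ONLY=39l sh lustre/tests/sanity.sh
# remount with the defaults (mount.lustre forces MS_STRICTATIME) and
# the same sub-test is expected to pass
client# umount /mnt/lustre
client# mount -t lustre mgs@tcp:/lustre /mnt/lustre
client# ONLY=39l sh lustre/tests/sanity.sh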
| Comment by Bob Glossman (Inactive) [ 29/Apr/14 ] |
|
Following a suspicion that mount options on SP3 were out of whack, I took a close look at some #include files currently on our sles11sp3 builder nodes. I discovered that the #include files there were seriously out of date. In particular, in mount.h on the builder node MS_RDONLY is #defined but MS_STRICTATIME isn't. I'm pretty sure this causes the mount.lustre built in our sles11sp3 artifacts to not have the special override code making MS_STRICTATIME the default in all mounts. This would explain why test 39l is failing in autotest runs, but not in my manual local runs where mount.lustre is built with more up-to-date tools and #includes. I will file a TEI ticket requesting that the environment on the SP3 builders be updated. I'm fairly certain that will make this problem go away. |
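A quick way to check a builder for the stale headers described here (a sketch; the header paths are the usual locations and may differ on SLES):

# current headers define MS_STRICTATIME (as 1<<24); no output from grep
# means the build environment is too old and mount.lustre will be
# compiled without the strictatime override
builder# grep -n "MS_STRICTATIME" /usr/include/sys/mount.h /usr/include/linux/fs.h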
| Comment by Bob Glossman (Inactive) [ 29/Apr/14 ] |
|
Test-only patch to verify the TEI-2052 changes. |
| Comment by Bob Glossman (Inactive) [ 30/Apr/14 ] |
|
https://maloo.whamcloud.com/test_sets/a9add412-d0ac-11e3-b9d4-52540035b04c shows test 39l now passing. The only remaining failures in sanity.sh on sles11sp3 clients are known bugs. |
| Comment by Mark Mansk [ 02/Jun/14 ] |
|
We're seeing this failure as well for 2.6. Not sure if/when we'll be updating our sles11sp3. Our current klibc is at 1.5.18. Is there any way to work around this besides commenting out the test? |
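One possibility short of editing the script is the test framework's own skip mechanism (a sketch; assumes the EXCEPT variable supported by the standard lustre/tests framework):

# run sanity.sh but skip sub-test 39l instead of commenting it out
client# EXCEPT=39l sh lustre/tests/sanity.sh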
| Comment by Bob Glossman (Inactive) [ 02/Jun/14 ] |
|
Currently not reproducible in our 2.6 builds anymore. Are you rolling your own builds? Again, as far as I know you should not be having problems if you just install and use our prebuilt rpms. |
| Comment by Jinshan Xiong (Inactive) [ 15/Sep/14 ] |
|
I took a look at the atime problem in Lustre while investigating a POSIX compliance test failure at http://review.whamcloud.com/11844. The conclusion is that POSIX atime is not strictly honored when the data being read is already in cache. To fix this we would need to send an RPC to the MDT to update the atime on every read syscall, which would introduce a huge performance loss. The issue can easily be reproduced as follows:

[root@mercury lustre]# cat passwd > /dev/null
[root@mercury lustre]# stat passwd
  File: `passwd'
  Size: 1533        Blocks: 8        IO Block: 4194304  regular file
Device: 2c54f966h/743766374d    Inode: 144115205255725455  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2014-09-15 07:26:39.000000000 -0700
Modify: 2014-09-15 06:34:54.000000000 -0700
Change: 2014-09-15 06:34:54.000000000 -0700
[root@mercury lustre]# cat passwd > /dev/null
[root@mercury lustre]# stat passwd
  File: `passwd'
  Size: 1533        Blocks: 8        IO Block: 4194304  regular file
Device: 2c54f966h/743766374d    Inode: 144115205255725455  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2014-09-15 07:26:48.000000000 -0700
Modify: 2014-09-15 06:34:54.000000000 -0700
Change: 2014-09-15 06:34:54.000000000 -0700
[root@mercury lustre]# lctl set_param ldlm.namespaces.*.lru_size=0
ldlm.namespaces.MGC172.30.24.132@tcp.lru_size=0
ldlm.namespaces.lustre-MDT0000-lwp-MDT0000.lru_size=0
ldlm.namespaces.lustre-MDT0000-lwp-OST0000.lru_size=0
ldlm.namespaces.lustre-MDT0000-lwp-OST0001.lru_size=0
ldlm.namespaces.lustre-MDT0000-mdc-ffff88014ce49400.lru_size=0
ldlm.namespaces.lustre-OST0000-osc-MDT0000.lru_size=0
ldlm.namespaces.lustre-OST0000-osc-ffff88014ce49400.lru_size=0
ldlm.namespaces.lustre-OST0001-osc-MDT0000.lru_size=0
ldlm.namespaces.lustre-OST0001-osc-ffff88014ce49400.lru_size=0
[root@mercury lustre]# stat passwd
  File: `passwd'
  Size: 1533        Blocks: 8        IO Block: 4194304  regular file
Device: 2c54f966h/743766374d    Inode: 144115205255725455  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2014-09-15 07:26:39.000000000 -0700
Modify: 2014-09-15 06:34:54.000000000 -0700
Change: 2014-09-15 06:34:54.000000000 -0700

The atime of the 2nd read ("cat passwd > /dev/null") was not honored. It was only updated in memory and never written to the OST; therefore, the newly refreshed attributes kept the old atime of the first read. |
| Comment by Andreas Dilger [ 15/Sep/14 ] |
|
What your test was missing is that the MDS does not write the atime update to disk unless the atime is at least 60s old. The atime is updated on the MDT at close time when the atime exceeds the mdd.*.atime_diff tunable, so that this doesn't cause numerous updates at close time when the same file is opened by thousands of clients.

mds# lctl get_param mdd.*.atime_diff
mdd.myth-MDT0000.atime_diff=60

In Jinshan's example, the atime happened to be only 9s old (07:26:39 vs. 07:26:48) so it was not updated on the MDT on disk. Testing this on an old file I have on my filesystem (which you can see has already had the atime updated due to previous accesses):

$ stat /myth/tmp/f9
  File: `/myth/tmp/f9'
  :
Access: 2014-06-23 12:59:48.000000000 -0600
Modify: 2013-05-28 22:04:47.000000000 -0600
Change: 2013-11-27 15:26:07.000000000 -0700
$ cat /myth/tmp/f9 > /dev/null
$ stat /myth/tmp/f9
  :
Access: 2014-09-15 12:38:12.000000000 -0600
$ sudo lctl set_param ldlm.namespaces.*.lru_size=clear
ldlm.namespaces.MGC192.168.20.1@tcp.lru_size=clear
ldlm.namespaces.myth-MDT0000-mdc-ffff8800377fc000.lru_size=clear
ldlm.namespaces.myth-OST0000-osc-ffff8800377fc000.lru_size=clear
ldlm.namespaces.myth-OST0001-osc-ffff8800377fc000.lru_size=clear
ldlm.namespaces.myth-OST0002-osc-ffff8800377fc000.lru_size=clear
ldlm.namespaces.myth-OST0003-osc-ffff8800377fc000.lru_size=clear
ldlm.namespaces.myth-OST0004-osc-ffff8800377fc000.lru_size=clear
$ stat /myth/tmp/f9
  :
Access: 2014-09-15 12:38:12.000000000 -0600
Modify: 2013-05-28 22:04:47.000000000 -0600
Change: 2013-11-27 15:26:07.000000000 -0700

I also verified with debugfs that this inode's atime was updated on disk on the MDT.

The original problem reported in this bug is quite different. In newer kernels, setting the atime into the past (e.g. after accessing a file with "tar" for backup and trying to reset the atime to the original value) does not affect filesystems mounted using the "relatime" option. This bug has not been fixed in upstream kernels yet, and may never be.

PS: setting "lru_size=0" doesn't necessarily clear the DLM locks from the client, but rather disables the static DLM LRU maximum and enables dynamic LRU sizing. Use "lctl set_param ldlm.namespaces.*.lru_size=clear" to clear the DLM lock cache on the client. |
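As a practical aside, the persistence window described above is tunable, which makes the behaviour easy to observe in experiments (a sketch; the value shown is illustrative, the parameter is per-MDT, and a plain set_param does not persist across remounts):

# check the current threshold (in seconds) after which a newer atime is
# written back to the MDT at close time, then lower it for testing
mds# lctl get_param mdd.*.atime_diff
mds# lctl set_param mdd.*.atime_diff=5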
| Comment by Bob Glossman (Inactive) [ 26/Dec/14 ] |
|
another one seen on sles11sp3: |
| Comment by Sarah Liu [ 13/Jun/16 ] |
|
Failure seen in interop testing between a 2.9 server and a 2.8 client: https://testing.hpdd.intel.com/test_sets/a9ca904c-2d62-11e6-80b9-5254006e85c2 |
| Comment by Saurabh Tandan (Inactive) [ 14/Oct/16 ] |
|
This issue has occurred around 44 times in the past 30 days. |
| Comment by Andreas Dilger [ 14/Oct/16 ] |
|
In general, I think 30-day results are less useful than, say, 7-day results, because a problem may have happened many times three weeks ago but was fixed and hasn't happened at all in the past week. If you are reporting on a problem that is still happening regularly (i.e. it also happened 8 times in the past week) then please include that information in the summary. For example, the sanityn test_77[a-e] failures appear even in the 7-day results, but not at all in the 3-day results since the problem was resolved earlier in the week. |
| Comment by Bob Glossman (Inactive) [ 23/Dec/16 ] |
|
another on master: |
| Comment by Bob Glossman (Inactive) [ 23/Dec/16 ] |
|
Wondering if the recent rash of these failures has something to do with the mod landed this year; I note it includes changes to test 39l. |
| Comment by Bob Glossman (Inactive) [ 14/Mar/17 ] |
|
another on master: |
| Comment by Bob Glossman (Inactive) [ 31/Mar/17 ] |
|
another on master: |
| Comment by Andreas Dilger [ 25/Feb/21 ] |
|
Closing as a duplicate of