[LU-12265] LustreError: 141027:0:(osd_iam_lfix.c:188:iam_lfix_init()) Bad magic in node 1861726 #34: 0xcc != 0x1976 or bad cnt: 0 170: rc = -5 Created: 06/May/19 Updated: 28/Jan/22 |
|
| Status: | Reopened |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.12.0 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Minor |
| Reporter: | Matt Rásó-Barnett (Inactive) | Assignee: | Hongchao Zhang |
| Resolution: | Unresolved | Votes: | 0 |
| Labels: | None |
| Issue Links: |
|
| Severity: | 3 |
| Rank (Obsolete): | 9223372036854775807 |
| Description |
|
Hello,

The following errors appeared on one of our servers, a combined OSS & MDS (one of 24 such servers):

May 06 09:29:43 dac-e-3 kernel: LustreError: 141015:0:(osd_iam_lfix.c:188:iam_lfix_init()) Skipped 11 previous similar messages
May 06 09:29:43 dac-e-3 kernel: LustreError: 141015:0:(osd_iam_lfix.c:188:iam_lfix_init()) Bad magic in node 1861726 #34: 0xcc != 0x1976 or bad cnt: 0 170: rc = -5
May 06 08:49:09 dac-e-3 kernel: LustreError: 140855:0:(osd_iam_lfix.c:188:iam_lfix_init()) Skipped 9 previous similar messages
May 06 08:49:09 dac-e-3 kernel: LustreError: 140855:0:(osd_iam_lfix.c:188:iam_lfix_init()) Bad magic in node 1861726 #34: 0xcc != 0x1976 or bad cnt: 0 170: rc = -5
May 06 08:47:25 dac-e-3 kernel: LustreError: 141027:0:(osd_iam_lfix.c:188:iam_lfix_init()) Bad magic in node 1861726 #34: 0xcc != 0x1976 or bad cnt: 0 170: rc = -5

I see no other Lustre errors on any other servers or on any of the clients, but the client application sees an error.

These errors are only rarely seen, so I'm not sure I can easily reproduce them. I have been running this benchmark suite very intensively over the past few days, and we re-format the hardware and rebuild the filesystems fairly frequently, as this is a pool of hardware that we use in a filesystem-on-demand style.

At the time of the errors I was running an mdtest benchmark from the 'md easy' portion of the suite, with 128 clients, 32 ranks, so a very large number of files were being created at the time:

mdtest-1.9.3 was launched with 2048 total task(s) on 128 node(s)
Command line used: /home/mjr208/projects/benchmarking/io-500-src-stonewall-fix/bin/mdtest "-C" "-n" "140000" "-u" "-L" "-F" "-d" "/dac/fs1/mjr208/job11312297-2019-05-05-2356/mdt_easy"
Path: /dac/fs1/mjr208/job11312297-2019-05-05-2356
FS: 412.6 TiB   Used FS: 24.2%   Inodes: 960.0 Mi   Used Inodes: 0.0%
2048 tasks, 286720000 files

ior ERROR: open64() failed, errno 5, Input/output error (aiori-POSIX.c:376)
Abort(-1) on node 480 (rank 480 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, -1) - process 480
ior ERROR: open64() failed, errno 5, Input/output error (aiori-POSIX.c:376)
ior ERROR: open64() failed, errno 5, Input/output error (aiori-POSIX.c:376)
Abort(-1) on node 486 (rank 486 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, -1) - process 486
ior ERROR: open64() failed, errno 5, Input/output error (aiori-POSIX.c:376)
Abort(-1) on node 488 (rank 488 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, -1) - process 488
ior ERROR: open64() failed, errno 5, Input/output error (aiori-POSIX.c:376)
Abort(-1) on node 491 (rank 491 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, -1) - process 491
ior ERROR: open64() failed, errno 5, Input/output error (aiori-POSIX.c:376)
Abort(-1) on node 492 (rank 492 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, -1) - process 492
ior ERROR: open64() failed, errno 5, Input/output error (aiori-POSIX.c:376)
Abort(-1) on node 493 (rank 493 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, -1) - process 493
Abort(-1) on node 482 (rank 482 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, -1) - process 482

The filesystem itself is configured using DNE, and specifically we are using DNE2 striped directories for all mdtest runs.
We are using a large number of MDTs, 24 at the moment, one per server (which, other than this problem, is working excellently), and the directory stripe count is -1, so we are striping all the directories over all 24 MDTs. Each server contains 12 NVMe drives, and we partition one of the drives so it has both an OST and an MDT partition.

Lustre and kernel versions are as follows:

Server: kernel-3.10.0-957.el7_lustre.x86_64
Server: lustre-2.12.0-1.el7.x86_64
Clients: kernel-3.10.0-957.10.1.el7.x86_64
Clients: lustre-client-2.10.7-1.el7.x86_64

Could I get some advice on what this error indicates here? |
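For reference, the striped-directory setup and benchmark launch look roughly like the following sketch; the directory path, host file and mpirun launcher here are illustrative, not the exact job scripts used:

# Set a default directory layout so every subdirectory created under the
# benchmark directory is striped across all 24 MDTs (DNE2 striped dirs)
mkdir -p /dac/fs1/mjr208/mdt_easy
lfs setdirstripe -c -1 -D /dac/fs1/mjr208/mdt_easy

# Confirm the default layout that new subdirectories will inherit
lfs getdirstripe -D /dac/fs1/mjr208/mdt_easy

# Launch the mdtest create phase over the client nodes (launcher and host
# list are illustrative; the mdtest flags match the command line above)
mpirun -np 2048 --hostfile clients.txt \
  mdtest -C -n 140000 -u -L -F -d /dac/fs1/mjr208/mdt_easy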
| Comments |
| Comment by Peter Jones [ 06/May/19 ] |
|
Hongchao, could you please investigate? Thanks, Peter |
| Comment by Hongchao Zhang [ 09/May/19 ] |
|
This error shows that the index files (OI, scrub, etc.) are corrupted. It could be caused by a hardware issue or a disk driver issue. Thanks! |
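If it helps to narrow this down, the OI scrub state on the affected MDT can be inspected, and a full scrub triggered, from the MDS. A minimal sketch, assuming an ldiskfs OSD and an illustrative target name fs1-MDT0002:

# Show OI scrub status for all ldiskfs targets on this server
lctl get_param osd-ldiskfs.*.oi_scrub

# Start a full OI scrub on one MDT (target name is an example)
lctl lfsck_start -M fs1-MDT0002 -t scrub

# Check progress/result afterwards
lctl get_param osd-ldiskfs.fs1-MDT0002.oi_scrub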
| Comment by Matt Rásó-Barnett (Inactive) [ 10/May/19 ] |
|
Hi Hongchao,

I have another ticket open, https://jira.whamcloud.com/browse/LU-12268, where I do see 'LDISKFS-fs error' messages; however, these do not usually occur at the same time as the errors above. Both are seen when I'm running the same benchmark, so perhaps they are related.

My normal mode of operation is to re-create the entire filesystem after I get these errors, but I have seen them relatively frequently when running these large-scale mdtest benchmarks.

I also only get these errors when I'm striping the mdtest directories over all 24 MDTs in the filesystem, e.g.:

lfs setdirstripe -c -1 -D mdt_easy

I don't see any problems when I'm either not striping every directory (just using DNE remote directories), e.g.:

lfs setdirstripe -c -1 mdt_easy

(so all the subdirectories are on different MDTs, but are not themselves striped), or when the directory stripe_count is <= 12, e.g.:

lfs setdirstripe -c 12 -D mdt_easy

I haven't probed stripe_count in the (12, 24] range yet. So perhaps this is DNE related? I can't rule out a hardware or driver issue; however, I've seen this error move around and occur on different MDTs in the filesystem, and as mentioned above it appears to go away if I don't use a large stripe_count. |
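To make the three configurations above easier to compare, the effective layout can be checked with lfs getdirstripe. A small sketch with illustrative directory names; note that without -D, lfs setdirstripe creates the striped directory itself, while with -D it sets the default layout that new subdirectories inherit:

# Case 1: every subdirectory striped over all 24 MDTs (the error-triggering setup)
mkdir /dac/fs1/mdt_easy_striped
lfs setdirstripe -c -1 -D /dac/fs1/mdt_easy_striped
lfs getdirstripe -D /dac/fs1/mdt_easy_striped

# Case 2: only the parent is striped; subdirectories land on single MDTs
lfs setdirstripe -c -1 /dac/fs1/mdt_easy_remote
lfs getdirstripe /dac/fs1/mdt_easy_remote

# Case 3: default stripe count limited to 12 MDTs (no problems seen so far)
mkdir /dac/fs1/mdt_easy_12
lfs setdirstripe -c 12 -D /dac/fs1/mdt_easy_12
lfs getdirstripe -D /dac/fs1/mdt_easy_12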
| Comment by Hongchao Zhang [ 17/May/19 ] |
|
I have tried to run mdtest with 24 MDTs in one VM, and I do not encounter this problem.

[root@zhanghc tests]# ../utils/lfs getstripe -D /mnt/lustre/mdt_easy
(Default)/mnt/lustre/mdt_easy
lmv_stripe_count: 4294967295 lmv_stripe_offset: -1 lmv_hash_type: fnv_1a_64

[root@zhanghc mdtest]# ./mdtest -C -n 70000 -u -L -F -d /mnt/lustre/mdt_easy
-- started at 05/15/2019 10:29:42 --

mdtest-1.9.3 was launched with 1 total task(s) on 1 node(s)
Command line used: ./mdtest -C -n 70000 -u -L -F -d /mnt/lustre/mdt_easy
Path: /mnt/lustre
FS: 0.6 GiB   Used FS: 5.1%   Inodes: 0.2 Mi   Used Inodes: 3.4%

1 tasks, 70000 files

SUMMARY: (of 1 iterations)
   Operation          Max        Min       Mean    Std Dev
   ---------          ---        ---       ----    -------
   File creation :  2066.418   2066.418   2066.418   0.000
   File stat     :     0.000      0.000      0.000   0.000
   File read     :     0.000      0.000      0.000   0.000
   File removal  :     0.000      0.000      0.000   0.000
   Tree creation :    11.047     11.047     11.047   0.000
   Tree removal  :     0.000      0.000      0.000   0.000

-- finished at 05/15/2019 10:30:15 -- |
| Comment by Matt Rásó-Barnett (Inactive) [ 17/May/19 ] |
|
Hi Hongchao,

I've unfortunately started seeing this error again over the past couple of days, and as mentioned in my last comment, this is where I am using a striped directory only on the parent directory, to ensure that all the subdirectories are placed on different MDTs.

The application sees an I/O error failing to make the directory:

error could not create directory "/dac/fs1/mjr208/job11535231-2019-05-17-1421/mdt_easy/#test-dir.0-0/mdtest_tree.408.0/"
ior ERROR: open64() failed, errno 2, No such file or directory (aiori-POSIX.c:407)
Abort(-1) on node 408 (rank 408 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, -1) - process 408

and I get the error on the MDS as mentioned before. I'm not sure what else would be useful to look for here. I don't see any other syslog messages indicating hardware/driver issues on the device, and I've seen this error on different MDSes/MDTs.

Thanks, |
| Comment by Hongchao Zhang [ 22/May/19 ] |
|
Hi, I have tried to reproduce this issue in local VMs, but have not been able to reproduce it. |
| Comment by Matt Rásó-Barnett (Inactive) [ 24/May/19 ] |
|
Hi Hongchao, so perhaps we should close this and my other ticket for now, and if I run into the issue again I can reopen them and get you onto the platform to investigate? Thanks, |
| Comment by Andreas Dilger [ 07/Jun/19 ] |
|
This looks like it is still being hit. |
| Comment by Hongchao Zhang [ 11/Jun/19 ] |
|
Hi Andreas, |
| Comment by Andreas Dilger [ 11/Jun/19 ] |
|
Matt has been hitting it regularly in his large-scale IO-500 runs in CAM-79. |
| Comment by Alexander Zarochentsev [ 23/Sep/19 ] |
|
r/w semaphores are broken in RH kernels up to RHEL 7.7; see https://access.redhat.com/solutions/3393611
|
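A quick way to check whether a given server kernel already carries rwsem/wakeup fixes is to look at its RPM changelog. A rough sketch, assuming the kernel RPM is installed under the standard "kernel" package name; the grep pattern is only a heuristic, and the exact fixed version should be confirmed against the Red Hat solution above:

# Running kernel (RHEL 7.7 corresponds to kernel-3.10.0-1062.el7)
uname -r

# Search the installed kernel package's changelog for rwsem/wakeup fixes
rpm -q --changelog "kernel-$(uname -r)" | grep -iE 'rwsem|missed wakeup' | head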
| Comment by Artem Blagodarenko (Inactive) [ 07/Dec/20 ] |
|
Faced this problem on one of our clusters. While researching the IAM code, I found there is dead code. Created . I wonder if this semaphore was used somewhere in the past and could be useful now. |
| Comment by Andreas Dilger [ 10/Dec/20 ] |
|
Artem, I think this problem was fixed in a later RHEL7 kernel. It was seen by a number of sites running this same kernel, and upgrading to later RHEL7 kernels fixed the problem. |
| Comment by Artem Blagodarenko (Inactive) [ 10/Dec/20 ] |
|
adilger, do you know the exact root cause of the problem? I am asking so I know which patches we need in order to prevent this bug from happening again. Thanks. |
| Comment by Andreas Dilger [ 10/Dec/20 ] |
|
See the earlier comment in this ticket:
|
| Comment by Artem Blagodarenko (Inactive) [ 11/Dec/20 ] |
|
We faced the problem while having these patches applied:
2291-kernel-locking-rwsem-Fix-possible-missed-wakeup.patch
2290-kernel-futex-Fix-possible-missed-wakeup.patch
2289-kernel-futex-Use-smp_store_release-in-mark_wake_fute.patch
2288-kernel-sched-wake_q-Fix-wakeup-ordering-for-wake_q.patch
There are no other rwsem-related patches to apply and the problem still exists. |
| Comment by Andreas Dilger [ 17/Sep/21 ] |
|
Aside from determining and fixing the root cause of this IAM corruption, it makes sense for the IAM/OSD code to handle this in a more robust manner. If the IAM block is corrupted, the current remedy is only to delete and rebuild all the OI files. It would be useful (and not more disruptive) to just reset the corrupt IAM block and then trigger a full OI Scrub to verify/reinsert any missing FIDs. This makes the OI file at least somewhat self-healing.

As part of this process, it might make sense to try and scan/repair the IAM file itself. However, since we need a full OI Scrub to find any FIDs affected by the corruption, it probably makes more sense to build a new "shadow OI" file (for the corrupted OI file only, because they can grow to tens of GB in size for a large MDT). That is what LU-15016 is about.

Since there are other benefits to rebuilding the OI file (compacting/freeing old entries, improving insertion speed), I don't think it is worthwhile to spend too much time on repairing the existing OI file, just enough to keep the system usable. |
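For context, the current remedy mentioned above (deleting and rebuilding all the OI files) roughly follows the file-level restore procedure. A hedged sketch with illustrative device and mount-point names, to be checked against the Lustre manual before use on a real system:

# Stop the MDT and mount the backing device as ldiskfs to reach the OI files
umount /lustre/fs1/mdt2
mount -t ldiskfs /dev/nvme0n1p2 /mnt/mdt-ldiskfs

# Remove the object index files; OI scrub will rebuild them
rm -f /mnt/mdt-ldiskfs/oi.16.*
umount /mnt/mdt-ldiskfs

# Remount as Lustre; OI scrub repopulates the OI files in the background
mount -t lustre /dev/nvme0n1p2 /lustre/fs1/mdt2
lctl get_param osd-ldiskfs.*.oi_scrub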
| Comment by Gerrit Updater [ 28/Sep/21 ] |
|
"Hongchao Zhang <hongchao@whamcloud.com>" uploaded a new patch: https://review.whamcloud.com/45071 |
| Comment by Alexander Boyko [ 28/Sep/21 ] |
|
FYI I pushed patch https://review.whamcloud.com/45072 " |