Zuoru Yang
added a comment - @Andreas Dilger Sure, we have evaluated the same test case on AlmaLinux 8.8 + 2.15.3 with the new kernel (4.18.0-477.10.1.el8_lustre.x86_64), and the issue no longer occurs. Thanks again!
Zuoru Yang
added a comment - @Andreas Dilger Hi Andreas, thanks for your insights. We double-checked the Linux kernel in our environment (we installed the kernel package from the Whamcloud 2.15.0 repo, and later upgraded the Lustre server to 2.15.3: https://downloads.whamcloud.com/public/lustre/lustre-2.15.0-ib/MOFED-5.6-1.0.3.3/el8.5.2111/server/RPMS/x86_64/), and we confirmed that the kernel in that repo does not include the patch.
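For reference, a minimal sketch of how such a check can be done, assuming the matching kernel source RPM is available locally; the RPM filename and paths below are illustrative:
# extract fs/ext4/namei.c from the server kernel src.rpm and look for the
# "restart || err" condition added by commit 877ba3f729fd
mkdir /tmp/ksrc && cd /tmp/ksrc
rpm2cpio /path/to/kernel-4.18.0-348.2.1.el8_lustre.src.rpm | cpio -idm
tar -xf linux-*.tar.xz --wildcards '*/fs/ext4/namei.c'
grep -n 'restart || err' */fs/ext4/namei.c
# no match suggests the fix is absent from this kernel source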
Andreas Dilger
added a comment (edited) - yzr95924, thank you for your launchpad reference. Indeed, that bug looks like it could be related. That patch is reported as included in upstream kernel 5.14 and the 5.11 stable series, and it fixes a bug introduced in kernel 5.11 (a bug that was also backported to the RHEL kernel):
commit 877ba3f729fd3d8ef0e29bc2a55e57cfa54b2e43
Author: Theodore Ts'o <tytso@mit.edu>
AuthorDate: Wed Aug 4 14:23:55 2021 -0400
ext4: fix potential htree corruption when growing large_dir directories
Commit b5776e7524af ("ext4: fix potential htree index checksum
corruption) removed a required restart when multiple levels of index
nodes need to be split. Fix this to avoid directory htree corruptions
when using the large_dir feature.
Cc: stable@kernel.org # v5.11
Cc: Artem Blagodarenko <artem.blagodarenko@gmail.com>
Fixes: b5776e7524af ("ext4: fix potential htree index checksum corruption)
Reported-by: Denis <denis@voxelsoft.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
I can confirm that the patch is applied in 4.18.0-425.13.1.el8_7.x86_64 in fs/ext4/namei.c:
		if (err)
			goto journal_error;
		err = ext4_handle_dirty_dx_node(handle, dir,
						frame->bh);
		if (restart || err)
			goto journal_error;
but I'm not sure whether it is applied in your kernel 4.18.0-348.2.1.el8_lustre.x86_64.
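One quick way to check a binary kernel for the backport, as a sketch (assuming the distribution changelog records it; the grep pattern is a guess at the entry text and may need adjusting):
# search the installed kernel's RPM changelog for the fix
rpm -q --changelog kernel-4.18.0-348.2.1.el8_lustre | grep -i 'htree corruption'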
Zuoru Yang
added a comment - @Andreas Dilger BTW, the reason I initially considered this issue to be related to large_dir is this link: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1933074
which also reports "directory leaf block found instead of index block" when there are millions of files on ext4. In any case, we will test this issue with a newer kernel (e.g., AlmaLinux 8.8 + 2.15.3).
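To confirm whether large_dir is actually enabled on the MDT, a check along these lines should work (run on the MDS; the device name is taken from the dmesg output elsewhere in this ticket):
# list the enabled ldiskfs/ext4 features on the MDT device
dumpe2fs -h /dev/ultrapathb 2>/dev/null | grep -i 'features'
# "large_dir" in the feature list means 3-level htree directories are allowed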
Andreas Dilger
added a comment - Also, have you tried updating to a newer kernel? It is possible that the ext4 code in the kernel (and the ldiskfs generated from it) has a bug that has since been fixed.
Andreas Dilger
added a comment - Lustre does not modify the on-disk data structures of ldiskfs directly, although it is accessing the filesystem somewhat differently than a regular ext4 mount does. I don't think the issue is with large_dir, but more likely with parallel directory locking and updates. There would need to be some kind of bug in ext4 or the ldiskfs patches applied. It is not possible for the clients to corrupt the server filesystem directly.
That said, it appears from the e2fsck output that the on-disk data structures are not corrupted, so it seems like this is some kind of in-memory corruption? The free blocks/inode counts and quota usage messages are normal for a filesystem that is in use.
There is a tunable parameter to disable the parallel directory locking and updates: "lctl set_param osd-ldiskfs.lustre-MDT*.pdo=0" on the MDS nodes. Note that this path is essentially untested and could have its own issues, beyond being much slower, but it would be useful to see whether it avoids the issue.
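As a sketch of that experiment (the parameter name is taken from the comment above; substitute the actual fsname for "lustre"):
# on each MDS: check, then disable, parallel directory operations
lctl get_param osd-ldiskfs.*.pdo
lctl set_param osd-ldiskfs.lustre-MDT*.pdo=0
# re-run the file-creation workload and watch dmesg for dx_probe errors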
Zuoru Yang
added a comment - @Andreas Dilger Sorry for the late reply; we spent some time checking our RAID to rule out the storage backend as the cause. We now suspect it might be a bug in ext4.
Yes, there were some files in the filesystem from previous experiments; we removed them and tried again with the same test command. The issue still occurs, and the details are as follows (the file counts are lower than expected because the test could not create all files):
[root@client02 ~] # lfs quota -u root /lustre/
Disk quotas for usr root (uid 0):
Filesystem kbytes quota limit grace files quota limit grace
/lustre/ 185332124 0 0 - 45025839 0 0 -
[Tue Jan 9 20:51:22 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt05_002: directory leaf block found instead of index block
[Tue Jan 9 20:51:22 2024] Aborting journal on device ultrapathb-8.
[Tue Jan 9 20:51:22 2024] LDISKFS-fs (ultrapathb): Remounting filesystem read-only
[Tue Jan 9 20:51:22 2024] LDISKFS-fs error (device ultrapathb): ldiskfs_journal_check_start:61: Detected aborted journal
[Tue Jan 9 20:51:22 2024] LustreError: 260307:0:(osd_handler.c:1790:osd_trans_commit_cb()) transaction @0x0000000069cf0f59 commit error: 2
[Tue Jan 9 20:51:22 2024] LustreError: 260307:0:(osd_handler.c:1790:osd_trans_commit_cb()) Skipped 52 previous similar messages
[Tue Jan 9 20:51:22 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt21_003: directory leaf block found instead of index block
[Tue Jan 9 20:51:22 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt18_002: directory leaf block found instead of index block
[Tue Jan 9 20:51:22 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt10_001: directory leaf block found instead of index block
[Tue Jan 9 20:51:22 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt05_002: directory leaf block found instead of index block
[Tue Jan 9 20:51:22 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt05_002: directory leaf block found instead of index block
[Tue Jan 9 20:51:22 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt18_002: directory leaf block found instead of index block
[Tue Jan 9 20:51:22 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt07_001: directory leaf block found instead of index block
[Tue Jan 9 20:51:22 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt19_002: directory leaf block found instead of index block
[Tue Jan 9 20:51:22 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt18_002: directory leaf block found instead of index block
[Tue Jan 9 20:51:27 2024] LDISKFS-fs error: 180 callbacks suppressed
[Tue Jan 9 20:51:27 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt20_004: directory leaf block found instead of index block
[Tue Jan 9 20:51:27 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt05_002: directory leaf block found instead of index block
[Tue Jan 9 20:51:27 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt18_002: directory leaf block found instead of index block
[Tue Jan 9 20:51:27 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt18_002: directory leaf block found instead of index block
[Tue Jan 9 20:51:27 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt10_003: directory leaf block found instead of index block
[Tue Jan 9 20:51:27 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt20_004: directory leaf block found instead of index block
[Tue Jan 9 20:51:27 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt07_003: directory leaf block found instead of index block
[Tue Jan 9 20:51:27 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt19_000: directory leaf block found instead of index block
[Tue Jan 9 20:51:27 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt10_003: directory leaf block found instead of index block
[Tue Jan 9 20:51:27 2024] LDISKFS-fs error (device ultrapathb): dx_probe:1169: inode #104316384: block 149479: comm mdt05_002: directory leaf block found instead of index block
Note that device ultrapathb is the backend of MDT1; the following is the session record from running e2fsck on it:
Script started on 2024-01-09 21:09:49+08:00
[root@server02 ~]# e2fsck -f /dev/ultrapathb
e2fsck 1.46.6-wc1 (10-Jan-2023)
MMP interval is 10 seconds and total wait time is 42 seconds. Please wait...
l_lfs-MDT0001: recovering journal
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong (142139109, counted=142167445).
Fix<y>? yes
Free inodes count wrong (412219565, counted=412247109).
Fix<y>? yes
[QUOTA WARNING] Usage inconsistent for ID 0:actual (72814075904, 17302001) != expected (72899485696, 17302001)
Update quota info for quota type 0<y>? yes
[QUOTA WARNING] Usage inconsistent for ID 0:actual (72814075904, 17302001) != expected (72899485696, 17302001)
Update quota info for quota type 1<y>? yes
[QUOTA WARNING] Usage inconsistent for ID 0:actual (72814075904, 17302001) != expected (72899485696, 17302001)
Update quota info for quota type 2<y>? yes

l_lfs-MDT0001: ***** FILE SYSTEM WAS MODIFIED *****
l_lfs-MDT0001: 17302011/429549120 files (0.0% non-contiguous), 126261419/268428864 blocks
[root@server02 ~]# exit
exit
Script done on 2024-01-09 21:16:33+08:00
Could this be an issue in ext4 with large_dir?
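For deeper inspection of the directory that triggered the errors, a read-only debugfs dump of its htree index might be informative; a sketch using the inode number from the dmesg output above (best run while the device is unmounted):
# dump the htree index of directory inode 104316384 (catastrophic mode opens read-only)
debugfs -c -R 'htree_dump <104316384>' /dev/ultrapathb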
Andreas Dilger
added a comment - Just to confirm the test being run, each rank is creating 160000 files in a separate subdirectory from the other ranks, and there are 2^3 leaf subdirectories (branching factor 2, depth 3)? That would create about 82M files, but it looks like there are some existing files in the filesystem.
What does e2fsck show when run on the corrupt MDT?
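For context, the described parameters resemble an mdtest invocation roughly like the following; this is an illustrative reconstruction (a rank count of 512 is inferred from 512 × 160000 ≈ 82M files), not the reporter's actual command:
# hypothetical: 512 ranks, 160000 files per rank, directory tree with
# branching factor 2 and depth 3, each task in its own subdirectory
mpirun -np 512 mdtest -F -C -n 160000 -b 2 -z 3 -u -d /lustre/testdir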