<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:03:48 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-13739] mount fails with SSK keys </title>
                <link>https://jira.whamcloud.com/browse/LU-13739</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Client mount fails with the below error message. This is with &quot;lustre02.srpc.flavor.tcp.cli2mdt=skpi&quot; and &quot;lustre02.srpc.flavor.tcp.cli2ost=skpi&quot;&lt;/p&gt;

&lt;p&gt;The server and clients have the correct SSK keys and keyring shows them as loaded.&#160;&lt;/p&gt;

&lt;p&gt;Jul 1 17:11:54 zabbix01 kernel: LustreError: 22972:0:(lmv_obd.c:315:lmv_connect_mdc()) target lustre02-MDT0000_UUID connect error -1&lt;br/&gt;
Jul 1 17:11:54 zabbix01 kernel: LustreError: 22972:0:(lmv_obd.c:315:lmv_connect_mdc()) Skipped 1 previous similar message&lt;br/&gt;
Jul 1 17:11:54 zabbix01 kernel: LustreError: 22972:0:(llite_lib.c:292:client_common_fill_super()) cannot connect to lustre02-clilmv-ffff8964b4dea800: rc = -1&lt;br/&gt;
Jul 1 17:11:54 zabbix01 kernel: LustreError: 22972:0:(llite_lib.c:292:client_common_fill_super()) Skipped 1 previous similar message&lt;br/&gt;
Jul 1 17:11:54 zabbix01 kernel: LustreError: 22972:0:(lov_obd.c:839:lov_cleanup()) lustre02-clilov-ffff8964b4dea800: lov tgt 0 not cleaned! deathrow=0, lovrc=1&lt;br/&gt;
Jul 1 17:11:54 zabbix01 kernel: LustreError: 22972:0:(lov_obd.c:839:lov_cleanup()) Skipped 13 previous similar messages&lt;br/&gt;
Jul 1 17:11:54 zabbix01 kernel: LustreError: 22972:0:(obd_mount.c:1608:lustre_fill_super()) Unable to mount (-1)&lt;/p&gt;</description>
                <environment>RHEL7 </environment>
        <key id="59811">LU-13739</key>
            <summary>mount fails with SSK keys </summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="6" iconUrl="https://jira.whamcloud.com/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="jfilizetti">Jeremy Filizetti</assignee>
                                    <reporter username="raot">Joe Frith</reporter>
                        <labels>
                    </labels>
                <created>Wed, 1 Jul 2020 21:16:48 +0000</created>
                <updated>Wed, 8 Jul 2020 20:14:22 +0000</updated>
                            <resolved>Wed, 8 Jul 2020 20:14:22 +0000</resolved>
                                    <version>Lustre 2.12.5</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>2</watches>
                                                                            <comments>
                            <comment id="274207" author="raot" created="Wed, 1 Jul 2020 21:23:51 +0000"  >&lt;p&gt;On the client&#160;&#160;&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@ ~&amp;#93;&lt;/span&gt;# keyctl show&lt;br/&gt;
Session Keyring&lt;br/&gt;
 422481900 --alswrv 0 0 keyring: _ses&lt;br/&gt;
 51709085 --alswrv 0 65534 &amp;#95; keyring: _uid.0&lt;br/&gt;
 180533941 --alswrv 0 0 &amp;#95; user: lustre:lustre02&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@ ~&amp;#93;&lt;/span&gt;# keyctl pipe 180533941 | lgss_sk -r -&lt;br/&gt;
Version: 1&lt;br/&gt;
Type: client&lt;br/&gt;
HMAC alg: sha256&lt;br/&gt;
Crypto alg: ctr(aes)&lt;br/&gt;
Ctx Expiration: 604800 seconds&lt;br/&gt;
Shared keylen: 256 bits&lt;br/&gt;
Prime length: 2048 bits&lt;br/&gt;
File system: lustre02&lt;br/&gt;
MGS NIDs:&lt;br/&gt;
Nodemap name: default&lt;br/&gt;
Shared key:&lt;br/&gt;
 0000: 5900 0856 8af8 5fbc 2549 168f 8a29 56a8 Y..V.._.%I...)V.&lt;br/&gt;
 0010: 858e 3bd9 5fd1 23af 780f b92b 3bcc c406 ..;._.#.x..+;...&lt;br/&gt;
Prime (p):&lt;br/&gt;
 0000: f470 ae37 4a92 24bd d442 9456 2cf8 9cbd .p.7J.$..B.V,...&lt;br/&gt;
 0010: 4dc6 400c 76b8 9edc c823 18e5 86f9 a0ba M.@.v....#......&lt;br/&gt;
 0020: 70e2 72ba f5bf 4320 0386 a047 e772 2567 p.r...C ...G.r%g&lt;br/&gt;
 0030: 8b65 10ef 758d e8a3 0441 bc0c 8b36 2be9 .e..u....A...6+.&lt;br/&gt;
 0040: c38f cbdc 20ea 7461 890e c59b c948 f964 .... .ta.....H.d&lt;br/&gt;
 0050: 8dd8 3891 4947 cb34 93d9 4150 1f4a 7eae ..8.IG.4..AP.J~.&lt;br/&gt;
 0060: 65c1 d7b4 6e3b 274a 753f d0af 242a 8e10 e...n;&apos;Ju?..$*..&lt;br/&gt;
 0070: 9055 3ad5 b195 856c c7b8 b9f0 2b34 666a .U:....l....+4fj&lt;br/&gt;
 0080: e8fc 2988 5f77 ced9 cbc0 2911 179b c1d9 ..)._w....).....&lt;br/&gt;
 0090: 0717 e8d7 a14c 14f6 4907 fa0c 3de9 fffa .....L..I...=...&lt;br/&gt;
 00a0: f524 623c 6664 fa20 4246 c1f3 1c06 27cc .$b&amp;lt;fd. BF....&apos;.&lt;br/&gt;
 00b0: ea75 7d10 8804 3489 88fd 392f 5c89 284b .u}...4...9/\.(K&lt;br/&gt;
 00c0: 0aee 6df5 5471 95a7 6e1d 669c 658f e848 ..m.Tq..n.f.e..H&lt;br/&gt;
 00d0: f74b 15be 4a1e fbc1 8fcc 78ed 87c5 1abe .K..J.....x.....&lt;br/&gt;
 00e0: 028a 66cd e230 d6f9 8e8b f9e9 9cfe 6013 ..f..0........`.&lt;br/&gt;
 00f0: c6d5 fcae b2be d59f 9375 8beb 8564 ad63 .........u...d.c&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;

&lt;p&gt;On the server side -&lt;/p&gt;

&lt;p&gt;&#160;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@ ~&amp;#93;&lt;/span&gt;# keyctl show&lt;br/&gt;
Session Keyring&lt;br/&gt;
 13605391 --alswrv 0 0 keyring: _ses&lt;br/&gt;
 861388744 --alswrv 0 65534 &amp;#95; keyring: _uid.0&lt;br/&gt;
 77605995 --alswrv 0 0 &amp;#95; user: lustre:lustre02:default&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@ ~&amp;#93;&lt;/span&gt;# keyctl pipe 77605995 | lgss_sk -r -&lt;br/&gt;
Version: 1&lt;br/&gt;
Type: server&lt;br/&gt;
HMAC alg: sha256&lt;br/&gt;
Crypto alg: ctr(aes)&lt;br/&gt;
Ctx Expiration: 604800 seconds&lt;br/&gt;
Shared keylen: 256 bits&lt;br/&gt;
Prime length: 2048 bits&lt;br/&gt;
File system: lustre02&lt;br/&gt;
MGS NIDs:&lt;br/&gt;
Nodemap name: default&lt;br/&gt;
Shared key:&lt;br/&gt;
 0000: 5900 0856 8af8 5fbc 2549 168f 8a29 56a8 Y..V.._.%I...)V.&lt;br/&gt;
 0010: 858e 3bd9 5fd1 23af 780f b92b 3bcc c406 ..;._.#.x..+;...&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@ ~&amp;#93;&lt;/span&gt;#&lt;/p&gt;</comment>
                            <comment id="274556" author="jfilizetti" created="Mon, 6 Jul 2020 19:22:04 +0000"  >&lt;p&gt;Do you have the MDS server logs as well?&lt;/p&gt;</comment>
                            <comment id="274557" author="raot" created="Mon, 6 Jul 2020 19:35:37 +0000"  >&lt;p&gt;I see this on the MDS server.&#160;&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;

&lt;p&gt;kernel: LustreError: 233128:0:(tgt_handler.c:921:tgt_connect_check_sptlrpc()) lustre02-MDT0000: unauthorized rpc flavor 0 from x.x.x.x@tcp, expect 22&lt;/p&gt;</comment>
                            <comment id="274562" author="jfilizetti" created="Mon, 6 Jul 2020 20:31:40 +0000"  >&lt;p&gt;This seems to indicate the client is sending the wrong RPC flavor, SPTLRPC_POLICY_NULL.&#160; For some reason your client isn&apos;t pulling the correct info out of the MGS llog.&#160; What does &quot;lctl get_param mgs.MGS.live.*&quot; say?&lt;/p&gt;</comment>
                            <comment id="274566" author="raot" created="Mon, 6 Jul 2020 20:56:31 +0000"  >&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@mds ~&amp;#93;&lt;/span&gt;# lctl get_param mgs.MGS.live.*&lt;br/&gt;
mgs.MGS.live.lustre02=&lt;br/&gt;
fsname: lustre02&lt;br/&gt;
flags: 0x20 gen: 58&lt;br/&gt;
lustre02-MDT0000&lt;br/&gt;
lustre02-OST0000&lt;br/&gt;
lustre02-OST0001&lt;br/&gt;
lustre02-OST0002&lt;br/&gt;
lustre02-OST0003&lt;br/&gt;
lustre02-OST0004&lt;br/&gt;
lustre02-OST0005&lt;br/&gt;
lustre02-OST0006&lt;/p&gt;

&lt;p&gt;Secure RPC Config Rules:&lt;br/&gt;
lustre02.srpc.flavor.tcp.cli2mdt=skpi&lt;/p&gt;

&lt;p&gt;imperative_recovery_state:&lt;br/&gt;
 state: full&lt;br/&gt;
 nonir_clients: 0&lt;br/&gt;
 nidtbl_version: 37&lt;br/&gt;
 notify_duration_total: 0.000238135&lt;br/&gt;
 notify_duation_max: 0.000238135&lt;br/&gt;
 notify_count: 1&lt;br/&gt;
mgs.MGS.live.params=&lt;br/&gt;
fsname: params&lt;br/&gt;
flags: 0x20 gen: 1&lt;/p&gt;

&lt;p&gt;Secure RPC Config Rules:&lt;/p&gt;

&lt;p&gt;imperative_recovery_state:&lt;br/&gt;
 state: full&lt;br/&gt;
 nonir_clients: 0&lt;br/&gt;
 nidtbl_version: 2&lt;br/&gt;
 notify_duration_total: 0.000000000&lt;br/&gt;
 notify_duation_max: 0.000000000&lt;br/&gt;
 notify_count: 0&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@mds ~&amp;#93;&lt;/span&gt;#&lt;/p&gt;</comment>
                            <comment id="274583" author="raot" created="Tue, 7 Jul 2020 02:13:49 +0000"  >&lt;p&gt;The ptlrpc_gss kernel module was not loaded on the client. After fixing that, the mount still hangs; I am seeing the below in the client logs, and nothing on the server side.&#160;&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;

&lt;p&gt;Jul 6 22:06:31 zabbix01 kernel: Lustre: 22730:0:(gss_svc_upcall.c:1149:gss_init_svc_upcall()) Init channel is not opened by lsvcgssd, following request might be dropped until lsvcgssd is active&lt;br/&gt;
Jul 6 22:06:31 zabbix01 kernel: Key type lgssc registered&lt;br/&gt;
Jul 6 22:06:42 zabbix01 kernel: Lustre: 22734:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1594087591/real 1594087591&amp;#93;&lt;/span&gt; req@ffff8af7b3270000 x1671521988773568/t0(0) o801-&amp;gt;lustre02-MDT0000-mdc-ffff8af635af1000@10.42.41.33@tcp:12/10 lens 224/224 e 0 to 1 dl 1594087602 ref 2 fl Rpc:X/0/ffffffff rc 0/-1&lt;br/&gt;
Jul 6 22:06:42 zabbix01 kernel: LustreError: 22734:0:(gss_keyring.c:1409:gss_kt_update()) negotiation: rpc err -85, gss err 0&lt;br/&gt;
Jul 6 22:06:42 zabbix01 lgss_keyring: &lt;span class=&quot;error&quot;&gt;&amp;#91;22734&amp;#93;&lt;/span&gt;:ERROR:do_nego_rpc(): status: -110 (Connection timed out)&lt;br/&gt;
Jul 6 22:06:42 zabbix01 lgss_keyring: &lt;span class=&quot;error&quot;&gt;&amp;#91;22734&amp;#93;&lt;/span&gt;:ERROR:lgssc_negotiation_manual(): negotiation rpc error -85&lt;br/&gt;
Jul 6 22:06:42 zabbix01 lgss_keyring: &lt;span class=&quot;error&quot;&gt;&amp;#91;22734&amp;#93;&lt;/span&gt;:ERROR:lgssc_kr_negotiate_manual(): key 18d782c4: failed to negotiate&lt;br/&gt;
Jul 6 22:06:42 zabbix01 kernel: Lustre: 22734:0:(sec_gss.c:315:cli_ctx_expire()) ctx ffff8af7b4d19d00(0-&amp;gt;lustre02-MDT0000_UUID) get expired: 1594087791(+189s)&lt;/p&gt;</comment>
                            <comment id="274667" author="jfilizetti" created="Tue, 7 Jul 2020 20:42:19 +0000"  >&lt;p&gt;Can you confirm that lsvcgss service is running on the server side and that you have the &quot;-s&quot; included in /etc/sysconfig/lsvcgss?&lt;/p&gt;</comment>
                            <comment id="274672" author="raot" created="Tue, 7 Jul 2020 21:10:21 +0000"  >&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@starmds01 ~&amp;#93;&lt;/span&gt;# cat /etc/sysconfig/lsvcgss&lt;/p&gt;
&lt;p&gt;# Optional arguments passed to lsvcgssd.&lt;br/&gt;
LSVCGSSDARGS=&apos;-s -m&apos;&lt;/p&gt;


&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@starmds01 ~&amp;#93;&lt;/span&gt;# systemctl status lsvcgss.service&lt;br/&gt;
&#9679; lsvcgss.service - SYSV: start and stop the lsvcgssd daemon&lt;br/&gt;
 Loaded: loaded (/etc/rc.d/init.d/lsvcgss; bad; vendor preset: disabled)&lt;br/&gt;
 Active: active (running) since Mon 2020-07-06 22:53:25 EDT; 18h ago&lt;br/&gt;
 Docs: man:systemd-sysv-generator(8)&lt;br/&gt;
 Process: 204274 ExecStart=/etc/rc.d/init.d/lsvcgss start (code=exited, status=0/SUCCESS)&lt;br/&gt;
 CGroup: /system.slice/lsvcgss.service&lt;br/&gt;
 &#9492;&#9472;204278 /usr/sbin/lsvcgssd -s -m&lt;/p&gt;

&lt;p&gt;Jul 06 22:53:25 starmds01 systemd&lt;span class=&quot;error&quot;&gt;&amp;#91;1&amp;#93;&lt;/span&gt;: Starting SYSV: start and stop the lsvcgssd daemon...&lt;br/&gt;
Jul 06 22:53:25 starmds01 lsvcgss&lt;span class=&quot;error&quot;&gt;&amp;#91;204274&amp;#93;&lt;/span&gt;: Starting lsvcgssd&lt;br/&gt;
Jul 06 22:53:25 starmds01 systemd&lt;span class=&quot;error&quot;&gt;&amp;#91;1&amp;#93;&lt;/span&gt;: Started SYSV: start and stop the lsvcgssd daemon.&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@starmds01 ~&amp;#93;&lt;/span&gt;#&lt;/p&gt;</comment>
                            <comment id="274687" author="jfilizetti" created="Tue, 7 Jul 2020 23:46:01 +0000"  >&lt;p&gt;Can you add &quot;-vv&quot; to the LSVCGSSDARGS as well, and then send the syslog output from the MDS?&#160; If possible it&apos;d be nice to have the output of &quot;lctl dk&quot; as well.&lt;/p&gt;</comment>
                            <comment id="274698" author="raot" created="Wed, 8 Jul 2020 01:52:13 +0000"  >&lt;p&gt;lctl dk shows below when the client is connecting.&#160;&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;

&lt;p&gt;00010000:00080000:2.0F:1594173014.717485:0:235886:0:(ldlm_lib.c:1227:target_handle_connect()) MGS: connection from 17390dcc-e9ea-2781-83dc-708eda5b1b60@130.199.148.189@tcp t0 exp (null) cur 539645 last 0&lt;br/&gt;
00000020:00000080:2.0:1594173014.717501:0:235886:0:(genops.c:1417:class_connect()) connect: client 17390dcc-e9ea-2781-83dc-708eda5b1b60, cookie 0x9c7d9c60f6ed33e9&lt;br/&gt;
00000020:01000000:2.0:1594173014.717507:0:235886:0:(lprocfs_status_server.c:491:lprocfs_exp_setup()) using hash ffff99b7cfe94c00&lt;br/&gt;
00000100:00080000:2.0:1594173014.717523:0:235886:0:(import.c:86:import_set_state_nolock()) ffff99e7d97b6000 : changing import state from RECOVER to FULL&lt;br/&gt;
00000100:02000000:2.0:1594173014.717527:0:235886:0:(import.c:1597:ptlrpc_import_recovery_state_machine()) MGS: Connection restored to e3959aaf-45a8-6dda-c117-ced5724d4e0b (at 130.199.148.189@tcp)&lt;br/&gt;
20000000:01000000:2.0:1594173014.776757:0:235886:0:(mgs_nids.c:632:mgs_get_ir_logs()) Reading IR log lustre02-cliir bufsize 1048576.&lt;br/&gt;
20000000:01000000:2.0:1594173014.776763:0:235886:0:(mgs_nids.c:192:mgs_nidtbl_read()) fsname lustre02, entry size 32, pages 4064/1/256/255.&lt;br/&gt;
20000000:01000000:2.0:1594173014.776766:0:235886:0:(mgs_nids.c:204:mgs_nidtbl_read()) Read IR logs lustre02 return with 32, version 64&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;

&lt;p&gt;Even with the -vv option added, I see only the lines below in the MDS syslog.&#160;&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;

&lt;p&gt;Jul 7 21:49:20 starmds01 kernel: Lustre: MGS: Connection restored to e3959aaf-45a8-6dda-c117-ced5724d4e0b (at 130.199.148.189@tcp)&lt;br/&gt;
Jul 7 21:49:20 starmds01 kernel: Lustre: Skipped 14 previous similar messages&lt;/p&gt;</comment>
                            <comment id="274706" author="jfilizetti" created="Wed, 8 Jul 2020 02:37:54 +0000"  >&lt;p&gt;It looks like I can reproduce this with CentOS 7.8.&#160; Is that the version you are using?&#160; If you add security to the debug mask on the MDS, do you see something like the following?&lt;/p&gt;

&lt;p&gt;&lt;tt&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@tr-mds-1 ~&amp;#93;&lt;/span&gt;# echo &apos;+sec&apos; &amp;gt; /sys/kernel/debug/lnet/debug&lt;/tt&gt;&lt;br/&gt;
&lt;tt&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@tr-mds-1 ~&amp;#93;&lt;/span&gt;# lctl dk | grep gss&lt;/tt&gt;&lt;br/&gt;
&lt;tt&gt;02000000:08000000:5.0:1594175878.715795:0:31012:0:(sec_gss.c:1991:gss_svc_handle_init()) processing gss init(1) request from 172.26.0.250@tcp&lt;/tt&gt;&lt;br/&gt;
&lt;tt&gt;02000000:08000000:5.0:1594175878.715810:0:31012:0:(gss_svc_upcall.c:960:gss_svc_upcall_handle_init()) cache_check return ENOENT, drop&lt;/tt&gt;&lt;/p&gt;</comment>
                            <comment id="274708" author="raot" created="Wed, 8 Jul 2020 02:45:29 +0000"  >&lt;p&gt;Yes, see below. I am using RHEL 7.8. Is there a workaround or quick fix?&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@starmds01 ~&amp;#93;&lt;/span&gt;# lctl dk | grep gss&lt;br/&gt;
02000000:08000000:22.0F:1594176204.563936:0:236049:0:(sec_gss.c:1991:gss_svc_handle_init()) processing gss init(1) request from 130.199.148.189@tcp&lt;br/&gt;
02000000:08000000:22.0:1594176204.563953:0:236049:0:(gss_svc_upcall.c:960:gss_svc_upcall_handle_init()) cache_check return ENOENT, drop&lt;br/&gt;
02000000:08000000:8.0F:1594176215.608251:0:235969:0:(sec_gss.c:1991:gss_svc_handle_init()) processing gss init(1) request from 130.199.148.189@tcp&lt;br/&gt;
02000000:08000000:8.0:1594176215.608265:0:235969:0:(gss_svc_upcall.c:960:gss_svc_upcall_handle_init()) cache_check return ENOENT, drop&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@starmds01 ~&amp;#93;&lt;/span&gt;#&lt;/p&gt;</comment>
                            <comment id="274740" author="jfilizetti" created="Wed, 8 Jul 2020 13:11:55 +0000"  >&lt;p&gt;There were some additional changes to the sunrpc code incorporated into RHEL 7.8 from upstream that assume the files for the cache get closed repeatedly.  Since Lustre&apos;s lsvcgssd just opens the file once and keeps it open, the last_close variable in the cache_detail struct (rsi_cache) is not being updated, which is why we are seeing the issue.  I&apos;ll see if at some point today I can take a look at how the NFS code uses the cache, to make sure my assumptions aren&apos;t wrong here, and then create a patch for Lustre.&lt;/p&gt;</comment>
                            <comment id="274792" author="jfilizetti" created="Wed, 8 Jul 2020 20:13:30 +0000"  >&lt;p&gt;There is a patch that Sebastien has already posted for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13754&quot; title=&quot;GSS-based authentication fails on CentOS/RHEL 7.8&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13754&quot;&gt;&lt;del&gt;LU-13754&lt;/del&gt;&lt;/a&gt; that should work.  I thought there were additional patches needed, but it was just something messed up in my test environment.  If you have any issues after testing the patch, please file them under the other ticket.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="59857">LU-13754</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i0147r:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>