<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:51:12 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-12279] client got evicted due to network issue.</title>
                <link>https://jira.whamcloud.com/browse/LU-12279</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;soak has been running on the master branch, version 2.12.53_13_g4191e0c, for about 2 days with no crashes, but many applications failed (511 fail / 956 pass). From the syslog, the failures appear to be caused by a network issue. The first 24 hours looked good, with a failure rate similar to 2.12.1, but as the test went on, applications started failing frequently.&lt;/p&gt;

&lt;p&gt;Some error messages appear similar to &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12065&quot; title=&quot;Client got evicted when  lock callback timer expired  on OSS &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12065&quot;&gt;&lt;del&gt;LU-12065&lt;/del&gt;&lt;/a&gt;, which has already been fixed in this version.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@soak-16 syslog]# grep -r &quot;Async QP&quot;
soak-20.log:May  8 06:43:29 soak-20 kernel: LNetError: 0:0:(o2iblnd_cb.c:3665:kiblnd_qp_event()) 192.168.1.105@o2ib: Async QP event type 1
soak-35.log:May  8 06:42:10 soak-35 kernel: LNetError: 0:0:(o2iblnd_cb.c:3665:kiblnd_qp_event()) 192.168.1.105@o2ib: Async QP event type 1
soak-36.log:May  8 06:41:43 soak-36 kernel: LNetError: 0:0:(o2iblnd_cb.c:3665:kiblnd_qp_event()) 192.168.1.105@o2ib: Async QP event type 1
soak-17.log:May  8 06:42:27 soak-17 kernel: LNetError: 0:0:(o2iblnd_cb.c:3665:kiblnd_qp_event()) 192.168.1.105@o2ib: Async QP event type 1
soak-38.log:May  8 06:42:09 soak-38 kernel: LNetError: 0:0:(o2iblnd_cb.c:3665:kiblnd_qp_event()) 192.168.1.105@o2ib: Async QP event type 1
soak-40.log:May  8 06:41:49 soak-40 kernel: LNetError: 0:0:(o2iblnd_cb.c:3665:kiblnd_qp_event()) 192.168.1.105@o2ib: Async QP event type 1
[root@soak-16 syslog]# 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Many of the following errors appeared in the client syslog:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;May  8 07:24:51 soak-17 kernel: LustreError: 218649:0:(import.c:343:ptlrpc_invalidate_import()) soaked-OST0009_UUID: rc = -110 waiting for callback (6 != 0)
May  8 07:24:51 soak-17 kernel: LustreError: 218649:0:(import.c:369:ptlrpc_invalidate_import()) @@@ still on sending list  req@ffff94340487a400 x1632824181437936/t0(0) o4-&amp;gt;soaked-OST0009-osc-ffff943a9be9a800@192.168.1.105@o2ib:6/4 lens 488/448 e 0 to 0 dl 1557297784 ref 2 fl UnregBULK:ES/0/ffffffff rc -5/-1
May  8 07:24:51 soak-17 kernel: LustreError: 218649:0:(import.c:383:ptlrpc_invalidate_import()) soaked-OST0009_UUID: Unregistering RPCs found (6). Network is sluggish? Waiting them to error out.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment>version=2.12.53_13_g4191e0c</environment>
        <key id="55600">LU-12279</key>
            <summary>client got evicted due to network issue.</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="ashehata">Amir Shehata</assignee>
                                    <reporter username="sarah">Sarah Liu</reporter>
                        <labels>
                            <label>soak</label>
                    </labels>
                <created>Thu, 9 May 2019 19:50:45 +0000</created>
                <updated>Fri, 18 Oct 2019 14:17:19 +0000</updated>
                            <resolved>Sat, 25 May 2019 21:48:48 +0000</resolved>
                                    <version>Lustre 2.13.0</version>
                    <version>Lustre 2.12.2</version>
                                    <fixVersion>Lustre 2.13.0</fixVersion>
                    <fixVersion>Lustre 2.12.2</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>8</watches>
                                                                            <comments>
                            <comment id="247337" author="sarah" created="Fri, 17 May 2019 15:39:00 +0000"  >&lt;p&gt;Hit a similar issue when testing the 2.12.2-rc1 build with MOFED version=4.6&lt;/p&gt;</comment>
                            <comment id="247456" author="sarah" created="Tue, 21 May 2019 17:34:41 +0000"  >&lt;p&gt;Hit this problem again when testing 2.12.2-rc2. soak started with 2.12.2-rc2 on 5/18 00:41; the first 48 hours of running seemed fine (checked the status on Monday 5/20, the fail/pass rate was 153 fail / 988 pass), then it started to show the LNetError and the application failure rate rose to 1311 fail / 1348 pass&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@soak-16 syslog]# grep -r &quot;Async QP&quot;
soak-20.log:May 20 17:19:36 soak-20 kernel: LNetError: 0:0:(o2iblnd_cb.c:3660:kiblnd_qp_event()) 192.168.1.105@o2ib: Async QP event type 1
soak-29.log:May 20 17:18:01 soak-29 kernel: LNetError: 0:0:(o2iblnd_cb.c:3660:kiblnd_qp_event()) 192.168.1.105@o2ib: Async QP event type 1
soak-31.log:May 20 17:17:55 soak-31 kernel: LNetError: 0:0:(o2iblnd_cb.c:3660:kiblnd_qp_event()) 192.168.1.105@o2ib: Async QP event type 1
soak-36.log:May 20 17:17:21 soak-36 kernel: LNetError: 0:0:(o2iblnd_cb.c:3660:kiblnd_qp_event()) 192.168.1.105@o2ib: Async QP event type 1
soak-37.log:May 20 17:17:42 soak-37 kernel: LNetError: 0:0:(o2iblnd_cb.c:3660:kiblnd_qp_event()) 192.168.1.105@o2ib: Async QP event type 1
soak-38.log:May 20 17:17:55 soak-38 kernel: LNetError: 0:0:(o2iblnd_cb.c:3660:kiblnd_qp_event()) 192.168.1.105@o2ib: Async QP event type 1
soak-8.log:May 21 13:38:22 soak-8 kernel: LNetError: 0:0:(o2iblnd_cb.c:3660:kiblnd_qp_event()) 192.168.1.110@o2ib: Async QP event type 1
[root@soak-16 syslog]#

&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;One client shows the following trace:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[ 9590.178380] Lustre: Lustre: Build Version: 2.12.2_RC2
[ 9590.299163] LNet: Using FMR for registration
[ 9590.313035] LNet: Added LNI 192.168.1.117@o2ib [8/256/0/180]
[21874.380134] Lustre: Mounted soaked-client
[22954.170140] LNetError: 11470:0:(o2iblnd_cb.c:3335:kiblnd_check_txs_locked()) Timed out tx: active_txs, 0 seconds
[22954.181512] LNetError: 11470:0:(o2iblnd_cb.c:3410:kiblnd_check_conns()) Timed out RDMA with 192.168.1.108@o2ib (7): c: 5, oc: 0, rc: 8
[22954.195335] Lustre: 11496:0:(client.c:2134:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1558140941/real 1558140947]  req@ffff9930f11
0ba80 x1633815204885120/t0(0) o103-&amp;gt;soaked-MDT0000-mdc-ffff9937de2da000@192.168.1.108@o2ib:17/18 lens 328/224 e 0 to 1 dl 1558140986 ref 1 fl Rpc:eX/0/ffffffff rc 0/-1
[22954.195349] Lustre: soaked-MDT0000-mdc-ffff9937de2da000: Connection to soaked-MDT0000 (at 192.168.1.108@o2ib) was lost; in progress operations using this service will wait 
for recovery to complete
[22954.195351] LustreError: 166-1: MGC192.168.1.108@o2ib: Connection to MGS (at 192.168.1.108@o2ib) was lost; in progress operations using this service will fail
[22954.264070] Lustre: 11496:0:(client.c:2134:ptlrpc_expire_one_request()) Skipped 2 previous similar messages
[23160.754222] INFO: task simul:13725 blocked for more than 120 seconds.
[23160.761437] &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
[23160.770188] simul           D ffff992ff5801040     0 13725  13719 0x00000080
[23160.778148] Call Trace:
[23160.780916]  [&amp;lt;ffffffffab051b92&amp;gt;] ? path_lookupat+0x122/0x8b0
[23160.787364]  [&amp;lt;ffffffffab569b69&amp;gt;] schedule_preempt_disabled+0x29/0x70
[23160.794576]  [&amp;lt;ffffffffab567ab7&amp;gt;] __mutex_lock_slowpath+0xc7/0x1d0
[23160.801508]  [&amp;lt;ffffffffab566e9f&amp;gt;] mutex_lock+0x1f/0x2f
[23160.807255]  [&amp;lt;ffffffffab052465&amp;gt;] filename_create+0x85/0x180
[23160.813595]  [&amp;lt;ffffffffab05319f&amp;gt;] ? getname_flags+0x4f/0x1a0
[23160.819941]  [&amp;lt;ffffffffab053214&amp;gt;] ? getname_flags+0xc4/0x1a0
[23160.826274]  [&amp;lt;ffffffffab0534c1&amp;gt;] user_path_create+0x41/0x60
[23160.832610]  [&amp;lt;ffffffffab054888&amp;gt;] SyS_mkdirat+0x48/0x100
[23160.838562]  [&amp;lt;ffffffffab575d15&amp;gt;] ? system_call_after_swapgs+0xa2/0x146
[23160.845972]  [&amp;lt;ffffffffab575d21&amp;gt;] ? system_call_after_swapgs+0xae/0x146
[23160.853376]  [&amp;lt;ffffffffab054959&amp;gt;] SyS_mkdir+0x19/0x20
[23160.859034]  [&amp;lt;ffffffffab575ddb&amp;gt;] system_call_fastpath+0x22/0x27
[23160.865769]  [&amp;lt;ffffffffab575d21&amp;gt;] ? system_call_after_swapgs+0xae/0x146
[23160.873175] INFO: task mdtest:16087 blocked for more than 120 seconds.
[23160.880480] &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
[23160.889240] mdtest          D ffff9937e7e75140     0 16087  16080 0x00000080
[23160.897164] Call Trace:
[23160.899914]  [&amp;lt;ffffffffaaed6530&amp;gt;] ? try_to_wake_up+0x190/0x390
[23160.906444]  [&amp;lt;ffffffffab569b69&amp;gt;] schedule_preempt_disabled+0x29/0x70
[23160.913665]  [&amp;lt;ffffffffab567ab7&amp;gt;] __mutex_lock_slowpath+0xc7/0x1d0
[23160.920592]  [&amp;lt;ffffffffab566e9f&amp;gt;] mutex_lock+0x1f/0x2f
[23160.926346]  [&amp;lt;ffffffffab0536b5&amp;gt;] do_rmdir+0x165/0x220
[23160.932100]  [&amp;lt;ffffffffab575d21&amp;gt;] ? system_call_after_swapgs+0xae/0x146
[23160.939493]  [&amp;lt;ffffffffab575d15&amp;gt;] ? system_call_after_swapgs+0xa2/0x146
[23160.946896]  [&amp;lt;ffffffffab575d21&amp;gt;] ? system_call_after_swapgs+0xae/0x146
[23160.954298]  [&amp;lt;ffffffffab575d15&amp;gt;] ? system_call_after_swapgs+0xa2/0x146
[23160.961706]  [&amp;lt;ffffffffab575d21&amp;gt;] ? system_call_after_swapgs+0xae/0x146
[23160.969108]  [&amp;lt;ffffffffab575d15&amp;gt;] ? system_call_after_swapgs+0xa2/0x146
[23160.976518]  [&amp;lt;ffffffffab054976&amp;gt;] SyS_rmdir+0x16/0x20
[23160.982174]  [&amp;lt;ffffffffab575ddb&amp;gt;] system_call_fastpath+0x22/0x27
[23160.988895]  [&amp;lt;ffffffffab575d21&amp;gt;] ? system_call_after_swapgs+0xae/0x146
[23205.167707] LNet: 11470:0:(o2iblnd_cb.c:3381:kiblnd_check_conns()) Timed out tx for 192.168.1.108@o2ib: 20 seconds
[23230.167462] LNet: 11470:0:(o2iblnd_cb.c:3381:kiblnd_check_conns()) Timed out tx for 192.168.1.108@o2ib: 45 seconds
[23257.167312] LNet: 11470:0:(o2iblnd_cb.c:3381:kiblnd_check_conns()) Timed out tx for 192.168.1.108@o2ib: 22 seconds
[23280.995078] INFO: task simul:13725 blocked for more than 120 seconds.
[23281.002285] &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="247459" author="ashehata" created="Tue, 21 May 2019 17:55:27 +0000"  >&lt;p&gt;The QP is receiving an IB_EVENT_QP_FATAL event. The connection then gets closed, leading to these issues.&lt;/p&gt;

&lt;p&gt;Can we take a look at the MLX stats to see what kinds of errors are being reported where this event is received, and on 192.168.1.105?&lt;/p&gt;

&lt;p&gt;I&apos;ll have to look at the MOFED code to see if I can find anything more.&lt;/p&gt;</comment>
                            <comment id="247460" author="ashehata" created="Tue, 21 May 2019 18:03:28 +0000"  >&lt;p&gt;Can you attach the config you&apos;re using?&lt;/p&gt;</comment>
                            <comment id="247478" author="sarah" created="Tue, 21 May 2019 19:53:39 +0000"  >&lt;p&gt;There is no special config for LNet; the following is the info Amir requested:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@soak-17 ~]# lnetctl net show -v 4
net:
    - net type: lo
      local NI(s):
        - nid: 0@lo
          status: up
          statistics:
              send_count: 0
              recv_count: 0
              drop_count: 0
          sent_stats:
              put: 0
              get: 0
              reply: 0
              ack: 0
              hello: 0
          received_stats:
              put: 0
              get: 0
              reply: 0
              ack: 0
              hello: 0
          dropped_stats:
              put: 0
              get: 0
              reply: 0
              ack: 0
              hello: 0
          health stats:
              health value: 0
              interrupts: 0
              dropped: 0
              aborted: 0
              no route: 0
              timeouts: 0
              error: 0
          tunables:
              peer_timeout: 0
              peer_credits: 0
              peer_buffer_credits: 0
              credits: 0
          dev cpt: 0
          tcp bonding: 0
          CPT: &quot;[0,1]&quot;
    - net type: o2ib
      local NI(s):
        - nid: 192.168.1.117@o2ib
          status: up
          interfaces:
              0: ib0
          statistics:
              send_count: 873025729
              recv_count: 713516095
              drop_count: 2
          sent_stats:
              put: 873025581
              get: 148
              reply: 0
              ack: 0
              hello: 0
          received_stats:
              put: 709016231
              get: 140
              reply: 4499716
              ack: 8
              hello: 0
          dropped_stats:
              put: 2
              get: 0
              reply: 0
              ack: 0
              hello: 0
          health stats:
              health value: 1000
              interrupts: 0
              dropped: 1463
              aborted: 0
              no route: 0
              timeouts: 824
              error: 0
          tunables:
              peer_timeout: 180
              peer_credits: 8
              peer_buffer_credits: 0
              credits: 256
              peercredits_hiw: 4
              map_on_demand: 0
              concurrent_sends: 8
              fmr_pool_size: 512
              fmr_flush_trigger: 384
              fmr_cache: 1
              ntx: 512
              conns_per_peer: 1
          lnd tunables:
          dev cpt: 0
          tcp bonding: 0
          CPT: &quot;[0,1]&quot;
[root@soak-17 ~]# 
[root@soak-5 ~]# lnetctl net show -v 4
net:
    - net type: lo
      local NI(s):
        - nid: 0@lo
          status: up
          statistics:
              send_count: 0
              recv_count: 0
              drop_count: 0
          sent_stats:
              put: 0
              get: 0
              reply: 0
              ack: 0
              hello: 0
          received_stats:
              put: 0
              get: 0
              reply: 0
              ack: 0
              hello: 0
          dropped_stats:
              put: 0
              get: 0
              reply: 0
              ack: 0
              hello: 0
          health stats:
              health value: 0
              interrupts: 0
              dropped: 0
              aborted: 0
              no route: 0
              timeouts: 0
              error: 0
          tunables:
              peer_timeout: 0
              peer_credits: 0
              peer_buffer_credits: 0
              credits: 0
          dev cpt: 0
          tcp bonding: 0
          CPT: &quot;[0,1]&quot;
    - net type: o2ib
      local NI(s):
        - nid: 192.168.1.105@o2ib
          status: up
          interfaces:
              0: ib0
          statistics:
              send_count: 12361
              recv_count: 12349
              drop_count: 1
          sent_stats:
              put: 12356
              get: 5
              reply: 0
              ack: 0
              hello: 0
          received_stats:
              put: 12340
              get: 1
              reply: 4
              ack: 4
              hello: 0
          dropped_stats:
              put: 1
              get: 0
              reply: 0
              ack: 0
              hello: 0
          health stats:
              health value: 1000
              interrupts: 0
              dropped: 16
              aborted: 0
              no route: 0
              timeouts: 0
              error: 0
          tunables:
              peer_timeout: 180
              peer_credits: 8
              peer_buffer_credits: 0
              credits: 256
              peercredits_hiw: 4
              map_on_demand: 0
              concurrent_sends: 8
              fmr_pool_size: 512
              fmr_flush_trigger: 384
              fmr_cache: 1
              ntx: 512
              conns_per_peer: 1
          lnd tunables:
          dev cpt: 0
          tcp bonding: 0
          CPT: &quot;[0,1]&quot;
[root@soak-5 ~]# 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Around the time the &quot;Async QP event type 1&quot; error appeared, soak-5 was rebooted as part of a failover test:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2019-05-20 17:19:08,383:fsmgmt.fsmgmt:INFO     triggering fault oss_restart
2019-05-20 17:19:08,386:fsmgmt.fsmgmt:INFO     executing cmd pm -h powerman -c soak-5&amp;gt; /dev/null
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Also, when this ticket was opened, master tag-2.12.53 already included &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11931&quot; title=&quot;RDMA packets sent from client to MGS are timing out &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11931&quot;&gt;&lt;del&gt;LU-11931&lt;/del&gt;&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="247490" author="gerrit" created="Tue, 21 May 2019 20:48:43 +0000"  >&lt;p&gt;Amir Shehata (ashehata@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/34933&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/34933&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12279&quot; title=&quot;client got evicted due to network issue.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12279&quot;&gt;&lt;del&gt;LU-12279&lt;/del&gt;&lt;/a&gt; lnet: use number of wrs to calculate CQEs&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 8e3ab1154fbf71f5de5bc28206350466fa8e1776&lt;/p&gt;</comment>
                            <comment id="247491" author="simmonsja" created="Tue, 21 May 2019 21:08:36 +0000"  >&lt;p&gt;I think we need a master patch first.&lt;/p&gt;</comment>
                            <comment id="247492" author="pjones" created="Tue, 21 May 2019 21:11:45 +0000"  >&lt;p&gt;It&apos;s testing a theory, not landing, at this point&lt;/p&gt;</comment>
                            <comment id="247499" author="ashehata" created="Tue, 21 May 2019 22:53:16 +0000"  >&lt;p&gt;Yes. I asked Sarah if we can turn on crash-on-eviction on the client and dump-on-eviction on the server, so we can investigate the dump if we encounter this issue again.&lt;/p&gt;</comment>
                            <comment id="247557" author="gerrit" created="Wed, 22 May 2019 23:25:40 +0000"  >&lt;p&gt;James Nunez (jnunez@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/34945&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/34945&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12279&quot; title=&quot;client got evicted due to network issue.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12279&quot;&gt;&lt;del&gt;LU-12279&lt;/del&gt;&lt;/a&gt; lnet: use number of wrs to calculate CQEs&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: cae64abc913d6c1e31881b638c05a104579f21be&lt;/p&gt;</comment>
                            <comment id="247731" author="gerrit" created="Sat, 25 May 2019 20:23:23 +0000"  >&lt;p&gt;Andreas Dilger (adilger@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/34945/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/34945/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12279&quot; title=&quot;client got evicted due to network issue.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12279&quot;&gt;&lt;del&gt;LU-12279&lt;/del&gt;&lt;/a&gt; lnet: use number of wrs to calculate CQEs&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 24294b843f79a1167f19d230ff1ab5c1a5cd88e7&lt;/p&gt;</comment>
                            <comment id="247732" author="gerrit" created="Sat, 25 May 2019 20:25:38 +0000"  >&lt;p&gt;Andreas Dilger (adilger@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/34933/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/34933/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12279&quot; title=&quot;client got evicted due to network issue.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12279&quot;&gt;&lt;del&gt;LU-12279&lt;/del&gt;&lt;/a&gt; lnet: use number of wrs to calculate CQEs&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 61270a9330e710948ff9097495845692a6450ccd&lt;/p&gt;</comment>
                            <comment id="247735" author="pjones" created="Sat, 25 May 2019 21:48:48 +0000"  >&lt;p&gt;Landed for 2.13&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="54787">LU-11931</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i00g1j:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>