<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:18:23 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-15446] Local recovery pings on MR nodes may not exercise all available paths</title>
                <link>https://jira.whamcloud.com/browse/LU-15446</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Typically, LNet peers do not perform discovery on themselves, so there is often a non-MR peer entry for each local interface. For example:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@kjcf01n05 ~]# lctl list_nids
10.253.100.9@o2ib
10.253.100.10@o2ib
[root@kjcf01n05 ~]# lnetctl peer show --nid 10.253.100.9@o2ib
peer:
    - primary nid: 10.253.100.9@o2ib
      Multi-Rail: False
      peer ni:
        - nid: 10.253.100.9@o2ib
          state: NA
[root@kjcf01n05 ~]# lnetctl peer show --nid 10.253.100.10@o2ib
peer:
    - primary nid: 10.253.100.10@o2ib
      Multi-Rail: False
      peer ni:
        - nid: 10.253.100.10@o2ib
          state: NA
[root@kjcf01n05 ~]#
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Because of this, LNet sets a &quot;preferred&quot; local NI to use when sending traffic to these non-MR peers. This prevents LNet recovery pings from exercising other paths. For example, consider a peer with two local interfaces, heth0 and heth1. We have the following paths for sending &lt;em&gt;to&lt;/em&gt; heth0:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;heth0 -&amp;gt; heth0
heth1 -&amp;gt; heth0&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;And the paths for sending to heth1:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;heth0 -&amp;gt; heth1
heth1 -&amp;gt; heth1&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Because of the preferred-NI logic for non-MR peers, whichever path is chosen first will be used for every future send to that NI (unless the peer entry is deleted, in which case a new path may be chosen). It is not clear whether these local recovery pings are particularly useful for ascertaining the health of local interfaces, but if they are, then LNet ought to be allowed to exercise all possible paths.&lt;/p&gt;</description>
                <environment></environment>
        <key id="67997">LU-15446</key>
            <summary>Local recovery pings on MR nodes may not exercise all available paths</summary>
                <type id="4" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11310&amp;avatarType=issuetype">Improvement</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="hornc">Chris Horn</assignee>
                                    <reporter username="hornc">Chris Horn</reporter>
                        <labels>
                    </labels>
                <created>Wed, 12 Jan 2022 20:51:28 +0000</created>
                <updated>Fri, 26 Aug 2022 16:31:13 +0000</updated>
                            <resolved>Mon, 7 Feb 2022 14:55:53 +0000</resolved>
                                                    <fixVersion>Lustre 2.15.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>2</watches>
                                                                            <comments>
                            <comment id="322511" author="gerrit" created="Wed, 12 Jan 2022 21:29:26 +0000"  >&lt;p&gt;&quot;Chris Horn &amp;lt;chris.horn@hpe.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/46078&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/46078&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15446&quot; title=&quot;Local recovery pings on MR nodes may not exercise all available paths&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15446&quot;&gt;&lt;del&gt;LU-15446&lt;/del&gt;&lt;/a&gt; lnet: Don&apos;t use pref NI for reserved portal&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 011a77e02255925eb29deaf6dddb24c2d969152d&lt;/p&gt;</comment>
                            <comment id="324637" author="hornc" created="Mon, 31 Jan 2022 20:47:59 +0000"  >&lt;p&gt;Test report for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15446&quot; title=&quot;Local recovery pings on MR nodes may not exercise all available paths&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15446&quot;&gt;&lt;del&gt;LU-15446&lt;/del&gt;&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;Build/execute test case from patch:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[hornc@ct7-adm lustre-filesystem]$ git fetch https://review.whamcloud.com/fs/lustre-release refs/changes/78/46078/3 &amp;amp;&amp;amp; git checkout FETCH_HEAD
remote: Counting objects: 3726, done
remote: Finding sources: 100% (1/1)
remote: Total 1 (delta 0), reused 1 (delta 0)
Unpacking objects: 100% (1/1), done.
From https://review.whamcloud.com/fs/lustre-release
 * branch                  refs/changes/78/46078/3 -&amp;gt; FETCH_HEAD
Previous HEAD position was 1eecd524de LU-15440 lnet: lnet_peer_data_present() memory leak
HEAD is now at b79f82c23c LU-15446 lnet: Don&apos;t use pref NI for reserved portal
[hornc@ct7-adm lustre-filesystem]$ git reset --soft HEAD^
[hornc@ct7-adm lustre-filesystem]$ git status
HEAD detached from FETCH_HEAD
Changes to be committed:
  (use &quot;git restore --staged &amp;lt;file&amp;gt;...&quot; to unstage)
	modified:   lnet/lnet/lib-move.c
	modified:   lustre/tests/sanity-lnet.sh

Untracked files:
  (use &quot;git add &amp;lt;file&amp;gt;...&quot; to include in what will be committed)
	lustre/tests/lutf/Makefile.in
	lustre/tests/lutf/src/Makefile.in

[hornc@ct7-adm lustre-filesystem]$ git reset HEAD lnet/lnet/lib-move.c
Unstaged changes after reset:
M	lnet/lnet/lib-move.c
[hornc@ct7-adm lustre-filesystem]$ git checkout lnet/lnet/lib-move.c
Updated 1 path from the index
[hornc@ct7-adm lustre-filesystem]$ git --no-pager diff --cached
diff --git a/lustre/tests/sanity-lnet.sh b/lustre/tests/sanity-lnet.sh
index 72e28eb497..c2d6f345e4 100755
--- a/lustre/tests/sanity-lnet.sh
+++ b/lustre/tests/sanity-lnet.sh
@@ -92,6 +92,7 @@ load_lnet() {
 }

 do_lnetctl() {
+	$LCTL mark &quot;$LNETCTL $@&quot;
 	echo &quot;$LNETCTL $@&quot;
 	$LNETCTL &quot;$@&quot;
 }
@@ -2348,6 +2349,59 @@ test_217() {
 }
 run_test 217 &quot;Don&apos;t leak memory when discovering peer with nnis &amp;lt;= 1&quot;

+test_218() {
+	reinit_dlc || return $?
+
+	[[ ${#INTERFACES[@]} -lt 2 ]] &amp;amp;&amp;amp;
+		skip &quot;Need two LNet interfaces&quot;
+
+	add_net &quot;tcp&quot; &quot;${INTERFACES[0]}&quot; || return $?
+
+	local nid1=$($LCTL list_nids | head -n 1)
+
+	do_lnetctl ping $nid1 ||
+		error &quot;ping failed&quot;
+
+	add_net &quot;tcp&quot; &quot;${INTERFACES[1]}&quot; || return $?
+
+	local nid2=$($LCTL list_nids | tail --lines 1)
+
+	do_lnetctl ping $nid2 ||
+		error &quot;ping failed&quot;
+
+	$LCTL net_drop_add -s $nid1 -d $nid1 -e local_error -r 1
+
+	do_lnetctl ping $nid1 &amp;amp;&amp;amp;
+		error &quot;ping should have failed&quot;
+
+	local health_recovered
+	local i
+
+	for i in $(seq 1 5); do
+		health_recovered=$($LNETCTL net show -v 2 |
+				   grep -c &apos;health value: 1000&apos;)
+
+		if [[ $health_recovered -ne 2 ]]; then
+			echo &quot;Wait 1 second for health to recover&quot;
+			sleep 1
+		else
+			break
+		fi
+	done
+
+	health_recovered=$($LNETCTL net show -v 2 |
+			   grep -c &apos;health value: 1000&apos;)
+
+	$LCTL net_drop_del -a
+
+	[[ $health_recovered -ne 2 ]] &amp;amp;&amp;amp;
+		do_lnetctl net show -v 2 | egrep -e nid -e health &amp;amp;&amp;amp;
+		error &quot;Health hasn&apos;t recovered&quot;
+
+	return 0
+}
+run_test 218 &quot;Local recovery pings should exercise all available paths&quot;
+
 test_230() {
 	# LU-12815
 	echo &quot;Check valid values; Should succeed&quot;
[hornc@ct7-adm lustre-filesystem]$ make -j 32
...
[root@ct7-adm tests]# cat /etc/modprobe.d/lustre.conf
options lnet networks=tcp(eth0,eth1)
[root@ct7-adm tests]# ./auster -N -v sanity-lnet --only 218
Started at Sat Jan 29 03:06:42 UTC 2022
ct7-adm: executing check_logdir /tmp/test_logs/2022-01-29/030642
Logging to shared log directory: /tmp/test_logs/2022-01-29/030642
ct7-adm: executing yml_node
IOC_LIBCFS_GET_NI error 22: Invalid argument
Client: 2.14.57.60
MDS: 2.14.57.60
OSS: 2.14.57.60
running: sanity-lnet ONLY=218
run_suite sanity-lnet /home/hornc/lustre-filesystem/lustre/tests/sanity-lnet.sh
-----============= acceptance-small: sanity-lnet ============----- Sat Jan 29 03:06:44 UTC 2022
Running: bash /home/hornc/lustre-filesystem/lustre/tests/sanity-lnet.sh
excepting tests:
opening /dev/obd failed: No such file or directory
hint: the kernel modules may not be loaded
Stopping clients: ct7-adm /mnt/lustre (opts:-f)
Stopping clients: ct7-adm /mnt/lustre2 (opts:-f)
modules unloaded.
ip netns exec test_ns ip addr add 10.1.2.3/31 dev test1pg
ip netns exec test_ns ip link set test1pg up
Loading modules from /home/hornc/lustre-filesystem/lustre
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: &apos;cpu_npartitions=2&apos;
../lnet/lnet/lnet options: &apos;networks=tcp(eth0,eth1) accept=all&apos;
ptlrpc/ptlrpc options: &apos;lbug_on_grant_miscount=1&apos;
quota/lquota options: &apos;hash_lqs_cur_bits=3&apos;
/home/hornc/lustre-filesystem/lustre/../lnet/utils/lnetctl net show
net:
    - net type: lo
      local NI(s):
        - nid: 0@lo
          status: up
    - net type: tcp
      local NI(s):
        - nid: 10.0.2.15@tcp
          status: up
          interfaces:
              0: eth0
        - nid: 10.73.10.10@tcp
          status: up
          interfaces:
              0: eth1
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:4d:77:d3 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 69598sec preferred_lft 69598sec
    inet6 fe80::5054:ff:fe4d:77d3/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:27:de:86 brd ff:ff:ff:ff:ff:ff
    inet 10.73.10.10/24 brd 10.73.10.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe27:de86/64 scope link
       valid_lft forever preferred_lft forever
10: test1pl@if2: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 5a:38:d6:b4:bd:2f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::5838:d6ff:feb4:bd2f/64 scope link
       valid_lft forever preferred_lft forever
Cleaning up LNet
modules unloaded.

== sanity-lnet test 218: Local recovery pings should exercise all available paths ========================================================== 03:06:49 (1643425609)
Loading LNet and configuring DLC
../lnet/lnet/lnet options: &apos;networks=tcp(eth0,eth1) accept=all&apos;
/home/hornc/lustre-filesystem/lustre/../lnet/utils/lnetctl lnet configure
/home/hornc/lustre-filesystem/lustre/../lnet/utils/lnetctl net add --net tcp --if eth0
/home/hornc/lustre-filesystem/lustre/../lnet/utils/lnetctl ping 10.0.2.15@tcp
ping:
    - primary nid: 10.0.2.15@tcp
      Multi-Rail: False
      peer ni:
        - nid: 10.0.2.15@tcp
/home/hornc/lustre-filesystem/lustre/../lnet/utils/lnetctl net add --net tcp --if eth1
/home/hornc/lustre-filesystem/lustre/../lnet/utils/lnetctl ping 10.73.10.10@tcp
ping:
    - primary nid: 10.73.10.10@tcp
      Multi-Rail: False
      peer ni:
        - nid: 10.0.2.15@tcp
        - nid: 10.73.10.10@tcp
Added drop rule 10.0.2.15@tcp-&amp;gt;10.0.2.15@tcp (1/1)
/home/hornc/lustre-filesystem/lustre/../lnet/utils/lnetctl ping 10.0.2.15@tcp
manage:
    - ping:
          errno: -1
          descr: failed to ping 10.0.2.15@tcp: Input/output error

Wait 1 second for health to recover
Wait 1 second for health to recover
Wait 1 second for health to recover
Wait 1 second for health to recover
Wait 1 second for health to recover
Removed 1 drop rules
        - nid: 0@lo
          health stats:
              health value: 0
        - nid: 10.0.2.15@tcp
          health stats:
              health value: 900
        - nid: 10.73.10.10@tcp
          health stats:
              health value: 1000
 sanity-lnet test_218: @@@@@@ FAIL: Health hasn&apos;t recovered
  Trace dump:
  = /home/hornc/lustre-filesystem/lustre/tests/test-framework.sh:6336:error()
  = /home/hornc/lustre-filesystem/lustre/tests/sanity-lnet.sh:2399:test_218()
  = /home/hornc/lustre-filesystem/lustre/tests/test-framework.sh:6640:run_one()
  = /home/hornc/lustre-filesystem/lustre/tests/test-framework.sh:6687:run_one_logged()
  = /home/hornc/lustre-filesystem/lustre/tests/test-framework.sh:6513:run_test()
  = /home/hornc/lustre-filesystem/lustre/tests/sanity-lnet.sh:2403:main()
Dumping lctl log to /tmp/test_logs/2022-01-29/030642/sanity-lnet.test_218.*.1643425617.log
Dumping logs only on local client.
FAIL 218 (9s)
Cleaning up LNet
opening /dev/obd failed: No such file or directory
hint: the kernel modules may not be loaded
modules unloaded.
sanity-lnet returned 1
Finished at Sat Jan 29 03:06:59 UTC 2022 in 17s
./auster: completed with rc 0
[root@ct7-adm tests]#
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Apply fix and re-test:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[hornc@ct7-adm lustre-filesystem]$ git fetch https://review.whamcloud.com/fs/lustre-release refs/changes/78/46078/3 &amp;amp;&amp;amp; git reset --hard FETCH_HEAD
From https://review.whamcloud.com/fs/lustre-release
 * branch                  refs/changes/78/46078/3 -&amp;gt; FETCH_HEAD
HEAD is now at b79f82c23c LU-15446 lnet: Don&apos;t use pref NI for reserved portal
[hornc@ct7-adm lustre-filesystem]$ make -j 32
...
[root@ct7-adm tests]# ./auster -N -v sanity-lnet --only 218
Started at Sat Jan 29 03:08:13 UTC 2022
ct7-adm: executing check_logdir /tmp/test_logs/2022-01-29/030812
Logging to shared log directory: /tmp/test_logs/2022-01-29/030812
ct7-adm: executing yml_node
IOC_LIBCFS_GET_NI error 22: Invalid argument
Client: 2.14.57.60
MDS: 2.14.57.60
OSS: 2.14.57.60
running: sanity-lnet ONLY=218
run_suite sanity-lnet /home/hornc/lustre-filesystem/lustre/tests/sanity-lnet.sh
-----============= acceptance-small: sanity-lnet ============----- Sat Jan 29 03:08:15 UTC 2022
Running: bash /home/hornc/lustre-filesystem/lustre/tests/sanity-lnet.sh
excepting tests:
opening /dev/obd failed: No such file or directory
hint: the kernel modules may not be loaded
Stopping clients: ct7-adm /mnt/lustre (opts:-f)
Stopping clients: ct7-adm /mnt/lustre2 (opts:-f)
modules unloaded.
ip netns exec test_ns ip addr add 10.1.2.3/31 dev test1pg
ip netns exec test_ns ip link set test1pg up
Loading modules from /home/hornc/lustre-filesystem/lustre
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: &apos;cpu_npartitions=2&apos;
../lnet/lnet/lnet options: &apos;networks=tcp(eth0,eth1) accept=all&apos;
ptlrpc/ptlrpc options: &apos;lbug_on_grant_miscount=1&apos;
quota/lquota options: &apos;hash_lqs_cur_bits=3&apos;
/home/hornc/lustre-filesystem/lustre/../lnet/utils/lnetctl net show
net:
    - net type: lo
      local NI(s):
        - nid: 0@lo
          status: up
    - net type: tcp
      local NI(s):
        - nid: 10.0.2.15@tcp
          status: up
          interfaces:
              0: eth0
        - nid: 10.73.10.10@tcp
          status: up
          interfaces:
              0: eth1
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:4d:77:d3 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 69508sec preferred_lft 69508sec
    inet6 fe80::5054:ff:fe4d:77d3/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:27:de:86 brd ff:ff:ff:ff:ff:ff
    inet 10.73.10.10/24 brd 10.73.10.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe27:de86/64 scope link
       valid_lft forever preferred_lft forever
11: test1pl@if2: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 8e:e0:41:a6:d6:1c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::8ce0:41ff:fea6:d61c/64 scope link
       valid_lft forever preferred_lft forever
Cleaning up LNet
modules unloaded.

== sanity-lnet test 218: Local recovery pings should exercise all available paths ========================================================== 03:08:20 (1643425700)
Loading LNet and configuring DLC
../lnet/lnet/lnet options: &apos;networks=tcp(eth0,eth1) accept=all&apos;
/home/hornc/lustre-filesystem/lustre/../lnet/utils/lnetctl lnet configure
/home/hornc/lustre-filesystem/lustre/../lnet/utils/lnetctl net add --net tcp --if eth0
/home/hornc/lustre-filesystem/lustre/../lnet/utils/lnetctl ping 10.0.2.15@tcp
ping:
    - primary nid: 10.0.2.15@tcp
      Multi-Rail: False
      peer ni:
        - nid: 10.0.2.15@tcp
/home/hornc/lustre-filesystem/lustre/../lnet/utils/lnetctl net add --net tcp --if eth1
/home/hornc/lustre-filesystem/lustre/../lnet/utils/lnetctl ping 10.73.10.10@tcp
ping:
    - primary nid: 10.73.10.10@tcp
      Multi-Rail: False
      peer ni:
        - nid: 10.0.2.15@tcp
        - nid: 10.73.10.10@tcp
Added drop rule 10.0.2.15@tcp-&amp;gt;10.0.2.15@tcp (1/1)
/home/hornc/lustre-filesystem/lustre/../lnet/utils/lnetctl ping 10.0.2.15@tcp
manage:
    - ping:
          errno: -1
          descr: failed to ping 10.0.2.15@tcp: Input/output error

Removed 1 drop rules
PASS 218 (2s)
== sanity-lnet test complete, duration 7 sec ============= 03:08:22 (1643425702)
Cleaning up LNet
opening /dev/obd failed: No such file or directory
hint: the kernel modules may not be loaded
modules unloaded.
sanity-lnet returned 0
Finished at Sat Jan 29 03:08:25 UTC 2022 in 13s
./auster: completed with rc 0
[root@ct7-adm tests]#
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="325399" author="gerrit" created="Mon, 7 Feb 2022 04:43:30 +0000"  >&lt;p&gt;&quot;Oleg Drokin &amp;lt;green@whamcloud.com&amp;gt;&quot; merged in patch &lt;a href=&quot;https://review.whamcloud.com/46078/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/46078/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15446&quot; title=&quot;Local recovery pings on MR nodes may not exercise all available paths&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15446&quot;&gt;&lt;del&gt;LU-15446&lt;/del&gt;&lt;/a&gt; lnet: Don&apos;t use pref NI for reserved portal&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: a2815441381cb6cee8eb9865d9279541ea04828e&lt;/p&gt;</comment>
                            <comment id="325445" author="pjones" created="Mon, 7 Feb 2022 14:55:53 +0000"  >&lt;p&gt;Landed for 2.15&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i02euf:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                </customfields>
    </item>
</channel>
</rss>