<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:32:35 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-3286] recovery-double-scale test_pairwise_fail: FAIL: Restart of ost2 failed!</title>
                <link>https://jira.whamcloud.com/browse/LU-3286</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;While running recovery-double-scale test with FSTYPE=zfs and FAILURE_MODE=HARD to verify patch &lt;a href=&quot;http://review.whamcloud.com/6258&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/6258&lt;/a&gt;, the test failed as follows:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;==== START === test 1: failover MDS, then OST ==========
==== Checking the clients loads BEFORE failover -- failure NOT OK
&amp;lt;snip&amp;gt;
Done checking client loads. Failing type1=MDS item1=mds1 ... 
CMD: wtm-82 /usr/sbin/lctl dl
Failing mds1 on wtm-82
CMD: wtm-82 zpool set cachefile=none lustre-mdt1; sync
+ pm -h powerman --reset wtm-82
Command completed successfully
reboot facets: mds1
+ pm -h powerman --on wtm-82
Command completed successfully
Failover mds1 to wtm-83
21:37:40 (1367901460) waiting for wtm-83 network 900 secs ...
21:37:40 (1367901460) network interface is UP
CMD: wtm-83 hostname
mount facets: mds1
CMD: wtm-83 zpool list -H lustre-mdt1 &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 ||
			zpool import -f -o cachefile=none lustre-mdt1
Starting mds1:   lustre-mdt1/mdt1 /mnt/mds1
CMD: wtm-83 mkdir -p /mnt/mds1; mount -t lustre   		                   lustre-mdt1/mdt1 /mnt/mds1
CMD: wtm-83 PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/utils/gss:/usr/lib64/lustre/utils:/usr/lib64/openmpi/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin::/sbin:/bin:/usr/sbin: NAME=ncli sh rpc.sh set_default_debug \&quot;-1\&quot; \&quot;all -lnet -lnd -pinger\&quot; 256 
CMD: wtm-83 zfs get -H -o value lustre:svname 		                           lustre-mdt1/mdt1 2&amp;gt;/dev/null
Started lustre-MDT0000
                            Failing type2=OST item2=ost4 ... 
CMD: wtm-85 /usr/sbin/lctl dl
CMD: wtm-85 /usr/sbin/lctl dl
CMD: wtm-85 /usr/sbin/lctl dl
CMD: wtm-85 zpool set cachefile=none lustre-ost4; sync
CMD: wtm-85 zpool set cachefile=none lustre-ost6; sync
Failing ost2,ost4,ost6 on wtm-85
CMD: wtm-85 zpool set cachefile=none lustre-ost2; sync
+ pm -h powerman --reset wtm-85
Command completed successfully
reboot facets: ost2,ost4,ost6
+ pm -h powerman --on wtm-85
Command completed successfully
Failover ost2 to wtm-84
Failover ost4 to wtm-84
Failover ost6 to wtm-84
21:38:19 (1367901499) waiting for wtm-84 network 900 secs ...
21:38:19 (1367901499) network interface is UP
CMD: wtm-84 hostname
mount facets: ost2,ost4,ost6
CMD: wtm-84 zpool list -H lustre-ost2 &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 ||
			zpool import -f -o cachefile=none lustre-ost2
Starting ost2:   lustre-ost2/ost2 /mnt/ost2
CMD: wtm-84 mkdir -p /mnt/ost2; mount -t lustre   		                   lustre-ost2/ost2 /mnt/ost2
wtm-84: mount.lustre: mount lustre-ost2/ost2 at /mnt/ost2 failed: Input/output error
wtm-84: Is the MGS running?
Start of lustre-ost2/ost2 on ost2 failed 5
 recovery-double-scale test_pairwise_fail: @@@@@@ FAIL: Restart of ost2 failed! 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Dmesg on OSS wtm-84 showed that:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;LustreError: 9681:0:(obd_mount_server.c:1123:server_register_target()) lustre-OST0001: error registering with the MGS: rc = -5 (not fatal)
LustreError: 6180:0:(client.c:1052:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff88062faed400 x1434348208262360/t0(0) o101-&amp;gt;MGC10.10.18.253@tcp@10.10.18.253@tcp:26/25 lens 328/344 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1
LustreError: 6180:0:(client.c:1052:ptlrpc_import_delay_req()) Skipped 1 previous similar message
LustreError: 15c-8: MGC10.10.18.253@tcp: The configuration from log &apos;lustre-OST0001&apos; failed (-5). This may be the result of communication errors between this node and the MGS, a bad configuration, or other errors. See the syslog for more information.
LustreError: 9681:0:(obd_mount_server.c:1257:server_start_targets()) failed to start server lustre-OST0001: -5
LustreError: 9681:0:(obd_mount_server.c:1699:server_fill_super()) Unable to start targets: -5
LustreError: 9681:0:(obd_mount_server.c:844:lustre_disconnect_lwp()) lustre-MDT0000-lwp-OST0001: Can&apos;t end config log lustre-client.
LustreError: 9681:0:(obd_mount_server.c:1426:server_put_super()) lustre-OST0001: failed to disconnect lwp. (rc=-2)
LustreError: 9681:0:(obd_mount_server.c:1456:server_put_super()) no obd lustre-OST0001
Lustre: server umount lustre-OST0001 complete
LustreError: 9681:0:(obd_mount.c:1267:lustre_fill_super()) Unable to mount  (-5)
Lustre: DEBUG MARKER: /usr/sbin/lctl mark  recovery-double-scale test_pairwise_fail: @@@@@@ FAIL: Restart of ost2 failed!
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Dmesg on MDS wtm-83 showed that:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Lustre: DEBUG MARKER: Failing type2=OST item2=ost4 ...
Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 4 clients reconnect
Lustre: lustre-MDT0000: Recovery over after 0:08, of 4 clients 4 recovered and 0 were evicted.
Lustre: 5225:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1367901499/real 1367901499]  req@ffff880c17898400 x1434348659147084/t0(0) o400-&amp;gt;lustre-OST0001-osc-MDT0000@10.10.19.26@tcp:28/4 lens 224/224 e 0 to 1 dl 1367901543 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: lustre-OST0003-osc-MDT0000: Connection to lustre-OST0003 (at 10.10.19.26@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: 5225:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 1 previous similar message
Lustre: lustre-OST0005-osc-MDT0000: Connection to lustre-OST0005 (at 10.10.19.26@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: DEBUG MARKER: /usr/sbin/lctl mark  recovery-double-scale test_pairwise_fail: @@@@@@ FAIL: Restart of ost2 failed! 
Lustre: DEBUG MARKER: recovery-double-scale test_pairwise_fail: @@@@@@ FAIL: Restart of ost2 failed!
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Maloo report:&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/ebe1f318-b6e0-11e2-b6f1-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/ebe1f318-b6e0-11e2-b6f1-52540035b04c&lt;/a&gt;&lt;/p&gt;</description>
                <environment>&lt;br/&gt;
FSTYPE=zfs&lt;br/&gt;
FAILURE_MODE=HARD&lt;br/&gt;
TEST_GROUP=failover&lt;br/&gt;
</environment>
        <key id="18728">LU-3286</key>
            <summary>recovery-double-scale test_pairwise_fail: FAIL: Restart of ost2 failed!</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="laisiyao">Lai Siyao</assignee>
                                    <reporter username="yujian">Jian Yu</reporter>
                        <labels>
                            <label>zfs</label>
                    </labels>
                <created>Tue, 7 May 2013 07:19:55 +0000</created>
                <updated>Tue, 31 Dec 2013 15:52:21 +0000</updated>
                            <resolved>Fri, 29 Nov 2013 14:15:03 +0000</resolved>
                                    <version>Lustre 2.4.0</version>
                    <version>Lustre 2.4.1</version>
                                    <fixVersion>Lustre 2.6.0</fixVersion>
                    <fixVersion>Lustre 2.5.1</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>9</watches>
                <comments>
                            <comment id="57840" author="jlevi" created="Tue, 7 May 2013 17:34:29 +0000"  >&lt;p&gt;Lai,&lt;br/&gt;
Could you please comment on this one?&lt;br/&gt;
Thank you!&lt;/p&gt;</comment>
                            <comment id="57908" author="laisiyao" created="Wed, 8 May 2013 15:31:28 +0000"  >&lt;p&gt;Yujian, I saw &lt;a href=&quot;http://review.whamcloud.com/#change,6258&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,6258&lt;/a&gt; passed in HARD failover mode, does it mean this one is fixed?&lt;/p&gt;</comment>
                            <comment id="58002" author="yujian" created="Thu, 9 May 2013 10:12:50 +0000"  >&lt;blockquote&gt;&lt;p&gt;Yujian, I saw &lt;a href=&quot;http://review.whamcloud.com/#change,6258&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,6258&lt;/a&gt; passed in HARD failover mode, does it mean this one is fixed?&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Hi Lai, the recovery-double-scale test with FSTYPE=zfs still failed with this issue.&lt;/p&gt;

&lt;p&gt;I&apos;m performing the test with FSTYPE=ldiskfs under the same configuration and will update the ticket with Maloo report.&lt;/p&gt;</comment>
                            <comment id="58020" author="yujian" created="Thu, 9 May 2013 14:14:02 +0000"  >&lt;blockquote&gt;&lt;p&gt;I&apos;m performing the test with FSTYPE=ldiskfs under the same configuration and will update the ticket with Maloo report.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;recovery-double-scale test passed with FSTYPE=ldiskfs and FAILURE_MODE=HARD:&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sessions/3ce1b7bc-b8b2-11e2-8742-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sessions/3ce1b7bc-b8b2-11e2-8742-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="58151" author="laisiyao" created="Fri, 10 May 2013 16:16:57 +0000"  >&lt;p&gt;The log shows that the failed OSTs kept connecting to the old MGS nid and never tried the failover nid. It&apos;s a bit strange, because some OSTs can connect to the failover nid.&lt;/p&gt;

&lt;p&gt;I&apos;ll need more time to analyse the logs.&lt;/p&gt;</comment>
                            <comment id="58575" author="laisiyao" created="Wed, 15 May 2013 15:34:52 +0000"  >&lt;p&gt;For ldiskfs test:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;10000000:01000000:20.0:1368528132.031673:0:14745:0:(mgc_request.c:1763:mgc_process_cfg_log()) Failed to get MGS log lustre-OST0001, using local copy for now, will try to update later.
...
10000000:01000000:20.0:1368528132.040180:0:14745:0:(mgc_request.c:1871:mgc_process_log()) MGC10.10.18.253@tcp: configuration from log &apos;lustre-OST0001&apos; succeeded (0).
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;While zfs:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;10000000:00000001:16.0:1367901542.341862:0:9681:0:(mgc_request.c:1774:mgc_process_cfg_log()) Process leaving via out_pop (rc=18446744073709551611 : -5 : 0xfffffffffffffffb)
...
10000000:00000001:16.0:1367901542.341879:0:9681:0:(mgc_request.c:1982:mgc_process_config()) Process leaving (rc=18446744073709551611 : -5 : fffffffffffffffb)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;That is, in the ldiskfs test the OST could find a local copy of its config log upon MGS connection failure and use it to start, while in the zfs test no local copy was available and the start failed. I&apos;ll look into why zfs doesn&apos;t have a copy tomorrow.&lt;/p&gt;</comment>
                            <comment id="58640" author="laisiyao" created="Thu, 16 May 2013 06:16:38 +0000"  >&lt;p&gt;The root cause is that the MGC llog local copy is made in lvfs context, so currently only the ldiskfs backend filesystem is supported. As a result, on a zfs-based server, upon double failure the OST can&apos;t get its config log and fails to mount.&lt;/p&gt;

&lt;p&gt;I can&apos;t find the original zfs support design doc, but this should be a known issue.&lt;/p&gt;</comment>
                            <comment id="58855" author="yujian" created="Mon, 20 May 2013 05:03:03 +0000"  >&lt;p&gt;Lustre Tag: v2_4_0_RC1&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-master/1501/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-master/1501/&lt;/a&gt;&lt;br/&gt;
Distro/Arch: RHEL6.4/x86_64&lt;br/&gt;
FSTYPE=zfs&lt;br/&gt;
TEST_GROUP=failover&lt;/p&gt;

&lt;p&gt;recovery-double-scale hit the same issue: &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/e72da246-c102-11e2-8854-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/e72da246-c102-11e2-8854-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="58953" author="laisiyao" created="Tue, 21 May 2013 05:08:18 +0000"  >&lt;p&gt;Hi Alex, any opinion on this?&lt;/p&gt;</comment>
                            <comment id="59352" author="bzzz" created="Mon, 27 May 2013 05:33:05 +0000"  >&lt;p&gt;supposed to be fixed with &lt;a href=&quot;http://review.whamcloud.com/#change,5049&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,5049&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="59364" author="yujian" created="Mon, 27 May 2013 10:06:23 +0000"  >&lt;blockquote&gt;&lt;p&gt;supposed to be fixed with &lt;a href=&quot;http://review.whamcloud.com/#change,5049&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,5049&lt;/a&gt;&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;I submitted &lt;a href=&quot;http://review.whamcloud.com/6459&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/6459&lt;/a&gt; to verify this patch together with &lt;a href=&quot;http://review.whamcloud.com/6429&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/6429&lt;/a&gt; under failover configuration. Let&apos;s wait for the test result.&lt;/p&gt;</comment>
                            <comment id="59396" author="yujian" created="Tue, 28 May 2013 02:28:58 +0000"  >&lt;blockquote&gt;&lt;p&gt;supposed to be fixed with &lt;a href=&quot;http://review.whamcloud.com/#change,5049&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,5049&lt;/a&gt;&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;recovery-double-scale still failed after failing over MDS and then OST:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;mount facets: ost1,ost2,ost3,ost4,ost5,ost6,ost7
CMD: wtm-14vm8 zpool list -H lustre-ost1 &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 ||
			zpool import -f -o cachefile=none -d /dev/lvm-OSS lustre-ost1
Starting ost1:   lustre-ost1/ost1 /mnt/ost1
CMD: wtm-14vm8 mkdir -p /mnt/ost1; mount -t lustre   		                   lustre-ost1/ost1 /mnt/ost1
wtm-14vm8: mount.lustre: mount lustre-ost1/ost1 at /mnt/ost1 failed: Input/output error
wtm-14vm8: Is the MGS running?
Start of lustre-ost1/ost1 on ost1 failed 5
 recovery-double-scale test_pairwise_fail: @@@@@@ FAIL: Restart of ost1 failed!
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Maloo report: &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/285b58c2-c6ed-11e2-be75-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/285b58c2-c6ed-11e2-be75-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="61362" author="tappro" created="Wed, 26 Jun 2013 12:54:51 +0000"  >&lt;p&gt;I could be wrong here, but perhaps the problem is the lack of support for all lsi_flags on ZFS?&lt;/p&gt;</comment>
                            <comment id="64386" author="yujian" created="Fri, 16 Aug 2013 07:26:47 +0000"  >&lt;p&gt;Lustre build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-b2_4/32/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-b2_4/32/&lt;/a&gt;&lt;br/&gt;
FSTYPE=zfs&lt;br/&gt;
FAILURE_MODE=HARD&lt;/p&gt;

&lt;p&gt;recovery-double-scale still failed after failing over MDS and then OST:&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/c55d6c84-05e8-11e3-b811-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/c55d6c84-05e8-11e3-b811-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="66065" author="yujian" created="Mon, 9 Sep 2013 15:39:48 +0000"  >&lt;p&gt;Lustre Tag: v2_4_1_RC1&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-b2_4/44/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-b2_4/44/&lt;/a&gt;&lt;br/&gt;
Distro/Arch: RHEL6.4/x86_64&lt;br/&gt;
Testgroup: failover&lt;br/&gt;
FSTYPE=zfs&lt;/p&gt;

&lt;p&gt;recovery-double-scale hit the same failure:&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/2864e15c-1757-11e3-aa87-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/2864e15c-1757-11e3-aa87-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="67236" author="laisiyao" created="Mon, 23 Sep 2013 14:49:36 +0000"  >&lt;p&gt;As Alex pointed out in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2059&quot; title=&quot;mgc to backup configuration on osd-based llogs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2059&quot;&gt;&lt;del&gt;LU-2059&lt;/del&gt;&lt;/a&gt;, lsi_srv_mnt is NULL for the zfs osd, so the llog local copy is not supported.&lt;/p&gt;

&lt;p&gt;osd_conf_get() suggests introducing a new fs abstraction layer rather than reading from the vfsmount structure directly. This looks reasonable because zfs-osd doesn&apos;t do a full mount but uses the DMU interface directly, which means zfs-osd has neither a vfsmount nor a superblock object.&lt;/p&gt;

&lt;p&gt;But this looks like a big project; I need to understand more of the zfs-related code to continue.&lt;/p&gt;</comment>
                            <comment id="67240" author="bzzz" created="Mon, 23 Sep 2013 14:58:26 +0000"  >&lt;p&gt;There is a patch to support local copies of llogs using the OSD API. I can&apos;t find it; please talk to Mike.&lt;/p&gt;</comment>
                            <comment id="67309" author="laisiyao" created="Tue, 24 Sep 2013 02:31:15 +0000"  >&lt;p&gt;The patch is &lt;a href=&quot;http://review.whamcloud.com/#/c/5049/19&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/5049/19&lt;/a&gt;, but it doesn&apos;t solve this issue: for zfs-osd the vfsmount object is NULL, so server_mgc_set_fs() is not called and the llog local copy cannot be made.&lt;/p&gt;</comment>
                            <comment id="67330" author="bzzz" created="Tue, 24 Sep 2013 09:46:19 +0000"  >&lt;p&gt;Sorry, can you explain in more detail? vfsmount isn&apos;t a valid notion in the server code anymore (except in osd-ldiskfs/). Have you contacted Mike?&lt;/p&gt;</comment>
                            <comment id="67331" author="laisiyao" created="Tue, 24 Sep 2013 10:35:52 +0000"  >&lt;p&gt;Mike is on the watching list.&lt;/p&gt;

&lt;p&gt;server_start_targets() calls server_mgc_set_fs() only when lsi-&amp;gt;lsi_srv_mnt is not NULL, because server_mgc_set_fs() takes a superblock argument. server_mgc_set_fs() then calls mgc_fs_setup() to set up the local configs dir.&lt;/p&gt;</comment>
                            <comment id="71619" author="laisiyao" created="Fri, 15 Nov 2013 13:22:55 +0000"  >&lt;p&gt;Patch is on &lt;a href=&quot;http://review.whamcloud.com/#/c/8286/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/8286/&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="72507" author="yujian" created="Fri, 29 Nov 2013 04:59:21 +0000"  >&lt;p&gt;Patch landed on master branch for Lustre 2.6.0.&lt;/p&gt;

&lt;p&gt;Hi Lai,&lt;br/&gt;
Could you please back-port the patch to Lustre b2_4 branch? Thanks.&lt;/p&gt;</comment>
                            <comment id="72513" author="laisiyao" created="Fri, 29 Nov 2013 08:00:41 +0000"  >&lt;p&gt;Yujian, this patch depends on &lt;a href=&quot;http://review.whamcloud.com/#/c/5049/19&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/5049/19&lt;/a&gt;, which has not been backported to 2.4 yet; should I backport both?&lt;/p&gt;</comment>
                            <comment id="72522" author="pjones" created="Fri, 29 Nov 2013 14:15:03 +0000"  >&lt;p&gt;Thanks Lai. That is probably too big a change to include in a maintenance release, so let&apos;s close this as fixed in 2.6.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvq93:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>8129</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>