<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:18:20 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-1632] FID sequence numbers not working properly with filesystems formatted using 1.8?</title>
                <link>https://jira.whamcloud.com/browse/LU-1632</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;On a 2.1 filesystem - server and client both running 2.1.2, and a filesystem created with Lustre 2.1.x, many files are created on the same client using the same FID sequence number. That&apos;s how I expect it to work.&lt;/p&gt;

&lt;p&gt;With servers and clients running 2.1.2, but with a filesystem that was originally created with Lustre 1.8, only one file is created per sequence number before the client requests another one from the MDS. For example, for several consecutive files created on the same client, I get FIDs like this:&lt;/p&gt;

&lt;p&gt;[0x33c0e251df8:0x1:0x0]&lt;br/&gt;
[0x33c0e254b37:0x1:0x0]&lt;br/&gt;
[0x33c0e257876:0x1:0x0]&lt;br/&gt;
[0x33c0e25a5b5:0x1:0x0]&lt;br/&gt;
[0x33c0e25d2f4:0x1:0x0]&lt;br/&gt;
[0x33c0e260033:0x1:0x0]&lt;/p&gt;

&lt;p&gt;lctl get_param &apos;seq.cli-srv-nbp*.*&apos; shows a space of [0x0 - 0x0]:0:0 for the filesystems that were formatted under Lustre 1.8.&lt;/p&gt;

&lt;p&gt;Is this the way it&apos;s supposed to work?&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
Jason&lt;/p&gt;</description>
                <environment>Lustre 2.1.2</environment>
        <key id="15225">LU-1632</key>
            <summary>FID sequence numbers not working properly with filesystems formatted using 1.8?</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="di.wang">Di Wang</assignee>
                                    <reporter username="rappleye">jason.rappleye@nasa.gov</reporter>
                        <labels>
                            <label>fid</label>
                    </labels>
                <created>Fri, 13 Jul 2012 19:22:13 +0000</created>
                <updated>Thu, 7 Nov 2013 22:22:24 +0000</updated>
                            <resolved>Fri, 21 Dec 2012 13:38:00 +0000</resolved>
                                    <version>Lustre 2.3.0</version>
                    <version>Lustre 2.4.0</version>
                    <version>Lustre 2.1.2</version>
                                    <fixVersion>Lustre 2.4.0</fixVersion>
                    <fixVersion>Lustre 2.1.4</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>6</watches>
                                                                            <comments>
                            <comment id="41838" author="adilger" created="Fri, 13 Jul 2012 19:55:01 +0000"  >&lt;p&gt;Definitely seems unusual, and not what should be happening.&lt;/p&gt;</comment>
                            <comment id="41841" author="di.wang" created="Fri, 13 Jul 2012 20:15:43 +0000"  >&lt;p&gt;Hmm, definitely not right, since you are using a 2.x client. Are both the client and server little-endian nodes? Could you please collect a -1 debug log on the client side while creating these files?&lt;/p&gt;</comment>
                            <comment id="41846" author="rappleye" created="Fri, 13 Jul 2012 21:03:32 +0000"  >&lt;p&gt;files.txt: ls -l including fid for each file&lt;br/&gt;
dk.seq.debug.gz: logs collected while creating each file, with all debug flags enabled&lt;/p&gt;</comment>
                            <comment id="41847" author="rappleye" created="Fri, 13 Jul 2012 21:05:48 +0000"  >&lt;p&gt;Interesting - this only happens when touching a file, e.g.&lt;/p&gt;

&lt;p&gt;$ for i in {1..100}; do touch foo-${i}; done&lt;/p&gt;

&lt;p&gt;If I write data to the file, it behaves as expected:&lt;/p&gt;

&lt;p&gt;$ for i in {1..100}; do echo foo &amp;gt; foo-${i}; done&lt;br/&gt;
$ for i in $(ls foo*); do echo -n &quot;$(ls -l $i) &quot;; lfs path2fid $i; done&lt;/p&gt;

&lt;p&gt;-rw-r--r-- 1 jrappley cstaff 4 Jul 13 18:04 foo-1 [0x33c5f3bb04a:0x1:0x0]&lt;br/&gt;
-rw-r--r-- 1 jrappley cstaff 4 Jul 13 18:04 foo-10 [0x33c5f3bb04a:0xa:0x0]&lt;br/&gt;
-rw-r--r-- 1 jrappley cstaff 4 Jul 13 18:04 foo-100 [0x33c5f3bb04a:0x64:0x0]&lt;br/&gt;
-rw-r--r-- 1 jrappley cstaff 4 Jul 13 18:04 foo-11 [0x33c5f3bb04a:0xb:0x0]&lt;br/&gt;
-rw-r--r-- 1 jrappley cstaff 4 Jul 13 18:04 foo-12 [0x33c5f3bb04a:0xc:0x0]&lt;br/&gt;
-rw-r--r-- 1 jrappley cstaff 4 Jul 13 18:04 foo-13 [0x33c5f3bb04a:0xd:0x0]&lt;br/&gt;
...&lt;/p&gt;

&lt;p&gt;Still smells like a bug, though.&lt;/p&gt;</comment>
                            <comment id="41850" author="adilger" created="Fri, 13 Jul 2012 22:07:02 +0000"  >&lt;p&gt;Definitely shouldn&apos;t be happening this way.  This causes many orders of magnitude (10^5) too many sequences to be allocated by the MDS.  While it isn&apos;t fatal, it isn&apos;t what we expect and would cause some long-term overhead on the clients to have to fetch so many new FLDB entries.&lt;/p&gt;</comment>
                            <comment id="41858" author="di.wang" created="Sat, 14 Jul 2012 22:29:12 +0000"  >&lt;p&gt;Hmm, this seems very strange. Did you umount/mount between the touch and echo tests? Did you run both on the same client? It seems the seq width of this client, which is exposed under /proc, was somehow reset to 0. Could you please try this:&lt;/p&gt;

&lt;p&gt;lctl get_param seq.*.width&lt;/p&gt;

&lt;p&gt;And post the result here. Thanks.&lt;/p&gt;
</comment>
                            <comment id="41909" author="rappleye" created="Mon, 16 Jul 2012 17:29:45 +0000"  >&lt;p&gt;No umount/mount, and the client didn&apos;t disconnect/reconnect to the MDS, either.&lt;/p&gt;

&lt;p&gt;Width is fine, but space isn&apos;t what I&apos;d expect:&lt;/p&gt;

&lt;p&gt;$ lctl get_param &apos;seq.cli-srv-nbp*.*&apos;&lt;br/&gt;
seq.cli-srv-nbp6-MDT0000-mdc-ffff88040990cc00.fid=[0x0:0x0:0x0]&lt;br/&gt;
seq.cli-srv-nbp6-MDT0000-mdc-ffff88040990cc00.server=nbp6-MDT0000_UUID&lt;br/&gt;
seq.cli-srv-nbp6-MDT0000-mdc-ffff88040990cc00.space=[0x0 - 0x0]:0:0&lt;br/&gt;
seq.cli-srv-nbp6-MDT0000-mdc-ffff88040990cc00.width=131072&lt;/p&gt;

&lt;p&gt;Note that I can reproduce this on multiple filesystems and from different clients.&lt;/p&gt;

&lt;p&gt;We have six production filesystems; one was created with 2.1.x, and the rest were 1.8 before upgrading to 2.1. We did not use the Xyratex migration patch.&lt;/p&gt;

&lt;p&gt;Since upgrading from 1.8 to 2.1.1, and now 2.1.2, we&apos;ve had several incidents of high load average on our MDSes that are apparently due to a large number of SEQ_QUERY RPCs. They might be related to this issue. We see many mdss threads with this stack trace:&lt;/p&gt;

&lt;pre&gt; [&amp;lt;ffffffffa0577a13&amp;gt;] ? cfs_alloc+0x63/0x90 [libcfs]
 [&amp;lt;ffffffff815221f5&amp;gt;] schedule_timeout+0x215/0x2e0
 [&amp;lt;ffffffffa079b6c4&amp;gt;] ? sptlrpc_svc_alloc_rs+0x74/0x2d0 [ptlrpc]
 [&amp;lt;ffffffffa076d2b4&amp;gt;] ? lustre_msg_add_version+0x94/0x110 [ptlrpc]
 [&amp;lt;ffffffff81523112&amp;gt;] __down+0x72/0xb0
 [&amp;lt;ffffffff81095e11&amp;gt;] down+0x41/0x50
 [&amp;lt;ffffffffa08a2531&amp;gt;] seq_server_alloc_meta+0x41/0x720 [fid]
 [&amp;lt;ffffffffa063c830&amp;gt;] ? lustre_swab_lu_seq_range+0x0/0x30 [obdclass]
 [&amp;lt;ffffffffa08a2fc8&amp;gt;] seq_query+0x3b8/0x680 [fid]
 [&amp;lt;ffffffffa076c004&amp;gt;] ? lustre_msg_get_opc+0x94/0x100 [ptlrpc]
 [&amp;lt;ffffffffa0bdfc65&amp;gt;] mdt_handle_common+0x8d5/0x1810 [mdt]
 [&amp;lt;ffffffffa076c004&amp;gt;] ? lustre_msg_get_opc+0x94/0x100 [ptlrpc]
 [&amp;lt;ffffffffa0be0c15&amp;gt;] mdt_mdss_handle+0x15/0x20 [mdt]&lt;/pre&gt;

&lt;p&gt;and one like this:&lt;/p&gt;

&lt;pre&gt; [&amp;lt;ffffffff810902de&amp;gt;] ? prepare_to_wait+0x4e/0x80
 [&amp;lt;ffffffffa0a79785&amp;gt;] jbd2_log_wait_commit+0xc5/0x140 [jbd2]
 [&amp;lt;ffffffff8108fff0&amp;gt;] ? autoremove_wake_function+0x0/0x40
 [&amp;lt;ffffffffa0a79836&amp;gt;] ? __jbd2_log_start_commit+0x36/0x40 [jbd2]
 [&amp;lt;ffffffffa0a71b4b&amp;gt;] jbd2_journal_stop+0x2cb/0x320 [jbd2]
 [&amp;lt;ffffffffa0aca048&amp;gt;] __ldiskfs_journal_stop+0x68/0xa0 [ldiskfs]
 [&amp;lt;ffffffffa0c448f8&amp;gt;] osd_trans_stop+0xb8/0x290 [osd_ldiskfs]
 [&amp;lt;ffffffffa08a3b06&amp;gt;] ? seq_store_write+0xc6/0x2b0 [fid]
 [&amp;lt;ffffffffa08a3867&amp;gt;] seq_store_trans_stop+0x57/0xe0 [fid]
 [&amp;lt;ffffffffa08a3d8c&amp;gt;] seq_store_update+0x9c/0x1e0 [fid]
 [&amp;lt;ffffffffa08a299a&amp;gt;] seq_server_alloc_meta+0x4aa/0x720 [fid]
 [&amp;lt;ffffffffa063c830&amp;gt;] ? lustre_swab_lu_seq_range+0x0/0x30 [obdclass]
 [&amp;lt;ffffffffa08a2fc8&amp;gt;] seq_query+0x3b8/0x680 [fid]
 [&amp;lt;ffffffffa076c004&amp;gt;] ? lustre_msg_get_opc+0x94/0x100 [ptlrpc]
 [&amp;lt;ffffffffa0bdfc65&amp;gt;] mdt_handle_common+0x8d5/0x1810 [mdt]
 [&amp;lt;ffffffffa076c004&amp;gt;] ? lustre_msg_get_opc+0x94/0x100 [ptlrpc]
 [&amp;lt;ffffffffa0be0c15&amp;gt;] mdt_mdss_handle+0x15/0x20 [mdt]&lt;/pre&gt;

&lt;p&gt;I haven&apos;t looked into how the sequence allocation works on the server side, but my first guess is that we&apos;re bound by the time it takes to commit the latest (sequence number, width) to disk (in the OI)? Of course, if we had fewer SEQ_QUERY RPCs being issued by clients, there might not be a problem!&lt;/p&gt;

&lt;p&gt;I&apos;m going to use SystemTap to see if I can understand what&apos;s going on. I&apos;ll report back what I find.&lt;/p&gt;</comment>
                            <comment id="41911" author="di.wang" created="Mon, 16 Jul 2012 19:57:45 +0000"  >&lt;p&gt;Ah, I know where the problem is. You did not erase the config log when you upgraded from 1.8 to 2.1, right? The problem is here:&lt;/p&gt;

&lt;pre&gt;void ll_delete_inode(struct inode *inode)
{
        struct ll_sb_info *sbi = ll_i2sbi(inode);
        int rc;
        ENTRY;

        rc = obd_fid_delete(sbi-&amp;gt;ll_md_exp, ll_inode2fid(inode));
        if (rc)
                CERROR(&quot;fid_delete() failed, rc %d\n&quot;, rc);&lt;/pre&gt;

&lt;p&gt;This calls obd_fid_delete to delete the lmv object, but since you did not erase the config log, the lmv layer was never created at all. The call then goes directly to the mdc layer, where it does something obsolete that is completely wrong in the 2.1 structure. I will cook up a fix right now.&lt;/p&gt;
</comment>
                            <comment id="41912" author="rappleye" created="Mon, 16 Jul 2012 19:58:09 +0000"  >&lt;p&gt;My investigation with SystemTap so far has shown that every time seq_client_alloc_fid is called, the seq parameter is zeroed out. Some time after the call to seq_client_alloc_fid, mdc_fid_delete is called. It calls seq_client_flush, which zeros out the FID.&lt;/p&gt;

&lt;p&gt;On the filesystem that was formatted with Lustre 2.1, this call sequence does not happen.&lt;/p&gt;</comment>
                            <comment id="41913" author="di.wang" created="Mon, 16 Jul 2012 20:07:51 +0000"  >&lt;p&gt;Btw: this seems like a pretty serious problem, since it causes one extra RPC for every create.&lt;/p&gt;</comment>
                            <comment id="41914" author="rappleye" created="Mon, 16 Jul 2012 20:13:15 +0000"  >&lt;p&gt;Yup, I see that in the `lctl dl` output - on the clients, the filesystem that&apos;s OK has an lmv device, while the rest don&apos;t.&lt;/p&gt;

&lt;p&gt;Will unmounting the MGS, removing the client config log, and remounting the MGS cause the client config log to be regenerated, presumably allowing new client mounts to pick up the correct config log? Or do we need to do the usual `tunefs.lustre --writeconf` procedure? Though that will only be slightly less intrusive for us than a client upgrade.&lt;/p&gt;

&lt;p&gt;We&apos;ve definitely seen operational issues due to the extra RPCs - see my previous comment with stack traces. We&apos;ve seen load averages of ~500 while all of the mdss threads are processing requests.&lt;/p&gt;</comment>
                            <comment id="41916" author="rappleye" created="Mon, 16 Jul 2012 20:17:42 +0000"  >&lt;p&gt;Also, I don&apos;t see anything in the manual regarding deleting the config logs as part of the 1.8 -&amp;gt; 2.x upgrade procedure.&lt;/p&gt;</comment>
                            <comment id="41918" author="di.wang" created="Mon, 16 Jul 2012 20:34:29 +0000"  >&lt;p&gt;No, erasing the config log should indeed not be necessary. But this problem only happens if you do not erase the log, so it is a bug. &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/sad.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt; Deleting only the client config log would probably not work right now, and you probably need the tunefs.lustre --writeconf procedure here.&lt;/p&gt;</comment>
                            <comment id="41920" author="di.wang" created="Mon, 16 Jul 2012 21:32:59 +0000"  >&lt;p&gt;&lt;a href=&quot;http://review.whamcloud.com/#change,3422&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,3422&lt;/a&gt;   Here is the fix based on b2_1.&lt;/p&gt;</comment>
                            <comment id="41934" author="rappleye" created="Tue, 17 Jul 2012 11:59:18 +0000"  >&lt;p&gt;Would it be sufficient to use this patch (after it&apos;s been reviewed, of course) to fix this problem, at least until we can take the filesystem down to regenerate the client config logs? Might there be other unintended consequences of not having the lmv layer in place?&lt;/p&gt;

&lt;p&gt;I ask because it&apos;s relatively easy to update the Lustre clients, versus taking each filesystem down.&lt;/p&gt;</comment>
                            <comment id="41935" author="adilger" created="Tue, 17 Jul 2012 12:22:55 +0000"  >&lt;p&gt;Jason, Di can answer authoritatively, but I believe the fix on the client should be enough to resolve the problem.  The LMV layer is only needed on the client when DNE is enabled on the server.  This means you have at least until 2.4 to regenerate the config.&lt;/p&gt;</comment>
                            <comment id="41941" author="di.wang" created="Tue, 17 Jul 2012 15:52:13 +0000"  >&lt;p&gt;Yes, this patch, which is client-side only, should be enough to fix the problem. As Andreas said, the LMV layer is only needed when you enable DNE on your system, which will not happen until 2.4.&lt;/p&gt;</comment>
                            <comment id="41942" author="jaylan" created="Tue, 17 Jul 2012 16:09:12 +0000"  >&lt;p&gt;What is DNE?&lt;br/&gt;
Also, on the filesystems where we did not erase the config log when we upgraded, there is no lmv device. With this client-side patch we still will not see the lmv device, but it will work correctly. Is my understanding correct? Thanks!&lt;/p&gt;</comment>
                            <comment id="41943" author="di.wang" created="Tue, 17 Jul 2012 16:27:27 +0000"  >&lt;p&gt;DNE means Distributed NamespacE, with which you can have multiple Metadata targets, &lt;a href=&quot;http://wiki.whamcloud.com/display/PUB/Remote+Directories+Solution+Architecture&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://wiki.whamcloud.com/display/PUB/Remote+Directories+Solution+Architecture&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Yes, if you apply this patch on the client side, you do not need to erase the config log. It will work correctly even when there is no lmv device.&lt;/p&gt;</comment>
                            <comment id="47410" author="jaylan" created="Mon, 5 Nov 2012 14:34:11 +0000"  >&lt;p&gt;I tried to forward-port the 2.1 patch to 2.3, but it seems some of the routines involved have changed.&lt;/p&gt;

&lt;p&gt;Does the 2.3 client still need the patch?&lt;/p&gt;</comment>
                            <comment id="47426" author="di.wang" created="Mon, 5 Nov 2012 23:21:36 +0000"  >&lt;p&gt;Yes, the 2.3 client needs this patch. Hmm, it seems the patch has only landed on 2.1 so far.&lt;/p&gt;</comment>
                            <comment id="47692" author="di.wang" created="Mon, 12 Nov 2012 14:17:30 +0000"  >&lt;p&gt;The patch has landed on 2.1, and I will make a patch for 2.4 soon.&lt;/p&gt;</comment>
                            <comment id="47946" author="di.wang" created="Fri, 16 Nov 2012 13:47:39 +0000"  >&lt;p&gt;&lt;a href=&quot;http://review.whamcloud.com/#change,4606&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,4606&lt;/a&gt; patch for current master.&lt;/p&gt;</comment>
                            <comment id="49558" author="jlevi" created="Fri, 21 Dec 2012 13:38:00 +0000"  >&lt;p&gt;Landed to Master&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="18881">LU-3318</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="11689" name="dk.seq.debug.gz" size="1583915" author="rappleye" created="Fri, 13 Jul 2012 21:03:32 +0000"/>
                            <attachment id="11688" name="files.txt" size="7492" author="rappleye" created="Fri, 13 Jul 2012 21:03:32 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzv347:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>4004</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>