<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:06:47 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-414] error looking up logfile</title>
                <link>https://jira.whamcloud.com/browse/LU-414</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Our admins tried to add 8 new OSS nodes to an existing lustre server cluster running 1.8.5.0-5chaos.  There were 16 existing OSS with 15 OSTs each, for a total of 240 old OSTs.  There are also 15 OSTs on each of the new OSS, for a total of 120 new OSTs.&lt;/p&gt;

&lt;p&gt;When the new OSTs were brought up, it looks like at least 54 of the OSTs failed to be configured correctly on the MDS, and are stuck in the IN (inactive) state according to &quot;lctl dl&quot;.  I don&apos;t see a pattern to which OSTs on which new OSS failed.&lt;/p&gt;

&lt;p&gt;This looks similar to bug 22658 that we have seen in the past:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2011-06-14 11:57:17 LustreError: 1432:0:(llog_lvfs.c:612:llog_lvfs_create()) error looking up logfile 0x10612404:0x0: rc -2
2011-06-14 11:57:17 LustreError: 1432:0:(llog_obd.c:200:llog_setup()) obd lsd-OST012e-osc ctxt 2 lop_setup=ffffffff885b3dc0 failed -2
2011-06-14 11:57:17 LustreError: 1432:0:(osc_request.c:4242:osc_llog_init()) failed LLOG_MDS_OST_ORIG_CTXT
2011-06-14 11:57:17 LustreError: 1432:0:(osc_request.c:4258:osc_llog_init()) osc &apos;lsd-OST012e-osc&apos; tgt &apos;lsd-MDT0000&apos; rc=-2
2011-06-14 11:57:17 LustreError: 1432:0:(osc_request.c:4260:osc_llog_init()) logid 0x10612404:0x0
2011-06-14 11:57:17 LustreError: 1432:0:(lov_log.c:253:lov_llog_init()) error osc_llog_init idx 302 osc &apos;lsd-OST012e-osc&apos; tgt &apos;lsd-MDT0000&apos; (rc=-2)
2011-06-14 11:57:17 LustreError: 1432:0:(llog_lvfs.c:612:llog_lvfs_create()) error looking up logfile 0x62800000028:0x10612404: rc -2
2011-06-14 11:57:17 LustreError: 1432:0:(llog_obd.c:200:llog_setup()) obd lsd-OST0130-osc ctxt 2 lop_setup=ffffffff885b3dc0 failed -2
2011-06-14 11:57:17 LustreError: 1444:0:(lov_log.c:161:lov_llog_origin_connect()) error osc_llog_connect tgt 302 (-107)
2011-06-14 11:57:17 LustreError: 1444:0:(mds_lov.c:1044:__mds_lov_synchronize()) lsd-MDT0000: lsd-OST012e_UUID failed at llog_origin_connect: -107
2011-06-14 11:57:17 Lustre: lsd-OST012e_UUID: Sync failed deactivating: rc -107
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The admins decided to reboot the MDS, but it is still unable to activate those OSTs (at least, I assume that it is the same set of OSTs):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2011-06-14 12:49:32 LustreError: 9611:0:(lov_log.c:161:lov_llog_origin_connect()) error osc_llog_connect tgt 258 (-107)
2011-06-14 12:49:32 LustreError: 9611:0:(mds_lov.c:1044:__mds_lov_synchronize()) lsd-MDT0000: lsd-OST0102_UUID failed at llog_origin_connect: -107
2011-06-14 12:49:32 Lustre: lsd-OST0102_UUID: Sync failed deactivating: rc -107
2011-06-14 12:49:32 LustreError: 9612:0:(lov_log.c:161:lov_llog_origin_connect()) error osc_llog_connect tgt 259 (-107)
2011-06-14 12:49:32 LustreError: 9646:0:(mds_lov.c:1044:__mds_lov_synchronize()) lsd-MDT0000: lsd-OST0125_UUID failed at llog_origin_connect: -107
2011-06-14 12:49:32 LustreError: 9646:0:(mds_lov.c:1044:__mds_lov_synchronize()) Skipped 20 previous similar messages
2011-06-14 12:49:32 Lustre: lsd-OST0125_UUID: Sync failed deactivating: rc -107
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Notice there is no warning about &quot;error looking up logfile&quot;, but lov_llog_origin_connect() is still failing.&lt;/p&gt;

&lt;p&gt;I suspect that lov_llog_origin_connect() is getting error code -107, ENOTCONN, from llog_obd2ops(), meaning that the llog_ctxt *ctxt is NULL.  I say that because, watching the logs, I see an RPC between the MDS and OSS nodes complete successfully, but I can&apos;t see any RPC being sent after the &quot;lov_llog_origin_connect()) connect 256/360&quot; lines in the log.&lt;/p&gt;
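&lt;p&gt;A minimal C sketch of that suspicion, with stand-in types (this is not the actual Lustre source): if the llog context was never set up, the ops lookup fails with -ENOTCONN before any RPC is sent.&lt;/p&gt;

<![CDATA[
```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Stand-in types; the real struct llog_ctxt has many more fields. */
struct llog_operations { int dummy; };
struct llog_ctxt { struct llog_operations *loc_logops; };

/* Sketch of the suspected llog_obd2ops() behavior: a NULL context
 * yields -ENOTCONN (-107) without any network traffic at all. */
static int llog_obd2ops(struct llog_ctxt *ctxt, struct llog_operations **lop)
{
        if (ctxt == NULL)
                return -ENOTCONN;
        *lop = ctxt->loc_logops;
        if (*lop == NULL)
                return -EOPNOTSUPP;
        return 0;
}
```
]]>

&lt;p&gt;That would match what the logs show: the ptlrpc connection succeeds, yet no llog RPC ever goes out.&lt;/p&gt;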

&lt;p&gt;It appears that at the ptlrpc level, the mdt and ost are in fact fully connected.  The import/export appear to be set up.&lt;/p&gt;

&lt;p&gt;I am beginning to suspect that the &quot;fix&quot; for bug 22658 that allows the mds to start up when there are missing log files just lets the server get stuck at this next point in the code.&lt;/p&gt;

&lt;p&gt;Also, I think there is pretty clearly some bug in Lustre&apos;s initial creation of ost llog files on the mds.&lt;/p&gt;
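&lt;p&gt;A stand-in illustration of that suspected failure mode (this is not Lustre code; the slot layout is hypothetical): catalog entries are written at off = idx * sizeof(entry), so writing a large index first leaves a gap below it for the lower slots. On a plain POSIX file that gap reads back as zeros; the question is whether the MDS write path reliably zeroed it.&lt;/p&gt;

<![CDATA[
```c
#include <assert.h>
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Hypothetical 16-byte slot; the real catalog entries differ. */
struct slot { uint64_t id; uint64_t seq; };

/* Mimic llog_put_cat_list(): each OST's entry lives at idx * sizeof(slot). */
static int put_slot(int fd, int idx, const struct slot *s)
{
        off_t off = (off_t)idx * sizeof(*s);
        return pwrite(fd, s, sizeof(*s), off) == (ssize_t)sizeof(*s) ? 0 : -1;
}

static int get_slot(int fd, int idx, struct slot *s)
{
        off_t off = (off_t)idx * sizeof(*s);
        return pread(fd, s, sizeof(*s), off) == (ssize_t)sizeof(*s) ? 0 : -1;
}
```
]]>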

&lt;p&gt;I am attaching the mds console log for now.  I can package up some more detailed lustre logs tomorrow.&lt;/p&gt;</description>
                <environment>CHAOS4.4 (RHEL5.4), lustre 1.8.5.0-5chaos</environment>
        <key id="11167">LU-414</key>
            <summary>error looking up logfile</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="hongchao.zhang">Hongchao Zhang</assignee>
                                    <reporter username="morrone">Christopher Morrone</reporter>
                        <labels>
                            <label>llnl</label>
                    </labels>
                <created>Tue, 14 Jun 2011 20:47:17 +0000</created>
                <updated>Wed, 11 Oct 2017 19:55:38 +0000</updated>
                            <resolved>Wed, 11 Oct 2017 19:55:38 +0000</resolved>
                                                    <fixVersion>Lustre 2.4.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>8</watches>
                                                                            <comments>
                            <comment id="16368" author="pjones" created="Wed, 15 Jun 2011 00:07:01 +0000"  >&lt;p&gt;Hongchao&lt;/p&gt;

&lt;p&gt;Could you please look at this one?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="16381" author="hongchao.zhang" created="Wed, 15 Jun 2011 08:21:50 +0000"  >&lt;p&gt;This error is a little different from 22658.&lt;br/&gt;
In this issue, the CATALOG of some OSCs doesn&apos;t exist, but there is a record for it in the CATALOG of the MDT, which causes &quot;llog_setup&quot; to fail in &quot;osc_llog_init&quot;.&lt;br/&gt;
In 22658, the CATALOG of the OSC does exist, but the actual plain log files (containing the real operation log info) don&apos;t exist.&lt;/p&gt;

&lt;p&gt;There are still two &quot;error looking up logfile&quot; errors after the MDS reboot in your attached console log:&lt;br/&gt;
...&lt;br/&gt;
2011-06-14 12:46:52 LustreError: 8762:0:(llog_lvfs.c:612:llog_lvfs_create()) error looking up logfile 0x10612404:0x0: rc -2&lt;br/&gt;
2011-06-14 12:46:52 LustreError: 8762:0:(llog_obd.c:200:llog_setup()) obd lsd-OST012e-osc ctxt 2 lop_setup=ffffffff885aedc0 failed -2&lt;br/&gt;
2011-06-14 12:46:52 LustreError: 8762:0:(osc_request.c:4242:osc_llog_init()) failed LLOG_MDS_OST_ORIG_CTXT&lt;br/&gt;
2011-06-14 12:46:52 LustreError: 8762:0:(osc_request.c:4258:osc_llog_init()) osc &apos;lsd-OST012e-osc&apos; tgt &apos;lsd-MDT0000&apos; rc=-2&lt;br/&gt;
2011-06-14 12:46:52 LustreError: 8762:0:(osc_request.c:4260:osc_llog_init()) logid 0x10612404:0x0&lt;br/&gt;
2011-06-14 12:46:52 LustreError: 8762:0:(lov_log.c:253:lov_llog_init()) error osc_llog_init idx 302 osc &apos;lsd-OST012e-osc&apos; tgt &apos;lsd-MDT0000&apos; (rc=-2)&lt;br/&gt;
2011-06-14 12:46:52 LustreError: 8762:0:(llog_lvfs.c:612:llog_lvfs_create()) error looking up logfile 0x62800000028:0x10612404: rc -2&lt;br/&gt;
2011-06-14 12:46:52 LustreError: 8762:0:(llog_obd.c:200:llog_setup()) obd lsd-OST0130-osc ctxt 2 lop_setup=ffffffff885aedc0 failed -2&lt;br/&gt;
2011-06-14 12:46:52 LustreError: 8762:0:(osc_request.c:4242:osc_llog_init()) failed LLOG_MDS_OST_ORIG_CTXT&lt;br/&gt;
2011-06-14 12:46:52 LustreError: 8762:0:(osc_request.c:4258:osc_llog_init()) osc &apos;lsd-OST0130-osc&apos; tgt &apos;lsd-MDT0000&apos; rc=-2&lt;br/&gt;
2011-06-14 12:46:52 LustreError: 8762:0:(osc_request.c:4260:osc_llog_init()) logid 0x62800000028:0x10612404&lt;br/&gt;
2011-06-14 12:46:52 LustreError: 8762:0:(lov_log.c:253:lov_llog_init()) error osc_llog_init idx 304 osc &apos;lsd-OST0130-osc&apos; tgt &apos;lsd-MDT0000&apos; (rc=-2)&lt;br/&gt;
...&lt;/p&gt;

&lt;p&gt;Is the same number of OSTs stuck in the IN (inactive) state after you reboot the MDS?&lt;/p&gt;


&lt;p&gt;As for the llog, I&apos;m afraid there may be a problem where some uninitialized data is read from the MDS&apos;s CATALOG.&lt;br/&gt;
Are the indices of these newly added OSTs larger than those of the existing ones?&lt;/p&gt;</comment>
                            <comment id="16419" author="morrone" created="Wed, 15 Jun 2011 17:13:32 +0000"  >&lt;p&gt;Yes, as far as I can tell they are all larger than existing ones.  OSTs 0000-00ef are on the original 16 nodes, and 00f0-0167 are on the new 8 nodes.&lt;/p&gt;

&lt;p&gt;I do believe that it was the same inactive OSTs after reboot.  But the logs squash some messages, making it a little hard to verify 100%.  The sysadmin is pretty certain that it was the same count of OSTs that was inactive before and after reboot, and some spot checking of the logs verifies at least those OST messages that are not squashed are the same.&lt;/p&gt;</comment>
                            <comment id="16422" author="morrone" created="Wed, 15 Jun 2011 19:02:00 +0000"  >&lt;p&gt;See the new attachment.  I translated the MDS&apos;s CATALOGS file into a readable form, and it is pretty clear which entries are incorrect.  Any that look like &quot;10612404:0&quot; or &quot;64000000028:1580334&quot; are bad.  They have no matching file in the OBJECTS directory, and they correspond with the OSTs that are stuck in the inactive state.&lt;/p&gt;</comment>
                            <comment id="16424" author="hongchao.zhang" created="Wed, 15 Jun 2011 20:17:00 +0000"  >&lt;p&gt;Yes, the MDS&apos;s CATALOGS file is corrupted! I suspect it is caused by out-of-order calls to mds_lov_update_desc for the&lt;br/&gt;
newly added OSTs: if an OST with a larger index is initialized before those with smaller indices, an&lt;br/&gt;
uninitialized area is left in the log file, which causes this issue.&lt;/p&gt;</comment>
                            <comment id="16445" author="hongchao.zhang" created="Thu, 16 Jun 2011 08:06:28 +0000"  >&lt;p&gt;Hi Chris,&lt;br/&gt;
    Could you please try deleting these insane records from the MDS&apos;s CATALOGS to check whether that fixes the problem? Thanks!&lt;/p&gt;

&lt;p&gt;By the way, I have written the following patch and will push it to Gerrit after testing it locally:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;diff --git a/lustre/obdclass/llog_lvfs.c b/lustre/obdclass/llog_lvfs.c
index 43d88a3..e6d6534 100644
--- a/lustre/obdclass/llog_lvfs.c
+++ b/lustre/obdclass/llog_lvfs.c
@@ -833,9 +833,10 @@ int llog_put_cat_list(struct obd_device *obd, struct obd_device *disk_obd,
 {
         struct lvfs_run_ctxt saved;
         struct l_file *file;
+        void *buf = NULL;
         int rc, rc1 = 0;
         int size = sizeof(*idarray) * count;
-        loff_t off = idx * sizeof(*idarray);
+        loff_t filesize, off = idx * sizeof(*idarray);
 
         if (!count)
                 GOTO(out1, rc = 0);
@@ -856,6 +857,17 @@ int llog_put_cat_list(struct obd_device *obd, struct obd_device *disk_obd,
                 GOTO(out, rc = -ENOENT);
         }
 
+        filesize = i_size_read(file-&gt;f_dentry-&gt;d_inode);
+        if (filesize &lt; off) {
+                loff_t count = off - filesize;
+                OBD_ALLOC(buf, count);
+                if (buf == NULL)
+                        GOTO(out, rc = -ENOMEM);
+
+                fsfilt_write_record(disk_obd, file, buf, count, &amp;filesize, 1);
+                OBD_FREE(buf, count);
+        }
+
         rc = fsfilt_write_record(disk_obd, file, idarray, size, &amp;off, 1);
         if (rc) {
                 CDEBUG(D_INODE,&quot;OBD filter: error writeing %s: rc %d\n&quot;,
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="16495" author="morrone" created="Thu, 16 Jun 2011 21:31:23 +0000"  >&lt;p&gt;Yes, I&apos;ll zero the records and see if it will work. We have a downtime scheduled for Tuesday, June 21, so I will try it then.&lt;/p&gt;</comment>
                            <comment id="16611" author="hongchao.zhang" created="Mon, 20 Jun 2011 09:24:36 +0000"  >&lt;p&gt;The rhel5 kernel version for b1_8 was updated recently (to 2.6.18-238.12.1), and some time was spent&lt;br/&gt;
preparing the new build&amp;amp;test environment, so local testing of the patch was delayed; it should be complete tomorrow.&lt;/p&gt;</comment>
                            <comment id="16657" author="hongchao.zhang" created="Tue, 21 Jun 2011 02:51:00 +0000"  >&lt;p&gt;the patch is at &lt;a href=&quot;http://review.whamcloud.com/#change,987&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,987&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="16712" author="morrone" created="Tue, 21 Jun 2011 18:33:46 +0000"  >&lt;p&gt;Zeroing the CATALOGS file got everything working this morning.&lt;/p&gt;</comment>
                            <comment id="18795" author="hongchao.zhang" created="Mon, 8 Aug 2011 05:57:45 +0000"  >&lt;p&gt;According to the code, a newly allocated block in the CATALOG file is initialized with zeros in the current b1_8 and master,&lt;br/&gt;
and this is also the case for v1_8_5, so it should not be a use-of-uninitialized-data issue. But the content of the CATALOG does&lt;br/&gt;
indicate that it is, and the first insane data appears just beyond offset 8192 (the 256th entry).&lt;/p&gt;

&lt;p&gt;Hi Chris,&lt;br/&gt;
Is the block size of the MDT&apos;s device 8192? And could you please paste a link to your source code tree here? Thanks!&lt;/p&gt;</comment>
                            <comment id="18798" author="morrone" created="Mon, 8 Aug 2011 08:41:44 +0000"  >&lt;p&gt;&lt;a href=&quot;https://github.com/chaos/lustre/tree/1.8.5.0-5chaos&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://github.com/chaos/lustre/tree/1.8.5.0-5chaos&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I will need to check the block size.&lt;/p&gt;</comment>
                            <comment id="19030" author="hongchao.zhang" created="Wed, 10 Aug 2011 07:29:07 +0000"  >&lt;p&gt;In osc_llog_init, &quot;llog_get_cat_list&quot; and &quot;llog_put_cat_list&quot; are protected by the MDT&apos;s mutex &quot;obd_llog_cat_process&quot;,&lt;br/&gt;
so the problem should not be a race between OSCs; the remaining possibility is a problem in the &quot;fsfilt&quot; layer while&lt;br/&gt;
writing the record into the log file.&lt;/p&gt;

&lt;p&gt;Hi Chris,&lt;br/&gt;
Is your RHEL5.4 the official one? I want to do some tests on lustre 1.8.5.0-5chaos.&lt;/p&gt;</comment>
                            <comment id="19257" author="morrone" created="Mon, 15 Aug 2011 21:44:12 +0000"  >&lt;p&gt;No, it is CHAOS4.4, so there are some changes from RHEL5.4.&lt;/p&gt;

&lt;p&gt;The MDS is using a 3ware card and the array is configured as RAID10.  Stripe size is 64k.&lt;/p&gt;</comment>
                            <comment id="19535" author="hongchao.zhang" created="Tue, 23 Aug 2011 11:37:53 +0000"  >&lt;p&gt;Hi Chris,&lt;/p&gt;

&lt;p&gt;Could you please tell me where I can get CHAOS4.4? Thanks!&lt;/p&gt;</comment>
                            <comment id="19536" author="hongchao.zhang" created="Tue, 23 Aug 2011 12:31:45 +0000"  >&lt;p&gt;I have tested 1.8.5.0-5chaos against 2.6.18-194.17.1 with 2 OSS (one with 252 OSTs, the other with 50 OSTs), and&lt;br/&gt;
no such problem was triggered.&lt;/p&gt;

&lt;p&gt;FYI, I encountered an error &quot;EXTRA_DIST: variable &apos;bin_SCRIPTS&apos; is used but &apos;bin_SCRIPTS&apos; is undefined&quot; while&lt;br/&gt;
running &quot;sh autogen.sh&quot;; I skipped it, and no other issues showed up in the subsequent compile&amp;amp;test.&lt;/p&gt;</comment>
                            <comment id="19665" author="morrone" created="Fri, 26 Aug 2011 19:47:36 +0000"  >&lt;p&gt;&lt;a href=&quot;ftp://gdo-lc.ucllnl.org/pub/projects/chaos/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;ftp://gdo-lc.ucllnl.org/pub/projects/chaos/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Can you tell me more about your test?  Why did you have the OST count so imbalanced?&lt;/p&gt;

&lt;p&gt;We saw this with only 15 OSTs on each of 8 OSS, so I wouldn&apos;t think such a high count would be necessary.  Did you statically allocate the index numbers to the osts, and alternate them between nodes?  Did you issue the mount commands in parallel?&lt;/p&gt;</comment>
                            <comment id="19764" author="hongchao.zhang" created="Tue, 30 Aug 2011 04:57:02 +0000"  >&lt;p&gt;The test was run on my two virtual nodes using loop devices; one node runs the MDT plus 252 OSTs, the other runs 50 OSTs.&lt;br/&gt;
I don&apos;t specify the index values of these OSTs and don&apos;t alternate them, and the mount commands aren&apos;t run in parallel.&lt;/p&gt;

&lt;p&gt;The reason for testing it this way is that the CATALOGS operations &quot;llog_get_cat_list&quot; and &quot;llog_put_cat_list&quot;&lt;br/&gt;
are both protected by the MDT&apos;s mutex &quot;obd_llog_cat_process&quot;, so the problem should be in the &quot;fsfilt&quot; layer while writing&lt;br/&gt;
the record into the log file.&lt;/p&gt;</comment>
                            <comment id="19775" author="morrone" created="Tue, 30 Aug 2011 19:28:20 +0000"  >&lt;p&gt;Ah, I think that&apos;s part of your problem.&lt;/p&gt;

&lt;p&gt;We DO specify our index values.  If you aren&apos;t doing that, then the index numbers will just be sequentially allocated as the OSTs connect the first time.  And if you are not mounting the first time in parallel, then there is no chance to reproduce the issue we are seeing.&lt;/p&gt;</comment>
                            <comment id="19887" author="hongchao.zhang" created="Fri, 2 Sep 2011 08:39:46 +0000"  >&lt;p&gt;I specified the indices of the OSTs and mounted them non-sequentially, but the problem still does not occur. I can&apos;t mount these OSTs&lt;br/&gt;
in parallel in my test environment; could you please help test it?&lt;/p&gt;</comment>
                            <comment id="78300" author="jfc" created="Tue, 4 Mar 2014 01:16:36 +0000"  >&lt;p&gt;Chris &amp;#8211; I&apos;m doing some cleanup work on JIRA issues. &lt;br/&gt;
Would you prefer that I keep this open and unresolved? &lt;br/&gt;
Or may I mark it as &apos;resolved &amp;#8211; cannot reproduce&apos;?&lt;br/&gt;
Many thanks,&lt;br/&gt;
~ jfc.&lt;/p&gt;</comment>
                            <comment id="78378" author="morrone" created="Tue, 4 Mar 2014 19:25:12 +0000"  >&lt;p&gt;I think it is very likely that this is still a problem.  We seem to hit it in every version of Lustre.  Our admins now have a permanent rule of manually adding OSTs sequentially to Lustre because this code is so broken.  That is not acceptable.  It turns an operation that should take under a minute into one that takes well over an hour.&lt;/p&gt;</comment>
                            <comment id="89471" author="hongchao.zhang" created="Fri, 18 Jul 2014 08:38:16 +0000"  >&lt;p&gt;As of b2_4 (which uses LOD/OSP instead of LOV/OSC at the MDT), this issue was fixed in &quot;osp_sync_llog_init&quot;: it recreates&lt;br/&gt;
the log file if it no longer exists (previously it only created the log file if the log ID was zero).&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;&lt;span class=&quot;code-keyword&quot;&gt;static&lt;/span&gt; &lt;span class=&quot;code-object&quot;&gt;int&lt;/span&gt; osp_sync_llog_init(&lt;span class=&quot;code-keyword&quot;&gt;const&lt;/span&gt; struct lu_env *env, struct osp_device *d)
{
        ...
        ctxt = llog_get_context(obd, LLOG_MDS_OST_ORIG_CTXT);
        LASSERT(ctxt);

        &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (likely(logid_id(&amp;amp;osi-&amp;gt;osi_cid.lci_logid) != 0)) {
                rc = llog_open(env, ctxt, &amp;amp;lgh, &amp;amp;osi-&amp;gt;osi_cid.lci_logid, NULL,
                               LLOG_OPEN_EXISTS);
                &lt;span class=&quot;code-comment&quot;&gt;/* re-create llog &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; it is missing */&lt;/span&gt;
                &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (rc == -ENOENT)
                        logid_set_id(&amp;amp;osi-&amp;gt;osi_cid.lci_logid, 0);
                &lt;span class=&quot;code-keyword&quot;&gt;else&lt;/span&gt; &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (rc &amp;lt; 0)
                        GOTO(out_cleanup, rc);
        }

        &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (unlikely(logid_id(&amp;amp;osi-&amp;gt;osi_cid.lci_logid) == 0)) {
                rc = llog_open_create(env, ctxt, &amp;amp;lgh, NULL, NULL);
                &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (rc &amp;lt; 0)
                        GOTO(out_cleanup, rc);
                osi-&amp;gt;osi_cid.lci_logid = lgh-&amp;gt;lgh_id;
        }

        LASSERT(lgh != NULL);
        ctxt-&amp;gt;loc_handle = lgh;

        rc = llog_cat_init_and_process(env, lgh);
        &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (rc)
                GOTO(out_close, rc);

        rc = llog_osd_put_cat_list(env, d-&amp;gt;opd_storage, d-&amp;gt;opd_index, 1,
                                   &amp;amp;osi-&amp;gt;osi_cid);
        ...
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;  </comment>
                    </comments>
                    <attachments>
                            <attachment id="10267" name="CATALOGS_translated.txt" size="12238" author="morrone" created="Wed, 15 Jun 2011 18:41:59 +0000"/>
                            <attachment id="10259" name="console.momus-mds1.gz" size="281549" author="morrone" created="Tue, 14 Jun 2011 20:47:17 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10490" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>End date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Fri, 18 Jul 2014 20:47:17 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                            <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzw0vb:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>10221</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                        <customfield id="customfield_10493" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>Start date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Tue, 14 Jun 2011 20:47:17 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                    </customfields>
    </item>
</channel>
</rss>