<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:33:31 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-3392] filter_do_bio()) ASSERTION(rw == OBD_BRW_READ)</title>
                <link>https://jira.whamcloud.com/browse/LU-3392</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;We experienced an LBUG today on one of our OSS servers that I have not seen before.&lt;/p&gt;

&lt;p&gt;LustreError: 16066:0:(filter_io_26.c:344:filter_do_bio()) ASSERTION(rw == OBD_BRW_READ) failed&lt;br/&gt;
LustreError: 16066:0:(filter_io_26.c:344:filter_do_bio()) LBUG&lt;br/&gt;
Pid: 16066, comm: ll_ost_io_126&lt;/p&gt;

&lt;p&gt;Now, after rebooting that OSS, the same LBUG is triggered as soon as the OSTs finish recovery and start serving their data. Has anyone seen this before?&lt;/p&gt;

&lt;p&gt;Our environment:&lt;br/&gt;
servers:  RHEL6 2.6.32-220.17.1.el6_lustre.x86_64 Lustre-2.1.2&lt;br/&gt;
clients:  RHEL6 2.6.32-358.6.2.el6.x86_64 Lustre-2.1.5 patchless&lt;/p&gt;</description>
                <environment>servers:  RHEL6 2.6.32-220.17.1.el6_lustre.x86_64 Lustre-2.1.2&lt;br/&gt;
clients:  RHEL6 2.6.32-358.6.2.el6.x86_64 Lustre-2.1.5 patchless</environment>
        <key id="19137">LU-3392</key>
            <summary>filter_do_bio()) ASSERTION(rw == OBD_BRW_READ)</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="4">Incomplete</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="wjt27">Wojciech Turek</reporter>
                        <labels>
                    </labels>
                <created>Fri, 24 May 2013 10:39:39 +0000</created>
                <updated>Fri, 3 Feb 2017 05:59:58 +0000</updated>
                            <resolved>Mon, 28 Sep 2015 17:29:53 +0000</resolved>
                                    <version>Lustre 2.1.2</version>
                                                        <due></due>
                            <votes>1</votes>
                                    <watches>6</watches>
                                                                            <comments>
                            <comment id="59512" author="wjt27" created="Wed, 29 May 2013 11:06:06 +0000"  >&lt;p&gt;The problem seems to be caused by one particular OST. I identified that OST by unmounting the OSTs, then ran fsck on it, which found some errors. After fixing them I was able to mount the OST again and the LBUG did not reoccur. I cannot explain how the corruption of the inode size crept in in the first place, which is concerning.&lt;/p&gt;

&lt;p&gt;fsck from util-linux-ng 2.17.2&lt;br/&gt;
e2fsck 1.42.7.wc1 (12-Apr-2013)&lt;br/&gt;
lustre1-OST0003: recovering journal&lt;br/&gt;
Pass 1: Checking inodes, blocks, and sizes&lt;br/&gt;
Inode 9289789, i_size is 17592186044416, should be 17592184995840.  Fix&amp;lt;y&amp;gt;? yes&lt;br/&gt;
Pass 2: Checking directory structure&lt;br/&gt;
Pass 3: Checking directory connectivity&lt;br/&gt;
Pass 4: Checking reference counts&lt;br/&gt;
Pass 5: Checking group summary information&lt;br/&gt;
Free blocks count wrong (447184151, counted=447797885).&lt;br/&gt;
Fix&amp;lt;y&amp;gt;? yes&lt;br/&gt;
Free inodes count wrong (16879337, counted=16879523).&lt;br/&gt;
Fix&amp;lt;y&amp;gt;? yes&lt;/p&gt;

&lt;p&gt;lustre1-OST0003: ***** FILE SYSTEM WAS MODIFIED *****&lt;/p&gt;

&lt;p&gt;     6008797 inodes used (26.25%, out of 22888320)&lt;br/&gt;
      455104 non-contiguous files (0.4%)&lt;br/&gt;
          32 non-contiguous directories (0.0%)&lt;/p&gt;
&lt;p&gt;           # of inodes with ind/dind/tind blocks: 0/0/0&lt;br/&gt;
             Extent depth histogram: 5919318/89344/127&lt;br/&gt;
  5411611248 blocks used (92.36%, out of 5859409133)&lt;br/&gt;
           0 bad blocks&lt;br/&gt;
        1726 large files&lt;/p&gt;


&lt;p&gt;     6008751 regular files&lt;br/&gt;
          37 directories&lt;br/&gt;
           0 character device files&lt;br/&gt;
           0 block device files&lt;br/&gt;
           0 fifos&lt;br/&gt;
           0 links&lt;br/&gt;
           0 symbolic links (0 fast symbolic links)&lt;br/&gt;
           0 sockets&lt;br/&gt;
------------&lt;br/&gt;
     6008788 files&lt;/p&gt;
</comment>
                            <comment id="59897" author="wjt27" created="Mon, 3 Jun 2013 16:37:29 +0000"  >&lt;p&gt;This LBUG has hit us again, and I found that fsck alone does not actually fix it, but aborting recovery does. So after being hit by this LBUG, one needs to restart the OSS, fsck the OST to fix the i_size error, and then mount the OST with the abort-recovery option. If we did not abort recovery, the LBUG hit again and the i_size was corrupted again. We found that after recovering the filesystem, one client would not come back; this is most likely the client that caused the problem in the first place. It sounds like a serious bug, as it seems that a client operation can bring down the server. Our next step is to update the server side to 2.1.5 and see if we still see this problem.&lt;/p&gt;
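
&lt;p&gt;(Editor&apos;s note: a minimal sketch of the recovery sequence described above. The device path and mount point are illustrative placeholders, not taken from this report.)&lt;/p&gt;

&lt;pre&gt;# on the affected OSS, with the problem OST unmounted:
e2fsck -f /dev/ost0003                                  # repair the corrupted i_size
mount -t lustre -o abort_recov /dev/ost0003 /mnt/ost0003  # remount, skipping client recovery&lt;/pre&gt;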
</comment>
                            <comment id="128644" author="jfc" created="Mon, 28 Sep 2015 17:29:53 +0000"  >&lt;p&gt;Marking this as resolved/incomplete.&lt;/p&gt;

&lt;p&gt;If this is still a live issue on a newer release, just let us know and we&apos;ll move the ticket to the correct Project.&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
~ jfc.&lt;/p&gt;</comment>
                            <comment id="174944" author="gerrit" created="Thu, 24 Nov 2016 07:32:39 +0000"  >&lt;p&gt;Wang Shilong (wshilong@ddn.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/23931&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/23931&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3392&quot; title=&quot;filter_do_bio()) ASSERTION(rw == OBD_BRW_READ)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3392&quot;&gt;&lt;del&gt;LU-3392&lt;/del&gt;&lt;/a&gt; obdfilter: handle large file writting gracefully&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_1&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: d03ea264ec0aa273e3e91ef81070a66adeced965&lt;/p&gt;</comment>
                            <comment id="174972" author="gerrit" created="Thu, 24 Nov 2016 12:25:30 +0000"  >&lt;p&gt;Wang Shilong (wshilong@ddn.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/23938&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/23938&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3392&quot; title=&quot;filter_do_bio()) ASSERTION(rw == OBD_BRW_READ)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3392&quot;&gt;&lt;del&gt;LU-3392&lt;/del&gt;&lt;/a&gt; osd-ldiskfs: handle large file writting gracefully&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 622fbed8fd4203487c78ec24bdda2bbd0a9ded07&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                                        </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvrs7:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>8396</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>