<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:11:30 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-14641] per extents bytes allocation stats</title>
                <link>https://jira.whamcloud.com/browse/LU-14641</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Seeing debug messages on the console during large RPC writes:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Apr  3 01:29:08 foss01 kernel: Lustre: 12536:0:(osd_handler.c:1947:osd_trans_start()) work-OST0004: credits 24347 &amp;gt; trans_max 12800
Apr  3 01:29:08 foss01 kernel: Lustre: 12536:0:(osd_handler.c:1876:osd_trans_dump_creds())   create: 0/0/0, destroy: 0/0/0
Apr  3 01:29:08 foss01 kernel: Lustre: 12536:0:(osd_handler.c:1883:osd_trans_dump_creds())   attr_set: 1/1/0, xattr_set: 2/15/0
Apr  3 01:29:08 foss01 kernel: Lustre: 12536:0:(osd_handler.c:1893:osd_trans_dump_creds())   write: 2/24182/0, punch: 0/0/0, quota 5/149/0
Apr  3 01:29:08 foss01 kernel: Lustre: 12536:0:(osd_handler.c:1900:osd_trans_dump_creds())   insert: 0/0/0, delete: 0/0/0
Apr  3 01:29:08 foss01 kernel: Lustre: 12536:0:(osd_handler.c:1907:osd_trans_dump_creds())   ref_add: 0/0/0, ref_del: 0/0/0
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;These are likely caused by the patches from &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-14134&quot; title=&quot;reduce credits for new writing potentially&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-14134&quot;&gt;&lt;del&gt;LU-14134&lt;/del&gt;&lt;/a&gt; landing, which changed the way transaction sizes are calculated (in theory reducing transaction size, but not always).&lt;/p&gt;

&lt;p&gt;We might need an interface to check how the per-extent allocated bytes change as the filesystem fills up, and also a debug message to check how many credits are calculated in osd_declare_write_commit().&lt;/p&gt;</description>
                <environment></environment>
        <key id="63931">LU-14641</key>
            <summary>per extents bytes allocation stats</summary>
                <type id="4" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11310&amp;avatarType=issuetype">Improvement</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="wshilong">Wang Shilong</assignee>
                                    <reporter username="wshilong">Wang Shilong</reporter>
                        <labels>
                    </labels>
                <created>Mon, 26 Apr 2021 03:31:06 +0000</created>
                <updated>Sun, 14 Nov 2021 03:09:26 +0000</updated>
                            <resolved>Wed, 12 May 2021 02:07:10 +0000</resolved>
                                                    <fixVersion>Lustre 2.15.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                                                                            <comments>
                            <comment id="299686" author="gerrit" created="Mon, 26 Apr 2021 03:31:42 +0000"  >&lt;p&gt;Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/43446&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/43446&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-14641&quot; title=&quot;per extents bytes allocation stats&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-14641&quot;&gt;&lt;del&gt;LU-14641&lt;/del&gt;&lt;/a&gt; osd-ldiskfs: extents bytes allocation stats&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: ccad628ef9240ac52a8f81d4ab0c873ee8958b0d&lt;/p&gt;</comment>
                            <comment id="299728" author="adilger" created="Mon, 26 Apr 2021 15:18:04 +0000"  >&lt;p&gt;I&apos;m hitting this error pretty regularly on my home server with 2.14.0 (32MB journal, 4MB RPC size), so if there is some data to be collected please let me know. &lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;osd_handler.c:1938:osd_trans_start()) myth-OST0001: credits 1327 &amp;gt; trans_max 1024
osd_handler.c:1867:osd_trans_dump_creds())   create: 0/0/0, destroy: 0/0/0
osd_handler.c:1874:osd_trans_dump_creds())   attr_set: 1/1/0, xattr_set: 2/15/0
osd_handler.c:1884:osd_trans_dump_creds())   write: 2/3116/0, punch: 0/0/0, quota 5/149/0
osd_handler.c:1891:osd_trans_dump_creds())   insert: 0/0/0, delete: 0/0/0
osd_handler.c:1898:osd_trans_dump_creds())   ref_add: 0/0/0, ref_del: 0/0/0
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;&lt;del&gt;It would be useful if the initial &lt;tt&gt;osd_trans_start()&lt;/tt&gt; message included the filesystem label so that it could be seen if this problem is specific to a single OST or not (e.g. fragmented).&lt;/del&gt;&lt;/p&gt;</comment>
                            <comment id="299782" author="wshilong" created="Tue, 27 Apr 2021 01:21:34 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/ViewProfile.jspa?name=adilger&quot; class=&quot;user-hover&quot; rel=&quot;adilger&quot;&gt;adilger&lt;/a&gt; the message indicates this is &quot;myth-OST0001&quot;; what is the space usage of that OST?&lt;/p&gt;</comment>
                            <comment id="299789" author="adilger" created="Tue, 27 Apr 2021 06:47:47 +0000"  >&lt;p&gt;Sorry, I was copying the message on my phone and didn&apos;t see that... &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/smile.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;

&lt;p&gt;The usage of the OSTs is quite high:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lfs df
UUID                   1K-blocks        Used   Available Use% Mounted on
myth-MDT0000_UUID       13523964     7974808     4762248  63% /myth[MDT:0] 
myth-OST0000_UUID     3861381132  3662435612   121198440  97% /myth[OST:0]
myth-OST0001_UUID     3861381132  3684771832    98804312  98% /myth[OST:1]
myth-OST0002_UUID     5795208676  4759666520   918714980  84% /myth[OST:2]
myth-OST0003_UUID     5795078644  5555443412   122789132  98% /myth[OST:3]
myth-OST0004_UUID     5794464848  5434846744    67540536  99% /myth[OST:4]
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;but there are still reasonable-sized extents available for 4MB writes:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;e2freefrag /dev/vgmyth/lvmythost1 
Device: /dev/vgmyth/lvmythost1
Blocksize: 4096 bytes
/dev/vgmyth/lvmythost1: Operation not supported while calling fsmap
Total blocks: 973078528
Free blocks: 51836471 (5.3%)

Min. free extent: 4 KB 
Max. free extent: 116528 KB
Avg. free extent: 5348 KB
Num. free extent: 38696

HISTOGRAM OF FREE EXTENT SIZES:
Extent Size Range :  Free extents   Free Blocks  Percent
    4K...    8K-  :            77            77    0.00%
    8K...   16K-  :           118           275    0.00%
   16K...   32K-  :           204          1049    0.00%
   32K...   64K-  :          5694         59774    0.12%
   64K...  128K-  :           610         14045    0.03%
  128K...  256K-  :          1048         48236    0.09%
  256K...  512K-  :          1881        175273    0.34%
  512K... 1024K-  :          3585        688220    1.33%
    1M...    2M-  :          5749       1978160    3.82%
    2M...    4M-  :          7553       5375458   10.37%
    4M...    8M-  :          6924       9917254   19.13%
    8M...   16M-  :          1621       4763813    9.19%
   16M...   32M-  :          2530      14947189   28.84%
   32M...   64M-  :           905       9954228   19.20%
   64M...  128M-  :           197       3830417    7.39%
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="299790" author="wshilong" created="Tue, 27 Apr 2021 07:06:22 +0000"  >&lt;p&gt;I think that as space usage grows, e.g. to 98%, the mballoc code will no longer try its best to scan block groups for the best-aligned free extent. So I am wondering whether, in this case, the system decays the extent bytes to a small value (e.g. tens of KB)? It would be nice if you could apply the above patch to watch how extent_bytes_allocation changes while the system is running.&lt;/p&gt;

&lt;p&gt;At the same time, I am wondering how we could fix the problem, because even when we did not hit this warning, that does not mean the old code was correct; it simply did not reserve enough credits, because we always calculated extents as the number of write fragments.&lt;/p&gt;

&lt;p&gt;And if we consider the worst case in the new code:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;	/*
	 * each extent can go into new leaf causing a split
	 * 5 is max tree depth: inode + 4 index blocks
	 * with blockmaps, depth is 3 at most
	 */
	if (LDISKFS_I(inode)-&amp;gt;i_flags &amp;amp; LDISKFS_EXTENTS_FL) {
		/*
		 * many concurrent threads may grow tree by the time
		 * our transaction starts. so, consider 2 is a min depth
		 */
		depth = ext_depth(inode);
		depth = max(depth, 1) + 1;
		newblocks += depth;
		credits += depth * 2 * extents;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;For your 4MB RPC, extents could be 1024 in the worst case; taking depth as 4 (so 5 after the max(depth, 1) + 1 adjustment), the credits here would be 10240 blocks.&lt;/p&gt;
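As a standalone sanity check of the arithmetic above (illustrative only; the function name is made up and this is not the actual osd-ldiskfs code):

```python
def worst_case_extent_credits(tree_depth, extents):
    """Model of the worst-case credit estimate quoted above
    (hypothetical helper, not a real osd-ldiskfs function)."""
    # mirrors: depth = max(depth, 1) + 1; credits += depth * 2 * extents
    depth = max(tree_depth, 1) + 1
    return depth * 2 * extents
```

A 4MB RPC covers up to 1024 discontiguous 4KB blocks, so with a tree depth of 4 this reproduces the 10240 credits mentioned above.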

&lt;p&gt;I am wondering whether we should change the code to be more optimistic, e.g. cap extents at min(number of fragments, 100) by default.&lt;/p&gt;

&lt;p&gt;We could restart the transaction if we are really short of credits with that extent count.&lt;/p&gt;

&lt;p&gt;What do you think?&lt;/p&gt;</comment>
                            <comment id="299793" author="adilger" created="Tue, 27 Apr 2021 07:31:25 +0000"  >&lt;p&gt;Yes, definitely mballoc will be having a bit of a problem with allocations on this filesystem, but I hope that patch &lt;a href=&quot;https://review.whamcloud.com/43232&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/43232&lt;/a&gt; &quot;&lt;tt&gt;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-14438&quot; title=&quot;backport ldiskfs mballoc patches&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-14438&quot;&gt;LU-14438&lt;/a&gt; ldiskfs: improvements to mballoc&lt;/tt&gt;&quot; can help this in the future.  As the &lt;tt&gt;e2freefrag&lt;/tt&gt; output shows, there are still many extents of 4MB or larger that could be used in this case.&lt;/p&gt;

&lt;p&gt;If the transaction can be restarted in case of credit shortage, then it makes sense to be more optimistic in the credit reservations.  Even in a case like this where the filesystem is nearly full, it is extremely unlikely that the worst case would ever be hit.&lt;/p&gt;</comment>
                            <comment id="299795" author="adilger" created="Tue, 27 Apr 2021 07:46:02 +0000"  >&lt;p&gt;The fragmentation of the allocations can be seen fairly clearly in the &lt;tt&gt;osd-ldiskfs.myth-OST0001.brw_stats&lt;/tt&gt; output:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# dd if=/dev/zero of=/myth/tmp/ost1/f1 bs=4M count=100
# lctl get_param osd-ldiskfs.myth-OST0001.brw_stats | less
osd-ldiskfs.myth-OST0001.brw_stats=
                           read      |     write
pages per bulk r/w     rpcs  % cum % |  rpcs        % cum %
1K:                      0   0   0   |  100  100 100

                           read      |     write
discontiguous pages    rpcs  % cum % |  rpcs        % cum %
0:                       0   0   0   |   93  93  93
1:                       0   0   0   |    7   7 100

                           read      |     write
discontiguous blocks   rpcs  % cum % |  rpcs        % cum %
0:                       0   0   0   |   90  90  90
1:                       0   0   0   |   10   10 100

                           read      |     write
disk fragmented I/Os   ios   % cum % |  ios         % cum %
1:                       0   0   0   |    0   0   0
2:                       0   0   0   |   90  90  90
3:                       0   0   0   |   10   10 100
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;So with 100x 4MB RPC size, 10% of them had to allocate blocks in 2 separate chunks and this generated a console warning:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;osd_handler.c:1938:osd_trans_start()) myth-OST0001: credits 1235 &amp;gt; trans_max 1024
osd_handler.c:1874:osd_trans_dump_creds())   attr_set: 1/1/0, xattr_set: 2/15/0
osd_handler.c:1884:osd_trans_dump_creds())   write: 2/1076/0, punch: 0/0/0, quota 5/149/0
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;I will try to run a test with the patch tomorrow.&lt;/p&gt;</comment>
                            <comment id="299981" author="wshilong" created="Wed, 28 Apr 2021 14:53:26 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/ViewProfile.jspa?name=adilger&quot; class=&quot;user-hover&quot; rel=&quot;adilger&quot;&gt;adilger&lt;/a&gt;&#160;Would you be able to apply the latest patch to see how much it helps?&lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/smile.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;</comment>
                            <comment id="300478" author="simmonsja" created="Tue, 4 May 2021 20:55:23 +0000"  >&lt;p&gt;This is the problem I&apos;m seeing with the EC code.&lt;/p&gt;</comment>
                            <comment id="300485" author="adilger" created="Tue, 4 May 2021 21:26:32 +0000"  >&lt;p&gt;James, I don&apos;t think these messages have anything to do with EC, it just relates to large RPCs, &quot;smaller&quot; journals, and partly-fragmented free space.&lt;/p&gt;</comment>
                            <comment id="301000" author="adilger" created="Mon, 10 May 2021 08:15:31 +0000"  >&lt;p&gt;Shilong, I updated my server to include your patch but am no longer seeing the journal warnings since upgrading and remounting the OSTs.  I do see some uneven stats, but this may still be an issue with the application:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# lctl get_param osd-ldiskfs.myth-*.extent_bytes_allocation
osd-ldiskfs.myth-MDT0000.extent_bytes_allocation=4096
osd-ldiskfs.myth-OST0000.extent_bytes_allocation=921170
osd-ldiskfs.myth-OST0001.extent_bytes_allocation=757631
osd-ldiskfs.myth-OST0002.extent_bytes_allocation=1036399
osd-ldiskfs.myth-OST0003.extent_bytes_allocation=247905
osd-ldiskfs.myth-OST0004.extent_bytes_allocation=1048250
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="301002" author="wshilong" created="Mon, 10 May 2021 08:20:44 +0000"  >&lt;p&gt;Ignoring the MDT&apos;s 4096, the smallest OST value is 247905, which means about 5 extents per 1MB write; that is probably OK considering your space usage is as high as 98%.&lt;/p&gt;</comment>
                            <comment id="301101" author="adilger" created="Mon, 10 May 2021 22:55:13 +0000"  >&lt;p&gt;Even though that OST is almost 98% full, the free space is relatively good (88% of free space is in chunks 4MB or larger):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;e2freefrag /dev/vgmyth/lvmythost3 
Device: /dev/vgmyth/lvmythost3
Blocksize: 4096 bytes
/dev/vgmyth/lvmythost3: Operation not supported while calling fsmap
Total blocks: 1460371456
Free blocks: 91182550 (6.2%)

Min. free extent: 4 KB 
Max. free extent: 3869388 KB
Avg. free extent: 5676 KB
Num. free extent: 63654

HISTOGRAM OF FREE EXTENT SIZES:
Extent Size Range :  Free extents   Free Blocks  Percent
    4K...    8K-  :            80            80    0.00%
    8K...   16K-  :           103           247    0.00%
   16K...   32K-  :           180           949    0.00%
   32K...   64K-  :           368          4098    0.00%
   64K...  128K-  :           667         15328    0.02%
  128K...  256K-  :          1177         54808    0.06%
  256K...  512K-  :          1850        175030    0.19%
  512K... 1024K-  :          3523        686778    0.75%
    1M...    2M-  :          7456       2293985    2.52%
    2M...    4M-  :          9451       6161759    6.76%
    4M...    8M-  :         19705      28308110   31.05%
    8M...   16M-  :         19017      43864690   48.11%
   16M...   32M-  :             5         26840    0.03%
   32M...   64M-  :            12        147600    0.16%
   64M...  128M-  :             9        229679    0.25%
  128M...  256M-  :            11        544954    0.60%
  256M...  512M-  :            23       2171087    2.38%
  512M... 1024M-  :             9       1600006    1.75%
    1G...    2G-  :             6       2256410    2.47%
    2G...    4G-  :             2       1799598    1.97%
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;However, that makes me wonder whether the allocator is actually doing a bad job for large writes and using up small chunks of space?  Checking some recently-written files, I see that the initial allocations are relatively small, but once the file is over 4 MB in size it uses good allocations, so that could be related to the way the file is being written by the client (slowly over an hour):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;filefrag -v /myth/tv/2063_20210509184200.ts
Filesystem type is: bd00bd0
File size of /myth/tv/2063_20210509184200.ts is 2535750768 (2476320 blocks of 1024 bytes)
 ext:     device_logical:        physical_offset: length:  dev: flags:
   0:        0..     351:  980128408.. 980128759:    352: 0003: net
   1:      352..     707:  980129392.. 980129747:    356: 0003: net
   2:      708..    2607:  980148224.. 980150123:   1900: 0003: net
   3:     2608..    2855:  980138756.. 980139003:    248: 0003: net
   4:     2856..    2915:  980129748.. 980129807:     60: 0003: net
   5:     2916..    3579:  980139004.. 980139667:    664: 0003: net
   6:     3580..  131071: 1320420716..1320548207: 127492: 0003: net
   7:   131072..  262143: 1320550400..1320681471: 131072: 0003: net
   8:   262144..  393215: 1320681472..1320812543: 131072: 0003: net
   9:   393216..  524287: 1320812544..1320943615: 131072: 0003: net
  10:   524288..  655359: 1320943616..1321074687: 131072: 0003: net
  11:   655360..  786431: 1321074688..1321205759: 131072: 0003: net
  12:   786432..  917503: 1321205760..1321336831: 131072: 0003: net
  13:   917504.. 1048575: 1321336832..1321467903: 131072: 0003: net
  14:  1048576.. 1179647: 1321467904..1321598975: 131072: 0003: net
  15:  1179648.. 1310719: 1321598976..1321730047: 131072: 0003: net
  16:  1310720.. 1441791: 1321730048..1321861119: 131072: 0003: net
  17:  1441792.. 1572863: 1321861120..1321992191: 131072: 0003: net
  18:  1572864.. 1703935: 1321992192..1322123263: 131072: 0003: net
  19:  1703936.. 1835007: 1322123264..1322254335: 131072: 0003: net
  20:  1835008.. 1966079: 1322254336..1322385407: 131072: 0003: net
  21:  1966080.. 2097151: 1322385408..1322516479: 131072: 0003: net
  22:  2097152.. 2228223: 1322516480..1322647551: 131072: 0003: net
  23:  2228224.. 2359295: 1322647552..1322778623: 131072: 0003: net
  24:  2359296.. 2476319: 1322778624..1322895647: 117024: 0003: last,net,eof
/myth/tv/2063_20210509184200.ts: 8 extents found
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="301103" author="adilger" created="Mon, 10 May 2021 22:59:50 +0000"  >&lt;p&gt;I&apos;m wondering if I should try installing &lt;a href=&quot;https://review.whamcloud.com/43232&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/43232&lt;/a&gt; &quot;&lt;tt&gt;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-14438&quot; title=&quot;backport ldiskfs mballoc patches&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-14438&quot;&gt;LU-14438&lt;/a&gt; ldiskfs: improvements to mballoc&lt;/tt&gt;&quot; to see if that is improving the initial allocations or not, although this is already doing pretty good considering how little free space is available.&lt;/p&gt;</comment>
                            <comment id="301114" author="adilger" created="Tue, 11 May 2021 01:40:54 +0000"  >&lt;p&gt;One thing I realized while running this on my home system is that &lt;tt&gt;osd_ldiskfs_map_write()&lt;/tt&gt; is submitting the IO because of BIO size limits (1MB in my case), and not because the allocation is fragmented.  That means that &lt;tt&gt;osd_extent_bytes()&lt;/tt&gt; is always &amp;lt;= 1MB even when the extents are very large (128MB contiguous allocations in my case with a single-threaded writer, 8MB with multiple writers).  It may be that we need to change how the extent stats are calculated?&lt;/p&gt;</comment>
                            <comment id="301118" author="wshilong" created="Tue, 11 May 2021 02:02:07 +0000"  >&lt;p&gt;The current calculation is:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;
	*raw_cpu_ptr(osd-&amp;gt;od_extent_bytes_percpu) =
		DIV_ROUND_UP(old_bytes * (EXTENT_BYTES_DECAY - 1) +
			     min(new_bytes, OSD_DEFAULT_EXTENT_BYTES),
			     EXTENT_BYTES_DECAY);
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
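The decayed running average above can be modeled with a short standalone sketch (an illustration, not the real osd-ldiskfs code; the EXTENT_BYTES_DECAY value of 64 is an assumption, while the 1MB cap corresponds to OSD_DEFAULT_EXTENT_BYTES as discussed in this ticket):

```python
# Sketch of the decayed per-extent-bytes average quoted above.
# Assumption: EXTENT_BYTES_DECAY = 64 is a placeholder value.
EXTENT_BYTES_DECAY = 64
OSD_DEFAULT_EXTENT_BYTES = 1048576  # 1MB cap on each new sample

def decay_extent_bytes(old_bytes, new_bytes):
    # DIV_ROUND_UP(old * (DECAY - 1) + min(new, DEFAULT), DECAY)
    sample = min(new_bytes, OSD_DEFAULT_EXTENT_BYTES)
    numer = old_bytes * (EXTENT_BYTES_DECAY - 1) + sample
    return -(-numer // EXTENT_BYTES_DECAY)  # ceiling division
```

With this shape, a single small allocation drags the average down only slightly, while a run of fragmented allocations pulls it toward the fragment size; capping each sample means even very large contiguous allocations cannot raise the average above 1MB.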

&lt;p&gt;Yes, the per-extent bytes value is limited so it does not exceed 1MB. This kind of calculation is more sensitive to fragmentation once it appears; if we changed the cap to new_bytes, the per-extent bytes could increase more quickly once we get one good large extent allocation.&lt;/p&gt;

&lt;p&gt;I am not sure which one is better; maybe increasing OSD_DEFAULT_EXTENT_BYTES to 4MB would be enough?&lt;/p&gt;</comment>
                            <comment id="301280" author="gerrit" created="Tue, 11 May 2021 22:53:55 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/43446/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/43446/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-14641&quot; title=&quot;per extents bytes allocation stats&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-14641&quot;&gt;&lt;del&gt;LU-14641&lt;/del&gt;&lt;/a&gt; osd-ldiskfs: write commit declaring improvement&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 0f81c5ae973bf7fba45b6ba7f9c5f4fb1f6eadcb&lt;/p&gt;</comment>
                            <comment id="317767" author="gerrit" created="Tue, 9 Nov 2021 19:27:04 +0000"  >&lt;p&gt;&quot;Andreas Dilger &amp;lt;adilger@whamcloud.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/45505&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/45505&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-14641&quot; title=&quot;per extents bytes allocation stats&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-14641&quot;&gt;&lt;del&gt;LU-14641&lt;/del&gt;&lt;/a&gt; osd-ldiskfs: write commit declaring improvement&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_14&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 6db11a4e6e3214ac387433252986cd1850609260&lt;/p&gt;</comment>
                            <comment id="318201" author="gerrit" created="Sun, 14 Nov 2021 03:09:26 +0000"  >&lt;p&gt;&quot;Andreas Dilger &amp;lt;adilger@whamcloud.com&amp;gt;&quot; merged in patch &lt;a href=&quot;https://review.whamcloud.com/45505/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/45505/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-14641&quot; title=&quot;per extents bytes allocation stats&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-14641&quot;&gt;&lt;del&gt;LU-14641&lt;/del&gt;&lt;/a&gt; osd-ldiskfs: write commit declaring improvement&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_14&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 1734c4746d98dfa6fc6559841be8028a22465718&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="61650">LU-14134</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                                        </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i01t2f:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                </customfields>
    </item>
</channel>
</rss>