<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:10:13 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-7592] Change force_over_128tb Lustre mount option to force_over_256tb for ldiskfs</title>
                <link>https://jira.whamcloud.com/browse/LU-7592</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Currently, attempts to create an ldiskfs file system with size &amp;gt;128TB fail with the message:&lt;/p&gt;

&lt;p&gt;LDISKFS-fs does not support file systems greater than 128TB and can cause data corruption. Use &quot;force_over_128tb&quot; mount option to override.&lt;/p&gt;

&lt;p&gt;Before the &#8220;force_over_128tb&#8221; option is used on production systems, the Lustre file system software should be analyzed for possible large-disk support issues. This ticket covers that research. Finally, a patch changing &quot;force_over_128tb&quot; to &quot;force_over_256tb&quot; should be landed, making it possible to use ldiskfs partitions &amp;lt;256TB without extra mount options.&lt;/p&gt;</description>
                <environment></environment>
        <key id="33812">LU-7592</key>
            <summary>Change force_over_128tb Lustre mount option to force_over_256tb for ldiskfs</summary>
                <type id="4" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11310&amp;avatarType=issuetype">Improvement</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="artem_blagodarenko">Artem Blagodarenko</reporter>
                        <labels>
                            <label>patch</label>
                    </labels>
                <created>Tue, 22 Dec 2015 08:19:33 +0000</created>
                <updated>Fri, 8 Dec 2017 15:36:32 +0000</updated>
                            <resolved>Tue, 18 Apr 2017 17:40:57 +0000</resolved>
                                                    <fixVersion>Lustre 2.9.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                                                                            <comments>
                            <comment id="137164" author="gerrit" created="Tue, 22 Dec 2015 10:15:29 +0000"  >&lt;p&gt;Artem Blagodarenko (artem.blagodarenko@seagate.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/17702&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/17702&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7592&quot; title=&quot;Change force_over_128tb Lustre mount option to force_over_256tb for ldiskfs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7592&quot;&gt;&lt;del&gt;LU-7592&lt;/del&gt;&lt;/a&gt; osd-ldiskfs: increase supported ldiskfs fs size limit&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 1bc3e656ae7096711c2ce6310234a81b989089b8&lt;/p&gt;</comment>
                            <comment id="137397" author="artem_blagodarenko" created="Thu, 24 Dec 2015 10:02:23 +0000"  >&lt;p&gt;Verification steps:&lt;br/&gt;
1. Check the Lustre code to verify it is ready for &amp;lt;256TB targets&lt;br/&gt;
2. Testing&lt;/p&gt;

&lt;p&gt;Issues verified and tested:&lt;br/&gt;
1. Default ldiskfs parameters for the command&lt;br/&gt;
mkfs.lustre --ost  --fsname=testfs --mountfsoptions=&apos;force_over_128tb&apos;  /dev/md1&lt;/p&gt;

&lt;p&gt;-J size=400 -I 256 -i 1048576 -q -O extents,uninit_bg,dir_nlink,huge_file,64bit,flex_bg -G 256 -E lazy_journal_init,lazy_itable_init=0 -F&lt;/p&gt;

&lt;p&gt;2. Inode count limitation&lt;br/&gt;
There is an inode count limit check in the mkfs utility (misc/mke2fs.c): &quot;num_inodes &amp;gt; MAX_32_NUM&quot;&lt;br/&gt;
With the current option -i 1048576, the inode count for a 256TB OST is 256M, which is less than 2^32-1. The smallest bytes-per-inode ratio for 256TB is 32769; if this parameter is smaller, the value is truncated by the mkfs utility to the maximum possible.&lt;br/&gt;
&lt;b&gt;The case with -i &amp;lt; 32769 was successfully tested&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;The required MDS inode count can be calculated: it has to be more than the OST inode count * the number of OSTs. This calculation is for the worst case, with 1 stripe. With the current option -i 1048576, the inode count for a 256TB OST is 256M. The maximum inode count is 2^32=4294967296, so this limit is exceeded with 4294967296/256M=16 OSTs.&lt;br/&gt;
Often this parameter is smaller for an MDS than the default (-i 4096, for example).&lt;br/&gt;
Such a ratio (-i 4096) can&#8217;t be used for a 256TB disk, because it would exceed 4G inodes. Currently, because of the inode count limitation, an MDT sometimes can&#8217;t be used fully. Probably it is time to extend this limit (should we add such a task?).&lt;/p&gt;

&lt;p&gt;3. Directory format: 32 directories with 64k files&lt;br/&gt;
An OST has 32 object directories, and each of them can store 64k files. Thus, the limit on the number of files on an OST would be&lt;br/&gt;
65536 * 32 = 2097152. The dir_nlink option fixes this situation. The pieces of code that enable this option are shown below:&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;&lt;span class=&quot;code-keyword&quot;&gt;static&lt;/span&gt; void ext4_inc_count(handle_t *handle, struct inode *inode)
{
        inc_nlink(inode);
        &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (is_dx(inode) &amp;amp;&amp;amp; inode-&amp;gt;i_nlink &amp;gt; 1) {
                &lt;span class=&quot;code-comment&quot;&gt;/* limit is 16-bit i_links_count */&lt;/span&gt;
                &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (inode-&amp;gt;i_nlink &amp;gt;= EXT4_LINK_MAX || inode-&amp;gt;i_nlink == 2) {
                        inode-&amp;gt;i_nlink = 1;
                        EXT4_SET_RO_COMPAT_FEATURE(inode-&amp;gt;i_sb,
                                              EXT4_FEATURE_RO_COMPAT_DIR_NLINK);
                }
        }
}

/*
 * If a directory had nlink == 1, then we should let it be 1. This indicates
 * directory has &amp;gt;LDISKFS_LINK_MAX subdirs.
 */
&lt;span class=&quot;code-keyword&quot;&gt;static&lt;/span&gt; void ldiskfs_dec_count(handle_t *handle, struct inode *inode)
{
        &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (!S_ISDIR(inode-&amp;gt;i_mode) || inode-&amp;gt;i_nlink &amp;gt; 2)
                drop_nlink(inode);
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;There are some doubts about how this code works when i_nlink becomes less than EXT4_LINK_MAX. There is the sanity run_test 51b &quot;exceed 64k subdirectory nlink limit&quot;, but it has some issues:&lt;br/&gt;
a. It tests exceeding 64k subdirectories on an MDS, but an OST differs from an MDS (at least an OST uses the VFS)&lt;br/&gt;
b. The test doesn&#8217;t create 64k files&lt;br/&gt;
The requirements for test improvement were added to 3.1.3.&lt;br/&gt;
Test case:&lt;br/&gt;
a. Create more than 64k files on an ldiskfs partition&lt;br/&gt;
b. Try to delete files so the file count is less than EXT4_LINK_MAX&lt;br/&gt;
c. Force a file system check with fsck (OST)&lt;br/&gt;
&lt;b&gt;successfully tested&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;4. Performance near the first and last blocks of the disk&lt;br/&gt;
Due to the large disk size, some performance loss near the end of the surface is possible. There are mkfs options that move some metadata to the start of the disk (flex_bg and -G). These options are used in some configurations, but the numbers should be corrected. &#8220;-G 256&#8221; means that 256 block groups are allocated at the start of the disk to store bitmaps and inode tables. This parameter can be adjusted for the new disk size. The patch that adds the &quot;-G&quot; option in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6442&quot; title=&quot;mkfs -G &amp;lt;value&amp;gt; parameter is not changed actually and default value is applied&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6442&quot;&gt;&lt;del&gt;LU-6442&lt;/del&gt;&lt;/a&gt; has landed.&lt;/p&gt;

&lt;p&gt;5. ldiskfs data structure limitations&lt;br/&gt;
5.1 The blocks parameter of ext4_map_inode_page() should be 64 bits long&lt;br/&gt;
There is a function with the parameter &#8220;unsigned long *blocks&#8221;:&lt;br/&gt;
int ext4_map_inode_page(struct inode *inode, struct page *page,&lt;br/&gt;
unsigned long *blocks, int create)&lt;/p&gt;

&lt;p&gt;But ext4_bmap() returns a sector_t value:&lt;/p&gt;

&lt;p&gt;static sector_t ext4_bmap(struct address_space *mapping, sector_t block)&lt;br/&gt;
blocks[i] = ext4_bmap(inode-&amp;gt;i_mapping, iblock);&lt;/p&gt;

&lt;p&gt;sector_t, depending on the CONFIG_LBDAF macro, can be 32 or 64 bits long:&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;/**
 * The type used for indexing onto a disc or disc partition.
 *
 * Linux always considers sectors to be 512 bytes long independently
 * of the devices real block size.
 *
 * blkcnt_t is the type of the inode&apos;s block count.
 */
#ifdef CONFIG_LBDAF
typedef u64 sector_t;
typedef u64 blkcnt_t;
#else
typedef unsigned long sector_t;
typedef unsigned long blkcnt_t;
#endif
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;CONFIG_LBDAF: Enable block devices or files of size 2TB and larger. This option is required to support the full capacity of large (2TB+) block devices, including RAID, disk, Network Block Device, Logical Volume Manager (LVM) and loopback. This option also enables support for single files larger than 2TB. The ext4 filesystem requires that this feature be enabled in order to support filesystems that have the huge_file feature enabled. Otherwise, it will refuse to mount in the read-write mode any filesystems that use the huge_file feature, which is enabled by default by mke2fs.ext4. The GFS2 filesystem also requires this feature. If unsure, say Y.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;So we need to use sector_t for this array of blocks.&lt;br/&gt;
The dr_blocks field in osd_iobuf and its users should be corrected.&lt;br/&gt;
This fix applies &lt;b&gt;to x86_32 systems only&lt;/b&gt;, because unsigned long is 64 bits long on x86_64 systems. The fix was uploaded to (&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6464&quot; title=&quot;ldiskfs: ext4_map_inode_page() ready for large blocks count&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6464&quot;&gt;&lt;del&gt;LU-6464&lt;/del&gt;&lt;/a&gt;, landed)&lt;/p&gt;

&lt;p&gt;6. Obdfilter. Block addressing etc.&lt;br/&gt;
Nothing suspicious.&lt;/p&gt;

&lt;p&gt;7. Possible extended attribute inode overflow&lt;br/&gt;
xattrs store a 32-bit inode number (as expected):&lt;br/&gt;
__le32  e_value_inum;   /* inode in which the value is stored */&lt;br/&gt;
An xattr inode&#8217;s blocks are addressed using local block counters.&lt;/p&gt;

&lt;p&gt;8. Quota limits: sizes and inodes&lt;br/&gt;
Nothing changed. 32-bit counters are used for inode addressing; quotas are still ready for such counters.&lt;/p&gt;

&lt;p&gt;9. llog: llog id limitations&lt;br/&gt;
The llog subsystem uses llog_logid, which contains an ost_id with 64-bit types.&lt;/p&gt;

&lt;p&gt;10. Tools: fsck, 64-bit block numbers&lt;br/&gt;
There is a 64-bit type for addressing blocks by number:&lt;br/&gt;
typedef __u64 __bitwise         blk64_t;&lt;br/&gt;
and a 32-bit version:&lt;br/&gt;
typedef __u32 __bitwise         blk_t;&lt;/p&gt;

&lt;p&gt;1) The 32-bit type is used for bad block access in the wrong way. There is a patch that changes bad block numbers to 64 bits, &lt;a href=&quot;http://patchwork.ozlabs.org/patch/279297/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://patchwork.ozlabs.org/patch/279297/&lt;/a&gt;; we could port it or redo it from scratch. (LU-XXXX)&lt;br/&gt;
2) Some functions in the bitmap layer use blk_t, and sometimes blk_t and blk64_t are used in the same operation.&lt;br/&gt;
However, for large EXT4 file systems extents are used for addressing blocks, so the bitmap code is not used. (LU-XXXX)&lt;br/&gt;
3) Hurd translators&lt;br/&gt;
4) For backward compatibility&lt;br/&gt;
(LU-XXXX)&lt;/p&gt;

&lt;p&gt;11. e2fsprogs update&lt;br/&gt;
It looks like all 64-bit related patches have landed to master (&lt;a href=&quot;http://git.kernel.org/cgit/fs/ext2/e2fsprogs.git/log/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://git.kernel.org/cgit/fs/ext2/e2fsprogs.git/log/&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;12. fsck time&lt;br/&gt;
fsck on a full 256TB partition without errors should complete in a reasonable time. This needs to be checked.&lt;/p&gt;

&lt;p&gt;13. lfsck&lt;br/&gt;
lfsck doesn&#8217;t use global block counters. There are also no other limitations.&lt;/p&gt;

&lt;p&gt;For the points marked (LU-XXXX) above, patches will be uploaded in the near future.&lt;/p&gt;</comment>
                            <comment id="149193" author="adilger" created="Sun, 17 Apr 2016 09:23:35 +0000"  >&lt;p&gt;Thank you for this detailed analysis. For some reason I don&apos;t recall reading it, maybe because it was posted on Christmas and I was on holidays for a couple of weeks and missed it on my return. In any case it looks very thorough. &lt;/p&gt;

&lt;p&gt;Some issues I think are important in this area to discuss in advance if you plan to keep enhancing ext4 for even larger OSTs:&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;badblocks: this is generally unused, which is why the patch was rejected. That said, I don&apos;t know if Darrick went back and audited the badblocks code to properly reject block numbers larger than 2^32 or not.&lt;/li&gt;
	&lt;li&gt;if filesystems get any larger, we will need to force the ext4 meta_bg feature on, because the group descriptors will not be able to fit into the first group after it grows beyond 32767 blocks, ~= 2M groups ~= 256 TB.  The meta_bg option is much more efficient than without, but suffers from a lack of robustness because there is only a single copy of the last group&apos;s description block (unfortunately this feature was implemented and landed in private before such issues could be discussed and resolved).&lt;/li&gt;
	&lt;li&gt;I don&apos;t think there is much value to ldiskfs MDTs with more than 4B inodes. It is always possible to use DNE, which will give better performance and workload isolation, allow parallel e2fsck, and in any case there are relatively few systems that are even hitting the 4B limit before seeing problems with performance. If you &lt;em&gt;did&lt;/em&gt; want to go down this route, then it makes sense to use the dirdata feature to allow optionally storing the high 32 bits of the inode number into direntries, which is what the first dirdata bit was reserved for. This would keep compatibility with existing directories, and this feature could be enabled on existing filesystems without the need to rewrite all directories with a 64-bit inode direntry, or have problems adding a 64-bit inode number to an existing directory with only 32-bit dirents.&lt;/li&gt;
	&lt;li&gt;probably a feature like bigalloc would be interesting for OSTs since it can speed up allocation performance, but the drawback is that this is very inefficient for small files. This might be compensated by having larger inodes (e.g. 4KB) and then using the inline data feature to store smaller files inside the inode. Another benefit of bigalloc is to avoid fragmentation of the &lt;tt&gt;O/0/d&amp;#42;&lt;/tt&gt; directories.&lt;/li&gt;
	&lt;li&gt;e2fsck performance will become an issue at this scale, and it would likely need to be parallelized to be able to complete in a reasonable time. It could reasonably expect multiple disks at this scale, so having larger numbers of IOs in flight would help, as would an event-driven model with aio that generates lists of blocks to check (itable blocks first), submits them to disk, and then processes them as they are read, generating more blocks to read (more itable blocks, indirect/index/xattr/directory blocks, etc), repeat.&lt;/li&gt;
	&lt;li&gt;I&apos;m not sure if the 16TB extent-mapped file size limit will be important for Lustre, since it is always possible (and desirable for many reasons) to stripe a file widely long before this size is hit for a single file. With PFL it is also possible to restripe a file widely at the end to avoid this problem. True, it would be possible to fill the whole Lustre filesystem with a single file, but that has never been a concern in the past and we&apos;ve had OSTs &amp;gt; 16TB for some time.&lt;/li&gt;
	&lt;li&gt;the three-level htree/2GB+ directory patch for e2fsck is relatively well understood and described in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1365&quot; title=&quot;Implement ldiskfs LARGEDIR support for e2fsprogs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1365&quot;&gt;&lt;del&gt;LU-1365&lt;/del&gt;&lt;/a&gt; and seems like a good place to start. The htree limit is relatively easy to test with 1KB blocksize and long filenames with hard links (createmany -l).  This has been discussed many times with the other ext4 devs and would very likely be accepted with little complaint.&lt;/li&gt;
	&lt;li&gt;the large xattr patch needs to be able to store 64KB xattrs directly into blocks, and is described in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-908&quot; title=&quot;multi-block xattr support&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-908&quot;&gt;&lt;del&gt;LU-908&lt;/del&gt;&lt;/a&gt; in detail. Kalpak is also very aware of this, as he worked on it in the past. This might also speed up wide striped file access a bit.&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;If you are planning to do further enhancements to ldiskfs, I&apos;d strongly recommend to discuss them on the linux-ext4 mailing list first, so they have a chance to be improved and hopefully landed instead of being for Lustre only. &lt;/p&gt;</comment>
                            <comment id="149194" author="adilger" created="Sun, 17 Apr 2016 09:36:05 +0000"  >&lt;p&gt;More on the MDT side, a couple of interesting possibilities exist:&lt;/p&gt;

&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;the inline_data feature may be of interest on the MDT together with Data-on-MDT, or for small directories.&lt;/li&gt;
	&lt;li&gt;shrinking existing very large but mostly empty directories could be done efficiently. The high bits of the htree logical block pointers are reserved for storing the &quot;fullness&quot; of each leaf block. With the 3-level htree patch, there are 4 bits of space there, which is enough to have 1/16 gradients of fullness. The idea is that when adjacent blocks become less than, say, 1/3 or 1/4 full, they could be merged when deleting files. We don&apos;t want to merge when just below 1/2 full, since this could cause repeated split/merge cycles, so some hysteresis is needed. This is actually a topic of interest for ext4 right now, because of high latency to ls a large-but-mostly-empty directory.&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;PS: if you do plan on working on any new features, we should move the discussion to new tickets, if they don&apos;t already have one.&lt;/p&gt;</comment>
                            <comment id="149845" author="gerrit" created="Fri, 22 Apr 2016 15:47:16 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/17702/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/17702/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7592&quot; title=&quot;Change force_over_128tb Lustre mount option to force_over_256tb for ldiskfs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7592&quot;&gt;&lt;del&gt;LU-7592&lt;/del&gt;&lt;/a&gt; osd-ldiskfs: increase supported ldiskfs fs size limit&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 5ca1a1e01d456c09d11d8a3409a83e055a7974a1&lt;/p&gt;</comment>
                            <comment id="150168" author="gerrit" created="Tue, 26 Apr 2016 08:22:03 +0000"  >&lt;p&gt;Artem Blagodarenko (artem.blagodarenko@seagate.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/19788&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/19788&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7592&quot; title=&quot;Change force_over_128tb Lustre mount option to force_over_256tb for ldiskfs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7592&quot;&gt;&lt;del&gt;LU-7592&lt;/del&gt;&lt;/a&gt; osd-ldiskfs: remove force_over_128 warning&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 8f912feba1ce961ab9ba060f7d0674c13968f4a0&lt;/p&gt;</comment>
                            <comment id="184308" author="artem_blagodarenko" created="Fri, 10 Feb 2017 07:28:18 +0000"  >&lt;p&gt;&lt;a href=&quot;https://review.whamcloud.com/#/c/19788/3&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/#/c/19788&lt;/a&gt;&#160;is abandoned because its change is landed as part of &lt;a href=&quot;https://review.whamcloud.com/#/c/24524/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/#/c/24524&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="192546" author="adilger" created="Tue, 18 Apr 2017 17:40:57 +0000"  >&lt;p&gt;The two patches here were landed for 2.9.0.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="42652">LU-8974</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="38564">LU-8465</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzxwk7:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                </customfields>
    </item>
</channel>
</rss>