<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:42:24 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-4402] Ldiskfs errors ldiskfs_ext_find_extent, ldiskfs_ext_get_blocks, corruption</title>
                <link>https://jira.whamcloud.com/browse/LU-4402</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Starting with an otherwise operating filesystem, we had an NFS issue on the management server that serves nfsroot to the nodes. This caused the nodes to hang on shell probes, ssh, etc., but Lustre appeared to work okay until the mount came back and there was a spew of I/O errors. We had -o errors=panic, so the nodes rebooted and we have a crash dump as well. A few of the interesting/disturbing messages are below, and a complete log capture of the interval is attached. We rebooted every single one of our Lustre systems that mounted this nfsroot and started Lustre back up. At this point, does an e2fsck seem prudent given the messages? Please advise.&lt;/p&gt;

&lt;p&gt;And for clarity&apos;s sake, this is from a completely separate system than &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-486&quot; title=&quot;ldiskfs_valid_block_bitmap: Invalid block bitmap&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-486&quot;&gt;&lt;del&gt;LU-486&lt;/del&gt;&lt;/a&gt; that I just updated.&lt;/p&gt;

&lt;p&gt;Dec 19 14:07:28 atlas-oss3b4 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1987655.565953&amp;#93;&lt;/span&gt; end_request: I/O error, dev dm-0, sector 2641342080&lt;br/&gt;
... more of those ...&lt;br/&gt;
Dec 19 14:07:34 atlas-oss2f4 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1987662.210829&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-8): ldiskfs_ext_find_extent: bad header/extent in inode #395571: invalid magic - magic 5fa6, entries 39658, max 42407(0), depth 37176(0)&lt;br/&gt;
Dec 19 14:09:02 atlas-linkfarm kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1987621.160868&amp;#93;&lt;/span&gt; LustreError: 13071:0:(tgt_lastrcvd.c:577:tgt_client_new()) linkfarm-MDT0000: Failed to write client lcd at idx 18888, rc -30&lt;br/&gt;
Dec 19 14:09:11 atlas-mds1 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1355985.425764&amp;#93;&lt;/span&gt; LustreError: 15139:0:(osp_precreate.c:484:osp_precreate_send()) atlas1-OST0043-osc-MDT0000: can\&apos;t precreate: rc = -30&lt;br/&gt;
Dec 19 14:09:11 atlas-mds1 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1355985.439222&amp;#93;&lt;/span&gt; LustreError: 15139:0:(osp_precreate.c:484:osp_precreate_send()) Skipped 990 previous similar messages&lt;br/&gt;
Dec 19 14:09:11 atlas-mds1 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1355985.451122&amp;#93;&lt;/span&gt; LustreError: 15139:0:(osp_precreate.c:989:osp_precreate_thread()) atlas1-OST0043-osc-MDT0000: cannot precreate objects: rc = -30&lt;br/&gt;
Dec 19 14:09:11 atlas-mds1 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1355985.465640&amp;#93;&lt;/span&gt; LustreError: 15139:0:(osp_precreate.c:989:osp_precreate_thread()) Skipped 990 previous similar messages&lt;br/&gt;
Dec 19 14:09:55 atlas-mds1 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1356029.474926&amp;#93;&lt;/span&gt; INFO: task mdt00_027:12952 blocked for more than 120 seconds.&lt;br/&gt;
Dec 19 14:10:46 atlas-oss1d1 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;691120.297183&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-1): ldiskfs_ext_find_extent: bad header/extent in inode #395600: invalid magic - magic a7bc, entries 21131, max 744(0), depth 0(0)&lt;br/&gt;
Dec 19 14:12:42 atlas-oss2d6 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1987967.199569&amp;#93;&lt;/span&gt; Buffer I/O error on device dm-9, logical block 5638&lt;br/&gt;
Dec 19 14:12:42 atlas-oss2d6 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1987967.199571&amp;#93;&lt;/span&gt; lost page write due to I/O error on dm-9&lt;br/&gt;
Dec 19 14:12:42 atlas-oss2d6 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1987967.199578&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-9): kmmpd:&lt;br/&gt;
Dec 19 14:12:42 atlas-oss2d6 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1987967.199582&amp;#93;&lt;/span&gt; Aborting journal on device dm-9-8.&lt;br/&gt;
Dec 19 14:12:42 atlas-oss2d6 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1987967.199586&amp;#93;&lt;/span&gt; Buffer I/O error on device dm-9, logical block 137&lt;br/&gt;
Dec 19 14:12:42 atlas-oss2d6 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1987967.199588&amp;#93;&lt;/span&gt; Error writing to MMP block&lt;br/&gt;
Dec 19 14:12:42 atlas-oss2d6 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1987967.199589&amp;#93;&lt;/span&gt; lost page write due to I/O error on dm-9&lt;br/&gt;
Dec 19 14:12:42 atlas-oss2d6 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1987967.199590&amp;#93;&lt;/span&gt;&lt;br/&gt;
Dec 19 14:12:42 atlas-oss2d6 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1987967.199592&amp;#93;&lt;/span&gt; LDISKFS-fs (dm-9): Remounting filesystem read-only&lt;br/&gt;
Dec 19 14:13:25 atlas-oss2f5 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1988014.344558&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-6): ldiskfs_ext_find_extent: bad header/extent in inode #268955: invalid magic - magic e79d, entries 37634, max 32686(0), depth 47774(0)&lt;/p&gt;

&lt;p&gt;Dec 19 14:14:50 atlas-oss1a5 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;691359.246757&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-9): file system corruption: inode #591204 logical block 447 mapped to 137004702958273 (size 1)&lt;br/&gt;
Dec 19 14:14:53 atlas-oss1a5 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;691362.375749&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-9): file system corruption: inode #591204 logical block 447 mapped to 137004702958273 (size 1)&lt;br/&gt;
Dec 19 14:14:57 atlas-oss1a5 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;691366.367144&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-9): file system corruption: inode #591204 logical block 447 mapped to 137004702958273 (size 1)&lt;br/&gt;
Dec 19 14:15:02 atlas-oss1a5 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;691371.356427&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-9): file system corruption: inode #591204 logical block 447 mapped to 137004702958273 (size 1)&lt;br/&gt;
Dec 19 14:15:08 atlas-oss1a5 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;691377.343445&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-9): file system corruption: inode #591204 logical block 447 mapped to 137004702958273 (size 1)&lt;br/&gt;
Dec 19 14:15:15 atlas-oss1a5 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;691384.328232&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-9): file system corruption: inode #591204 logical block 447 mapped to 137004702958273 (size 1)&lt;br/&gt;
Dec 19 14:15:16 atlas-oss2e1 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1988136.371625&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-3): ldiskfs_ext_find_extent: bad header/extent in inode #330052: invalid magic - magic 53be, entries 21067, max 517(0),&lt;br/&gt;
 depth 0(0)&lt;br/&gt;
Dec 19 14:15:23 atlas-oss1a5 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;691392.310962&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-9): file system corruption: inode #591204 logical block 447 mapped to 137004702958273 (size 1)&lt;br/&gt;
Dec 19 14:15:32 atlas-oss1a5 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;691401.291535&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-9): file system corruption: inode #591204 logical block 447 mapped to 137004702958273 (size 1)&lt;br/&gt;
Dec 19 14:15:42 atlas-oss1a5 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;691411.269872&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-9): file system corruption: inode #591204 logical block 447 mapped to 137004702958273 (size 1)&lt;/p&gt;

&lt;p&gt;Dec 19 14:21:50 atlas-oss1d3 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;691780.266369&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-2): ldiskfs_ext_find_extent: bad header/extent in inode #199528: invalid magic - magic 0, entries 0, max 0(0), depth 0(0&lt;br/&gt;
)&lt;/p&gt;


&lt;p&gt;Dec 19 14:22:41 atlas-oss1d3 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;691831.895277&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-2): ldiskfs_ext_get_blocks: inode #199528: (comm ll_ost_io02_007) bad extent address iblock: 447, depth: 1 pblock 0&lt;/p&gt;</description>
                <environment>RHEL6.4/distro IB/2.6.32-358.18.1.el6&lt;br/&gt;
</environment>
        <key id="22533">LU-4402</key>
            <summary>Ldiskfs errors ldiskfs_ext_find_extent, ldiskfs_ext_get_blocks, corruption</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="bzzz">Alex Zhuravlev</assignee>
                                    <reporter username="blakecaldwell">Blake Caldwell</reporter>
                        <labels>
                    </labels>
                <created>Fri, 20 Dec 2013 04:29:37 +0000</created>
                <updated>Sat, 8 Feb 2014 05:35:02 +0000</updated>
                            <resolved>Sat, 1 Feb 2014 01:55:31 +0000</resolved>
                                    <version>Lustre 2.4.1</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>6</watches>
                                                                            <comments>
                            <comment id="73909" author="bzzz" created="Fri, 20 Dec 2013 05:02:49 +0000"  >&lt;p&gt;&amp;gt;Dec 19 14:07:25 atlas-mgs2 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1988887.590310&amp;#93;&lt;/span&gt; sd 6:0:8:1: &lt;span class=&quot;error&quot;&gt;&amp;#91;sdw&amp;#93;&lt;/span&gt; Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.&lt;/p&gt;

&lt;p&gt;sounds like at some point underlying device changed?&lt;/p&gt;</comment>
                            <comment id="73910" author="blakecaldwell" created="Fri, 20 Dec 2013 05:21:30 +0000"  >&lt;p&gt;Well, I&apos;m going to say that it&apos;s not directly correlated, but there&apos;s more to it. That host is not actually part of the filesystem. It connects to, but does not have any LUNs mounted on, the same SAS-attached array that the MGS and MDS share, so perhaps it indicates an event on one of the other systems. We&apos;ve grown accustomed to seeing those messages with RHEL hosts when something happens on the storage array (object storage arrays from a different brand do this too), and it has not indicated an actual LUN assignment change. It could be caused by an event on one of the other systems (atlas-mds1, atlas-mds3, atlas-mgs1), such as an I/O error after the nfsroot returned.&lt;/p&gt;</comment>
                            <comment id="73912" author="bzzz" created="Fri, 20 Dec 2013 05:38:32 +0000"  >&lt;p&gt;Thanks for the clarification.&lt;br/&gt;
Could it be that some configuration change on the storage system caused the following I/O errors?&lt;br/&gt;
Dec 19 14:07:28 atlas-oss3b4 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1987655.605866&amp;#93;&lt;/span&gt; end_request: I/O error, dev dm-0, sector 2641344352&lt;/p&gt;

&lt;p&gt;The confusing thing is that a number of the systems started to observe corruptions:&lt;br/&gt;
Dec 19 14:10:46 atlas-oss1d1 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;691120.297183&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-1): ldiskfs_ext_find_extent: bad header/extent in inode #395600: invalid magic - magic a7bc, entries 21131, max 744(0), depth 0(0)&lt;br/&gt;
Dec 19 14:11:34 atlas-oss1b2 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;691163.534948&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-18): ldiskfs_ext_find_extent: bad header/extent in inode #268868: invalid magic - magic 222a, entries 21150, max 9(0), depth 0(0)&lt;br/&gt;
Dec 19 14:11:44 atlas-oss1a5 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;691173.077090&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-9): ldiskfs_ext_find_extent: bad header/extent in inode #591204: invalid magic - magic a9bf, entries 21131, max 868(0), depth 0(0)&lt;br/&gt;
Dec 19 14:11:55 atlas-oss1d4 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;691184.758850&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-4): ldiskfs_ext_find_extent: bad header/extent in inode #269305: invalid magic - magic b84, entries 44395, max 21257(0), depth 62507(0)&lt;/p&gt;
</comment>
                            <comment id="73913" author="blakecaldwell" created="Fri, 20 Dec 2013 05:53:46 +0000"  >&lt;p&gt;I agree it&apos;s strange how many errors there were. atlas-oss1d1 and atlas-oss1b2 use physically separate storage arrays. Of those listed, only atlas-oss1d1 and atlas-oss1d4 share the same storage device. I checked the logs on the OST storage controllers, and all they saw was the hosts logging out and then back in when they were rebooted.&lt;/p&gt;

&lt;p&gt;The common piece that sticks out to me is that all systems had their nfsroot filesystems disrupted. They have recovered from this transparently a hundred times before.&lt;/p&gt;</comment>
                            <comment id="74496" author="jlevi" created="Tue, 7 Jan 2014 18:55:47 +0000"  >&lt;p&gt;Are there any next steps on this ticket given the information that has been posted? I.e., should this ticket be closed?&lt;/p&gt;</comment>
                            <comment id="74590" author="blakecaldwell" created="Wed, 8 Jan 2014 19:43:34 +0000"  >&lt;p&gt;What would be the best way of validating what this message is saying... assuming the inode still exists? Since this is at the ldiskfs layer, how do we correlate it to inode #395571? Debugfs? Is there anything we can do live, without a downtime for e2fsck?&lt;/p&gt;

&lt;p&gt;Dec 19 14:07:34 atlas-oss2f4 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;1987662.210829&amp;#93;&lt;/span&gt; LDISKFS-fs error (device dm-8): ldiskfs_ext_find_extent: bad header/extent in inode #395571: invalid magic - magic 5fa6, entries 39658, max 42407(0), depth 37176(0)&lt;/p&gt;</comment>
                            <comment id="74893" author="bzzz" created="Tue, 14 Jan 2014 03:47:09 +0000"  >&lt;p&gt;You can try this on the mounted filesystem: debugfs -R &quot;stat &amp;lt;395571&amp;gt;&quot;&lt;/p&gt;</comment>
                            <comment id="75395" author="blakecaldwell" created="Tue, 21 Jan 2014 22:49:18 +0000"  >&lt;p&gt;Thanks. It turns out that we just had a downtime, and I was able to run e2fsck across all OSTs. Only 2 problems were found, and they did not correlate to the issues in the log messages. The first problem was with inode 3, which I gather is a user quota file. The second inode had a fid but could not be found with fid2path. If the i_size difference for inode 3 is tolerable, then I believe we have arrived at the end of the road with this case.&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@atlas-oss2e4 ~&amp;#93;&lt;/span&gt;# e2fsck -f /dev/mapper/atlas-ddn2e-l22&lt;br/&gt;
e2fsck 1.42.7.wc1 (12-Apr-2013)&lt;br/&gt;
Pass 1: Checking inodes, blocks, and sizes&lt;br/&gt;
Inode 3, i_size is 92160, should be 106496.  Fix&amp;lt;y&amp;gt;? yes&lt;br/&gt;
Pass 2: Checking directory structure&lt;br/&gt;
Pass 3: Checking directory connectivity&lt;br/&gt;
Pass 4: Checking reference counts&lt;br/&gt;
Pass 5: Checking group summary information&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@atlas-oss2i1 ~&amp;#93;&lt;/span&gt;# e2fsck -f /dev/mapper/atlas-ddn2i-l2&lt;br/&gt;
e2fsck 1.42.7.wc1 (12-Apr-2013)&lt;br/&gt;
Pass 1: Checking inodes, blocks, and sizes&lt;br/&gt;
Inode 209, i_size is 25165824, should be 26214400.  Fix&amp;lt;y&amp;gt;? yes&lt;br/&gt;
Pass 2: Checking directory structure&lt;br/&gt;
Pass 3: Checking directory connectivity&lt;br/&gt;
Pass 4: Checking reference counts&lt;br/&gt;
Pass 5: Checking group summary information&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@atlas-oss2i1 ~&amp;#93;&lt;/span&gt;# debugfs -R &quot;stat &amp;lt;209&amp;gt;&quot; /dev/mapper/atlas-ddn2i-l2&lt;br/&gt;
debugfs 1.42.7.wc1 (12-Apr-2013)&lt;br/&gt;
Inode: 209   Type: regular    Mode:  0666   Flags: 0x80000&lt;br/&gt;
Generation: 3823026230    Version: 0x00000003:00036bbd&lt;br/&gt;
User:  9032   Group: 18319   Size: 25165824&lt;br/&gt;
File ACL: 0    Directory ACL: 0&lt;br/&gt;
Links: 1   Blockcount: 51208&lt;br/&gt;
Fragment:  Address: 0    Number: 0    Size: 0&lt;br/&gt;
 ctime: 0x52ceeba5:00000000 &amp;#8211; Thu Jan  9 13:34:13 2014&lt;br/&gt;
 atime: 0x52ceeba5:00000000 &amp;#8211; Thu Jan  9 13:34:13 2014&lt;br/&gt;
 mtime: 0x52ceeba5:00000000 &amp;#8211; Thu Jan  9 13:34:13 2014&lt;br/&gt;
crtime: 0x524b4897:d033afac &amp;#8211; Tue Oct  1 18:11:35 2013&lt;br/&gt;
Size of extra inode fields: 28&lt;br/&gt;
Extended attributes stored in inode body: &lt;br/&gt;
  lma = &quot;00 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00 7c 00 00 00 00 00 00 00 &quot; (24)&lt;br/&gt;
  lma: fid=&lt;span class=&quot;error&quot;&gt;&amp;#91;0x1000000:0x7c000000:0x0&amp;#93;&lt;/span&gt; compat=0 incompat=0&lt;br/&gt;
  fid = &quot;88 14 00 00 02 00 00 00 38 02 00 00 03 00 00 00 &quot; (16)&lt;br/&gt;
  fid: parent=&lt;span class=&quot;error&quot;&gt;&amp;#91;0x200001488:0x238:0x0&amp;#93;&lt;/span&gt; stripe=3&lt;br/&gt;
EXTENTS:&lt;br/&gt;
(ETB0):60304384, (0-255):60295936-60296191, (256-511):60296960-60297215, (512-1023):60298240-60298751, (1024-2047):60302336-60303359, (2048-4095):60305408-60307455, (4096-6143):60317696-60319743, (6144-6399):60327936-60328191&lt;/p&gt;


&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@atlas-oss2i1 ~&amp;#93;&lt;/span&gt;# debugfs -R &quot;ncheck 209&quot; /dev/mapper/atlas-ddn2i-l2&lt;br/&gt;
debugfs 1.42.7.wc1 (12-Apr-2013)&lt;br/&gt;
Inode   Pathname&lt;br/&gt;
209     /O/0/d28/124&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@client1 etc&amp;#93;&lt;/span&gt;# lfs fid2path /lustre/atlas1 &lt;span class=&quot;error&quot;&gt;&amp;#91;0x1000000:0x7c000000:0x0&amp;#93;&lt;/span&gt;&lt;br/&gt;
fid2path error: No such file or directory&lt;/p&gt;</comment>
                            <comment id="76006" author="jamesanunez" created="Fri, 31 Jan 2014 20:37:09 +0000"  >&lt;p&gt;Blake, &lt;/p&gt;

&lt;p&gt;Should we close this ticket or is there something else you need to be resolved?&lt;/p&gt;

&lt;p&gt;Thanks, &lt;br/&gt;
James&lt;/p&gt;</comment>
                            <comment id="76007" author="blakecaldwell" created="Fri, 31 Jan 2014 20:48:33 +0000"  >&lt;p&gt;This can be closed. Thanks James.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                            <attachment id="13946" name="atlas_20131219_errors.log.gz" size="251" author="blakecaldwell" created="Fri, 20 Dec 2013 04:29:37 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwbpb:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>12084</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10021"><![CDATA[2]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>