  Lustre / LU-2902

sanity test_156: NOT IN CACHE: before: , after:

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Fix Version/s: Lustre 2.4.1, Lustre 2.5.0
    • Affects Version/s: Lustre 2.4.0
    • Severity: 3
    • 6990

    Description

      This issue was created by maloo for Oleg Drokin <green@whamcloud.com>

      This issue relates to the following test suite run: https://maloo.whamcloud.com/test_sets/406900a6-84d3-11e2-9ab1-52540035b04c.

      The sub-test test_156 failed with the following error:

      NOT IN CACHE: before: 16741, after: 16741

      This seems to have an astounding 21% failure rate, and nobody has filed a ticket for it yet.
      It might be related to the older LU-2009, which, it seems, was never really investigated.

      Info required for matching: sanity 156
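
      For context, the check that fails here compares the aggregate OST read-cache hit count before and after a read that is expected to be served from the server-side read cache. A minimal sketch of that logic, assuming the usual sanity.sh helpers (the real test uses roc_hit() and cancel_lru_locks, collects the stats on the OSS nodes, and the exact parameter paths differ between versions):

      # Sketch of a roc_hit()-style check (simplified; not the exact sanity.sh code).
      roc_hit() {
              # Sum the cache_hit counter across all OST stats files.
              lctl get_param -n obdfilter.*.stats osd-*.*.stats 2>/dev/null |
                      awk '/^cache_hit / { sum += $2 } END { printf "%d\n", sum }'
      }

      before=$(roc_hit)
      cancel_lru_locks osc                # drop client-cached pages so the read goes to the OST
      cat $DIR/$tfile > /dev/null         # re-read a file that was just written
      after=$(roc_hit)

      [ "$after" -gt "$before" ] ||
              error "NOT IN CACHE: before: $before, after: $after"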

    Attachments

    Issue Links

    Activity

            [LU-2902] sanity test_156: NOT IN CACHE: before: , after:

            I opened LU-3094 to track the test_132 issues.

            keith Keith Mannthey (Inactive) added a comment

            Thanks for the links:

            https://maloo.whamcloud.com/test_sets/016cbfcc-9816-11e2-879d-52540035b04c
            sanity test_151 AND test_156: @@@@@@ FAIL: NOT IN CACHE: before: , after:

            This is the "after" call with some spaces added for readability.

            snapshot_time 1364501362.939531 secs.usecs read_bytes 11 samples [bytes] 4096 1048576 1429504 write_bytes 6 samples [bytes] 1910 1048576 1910466 get_info 165 samples [reqs] connect 1 samples [reqs] disconnect 1 samples [reqs] statfs 84 samples [reqs] create 2 samples [reqs] destroy 75 samples [reqs] setattr 1 samples [reqs] punch 2 samples [reqs] sync 7 samples [reqs] preprw 17 samples [reqs] commitrw 17 samples [reqs] ping 95 samples [reqs]
            snapshot_time 1364501362.939706 secs.usecs read_bytes 770 samples [bytes] 4096 1048576 805322752 write_bytes 770 samples [bytes] 3013 1048576 805321669 get_info 164 samples [reqs] connect 1 samples [reqs] disconnect 1 samples [reqs] statfs 84 samples [reqs] create 2 samples [reqs] destroy 77 samples [reqs] punch 5 samples [reqs] sync 6 samples [reqs] preprw 1540 samples [reqs] commitrw 1540 samples [reqs] ping 83 samples [reqs]
            snapshot_time 1364501362.939814 secs.usecs read_bytes 3 samples [bytes] 8192 8192 24576 write_bytes 2 samples [bytes] 6096 6096 12192 get_info 157 samples [reqs] connect 1 samples [reqs] disconnect 1 samples [reqs] statfs 84 samples [reqs] create 2 samples [reqs] destroy 74 samples [reqs] punch 1 samples [reqs] sync 3 samples [reqs] preprw 5 samples [reqs] commitrw 5 samples [reqs] ping 96 samples [reqs]
            snapshot_time 1364501362.939862 secs.usecs read_bytes 2 samples [bytes] 8192 8192 16384 write_bytes 3 samples [bytes] 1916 6096 12108 get_info 153 samples [reqs] connect 1 samples [reqs] disconnect 1 samples [reqs] statfs 84 samples [reqs] create 2 samples [reqs] destroy 75 samples [reqs] punch 1 samples [reqs] sync 4 samples [reqs] preprw 5 samples [reqs] commitrw 5 samples [reqs] ping 95 samples [reqs]
            snapshot_time 1364501362.939991 secs.usecs read_bytes 2 samples [bytes] 8192 8192 16384 write_bytes 2 samples [bytes] 777 6096 6873 get_info 156 samples [reqs] connect 1 samples [reqs] disconnect 1 samples [reqs] statfs 84 samples [reqs] create 2 samples [reqs] destroy 76 samples [reqs] setattr 2 samples [reqs] punch 3 samples [reqs] sync 3 samples [reqs] preprw 4 samples [reqs] commitrw 4 samples [reqs] ping 94 samples [reqs]
            snapshot_time 1364501362.940079 secs.usecs read_bytes 2 samples [bytes] 8192 8192 16384 write_bytes 2 samples [bytes] 1916 6096 8012 get_info 154 samples [reqs] connect 1 samples [reqs] disconnect 1 samples [reqs] statfs 84 samples [reqs] create 2 samples [reqs] destroy 76 samples [reqs] setattr 1 samples [reqs] punch 1 samples [reqs] sync 2 samples [reqs] preprw 4 samples [reqs] commitrw 4 samples [reqs] ping 95 samples [reqs]
            snapshot_time 1364501362.940243 secs.usecs read_bytes 2 samples [bytes] 4096 12288 16384 write_bytes 2 samples [bytes] 12288 50400 62688 get_info 150 samples [reqs] connect 1 samples [reqs] disconnect 1 samples [reqs] statfs 84 samples [reqs] create 2 samples [reqs] destroy 73 samples [reqs] setattr 1 samples [reqs] sync 2 samples [reqs] preprw 4 samples [reqs] commitrw 4 samples [reqs] ping 97 samples [reqs]
            
            

            There are 7 snapshot_time entries (7 OSTs), and they are all obdfilter.lustre-OST*.stats results. There has been some activity on the filesystem.

            There is no sign of the stats from "osd-*.lustre-OST*.stats".

            There are zero cache accesses to report on all 7 OSTs; either the cache is broken or it is just not reporting correctly.
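
            One thing that would make future logs easier to interpret (a suggestion, not what the collection currently does) is to gather the stats without -n, so each snapshot_time block is labelled with the target it belongs to and a missing osd-*.stats section stands out immediately:

            # Print the stats with their parameter names so each block is attributed to a device.
            lctl get_param osd-*.lustre-OST*.stats obdfilter.lustre-OST*.stats | \
                    egrep 'stats=|cache_(hit|miss|access)'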

            I noticed in the dmesg of the OST that there are really bad things happening in test 132. https://maloo.whamcloud.com/test_logs/2613697e-9817-11e2-879d-52540035b04c/download

            It looks like there is some communication breakdown, and the OSTs remount a few times?

            There are bad messages that don't appear in other runs of the test. Perhaps the system is in some bad state after test 132.

            Part of the OST dmesg:

            Lustre: DEBUG MARKER: test -b /dev/lvm-OSS/P5
            Lustre: DEBUG MARKER: mkdir -p /mnt/ost5; mount -t lustre   		                   /dev/lvm-OSS/P5 /mnt/ost5
            LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. quota=on. Opts: 
            Lustre: DEBUG MARKER: PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/u
            LustreError: 137-5: UUID 'lustre-OST0005_UUID' is not available for connect (no target)
            LustreError: Skipped 1 previous similar message
            LustreError: 4490:0:(ldlm_resource.c:1161:ldlm_resource_get()) lvbo_init failed for resource 663: rc -2
            LustreError: 4490:0:(ldlm_resource.c:1161:ldlm_resource_get()) Skipped 184 previous similar messages
            
            

            There is a lot more in the dmesg, but all in all it does not look good.

            keith Keith Mannthey (Inactive) added a comment - edited
            sebastien.buisson Sebastien Buisson (Inactive) added a comment - It seems we hit two new occurrences of this problem on master yesterday:
            https://maloo.whamcloud.com/test_sets/016cbfcc-9816-11e2-879d-52540035b04c
            https://maloo.whamcloud.com/test_sets/ab3b7452-97f4-11e2-a652-52540035b04c

            I thought I would post an update. The issue has not occurred since the debug patch landed (neither test_151 nor test_156 has failed; I spent some time looking in Maloo). There are failures on OpenSFS systems running old code, but nothing in the current master space.

            I am all for leaving the debug patch in (Andreas considered it OK to leave in for the longer term), and we will see what happens.

            keith Keith Mannthey (Inactive) added a comment

            Reducing priority until this occurs again.
            We should review this in the future to see whether it occurs again.

            jlevi Jodi Levi (Inactive) added a comment

            It would be nice if we could land http://review.whamcloud.com/5648 so we can see the /proc state when we are in autotest and getting these errors.

            I agree it "seems" like things should be well and good before we get here, but perhaps there is some reset or failover that resets the stats. I will wait for info from autotest, as the issue will hopefully be clearer then. For the larger issue, it could save memory/space to only register values that are relevant to a given lu object type, but I am sure that is out of the scope of this LU.
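
            As a rough sketch of the kind of failure-time /proc capture that would help (the actual content of change 5648 may differ; do_nodes, osts_nodes, comma_list and error are the usual test-framework helpers):

            # Hypothetical failure path: dump server-side stats before reporting the error.
            check_cache_hit_increased() {
                    local before=$1 after=$2
                    if [ "$after" -le "$before" ]; then
                            do_nodes $(comma_list $(osts_nodes)) \
                                    "lctl get_param obdfilter.*.stats osd-*.*.stats"
                            error "NOT IN CACHE: before: $before, after: $after"
                    fi
            }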

            keith Keith Mannthey (Inactive) added a comment

            I think the intent is to not print all of the stats for operations that are not relevant for a particular device (e.g. "rename" for OSTs, or "read" for MDTs). One would think that by the time test_156 rolls around there would have been read operations on the device?

            adilger Andreas Dilger added a comment
            keith Keith Mannthey (Inactive) added a comment - edited

            Well, the real code has been around for a long time.

            d62360b7 (nathan                 2008-09-22 22:20:42 +0000 1505)        if (ret.lc_count == 0)
            d62360b7 (nathan                 2008-09-22 22:20:42 +0000 1506)                goto out;
            

            Any input is appreciated. I will play with removing those lines tomorrow.

            keith Keith Mannthey (Inactive) added a comment - edited

            Well, I have been looking into the lproc subsystem for a bit.

            I see this in lustre/obdclass/lprocfs_status.c (which seems to be the correct spot for the values in question):

            static int lprocfs_stats_seq_show(struct seq_file *p, void *v)
            {
                    struct lprocfs_stats            *stats  = p->private;
                    struct lprocfs_counter          *cntr   = v;
                    struct lprocfs_counter          ret;
                    struct lprocfs_counter_header   *header;
                    int                             entry_size;
                    int                             idx;
                    int                             rc      = 0;
            
                    if (cntr == &(stats->ls_percpu[0])->lp_cntr[0]) {
                            struct timeval now;
                            cfs_gettimeofday(&now);
                            rc = seq_printf(p, "%-25s %lu.%lu secs.usecs\n",
                                            "snapshot_time", now.tv_sec, now.tv_usec);
                            if (rc < 0)
                                    return rc;
                    }
                    entry_size = sizeof(*cntr);
                    if (stats->ls_flags & LPROCFS_STATS_FLAG_IRQ_SAFE)
                            entry_size += sizeof(__s64);
                    idx = ((void *)cntr - (void *)&(stats->ls_percpu[0])->lp_cntr[0]) /
                            entry_size;
            
                    header = &stats->ls_cnt_header[idx];
                    lprocfs_stats_collect(stats, idx, &ret);
            
                    if (ret.lc_count == 0)    <======  Can someone please explain this? It is why roc_hit sees no reads
                            goto out;
            ...
            

            Git blame says I need to mail BobiJam.

            ca461f0f (Bobi Jam               2013-01-19 00:54:32 +0800 1542)        if (ret.lc_count == 0)
            ca461f0f (Bobi Jam               2013-01-19 00:54:32 +0800 1543)                goto out;
            

            But that commit only changed the tabs.


            I spent a bit more time looking into the no-info case, and I found this:

            [root@ost lustre-OST0000]# /usr/sbin/lctl get_param -n osd-*.lustre-OST*.stats obdfilter.lustre-OST*.stats
            snapshot_time             1363657474.555425 secs.usecs
            get_page                  1 samples [usec] 1 1 1 1
            snapshot_time             1363657474.555530 secs.usecs
            write_bytes               1 samples [bytes] 18 18 18
            get_info                  9 samples [reqs]
            set_info_async            1 samples [reqs]
            connect                   3 samples [reqs]
            disconnect                1 samples [reqs]
            statfs                    243 samples [reqs]
            create                    2 samples [reqs]
            destroy                   3 samples [reqs]
            sync                      2 samples [reqs]
            preprw                    1 samples [reqs]
            commitrw                  1 samples [reqs]
            ping                      225 samples [reqs]
            

            This was pretty much a fresh mount with no IO.

            Did a little I/O, a sync, and a read.
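
            The sequence was roughly of this shape (an illustrative reconstruction; the exact commands, paths, and sizes were not recorded):

            # Illustrative reconstruction of the I/O sequence (hypothetical paths and sizes).
            dd if=/dev/zero of=/mnt/lustre/cachetest bs=1M count=40    # a little write I/O
            sync                                                       # push it out to the OSTs
            lctl set_param ldlm.namespaces.*osc*.lru_size=clear        # drop client-side locks/cache
            dd if=/mnt/lustre/cachetest of=/dev/null bs=4k count=1     # read back; should hit the OST cache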

            [root@ost lustre-OST0000]# /usr/sbin/lctl get_param -n osd-*.lustre-OST*.stats obdfilter.lustre-OST*.stats
            snapshot_time             1363657550.849448 secs.usecs
            get_page                  44 samples [usec] 0 2 25 29
            cache_access              1 samples [pages] 1 1 1
            cache_hit                 1 samples [pages] 1 1 1
            snapshot_time             1363657550.849551 secs.usecs
            read_bytes                1 samples [bytes] 4096 4096 4096
            write_bytes               43 samples [bytes] 18 1048576 42510794
            get_info                  16 samples [reqs]
            set_info_async            1 samples [reqs]
            connect                   3 samples [reqs]
            disconnect                1 samples [reqs]
            statfs                    258 samples [reqs]
            create                    2 samples [reqs]
            destroy                   6 samples [reqs]
            sync                      3 samples [reqs]
            preprw                    44 samples [reqs]
            commitrw                  44 samples [reqs]
            ping                      236 samples [reqs]
            

            It appears we never display "0" for some proc values. The proc interface should be static so that tools can be built around it. I am looking into why "0" is not displayed. You can see that "cache_miss" (which should report 0) is not displayed at this time on my system.
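
            Until that is resolved, anything that parses these files has to treat an absent counter as zero rather than as an error. A minimal sketch of such a tolerant reader (a hypothetical helper, not the current roc_hit code):

            # Return the sample count for a named counter, defaulting to 0 when the line is not printed.
            get_counter() {
                    local params=$1 name=$2
                    lctl get_param -n $params 2>/dev/null |
                            awk -v n="$name" '$1 == n { sum += $2 } END { printf "%d\n", sum + 0 }'
            }

            hits=$(get_counter "obdfilter.*.stats osd-*.*.stats" cache_hit)
            misses=$(get_counter "obdfilter.*.stats osd-*.*.stats" cache_miss)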

            keith Keith Mannthey (Inactive) added a comment

            It is not safe to close this issue as it represents the 3 issues seen by "roc_hit" users.

            The only one we have a handle on is the early evictions due to memory pressure. There may be other improper evictions, but there is not enough data from the autotest runs to say what has happened.

            For sure, this one:
            NOT IN CACHE: before: 16749, after: 16755
            seems to show 2 successful reads on the system: a single read is +3, and this is +6 (16755 - 16749), so the data may have been re-read.

            This is not covered by any other LU. Debug data from autotest is needed.

            Perhaps we want to split this up into a few tickets, but they all come down to roc_hit having trouble and cache behavior not being deterministic (hopefully it is just due to memory pressure, but there is no way to tell at this point; the tests need to be improved).
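
            If memory pressure on the OSS is the suspect, the extra data worth capturing at failure time would be along these lines (a suggestion; the cache tunable names below are assumptions and may vary between releases):

            # Hypothetical memory/read-cache snapshot from the OSS nodes at failure time.
            do_nodes $(comma_list $(osts_nodes)) \
                    "grep -E 'MemFree|^Cached' /proc/meminfo; \
                     lctl get_param obdfilter.*.read_cache_enable \
                                    obdfilter.*.readcache_max_filesize 2>/dev/null"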

            I am actively working on this issue as a blocker. If this is meant to close down the HB blocker and punt this to a minor issue, please close it again and I will stop working on it.

            keith Keith Mannthey (Inactive) added a comment

            People

              Assignee: Keith Mannthey (Inactive)
              Reporter: Maloo
              Votes: 0
              Watchers: 12
