Lustre / LU-11205

Failure to clear the changelog for user 1 on MDT

Details

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Major
    • Fix Version/s: None
    • Affects Version/s: Lustre 2.12.0, Lustre 2.10.4, Lustre 2.10.6
    • Environment: CentOS 7.4 (3.10.0-693.2.2.el7_lustre.pl1.x86_64)
    • Severity: 3

    Description

      Hello,

      We're seeing the following messages on Oak's MDT in 2.10.4:

      Aug 03 09:21:39 oak-md1-s2 kernel: Lustre: 11137:0:(mdd_device.c:1577:mdd_changelog_clear()) oak-MDD0000: Failure to clear the changelog for user 1: -22
      Aug 03 09:31:38 oak-md1-s2 kernel: Lustre: 11271:0:(mdd_device.c:1577:mdd_changelog_clear()) oak-MDD0000: Failure to clear the changelog for user 1: -22
      

      Robinhood (also running 2.10.4) shows this:

      2018/08/03 10:00:47 [13766/22] ChangeLog | ERROR: llapi_changelog_clear("oak-MDT0000", "cl1", 13975842301) returned -22
      2018/08/03 10:00:47 [13766/22] EntryProc | Error -22 performing callback at stage STAGE_CHGLOG_CLR.
      2018/08/03 10:00:47 [13766/16] llapi | cannot purge records for 'cl1'
      2018/08/03 10:00:47 [13766/16] ChangeLog | ERROR: llapi_changelog_clear("oak-MDT0000", "cl1", 13975842303) returned -22
      2018/08/03 10:00:47 [13766/16] EntryProc | Error -22 performing callback at stage STAGE_CHGLOG_CLR.
      2018/08/03 10:00:47 [13766/4] llapi | cannot purge records for 'cl1'
      2018/08/03 10:00:47 [13766/4] ChangeLog | ERROR: llapi_changelog_clear("oak-MDT0000", "cl1", 13975842304) returned -22
      2018/08/03 10:00:47 [13766/4] EntryProc | Error -22 performing callback at stage STAGE_CHGLOG_CLR.
      

      Oak's MDT usage is as follows:

      [root@oak-md1-s2 ~]# df -h -t lustre
      Filesystem                  Size  Used Avail Use% Mounted on
      /dev/mapper/md1-rbod1-mdt0  1.3T  131G 1022G  12% /mnt/oak/mdt/0
      [root@oak-md1-s2 ~]# df -i -t lustre
      Filesystem                    Inodes     IUsed     IFree IUse% Mounted on
      /dev/mapper/md1-rbod1-mdt0 873332736 266515673 606817063   31% /mnt/oak/mdt/0
      

      I'm concerned that the MDT might fill up with changelogs. Could you please assist in troubleshooting this issue?
      Thanks!
      Stephane
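      For reference, one quick way to check whether changelog records are actually piling up is to compare the current changelog index with the index each registered user has cleared, via the changelog_users parameter on the MDS. This is only a diagnostic sketch: the MDT name is taken from the logs above, and the comments describe the expected output rather than showing real values.

      [root@oak-md1-s2 ~]# lctl get_param mdd.oak-MDT0000.changelog_users
      # Prints the current changelog index plus one line per registered user
      # (e.g. cl1) with the last record index that user has cleared.
      # A large and growing gap between the two means records are
      # accumulating on the MDT.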

      Attachments

        1. changelog-reader.tgz
          0.8 kB
        2. dk_ornl_20190328_1216.gz
          4.61 MB
        3. dk_ornl_20190328.gz
          4.59 MB
        4. dk.1547747365.gz
          8.56 MB
        5. dk.1547747668.gz
          8.58 MB
        6. dk.1547828521.gz
          7.96 MB
        7. f2_llog_reader_20190328.gz
          1.03 MB
        8. lu-11205-ssec.log.gz
          647 kB
        9. ornl_0x1_0xd8_0x0.gz
          1.19 MB

        Issue Links

          Activity

            [LU-11205] Failure to clear the changelog for user 1 on MDT

            We're also seeing this with the following combination - sles11 sp4 client
            > lustre-client-2.12.4-1.x86_64
            > robinhood-lustre-3.1.5-1.lustre2.12.x86_64

            against a 2.10.8 RHEL 7.6 server
            > kernel-3.10.0-957.1.3.el7_lustre.x86_64
            > kmod-lustre-2.10.8-1.el7.x86_64
            > kmod-lustre-osd-ldiskfs-2.10.8-1.el7.x86_64
            > lustre-2.10.8-1.el7.x86_64
            > lustre-osd-ldiskfs-mount-2.10.8-1.el7.x86_64
            > lustre-resource-agents-2.10.8-1.el7.x86_64

            Robinhood will run fine for a while, then stop clearing changelogs:

            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS | ==================== Dumping stats at 2020/04/16 00:15:54 =====================
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS | ======== General statistics =========
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS | Daemon start time: 2020/04/15 12:45:52
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS | Started modules: log_reader
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS | ChangeLog reader #0:
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |    fs_name    =   pgfs
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |    mdt_name   =   MDT0000
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |    reader_id  =   cl1
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |    records read        = 15682180
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |    interesting records = 4444583
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |    suppressed records  = 11175370
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |    records pending     = 451
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |    last received: rec_id=161344242, rec_time=2020/04/16 00:15:53.691544, received at 2020/04/16 00:15:53.738479
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |        receive speed: 409.11 rec/sec, log/real time ratio: 1.00
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |    last pushed: rec_id=161343461, rec_time=2020/04/16 00:15:48.603088, pushed at 2020/04/16 00:15:53.737393
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |        push speed: 419.50 rec/sec, log/real time ratio: 1.00
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |    last committed: rec_id=161343461, rec_time=2020/04/16 00:15:48.603088, committed at 2020/04/16 00:15:53.741704
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |        commit speed: 419.50 rec/sec, log/real time ratio: 1.00
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |    last cleared: rec_id=161343461, rec_time=2020/04/16 00:15:48.603088, cleared at 2020/04/16 00:15:53.742430
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |        clear speed: 419.50 rec/sec, log/real time ratio: 1.00
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |    ChangeLog stats:
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |    MARK: 0, CREAT: 882918, MKDIR: 128738, HLINK: 501, SLINK: 22345, MKNOD: 0, UNLNK: 493670
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |    RMDIR: 54927, RENME: 64817, RNMTO: 0, OPEN: 0, CLOSE: 13427698, LYOUT: 78, TRUNC: 64903
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |    SATTR: 456295, XATTR: 48510, HSM: 0, MTIME: 36766, CTIME: 14, ATIME: 0, MIGRT: 0
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |    FLRW: 0, RESYNC: 0, GXATR: 0, NOPEN: 0
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS | ==== EntryProcessor Pipeline Stats ===
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS | Idle threads: 24
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS | Id constraints count: 0 (hash min=0/max=0/avg=0.0)
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS | Name constraints count: 0 (hash min=0/max=0/avg=0.0)
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS | Stage              | Wait | Curr | Done |     Total | ms/op |
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |  0: GET_FID        |    0 |    0 |    0 |         0 |  0.00 |
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |  1: GET_INFO_DB    |    0 |    0 |    0 |     40627 |  0.36 |
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |  2: GET_INFO_FS    |    0 |    0 |    0 |     40475 |  0.07 |
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |  3: PRE_APPLY      |    0 |    0 |    0 |     40597 |  0.00 |
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |  4: DB_APPLY       |    0 |    0 |    0 |     40597 |  0.13 | 98.15% batched (avg batch size: 28.7)
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |  5: CHGLOG_CLR     |    0 |    0 |    0 |     40627 |  0.02 |
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS |  6: RM_OLD_ENTRIES |    0 |    0 |    0 |         0 |  0.00 |
            2020/04/16 00:15:54 robinhood@bullion[14485/1] STATS | DB ops: get=3538391/ins=959210/upd=2832420/rm=566905
            2020/04/16 00:16:00 robinhood@bullion[14485/2] ChangeLog | Error in llapi_changelog_recv(): -5: Input/output error
            2020/04/16 00:16:01 robinhood@bullion[14485/2] ChangeLog | Error in llapi_changelog_recv(): -5: Input/output error
            2020/04/16 00:16:02 robinhood@bullion[14485/2] ChangeLog | Error in llapi_changelog_recv(): -5: Input/output error
            2020/04/16 00:16:03 robinhood@bullion[14485/2] ChangeLog | Error in llapi_changelog_recv(): -5: Input/output error
            2020/04/16 00:16:04 robinhood@bullion[14485/2] ChangeLog | Error in llapi_changelog_recv(): -5: Input/output error
            2020/04/16 00:16:05 robinhood@bullion[14485/2] ChangeLog | Error in llapi_changelog_recv(): -5: Input/output error
            2020/04/16 00:16:06 robinhood@bullion[14485/2] ChangeLog | Error in llapi_changelog_recv(): -5: Input/output error
            

            but there was nothing untoward in the MDS logs at the same time, although there was a load peak and CPUs in 'wait' (Grafana plots from collectd can be attached). LNet traffic from the MDS wasn't anything particularly impressive.

            The Robinhood server is one of the few machines we've tested with a 2.12 client - we're still mostly on 2.10.8 or 2.7 (Cray) as we've still got a 2.5.x filesystem (Sonexion) and don't want to move too far ahead with clients.

            Elwell Andrew Elwell added a comment

            Still a problem with 2.12.3: Robinhood hangs when reading the changelogs, and when this happens an error appears on the console. I need to restart Robinhood almost every day due to this problem:

            Nov 28 14:58:11 fir-rbh01 kernel: LustreError: 80708:0:(mdc_changelog.c:249:chlg_load()) fir-MDT0002-mdc-ffff95cae9e38800: fail to process llog: rc = -5
            Nov 28 20:08:16 fir-rbh01 kernel: LustreError: 70617:0:(mdc_changelog.c:249:chlg_load()) fir-MDT0003-mdc-ffff95cae9e38800: fail to process llog: rc = -5
            Nov 29 02:17:01 fir-rbh01 kernel: LustreError: 123662:0:(mdc_changelog.c:249:chlg_load()) fir-MDT0001-mdc-ffff95cae9e38800: fail to process llog: rc = -5
            Dec 01 14:06:55 fir-rbh01 kernel: LustreError: 78509:0:(mdc_changelog.c:249:chlg_load()) fir-MDT0000-mdc-ffff95cae9e38800: fail to process llog: rc = -2
            Dec 02 15:39:14 fir-rbh01 kernel: LustreError: 887:0:(mdc_changelog.c:249:chlg_load()) fir-MDT0002-mdc-ffff95cae9e38800: fail to process llog: rc = -2
            Dec 05 00:12:07 fir-rbh01 kernel: LustreError: 12066:0:(mdc_changelog.c:249:chlg_load()) fir-MDT0003-mdc-ffff95cae9e38800: fail to process llog: rc = -5
            

            Note that this isn't a problem with 2.10.8.

            And Mike, regarding the changelog growing issue, I don't know; we have never noticed this specific problem (which doesn't mean it doesn't happen, I was concerned about that too). I think LLNL reported it at some point. Because we're DoM-ready, our MDTs are very large.

            sthiell Stephane Thiell added a comment

            Stephane, does it cause the changelog to keep growing, as the original issue did?

            tappro Mikhail Pershin added a comment

            We just upgraded our Lustre 2.12 servers and the Robinhood client to 2.12.3 RC1 and we're still seeing these log messages:

            fir-md1-s3: Oct 16 21:40:09 fir-md1-s3 kernel: Lustre: 18584:0:(mdd_device.c:1807:mdd_changelog_clear()) fir-MDD0002: Failure to clear the changelog for user 1: -22
            

            Not sure about the real impact though.

            sthiell Stephane Thiell added a comment
            pjones Peter Jones added a comment -

            The consensus seems to be that this can be closed as a duplicate of LU-11426

            qian_wc Qian Yingjin added a comment -

            From my understanding of the current changelog mechanism in the source code, changelog records will not be deleted if there is more than one changelog user registered against it.

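            As a follow-on note (a hedged sketch, not taken from any of the systems in this ticket): if an extra or stale changelog user is what is holding records back, the registered users can be listed and an unused one deregistered on the MDS, which lets the records it was holding be purged. The MDT name and the user ID cl2 below are purely illustrative.

            # List registered changelog users and the index each has cleared.
            lctl get_param mdd.*.changelog_users
            # Deregister an unused consumer (illustrative ID) so its outstanding
            # records can be purged. Do not deregister an ID that an active
            # Robinhood instance is using.
            lctl --device oak-MDT0000 changelog_deregister cl2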

            Agreed that the proposed fix for LU-11426 should also resolve this.

            Related to the "orphan records" that James mentioned, in my testing of this patch it did not (fully) resolve that issue: the only way I could get them to go away was to rebuild the changelog by deregistering all readers.

            olaf Olaf Weber (Inactive) added a comment

            LU-11426 would fix the ordering. So error -22 wouldn't happen.

            aboyko Alexander Boyko added a comment

            @Olaf Weber, the patch allows clearing of unordered records. Processing them is a different case; I do think the consuming software should take care of that: read a number of records and process them in order. In most cases it can process unordered records too. We should not see unordered operations for the same file or directory because they are synchronized by the parent lock.

            aboyko Alexander Boyko added a comment
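            As a rough command-line illustration of that read-then-clear pattern (a sketch only; the MDT name and record numbers are illustrative), records can be dumped from a given start record and then acknowledged in order once they have been processed:

            # Dump changelog records starting at a known record number.
            lfs changelog oak-MDT0000 13975842300
            # After processing, acknowledge (clear) everything up to the last
            # record handled for reader cl1.
            lfs changelog_clear oak-MDT0000 cl1 13975842400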

            It is hard to say whether it really reduces the lost-records issue, as too much changed in the cluster I tested on to make it a truly apples-to-apples comparison.

            olaf Olaf Weber (Inactive) added a comment

            People

              Assignee: tappro Mikhail Pershin
              Reporter: sthiell Stephane Thiell
              Votes: 3
              Watchers: 26
