[LU-8465] parallel e2fsck performance at scale Created: 02/Aug/16 Updated: 26/Nov/21 Resolved: 30/Sep/20 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | Lustre 2.14.0 |
| Type: | Improvement | Priority: | Major |
| Reporter: | Artem Blagodarenko (Inactive) | Assignee: | Wang Shilong (Inactive) |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None | ||
| Attachments: |
|
| Issue Links: |
|
| Rank (Obsolete): | 9223372036854775807 |
| Description |
|
As mentioned elsewhere, e2fsck performance will become an issue at this scale, and it would likely need to be parallelized to be able to complete in a reasonable time. A filesystem of this scale can reasonably be expected to span multiple disks, so having larger numbers of IOs in flight would help, as would an event-driven model with AIO that generates lists of blocks to check (itable blocks first), submits them to disk, and then processes them as they are read, generating more blocks to read (more itable blocks, indirect/index/xattr/directory blocks, etc.), and repeats. |
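As a rough sketch of the event-driven AIO model described above (illustrative only; the block-0 seed, fixed queue size, and process_block() stub are assumptions, not part of any patch on this ticket), a scanner could keep a bounded number of POSIX aio reads in flight and feed newly discovered blocks back into the work list:

```c
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define BLOCK_SIZE    4096
#define MAX_IN_FLIGHT 32
#define MAX_QUEUE     1024

struct read_req {
	struct aiocb cb;
	char buf[BLOCK_SIZE];
	unsigned long block;	/* block number being read */
	int busy;
};

/* Placeholder: parse the block just read and append any newly
 * discovered blocks (more itable, index, xattr, directory blocks)
 * to the work list.  A real pass1 would do the actual checks here
 * and bound the queue growth. */
static void process_block(struct read_req *req,
			  unsigned long *queue, int *queued)
{
	(void)req; (void)queue; (void)queued;
}

int main(int argc, char **argv)
{
	struct read_req reqs[MAX_IN_FLIGHT];
	unsigned long queue[MAX_QUEUE] = { 0 };	/* seed: first itable block */
	int queued = 1, next = 0, in_flight = 0;
	int fd;

	if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0)
		return 1;
	memset(reqs, 0, sizeof(reqs));

	while (next < queued || in_flight > 0) {
		/* submit reads while there are free slots and pending work */
		for (int i = 0; i < MAX_IN_FLIGHT && next < queued; i++) {
			struct read_req *req = &reqs[i];

			if (req->busy)
				continue;
			memset(&req->cb, 0, sizeof(req->cb));
			req->block = queue[next++];
			req->cb.aio_fildes = fd;
			req->cb.aio_buf = req->buf;
			req->cb.aio_nbytes = BLOCK_SIZE;
			req->cb.aio_offset = (off_t)req->block * BLOCK_SIZE;
			if (aio_read(&req->cb) == 0) {
				req->busy = 1;
				in_flight++;
			}
		}
		/* reap completions; each completion may queue more work */
		for (int i = 0; i < MAX_IN_FLIGHT; i++) {
			struct read_req *req = &reqs[i];

			if (!req->busy || aio_error(&req->cb) == EINPROGRESS)
				continue;
			aio_return(&req->cb);
			process_block(req, queue, &queued);
			req->busy = 0;
			in_flight--;
		}
		/* a real implementation would block in aio_suspend()
		 * here instead of spinning */
	}
	close(fd);
	return 0;
}
```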
| Comments |
| Comment by Andreas Dilger [ 08/Dec/17 ] |
|
I think that before you start looking at format changes to improve e2fsck, … There was work done in 1.42 or 1.43 to do async readahead of inode tables. Firstly, an example e2fsck run on my home OST and MDT filesystems:
myth-OST0000: clean, 359496/947456 files, 781092161/970194944 blocks
e2fsck 1.42.13.wc5 (15-Apr-2016)
Pass 1: Checking inodes, blocks, and sizes
Pass 1: Memory used: 5116k/932k (4903k/214k), time: 181.93/ 4.12/ 1.84
Pass 1: I/O read: 160MB, write: 0MB, rate: 0.88MB/s
Pass 2: Checking directory structure
Pass 2: Memory used: 11532k/1084k (4614k/6919k), time: 3.51/ 2.59/ 0.12
Pass 2: I/O read: 37MB, write: 0MB, rate: 10.53MB/s
Pass 3: Checking directory connectivity
Pass 3A: Memory used: 11532k/1084k (4617k/6916k), time: 0.00/ 0.00/ 0.00
Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 3: Memory used: 11532k/1084k (4612k/6921k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 1MB, write: 0MB, rate: 225.43MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 5196k/932k (3932k/1265k), time: 0.09/ 0.09/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 7680k/932k (3917k/3764k), time: 227.05/ 1.72/ 3.03
Pass 5: I/O read: 187MB, write: 0MB, rate: 0.82MB/s

myth-MDT0000: clean, 1872215/3932160 files, 824210/3932160 blocks
e2fsck 1.42.13.wc5 (15-Apr-2016)
Pass 1: Checking inodes, blocks, and sizes
Pass 1: Memory used: 13092k/17500k (12876k/217k), time: 19.98/ 5.52/ 1.72
Pass 1: I/O read: 1104MB, write: 0MB, rate: 55.26MB/s
Pass 2: Checking directory structure
Pass 2: Memory used: 21012k/8468k (13732k/7281k), time: 23.28/ 7.54/ 2.25
Pass 2: I/O read: 1113MB, write: 0MB, rate: 47.81MB/s
Pass 3: Checking directory connectivity
Pass 3A: Memory used: 21624k/8468k (14212k/7413k), time: 0.00/ 0.01/ 0.00
Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 3: Memory used: 21624k/5572k (13731k/7893k), time: 0.06/ 0.05/ 0.00
Pass 3: I/O read: 1MB, write: 0MB, rate: 15.44MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 21624k/484k (2638k/18987k), time: 0.42/ 0.45/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 21624k/0k (2637k/18988k), time: 1.30/ 0.38/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 0.77MB/s

It shows that a large amount of time is spent in pass 1 (inode/block scan). Pass 2 (directory structure) is trivial on an OST because it has so few directories. Pass 5 is slow on the OST filesystem because it is HDD based and is not …

As for how to approach development, I'd start with a simple threading … Then, a producer/consumer model for processing inode table blocks could … Finally, a separate producer/consumer model for processing directories … With the addition of larger directory support, this would benefit from …

Cheers, Andreas |
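For the producer/consumer model suggested above, a minimal pthreads work queue could look like the following; the struct layout, queue depth, and itable_worker() are illustrative assumptions, not code from e2fsprogs:

```c
#include <pthread.h>
#include <stdio.h>

#define QUEUE_DEPTH 64

struct itable_queue {
	unsigned long blocks[QUEUE_DEPTH];
	int head, tail, count, done;
	pthread_mutex_t lock;
	pthread_cond_t not_empty, not_full;
};

/* producer side: enqueue one inode-table block to be checked */
static void queue_push(struct itable_queue *q, unsigned long blk)
{
	pthread_mutex_lock(&q->lock);
	while (q->count == QUEUE_DEPTH)
		pthread_cond_wait(&q->not_full, &q->lock);
	q->blocks[q->tail] = blk;
	q->tail = (q->tail + 1) % QUEUE_DEPTH;
	q->count++;
	pthread_cond_signal(&q->not_empty);
	pthread_mutex_unlock(&q->lock);
}

/* consumer side: returns 0 when the producer is done and queue is empty */
static int queue_pop(struct itable_queue *q, unsigned long *blk)
{
	pthread_mutex_lock(&q->lock);
	while (q->count == 0 && !q->done)
		pthread_cond_wait(&q->not_empty, &q->lock);
	if (q->count == 0) {
		pthread_mutex_unlock(&q->lock);
		return 0;
	}
	*blk = q->blocks[q->head];
	q->head = (q->head + 1) % QUEUE_DEPTH;
	q->count--;
	pthread_cond_signal(&q->not_full);
	pthread_mutex_unlock(&q->lock);
	return 1;
}

/* worker thread: check one inode table block at a time */
static void *itable_worker(void *arg)
{
	struct itable_queue *q = arg;
	unsigned long blk;

	while (queue_pop(q, &blk))
		printf("checking itable block %lu\n", blk); /* placeholder */
	return NULL;
}
```

In this sketch the producer (the thread issuing the inode-table reads) calls queue_push() for each block, then sets done = 1 under the lock and broadcasts not_empty so idle workers exit; the mutex and condition variables are initialized with PTHREAD_MUTEX_INITIALIZER / PTHREAD_COND_INITIALIZER or the corresponding init functions.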
| Comment by Andreas Dilger [ 19/Mar/19 ] |
|
Artem, I know this is something you were interested in previously, have you done any work in this area? |
| Comment by Artem Blagodarenko (Inactive) [ 22/Mar/19 ] |
|
Andreas, not so far. We ran into slow e2fsck on one of our systems, and finally made … |
| Comment by Li Xi [ 02/Aug/19 ] |
|
Patch: https://review.whamcloud.com/#/c/35597/ cleanup e2fsck_pass1 |
| Comment by Gerrit Updater [ 02/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35671 |
| Comment by Gerrit Updater [ 02/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35676 |
| Comment by Gerrit Updater [ 02/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35678 |
| Comment by Gerrit Updater [ 03/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35684 |
| Comment by Gerrit Updater [ 04/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35689 |
| Comment by Gerrit Updater [ 05/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35690 |
| Comment by Gerrit Updater [ 05/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35691 |
| Comment by Gerrit Updater [ 06/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35696 |
| Comment by Gerrit Updater [ 06/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35697 |
| Comment by Gerrit Updater [ 06/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35698 |
| Comment by Gerrit Updater [ 06/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35699 |
| Comment by Gerrit Updater [ 06/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35701 |
| Comment by Gerrit Updater [ 06/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35702 |
| Comment by Gerrit Updater [ 06/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35703 |
| Comment by Gerrit Updater [ 07/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35710 |
| Comment by Gerrit Updater [ 07/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35712 |
| Comment by Gerrit Updater [ 07/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35713 |
| Comment by Li Xi [ 08/Aug/19 ] |
|
The implementation of parallel fsck is going to be hard, not because the new code is hard (in fact, the new code and logic are easy), but because the hardest part is adding this code without breaking existing code that has existed and grown for decades. I am using several ways to confirm the patches don't break anything:

TEST #1. The regression tests under the "tests" directory.
TEST #2. A "valgrind" run to rule out memory leaks.
TEST #3. e2fsck on a huge ext4 filesystem with hundreds of millions of inodes to confirm there is no performance regression.

I am running all three of these tests to make sure every patch looks good. If you have ideas for other ways to test, please let me know, and thank you so much!

The design is to copy the fsck context for each pass1 thread, with each context kept separate from the others. When an fsck thread finishes, the global context merges all contexts together and continues running the pass2...5 checks. The plan is to do the following steps one by one:

#1. Clean up the code where possible. Unfortunately, this is hard; in my experience it is very easy to break something. For example, https://review.whamcloud.com/#/c/35597/ passed all regression tests, but fsck was slowed down dramatically.
#2. Copy the context and its sub-fields to a new one, and run the same process as before. No thread is created, and the same functions are used even though the context is a new one.
#3. Copy the context to multiple ones, and split the pass1 inode scan into multiple steps. Each scanning step works on a range of inodes. This is still done without multi-threading. Each step uses a different context, and these contexts are merged into the global one. The global context is then used to run the pass2...5 checks.
#4. Run all kinds of tests for step #3, splitting the pass1 scan into different numbers of loops, from one up to the number of cores. This will make sure the context copying and merging process is correct.
#5. Create a single thread to run the pass1 check in a new context. This should still pass all of the above tests including TEST #1, and it cannot hit disk read/write races.
#6. Create multiple threads to run the pass1 check. Each thread uses its own context and works on a given range of inodes. This step is going to be very hard because of races, including disk read/write conflicts, global variable conflicts and so on, so it will be necessary to split it into multiple steps:
#6.1 Roughly implement the multi-threaded fsck and run it on a clean file system to make sure the checking works well. Performance numbers can be obtained after this step is finished.
#6.2 Implement the full fsck and fix.

After step #6, TEST #1 isn't going to pass because the output of fsck will change. But we can still change the fsck tests to:

FSCK TEST #1. Run fsck multi-threaded, but don't compare the result with the expected single-thread output.
FSCK TEST #2. Run fsck single-threaded, and compare the result with the expected second result.

All TEST #1 tests should pass (maybe with a few exceptions?), and this will make sure multi-threaded fsck at least fixes the file system problems as expected. |
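As a rough illustration of steps #3-#6 above (my own sketch; struct pass1_ctx and its counters are hypothetical stand-ins, not the actual e2fsck context), splitting the block groups into per-thread ranges and merging the per-context results back into the global context could look like this:

```c
#include <pthread.h>

struct pass1_ctx {
	unsigned long group_start, group_end;	/* half-open range [start, end) */
	unsigned long inodes_used;
	unsigned long blocks_used;
};

/* divide ngroups block groups into nthreads contiguous ranges */
static void split_groups(struct pass1_ctx *ctxs, int nthreads,
			 unsigned long ngroups)
{
	unsigned long per = (ngroups + nthreads - 1) / nthreads;

	for (int i = 0; i < nthreads; i++) {
		ctxs[i].group_start = (unsigned long)i * per;
		ctxs[i].group_end = ctxs[i].group_start + per;
		if (ctxs[i].group_start > ngroups)
			ctxs[i].group_start = ngroups;
		if (ctxs[i].group_end > ngroups)
			ctxs[i].group_end = ngroups;
		ctxs[i].inodes_used = 0;
		ctxs[i].blocks_used = 0;
	}
}

/* after all per-range scans finish, fold the results into the
 * global context, which then drives the pass2...5 checks */
static void merge_into_global(struct pass1_ctx *global,
			      const struct pass1_ctx *ctxs, int nthreads)
{
	for (int i = 0; i < nthreads; i++) {
		global->inodes_used += ctxs[i].inodes_used;
		global->blocks_used += ctxs[i].blocks_used;
	}
}
```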
| Comment by Gerrit Updater [ 08/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35726 |
| Comment by Gerrit Updater [ 08/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35731 |
| Comment by Gerrit Updater [ 09/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35748 |
| Comment by Gerrit Updater [ 09/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35753 |
| Comment by James A Simmons [ 09/Aug/19 ] |
|
Is this being pushed upstream as well? |
| Comment by Li Xi [ 09/Aug/19 ] |
|
Hi James, not yet. Will do when the performance numbers confirm the idea. |
| Comment by Gerrit Updater [ 10/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35763 |
| Comment by Gerrit Updater [ 12/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35770 |
| Comment by Gerrit Updater [ 12/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35771 |
| Comment by Gerrit Updater [ 12/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35772 |
| Comment by Gerrit Updater [ 13/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35780 |
| Comment by Gerrit Updater [ 15/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35793 |
| Comment by Gerrit Updater [ 21/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35846 |
| Comment by Gerrit Updater [ 22/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35855 |
| Comment by Gerrit Updater [ 26/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35922 |
| Comment by Li Xi [ 27/Aug/19 ] |
|
On a 1PB ext4 file system, I created 105M inodes and ran e2fsck:
# df | grep sda
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda 1099637648128 13563264 1088621733412 1% /mnt
# df -i | grep sda
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda 1074397184 105887860 968509324 10% /mnt
With the original e2fsck:
e2fsck 1.45.2.wc1 (27-May-2019)
MMP interval is 5 seconds and total wait time is 22 seconds. Please wait...
Pass 1: Checking inodes, blocks, and sizes
Pass 1: Memory used: 75116k/193852k (70544k/4573k), time: 3771.03/1472.81/31.63
Pass 1: I/O read: 52117MB, write: 0MB, rate: 13.82MB/s
Time used: 3771 seconds
Time used: 0 seconds
Pass 2: Checking directory structure
Pass 2: Memory used: 75116k/337408k (27172k/47945k), time: 98.10/86.34/ 3.37
Pass 2: I/O read: 2897MB, write: 0MB, rate: 29.53MB/s
Time used: 98 seconds
Pass 3: Checking directory connectivity
Peak memory: Memory used: 75116k/337408k (27172k/47945k), time: 3879.43/1568.92/35.05
Pass 3: Memory used: 75116k/337408k (25930k/49187k), time: 0.02/ 0.03/ 0.00
Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Time used: 0 seconds
Pass 4: Checking reference counts
Pass 4: Memory used: 75116k/0k (21993k/53124k), time: 12.94/12.95/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Time used: 13 seconds
Pass 5: Checking group summary information
Pass 5: Memory used: 75116k/0k (20108k/55009k), time: 66.93/ 4.12/ 0.13
Pass 5: I/O read: 206MB, write: 0MB, rate: 3.08MB/s
Time used: 67 seconds
105887860 inodes used (9.86%, out of 1074397184)
1 non-contiguous file (0.0%)
0 non-contiguous directories (0.0%)
# of inodes with ind/dind/tind blocks: 0/0/0
Extent depth histogram: 105887853/1
139657888 blocks used (0.05%, out of 275045679104)
0 bad blocks
1 large file
105781892 regular files
105959 directories
0 character device files
0 block device files
0 fifos
0 links
0 symbolic links (0 fast symbolic links)
0 sockets
------------
105887851 files
Memory used: 75116k/0k (20108k/55009k), time: 3959.37/1586.01/35.18
I/O read: 55235MB, write: 0MB, rate: 13.95MB/s
Using the e2fsck with patches https://review.whamcloud.com/#/c/35922/4:
e2fsck 1.45.2.wc1 (27-May-2019)
MMP interval is 5 seconds and total wait time is 22 seconds. Please wait...
iii thread 0 is going to scan group [0, 32788)
[Thread 0] Pass 1: Checking inodes, blocks, and sizes
iii thread 1 is going to scan group [32788, 65576)
[Thread 1] Pass 1: Checking inodes, blocks, and sizes
iii thread 2 is going to scan group [65576, 98364)
[Thread 2] Pass 1: Checking inodes, blocks, and sizes
iii thread 3 is going to scan group [98364, 131152)
[Thread 3] Pass 1: Checking inodes, blocks, and sizes
iii thread 4 is going to scan group [131152, 163940)
[Thread 4] Pass 1: Checking inodes, blocks, and sizes
iii thread 5 is going to scan group [163940, 196728)
[Thread 5] Pass 1: Checking inodes, blocks, and sizes
iii thread 6 is going to scan group [196728, 229516)
[Thread 6] Pass 1: Checking inodes, blocks, and sizes
iii thread 7 is going to scan group [229516, 262304)
[Thread 7] Pass 1: Checking inodes, blocks, and sizes
[Thread 4] Pass 1: Memory used: 205208k/1349632k (204375k/834k), time: 1407.46/11259.11/ 0.78
[Thread 4] Pass 1: I/O read: 1MB, write: 0MB, rate: 0.00MB/s
[Thread 6] Pass 1: Memory used: 205276k/1349632k (203804k/1473k), time: 1407.57/11260.05/ 0.63
[Thread 6] Pass 1: I/O read: 1MB, write: 0MB, rate: 0.00MB/s
[Thread 7] Pass 1: Memory used: 205308k/1349632k (203232k/2077k), time: 1407.66/11260.62/ 0.55
[Thread 7] Pass 1: I/O read: 1MB, write: 0MB, rate: 0.00MB/s
[Thread 3] Pass 1: Memory used: 205376k/1349632k (202660k/2717k), time: 1407.81/11261.19/ 0.82
[Thread 3] Pass 1: I/O read: 1MB, write: 0MB, rate: 0.00MB/s
[Thread 1] Pass 1: Memory used: 206752k/1349632k (202812k/3941k), time: 1408.34/11263.09/ 0.89
[Thread 1] Pass 1: I/O read: 1MB, write: 0MB, rate: 0.00MB/s
[Thread 2] Pass 1: Memory used: 206820k/1349632k (202244k/4577k), time: 1408.45/11263.34/ 0.86
[Thread 2] Pass 1: I/O read: 1MB, write: 0MB, rate: 0.00MB/s
[Thread 5] Pass 1: Memory used: 206892k/1349632k (201674k/5219k), time: 1408.51/11263.37/ 0.72
[Thread 5] Pass 1: I/O read: 1MB, write: 0MB, rate: 0.00MB/s
[Thread 0] Pass 1: Memory used: 254040k/1374780k (245271k/8770k), time: 3805.86/11436.56/10.64
[Thread 0] Pass 1: I/O read: 52117MB, write: 0MB, rate: 13.69MB/s
iii 0 scaned group [0, 32788) 105887861 inodes
iii 1 scaned group [32788, 65576) 0 inodes
iii 2 scaned group [65576, 98364) 0 inodes
iii 3 scaned group [98364, 131152) 0 inodes
iii 4 scaned group [131152, 163940) 0 inodes
iii 5 scaned group [163940, 196728) 0 inodes
iii 6 scaned group [196728, 229516) 0 inodes
iii 7 scaned group [229516, 262304) 0 inodes
Pass 2: Checking directory structure
Pass 2: Memory used: 254040k/1543484k (108778k/145263k), time: 0.10/ 0.03/ 0.08
Pass 2: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 254040k/1543484k (108778k/145263k), time: 3816.89/11446.48/10.78
... |
| Comment by Andreas Dilger [ 27/Aug/19 ] |
|
In most filesystems the inode distribution is uneven. Groups at the start of the filesystem have more inodes allocated than at the end. In the absence of dynamic group allocation, it probably makes more sense to do interleaved group allocation, like one flexbg of inodes (128-256 groups) to each thread in round-robin order. That can still be done without coordination between threads, but is more likely to load the threads evenly. It is also more likely to reduce seeking between threads. It would be useful for your testing to capture blktrace data and plot it with seekwatcher to see the IO patterns of the parallel e2fsck, as it may be that the parallel CPU improvement is being offset by worse IO patterns (pass1 was previously perfectly linear IO). |
| Comment by Li Xi [ 28/Aug/19 ] |
|
Thank you Andreas for the advice. We found the root cause of the long time cost for the threads that actually had no work to do because of empty block groups. The patch https://review.whamcloud.com/#/c/35659/ can fix the problem. I am going to check whether there is any way to get a good number with that patch, and then optimize the group balance between threads as you suggested. |
| Comment by Andreas Dilger [ 28/Aug/19 ] |
|
It isn't clear to me how patch 35659 ("libext2fs: optimize ext2fs_convert_subcluster_bitmap()") could fix this problem? AFAIK, that patch mostly relates to mke2fs. |
| Comment by Li Xi [ 28/Aug/19 ] |
|
> It isn't clear to me how patch 35659 ("libext2fs: optimize ext2fs_convert_subcluster_bitmap()") could fix this problem? AFAIK, that patch mostly relates to mke2fs.
The pass1 check also calls ext2fs_convert_subcluster_bitmap(). The device is 1PB in size and thus has a lot of groups, but most of the groups are not used, so ext2fs_convert_subcluster_bitmap() costs a lot of time. According to the test result, each thread spends ~1400 seconds on this. After applying the patch, the time cost has been reduced to about 10 seconds. |
| Comment by Li Xi [ 28/Aug/19 ] |
|
Following is a test log with patch https://review.whamcloud.com/#/c/35922/8. I used 128 threads to do the pass1 check, and only 13 threads had real work to do. It took at most 800 seconds for the slowest thread to finish. The single-threaded pass1 check took 3771 seconds, which means it might be possible to get a speedup of 3771/800 = 4.7 times. log: https://jira.whamcloud.com/secure/attachment/33433/128_threads.txt |
| Comment by Gerrit Updater [ 29/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/35961 |
| Comment by Andreas Dilger [ 29/Aug/19 ] |
While there is a limit to the parallelism on a given system, if you split the workload into flexbg_count chunks (Lustre OSTs use flexbg_count=256 so that inode tables and block bitmaps are allocated in 1MB chunks on disk) then the workload could be spread over 100 threads in your test case, and the workload would likely be much more even. Each thread would process groups numbered: group = (n * num_threads + thread) * flexbg_count + [0 .. flexbg_count - 1], n = 0, 1, 2, ... In theory, even with the 13 threads doing an even amount of work (I'm assuming a "base" of ~215s for each thread, and the 5000s above that is "work"), pass1 would finish in about 635s. If 104 threads could participate evenly (at 256 groups/thread no more will have work to do), then pass1 could finish in about 265s. |
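A small illustration of the round-robin flex_bg assignment described above (my reading of the formula; group_to_thread() is a hypothetical helper, not from the patches):

```c
/* Round-robin flex_bg assignment: chunk c of flexbg_count groups
 * goes to thread (c % num_threads). */
static int group_to_thread(unsigned long group,
			   unsigned long flexbg_count,
			   int num_threads)
{
	unsigned long chunk = group / flexbg_count;

	return (int)(chunk % num_threads);
}

/* Equivalently, thread t processes groups
 *   (n * num_threads + t) * flexbg_count + [0 .. flexbg_count - 1]
 * for n = 0, 1, 2, ... */
```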
| Comment by Gerrit Updater [ 30/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/36001 |
| Comment by Andreas Dilger [ 30/Aug/19 ] |
|
Li Xi, have you looked at using pthreads for the process control and communication between threads rather than the current fork+exec? Looking at the e2fsck performance results, it seems that pass3 takes as long to complete as pass1, so it seems that once we have pass1 running in parallel we also need to improve pass3 in order to significantly reduce the total runtime. With pass3 (pathname connectivity), it will definitely need more fine-grained communication between the threads to exchange results, and also shared memory to avoid significant duplication of effort or memory usage. |
| Comment by Li Xi [ 30/Aug/19 ] |
Currently, my patches are already using pthreads, not fork+exec.
I don't think so. Pass3 only takes 0.02 seconds according to https://jira.whamcloud.com/browse/LU-8465?focusedCommentId=253670&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-253670 The e2fsck logs are not super clear though: the cumulative "Peak memory" line is printed right after "Pass 3: Checking directory connectivity", which makes it look like pass3 is slow. |
| Comment by Gerrit Updater [ 30/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/36004 |
| Comment by Li Xi [ 30/Aug/19 ] |
|
Patch https://review.whamcloud.com/36004 adds e2fsck_dir_info_min_larger_equal(). test_max_less_equal.c |
| Comment by Gerrit Updater [ 30/Aug/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/36006 |
| Comment by Gerrit Updater [ 02/Sep/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/36018 |
| Comment by Gerrit Updater [ 02/Sep/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/36019 |
| Comment by Gerrit Updater [ 02/Sep/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/36020 |
| Comment by Gerrit Updater [ 03/Sep/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/36026 |
| Comment by Gerrit Updater [ 04/Sep/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/36044 |
| Comment by Gerrit Updater [ 05/Sep/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/36053 |
| Comment by Gerrit Updater [ 05/Sep/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/36054 |
| Comment by Gerrit Updater [ 06/Sep/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/36079 |
| Comment by Gerrit Updater [ 06/Sep/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/36080 |
| Comment by Gerrit Updater [ 06/Sep/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/36081 |
| Comment by Gerrit Updater [ 06/Sep/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/36083 |
| Comment by Gerrit Updater [ 06/Sep/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/36084 |
| Comment by Gerrit Updater [ 08/Sep/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/36097 |
| Comment by Gerrit Updater [ 08/Sep/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/36098 |
| Comment by Gerrit Updater [ 08/Sep/19 ] |
|
Li Xi (lixi@ddn.com) uploaded a new patch: https://review.whamcloud.com/36099 |
| Comment by Wang Shilong (Inactive) [ 19/Nov/19 ] |
|
Looking at the current patches, I think there might be a design problem that was ignored. e2fsck works as follows: e2fsck_pass1->e2fsck_pass1e->e2fsck_pass2->e2fsck_pass3->e2fsck_pass4->e2fsck_pass5. The existing patch flow tries to use pthreads in e2fsck_pass1(), then merge the results after e2fsck_pass1 and continue. Since the e2fsck_pass1() threads will malloc memory that is used later, the current model as I understand it is: parent process -> pthread_create sub-thread -> sub-thread mallocs memory -> parent process tries to access the sub-thread's memory? I don't think this will work. We might need a good memory-sharing framework to work this out. |
| Comment by Li Xi [ 22/Nov/19 ] |
|
> The problem is there isn't any memory sharing module to make sure how this could be worked?
Shilong, there will certainly be some need to sync between threads in some circumstances. For example, if a bitmap is updated by one thread, that bitmap would need to be synced to all of the other threads too. That part is not finished by the patches yet. In order to sync between threads, we might need to change the logic of e2fsck_pass1() a little bit to add a sync step before trying to fix a problem. There are other ways too; I am not sure which one is better though.
> Parent process->pthread_create sub thread->sub thread malloc memory->Parent process try to access sub thread memory
No, all the memory allocated by the sub-threads will be merged into the global thread, and then freed. |
| Comment by Gerrit Updater [ 26/Nov/19 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/36862 |
| Comment by Gerrit Updater [ 26/Nov/19 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/36863 |
| Comment by Andreas Dilger [ 12/Dec/19 ] |
|
I was reading the pthread(7) man page, and it states:
So we should be able to have, e.g., shared block and inode bitmaps as long as they are allocated before the threads are forked (so they all know where the bitmaps are allocated), and we use locking to avoid multiple threads updating the same structures at the same time (words or rbtree). We can use a scalable pthread mutex implementation for updating the block bitmaps (e.g. block group number hashed across 4*num_threads locks, similar to struct blockgroup_lock *s_blockgroup_lock in the kernel). The inode bitmap should be relatively uncontended, since we will be splitting whole block groups between threads, but it should still use a scalable pthread_mutex_lock() for access like with the block bitmap. I think it makes sense to lock whole groups rather than individual words, since it is likely extents will span hundreds of blocks, and threads will be processing thousands of inodes in the same bitmap. I think there are enough abstraction hooks in libext2fs for the bitmap handling that we can register our own callbacks from the code to avoid embedding pthreads into the core library (which would otherwise cause problems for non-pthread users). It may be a bit tricky to consolidate the rbtree bitmap with parallel access, but we could potentially lock the nodes of the rbtree, similar to the htree lock, instead of the groups? |
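A minimal sketch of the hashed block-group locking idea, modelled loosely on the kernel's struct blockgroup_lock (assumed names; BG_LOCK_COUNT and mark_block_used_locked() are illustrative, and the actual bitmap update call is elided):

```c
#include <pthread.h>

#define BG_LOCK_COUNT 128	/* e.g. 4 * num_threads, rounded up */

static pthread_mutex_t bg_locks[BG_LOCK_COUNT];

static void bg_locks_init(void)
{
	for (int i = 0; i < BG_LOCK_COUNT; i++)
		pthread_mutex_init(&bg_locks[i], NULL);
}

/* hash the block group number onto a small array of mutexes */
static pthread_mutex_t *bg_lock(unsigned long group)
{
	return &bg_locks[group % BG_LOCK_COUNT];
}

/* example: serialize updates to the shared block bitmap per group */
static void mark_block_used_locked(unsigned long group, unsigned long block)
{
	(void)block;
	pthread_mutex_lock(bg_lock(group));
	/* ... update the shared block bitmap for 'block' here ... */
	pthread_mutex_unlock(bg_lock(group));
}
```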
| Comment by Andreas Dilger [ 12/Dec/19 ] |
|
Note, in addition to the bitmaps, it probably makes sense to push the problem-handling code to the main thread. This could be done with a producer-consumer model. While single-threading the error handling is not as fast as doing it in parallel, I think it may be too complex to decide which problems can be fixed by a thread (and in parallel). It may be that all pass1 errors can be handled immediately (i.e. they only affect the current inode and mark its blocks set in the bitmap), but this would need to be verified. |
| Comment by Gerrit Updater [ 16/Dec/19 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37022 |
| Comment by Gerrit Updater [ 16/Dec/19 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37024 |
| Comment by Gerrit Updater [ 16/Dec/19 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37023 |
| Comment by Gerrit Updater [ 16/Dec/19 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37026 |
| Comment by Gerrit Updater [ 16/Dec/19 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37025 |
| Comment by Andreas Dilger [ 18/Dec/19 ] |
|
Since (AFAIK) the block and inode bitmap code is using an rbtree, I think there are two options for how to handle these bitmaps between threads:
It would probably be worthwhile to run a quick benchmark that calls pthread_mutex_lock()/pthread_mutex_unlock() in a loop on an uncontended lock to see how many locks/sec it can do, then run the same with two threads on the same lock to see what aggregate locks/sec they can get, and with two threads locking different locks. This will help make the decision whether the locking is expensive and should be avoided at all costs, or whether lock hashing would be enough to avoid contention in most uses. |
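A quick single-threaded version of that benchmark might look like the following (uncontended case only; the contended and different-lock cases would add a second thread around the same loop; iteration count is arbitrary):

```c
#include <pthread.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
	pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	const long iterations = 100 * 1000 * 1000L;
	struct timespec start, end;
	double secs;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (long i = 0; i < iterations; i++) {
		pthread_mutex_lock(&lock);
		pthread_mutex_unlock(&lock);
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	secs = (end.tv_sec - start.tv_sec) +
	       (end.tv_nsec - start.tv_nsec) / 1e9;
	printf("%.0f uncontended lock/unlock pairs per second\n",
	       iterations / secs);
	return 0;
}
```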
| Comment by Gerrit Updater [ 21/Feb/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37657 |
| Comment by Gerrit Updater [ 21/Feb/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37658 |
| Comment by Gerrit Updater [ 03/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37782 |
| Comment by Gerrit Updater [ 03/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37784 |
| Comment by Gerrit Updater [ 03/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37783 |
| Comment by Gerrit Updater [ 03/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37785 |
| Comment by Wang Shilong (Inactive) [ 03/Mar/20 ] |
|
Currently, I've tested the whole patch series with 8 threads, and all e2fsprogs tests passed, but there are still some problems that I want to discuss here.

For pass1, parallel repair is possible, since most of its repairs are just inode rewrites (flags, size, etc.), but there is still some state that needs to be global: 1) found_block, dup_blocks; 2) block allocation/free; 3) the superblock.

So potentially there are two ways to address issues like this:
1) …
Good benefits of this: …
Bad side: …
2) Allow parallel fixing in pass1 (what the current series does).
Good benefits of this: …
Bad sides: …

Any ideas or input on which is the better way to go? |
| Comment by Li Xi [ 03/Mar/20 ] |
|
Compared to parallel checking, parallel fixing isn't so important, because 1) most file systems are not broken, 2) most parts of a broken file system are not broken, and 3) it is more acceptable to fix a broken file system more slowly but with less risk than more quickly with more risk. So, if parallel fixing is not super important, I think the top challenge here is how to write trustworthy fix code. If putting all fixing into a single thread were simpler and clearer, then we should do that. But I agree with you that this is not going to be the case. Each thread has its own checking and fixing process, and fixing can only happen naturally when checking finds a problem. I am not sure it is easy to put all of that code into a single thread. And fixing the problems in parallel from multiple threads looks dangerous too. The reason is that, essentially, multiple threads write to the same shared device, and if locks do not protect the critical regions correctly, data will be corrupted.

So, instead of either of those two ways, I think maybe the following way is better. Each thread does the check and fix by itself. But whenever a thread needs to change/write anything (either in memory or on disk) that could be shared with other threads, it should do that exclusively, in a special fixing context. Before the change/fix, all threads might need to sync with the fixing thread on the shared information (block bitmap etc.). And after the change/fix, all threads might need to sync with the fixing thread again, so everyone agrees on the change. During the change/fix, no other thread does anything, including I/O, so that the fixing thread can do I/O exclusively or change anything it wants.

I think we can start with a strict policy first, i.e. anything we are not sure whether it is shared with others, we protect with the fixing context. When we know more about the logic details and the performance bottlenecks, we can use other ways to remove possible bottlenecks. In the end, the parallel fixing process will be trustworthy, stable, and at the same time quick. Compared to fixing the problems in a dedicated thread, we don't need to change the current fixing code a lot; we just need to add code to enter and exit the context. Compared to adding a lot of mutex/read/write locks, this doesn't require a deep understanding of all the details, and we are pretty sure the fixing and modification won't cause any data corruption due to races or overlapping. What do you think? |
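One possible shape for that "fixing context", sketched with a pthread rwlock (my interpretation, not the implementation in the patches): checker threads hold the lock shared while scanning and doing I/O, and a thread that needs to modify shared state takes it exclusively, so nothing else runs during the fix.

```c
#include <pthread.h>

static pthread_rwlock_t fix_lock = PTHREAD_RWLOCK_INITIALIZER;

/* called around normal checking/I/O work */
static void check_enter(void) { pthread_rwlock_rdlock(&fix_lock); }
static void check_exit(void)  { pthread_rwlock_unlock(&fix_lock); }

/* called by a thread that must modify shared state (bitmaps,
 * superblock, on-disk metadata); a checker that finds a problem
 * must call check_exit() first to avoid self-deadlock */
static void fix_enter(void)
{
	pthread_rwlock_wrlock(&fix_lock);	/* waits for all checkers */
}

static void fix_exit(void)
{
	pthread_rwlock_unlock(&fix_lock);	/* checkers resume */
}
```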
| Comment by Wang Shilong (Inactive) [ 04/Mar/20 ] |
|
Requiring all threads to agree on changes before going on might be a problem; this is effectively an exclusive operation. Consider the superblock case: with 100 threads, every time we need to dirty the superblock we would need to copy a 99 * 4096 buffer head. Maybe just one mutex, held whenever a fix might modify possibly-shared content, is simpler; in this case the thread contexts … |
| Comment by Andreas Dilger [ 04/Mar/20 ] |
|
I agree with Li Xi that the speed of fixing problems is much less critical than the scanning of correct parts of the filesystem, and the correct fixing of errors. The only major slowdown in pass1 is the duplicate-blocks handling case (pass1b/c/d), and I think we should generally try to avoid that (adding/improving the "inode badness" patch to better catch cases where one inode is a duplicate of another). The vault20-pFSCK-domingo.pdf … |
| Comment by Gerrit Updater [ 06/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37825 |
| Comment by Gerrit Updater [ 06/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37826 |
| Comment by Gerrit Updater [ 10/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37854 |
| Comment by Gerrit Updater [ 10/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37855 |
| Comment by Gerrit Updater [ 10/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37856 |
| Comment by Gerrit Updater [ 10/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37857 |
| Comment by Gerrit Updater [ 10/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37858 |
| Comment by Gerrit Updater [ 10/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37859 |
| Comment by Gerrit Updater [ 11/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37885 |
| Comment by Gerrit Updater [ 11/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37887 |
| Comment by Gerrit Updater [ 11/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37886 |
| Comment by Gerrit Updater [ 11/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37884 |
| Comment by Gerrit Updater [ 12/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37905 |
| Comment by Gerrit Updater [ 20/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37995 |
| Comment by Gerrit Updater [ 20/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37999 |
| Comment by Gerrit Updater [ 20/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37997 |
| Comment by Gerrit Updater [ 20/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37996 |
| Comment by Gerrit Updater [ 20/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/37998 |
| Comment by Gerrit Updater [ 20/Mar/20 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/38000 |
| Comment by Gerrit Updater [ 09/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/39840 |
| Comment by Gerrit Updater [ 09/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/39841 |
| Comment by Gerrit Updater [ 09/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/39843 |
| Comment by Gerrit Updater [ 09/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/39845 |
| Comment by Gerrit Updater [ 09/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/39844 |
| Comment by Gerrit Updater [ 09/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/39842 |
| Comment by Gerrit Updater [ 09/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/39846 |
| Comment by Gerrit Updater [ 09/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/39847 |
| Comment by Gerrit Updater [ 09/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/39848 |
| Comment by Gerrit Updater [ 09/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/39850 |
| Comment by Gerrit Updater [ 09/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/39849 |
| Comment by Gerrit Updater [ 09/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/39851 |
| Comment by Gerrit Updater [ 09/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/39852 |
| Comment by Gerrit Updater [ 10/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/39874 |
| Comment by Gerrit Updater [ 15/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/39914 |
| Comment by Gerrit Updater [ 18/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/35684/ |
| Comment by Gerrit Updater [ 18/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/39840/ |
| Comment by Gerrit Updater [ 18/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/35689/ |
| Comment by Gerrit Updater [ 18/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/35690/ |
| Comment by Gerrit Updater [ 18/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/35696/ |
| Comment by Gerrit Updater [ 18/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/35698/ |
| Comment by Gerrit Updater [ 18/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/35701/ |
| Comment by Gerrit Updater [ 18/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/35710/ |
| Comment by Gerrit Updater [ 23/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/40014 |
| Comment by Gerrit Updater [ 23/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/40015 |
| Comment by Gerrit Updater [ 23/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/40017 |
| Comment by Gerrit Updater [ 24/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/40021 |
| Comment by Gerrit Updater [ 24/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/40024 |
| Comment by Gerrit Updater [ 25/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/40014/ |
| Comment by Gerrit Updater [ 25/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/40015/ |
| Comment by Gerrit Updater [ 25/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/35712/ |
| Comment by Gerrit Updater [ 25/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/35726/ |
| Comment by Gerrit Updater [ 25/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/35763/ |
| Comment by Gerrit Updater [ 25/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/35780/ |
| Comment by Gerrit Updater [ 25/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/35793/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/35846/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/35855/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/35922/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/35961/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/36001/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/36004/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/36018/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/36020/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/36026/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/39841/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/39843/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/36044/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/36054/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/36097/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/36098/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/37024/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/37782/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/37783/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/37825/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/37826/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/37856/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/37885/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/37905/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/37884/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/37859/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/39852/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/37887/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/37995/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/37997/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/37998/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/37999/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/38000/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/37996/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/39844/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/39846/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/40016/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/39849/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/39850/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/39914/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/39851/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/39874/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/40024/ |
| Comment by Gerrit Updater [ 26/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/40060 |
| Comment by Shuichi Ihara [ 26/Sep/20 ] |
|
Attached are the test results of pfsck, compared with today's fsck. |
| Comment by Gerrit Updater [ 27/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/40063 |
| Comment by Gerrit Updater [ 27/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/40060/ |
| Comment by Gerrit Updater [ 27/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/40021/ |
| Comment by Gerrit Updater [ 27/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/40017/ |
| Comment by Gerrit Updater [ 27/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/40065 |
| Comment by Gerrit Updater [ 27/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/40066 |
| Comment by Gerrit Updater [ 27/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/40068 |
| Comment by Gerrit Updater [ 28/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) merged in patch https://review.whamcloud.com/40068/ |
| Comment by Gerrit Updater [ 28/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/40069 |
| Comment by Gerrit Updater [ 28/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) merged in patch https://review.whamcloud.com/40065/ |
| Comment by Gerrit Updater [ 28/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/40070 |
| Comment by Gerrit Updater [ 28/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/40071 |
| Comment by Gerrit Updater [ 28/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) merged in patch https://review.whamcloud.com/40066/ |
| Comment by Gerrit Updater [ 28/Sep/20 ] |
|
Wang Shilong (wshilong@whamcloud.com) merged in patch https://review.whamcloud.com/40069/ |
| Comment by Gerrit Updater [ 28/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/40075 |
| Comment by Gerrit Updater [ 28/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/40071/ |
| Comment by Gerrit Updater [ 29/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/40075/ |
| Comment by Gerrit Updater [ 29/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/40081 |
| Comment by Gerrit Updater [ 29/Sep/20 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/40081/ |
| Comment by Wang Shilong (Inactive) [ 30/Sep/20 ] |
|
I think it is fine to close this ticket; we can open a new ticket for further work. |
| Comment by Andreas Dilger [ 30/Sep/20 ] |
|
The e2fsprogs-1.45.6.wc2 build is available at https://downloads.whamcloud.com/public/e2fsprogs/1.45.6.wc2/ |
| Comment by Gerrit Updater [ 12/Oct/20 ] |
|
Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/40070/ |
| Comment by Gerrit Updater [ 26/Mar/21 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/43129/ |
| Comment by Gerrit Updater [ 16/Jun/21 ] |
|
Li Dongyang (dongyangli@ddn.com) uploaded a new patch: https://review.whamcloud.com/44010 |
| Comment by Gerrit Updater [ 16/Jun/21 ] |
|
Li Dongyang (dongyangli@ddn.com) uploaded a new patch: https://review.whamcloud.com/44011 |