Details
- Question/Request
- Resolution: Fixed
- Minor
- 3
- 9223372036854775807
Description
Running a test that exercises flock (N clients requesting locks on different, non-overlapping regions) showed high load on the MDS. A similar issue previously existed with regular extent locks (which control data coherency for regular reads and writes); adding an interval tree reduced that load. The same structure can likely be used for flock.
With a trivial benchmark (single client, local setup, non-overlapping locks):
FLOCKS_TEST 5: SET write  1000 flock(s) took   0.06s 16048.30/sec
FLOCKS_TEST 5: SET write  2000 flock(s) took   0.14s 14526.76/sec
FLOCKS_TEST 5: SET write  5000 flock(s) took   0.60s  8264.82/sec
FLOCKS_TEST 5: SET write 10000 flock(s) took   2.94s  3401.57/sec
FLOCKS_TEST 5: SET write 25000 flock(s) took  36.29s   688.98/sec
FLOCKS_TEST 5: SET write 50000 flock(s) took 281.29s   177.75/sec
FLOCKS_TEST 5: SET write 75000 flock(s) took 661.73s   113.34/sec
Test case in patch https://review.whamcloud.com/53094 "LU-17276 tests: flock scalability test".