[LU-66] obdfilter-survey performance issue on NUMA system Created: 09/Feb/11 Updated: 17/Dec/13 Resolved: 17/Dec/13 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.1.0 |
| Fix Version/s: | Lustre 2.1.0 |
| Type: | Improvement | Priority: | Minor |
| Reporter: | Liang Zhen (Inactive) | Assignee: | Niu Yawei (Inactive) |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None | ||
| Attachments: |
|
| Bugzilla ID: | 22980 |
| Rank (Obsolete): | 8541 |
| Description |
|
This is just a copy of bug 22980, but I think it's better to track & discuss it here:

Hello,

Testing our new IO servers we have an issue with obdfilter-survey. Our OSSs are based on 4 […]
When we perform raw tests with sgpdd-survey, over 24 LUNs we get ~4400 MB/s on write and more than […]
Then if we start a Lustre filesystem and we test these 24 OSTs with obdfilter-survey (size=24192 […]
If we perform IOzone tests from five clients (2 threads per client, connected to the server with […]
Then we disconnected two sockets using the command "echo 0 > /sys/devices/system/cpu/cpu5/online" on […]
We also made these tests with Lustre 1.6, with other storage bays and with similar platforms (4 […]
It's as if obdfilter-survey hits some kind of saturation when there are many sockets. What do you […] |
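For reference, taking a whole socket offline through sysfs as described above is just a loop over that socket's core IDs. A rough sketch (the core IDs below are placeholders; the real list for a given socket comes from numactl -H or lscpu):

    # offline every core of one socket (example core IDs, adjust to the socket's actual core list)
    for cpu in 5 13 21 29; do
        echo 0 > /sys/devices/system/cpu/cpu$cpu/online
    done
    # bring them back online afterwards with: echo 1 > /sys/devices/system/cpu/cpu$cpu/online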
| Comments |
| Comment by Liang Zhen (Inactive) [ 09/Feb/11 ] |
|
Initial data from Sebastien
---------------------------

Hi,

I gave a try to attachment 32668 [details] on Lustre 2.0. I ran the tests on a MESCA server (OSS), running a […]

Without attachment 32668 [details], all sockets activated:

[root@berlin7 ~]# numactl -H
[…]
rszlo=1024 rszhi=1024 nobjlo=1 nobjhi=1 thrlo=1 thrhi=128 case=disk tests_str="write read" rslt_loc=/root/obdsurvey obdfilter-survey
Fri Jan 21 13:37:33 CET 2011 Obdfilter-survey for case=disk from berlin7
ost 15 sz 62914560K rsz 1024K obj 15 thr   15 write  611.92 [ 19.00,  54.00] read 1058.04 [ 48.99, 101.00]
ost 15 sz 62914560K rsz 1024K obj 15 thr   30 write 1236.58 [ 50.99, 106.98] read 1818.19 [ 31.98, 367.92]
ost 15 sz 62914560K rsz 1024K obj 15 thr   60 write 1447.42 [ 10.00, 231.96] read 1928.87 [ 19.00, 432.96]
ost 15 sz 62914560K rsz 1024K obj 15 thr  120 write 1632.67 [  8.00, 341.30] read 1855.03 [  0.00, 430.97]
ost 15 sz 62914560K rsz 1024K obj 15 thr  240 write 1572.07 [  0.00, 380.62] read 1846.84 [ 21.00, 385.99]
ost 15 sz 62914560K rsz 1024K obj 15 thr  480 write 1593.21 [ 11.00, 372.99] read 1811.64 [ 19.00, 400.68]
ost 15 sz 62914560K rsz 1024K obj 15 thr  960 write 1508.13 [  5.00, 380.98] read 1705.11 [  3.00, 318.97]
ost 15 sz 62914560K rsz 1024K obj 15 thr 1920 write 1362.63 [  1.00, 365.76] read 1595.17 SHORT

Without attachment 32668 [details], only one socket activated:

[root@berlin7 ~]# numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 4 8 12 16 20 24 28
node 0 size: 16364 MB
node 0 free: 13578 MB
node 1 cpus:
node 1 size: 16384 MB
node 1 free: 15862 MB
node 2 cpus:
node 2 size: 16384 MB
node 2 free: 15653 MB
node 3 cpus:
node 3 size: 16382 MB
node 3 free: 16041 MB
node distances:
node   0   1   2   3
  0:  10  21  21  21
  1:  21  10  21  21
  2:  21  21  10  21
  3:  21  21  21  10
[root@berlin7 ~]#
[root@berlin7 ~]# targets="`lctl dl | grep obdfilter | awk '{print $4}' | tr '\n' ' '`" size=4096 […]

With attachment 32668 [details], all sockets activated:

[root@berlin7 ~]# numactl -H
[…]
targets="`lctl dl | grep obdfilter | awk '{print $4}' | tr '\n' ' '`" size=4096 […]

So it seems this lock contention is not the most limiting factor here.

HTH, |
| Comment by Andreas Dilger [ 09/Feb/11 ] |
|
This is already being discussed in bug |
| Comment by Liang Zhen (Inactive) [ 09/Feb/11 ] |
|
Andreas, this ticket was created to track the efforts for Bull; also, this one is specifically about NUMA performance and […] |
| Comment by Niu Yawei (Inactive) [ 09/Feb/11 ] |
|
Yes, and |
| Comment by Niu Yawei (Inactive) [ 09/Feb/11 ] |
|
Copy from b22980: Thank you for your review, Andreas. As we discussed in the Jira system, I think this issue might be […] So for the next step, I want Sebastien to run some tests:
At the same time, we want to collect some oprofile data during the above tests. BTW: I'm wondering how IOzone gets the 2500 MB/s (mentioned in comment #1); how many objects in the […] |
| Comment by Sebastien Buisson (Inactive) [ 10/Feb/11 ] |
|
Hi,

Concerning IOzone, comment #1 in bug 22980 dates back to June 2010, so I do not have the full details about this.

Write: iozone -s 4g -r 1024 -+n -c -e -i 0 -t 2 -F /lustre/dir<X>/file1 /lustre/dir<X>/file2

I do not remember the stripe count, probably 2 or 4.

I will be able to run the obdfilter-survey tests you are asking for, but I would really appreciate it if you could specify the exact and precise oprofile command lines for the stats you want.

TIA, |
| Comment by Niu Yawei (Inactive) [ 10/Feb/11 ] |
|
Hi, Sebastien

I see, I was just wondering whether the object count of the IOzone test is the same as in the obdfilter-survey test, and how the aggregated throughput is calculated. Anyway, let's focus on the obdfilter-survey test at this moment. For the oprofile command, I think just general usage is fine:
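For illustration, a typical opcontrol session around one survey run would be roughly the following (the vmlinux path is an assumption; use --no-vmlinux if the debug kernel image is not installed):

    opcontrol --init
    opcontrol --vmlinux=/usr/lib/debug/lib/modules/$(uname -r)/vmlinux
    opcontrol --start
    # ... run obdfilter-survey here ...
    opcontrol --stop
    opcontrol --dump
    opreport -l > oprofile-symbols.txt    # per-symbol sample ranking
    opcontrol --shutdown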
I'm not an oprofile expert; if you run into any trouble with oprofile, you can ask Liang for help. I believe he used oprofile a lot when he was working on SMP performance. |
| Comment by Liang Zhen (Inactive) [ 11/Feb/11 ] |
|
Sebastien, could you also collect numastat at the beginning and end of each test (or do you know a better way to collect stats on NUMA behavior)? That way we can better understand whether it's just about cross-node data traffic or not. Thanks |
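For example, simply snapshotting numastat around each run and diffing the two files is enough to see how the numa_miss / numa_foreign / other_node counters grow (file names here are arbitrary):

    numastat > numastat.before
    # ... run the obdfilter-survey test ...
    numastat > numastat.after
    diff numastat.before numastat.after    # the deltas show cross-node allocations during the run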
| Comment by Sebastien Buisson (Inactive) [ 14/Feb/11 ] |
|
obdfilter-survey results, without oprofile and numastat. |
| Comment by Liang Zhen (Inactive) [ 14/Feb/11 ] |
|
Sebastien,
Thanks |
| Comment by Sebastien Buisson (Inactive) [ 14/Feb/11 ] |
|
Hi,

Yes, you are right, we have 2 IOHs on the OSS. Half of the OSTs are directly connected to the first IOH, and the other half is directly connected to the second IOH.

Sebastien. |
| Comment by Liang Zhen (Inactive) [ 15/Feb/11 ] |
|
Sebastien, so I think each NUMIOA node contains 2 sockets (or NUMA nodes), right? Could you give us detailed information about how the CPU nodes are distributed over the NUMIOA nodes, i.e.: […]

We are working in two directions right now:
Thanks |
| Comment by Sebastien Buisson (Inactive) [ 15/Feb/11 ] |
|
Yes, our MESCA machine is made of 2 NUMIOA nodes: […]

Cheers, |
| Comment by Niu Yawei (Inactive) [ 15/Feb/11 ] |
|
patch to set cpu affinity for lctl test_brw processes and an example config file. |
| Comment by Niu Yawei (Inactive) [ 15/Feb/11 ] |
|
Hi, Sebastien

Thanks for your test results. It looks like the patch improves write performance only a little when there are limited cores (1 or 2 sockets, 8 or 16 cores); for 24 or 32 cores, I don't see improvements. What surprised me is that the read performance shows a big improvement when the cores are distributed across different sockets; I don't know the reason so far.

To verify whether the issue is NUMIOA dependent (degradation caused by accessing remote memory/IOH), I made a patch to set cpu affinity for the lctl brw_test threads; we want to use this patch to collect more data for further analysis. (The patch and the example config file are attached; the config file pathname should be "/tmp/affinity_map".)

Not sure if we can access your machine now (I think Liang has provided a static address 99.96.190.234); if it's already accessible for us, please send us a guide on how to run tests on your machine as well, then we can run the tests ourselves from now on. Thank you.

BTW: could you also provide the sgpdd_survey command mentioned in the first comment? |
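The patch itself is attached; purely to illustrate the binding idea (this is not what the patch does internally), the same effect can be approximated from userspace by pinning a command to the cores of one socket, e.g. using the node-0 core list from the numactl -H output earlier in this ticket:

    # pin to the cores of NUMA node 0 (core list taken from the numactl -H output above)
    taskset -c 0,4,8,12,16,20,24,28 <command>
    # or bind both CPU and memory to node 0
    numactl --cpunodebind=0 --membind=0 <command>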
| Comment by Sebastien Buisson (Inactive) [ 15/Feb/11 ] |
|
Full obdfilter-survey results. In the tarball please find:
As you can see, I was not able to run the tests in the '2 sockets' configuration while collecting data with oprofile. The node was crashing in the middle of obdfilter-survey, and I do not know who to blame here... |
| Comment by Niu Yawei (Inactive) [ 15/Feb/11 ] |
|
Thanks a lot, Sebastien. What kernel version did you run the test on? |
| Comment by Sebastien Buisson (Inactive) [ 15/Feb/11 ] |
|
We use a custom kernel, based on RHEL6 GA (2.6.32-71.el6.x86_64). |
| Comment by Sebastien Buisson (Inactive) [ 15/Feb/11 ] |
|
The test system we are dedicating to you is not ready yet, so I will have to run the tests myself. I will try lctl_setaffinity.patch, but could you please tell me what kind of tests you need? Only in 'thread' mode, right? Still with oprofile and numastat? How many cores/sockets activated? With or without the 'unlocked_ioctl' patch? TIA, |
| Comment by Niu Yawei (Inactive) [ 15/Feb/11 ] |
|
Hi, Sebastien

I'd like you to run two tests: one in 'thread' mode and another in 'objid' mode:
In 'objid' mode, you have to provide an objid-to-cpu-core map in the config file, so you should know the object ids and try to map them to the appropriate cpus (so that a cpu always accesses its local IOH) before the test. Thanks. |
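As a hint for finding the object ids up front, the survey's .detail file lists the ids actually used, and something like the following should show the last allocated object id per OST (assuming the last_id parameter is exposed under this name on this Lustre version):

    # last allocated object id on each OST (parameter name may vary between Lustre versions)
    lctl get_param obdfilter.*.last_id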
| Comment by Niu Yawei (Inactive) [ 15/Feb/11 ] |
|
Change vmalloc() to kmalloc() in the ioctl path. |
| Comment by Niu Yawei (Inactive) [ 15/Feb/11 ] |
|
Hi, Sebastien

The oprofile data you provided is very helpful. In the unpatched tests we can see that thread_return() ranks extremely high; I think it's caused by contention on the BKL. In the patched (with unlocked_ioctl) tests we can see that alloc_vmap_area() and find_vmap_area() rank very high; I think that's caused by contention on vmap_area_lock.

I made a patch (remove_vmalloc.patch) which changes the vmalloc() to kmalloc() in the ioctl path, which should eliminate the contention on vmap_area_lock. Before you run the tests I suggested in my last comment, I'd really like you to run with this patch (together with the unlocked_ioctl patch) first to see what happens. (Of course, please enable oprofile while running the tests.) Thank you. |
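To compare runs quickly, the suspect symbols can be pulled straight out of the profile after each test, along these lines:

    opcontrol --dump
    # rank of the BKL / vmap suspects in the per-symbol report
    opreport -l | egrep 'thread_return|alloc_vmap_area|find_vmap_area'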
| Comment by Niu Yawei (Inactive) [ 15/Feb/11 ] |
|
Change vmalloc to kmalloc in the ioctl path. (The previous one isn't correct; updated with this one.) |
| Comment by Sebastien Buisson (Inactive) [ 16/Feb/11 ] |
|
Full obdfilter-survey results with unlocked_ioctl and remove_vmalloc patches. In the tarball please find:
|
| Comment by Niu Yawei (Inactive) [ 17/Feb/11 ] |
|
Thanks for your testing, Sebastien. The results show that both read and write performance improved hugely, and the oprofile data looks normal this time. So I think the degradation was caused by contention on the BKL and vmap_area_lock. What I don't understand is why the read throughput is extremely high in some cases (more than 10000 MB/s); what's the raw bandwidth of each OST? |
| Comment by Sebastien Buisson (Inactive) [ 17/Feb/11 ] |
|
You're welcome. The storage array we are attached to should not give us more than 5 GB/s (read and write). So I think the figures given by obdfilter-survey are inaccurate because the test does not run long enough. Maybe I should increase the size.

Do you still need me to run the affinity tests?

Cheers, |
| Comment by Niu Yawei (Inactive) [ 17/Feb/11 ] |
|
I don't think we need to run the affinity tests, thank you. |
| Comment by Liang Zhen (Inactive) [ 17/Feb/11 ] |
|
I agree that we don't need to run the affinity tests, because numastat shows that foreign memory access is not a big issue (< 5%). However, I do think that we should increase the size (probably 5x) so we can get a better picture (for example, a run like the one sketched below).
Thanks |
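A run with the size bumped by roughly 5x, reusing the parameters from the earlier runs (the size value here is only an example), would look like:

    targets="`lctl dl | grep obdfilter | awk '{print $4}' | tr '\n' ' '`" \
    size=20480 rszlo=1024 rszhi=1024 nobjlo=1 nobjhi=1 thrlo=1 thrhi=128 \
        case=disk tests_str="write read" rslt_loc=/root/obdsurvey obdfilter-survey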
| Comment by Sebastien Buisson (Inactive) [ 21/Feb/11 ] |
|
New results with unlocked_ioctl and remove_vmalloc patches. In the tarball please find:
I am sorry, I was not able to get results for 3, 2 and 1 sockets. I launched the tests several times, and each time the server crashed. It seems the system does not appreciate running oprofile with not all sockets enabled...

sgpdd-survey results clearly show a limit around 3 GB/s. This limitation is due to the available bandwidth to the storage, because we use only 4 FC links. |
| Comment by Niu Yawei (Inactive) [ 21/Feb/11 ] |
|
Hi, Sebastien

The results look really good; I think it's basically what we expected, thank you. One remaining unknown is why the write performance dropped a lot at 960 threads.

To measure how cpu affinity affects the test results, could you help us do some more tests? I think it'll be useful for our further performance tuning work. What I want to test is: […]
In "objid" mode, each lctl thread will be mapped to a specified cpu, so you should know all the objids before running the tests and set the objid-cpu mapping in /tmp/affinity_map (please refer to the affinity_map example); of course, the objid should be on the local IOH of its mapped cpu. |
| Comment by Sebastien Buisson (Inactive) [ 23/Feb/11 ] |
|
If I understand correctly, in order to know the objids in advance I should have a look at the obdfilter_survey_xxxx.detail file and consider that the next run will do '+1' on the ids.

=======================>
ost 15 sz 314572800K rsz 1024K obj 15 thr 15
[…]

So, as you can see, all objects have the same ids on the OSTs... In that case, I am afraid the 'objid to core' mapping is useless.

Sebastien. |
| Comment by Niu Yawei (Inactive) [ 23/Feb/11 ] |
|
Hi, Sebastien

Ah, right. Binding object id to cpu doesn't make sense for this test, so I've changed the patch to bind devno to cpu (lctl_setaffinity_v2.patch), and I also updated the example affinity_map. Please use the new patch to run the test that I mentioned in my previous comment. Thanks for your effort! |
| Comment by Sebastien Buisson (Inactive) [ 24/Feb/11 ] |
|
Results with unlocked_ioctl and remove_vmalloc patches, plus affinity patch for obdfilter-survey. In the tarball please find:
I am sorry to post these results only today, but Jira was not accessible yesterday afternoon (French time). I took the opportunity to run some more tests, with deliberately bad affinities. In the results, several points are surprising:
|
| Comment by Niu Yawei (Inactive) [ 24/Feb/11 ] |
|
Hi, Sebastien

I agree with your conclusion that NUMIOA affinity doesn't affect the test results much; to double-check it, I think we should run obdfilter-survey (without the affinity patch) once more on the new system to see if there is any difference. As for the issue of sgpdd-survey writes being worse than obdfilter-survey's, one possible reason is that sgpdd-survey used fewer threads; however, the results show that the 128-thread numbers are very close to the 64-thread ones, so maybe there is some bottleneck in the sgpdd-survey test tool. I will look into the code to get more details. At the same time, I think two quick tests could be helpful for this investigation: […]
I've got access to the test system today, but it'll take me some time to learn how to run these tests on it; could you help me run the above three tests this time? Thanks a lot. |
| Comment by Niu Yawei (Inactive) [ 02/Mar/11 ] |
|
I ran a bunch of sgpdd-survey tests on berlin6, and the oprofile results show more than 50% of the samples in copy_user_generic_string(). Since sgpdd-survey uses the read/write syscalls to perform I/O against the block device, there should be lots of copy_from/to_user() traffic for transferring data between userspace and the kernel, and that consumes lots of CPU time. However, I don't think copy_from_user() is the major bottleneck of sgpdd-survey write, because when I ran sgpdd-survey read only, the copy_user_generic_string() samples were still very high (more than 60%), yet sgpdd-survey read performance is comparable to obdfilter-survey's.

sgp_dd calls its own sg_write()/sg_read(), and sg_read()/sg_write() simply generate an io request for the underlying device and then unplug the device (which explains why iostat doesn't work for sgp_dd tests: it bypasses the kernel io statistics code). I think the bottleneck should be in sg_write() (maybe it doesn't work well under multi-core/multi-thread conditions), though the exact root cause is unknown yet. |
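For reference, sgpdd-survey drives sgp_dd directly against the sg devices, so a single writer can also be profiled in isolation with a hand-run command along these lines (device name and sizes are placeholders; options as in sg3_utils' sgp_dd):

    # 1 MiB transfers (bs=512 bytes x bpt=2048), 8 threads, direct I/O, ~1 GiB total
    sgp_dd if=/dev/zero of=/dev/sg0 bs=512 bpt=2048 count=2097152 thr=8 dio=1 time=1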
| Comment by Liang Zhen (Inactive) [ 08/Mar/11 ] |
|
performance charts for obdfilter-survey & sgpdd-survey |
| Comment by Liang Zhen (Inactive) [ 08/Mar/11 ] |
|
Sorry, some data in the previous file was wrong. |
| Comment by Liang Zhen (Inactive) [ 20/Mar/11 ] |
|
performance graphs (adding data for non-patched version) |
| Comment by Liang Zhen (Inactive) [ 21/Mar/11 ] |
|
latest testing results graphs |
| Comment by Niu Yawei (Inactive) [ 17/Dec/13 ] |
|
I think this can be closed now. |