[LU-66] obdfilter-survey performance issue on NUMA system Created: 09/Feb/11  Updated: 17/Dec/13  Resolved: 17/Dec/13

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.1.0
Fix Version/s: Lustre 2.1.0

Type: Improvement Priority: Minor
Reporter: Liang Zhen (Inactive) Assignee: Niu Yawei (Inactive)
Resolution: Fixed Votes: 0
Labels: None

Attachments: Text File affinity_map     File affinity_results.tgz     PDF File bull_obdfilter_survey_chart_110309.pdf     PDF File bull_obdfilter_survey_chart_110319.pdf     File full_results.tgz     File full_results_kmalloc.tgz     Text File lctl_setaffinity_v2.patch     File new_results_kmalloc.tgz     Text File obdfilter-survey_results.txt     Text File remove_vmalloc.patch    
Bugzilla ID: 22,980
Rank (Obsolete): 8541

 Description   

This is just a copy of bug 22980, but I think it is better to track and discuss it here:

Hello,

While testing our new I/O servers we have an issue with obdfilter-survey. Our OSSs are based on 4
Nehalem-EX processors connected to a Boxboro chipset. Every socket has 6 cores. On every OSS we
have several FC channels connected to our storage bay.

When we perform raw tests with sgpdd-survey over 24 LUNs, we get ~4400 MB/s on write and more than
5500 MB/s on read.

Then if we start a Lustre filesystem and test these 24 OSTs with obdfilter-survey (size=24192
rszlo=1024 rszhi=1024 nobjlo=1 nobjhi=2 thrlo=1 thrhi=16 case=disk tests_str="write read" sh
obdfilter-survey), we always hit a performance limit of about 1200 MB/s for both write and read.

If we perform IOzone tests from five clients (2 threads per client, connected to the server with
Infiniband) we get more than 2500 MB/s.

Then we disconnected two sockets, using the command "echo 0 > /sys/devices/system/cpu/cpu5/online" on
every CPU belonging to those two sockets, and we get the expected results from obdfilter-survey (4600 MB/s
on write and 5500 MB/s on read). If we only disconnect one socket, obdfilter-survey gives us a
maximum of 1600 MB/s. Using only one socket, results are slightly worse than with two sockets.
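For reference, a minimal sketch of how a whole socket can be taken offline through sysfs; the package id "1" below is only an example, and the real CPU-to-socket mapping has to be read from sysfs on the machine itself:

  # show which package (socket) each cpu belongs to
  grep . /sys/devices/system/cpu/cpu*/topology/physical_package_id
  # offline every cpu of package 1 (cpu0 itself cannot be offlined)
  for c in /sys/devices/system/cpu/cpu[1-9]*; do
      [ "`cat $c/topology/physical_package_id`" = "1" ] && echo 0 > $c/online
  done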

We also ran these tests with Lustre 1.6, with other storage bays and on similar platforms (4
sockets and 8 CPUs per socket), always with the same kind of problem. If we activate
hyper-threading on every socket, performance is even worse.

It looks as if obdfilter-survey hits some kind of saturation when there are many sockets. What do you
think? Thanks,



 Comments   
Comment by Liang Zhen (Inactive) [ 09/Feb/11 ]

initial data from Sebastien

---------------------------------------

Hi,

I gave attachment 32668 a try on Lustre 2.0. I ran the tests on a MESCA server (OSS), running a
RHEL6 kernel (2.6.32).
With this kernel, HAVE_UNLOCKED_IOCTL is defined. Unfortunately, I could see no improvement in
the performance reported by obdfilter-survey:

Without attachment 32668, all sockets activated:

[root@berlin7 ~]# numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 4 8 12 16 20 24 28
node 0 size: 16364 MB
node 0 free: 13521 MB
node 1 cpus: 1 5 9 13 17 21 25 29
node 1 size: 16384 MB
node 1 free: 15637 MB
node 2 cpus: 2 6 10 14 18 22 26 30
node 2 size: 16384 MB
node 2 free: 15584 MB
node 3 cpus: 3 7 11 15 19 23 27 31
node 3 size: 16382 MB
node 3 free: 15977 MB
node distances:
node 0 1 2 3
0: 10 21 21 21
1: 21 10 21 21
2: 21 21 10 21
3: 21 21 21 10
[root@berlin7 ~]#
[root@berlin7 ~]#
[root@berlin7 ~]#
[root@berlin7 ~]# targets="`lctl dl | grep obdfilter | awk '{print $4}' | tr '\n' ' '`" size=4096 \
    rszlo=1024 rszhi=1024 nobjlo=1 nobjhi=1 thrlo=1 thrhi=128 case=disk tests_str="write read" \
    rslt_loc=/root/obdsurvey obdfilter-survey
Fri Jan 21 13:37:33 CET 2011 Obdfilter-survey for case=disk from berlin7
ost 15 sz 62914560K rsz 1024K obj 15 thr 15 write 611.92 [ 19.00, 54.00] read 1058.04 [ 48.99, 101.00]
ost 15 sz 62914560K rsz 1024K obj 15 thr 30 write 1236.58 [ 50.99, 106.98] read 1818.19 [ 31.98, 367.92]
ost 15 sz 62914560K rsz 1024K obj 15 thr 60 write 1447.42 [ 10.00, 231.96] read 1928.87 [ 19.00, 432.96]
ost 15 sz 62914560K rsz 1024K obj 15 thr 120 write 1632.67 [ 8.00, 341.30] read 1855.03 [ 0.00, 430.97]
ost 15 sz 62914560K rsz 1024K obj 15 thr 240 write 1572.07 [ 0.00, 380.62] read 1846.84 [ 21.00, 385.99]
ost 15 sz 62914560K rsz 1024K obj 15 thr 480 write 1593.21 [ 11.00, 372.99] read 1811.64 [ 19.00, 400.68]
ost 15 sz 62914560K rsz 1024K obj 15 thr 960 write 1508.13 [ 5.00, 380.98] read 1705.11 [ 3.00, 318.97]
ost 15 sz 62914560K rsz 1024K obj 15 thr 1920 write 1362.63 [ 1.00, 365.76] read 1595.17 SHORT



Without attachment 32668, only one socket activated:

[root@berlin7 ~]# numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 4 8 12 16 20 24 28
node 0 size: 16364 MB
node 0 free: 13578 MB
node 1 cpus:
node 1 size: 16384 MB
node 1 free: 15862 MB
node 2 cpus:
node 2 size: 16384 MB
node 2 free: 15653 MB
node 3 cpus:
node 3 size: 16382 MB
node 3 free: 16041 MB
node distances:
node 0 1 2 3
0: 10 21 21 21
1: 21 10 21 21
2: 21 21 10 21
3: 21 21 21 10
[root@berlin7 ~]#
[root@berlin7 ~]# targets="`lctl dl | grep obdfilter | awk '{print $4}' | tr '\n' ' '`" size=4096 \
    rszlo=1024 rszhi=1024 nobjlo=1 nobjhi=1 thrlo=1 thrhi=128 case=disk tests_str="write read" \
    rslt_loc=/root/obdsurvey obdfilter-survey
Fri Jan 21 14:10:51 CET 2011 Obdfilter-survey for case=disk from berlin7
ost 15 sz 62914560K rsz 1024K obj 15 thr 15 write 618.98 [ 25.00, 54.99] read 2725.46 [ 121.00, 370.30]
ost 15 sz 62914560K rsz 1024K obj 15 thr 30 write 1328.17 [ 62.99, 118.98] read 3139.51 [ 106.98, 453.98]
ost 15 sz 62914560K rsz 1024K obj 15 thr 60 write 1895.78 [ 63.00, 240.98] read 3193.16 [ 78.93, 434.98]
ost 15 sz 62914560K rsz 1024K obj 15 thr 120 write 2579.81 [ 36.00, 374.94] read 2845.06 [ 76.00, 509.88]
ost 15 sz 62914560K rsz 1024K obj 15 thr 240 write 2177.08 [ 54.00, 386.80] read 2924.08 [ 44.00, 438.97]
ost 15 sz 62914560K rsz 1024K obj 15 thr 480 write 1939.15 [ 6.00, 360.85] read 2506.11 [ 17.98, 363.98]
ost 15 sz 62914560K rsz 1024K obj 15 thr 960 write 2106.54 [ 2.00, 332.44] read 2378.08 [ 65.00, 417.72]
ost 15 sz 62914560K rsz 1024K obj 15 thr 1920 write 1545.92 [ 1.00, 386.49] read 2059.07 SHORT

With attachment 32668, all sockets activated:

[root@berlin7 ~]# numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 4 8 12 16 20 24 28
node 0 size: 16364 MB
node 0 free: 13507 MB
node 1 cpus: 1 5 9 13 17 21 25 29
node 1 size: 16384 MB
node 1 free: 15534 MB
node 2 cpus: 2 6 10 14 18 22 26 30
node 2 size: 16384 MB
node 2 free: 15558 MB
node 3 cpus: 3 7 11 15 19 23 27 31
node 3 size: 16382 MB
node 3 free: 15911 MB
node distances:
node 0 1 2 3
0: 10 21 21 21
1: 21 10 21 21
2: 21 21 10 21
3: 21 21 21 10
[root@berlin7 ~]#
[root@berlin7 ~]#
[root@berlin7 ~]# targets="`lctl dl | grep obdfilter | awk '{print $4}' | tr '\n' ' '`" size=4096 \
    rszlo=1024 rszhi=1024 nobjlo=1 nobjhi=1 thrlo=1 thrhi=128 case=disk tests_str="write read" \
    rslt_loc=/root/obdsurvey obdfilter-survey
Fri Jan 21 14:39:14 CET 2011 Obdfilter-survey for case=disk from berlin7
ost 15 sz 62914560K rsz 1024K obj 15 thr 15 write 651.19 [ 27.99, 52.99] read 2622.92 [ 61.99,1419.80]
ost 15 sz 62914560K rsz 1024K obj 15 thr 30 write 1249.90 [ 51.99, 112.97] read 2185.82 [ 37.98, 766.90]
ost 15 sz 62914560K rsz 1024K obj 15 thr 60 write 1569.65 [ 55.98, 204.97] read 2069.03 [ 16.97, 610.65]
ost 15 sz 62914560K rsz 1024K obj 15 thr 120 write 1648.17 [ 28.90, 345.93] read 2011.21 [ 43.30, 788.25]
ost 15 sz 62914560K rsz 1024K obj 15 thr 240 write 1551.18 [ 14.73, 395.89] read 1987.69 [ 60.77, 520.36]
ost 15 sz 62914560K rsz 1024K obj 15 thr 480 write 1582.04 [ 9.29, 338.90] read 1943.12 [ 58.36, 570.59]
ost 15 sz 62914560K rsz 1024K obj 15 thr 960 write 1480.17 [ 6.00, 292.76] read 1899.71 [ 60.11, 444.05]
ost 15 sz 62914560K rsz 1024K obj 15 thr 1920 write 1277.28 [ 2.98, 312.91] read 1844.78 [ 50.56, 432.15]

So it seems this lock contention is not the most limiting factor here.

HTH,
Sebastien.

Comment by Andreas Dilger [ 09/Feb/11 ]

This is already being discussed in bug LU-29, so I don't think we need this bug.

Comment by Liang Zhen (Inactive) [ 09/Feb/11 ]

Andreas, this ticket was created to track the work for Bull; also, this one is about NUMA performance, while LU-29 is more about SMP scalability (BKL).

Comment by Niu Yawei (Inactive) [ 09/Feb/11 ]

Yes, and LU-29 is a 1.8.6 blocker, and Peter Jones said NUMA support isn't a 1.8.6 goal, so we'd better open another ticket for it.

Comment by Niu Yawei (Inactive) [ 09/Feb/11 ]

Copied from b22980:

Thank you for your review, Andreas. As we discussed in the Jira system, I think this issue might be
NUMA-dependent, because the 'unlocked_ioctl' patch works for another customer who runs tests on
SMP (though he only compared up to 24 cores).

So as a next step, I would like Sebastien to run some tests:

  • We have only compared 32 cores with the patch applied; I think we'd better get the numbers for 1,2,3,4
    sockets enabled (8,16,24,32 cores) with and without the unlocked_ioctl patch applied.
  • To verify whether it's a NUMA-dependent issue, we also want the numbers for 8,16,24,32 cores enabled,
    but this time with the enabled cores distributed evenly across the sockets (enable 2,4,6,8 cores
    on each socket).

At the same time, we want to collect some oprofile data during the above tests.

BTW: I'm wondering how IOzone got the 2500 MB/s (mentioned in comment #1): how many objects were in the
IOzone test, and how was the number measured?

Comment by Sebastien Buisson (Inactive) [ 10/Feb/11 ]

Hi,

Concerning IOzone, comment #1 in bug 22980 dates back to June 2010, so I do not have the full details about this.
In this case, we ran IOzone with approximately the following command lines, in parallel on 5 clients:

Write: iozone -s 4g -r 1024 -+n -c -e -i 0 -t 2 -F /lustre/dir<X>/file1 /lustre/dir<X>/file2
Read: iozone -s 4g -r 1024 -+n -c -e -i 1 -t 2 -F /lustre/dir<X>/file1 /lustre/dir<X>/file2

I do not remember the stripe count, probably 2 or 4.
The IOZone figures we gave are the ones directly returned by IOZone ('Children see throughput').
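For completeness, a stripe count on such a test directory would typically have been set with something along these lines (the directory path and count here are hypothetical, not the actual configuration used):

  # stripe every new file created in /lustre/dir1 across 4 OSTs
  lfs setstripe -c 4 /lustre/dir1
  # check the layout currently in effect
  lfs getstripe /lustre/dir1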

I will be able to run the obdfilter-survey tests you are asking for, but I would really appreciate it if you could specify the exact oprofile command lines for the stats you want.

TIA,
Sebastien.

Comment by Niu Yawei (Inactive) [ 10/Feb/11 ]

Hi, Sebastien

I see. I was just wondering whether the object count of the IOzone test is the same as in the obdfilter-survey test, and how the aggregated throughput is calculated. Anyway, let's focus on the obdfilter-survey test for the moment.

For the oprofile commands, I think the general usage is fine (a combined sketch follows below):

  • opcontrol --vmlinux=/path/to/vmlinux
  • opcontrol --start
  • run test
  • opcontrol --dump
  • opreport -l (the output is what we want)
  • opcontrol --reset

I'm not an oprofile expert; if you run into any trouble with oprofile, you can ask Liang for help. I believe he used oprofile a lot when he was working on SMP performance.
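For reference, a minimal end-to-end sketch of the sequence above wrapped around one survey run; the vmlinux path and the survey parameters are placeholders and must be adapted to the actual system:

  opcontrol --vmlinux=/usr/lib/debug/lib/modules/`uname -r`/vmlinux
  opcontrol --reset
  opcontrol --start
  size=4096 rszlo=1024 rszhi=1024 nobjlo=1 nobjhi=1 thrlo=1 thrhi=128 \
      case=disk tests_str="write read" sh obdfilter-survey
  opcontrol --dump
  opreport -l > opreport_run1.txt   # the per-symbol report we want
  opcontrol --stop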

Comment by Liang Zhen (Inactive) [ 11/Feb/11 ]

Sebastien,

Could you also collect numastat at the beginning and end of each test (or use any better way you know of to collect stats on NUMA behaviour)? That way we can better understand whether or not it is just about cross-node data traffic.
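For instance, a minimal way to capture that (the file names are arbitrary):

  numastat > /tmp/numastat.before
  sh obdfilter-survey               # the test run, with the usual parameters
  numastat > /tmp/numastat.after
  diff /tmp/numastat.before /tmp/numastat.after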

Thanks
Liang

Comment by Sebastien Buisson (Inactive) [ 14/Feb/11 ]

obdfilter-survey results, without oprofile and numastat.

Comment by Liang Zhen (Inactive) [ 14/Feb/11 ]

Sebastien,
Thanks for your data.
I have a couple of questions about your server (learned from your presentation at LUG...):

  • I assume you have 2 IOHs on the server, right?
  • If you have more than one IOH, how are the OSTs distributed across the IOHs? 1,3,5,7... on IOH1 and 0,2,4,6... on IOH2?

Thanks
Liang

Comment by Sebastien Buisson (Inactive) [ 14/Feb/11 ]

Hi,

Yes, you are right, we have 2 IOHs on the OSS. Half of the OSTs are directly connected to the first IOH, and the other half is directly connected to the second IOH.

Sebastien.

Comment by Liang Zhen (Inactive) [ 15/Feb/11 ]

Sebastien, so I think each NUMIOA node contains 2 sockets (or NUMA nodes), right? Could you give us detailed information about how the CPU nodes are distributed across the NUMIOA nodes? i.e.:
numioa node0 == socket0 + socket1 + IOH0, numioa node1 == socket2 + socket3 + IOH1, or
numioa node0 == socket0 + socket2 + IOH0, numioa node1 == socket1 + socket3 + IOH1, or
something like that?

We are working on two directions right now:

  • try to find out whether there are more high-contention locks like the BKL; oprofile can help with this
  • make the lctl processes NUMIOA-node affine; Niu is working on a prototype patch

Thanks
Liang

Comment by Sebastien Buisson (Inactive) [ 15/Feb/11 ]

Yes, our MESCA machine is made of 2 NUMIOA nodes:
numioa node 0 == socket0 + socket1 + IOH0
numioa node 1 == socket2 + socket3 + IOH1

Cheers,
Sebastien.

Comment by Niu Yawei (Inactive) [ 15/Feb/11 ]

Patch to set CPU affinity for the lctl test_brw processes, and an example config file.

Comment by Niu Yawei (Inactive) [ 15/Feb/11 ]

Hi, Sebastien

Thanks for your test results. It looks like the patch improves write performance a little when there are few cores (1 or 2 sockets, 8 or 16 cores); for 24 and 32 cores I don't see improvements. What surprised me is that read performance improves a lot when the cores are distributed across different sockets; I don't know the reason so far.

To verify whether the issue is NUMIOA-dependent (degradation caused by accessing remote memory/IOH), I made a patch to set CPU affinity for the lctl brw_test threads; we want to use this patch to collect more data for further analysis. (The patch and the example config file are attached; the config file pathname should be "/tmp/affinity_map".)
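The patch itself works inside lctl, but the intended effect is roughly what can be illustrated from the shell with taskset; the CPU list and pid below are placeholders, not the mechanism the patch actually uses:

  # start a workload pinned to cpus 0,4,8,12 (one socket in the numactl layout above)
  taskset -c 0,4,8,12 some_io_workload &
  # or re-pin an already running process/thread by pid
  taskset -pc 0,4,8,12 12345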

I'm not sure if we can access your machine now (I think Liang has provided a static address 99.96.190.234). If it's already accessible to us, please also send us a guide on how to run tests on your machine, so we can run the tests ourselves from now on. Thank you.

BTW: could you also provide the sgpdd-survey command mentioned in the first comment?

Comment by Sebastien Buisson (Inactive) [ 15/Feb/11 ]

Full obdfilter-survey results. In the tarball please find:

  • summary.txt: table summarizing the test results
  • result_*.txt: results for a specific test, along with 'numastat' output
  • opreport_*.txt: associated oprofile data

As you can see, I was not able to run the tests in the '2 sockets' configuration while collecting data with oprofile. The node kept crashing in the middle of obdfilter-survey, and I do not know who to blame here...

Comment by Niu Yawei (Inactive) [ 15/Feb/11 ]

Thanks a lot, Sebastien. Which kernel version did you run the test on?

Comment by Sebastien Buisson (Inactive) [ 15/Feb/11 ]

We use a custom kernel, based on RHEL6 GA (2.6.32-71.el6.x86_64).

Comment by Sebastien Buisson (Inactive) [ 15/Feb/11 ]

The test system we are dedicating to you is not ready yet, so I will have to run the tests by myself.

I will try lctl_setaffinity.patch, but could you please tell me what kind of tests you need? Only in 'thread' mode, right? Still with oprofile and numastat? How many cores/sockets activated? With or without the 'unlocked_ioctl' patch?

TIA,
Sebastien.

Comment by Niu Yawei (Inactive) [ 15/Feb/11 ]

Hi, Sebastien

I'd like you to run two tests, one in 'thread' mode, and another in 'objid' mode:

  • with the 'unlocked_ioctl' patch;
  • 24 cores activated (cores distributed across 4 sockets);
  • with oprofile and numastat;
  • provide the /tmp/obdfilter_survey_xxxx.detail as well, where the thread/object CPU mapping is logged.

In 'objid' mode, you have to provide the objid to CPU core map in the config file, so you need to know the object ids and map them to appropriate CPUs (so that each CPU always accesses its local IOH) before the test. Thanks.

Comment by Niu Yawei (Inactive) [ 15/Feb/11 ]

Change vmalloc() to kmalloc() in the ioctl path.

Comment by Niu Yawei (Inactive) [ 15/Feb/11 ]

Hi, Sebastien

The oprofile data you provided is very helpful. In the unpatched tests we can see that thread_return() ranks extremely high, which I think is caused by contention on the BKL; in the patched (with unlocked_ioctl) tests we can see that alloc_vmap_area() and find_vmap_area() rank very high, which I think is caused by contention on the vmap_area_lock.

I made a patch (remove_vmalloc.patch) that changes the vmalloc() to kmalloc() in the ioctl path, which should eliminate the contention on vmap_area_lock. Before you run the tests I suggested in my last comment, I would really like you to run with this patch (together with the unlocked_ioctl patch) first to see what happens. (Of course, please enable oprofile while running the tests.) Thank you.
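As a quick sanity check after rerunning with both patches, those symbols should drop out of the top of the profile; something like the following (the symbol names are the ones mentioned above) makes that easy to see:

  opreport -l | grep -E 'thread_return|alloc_vmap_area|find_vmap_area'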

Comment by Niu Yawei (Inactive) [ 15/Feb/11 ]

Change the vmalloc() to kmalloc() in the ioctl path. (The previous patch isn't correct; it is superseded by this one.)

Comment by Sebastien Buisson (Inactive) [ 16/Feb/11 ]

Full obdfilter-survey results with unlocked_ioctl and remove_vmalloc patches. In the tarball please find:

  • summary.txt: table summarizing the test results
  • result_*.txt: results for a specific test, along with 'numastat' output
  • opreport_*.txt: associated oprofile data
Comment by Niu Yawei (Inactive) [ 17/Feb/11 ]

Thanks for your testing, Sebastien.

The results show that both read and write performance improved hugely, and the oprofile data looks normal this time. So I think the degradation was caused by contention on the BKL and vmap_area_lock. What I don't understand is why the read throughput is extremely high in some cases (more than 10000 MB/s); what is the raw bandwidth of each OST?

Comment by Sebastien Buisson (Inactive) [ 17/Feb/11 ]

You're welcome.

The storage array we are attached to should not give us more than 5 GB/s (read and write). So I think the figures given by obdfilter-survey are inaccurate because the test does not run long enough. Maybe I should increase the size.

Do you still need me to run affinity tests?

Cheers,
Sebastien.

Comment by Niu Yawei (Inactive) [ 17/Feb/11 ]

I don't think we need to run the affinity tests, thank you.

Comment by Liang Zhen (Inactive) [ 17/Feb/11 ]

I agree that we don't need to run the affinity tests, because numastat shows that foreign memory access is not a big issue (< 5%). However, I do think we should increase the size (probably 5X) so we get a clearer picture.
Sebastien, could you please help us run:

  • increase size (5X)
  • only run with patches (kmalloc patch and remove BKL patch)
  • only run with 1,2,3,4 sockets (don't need to iterate over 8,16,24 cores)
  • if possible, could you give us sgp-dd results on the same hardware, so we can see whether there is anything else we can improve (a sketch of a typical invocation follows below).
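In case it helps, a sketch of a typical sgpdd-survey invocation; the device names and parameter values are placeholders and the scsidevs list must match the local sg devices:

  size=8192 crglo=1 crghi=16 thrlo=1 thrhi=16 \
      scsidevs="/dev/sg1 /dev/sg2 /dev/sg3 /dev/sg4" \
      sh sgpdd-survey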

Thanks
Liang

Comment by Sebastien Buisson (Inactive) [ 21/Feb/11 ]

New results with unlocked_ioctl and remove_vmalloc patches. In the tarball please find:

  • result_*.txt: results for a specific obdfilter-survey test, along with 'numastat' output
  • opreport_*.txt: associated oprofile data
  • sgpdd_res.txt : sgpdd_survey results

I am sorry, I was not able to get results for 3, 2 and 1 socket. I launched the tests several times, and each time the server crashed. It seems the system does not appreciate running oprofile with only some of the sockets enabled...

sgpdd_survey results clearly show a limit around 3 GB/s. This limitation is due to the available bandwidth to the storage, because we use only 4 FC links.

Comment by Niu Yawei (Inactive) [ 21/Feb/11 ]

Hi, Sebastien

The results look really good; I think this is basically what we expected, thank you.

One thing still unknown is why write performance drops a lot at 960 threads. To measure how CPU affinity affects the test results, could you help us do some more tests? I think it will be useful for our further performance tuning work.

What I want to test is:

  • apply "remove BKL" + "kmalloc" + "lctl_setaffinity" patches;
  • run test in "objid" mode, 4 sockets enabled, and without oprofile enabled.
  • provide the result, numstat and the /tmp/obdfilter_survey_xxxx.detail (where the thread/object cpu mapping is logged)

In the "objid" mode, each lctl thread will be mapped to a specified cpu, so you should know all the objids before run tests and set the objid-cpu mapping in the /tmp/affinity_map (please refer to the affinity_map example), of course, the objid should be on the local IOH of it's mapped cpu.

Comment by Sebastien Buisson (Inactive) [ 23/Feb/11 ]

If I understand correctly, in order to know the objids in advance I should have a look at the obdfilter_survey_xxxx.detail file and assume the next run will do '+1' on the ids.
The problem is that obdfilter_survey_xxxx.detail contains the following:

=======================> ost 15 sz 314572800K rsz 1024K obj 15 thr 15
=============> Create 1 on localhost:quartcel-OST0005_ecc
create: 1 objects
create: #1 is object id 0x29
=============> Create 1 on localhost:quartcel-OST0008_ecc
create: 1 objects
create: #1 is object id 0x29
=============> Create 1 on localhost:quartcel-OST0007_ecc
create: 1 objects
create: #1 is object id 0x29
=============> Create 1 on localhost:quartcel-OST000c_ecc
create: 1 objects
create: #1 is object id 0x29
=============> Create 1 on localhost:quartcel-OST0000_ecc
create: 1 objects
create: #1 is object id 0x29
=============> Create 1 on localhost:quartcel-OST000d_ecc
create: 1 objects
create: #1 is object id 0x29
=============> Create 1 on localhost:quartcel-OST0004_ecc
create: 1 objects
create: #1 is object id 0x29
=============> Create 1 on localhost:quartcel-OST0006_ecc
create: 1 objects
create: #1 is object id 0x29
=============> Create 1 on localhost:quartcel-OST0002_ecc
create: 1 objects
create: #1 is object id 0x29
=============> Create 1 on localhost:quartcel-OST000a_ecc
create: 1 objects
create: #1 is object id 0x29
=============> Create 1 on localhost:quartcel-OST0009_ecc
create: 1 objects
create: #1 is object id 0x29
=============> Create 1 on localhost:quartcel-OST000b_ecc
create: 1 objects
create: #1 is object id 0x29
=============> Create 1 on localhost:quartcel-OST000e_ecc
create: 1 objects
create: #1 is object id 0x29
=============> Create 1 on localhost:quartcel-OST0001_ecc
create: 1 objects
create: #1 is object id 0x29
=============> Create 1 on localhost:quartcel-OST0003_ecc
create: 1 objects
create: #1 is object id 0x29

So, as you can see, all objects have the same ids on the OSTs... In that case, I am afraid the 'objid to core' mapping is useless.
Unless I manually create new objects on the OSTs so that the objids are different everywhere?

Sebastien.

Comment by Niu Yawei (Inactive) [ 23/Feb/11 ]

Hi, Sebastien

Ah, right. Binding object id to CPU doesn't make sense for this test, so I've changed the patch to bind devno to CPU (lctl_setaffinity_v2.patch), and I also updated the example affinity_map. Please use the new patch to run the test I mentioned in my previous comment. Thanks for your effort!

Comment by Sebastien Buisson (Inactive) [ 24/Feb/11 ]

Results with unlocked_ioctl and remove_vmalloc patches, plus affinity patch for obdfilter-survey. In the tarball please find:

  • summary.txt: table summarizing the test results
  • result_*.txt: results for a specific obdfilter-survey test, along with 'numastat' output
  • obdfilter_survey*.detail: associated obdfilter-survey detailed data
  • affinity_map.*: associated affinity mapping
  • sgpdd_res_new.txt : sgpdd_survey results

I am sorry to post these results only today, but Jira was not accessible yesterday afternoon (French time). I took the opportunity to run some more tests, with deliberately bad affinities.
I also ran sgpdd-survey tests, because the hardware on which these tests were launched is not the same as before. Due to a software reconfiguration, we now have access to 16 LUNs (instead of 15) through 8 FC links (instead of 4). So raw performance is better than before, as we are no longer limited by FC bandwidth.

In the results, several points are surprising:

  • obdfilter-survey results are better than sgpdd-survey results in write;
  • affinity mapping has little impact on performance; good mapping beats bad mapping only with a high number of threads.
Comment by Niu Yawei (Inactive) [ 24/Feb/11 ]

Hi, Sebastien

I agree with your conclusion that NUMIOA affinity doesn't affect the test results much. To double-check it, I think we should run obdfilter-survey (without the affinity patch) once more on the new system to see if there is any difference.

Regarding the issue of sgpdd-survey write being worse than obdfilter-survey: I think one possible reason is that sgpdd-survey uses few threads; however, the results show the 128-thread numbers are very close to the 64-thread ones, so maybe there is some bottleneck in the sgpdd-survey test tool. I will look into the code to get more details. In the meantime, I think two quick tests could help this investigation:

  • run sgpdd-survey on only one device, to get the raw bandwidth of each single device;
  • collect oprofile data while running sgpdd-survey over 16 devices, to see if there is any contention.

I got access to the test system today, but it will take me some time to learn how to run these tests on it; could you help me run the above tests this time? Thanks a lot.

Comment by Niu Yawei (Inactive) [ 02/Mar/11 ]

I ran a bunch of sgpdd-survey tests on berlin6, and the oprofile results show more than 50% of samples in copy_user_generic_string(). Since sgpdd-survey calls the read/write syscalls to perform I/O against the block devices, there are lots of copy_from/to_user() calls transferring data between userspace and kernel, and that consumes a lot of CPU time. However, I don't think copy_from_user() is the major bottleneck of sgpdd-survey write, because when I ran sgpdd-survey read only, the copy_user_generic_string() samples were still very high (more than 60%), yet sgpdd-survey read performance is comparable with obdfilter-survey's.

sgp_dd calls its own sg_write()/sg_read(), and sg_read()/sg_write() simply generate I/O requests for the underlying device and then unplug the device (that explains why iostat doesn't work for sgp_dd tests: it bypasses the kernel I/O statistics code). I think the bottleneck should be in sg_write() (maybe it doesn't work well under multi-core/multi-threaded conditions), though the exact root cause is unknown yet.

Comment by Liang Zhen (Inactive) [ 08/Mar/11 ]

Performance charts for obdfilter-survey & sgpdd-survey.
BTW, non-patched data is not included because it is too difficult for us to reinstall unpatched RPMs remotely, so we don't have unpatched results on the same machine.

Comment by Liang Zhen (Inactive) [ 08/Mar/11 ]

Sorry, some data in the previous file is wrong.

Comment by Liang Zhen (Inactive) [ 20/Mar/11 ]

performance graphs (adding data for non-patched version)

Comment by Liang Zhen (Inactive) [ 21/Mar/11 ]

latest testing results graphs

Comment by Niu Yawei (Inactive) [ 17/Dec/13 ]

I think this can be closed now.
