[LU-3062] Multiple clients writing to the same file caused mpi application to fail Created: 29/Mar/13  Updated: 08/Sep/16  Resolved: 08/Sep/16

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.3.0
Fix Version/s: None

Type: Bug Priority: Major
Reporter: Jay Lan (Inactive) Assignee: Oleg Drokin
Resolution: Cannot Reproduce Votes: 0
Labels: ptr
Environment:

Lustre server 2.1.4 centos 6.3
Lustre clients 2.3.0 sles11sp1


Attachments: Text File 1000145.pbspl1.0.log.txt     File 1000145.pbspl1.0.log.txt.-pbs     Text File lu-3062-reproducer-logs.tgz     File nbp2-server-logs.LU-3062     File reproducer1.scr     File reproducer1.scr.o1000145     File reproducer2.scr     File reproducer_debug_r311i1n10_log     File reproducer_debug_r311i1n9_log     File reproducer_full_debug_log     Text File reproducer_full_debug_xaa.bz2     File reproducer_full_debug_xab.bz2     Text File reproducer_full_debug_xac.bz2     Text File reproducer_full_debug_xad.bz2     File reproducer_full_debug_xae.bz2    
Severity: 2
Rank (Obsolete): 7461

 Description   

After we upgraded our clients from 2.1.3 to 2.3.0, some users (and the crowd is increasing) started seeing their applications fail, hang, or even crash. The servers run 2.1.4. In all cases, the same applications ran OK with 2.1.3.

Since we do not have a reproducer for the hang and crash cases, we attach here a reproducer that causes the application to fail. The tests were executed with stripe counts of 1, 2, 4, 8, and 16. The higher the stripe count, the more likely the application is to fail.

'reproducer1.scr' is a PBS script that starts 1024 MPI tests.
'reproducer1.scr.o1000145' is the PBS output of the execution.
'1000145.pbspl1.0.log.txt' is the output of one of our tools, which collects the /var/log/messages entries from the servers and clients related to the specified job.

The PBS-specific argument lines start with the "#PBS " string and are ignored if the script is executed without PBS. The script uses SGI MPT, but it can be converted to OpenMPI or Intel MPI.
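
For reference, a minimal sketch of the kind of job the attached reproducer1.scr submits (the resource requests, compiler wrapper, and rank count below are illustrative only; the real script uses SGI MPT and 1024 ranks):

#PBS -l select=4:ncpus=16
#PBS -l walltime=00:10:00
# Every rank runs the same one-line Fortran program, so all processes write the
# same 12-byte record to fort.9999 in the shared Lustre working directory.
cd "$PBS_O_WORKDIR"

cat > repro.f90 <<'EOF'
program repro
  write(9999) 66   ! unformatted write: 4-byte marker, 4-byte integer 66, 4-byte marker
end program repro
EOF

mpif90 -o repro repro.f90   # hypothetical compiler wrapper
lfs setstripe -c 16 .       # higher stripe counts made the failure more likely
mpiexec -np 64 ./repro      # e.g. 4 Sandy Bridge nodes x 16 cores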



 Comments   
Comment by Oleg Drokin [ 29/Mar/13 ]

Ok, so from the logs we can see the client was evicted by the server for some reason.
Why it was evicted is not clear, because there seem to be no server logs included, but I imagine it is due to AST timeouts. We included multiple patches in 2.4 to help with this.

In addition to that I cannot stop wondering about this message:

Thu Mar 28 12:43:49 2013 M r325i4n13 kernel: [569506.266405] pcieport 0000:00:02.0: AER: Multiple Corrected error received: id=0010
Thu Mar 28 12:43:49 2013 M r325i4n13 kernel: [569506.266572] pcieport 0000:00:02.0: PCIE Bus Error: severity=Corrected, type=Data Link Layer, id=0010(Receiver ID)
Thu Mar 28 12:43:49 2013 M r325i4n13 kernel: [569506.276986] pcieport 0000:00:02.0:   device [8086:3c04] error status/mask=00000040/00002000
Thu Mar 28 12:43:49 2013 M r325i4n13 kernel: [569506.285434] pcieport 0000:00:02.0:    [ 6] Bad TLP            

hopefully it did not result in any dropped messages.

Comment by Jay Lan (Inactive) [ 29/Mar/13 ]

About the "PCIE Bus Error: severity=Corrected" errors, I checked with our admins. He said it was normal and not indicative of an IB problem.

I will collect and provide logs at the servers.

BTW, the test was run using 1024 CPUs distributed across 64 nodes. However, I was able to reproduce the problem with only 4 Sandy Bridge nodes (4*16 = 64 processes).

Comment by Jay Lan (Inactive) [ 29/Mar/13 ]

The PCIe corrected errors seem to be related to Sandy Bridge PCIe 3.0. We have seen tens of thousands of those errors a day. However, the same applications did not fail when run on 2.1.3 clients.

Can you identify those AST timeout patches? Are they client-side? Note that we run 2.1.4 on the servers. Is there any issue with the 2.1.4 server + 2.3.0 client combination?

Comment by Jay Lan (Inactive) [ 29/Mar/13 ]

This tarball contains syslog between [Thu Mar 28 12:00:00 2013] and [Thu Mar 28 13:00:00 2013].

service160 is the MDS/MGS. The rest are OSSes.

Comment by Jinshan Xiong (Inactive) [ 01/Apr/13 ]

Hi Jay Lan,

What does "write(9999) 66" mean in the reproducer? I mean how much data it will write to the file by this command.

Can you please collect lustre logs on the client and server side while running the reproducer?

Comment by Jay Lan (Inactive) [ 01/Apr/13 ]

(pbspl1,241) od -x fort.9999
0000000 0004 0000 0042 0000 0004 0000
0000014
(pbspl1,242) ls -l fort.9999
-rw-r--r-- 1 jlan g1099 12 Mar 28 18:39 fort.9999
(pbspl1,243)

It is 12 bytes.

Comment by Jay Lan (Inactive) [ 01/Apr/13 ]

Hi Jinshan,

The client side logs are in 1000145.pbspl1.0.log.txt. You may want to filter out pbs information. The nbp2-server-logs.LU-3062 is the tarball of all server side logs:

linux39.jlan 109> tar -tzf nbp2-server-logs.LU-3062
service160
service161
service162
service163
service164
service165
service166
service167
service168

service160 is the MDS/MGS; the rest are OSSes.

Comment by Jay Lan (Inactive) [ 01/Apr/13 ]

This is a shortened version of 1000145.pbspl1.0.log.txt with the pbs_mom and epilogue messages removed.

Comment by Jinshan Xiong (Inactive) [ 01/Apr/13 ]

Hi Jay Lan,

I already took a look at those files, but I need more detailed information. Can you please turn on more debug options, especially LNET, on both the client and server side and collect the logs again? The most interesting thing is that the clients even lost their connection to the MGS, which is not involved in the I/O path at all. If my guess is correct, this is likely an LNET problem, but I'd like to make that clear before pointing my finger at others.
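
For reference, this kind of collection is usually done with lctl on each node (the debug mask and buffer size below are illustrative, not a prescribed setting):

lctl set_param debug="+net +rpctrace +dlmtrace"   # add LNET, RPC and lock tracing
lctl set_param debug_mb=512                       # enlarge the kernel debug buffer
lctl clear                                        # start from an empty buffer
# ... run the reproducer ...
lctl dk /tmp/lustre-debug-$(hostname).log         # dump the debug buffer to a file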

Do you know whether the f90 program opens the file with O_APPEND, and whether "write(9999) 66" just writes 66 bytes to the file?

Thank you.

Comment by Jay Lan (Inactive) [ 01/Apr/13 ]

I will try to reproduce the problem with increased debugging.

The f90 program does not open the file with O_APPEND. Each instance writes 12 bytes to the file. The content of the file:
0000000 0004 0000 0042 0000 0004 0000
contains three 4-byte words. The first and last words are probably the record envelope. The middle word is 0x42, the hexadecimal value of the number 66; it is not 66 bytes. The program just writes the number 66 to the output file.
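
For clarity, reading the same file as 4-byte integers shows the record structure directly (the output below is reconstructed from the od -x dump above, not a new capture):

od -A d -t d4 fort.9999
0000000           4          66           4
0000012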

Comment by James Karellas [ 01/Apr/13 ]

The original reproducer had a bug in it.

Comment by James Karellas [ 01/Apr/13 ]

Client debug logs attached. Server logs will be tougher to get; we may have to switch to our test filesystem to get that to work. Please look at the client-side logs and determine whether you still want me to get the server logs.

Comment by Jay Lan (Inactive) [ 01/Apr/13 ]

Is there an interop issue between the 2.3.0 client and the 2.1.4 server? Does any change in the 2.3.0 client require a corresponding change on the server?

Comment by James Karellas [ 01/Apr/13 ]

Full debug was turned on this time. The debug logs were over 500MB and Jira has only a 10MB limit, so I took the portion of the log that made sense, showing the OST disconnect. Let me know if you want the whole client logs and we can figure out how to get them to you.

Comment by James Karellas [ 02/Apr/13 ]

Uploading the full debug log file. I split the file first, then compressed each segment using bzip2 to get below the maximum file size of 10MB.
The order of the files is xaa, xab, xac, xad, xae.
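
For reference, the segments were likely produced along these lines (the segment size is illustrative; each piece just has to compress below Jira's 10MB limit):

split -b 100M reproducer_full_debug_log reproducer_full_debug_x   # yields ..._xaa, ..._xab, ...
bzip2 reproducer_full_debug_xa?                                   # compress each segment

To reassemble: bzcat reproducer_full_debug_xa?.bz2 > reproducer_full_debug_log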

Comment by Peter Jones [ 30/May/13 ]

Bobbie

Could you please set up the reproducer supplied on April 1st above?

Thanks

Peter

Comment by Bobbie Lind (Inactive) [ 06/Jun/13 ]

I have received my account on Rosso and expect to complete the testing over the coming week.

Comment by Andreas Dilger [ 14/Jun/13 ]

So, just to clarify, the problem here is that the reproducer program is starting 1024 tasks to write 12 bytes to the same offset=0 of the same file (striped over 16 OSTs?), and there is a lot of contention? Or am I misunderstanding and each thread will write to non-overlapping ranges of the file (i.e. like O_APPEND)?

This isn't terribly surprising, because either case is a pathologically bad IO pattern. If they are all writing to the same offset it is completely serialized by the locking, while using O_APPEND actually gets worse with increasing numbers of stripes, since it needs to lock all stripes to get the current file size.

Do you have any idea what the application is actually trying to accomplish with these overlapping writes? Is there any chance to modify the application to do (whatever it is trying to do) in a more sensible manner? Depending on what the application is actually trying to accomplish, there may be many more filesystem-friendly ways of doing this.

The servers should definitely not fail in this case, though I can imagine that the clients might time out waiting for their chance to overwrite the same bytes again. The clients should reconnect and complete the writes, however.

It might be possible to optimize pathological cases like this by using OST-side locking for the RPCs, though sub-page writes would still present a difficulty that needs to be handled.

Comment by James Karellas [ 20/Jun/13 ]

Sent Andreas email on 18th, but didn't add to Jira.

Hey Andreas,

Our position is that users doing this type of work should
not cause an eviction. We agree that it is sub-optimal
at best (we have a different term for it: stupid), but
our users continue to do it. There are various reasons
why users can't/won't change their code here at NASA.
Word from management is that we need to get this fixed.
I've copied our local Lustre team in case anyone has
anything else to add.

Thanks,

jdk

Comment by Bobbie Lind (Inactive) [ 21/Jun/13 ]

I'm attaching the logs from running the reproducer on the OpenSFS cluster. It was a 16-node test with 1 MDS and 2 OSSes.

Comment by Jinshan Xiong (Inactive) [ 27/Jun/13 ]

It seems that this is due to a deadlock on the client side.

Is it possible to do an experiment as follows:
1. write 66 to the shared file as usual;
2. if the write can't be finished in 5 minutes (this value applies only to the NASA cluster, because the lock timeout there was 1900 seconds; on our local site we should make it shorter, say 60 seconds), have the script dump the stack traces of the processes running on the node with 'echo t > /proc/sysrq-trigger'. A sketch of such a wrapper follows below.
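
A minimal sketch of such a watchdog wrapper (the timeout and program name are illustrative; writing to /proc/sysrq-trigger requires root):

#!/bin/bash
./repro &                     # step 1: write 66 to the shared file as usual
pid=$!
sleep 300                     # 5 minutes on the NASA cluster; make it shorter elsewhere
if kill -0 $pid 2>/dev/null; then
    # step 2: the write has not finished, so dump all task stack traces to the kernel log
    echo t > /proc/sysrq-trigger
fi
wait $pid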

Comment by Jinshan Xiong (Inactive) [ 27/Jun/13 ]

After taking a further look, this appears to be a problem with truncate, because the test program opens the file with O_TRUNC. That is why more stripes make the problem more likely to occur.

Can you please try this patch: http://review.whamcloud.com/5208 to see if it can help?

Comment by Jay Lan (Inactive) [ 18/Jul/13 ]

Jim Karellas previously tested a 2.4 client (against a 2.1.5 server) and was
still able to reproduce the problem with ease. Since 2.4 contains the http://review.whamcloud.com/5208 patch that Jinshan mentioned and wanted us to test, I think we need to report back that test result.

Comment by Jay Lan (Inactive) [ 08/Sep/16 ]

We have not seen this for a long while and we are running 2.7, so please close it.

Comment by Peter Jones [ 08/Sep/16 ]

ok Jay
