This is about ticket https://jira.whamcloud.com/browse/LU-17573:
We encountered a similar issue after setting a PFL layout: a write ended with a "File too large" error even though all OSTs were active.
Here is how we reproduced the error:
===================================================
[faaland1@mutt9 pfltest] $mkdir nopfl pfl-one pfl-two
[faaland1@mutt9 pfltest] $lfs setstripe -E 256G -c2 -S 16M -E -1 -c -1 pfl-two
[faaland1@mutt9 pfltest] $lfs setstripe -E -1 -c -1 pfl-one
[faaland1@mutt9 pfltest] $lfs setstripe -c -1 -S 16M nopfl
[faaland1@mutt9 pfltest] $pwd
/p/lflood/faaland1/test/pfltest
[faaland1@mutt9 pfltest] $lfs check servers .
lflood-OST0000-osc-ff1a28194597d000 active.
lflood-OST0001-osc-ff1a28194597d000 active.
lflood-OST0002-osc-ff1a28194597d000 active.
lflood-OST0003-osc-ff1a28194597d000 active.
lflood-MDT0000-mdc-ff1a28194597d000 active.
lflood-MDT0001-mdc-ff1a28194597d000 active.
lflood-MDT0002-mdc-ff1a28194597d000 active.
lflood-MDT0003-mdc-ff1a28194597d000 active.
MGC172.19.1.133@o2ib100 active.
[faaland1@mutt9 pfltest] $for xx in *; do echo $xx; dd if=/dev/zero bs=1 seek=$((8*2))T of=${xx}/16TB count=1; echo; done 2>&1
nopfl
1+0 records in
1+0 records out
1 byte copied, 0.0006174 s, 1.6 kB/s
pfl-one
1+0 records in
1+0 records out
1 byte copied, 0.000456144 s, 2.2 kB/s
pfl-two
dd: failed to truncate to 17592186044416 bytes in output file 'pfl-two/16TB': File too large
==============================================
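For reference (my own arithmetic, not from the ticket): the `seek=$((8*2))T` argument to dd with `bs=1` is a 16 TiB offset, which matches the byte count dd reports when the truncate fails on the two-component layout:

```python
# Sketch: confirm that dd's seek offset (seek=$((8*2))T with bs=1, i.e. 16 TiB)
# equals the byte count in dd's "failed to truncate to 17592186044416 bytes" error.
seek_tib = 8 * 2                 # $((8*2)) with the 'T' (TiB) suffix
offset_bytes = seek_tib * 2**40  # 1 TiB = 2**40 bytes
print(offset_bytes)              # 17592186044416
```

So the write that succeeds on the `nopfl` and `pfl-one` layouts but fails on `pfl-two` is the same 16 TiB sparse write in all three cases; only the layout differs.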
Could one of the watchers confirm that the current fix for this ticket really addresses the error we encountered?
Thanks.
For some reason this issue wasn't filed as a "Bug" and I couldn't figure out how to change that, so I created a new ticket: https://jira.whamcloud.com/browse/LU-18347