[LU-11667] sanity test 317: FAIL: Expected Block 8 got 48 for f317.sanity Created: 14/Nov/18  Updated: 10/Dec/22  Resolved: 20/Nov/21

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.12.0, Lustre 2.12.4
Fix Version/s: Lustre 2.15.0

Type: Bug Priority: Minor
Reporter: Jian Yu Assignee: WC Triage
Resolution: Fixed Votes: 0
Labels: arm, arm-server, ppc
Environment:

Arch: aarch64 (client)


Issue Links:
Cloners
is cloned by LU-15223 Improve partial page read/write In Progress
Related
is related to LU-10300 Can the Lustre 2.10.x clients support... Resolved
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

sanity test 317 failed on ARM clients as follows:

== sanity test 317: Verify blocks get correctly update after truncate ================================ 15:30:27 (1542036627)
1+0 records in
1+0 records out
5242880 bytes (5.2 MB) copied, 0.467256 s, 11.2 MB/s
/mnt/lustre/f317.sanity has size 2097152 OK
/mnt/lustre/f317.sanity has size 4097 OK
/mnt/lustre/f317.sanity has size 4000 OK
/mnt/lustre/f317.sanity has size 509 OK
/mnt/lustre/f317.sanity has size 0 OK
2+0 records in
2+0 records out
8192 bytes (8.2 kB) copied, 0.0562888 s, 146 kB/s
  File: '/mnt/lustre/f317.sanity'
  Size: 24575     	Blocks: 48         IO Block: 4194304 regular file
Device: 2c54f966h/743766374d	Inode: 144115708605760525  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2018-11-12 15:30:29.000000000 +0000
Modify: 2018-11-12 15:30:29.000000000 +0000
Change: 2018-11-12 15:30:29.000000000 +0000
 Birth: -
 sanity test_317: @@@@@@ FAIL: Expected Block 8 got 48 for f317.sanity 

Maloo report: https://testing.whamcloud.com/test_sets/074afc02-e7bf-11e8-815b-52540065bddc



 Comments   
Comment by Gerrit Updater [ 14/Nov/18 ]

Jian Yu (yujian@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/33656
Subject: LU-11667 tests: disable sanity test 317 for ARM
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 4f6c573ff0f3ecd69a835a19d8402a69f39d088e

Comment by Andreas Dilger [ 27/Nov/18 ]

This could just be a test defect, because the dd bs=$grant_blk_size count=2 seek=5 can write chunks that are not aligned on a PAGE_SIZE boundary if blocksize != PAGE_SIZE.
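Concretely, a quick sketch of the alignment arithmetic, using the numbers from the failed run above (2 records copied 8192 bytes, so grant_blk_size was 4096); the written range is 4KiB-aligned but not 64KiB-aligned:

# Sketch only: the byte range of the dd in test_317, assuming grant_blk_size=4096
# as in the failed run above (2 records = 8192 bytes).
grant_blk_size=4096
start=$(( 5 * grant_blk_size ))        # 20480: dd seek=5 starts here
end=$(( start + 2 * grant_blk_size ))  # 28672: end of the written range
echo "write spans bytes $start-$((end - 1))"
echo "4KiB-aligned:  $(( start % 4096 == 0 && end % 4096 == 0 ))"    # prints 1
echo "64KiB-aligned: $(( start % 65536 == 0 && end % 65536 == 0 ))"  # prints 0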

Comment by James Nunez (Inactive) [ 12/Feb/20 ]

We're seeing the same failure for PPC client testing. See https://testing.whamcloud.com/test_sets/6b8903ee-4d49-11ea-b58e-52540065bddc for logs.

Comment by Xinliang Liu [ 16/Sep/21 ]

I created two files of the same size (10 bytes), one in my home dir and one in the /mnt/lustre dir. Since the backend filesystem block size is 4K in both cases, the inodes' allocated blocks should be the same: 8, counted in 512-byte units.

Test file created in the home dir:

$ getconf PAGESIZE
65536
$ echo "123456789" > ~/testfile
$ stat ~/testfile
  File: /root/testfile
  Size: 10              Blocks: 8          IO Block: 65536  regular file
Device: fc02h/64514d    Inode: 12863429    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2021-09-16 02:51:21.268641287 +0000
Modify: 2021-09-16 03:08:05.382557951 +0000
Change: 2021-09-16 03:08:05.382557951 +0000
 Birth: -
$ stat -c %b ~/testfile
8
$ stat -c %B ~/testfile
512
$ stat -c %s ~/testfile
10
$ stat -f ~/testfile
  File: "/root/testfile"
    ID: fc0200000000 Namelen: 255     Type: xfs
Block size: 4096       Fundamental block size: 4096
Blocks: Total: 52272379   Free: 45840170   Available: 45840170
Inodes: Total: 104549824  Free: 104176363

Test file created in the Lustre dir:

$ getconf PAGESIZE
65536
$ echo "123456789" > /mnt/lustre/testfile
$ stat -c %s /mnt/lustre/testfile
10
$ stat -c %B /mnt/lustre/testfile
512
$ stat -c %b /mnt/lustre/testfile
128
$ stat  /mnt/lustre/testfile
  File: /mnt/lustre/testfile
  Size: 10              Blocks: 128        IO Block: 4194304 regular file
Device: 2c54f966h/743766374d    Inode: 144115205272502274  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2021-09-16 02:53:57.000000000 +0000
Modify: 2021-09-16 03:07:28.000000000 +0000
Change: 2021-09-16 03:07:28.000000000 +0000
 Birth: -
$ stat  -f /mnt/lustre/testfile
  File: "/mnt/lustre/testfile"
    ID: 2c54f96600000000 Namelen: 255     Type: lustre
Block size: 4096       Fundamental block size: 4096
Blocks: Total: 78276      Free: 77931      Available: 71141
Inodes: Total: 100000     Free: 99726

But the Lustre test file's inode shows 128 blocks. That looks wrong, doesn't it?

 

 

Comment by Andreas Dilger [ 16/Sep/21 ]

I think this has always been the case for writes from 64KB PAGE_SIZE clients (e.g. back to ia64). The reason is that the client sends a full-page write, because it is only tracking dirty pages, and the server writes the full amount of data sent by the client. I suspect that ext4 is handling this by having multiple 4KB buffer_heads on a 64KB page, and using the buffer dirty state to determine which blocks to write, but Lustre doesn't use buffer heads.
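For reference, a quick sanity check of the block counts in this ticket (a sketch; stat reports blocks in 512-byte units, as the stat -c %B output above shows, and the reading of the 48-block failure is an interpretation consistent with the explanation here):

# Sketch: stat counts blocks in 512-byte units (stat -c %B prints 512 above).
echo $(( 4096  / 512 ))   # 8   -> one 4KiB block, what xfs reports for the 10B file
echo $(( 65536 / 512 ))   # 128 -> one full 64KiB page, what Lustre reports here
echo $(( 48 * 512 ))      # 24576 -> in the original failure, 48 blocks cover almost
                          #          exactly the 24575-byte file, i.e. the hole left
                          #          by the sparse dd ended up allocated as well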

Comment by Xinliang Liu [ 29/Oct/21 ]

Hi Andreas,

I found that this issue happens with an Arm 64K PAGE_SIZE OST server.

When a file is created, blocks are allocated PAGE_SIZE-aligned; see osd_ldiskfs_map_inode_pages().

E.g. on a 64K PAGE_SIZE Arm64 OST server, creating a file smaller than 64K actually allocates 128 blocks of 512 bytes each.

We need to adjust the test for a 64K PAGE_SIZE OST server.
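One possible shape for such an adjustment, purely as a sketch (this is not the patch that eventually landed; do_facet and skip are the usual test-framework helpers):

# Hypothetical sketch: bail out of the strict block-count check when the OST
# runs with a PAGE_SIZE larger than the 4KiB the test implicitly assumes.
ost_page_size=$(do_facet ost1 getconf PAGESIZE)
(( ost_page_size > 4096 )) &&
	skip "block accounting differs with ${ost_page_size}-byte pages on the OST"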

Comment by Xinliang Liu [ 04/Nov/21 ]

I am wondering whether we should make block allocation aligned with BLOCK_SIZE, as ext4 does, which could save space for large PAGE_SIZE systems (e.g. 64K). Then there would be no need to change the test case. From a look at the code, it seems both the OSC client and the OST server would need to change for this: the client currently always sends pages with no holes (the page start offset is always 0) to the server for writing, and the server side would need to make block allocation aligned with the block size.

Comment by Xinliang Liu [ 04/Nov/21 ]

I found the client-side code related to page clipping:

int osc_io_commit_async(const struct lu_env *env,
                        const struct cl_io_slice *ios,
                        struct cl_page_list *qin, int from, int to,
                        cl_commit_cbt cb)
{
...
        /* Handle partial page cases */
        last_page = cl_page_list_last(qin);
        if (oio->oi_lockless) {
                page = cl_page_list_first(qin);
                if (page == last_page) {
                        cl_page_clip(env, page, from, to);
                } else {
                        if (from != 0)
                                cl_page_clip(env, page, from, PAGE_SIZE);
                        if (to != PAGE_SIZE)
                                cl_page_clip(env, last_page, 0, to);
                }
        }

        ll_pagevec_init(pvec, 0);

Currently, it seems a normal write doesn't go into this "if (oio->oi_lockless) {" part of the code. Does anyone know why it is oi_lockless? @Andreas Dilger

 

Comment by Andreas Dilger [ 04/Nov/21 ]

paf0186, you are probably the most interested in changing this code. Handling sub-page writes for ARM would not be very different from sub-page writes for x86, which would potentially allow e.g. IO500 unaligned writes to be handled much more efficiently.

Comment by Patrick Farrell [ 04/Nov/21 ]

Xinliang,

I am not 100% sure I understand your question - Are you saying it is oi_lockless?  It should not be.  This (commit_async) code is buffered, and lockless buffered is broken and also off by default.  I have a patch to remove it, but it's normally off anyway.

What are you looking for/hoping for here?

Note we clip pages in other places too.

Comment by Patrick Farrell [ 04/Nov/21 ]

"I am wondering whether we should make block allocation aligned with BLOCK_SIZE, as ext4 does, which could save space for large PAGE_SIZE systems (e.g. 64K). Then there would be no need to change the test case. From a look at the code, it seems both the OSC client and the OST server would need to change for this: the client currently always sends pages with no holes (the page start offset is always 0) to the server for writing, and the server side would need to make block allocation aligned with the block size."

Can you talk more about what you're thinking?  I am not quite sure what the implication of changing block allocation on the server would be for the client.  Why does changing server block allocation filter back to the client like this?

More generally, about partial-page I/O:
Generally speaking, we can't have partial pages except at the start and end of each write - that's a limitation of InfiniBand, but there are also page cache restrictions.

In general, RDMA can be unaligned at the start and unaligned at the end, but that's it.  This applies even when combining multiple RDMA regions - it's some limitation of the hardware/drivers.  So we can do a truly unaligned I/O (with a partial page at the beginning and end), but then we can't combine it with other I/Os.

There is also a page cache limitation here.  The Linux page cache insists on working with full pages - it will only allow partial pages at file_size.  So, e.g., a 3K file is a single page with 3K in it, and we can write just 3K.  But if we want to write 3K into a large 'hole' in a file, Linux will enforce writing PAGE_SIZE.  This is not a restriction we can easily remove; it is an important part of the page cache.

Comment by Patrick Farrell [ 04/Nov/21 ]

By the way, I am happy to keep talking about this, if you have thoughts or questions or whatever.  I've looked at sub-page I/O a few times, but you may have a different idea than what I have tried.

Comment by Andreas Dilger [ 04/Nov/21 ]

Patrick, I was thinking that if we can handle a write (uncached) from the client that is RDMA 64KB, but has a non-zero start and end offset (4KB initially), it might be generalizable to any byte offset.

I'm aware of the RDMA limitations, but I'm wondering if those can be bypassed (if necessary) by transferring a whole page over the network, but storing it in a temporary page and copying the data for a cached/unaligned read-modify-write on the server to properly align the data. The content at the start/end of the page sent from the client would be irrelevant, since it will be trimmed by the server anyway when the copy is done.

While the copy might be expensive for very large writes, my expectation is that this would be most useful for small writes. That does raise the question of whether the data could be transferred in the RPC as a short write, but for GPU direct we require RDMA to send the data directly from the GPU RAM to the OSS. Maybe it is just a matter of generalizing the short write handling to allow copying from the middle of an RDMA page?

Comment by Xinliang Liu [ 06/Nov/21 ]

Hi paf0186 and Andreas Dilger, thank you for the clarification about partial-page writes. It really helps me a lot.

For the ldiskfs backend filesystem, I see that if the user issues a partial-page cached write, Lustre (both the client side and the server side) converts it into a full-page write. I want to make Lustre do a real partial-page write, i.e. one whose length is less than PAGE_SIZE regardless of whether the start offset is zero or non-zero, so that Lustre can handle the partial-page write in the sanity test 317 snippet below for a large PAGE_SIZE (e.g. 64 KB) and pass the test. That's the problem I want to solve.

sanity.sh
test_317() {
...
	#
	# sparse file test
	# Create file with a hole and write actual two blocks. Block count
	# must be 16.
	#
	dd if=/dev/zero of=$DIR/$tfile bs=$grant_blk_size count=2 seek=5 \
		conv=fsync || error "Create file : $DIR/$tfile"

...
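As an aside, a small sketch of why this particular dd is sensitive to a 64K PAGE_SIZE (assuming grant_blk_size=4096, as in the failed run in the description): the two written chunks and the hole in front of them all fall inside a single 64KiB page, so writing the full dirty page also allocates the hole.

# Sketch, assuming grant_blk_size=4096: which pages does the dd above touch?
grant_blk_size=4096
start=$(( 5 * grant_blk_size ))             # 20480: first byte written
end=$(( start + 2 * grant_blk_size - 1 ))   # 28671: last byte written
echo "4KiB pages touched:  $(( start / 4096 )) to $(( end / 4096 ))"    # 5 to 6
echo "64KiB pages touched: $(( start / 65536 )) to $(( end / 65536 ))"  # 0 to 0
# With 64KiB pages the write and the intended hole share page 0, so a
# full-page write from the client allocates the hole's blocks as well.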

I am trying to understand all the details and limitations, including some you mentioned, e.g. RDMA partial-page writes, GPU Direct writes, etc.

I have a draft patch now which makes the client side send a niobuf containing the non-zero file start offset and the real file end offset to the server. This requires clipping the page on the client side. On the server side, only the necessary range is written (i.e. from the real non-zero file start offset to the file end offset).

I will send the patch for review soon. Let's see if we can work out a solution.  Thanks.

Comment by Patrick Farrell [ 06/Nov/21 ]

How do you handle the page cache?  Like, what's in there?  And how do you get the range for the clipping?  Etc.  Some of these questions will be answered with the patch, of course.

But say you write this clipped partial page - What happens when you read it on the client which wrote it?  What is in the rest of the page?

And, going on from there:
What is in the rest of the page if the file was empty there?  And what is in the rest of the page if there was already data in the whole page when you write it?

Basically what I am saying is unless you get very clever, this will break the page cache.

You would also need to mark this page as non-mergeable to avoid the RDMA issue, but that's easy to do.  The real sticking point is the page cache.

Comment by Patrick Farrell [ 06/Nov/21 ]

One idea which Andreas and I had some time ago was something like marking the page as not up to date (that is, not setting the up-to-date flag, so the raw page state stays not-up-to-date), so that if the page were accessed, the client would re-read it from the server.

This would mean the page was effectively uncached, which is a bit weird, but could work - I think the benefit is pretty limited since you can't easily combine these partial pages into larger writes.  (RDMA issue again)

But anyway, not setting written pages up to date turned out to be really complicated, and I decided it was unworkable.  The write code assumes pages are up to date as part of writing them, and while I was able to work around a few things, I decided it felt like I was very much going against the intent of the code.

Comment by Patrick Farrell [ 06/Nov/21 ]

We also need to ask:
What's the benefit/what's the end goal, and how much do we have to do to get there?

The benefit will be pretty limited if we can't also solve the RDMA issue.  The benefit would only apply to sub-page (< PAGE_SIZE) writes, and each one would have to be sent to disk by itself.

One way to solve the RDMA problem would be to send full pages over the network, but attach extra data in the RPC telling the server the actual range for each page.  This would be very complicated, I think, and involve new ways of handling writes on the client and server.

And this assumes we can solve the page cache issue!

Comment by Xinliang Liu [ 15/Nov/21 ]

 

Hi paf0186 and adilger, since partial-page write support is so complicated, it might take us a long time; let's create another Jira ticket for partial writes and discuss it there. I have also sent a draft patch for review.

LU-15223

Comment by Gerrit Updater [ 20/Nov/21 ]

"Oleg Drokin <green@whamcloud.com>" merged in patch https://review.whamcloud.com/45395/
Subject: LU-11667 tests: Fix sanity test 317 for 64K PAGE_SIZE OST
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 63d4d9ff2f5c8cc992ca6b2f698bb43a3257bfb3

Comment by James A Simmons [ 20/Nov/21 ]

Workaround landed. The proper fix is being done in LU-15223.
