[LU-12359] Remote shared burst buffer PCC on a shared backend fs Created: 29/May/19  Updated: 25/Aug/20

Status: Open
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Minor
Reporter: Qian Yingjin Assignee: Qian Yingjin
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Related
is related to LU-10606 HSM info as part of LOV layout xattr Open
Rank (Obsolete): 9223372036854775807

 Description   

HPC burst buffers are a fast storage layer positioned between the compute engines and the backend storage systems.

There are two representative burst buffer architectures: remote shared burst buffers and node-local burst buffers. DataWarp and Infinite Memory Engine belong to the former. In the case of remote shared burst buffers, the SSD storage resides in I/O nodes positioned between the compute nodes and the backend storage, so data movement between compute nodes and the burst buffer must go through the network. Placing burst buffers in I/O nodes facilitates their independent development, deployment, and maintenance. The aggregate bandwidth of node-local burst buffers, on the other hand, grows linearly with the number of compute nodes, but node-local burst buffers also require scalable metadata management to maintain a global namespace across all nodes.

RW-PCC provides an elegant way to couple node-local burst buffers with Lustre. The metadata is managed by Lustre and stored on MDTs. Thus, it becomes part of the global Lustre namespace. Moreover, the file data can be migrated from the LPCC cache to the Lustre OSTs via file restores, and it is transparent to the application. Furthermore, we can customize various cache strategies and provide cache isolation according to files’ attributes.
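As a minimal sketch of this coupling (all paths and IDs here are hypothetical: a local flash filesystem mounted at /mnt/pcc, the Lustre client mounted at /mnt/lustre, and HSM archive ID 2 reserved for this cache; the exact option syntax may vary between Lustre releases):

    # Register the local flash filesystem as an RW-PCC backend (archive ID 2),
    # automatically caching *.h5 files that belong to project 500.
    lctl pcc add /mnt/lustre /mnt/pcc --param "projid={500}&fname={*.h5} rwid=2"

    # Explicitly attach a file into the cache, inspect its PCC state, then
    # detach it; the data is restored back to the Lustre OSTs so that it is
    # again served from the OST tier.
    lfs pcc attach -i 2 /mnt/lustre/work/output.h5
    lfs pcc state /mnt/lustre/work/output.h5
    lfs pcc detach /mnt/lustre/work/output.h5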

 

Although node-local PCC consumes almost no network resources when performing data I/O, its capacity is limited by the storage media on that client.

 

A novel remote shared PCC for the Lustre filesystem is proposed, which can be used as a remote shared burst buffer on a shared PCC backend fs. This shared PCC backend fs could be a high-speed networked filesystem (e.g. another Lustre storage system) built on NVMe or SSD, while the current Lustre filesystem mainly uses slower HDDs.

In this way, all Lustre clients can use the shared PCC backend fs, which has a larger capacity, and we end up with four storage tiers for a single Lustre filesystem (a setup sketch follows the list below):

  • OST storage tier
  • Original node-local PCC
  • Remote shared PCC on a shared backend fs
  • Traditional Lustre HSM solution
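A rough sketch of how such a setup could look, assuming the shared backend is a second, all-flash Lustre filesystem mounted on every client at the same path /mnt/flash, with /mnt/flash/pcc as the shared cache directory and archive ID 4 reserved for it (all names and IDs here are hypothetical; syntax may vary between Lustre releases):

    # On every Lustre client: mount the fast shared filesystem at a common path.
    mount -t lustre flashmgs@o2ib:/flash /mnt/flash

    # Register the shared directory as a PCC backend on each client, using the
    # same archive ID (4) on every client.
    lctl pcc add /mnt/lustre /mnt/flash/pcc --param "projid={1000} rwid=4"

Because the backend path is shared, every client sees the same pool of PCC copies; how concurrent access behaves is described below.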

 

The implementation of remote shared PCC can reuse the foundation and framework of the current node-local PCC.

Moreover, under remote shared RO-PCC, once a file is attached into the shared PCC backend fs, it can be read from PCC by all clients.

Remote shared RW-PCC works as it does today: a cached file can only be read and written by a single client.



 Comments   
Comment by Patrick Farrell (Inactive) [ 29/May/19 ]

"A novity remote shread PCC for" what is "novity" supposed to be?  (shread is obviously shared  )

Comment by Patrick Farrell (Inactive) [ 29/May/19 ]

I think I don't understand the point here - Why is this better than (for example) using Lustre directly to access the back end fs?

Comment by Qian Yingjin [ 29/May/19 ]

The reason is the limited capacity of the current node-local PCC. And I think remote shared PCC compares favourably with other remote shared burst buffers such as IME and DataWarp, as it has a unified global namespace, can transparently access data, and so on.

"

Thus, it becomes part of the global Lustre namespace. Moreover, the file data can be migrated from the LPCC cache to the Lustre OSTs via file restores, and it is transparent to the application. Furthermore, we can customize various cache strategies and provide cache isolation according to files’ attributes.

 "

Comment by Patrick Farrell (Inactive) [ 29/May/19 ]

So one of your examples was using a second Lustre file system, right?  So a shared, global file system - Why not access that file system directly?

Is the main idea that you could use this to access files stored on that second file system from the namespace of the first Lustre file system?  Hmmmmm!

Comment by Andreas Dilger [ 29/May/19 ]

Before any development effort is spent on this, there are several other things that are more useful to work on, such as productizing the WBC feature, CCI, HSM integration into composite layouts, etc.

Comment by Shuichi Ihara [ 29/May/19 ]

I have the same question Patrick asked. The "remote PCC" would be Lustre on flash devices, right? That means a client is able to mount it directly, and that should be faster than adding another PCC layer? Also, if there are mixed SSD and HDD OSTs in the same Lustre namespace, isn't FLR a better way: write data to the SSD layer first, then migrate it to the HDD OSTs with an FLR mirror?

Comment by Qian Yingjin [ 29/May/19 ]

Compared with other remote shared burst buffers (IME and DataWarp, see https://www.nersc.gov/users/computational-systems/cori/burst-buffer/burst-buffer/), I think our remote shared PCC has several advantages:

1) Unified global namespace

2) Any high-speed networked filesystem can be used as the PCC backend

3) The file data can be migrated from the PCC cache to the Lustre OSTs via file restores, transparently to the application, whereas a traditional remote shared burst buffer needs to explicitly stage data into and out of the burst buffer cache.

4) We can customize various cache strategies and provide cache isolation according to files’ attributes (a rule sketch follows below).
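As a sketch of point 4, the rule expression given to lctl pcc add controls which files are eligible for caching; the project IDs, uid, and filename patterns below are made up, and the exact rule syntax may differ between Lustre releases:

    # Cache only *.h5 and *.dat files belonging to projects 500 or 1000, plus
    # anything owned by uid 1001, in the backend registered as archive ID 2.
    lctl pcc add /mnt/lustre /mnt/pcc \
        --param "projid={500,1000}&fname={*.h5,*.dat},uid={1001} rwid=2"

Combined with quotas on the backend fs itself, rules like this are what provide the per-user/group/project cache isolation.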

Comment by Qian Yingjin [ 29/May/19 ]

From my understanding, I don't think it needs any development effort (at least it works for RW-PCC without the open-attach feature, and for RO-PCC with or without open-attach enabled); the current PCC can already be used as a remote shared burst buffer.

Comment by Andreas Dilger [ 29/May/19 ]

My preference for long-term development in this area is to have mirrors/copies/archives of data integrated with composite layouts. The way that DAOS is using a foreign layout, and/or moving the HSM xattr into a component to link to copies outside of Lustre seems like the right approach. This allows multiple archive copies per file, and would unify the tools needed to manage PCC, HSM, and FLR.

That would also allow, for example, two Lustre filesystems to link to each other's files (the foreign layout xattr would contain the FID of the remote filesystem copy). In the normal case, each filesystem would have an "archive" copy in the remote filesystem, and if it wants to make a change locally it would mark the local archive copy as being dirty, set the remote primary copy as stale, modify the file locally (similar to PCC), then "archive" the file back to the remote filesystem. This would be symmetrical between both filesystems and allow e.g. remote replication where either copy could be updated (though not both copies at the same time).

Comment by Qian Yingjin [ 29/May/19 ]

>if there are mixed SSD and HDD OSTs in the same Lustre namespace, isn't FLR a better way: write data to the SSD layer first, then migrate it to the HDD OSTs with an FLR mirror?

 

Compared with FLR, PCC can:

1) Transparently restore data from PCC into the Lustre OSTs when it hits an -ENOSPC or -EDQUOT error, while FLR on SSD cannot tolerate this kind of failure, I think.

2) Customize various cache strategies and provide cache isolation according to files’ attributes.

For example, PCC can provide cache isolation mechanisms for administrators to manage how much PCC storage capacity each user/group/project can use.

FLR cannot control how much SSD space each user/group/project can use.

Pool-based quotas might help, but not in the user/group/project dimension, I think.

 

Moreover, PCC can implement a job-based quota via project quota on the PCC backend fs, I think.

We just need to add a mapping between the job identifier (i.e. the job name) and a dedicated project ID (e.g. 100) on the PCC backend fs:

1) Before the job starts, set up this mapping and enable project quota enforcement on the PCC backend fs;

2) While the job runs, at the time of attaching a file into PCC, set the project ID (100) on the PCC copy to achieve the job-based quota;

3) When the job finishes, remove the mapping between the job identifier and the project ID (100), and remove the project quota enforcement associated with this project ID on the PCC backend fs (a rough sketch of this workflow follows below).
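A rough sketch of these three steps, assuming the PCC backend is itself a Lustre filesystem mounted at /mnt/flash with project quota enabled, the job maps to project ID 100, and only one job at a time uses this backend (names, limits, and the job-scheduler hooks themselves are hypothetical):

    # 1) Before the job starts: record the job-name -> project-ID mapping
    #    (site-specific), make PCC copies under the backend directory inherit
    #    project 100, and set a quota limit for that project on the backend fs.
    lfs project -s -p 100 -r /mnt/flash/pcc
    lfs setquota -p 100 -B 10T /mnt/flash

    # 2) While the job runs: files attached into this backend are created under
    #    /mnt/flash/pcc and are therefore charged to project 100.
    lfs pcc attach -i 4 /mnt/lustre/job/input.dat

    # 3) After the job finishes: drop the mapping (site-specific) and clear the
    #    quota enforcement for project 100.
    lfs setquota -p 100 -b 0 -B 0 /mnt/flash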

 

Comment by Andreas Dilger [ 29/May/19 ]

The very significant problem with PCC today is that users will LOSE THEIR DATA if the client is offline. Using FLR to keep an extra copy of the data in the PCC cache is much more usable. I don't think FLR and PCC are incompatible with each other, if we move the PCC/HSM xattr into an FLR component. The main difference is that FLR can mark one mirror STALE, but keep the data in the OSTs, while HSM has to release the data permanently. Also, HSM today can only have one archive copy of the data, but if the HSM xattr is moved into a component we could have many copies.

I don't think that any of the PCC quota options are incompatible with Lustre providing quotas itself. Having to manage quota on each PCC node separately/locally would also be complex for the administrator, and not what they want for a distributed filesystem.

I think the first and most important thing to do in this area is to consolidate the HSM xattr with PFL/FLR/composite layouts to give us the flexibility to combine these features in interesting ways. Secondly, PCC is still limited to creating one file on the MDS for each local file, so productizing the WBC feature would allow a client to create files at a high speed locally, without an MDS RPC for each file, which is useful for many things.

Before we add many more complex features to PCC, we also need to get some feedback from users on how it is being used, fix bugs, etc. to know that time spent there is worthwhile compared to other features.
