Lustre / LU-933

allow disabling the mdc_rpc_lock for performance testing


    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: Lustre 2.3.0


      It is desirable to allow disabling the client mdc_rpc_lock() in order to allow clients to send multiple filesystem-modifying RPCs at the same time. While this would break MDS recovery (due to insufficient transaction slots in the MDS last_rcvd file), it would allow a smaller number of clients to generate a much higher RPC load on the MDS. This is ideal for MDS/RPC load-testing purposes, and can also be used to help evaluate the potential benefits of implementing the multi-slot last_rcvd feature.

      A simple mechanism to do this would be to set the client fail_loc to a specific value, which allows the client to send multiple metadata-modifying requests at one time. Some care must be taken when setting and clearing this fail_loc, since it could lead to inconsistencies where mdc_get_rpc_lock() is skipped while the fail_loc is set, but mdc_put_rpc_lock() for that same RPC is run after the fail_loc is cleared.
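      A test session might look something like the following sketch. The 0x804 value for OBD_FAIL_MDC_RPCS_SEM and the createmany invocation are illustrative assumptions, not taken from this ticket; verify the fail_loc value against obd_support.h in the tree being tested.

      ```shell
      # Illustrative only: 0x804 is assumed to be OBD_FAIL_MDC_RPCS_SEM's
      # value in obd_support.h; check the source tree before use.
      lctl set_param fail_loc=0x804                # allow parallel modifying MDS RPCs
      createmany -o /mnt/lustre/testdir/f 100000   # drive metadata load (example)
      lctl set_param fail_loc=0                    # restore serialized, recoverable RPCs
      ```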

      One possibility is something like the following, though there may be others. This implementation:

      • ensures that requests sent when OBD_FAIL_MDC_RPCS_SEM is turned off do not happen concurrently with other requests
      • is race-free even in the transition period when OBD_FAIL_MDC_RPCS_SEM is turned on or off
      struct mdc_rpc_lock {
              cfs_semaphore_t       rpcl_sem;
              struct lookup_intent *rpcl_it;
              int                   rpcl_fakes;
      };

      #define MDC_FAKE_RPCL_IT ((void *)0x2c0012bfUL)

      static inline void mdc_get_rpc_lock(struct mdc_rpc_lock *lck,
                                          struct lookup_intent *it)
      {
              if (it == NULL || (it->it_op != IT_GETATTR && it->it_op != IT_LOOKUP)) {
                      /* This would normally block until the existing request finishes.
                       * If fail_loc is set it will block until the regular request is
                       * done, then set rpcl_it to MDC_FAKE_RPCL_IT.  Once that is set
                       * it will only be cleared when all fake requests are finished.
                       * Only when all fake requests are finished can normal requests
                       * be sent, to ensure they are recoverable again. */
                      cfs_down(&lck->rpcl_sem);
                      if (CFS_FAIL_CHECK(OBD_FAIL_MDC_RPCS_SEM)) {
                              lck->rpcl_it = MDC_FAKE_RPCL_IT;
                              lck->rpcl_fakes++;
                              cfs_up(&lck->rpcl_sem);
                      } else {
                              /* This will only happen when the CFS_FAIL_CHECK() was
                               * just turned off but there are still requests in progress.
                               * Wait until they finish.  It doesn't need to be efficient
                               * in this extremely rare case, just have low overhead in
                               * the common case when it isn't true. */
                              while (unlikely(lck->rpcl_it == MDC_FAKE_RPCL_IT)) {
                                      cfs_up(&lck->rpcl_sem);
                                      cfs_schedule();
                                      cfs_down(&lck->rpcl_sem);
                              }
                              LASSERT(lck->rpcl_it == NULL);
                              lck->rpcl_it = it;
                              /* hold rpcl_sem until mdc_put_rpc_lock() */
                      }
              }
      }

      static inline void mdc_put_rpc_lock(struct mdc_rpc_lock *lck,
                                          struct lookup_intent *it)
      {
              if (it == NULL || (it->it_op != IT_GETATTR && it->it_op != IT_LOOKUP)) {
                      if (lck->rpcl_it == MDC_FAKE_RPCL_IT) {
                              cfs_down(&lck->rpcl_sem);
                              LASSERTF(lck->rpcl_fakes > 0, "%d\n", lck->rpcl_fakes);
                              if (--lck->rpcl_fakes == 0)
                                      lck->rpcl_it = NULL;
                      } else {
                              LASSERTF(it == lck->rpcl_it, "%p != %p\n", it, lck->rpcl_it);
                              lck->rpcl_it = NULL;
                      }
                      cfs_up(&lck->rpcl_sem);
              }
      }
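
      As a sanity check on the accounting above, the same get/put protocol can be exercised in userspace. This is a sketch under stated assumptions: a pthread mutex stands in for cfs_semaphore_t, a plain int stands in for CFS_FAIL_CHECK(OBD_FAIL_MDC_RPCS_SEM), and no Lustre types or intent ops are used.

      ```c
      #include <assert.h>
      #include <pthread.h>
      #include <sched.h>
      #include <stdio.h>

      #define MDC_FAKE_RPCL_IT ((void *)0x2c0012bfUL)

      /* Userspace stand-in for CFS_FAIL_CHECK(OBD_FAIL_MDC_RPCS_SEM). */
      static int fail_loc;

      struct mdc_rpc_lock {
              pthread_mutex_t  rpcl_mutex;
              void            *rpcl_it;     /* current holder, or fake marker */
              int              rpcl_fakes;  /* fake requests still in flight */
      };

      static void mdc_get_rpc_lock(struct mdc_rpc_lock *lck, void *it)
      {
              pthread_mutex_lock(&lck->rpcl_mutex);
              if (fail_loc) {
                      /* fake request: count it, drop the lock, run concurrently */
                      lck->rpcl_it = MDC_FAKE_RPCL_IT;
                      lck->rpcl_fakes++;
                      pthread_mutex_unlock(&lck->rpcl_mutex);
              } else {
                      /* fail_loc just cleared: wait for fakes to drain */
                      while (lck->rpcl_it == MDC_FAKE_RPCL_IT) {
                              pthread_mutex_unlock(&lck->rpcl_mutex);
                              sched_yield();
                              pthread_mutex_lock(&lck->rpcl_mutex);
                      }
                      assert(lck->rpcl_it == NULL);
                      lck->rpcl_it = it;   /* hold the mutex across the "RPC" */
              }
      }

      static void mdc_put_rpc_lock(struct mdc_rpc_lock *lck, void *it)
      {
              if (lck->rpcl_it == MDC_FAKE_RPCL_IT) {
                      pthread_mutex_lock(&lck->rpcl_mutex);
                      assert(lck->rpcl_fakes > 0);
                      if (--lck->rpcl_fakes == 0)
                              lck->rpcl_it = NULL;
              } else {
                      assert(lck->rpcl_it == it);
                      lck->rpcl_it = NULL;
              }
              pthread_mutex_unlock(&lck->rpcl_mutex);
      }

      int main(void)
      {
              struct mdc_rpc_lock lck = { PTHREAD_MUTEX_INITIALIZER, NULL, 0 };
              int a, b;

              fail_loc = 1;                 /* two "RPCs" outstanding at once */
              mdc_get_rpc_lock(&lck, &a);
              mdc_get_rpc_lock(&lck, &b);
              printf("fakes in flight: %d\n", lck.rpcl_fakes);

              fail_loc = 0;                 /* cleared while fakes in flight */
              mdc_put_rpc_lock(&lck, &a);
              mdc_put_rpc_lock(&lck, &b);

              mdc_get_rpc_lock(&lck, &a);   /* normal serialized path again */
              mdc_put_rpc_lock(&lck, &a);
              printf("fakes after drain: %d\n", lck.rpcl_fakes);
              return 0;
      }
      ```

      The key property this exercises is the transition: a cleared fail_loc only takes effect once rpcl_fakes drains to zero, so a put issued after the clear still decrements the fake count rather than unbalancing the normal lock.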


    • Assignee: Liang Zhen (Inactive), Andreas Dilger