Lustre / LU-8037

subtree mounts should reject '..' as a path component


Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Blocker
    • Affects Version/s: Lustre 2.9.0
    • Fix Version/s: Lustre 2.9.0

    Description

      t:~# mount t@tcp:/lustre/.. /mnt/lustre2 -t lustre -o user_xattr,flock
      t:~# /bin/ls /mnt/lustre2
      BATCHID		   O	     oi.16.22  oi.16.37  oi.16.51  oi.16.9
      CATALOGS	   oi.16.0   oi.16.23  oi.16.38  oi.16.52  OI_scrub
      changelog_catalog  oi.16.1   oi.16.24  oi.16.39  oi.16.53  PENDING
      changelog_users    oi.16.10  oi.16.25  oi.16.4	 oi.16.54  quota_master
      CONFIGS		   oi.16.11  oi.16.26  oi.16.40  oi.16.55  quota_slave
      fld		   oi.16.12  oi.16.27  oi.16.41  oi.16.56  REMOTE_PARENT_DIR
      hsm_actions	   oi.16.13  oi.16.28  oi.16.42  oi.16.57  reply_data
      last_rcvd	   oi.16.14  oi.16.29  oi.16.43  oi.16.58  ROOT
      LFSCK		   oi.16.15  oi.16.3   oi.16.44  oi.16.59  seq_ctl
      lfsck_bookmark	   oi.16.16  oi.16.30  oi.16.45  oi.16.6   seq_srv
      lfsck_layout	   oi.16.17  oi.16.31  oi.16.46  oi.16.60  update_log
      lfsck_namespace    oi.16.18  oi.16.32  oi.16.47  oi.16.61  update_log_dir
      lost+found	   oi.16.19  oi.16.33  oi.16.48  oi.16.62
      lov_objid	   oi.16.2   oi.16.34  oi.16.49  oi.16.63
      lov_objseq	   oi.16.20  oi.16.35  oi.16.5	 oi.16.7
      NIDTBL_VERSIONS    oi.16.21  oi.16.36  oi.16.50  oi.16.8
      t:~# /bin/ls -l /mnt/lustre2
      /bin/ls: cannot access /mnt/lustre2/REMOTE_PARENT_DIR: No data available
      
      Message from syslogd@t at Apr 18 10:37:43 ...
       kernel:[ 1401.215808] LustreError: 3572:0:(md_object.h:305:lu2md()) ASSERTION( o == ((void *)0) || IS_ERR(o) || lu_device_is_md(o->lo_dev) ) failed: 
      
      Message from syslogd@t at Apr 18 10:37:43 ...
       kernel:[ 1401.218227] LustreError: 3572:0:(md_object.h:305:lu2md()) LBUG
      
      [  507.642901] LDISKFS-fs (loop3): mounted filesystem with ordered data mode. quota=on. Opts: 
      [  590.252657] Lustre: lustre-MDT0000: trigger OI scrub by RPC for [0xe:0x987a9b4c:0x0], rc = 0 [2]
      [ 1401.215808] LustreError: 3572:0:(md_object.h:305:lu2md()) ASSERTION( o == ((void *)0) || IS_ERR(o) || lu_device_is_md(o->lo_dev) ) failed: 
      [ 1401.218227] LustreError: 3572:0:(md_object.h:305:lu2md()) LBUG
      [ 1401.219350] Pid: 3572, comm: mdt00_002
      [ 1401.220099] 
      [ 1401.220100] Call Trace:
      [ 1401.220869]  [<ffffffffa09338b5>] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
      [ 1401.222218]  [<ffffffffa0933eb7>] lbug_with_loc+0x47/0xb0 [libcfs]
      [ 1401.223424]  [<ffffffffa12f91a0>] mdt_getattr_internal+0xf00/0x1300 [mdt]
      [ 1401.224761]  [<ffffffffa0aa453a>] ? class_handle2object+0xea/0x1d0 [obdclass]
      [ 1401.226159]  [<ffffffffa12fbf1e>] mdt_getattr_name_lock+0xdfe/0x1920 [mdt]
      [ 1401.227481]  [<ffffffffa12fcf62>] mdt_intent_getattr+0x292/0x470 [mdt]
      [ 1401.228737]  [<ffffffffa12ee86e>] mdt_intent_policy+0x4ce/0xc80 [mdt]
      [ 1401.230070]  [<ffffffffa0c93122>] ldlm_lock_enqueue+0x132/0x920 [ptlrpc]
      [ 1401.231363]  [<ffffffffa0945101>] ? cfs_hash_rw_unlock+0x1/0x30 [libcfs]
      [ 1401.232676]  [<ffffffffa0cbd33f>] ldlm_handle_enqueue0+0x81f/0x14e0 [ptlrpc]
      [ 1401.234091]  [<ffffffffa0d30e94>] ? tgt_lookup_reply+0x34/0x190 [ptlrpc]
      [ 1401.235416]  [<ffffffffa0d427c1>] tgt_enqueue+0x61/0x230 [ptlrpc]
      [ 1401.236629]  [<ffffffffa0d4345f>] tgt_request_handle+0x90f/0x1470 [ptlrpc]
      [ 1401.238000]  [<ffffffffa0cf137a>] ptlrpc_main+0xcea/0x17f0 [ptlrpc]
      [ 1401.239269]  [<ffffffffa0cf0690>] ? ptlrpc_main+0x0/0x17f0 [ptlrpc]
      [ 1401.240477]  [<ffffffff8109e856>] kthread+0x96/0xa0
      [ 1401.241440]  [<ffffffff8100c30a>] child_rip+0xa/0x20
      [ 1401.242400]  [<ffffffff8100bb10>] ? restore_args+0x0/0x30
      [ 1401.243442]  [<ffffffff8109e7c0>] ? kthread+0x0/0xa0
      [ 1401.244398]  [<ffffffff8100c300>] ? child_rip+0x0/0x20
      


          People

            Assignee: Wang Shilong (wangshilong) (Inactive)
            Reporter: John Hammond (jhammond)
