Details
- Type: Bug
- Resolution: Fixed
- Priority: Critical
- Lustre 2.6.0, Lustre 2.7.0, Lustre 2.5.3
- 3
- 14622
Description
This issue was created by maloo for Nathaniel Clark <nathaniel.l.clark@intel.com>.
This issue relates to the following test suite run:
http://maloo.whamcloud.com/test_sets/e5783778-f887-11e3-b13a-52540035b04c.
The sub-test test_132 failed with the following error:
test failed to respond and timed out
Info required for matching: sanity 132
Attachments
Issue Links
- duplicates
  - LU-5163 (lu_object.h:852:lu_object_attr()) ASSERTION( ((o)->lo_header->loh_attr & LOHA_EXISTS) != 0 ) failed (Resolved)
  - LU-6105 Update ZFS/SPL version to 0.6.3-1.2 (Resolved)
- is duplicated by
  - LU-4716 replay-ost-single test_5: stuck in dbuf_read->zio_wait (Resolved)
- is related to
  - LU-6089 qsd_handler.c:1139:qsd_op_adjust()) ASSERTION( qqi ) failed (Resolved)
  - LU-6155 osd_count_not_mapped() calls dbuf_hold_impl() without the lock (Resolved)
  - LU-5277 sanity test_132: mdt_build_target_list(), unable to handle kernel NULL pointer dereference (Resolved)
  - LU-5737 sanity test_132: client NULL req->rq_import in ptlrpc_request_committed() (Resolved)
  - LU-6008 sanity test_102b: setstripe failed (Resolved)
  - LU-6195 osd-zfs: osd_declare_object_destroy() calls dmu_tx_hold_zap() with wrong keys (Resolved)
  - LU-4950 sanity-benchmark test fsx hung: txg_sync was stuck on OSS (Closed)
  - LU-7020 OST_DESTROY message times out on MDS repeatedly, indefinitely (Closed)
  - LU-2160 Implement ZFS dmu_tx_hold_append() declarations for llog (Open)
  - LU-4968 Test failure sanity test_132: umount /mnt/ost2 (Resolved)
  - LU-3665 obdfilter-survey test_3a: unmount stuck in obd_exports_barrier() (Resolved)
Comments
I would agree with Alex on this. Deferring the unlink of small files would probably double or triple the total I/O the MDT is doing, because in addition to the actual dnode deletion it also needs to insert the dnode into the deathrow ZAP in one TXG and then delete it from that ZAP in a later TXG. If a large number of objects is being deleted at once (easily possible on the MDT), the deathrow ZAP may grow quite large (and never shrink), and updates to it become less efficient than if it were kept small.
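For reference, the extra bookkeeping described in the comment could look roughly like the sketch below: each deferred unlink pays for one ZAP insert in the TXG that queues the object and one ZAP remove (plus the dnode free) in the later TXG that reaps it. This is a minimal illustration only, assuming a hypothetical deathrow_zap object and hypothetical helper names (deathrow_defer_unlink(), deathrow_reap_one()); only the DMU/ZAP calls themselves are real ZFS interfaces, and this is not the code path Lustre's osd-zfs actually uses.

    /*
     * Minimal sketch of a "deathrow ZAP" deferred-unlink scheme.
     * Assumption: "deathrow_zap" and both helpers are hypothetical;
     * they are not taken from the Lustre or ZFS source.
     */
    #include <sys/dmu.h>
    #include <sys/dmu_tx.h>
    #include <sys/zap.h>

    /* TXG N: instead of freeing the dnode now, record it in the deathrow ZAP. */
    static int
    deathrow_defer_unlink(objset_t *os, uint64_t deathrow_zap, uint64_t obj)
    {
            char name[32];
            dmu_tx_t *tx = dmu_tx_create(os);
            int rc;

            (void) snprintf(name, sizeof (name), "%llu", (u_longlong_t)obj);

            /* Extra cost #1: a ZAP insert that a direct unlink would not need. */
            dmu_tx_hold_zap(tx, deathrow_zap, B_TRUE, name);
            rc = dmu_tx_assign(tx, TXG_WAIT);
            if (rc != 0) {
                    dmu_tx_abort(tx);
                    return (rc);
            }
            rc = zap_add(os, deathrow_zap, name, sizeof (uint64_t), 1, &obj, tx);
            dmu_tx_commit(tx);
            return (rc);
    }

    /* A later TXG: reap one queued object. */
    static int
    deathrow_reap_one(objset_t *os, uint64_t deathrow_zap,
                      const char *name, uint64_t obj)
    {
            dmu_tx_t *tx = dmu_tx_create(os);
            int rc;

            /* The dnode free is the work a direct unlink would do anyway. */
            dmu_tx_hold_free(tx, obj, 0, DMU_OBJECT_END);
            /* Extra cost #2: a ZAP remove to drop the deathrow entry. */
            dmu_tx_hold_zap(tx, deathrow_zap, B_FALSE, name);
            rc = dmu_tx_assign(tx, TXG_WAIT);
            if (rc != 0) {
                    dmu_tx_abort(tx);
                    return (rc);
            }
            rc = dmu_object_free(os, obj, tx);
            if (rc == 0)
                    rc = zap_remove(os, deathrow_zap, name, tx);
            dmu_tx_commit(tx);
            return (rc);
    }

Counted this way, the comment's estimate becomes concrete: per deleted object the deferred path adds one ZAP insert and one ZAP remove on top of the dnode free that a direct unlink already pays, and both ZAP updates land in the (potentially large) deathrow ZAP.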