Lustre / LU-12914

(mdt_open.c:312:mdt_prep_ma_buf_from_rep()) ASSERTION( ma->ma_lmv == ((void *)0) && ma->ma_lmm == ((void *)0) ) failed


Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Fix Version/s: Lustre 2.14.0
    • Severity: 3

    Description

      In order to investigate the auto-test/sanity test failures (and even an OOM crash) that my patch at https://review.whamcloud.com/35856/ (for LU-12682) is hitting when running only the review-ldiskfs-arm test suite (where the ARM client requires a v4.14.0-... kernel), I started trying to reproduce on a quite similar platform, by first running a similar v4 kernel version on the client side.

      Doing so, I triggered an unexpected server crash with the following signature:

      .............
      [774986.560025] Lustre: DEBUG MARKER: == sanity test 59: verify cancellation of llog records async ========================================= 14:45:59 (1572101159)
      [774993.083689] Lustre: DEBUG MARKER: == sanity test 60a: llog_test run from kernel module and test llog_reader ============================ 14:46:05 (1572101165)
      [774994.609842] Lustre: DEBUG MARKER: test_60 run 23230 - from kernel mode
      [774996.757618] Lustre: 132805:0:(llog_test.c:2232:llog_test_setup()) Setup llog-test device over MGS device
      [774996.771513] Lustre: 132805:0:(llog_test.c:113:llog_test_1()) 1a: create a log with name: 4b6f5788
      [774996.785122] Lustre: 132805:0:(llog_test.c:130:llog_test_1()) 1b: close newly-created log
      [774996.796944] Lustre: 132805:0:(llog_test.c:161:llog_test_2()) 2a: re-open a log with name: 4b6f5788
      [774996.809970] Lustre: 132805:0:(llog_test.c:181:llog_test_2()) 2b: create a log without specified NAME & LOGID
      [774996.823510] Lustre: 132805:0:(llog_test.c:199:llog_test_2()) 2b: write 1 llog records, check llh_count
      [774996.836659] Lustre: 132805:0:(llog_test.c:212:llog_test_2()) 2c: re-open the log by LOGID and verify llh_count
      [774996.851270] Lustre: 132805:0:(llog_test.c:259:llog_test_2()) 2d: destroy this log
      [774996.861852] Lustre: 132805:0:(llog_test.c:418:llog_test_3()) 3a: write 1023 fixed-size llog records
      [774996.909711] Lustre: 132805:0:(llog_test.c:383:llog_test3_process()) test3: processing records from index 501 to the end
      [774996.970709] Lustre: 132805:0:(llog_test.c:392:llog_test3_process()) test3: total 525 records processed with 0 paddings
      [774996.986871] Lustre: 132805:0:(llog_test.c:474:llog_test_3()) 3b: write 566 variable size llog records
      [774997.055273] Lustre: 132805:0:(llog_test.c:546:llog_test_3()) 3c: write records with variable size until BITMAP_SIZE, return -ENOSPC
      [774998.753409] Lustre: 132805:0:(llog_test.c:569:llog_test_3()) 3c: wrote 63962 more records before end of llog is reached
      [774998.766742] Lustre: 132805:0:(llog_test.c:598:llog_test_4()) 4a: create a catalog log with name: 4b6f5789
      [774998.778646] Lustre: 132805:0:(llog_test.c:613:llog_test_4()) 4b: write 1 record into the catalog
      [774998.790176] Lustre: 132805:0:(llog_test.c:640:llog_test_4()) 4c: cancel 1 log record
      [774998.800114] Lustre: 132805:0:(llog_test.c:652:llog_test_4()) 4d: write 6576 more log records
      [774999.179504] Lustre: 132805:0:(llog_test.c:668:llog_test_4()) 4e: add 5 large records, one record per block
      [774999.191875] Lustre: 132805:0:(llog_test.c:688:llog_test_4()) 4f: put newly-created catalog
      [774999.202172] Lustre: 132805:0:(llog_test.c:786:llog_test_5()) 5a: re-open catalog by id
      [774999.212062] Lustre: 132805:0:(llog_test.c:799:llog_test_5()) 5b: print the catalog entries.. we expect 2
      [774999.223857] Lustre: 132811:0:(llog_test.c:717:cat_print_cb()) seeing record at index 1 - [0x1:0x56b:0x0] in log [0xa:0xf:0x0]
      [774999.241515] Lustre: 132805:0:(llog_test.c:811:llog_test_5()) 5c: Cancel 6576 records, see one log zapped
      [774999.466969] Lustre: 132805:0:(llog_test.c:819:llog_test_5()) 5c: print the catalog entries.. we expect 1
      [774999.478675] Lustre: 132805:0:(llog_test.c:831:llog_test_5()) 5d: add 1 record to the log with many canceled empty pages
      [774999.491927] Lustre: 132805:0:(llog_test.c:839:llog_test_5()) 5e: print plain log entries.. expect 6
      [774999.503260] Lustre: 132805:0:(llog_test.c:851:llog_test_5()) 5f: print plain log entries reversely.. expect 6
      [774999.515926] Lustre: 132805:0:(llog_test.c:865:llog_test_5()) 5g: close re-opened catalog
      [774999.525923] Lustre: 132805:0:(llog_test.c:895:llog_test_6()) 6a: re-open log 4b6f5788 using client API
      [774999.537734] Lustre: MGS: non-config logname received: 4b6f5788
      [774999.548062] Lustre: 132805:0:(llog_test.c:927:llog_test_6()) 6b: process log 4b6f5788 using client API
      [775000.685225] Lustre: 132805:0:(llog_test.c:931:llog_test_6()) 6b: processed 63962 records
      [775000.695582] Lustre: 132805:0:(llog_test.c:938:llog_test_6()) 6c: process log 4b6f5788 reversely using client API
      [775002.107695] Lustre: 132805:0:(llog_test.c:942:llog_test_6()) 6c: processed 63962 records
      [775002.119167] Lustre: 132805:0:(llog_test.c:1090:llog_test_7()) 7a: test llog_logid_rec
      [775006.005750] Lustre: 132805:0:(llog_test.c:1101:llog_test_7()) 7b: test llog_unlink64_rec
      [775009.911972] Lustre: 132805:0:(llog_test.c:1112:llog_test_7()) 7c: test llog_setattr64_rec
      [775013.862265] Lustre: 132805:0:(llog_test.c:1123:llog_test_7()) 7d: test llog_size_change_rec
      [775017.696184] Lustre: 132805:0:(llog_test.c:1134:llog_test_7()) 7e: test llog_changelog_rec
      [775019.763371] Lustre: 132805:0:(llog_test.c:1040:llog_test_7_sub()) 7_sub: records are not aligned, written 64071 from 64767
      [775022.198423] Lustre: 132805:0:(llog_test.c:1146:llog_test_7()) 7f: test llog_changelog_user_rec
      [775023.579424] Lustre: 132805:0:(llog_test.c:1040:llog_test_7_sub()) 7_sub: records are not aligned, written 64452 from 64767
      [775026.115746] Lustre: 132805:0:(llog_test.c:1157:llog_test_7()) 7g: test llog_gen_rec
      [775030.289756] Lustre: 132805:0:(llog_test.c:1168:llog_test_7()) 7h: test llog_setattr64_rec_v2
      [775031.952686] Lustre: 132805:0:(llog_test.c:1040:llog_test_7_sub()) 7_sub: records are not aligned, written 64071 from 64767
      [775034.136714] Lustre: 132805:0:(llog_test.c:1208:llog_test_8()) 8a: fill the first plain llog
      [775034.152837] Lustre: 132805:0:(llog_test.c:1241:llog_test_8()) 8a: pin llog [0x1:0x578:0x0]
      [775034.162830] Lustre: 132805:0:(llog_test.c:1251:llog_test_8()) 8b: fill the second plain llog
      [775034.178517] Lustre: 132805:0:(llog_test.c:1272:llog_test_8()) 8b: second llog [0x1:0x57a:0x0]
      [775034.188769] Lustre: 132805:0:(llog_test.c:1287:llog_test_8()) 8d: count survived records
      [775034.199221] Lustre: 132805:0:(llog_test.c:1314:llog_test_8()) 8d: close re-opened catalog
      [775034.209089] Lustre: 132805:0:(llog_test.c:1377:llog_test_9()) 9a: test llog_logid_rec
      [775034.218647] Lustre: 132805:0:(llog_test.c:1361:llog_test_9_sub()) 9_sub: record type 1064553b in log 0x1:0x57c:0x0
      [775034.230926] Lustre: 132805:0:(llog_test.c:1388:llog_test_9()) 9b: test llog_obd_cfg_rec
      [775034.240688] Lustre: 132805:0:(llog_test.c:1399:llog_test_9()) 9c: test llog_changelog_rec
      [775034.250632] Lustre: 132805:0:(llog_test.c:1411:llog_test_9()) 9d: test llog_changelog_user_rec
      [775034.261058] Lustre: 132805:0:(llog_test.c:1511:llog_test_10()) 10a: create a catalog log with name: 4b6f578a
      [775034.439083] Lustre: 132805:0:(llog_test.c:1541:llog_test_10()) 10b: write 6576 log records
      [775034.919484] Lustre: 132805:0:(llog_test.c:1567:llog_test_10()) 10c: write 13152 more log records
      [775035.720014] Lustre: 132805:0:(llog_test.c:1599:llog_test_10()) 10c: write 6576 more log records
      [775036.038545] Lustre: 132805:0:(llog_cat.c:107:llog_cat_new_log()) MGS: there are no more free slots in catalog 4b6f578a
      [775036.170346] Lustre: 132805:0:(llog_test.c:1626:llog_test_10()) 10c: wrote 5412 records then 1164 failed with ENOSPC
      [775036.183098] Lustre: 132805:0:(llog_test.c:1645:llog_test_10()) 10d: Cancel 6576 records, see one log zapped
      [775036.401747] Lustre: 132805:0:(llog_test.c:1659:llog_test_10()) 10d: print the catalog entries.. we expect 3
      [775036.413779] Lustre: 132825:0:(llog_test.c:717:cat_print_cb()) seeing record at index 2 - [0x1:0x581:0x0] in log [0xa:0x10:0x0]
      [775036.429925] Lustre: 132825:0:(llog_test.c:717:cat_print_cb()) Skipped 2 previous similar messages
      [775036.512271] Lustre: 132805:0:(llog_test.c:1689:llog_test_10()) 10e: write 6576 more log records
      [775037.059750] Lustre: 132805:0:(llog_cat.c:107:llog_cat_new_log()) MGS: there are no more free slots in catalog 4b6f578a
      [775037.072968] Lustre: 132805:0:(llog_cat.c:107:llog_cat_new_log()) Skipped 1163 previous similar messages
      [775037.155600] Lustre: 132805:0:(llog_test.c:1716:llog_test_10()) 10e: wrote 6126 records then 450 failed with ENOSPC
      [775037.155602] Lustre: 132805:0:(llog_test.c:1718:llog_test_10()) 10e: print the catalog entries.. we expect 4
      [775037.155605] Lustre: 132805:0:(llog_cat.c:912:llog_cat_process_or_fork()) MGS: catlog [0x10:0xa:0x0] crosses index zero
      [775037.155917] Lustre: 132805:0:(llog_test.c:1755:llog_test_10()) 10e: catalog successfully wrap around, last_idx 1, first 1
      [775037.219881] Lustre: 132805:0:(llog_test.c:1772:llog_test_10()) 10f: Cancel 6576 records, see one log zapped
      [775037.577808] Lustre: 132805:0:(llog_test.c:1786:llog_test_10()) 10f: print the catalog entries.. we expect 3
      [775037.589954] Lustre: 132805:0:(llog_test.c:717:cat_print_cb()) seeing record at index 3 - [0x1:0x582:0x0] in log [0xa:0x10:0x0]
      [775037.604653] Lustre: 132805:0:(llog_test.c:717:cat_print_cb()) Skipped 6 previous similar messages
      [775037.678510] Lustre: 132805:0:(llog_test.c:1817:llog_test_10()) 10f: write 6576 more log records
      [775038.225026] Lustre: 132805:0:(llog_cat.c:107:llog_cat_new_log()) MGS: there are no more free slots in catalog 4b6f578a
      [775038.238250] Lustre: 132805:0:(llog_cat.c:107:llog_cat_new_log()) Skipped 449 previous similar messages
      [775038.320788] Lustre: 132805:0:(llog_test.c:1844:llog_test_10()) 10f: wrote 6128 records then 448 failed with ENOSPC
      [775038.395270] Lustre: 132805:0:(llog_test.c:1891:llog_test_10()) 10g: Cancel 6576 records, see one log zapped
      [775038.407482] Lustre: 132805:0:(llog_cat.c:912:llog_cat_process_or_fork()) MGS: catlog [0x10:0xa:0x0] crosses index zero
      [775038.421514] Lustre: 132805:0:(llog_cat.c:912:llog_cat_process_or_fork()) Skipped 2 previous similar messages
      [775038.779843] Lustre: 132805:0:(llog_test.c:1903:llog_test_10()) 10g: print the catalog entries.. we expect 3
      [775038.853643] Lustre: 132805:0:(llog_test.c:1933:llog_test_10()) 10g: Cancel 6576 records, see one log zapped
      [775039.264871] Lustre: 132805:0:(llog_test.c:1947:llog_test_10()) 10g: print the catalog entries.. we expect 2
      [775039.346343] Lustre: 132805:0:(llog_test.c:1985:llog_test_10()) 10g: Cancel 6576 records, see one log zapped
      [775039.971483] Lustre: 132805:0:(llog_test.c:1999:llog_test_10()) 10g: print the catalog entries.. we expect 1
      [775039.983666] Lustre: 132805:0:(llog_test.c:717:cat_print_cb()) seeing record at index 2 - [0x1:0xbd4:0x0] in log [0xa:0x10:0x0]
      [775039.998529] Lustre: 132805:0:(llog_test.c:717:cat_print_cb()) Skipped 7 previous similar messages
      [775040.009858] Lustre: 132805:0:(llog_test.c:2025:llog_test_10()) 10g: llh_cat_idx has also successfully wrapped!
      [775040.022558] Lustre: 132827:0:(llog_test.c:1471:cat_check_old_cb()) seeing record at index 2 - [0x1:0xbd4:0x0] in log [0xa:0x10:0x0]
      [775040.522484] Lustre: 132805:0:(llog_test.c:2049:llog_test_10()) 10h: write 6576 more log records
      [775040.533959] LustreError: 132805:0:(libcfs_fail.h:169:cfs_race()) cfs_race id 1317 sleeping
      [775041.040490] LustreError: 132827:0:(libcfs_fail.h:174:cfs_race()) cfs_fail_race id 1317 waking
      [775041.052429] LustreError: 132805:0:(libcfs_fail.h:172:cfs_race()) cfs_fail_race id 1317 awake: rc=0
      [775042.065547] LustreError: 132827:0:(libcfs_fail.h:174:cfs_race()) cfs_fail_race id 1317 waking
      [775042.077455] Lustre: 132827:0:(llog_test.c:1471:cat_check_old_cb()) seeing record at index 3 - [0x1:0xd95:0x0] in log [0xa:0x10:0x0]
      [775042.620337] LustreError: 132805:0:(libcfs_fail.h:174:cfs_race()) cfs_fail_race id 1317 waking
      [775043.665049] Lustre: 132805:0:(llog_test.c:2076:llog_test_10()) 10h: wrote 6576 records then 0 failed with ENOSPC
      [775043.678591] Lustre: 132805:0:(llog_test.c:2089:llog_test_10()) 10: put newly-created catalog
      [775048.485922] Lustre: Failing over lustre-MDT0000
      [775048.826535] Lustre: lustre-MDT0000-mdc-ffff93e1334f8800: Connection to lustre-MDT0000 (at 192.168.1.43@o2ib) was lost; in progress operations using this service will wait for recovery to complete
      [775048.828042] Lustre: lustre-MDT0000: Not available for connect from 192.168.1.43@o2ib (stopping)
      [775048.828044] Lustre: Skipped 1 previous similar message
      [775048.870755] Lustre: Skipped 5 previous similar messages
      [775051.025623] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.1.44@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
      [775051.050217] LustreError: Skipped 8 previous similar messages
      [775053.833343] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.1.43@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
      [775053.858005] LustreError: Skipped 4 previous similar messages
      [775055.573506] Lustre: 133018:0:(client.c:2219:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1572101222/real 1572101222]  req@ffff93e6db225e80 x1647732533300288/t0(0) o251->MGC192.168.1.43@o2ib@192.168.1.43@o2ib:26/25 lens 224/224 e 0 to 1 dl 1572101228 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0'
      [775056.244805] Lustre: server umount lustre-MDT0000 complete
      [775056.252338] Lustre: Skipped 1 previous similar message
      [775058.228240] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.1.44@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
      [775058.253082] LustreError: Skipped 1 previous similar message
      [775060.832493] Lustre: 96774:0:(client.c:2219:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1572101227/real 1572101227]  req@ffff93efd3d01b00 x1647732533300480/t0(0) o400->MGC10.8.1.43@tcp@192.168.1.43@o2ib:26/25 lens 224/224 e 0 to 1 dl 1572101234 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/67:1.0'
      [775060.873760] LustreError: 166-1: MGC10.8.1.43@tcp: Connection to MGS (at 192.168.1.43@o2ib) was lost; in progress operations using this service will fail
      [775060.998268] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
      [775068.014018] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
      [775068.407230] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.1.44@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
      [775068.432587] LustreError: Skipped 16 previous similar messages
      [775068.435250] Lustre: osd-ldiskfs create tunables for lustre-MDT0000
      [775068.439936] Lustre: MGS: Connection restored to 192.168.1.43@o2ib (at 192.168.1.43@o2ib)
      [775068.440811] Lustre: Evicted from MGS (at 192.168.1.43@o2ib) after server handle changed from 0xe81233dce4975310 to 0xe81233dce599169d
      [775068.864860] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
      [775068.943806] Lustre: lustre-MDD0000: changelog on
      [775068.955439] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
      [775070.702608] Lustre: DEBUG MARKER: wolf-43.wolf.hpdd.intel.com: executing set_default_debug -1 all 144
      [775073.526974] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 3 clients reconnect
      [775073.540410] Lustre: Skipped 1 previous similar message
      [775073.541294] LustreError: 96713:0:(import.c:1308:ptlrpc_connect_interpret()) lustre-MDT0000_UUID: went back in time (transno 77309508847 was previously committed, server now claims 77309508809)!
      [775073.545670] LustreError: 96713:0:(mdc_request.c:687:mdc_replay_open()) @@@ cannot properly replay without open data  req@ffff93ef53ea7080 x1647732385018048/t77309464729(77309464729) o101->lustre-MDT0000
      -mdc-ffff93e1334f8800@192.168.1.43@o2ib:12/10 lens 616/600 e 0 to 0 dl 1572101253 ref 2 fl Interpret:RPQU/4/0 rc 301/301 job:'lfs.0'
      [775073.545686] LustreError: 96713:0:(mdc_request.c:687:mdc_replay_open()) Skipped 427 previous similar messages
      [775073.782535] LustreError: 83054:0:(osd_oi.c:760:osd_oi_insert()) dm-0: the FID [0x200009871:0x2753:0x0] is used by two objects: 258/618431990 17250/3465196012
      [775073.803750] LustreError: 83054:0:(tgt_lastrcvd.c:1273:tgt_last_rcvd_update()) lustre-MDT0000: replay transno 85899376118 failed: rc = -17
      [775073.822683] LustreError: 83054:0:(mdt_open.c:312:mdt_prep_ma_buf_from_rep()) ASSERTION( ma->ma_lmv == ((void *)0) && ma->ma_lmm == ((void *)0) ) failed:
      [775073.842941] LustreError: 83054:0:(mdt_open.c:312:mdt_prep_ma_buf_from_rep()) LBUG
      [775073.853655] Pid: 83054, comm: mdt00_013 3.10.0-862.14.4.el7_lustre_ClientSymlink_279c264.x86_64 #1 SMP Thu Oct 17 10:54:24 UTC 2019
      [775073.871178] Call Trace:
      [775073.875992]  [<ffffffffc0dfd8ac>] libcfs_call_trace+0x8c/0xc0 [libcfs]
      [775073.885380]  [<ffffffffc0dfd95c>] lbug_with_loc+0x4c/0xa0 [libcfs]
      [775073.894253]  [<ffffffffc1741027>] mdt_prep_ma_buf_from_rep.isra.34+0xe7/0xf0 [mdt]
      [775073.904613]  [<ffffffffc1749d7d>] mdt_reint_open+0x22bd/0x3280 [mdt]
      [775073.913475]  [<ffffffffc173c883>] mdt_reint_rec+0x83/0x210 [mdt]
      [775073.921922]  [<ffffffffc1716920>] mdt_reint_internal+0x7b0/0xba0 [mdt]
      [775073.930845]  [<ffffffffc1723362>] mdt_intent_open+0x82/0x3a0 [mdt]
      [775073.939329]  [<ffffffffc1719c3a>] mdt_intent_opc+0x1ba/0xb40 [mdt]
      [775073.947781]  [<ffffffffc1721c04>] mdt_intent_policy+0x1a4/0x360 [mdt]
      [775073.956497]  [<ffffffffc10eee16>] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc]
      [775073.965548]  [<ffffffffc11173b6>] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc]
      [775073.974941]  [<ffffffffc11a0d92>] tgt_enqueue+0x62/0x210 [ptlrpc]
      [775073.983257]  [<ffffffffc11a4d4a>] tgt_request_handle+0x97a/0x1620 [ptlrpc]
      [775073.992419]  [<ffffffffc114b936>] ptlrpc_server_handle_request+0x256/0xb10 [ptlrpc]
      [775074.002425]  [<ffffffffc114f46c>] ptlrpc_main+0xbac/0x1540 [ptlrpc]
      [775074.010850]  [<ffffffffba0bdf21>] kthread+0xd1/0xe0
      [775074.017678]  [<ffffffffba7255f7>] ret_from_fork_nospec_end+0x0/0x39
      [775074.026054]  [<ffffffffffffffff>] 0xffffffffffffffff
      [775074.032984] Kernel panic - not syncing: LBUG
      

      My first crash-dump analysis findings indicate that this LBUG occurred during the replay of the open(O_LOV_DELAY_CREATE) for the single file "$DIR/f24u.sanity" from sanity/test_24u, during the MGS (and thus MDS, since they are combined in my configuration) stop/start sequence at the end of sanity/test_60a.

      It is still unclear to me why this old file open replay was attempted, but more debugging indicates that there is a path through mdt_reint_open()/mdt_open_by_fid()/mdt_finish_open()/mdt_mfd_open()/mdd_open()/mdd_open_sanity_check() where mdt_prep_ma_buf_from_rep() can be called twice: first from mdt_open_by_fid() (which is called by mdt_reint_open() because this is a replay), and then again from mdt_reint_open() because ENOENT is returned by the mdt_finish_open()/mdt_mfd_open()/mdd_open()/mdd_open_sanity_check() call sequence, as a consequence of the mdt_object existing while the mdd_object is dead. A reduced model of this double call is sketched below.
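
      To make the failure mode concrete, here is a reduced, self-contained model of that call sequence. This is illustrative only, not the actual Lustre source: struct md_attr is cut down to the two buffer pointers the LASSERT checks, and the helpers are simplified stand-ins for the mdt_open.c routines named above.

      #include <assert.h>
      #include <errno.h>
      #include <stdlib.h>

      struct md_attr {
              void *ma_lmv;   /* striping buffer for directories */
              void *ma_lmm;   /* striping buffer for regular files */
      };

      /* stand-in for mdt_prep_ma_buf_from_rep(): expects to run once */
      static void prep_ma_buf_from_rep(struct md_attr *ma)
      {
              assert(ma->ma_lmv == NULL && ma->ma_lmm == NULL); /* the LBUG */
              ma->ma_lmm = malloc(64); /* stand-in for the reply buffer */
      }

      /* stand-in for the mdt_open_by_fid() replay path */
      static int open_by_fid(struct md_attr *ma, int mdd_object_dead)
      {
              prep_ma_buf_from_rep(ma);               /* first call */
              /* mdd_open_sanity_check() fails: mdd_object is dead */
              return mdd_object_dead ? -ENOENT : 0;
      }

      /* stand-in for mdt_reint_open() handling a replayed open */
      static void reint_open(struct md_attr *ma)
      {
              if (open_by_fid(ma, /* mdd_object_dead = */ 1) == -ENOENT)
                      /* fall back to the regular open path, which prepares
                       * the buffers a second time and trips the assertion */
                      prep_ma_buf_from_rep(ma);       /* second call -> abort */
      }

      int main(void)
      {
              struct md_attr ma = { NULL, NULL };
              reint_open(&ma); /* aborts here, mirroring the server LBUG */
              return 0;
      }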

      At first I thought this could be a side effect of my own patch, but it does not modify any of the involved code/routines, and there has already been at least one previous occurrence, as per LU-9363!

      So I wonder whether this LASSERT(ma->ma_lmv == NULL && ma->ma_lmm == NULL) in mdt_prep_ma_buf_from_rep() is really necessary, or whether we should instead just return and do nothing when the buffers have already been prepared ...
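
      For illustration, and continuing the reduced model above, the relaxed variant would just detect the already-prepared buffers and bail out. This is only a sketch of the idea; the actual patch may end up different:

      /* relaxed stand-in, reusing struct md_attr from the model above:
       * return instead of asserting when an earlier call (e.g. from
       * mdt_open_by_fid()) already prepared the buffers */
      static void prep_ma_buf_from_rep_relaxed(struct md_attr *ma)
      {
              if (ma->ma_lmv != NULL || ma->ma_lmm != NULL)
                      return;          /* already prepared: nothing to do */
              ma->ma_lmm = malloc(64); /* stand-in for the reply buffer */
      }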

      I will try to push a patch soon.

            People

              Assignee: Bruno Faccini (Inactive)
              Reporter: Bruno Faccini (Inactive)
