Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Fix Version/s: Lustre 2.8.0
    • Affects Version/s: Lustre 2.7.0, Lustre 2.8.0
    • Severity: 3

    Description

      TEI-3148 brought out a problem with sanity-hsm test 500 causing a memory leak. sanity-hsm test 500 calls the tests in llapi_hsm_test.c, and test #2 of llapi_hsm_test.c in particular causes, or brings out, the memory leak. Interestingly, for ZFS the memory leak prevents Lustre from being shut down and the test times out after an hour; for ldiskfs runs this is not the case and the tests shut down cleanly. In all cases, no error is detected and the test status is PASS. This ticket is to report the memory leak caused by llapi_hsm_test test #2.
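
      To reproduce, it should be enough to run only test 500 through the standard test framework and then check the console log for the obd_memory message shown in the excerpts below. A minimal sketch, assuming an already-configured test cluster with the test scripts installed under /usr/lib64/lustre/tests:

      # run only sanity-hsm test 500, which exercises llapi_hsm_test
      cd /usr/lib64/lustre/tests
      ONLY=500 bash sanity-hsm.sh

      # after the servers are unmounted and obdclass is cleaned up,
      # check the console log for the leak report
      dmesg | grep 'obd_memory max'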

      I ran a series of tests to pinpoint which test was causing the leak. For the last test, I hard-coded test 500 to fail in the hope that dumping the logs would tell us something, but a quick look over the logs didn't show me anything obviously wrong (a sketch of this forced-failure approach follows the log excerpt below). The test patch is at http://review.whamcloud.com/#/c/14337/ and the latest results that this patch produced, running only sanity-hsm test 500 with subtest #2, are at https://testing.hpdd.intel.com/test_sessions/cff1ffc8-e91e-11e4-9e6e-5254006e85c2. From the end of the suite_stdout log, we see:

      17:09:46:Stopping /mnt/ost2 (opts:-f) on shadow-25vm3
      17:09:46:CMD: shadow-25vm3 umount -d -f /mnt/ost2
      17:09:57:CMD: shadow-25vm3 lsmod | grep lnet > /dev/null && lctl dl | grep ' ST '
      17:09:57:CMD: shadow-25vm3 ! zpool list -H lustre-ost2 >/dev/null 2>&1 ||
      17:09:57:			grep -q ^lustre-ost2/ /proc/mounts ||
      17:09:57:			zpool export  lustre-ost2
      17:09:57:0022
      17:09:57:CMD: shadow-25vm2.shadow.whamcloud.com lsmod | grep lnet > /dev/null && lctl dl | grep ' ST '
      17:09:57:LustreError: 10994:0:(class_obd.c:680:cleanup_obdclass()) obd_memory max: 3236953, leaked: 48
      17:09:57:
      17:09:57:Memory leaks detected
      18:10:18:********** Timeout by autotest system **********
      

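      For reference, the forced-failure trick mentioned above only needs a one-line change. A minimal sketch (using the standard error() helper from test-framework.sh, not the exact change in the test patch) is:

      # at the end of test_500() in sanity-hsm.sh, after the existing
      # llapi_hsm_test invocation, fail the test unconditionally so that
      # autotest collects and dumps the logs:
      error "forced failure to trigger log collection"   # temporary, remove after debugging
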
      For a review-dne-part-2 master failure, results are at https://testing.hpdd.intel.com/test_sets/b89ca232-e8d4-11e4-9e6e-5254006e85c2. From the suite_stdout log:

      07:17:53:Stopping /mnt/ost8 (opts:-f) on shadow-48vm4
      07:17:53:CMD: shadow-48vm4 umount -d -f /mnt/ost8
      07:17:53:CMD: shadow-48vm4 lsmod | grep lnet > /dev/null && lctl dl | grep ' ST '
      07:17:53:0022
      07:17:53:CMD: shadow-48vm6.shadow.whamcloud.com lsmod | grep lnet > /dev/null && lctl dl | grep ' ST '
      07:18:05:LustreError: 17219:0:(class_obd.c:680:cleanup_obdclass()) obd_memory max: 54130094, leaked: 48
      07:18:05:
      07:18:05:Memory leaks detected
      07:18:05:sanity-hsm returned 0
      07:18:05:running: ost-pools 
      07:18:05:run_suite ost-pools /usr/lib64/lustre/tests/ost-pools.sh
      

      In a b2_7 sanity-hsm run at https://testing.hpdd.intel.com/test_sets/0fca08aa-ca31-11e4-b9cd-5254006e85c2, the memory leak is reported twice in the suite_stdout logs:

      07:29:53:CMD: shadow-5vm11 umount -d -f /mnt/ost2
      07:30:04:CMD: shadow-5vm11 lsmod | grep lnet > /dev/null && lctl dl | grep ' ST '
      07:30:04:CMD: shadow-5vm11 ! zpool list -H lustre-ost2 >/dev/null 2>&1 ||
      07:30:04:			grep -q ^lustre-ost2/ /proc/mounts ||
      07:30:04:			zpool export  lustre-ost2
      07:30:04:0022
      07:30:04:CMD: shadow-5vm1.shadow.whamcloud.com lsmod | grep lnet > /dev/null && lctl dl | grep ' ST '
      07:30:16:LustreError: 32001:0:(class_obd.c:680:cleanup_obdclass()) obd_memory max: 23480928, leaked: 48
      07:30:16:
      07:30:16:Memory leaks detected
      07:30:16:sanity-hsm returned 0
      07:30:16:Stopping clients: shadow-5vm10,shadow-5vm1.shadow.whamcloud.com /mnt/lustre (opts:)
      07:30:16:CMD: shadow-5vm10,shadow-5vm1.shadow.whamcloud.com running=\$(grep -c /mnt/lustre' ' /proc/mounts);
      07:30:16:if [ \$running -ne 0 ] ; then
      07:30:16:echo Stopping client \$(hostname) /mnt/lustre opts:;
      07:30:16:lsof /mnt/lustre || need_kill=no;
      07:30:16:if [ x != x -a x\$need_kill != xno ]; then
      07:30:16:    pids=\$(lsof -t /mnt/lustre | sort -u);
      07:30:16:    if [ -n \"\$pids\" ]; then
      07:30:16:             kill -9 \$pids;
      07:30:16:    fi
      07:30:16:fi;
      07:30:16:while umount  /mnt/lustre 2>&1 | grep -q busy; do
      07:30:16:    echo /mnt/lustre is still busy, wait one second && sleep 1;
      07:30:16:done;
      07:30:16:fi
      07:30:16:Stopping clients: shadow-5vm10,shadow-5vm1.shadow.whamcloud.com /mnt/lustre2 (opts:)
      07:30:16:CMD: shadow-5vm10,shadow-5vm1.shadow.whamcloud.com running=\$(grep -c /mnt/lustre2' ' /proc/mounts);
      07:30:16:if [ \$running -ne 0 ] ; then
      07:30:16:echo Stopping client \$(hostname) /mnt/lustre2 opts:;
      07:30:16:lsof /mnt/lustre2 || need_kill=no;
      07:30:16:if [ x != x -a x\$need_kill != xno ]; then
      07:30:16:    pids=\$(lsof -t /mnt/lustre2 | sort -u);
      07:30:16:    if [ -n \"\$pids\" ]; then
      07:30:16:             kill -9 \$pids;
      07:30:16:    fi
      07:30:16:fi;
      07:30:16:while umount  /mnt/lustre2 2>&1 | grep -q busy; do
      07:30:16:    echo /mnt/lustre2 is still busy, wait one second && sleep 1;
      07:30:16:done;
      07:30:16:fi
      07:30:16:CMD: shadow-5vm12 grep -c /mnt/mds1' ' /proc/mounts
      07:30:16:CMD: shadow-5vm12 lsmod | grep lnet > /dev/null && lctl dl | grep ' ST '
      07:30:16:CMD: shadow-5vm12 ! zpool list -H lustre-mdt1 >/dev/null 2>&1 ||
      07:30:16:			grep -q ^lustre-mdt1/ /proc/mounts ||
      07:30:16:			zpool export  lustre-mdt1
      07:30:16:CMD: shadow-5vm11 grep -c /mnt/ost1' ' /proc/mounts
      07:30:16:CMD: shadow-5vm11 lsmod | grep lnet > /dev/null && lctl dl | grep ' ST '
      07:30:16:CMD: shadow-5vm11 ! zpool list -H lustre-ost1 >/dev/null 2>&1 ||
      07:30:16:			grep -q ^lustre-ost1/ /proc/mounts ||
      07:30:16:			zpool export  lustre-ost1
      07:30:16:CMD: shadow-5vm11 grep -c /mnt/ost2' ' /proc/mounts
      07:30:16:CMD: shadow-5vm11 lsmod | grep lnet > /dev/null && lctl dl | grep ' ST '
      07:30:16:CMD: shadow-5vm11 ! zpool list -H lustre-ost2 >/dev/null 2>&1 ||
      07:30:16:			grep -q ^lustre-ost2/ /proc/mounts ||
      07:30:16:			zpool export  lustre-ost2
      07:30:16:0022
      07:30:16:error: dl: No such file or directory opening /proc/fs/lustre/devices
      07:30:16:opening /dev/obd failed: No such device
      07:30:16:hint: the kernel modules may not be loaded
      07:30:16:Error getting device list: No such device: check dmesg.
      07:30:16:CMD: shadow-5vm1.shadow.whamcloud.com lsmod | grep lnet > /dev/null && lctl dl | grep ' ST '
      07:30:16:LustreError: 32001:0:(class_obd.c:680:cleanup_obdclass()) obd_memory max: 23480928, leaked: 48
      07:30:16:
      07:30:16:mv: cannot stat `/tmp/debug': No such file or directory
      07:30:16:Memory leaks detected
      

      This problem looks to date from the day test 500 was added to sanity-hsm. The first run of test 500 in Maloo with logs, on 2014-11-27, is at https://testing.hpdd.intel.com/test_sets/d1cdb39a-7610-11e4-ad19-5254006e85c2. This run exhibits the memory leak problem.

      Attachments

        Activity

          [LU-6485] sanity-hsm test 500 memory leak

          hongchao.zhang Hongchao Zhang added a comment -

          The patch against b2_7_fe is tracked at http://review.whamcloud.com/#/c/20083/

          pjones Peter Jones added a comment -

          Landed for 2.8

          gerrit Gerrit Updater added a comment -

          Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/14638/
          Subject: LU-6485 libcfs: embed kr_data into kkuc_reg
          Project: fs/lustre-release
          Branch: master
          Current Patch Set:
          Commit: 7f67aa42f9123caef3cee714f1e2cee3c6848892

          gerrit Gerrit Updater added a comment -

          Hongchao Zhang (hongchao.zhang@intel.com) uploaded a new patch: http://review.whamcloud.com/14638
          Subject: LU-6485 libcfs: distinguish kernelcomm by serial
          Project: fs/lustre-release
          Branch: master
          Current Patch Set: 1
          Commit: 790de2ae7690fe5ae6c71813d5160713242c907c

          hongchao.zhang Hongchao Zhang added a comment -

          Status update: the cause of the memory leak is clear, and the patch is under way.

          pjones Peter Jones added a comment -

          Hongchao

          Could you please look into this issue?

          Thanks

          Peter

          adilger Andreas Dilger added a comment -

          I was just going to comment that debugging memory leaks in Lustre is fairly straightforward (at least until the upstream kernel forces us to rip out all the OBD_ALLOC*() macros). Install the libcfs module, then turn on allocation tracing and increase the debug buffer size with lctl set_param debug=+malloc debug_mb=500 (or use debug_daemon if the test is long and generates a lot of logs). Then run the problematic test and clean up, and dump the kernel debug log before removing the libcfs module (where kernel debug logs are handled). The test framework does this automatically: lustre_rmmod dumps the $TMP/debug file just before libcfs is removed.

          Once you have a debug log, run perl lustre/tests/leak_finder.pl <debug_log> and grep for Leak: messages. If you don't do a full mount/unmount around your test, you may get some spurious lines from leak_finder.pl, which can be excluded based on the allocation size.
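
          Condensed into commands, a sketch of that procedure might look like this (the buffer size and file names are only examples):

          # enable allocation tracing and enlarge the debug buffer
          lctl set_param debug=+malloc debug_mb=500

          # ... run the problematic test (e.g. sanity-hsm test 500) and clean up ...

          # dump the kernel debug log before the modules are unloaded
          lctl dk > /tmp/debug

          # report unbalanced allocations
          perl lustre/tests/leak_finder.pl /tmp/debug 2>&1 | grep "Leak:"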

          If there isn't already a wiki page that describes this process, it might be a good idea to create one?

          jhammond John Hammond added a comment -

          James, this is probably a struct kkuc_ct_data (it has the right size and smell). In mdc_precleanup() we call libcfs_kkuc_group_rem(0, KUC_GRP_HSM, NULL) which will leak any registered kcds. We should try to align this with what we do in lmv_hsm_ct_unregister(). But I'm not sure this would be completely correct.

          People

            Assignee: hongchao.zhang Hongchao Zhang
            Reporter: jamesanunez James Nunez (Inactive)
            Votes: 0
            Watchers: 7
