LU-18049

ost-pools test_25, sanity-sec test_31: crash in ext4_htree_store_dirent kmalloc

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Blocker
    • Affects Version: Lustre 2.16.0
    • Fix Version: Lustre 2.16.0
    • Severity: 3

    Description

      This issue was created by maloo for Andreas Dilger <adilger@whamcloud.com>

      This issue relates to the following test suite run:
      https://testing.whamcloud.com/test_sets/3576290d-064a-4573-a087-75b59fff6df7

      test_25 failed with the following error:

      trevis-106vm10, trevis-106vm11 crashed during ost-pools test_25
      

      Test session details:
      clients: https://build.whamcloud.com/job/lustre-master/4542 - 5.14.0-362.24.1.el9_3.x86_64
      servers: https://build.whamcloud.com/job/lustre-b_es6_0/666 - 4.18.0-477.27.1.el8_lustre.ddn17.x86_64

      Two clients both crashed in ext4_htree_store_dirent() (NOT ldiskfs) in kmalloc, so it looks like some kind of client-side memory corruption?

      [27299.419062] Lustre: MGC10.240.44.44@tcp: Connection restored to  (at 10.240.44.44@tcp)
      [27299.448580] LustreError: 886364:0:(client.c:3288:ptlrpc_replay_interpret()) @@@ status 301, old was 0  req@ffff902e1402e3c0 x1802432488249280/t691489734702(691489734702) o101->lustre-MDT0000-mdc-ffff902e04449800@10.240.44.44@tcp:12/10 lens 520/608 e 0 to 0 dl 1718935667 ref 2 fl Interpret:RPQU/204/0 rc 301/301 job:'lfs.0' uid:0 gid:0
      [27300.013294] Lustre: lustre-MDT0000-mdc-ffff902e04449800: Connection restored to  (at 10.240.44.44@tcp)
      [27305.358931] Lustre: 886365:0:(client.c:2334:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1718935641/real 1718935641]  req@ffff902e547d9380 x1802432536672704/t0(0) o400->lustre-MDT0000-mdc-ffff902e04449800@10.240.44.44@tcp:12/10 lens 224/224 e 0 to 1 dl 1718935657 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
      [27306.104959] BUG: unable to handle page fault for address: ffff902ee5338778
      [27306.105697] #PF: supervisor read access in kernel mode
      [27306.106204] #PF: error_code(0x0000) - not-present page
      [27306.107607] CPU: 1 PID: 1109213 Comm: bash Kdump: loaded Tainted: G           OE     -------  ---  5.14.0-362.24.1.el9_3.x86_64 #1
      [27306.108653] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
      [27306.109209] RIP: 0010:__kmalloc+0x11b/0x370
      [27306.118475] Call Trace:
      [27306.118756]  <TASK>
      [27306.120625]  ? __die_body.cold+0x8/0xd
      [27306.121006]  ? page_fault_oops+0x134/0x170
      [27306.121437]  ? kernelmode_fixup_or_oops+0x84/0x110
      [27306.121944]  ? exc_page_fault+0xa8/0x150
      [27306.122371]  ? asm_exc_page_fault+0x22/0x30
      [27306.122806]  ? ext4_htree_store_dirent+0x36/0x100 [ext4]
      [27306.123359]  ? __kmalloc+0x11b/0x370
      [27306.123740]  ext4_htree_store_dirent+0x36/0x100 [ext4]
      [27306.124269]  htree_dirblock_to_tree+0x1ab/0x310 [ext4]
      [27306.124809]  ext4_htree_fill_tree+0x203/0x3b0 [ext4]
      [27306.125333]  ext4_dx_readdir+0x10d/0x360 [ext4]
      [27306.125817]  ext4_readdir+0x392/0x550 [ext4]
      [27306.126275]  iterate_dir+0x17c/0x1c0
      [27306.126711]  __x64_sys_getdents64+0x80/0x120
      [27306.128187]  do_syscall_64+0x5c/0x90
      

      VVVVVVV DO NOT REMOVE LINES BELOW, Added by Maloo for auto-association VVVVVVV
      ost-pools test_25 - trevis-106vm10, trevis-106vm11 crashed during ost-pools test_25


          Activity

            [LU-18049] ost-pools test_25, sanity-sec test_31: crash in ext4_htree_store_dirent kmalloc

            "James Simmons <jsimmons@infradead.org>" uploaded a new patch: https://review.whamcloud.com/c/fs/lustre-release/+/56493
            Subject: LU-18049 mgc: clear nidlist pointer
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: bc9cf03d2a2d96d862b8ff0686eb4fd5d796b0b9

            gerrit Gerrit Updater added a comment - "James Simmons <jsimmons@infradead.org>" uploaded a new patch: https://review.whamcloud.com/c/fs/lustre-release/+/56493 Subject: LU-18049 mgc: clear nidlist pointer Project: fs/lustre-release Branch: master Current Patch Set: 1 Commit: bc9cf03d2a2d96d862b8ff0686eb4fd5d796b0b9

            adilger Andreas Dilger added a comment -

            The 2.15 client is receiving mgs_nidtbl_entry with large NIDs (20 bytes) instead of the older 64-bit NID values, so it is overflowing the memory. Old clients expect 64-bit NID values only.

            simmonsja, this is the opposite case. This is a new client (patch v2_15_58-45-ge4d2d4ff74 and later) running with 2.14 or 2.15 servers, so the older servers should definitely not be sending large NIDs (nor could they even understand IPv6 NIDs).
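            For context, a rough sketch of the two on-the-wire NID formats being discussed (paraphrased from the LNet headers; treat the exact field layout as an assumption for illustration):

            /* Old wire format: a NID is a single 64-bit value. A 2.15 client
             * sizes its reply buffers for an array of these. */
            typedef __u64 lnet_nid_t;

            /* New "large NID" format from the IPv6 work: 20 bytes on the wire
             * (struct lnet_nid in the LNet uapi headers). If the MGS packs
             * these into a reply that the client parses as 64-bit NIDs, the
             * client walks past the end of its allocation -- consistent with
             * the kmalloc-area corruption in the traces above. */
            struct lnet_nid {
                    __u8    nid_size;       /* extra address bytes (see headers) */
                    __u8    nid_type;       /* network type (tcp, o2ib, ...) */
                    __be16  nid_num;        /* network number */
                    __be32  nid_addr[4];    /* address, large enough for IPv6 */
            } __attribute__((packed));      /* 20 bytes total */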

            simmonsja James A Simmons added a comment -

            I believe I know what is happening. The 2.15 client is receiving mgs_nidtbl_entry with large NIDs (20 bytes) instead of the older 64-bit NID values, so it is overflowing the memory. Old clients expect 64-bit NID values only. The question is why the MGS is sending the large NID to the client mgc.

            In the master branch, mgc_process_recovery_log() fills in struct mgs_config_body (sketched below) to send to the MGS, and we get back struct mgs_config_res, which contains the NID info. When creating struct mgs_config_body we set the field mcb_rec_nid_size to the NID size, which tells the MGS to send large NIDs. The idea for older clients was that mcb_rec_nid_size was originally mcb_nm_cur_pass, which should only be set for nodemap logs, not recovery logs; for recovery logs it should be zero, which tells a newer MGS to send 64-bit NIDs instead of large NIDs. Also, in master, recovery logs and nodemap logs are handled separately. That is not the case for 2.15 clients. I suspect mcb_nm_cur_pass is being set for recovery logs as well on 2.15 clients, so an MGS running master sees mcb_nm_cur_pass for a recovery log as a signal to use large NIDs. Sergey, can you confirm this is what is happening?
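            For reference, a minimal sketch of the request body mentioned above, assuming the layout in lustre_idl.h (check the actual headers; the key point is that master renamed the byte that 2.15 uses as mcb_nm_cur_pass):

            /* MGC->MGS request body. The same byte is interpreted differently
             * on the two sides of the wire:
             *   2.15 client: mcb_nm_cur_pass  (nodemap log iteration pass)
             *   master MGS:  mcb_rec_nid_size (0 = send 64-bit NIDs,
             *                sizeof(struct lnet_nid) = send large NIDs) */
            struct mgs_config_body {
                    char    mcb_name[64];    /* logname (MTI_NAME_MAXLEN) */
                    __u64   mcb_offset;      /* next config log index to request */
                    __u16   mcb_type;        /* log type: CONFIG_T_[CONFIG|RECOVER] */
                    __u8    mcb_nm_cur_pass; /* reused as mcb_rec_nid_size in master */
                    __u8    mcb_bits;        /* bits unit size of config log */
                    __u32   mcb_units;       /* # of units for bulk transfer */
            };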

            If this is what is happening then, as a workaround, the MGS should check mcb_nm_cur_pass == sizeof(struct lnet_nid), and treat any other value as zero (see the sketch below). That makes the bug less likely but is not a 100% guarantee. If mcb_nm_cur_pass is being set for the recovery log case on the client, we need a patch to always set it to zero there; that is the proper fix.
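            A minimal sketch of that suggested MGS-side workaround, under the assumptions above (the helper name is hypothetical, not the actual patch):

            /* Hypothetical guard on the MGS when parsing mgs_config_body:
             * only a value that exactly matches the large NID size can be a
             * deliberate request for large NIDs; anything else is assumed to
             * be an old client's nodemap pass counter leaking into the field,
             * so fall back to 64-bit NIDs. */
            static inline __u8 mcb_get_rec_nid_size(const struct mgs_config_body *mcb)
            {
                    if (mcb->mcb_nm_cur_pass == sizeof(struct lnet_nid))
                            return mcb->mcb_nm_cur_pass;
                    return 0;       /* old client: send 64-bit NIDs */
            }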


            adilger Andreas Dilger added a comment -

            Did you look at the +malloc debug logs to see if the corrupted address matches something in the mgc functions modified by that patch? It may be that some extra CDEBUG(D_MALLOC, ...) lines to print the addresses of the fields in that function would help.
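            As a sketch of the kind of instrumentation meant here (the variable names are hypothetical, purely for illustration):

            /* In an mgc function modified by the large-NID patch, log the
             * address and size of each suspect buffer so it can later be
             * matched against the corrupted address in the crash dump. */
            CDEBUG(D_MALLOC, "cld %p: nid buffer %p size %zu\n",
                   cld, nidbuf, nidbuf_size);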

            scherementsev Sergey Cheremencev added a comment (edited) -

            I could reproduce it on my local testing system. It has a different backtrace, but I believe still the same symptoms. It failed just after several runs of sanity-sec test_31 in a cycle (client v2_15_58-46 + server 2.14).

            [51444.997787] Lustre: Unmounted lustre-client
            [51445.513711] LNet: Removed LNI 192.168.1.82@tcp999
            [51448.012522] general protection fault: 0000 [#1] SMP 
            [51448.013216] Modules linked in: lustre(OE) ofd(OE) osp(OE) lod(OE) ost(OE) mdt(OE) mdd(OE) mgs(OE) lquota(OE) lfsck(OE) obdecho(OE) mgc(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) libcfs(OE) dm_flakey loop jbd2 mbcache sunrpc iosf_mbi crc32_pclmul snd_intel8x0 ghash_clmulni_intel snd_ac97_codec ac97_bus ppdev snd_seq snd_seq_device aesni_intel lrw snd_pcm gf128mul sg glue_helper ablk_helper cryptd pcspkr snd_timer snd parport_pc parport video soundcore i2c_piix4 ip_tables xfs libcrc32c sd_mod crc_t10dif crct10dif_generic ata_generic pata_acpi vmwgfx drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ahci ttm libahci ata_piix drm libata crct10dif_pclmul crct10dif_common e1000 crc32c_intel serio_raw drm_panel_orientation_quirks dm_mirror
            [51448.019903]  dm_region_hash dm_log dm_mod [last unloaded: libcfs]
            [51448.021331] CPU: 1 PID: 9863 Comm: systemd-udevd Kdump: loaded Tainted: G           OE  ------------   3.10.0-1160.49.1.el7_lustre.x86_64 #1
            [51448.023377] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
            [51448.024086] task: ffff9991a021e300 ti: ffff99918462c000 task.ti: ffff99918462c000
            [51448.024758] RIP: 0010:[<ffffffffbbe02f88>]  [<ffffffffbbe02f88>] unlink_anon_vmas+0x58/0x1b0
            [51448.028746] RSP: 0018:ffff99918462fd18  EFLAGS: 00010202
            [51448.029501] RAX: ffff999135dfdaf0 RBX: 2c01a8c0e70301f0 RCX: 0000000000000000
            [51448.030109] RDX: 0000000000000000 RSI: ffff9991b4e76300 RDI: ffff99917d1e9540
            [51448.030835] RBP: ffff99918462fd58 R08: 00007fec8e4bd000 R09: 0000000000000000
            [51448.031478] R10: ffff99917d1e9560 R11: ffff99917d1e9b80 R12: ffff99917d1e9540
            [51448.032120] R13: ffff99918f2fbdf8 R14: ffff999135dfdaf0 R15: ffff9991b4e762d0
            [51448.032907] FS:  0000000000000000(0000) GS:ffff9991bc040000(0000) knlGS:0000000000000000
            [51448.033444] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
            [51448.034145] CR2: 00007fec8cb0e504 CR3: 00000000a389a000 CR4: 00000000000606e0
            [51448.034655] Call Trace:
            [51448.036556]  [<ffffffffbbdf0ac8>] free_pgtables+0xa8/0x120
            [51448.037670]  [<ffffffffbbdfd656>] exit_mmap+0xc6/0x1a0
            [51448.038670]  [<ffffffffbbc97b87>] mmput+0x67/0xf0
            [51448.039169]  [<ffffffffbbca1908>] do_exit+0x288/0xa30
            [51448.041924]  [<ffffffffbc390678>] ? __do_page_fault+0x238/0x500
            [51448.042947]  [<ffffffffbc395ed5>] ? system_call_after_swapgs+0xa2/0x13a
            [51448.043398]  [<ffffffffbbca212f>] do_group_exit+0x3f/0xa0
            [51448.044037]  [<ffffffffbbca21a4>] SyS_exit_group+0x14/0x20
            [51448.044449]  [<ffffffffbc395f92>] system_call_fastpath+0x25/0x2a
            [51448.044994]  [<ffffffffbc395ed5>] ? system_call_after_swapgs+0xa2/0x13a
            [51448.045429] Code: 67 f0 4c 89 7d c8 48 8d 59 f0 75 3a e9 82 00 00 00 66 90 48 8b 7d c8 e8 47 3a 1a 00 48 8b 3d b0 3d fc 00 4c 89 e6 e8 b8 3e 02 00 <48> 8b 43 10 48 8d 4b 10 48 89 4d c8 48 83 e8 10 4c 39 e9 74 53 
            [51448.046774] RIP  [<ffffffffbbe02f88>] unlink_anon_vmas+0x58/0x1b0
            [51448.047162]  RSP <ffff99918462fd18> 

            Here is the problem, in RBX: there should be a valid address instead of 2c01a8c0e70301f0. It is not hard to notice that 0x2c01a8c0 is the reversed IP 192.168.1.44 (the address of my server).
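            To make the byte-order reading explicit, here is a small stand-alone demo (it assumes a little-endian x86 host, as in the crash above):

            #include <arpa/inet.h>
            #include <stdio.h>
            #include <string.h>

            int main(void)
            {
                    /* 192.168.1.44 in network byte order is the byte
                     * sequence c0 a8 01 2c. */
                    struct in_addr addr;
                    inet_pton(AF_INET, "192.168.1.44", &addr);

                    /* Reading those four bytes back as a native 32-bit
                     * integer on little-endian x86 yields 0x2c01a8c0 --
                     * exactly the low half of the bogus RBX value above. */
                    unsigned int as_le;
                    memcpy(&as_le, &addr, sizeof(as_le));
                    printf("0x%08x\n", as_le);      /* prints 0x2c01a8c0 */
                    return 0;
            }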


            adilger Andreas Dilger added a comment -

            Patch v2_15_58-46-ge2539c0667 failed all sessions.
            Patch v2_15_58-45-ge4d2d4ff74 failed all sessions.
            Patch v2_15_58-44-g14ed4a6f8f passed all sessions.

            So it is clear the patch v2_15_58-45-ge4d2d4ff74 "LU-13306 mgc: handle large NID formats" is the source of the memory corruption.

            Sergey will see if he can reproduce this locally to improve debugging.

            I've added memory debug logging to the -45 test patch to see if this can help identify the source of the corruption.


            adilger Andreas Dilger added a comment -

            While it is only a 50-50 chance right now, it looks likely that patch https://review.whamcloud.com/50750 "LU-13306 mgc: handle large NID formats" is the source of the memory corruption in this test.

            If this can be reproduced on a local test system (outside of autotest) with the slab debugging enabled, that is likely the most direct path to figuring out what is actually corrupting the memory.


            adilger Andreas Dilger added a comment -

            scherementsev, I'm not sure if this is practical to do or not, but one option might be to crank up the debug log size (e.g. to 1 GB) and enable "+malloc" debugging (something like lctl set_param debug_mb=1024 debug=+malloc) to get a record of what memory is allocated and freed by Lustre, and then compare that against the corrupted memory address in the core dump.

            This would potentially allow seeing what Lustre allocation is immediately before/after the corrupted slab item, and this could tell us what code is overwriting its own allocation and clobbering the neighboring slab object.

            Note I also filed ATM-3293 to implement a mechanism to pass kernel command-line options to the system during boot, like:

            Test-Parameters: kernelopt="slab_nomerge slab_debug=FZPU,kmalloc-64"
            

            so that we can enable the slab redzone / poisoning functionality on the kmalloc-64 slab. But until that is implemented I don't think there is any slab debugging that can be enabled from within the running VM.


            adilger Andreas Dilger added a comment -

            Patch v2_15_58-42-ga54a20665a has passed all sessions.
            Patch v2_15_58-44-g14ed4a6f8f is still running after 2h, so it is likely to also pass.
            Patch v2_15_58-47-gaebb405e32 has failed all sessions.

            I will push the two finished patches to 45 and 46:
            v2_15_58-47-gaebb405e32 LU-10499 pcc: use foreign layout for PCCRO on server side FAIL
            v2_15_58-46-ge2539c0667 LU-17088 dom: don't create different size DOM component
            v2_15_58-45-ge4d2d4ff74 LU-13306 mgc: handle large NID formats
            v2_15_58-44-g14ed4a6f8f LU-16837 llite: handle unknown layout component PASS


            adilger Andreas Dilger added a comment -

            Patch v2_15_58-57-g4ba4976f52 has crashed all sessions. I will move it to v2_15_58-44-g14ed4a6f8f (just before the large NID patch).

            adilger Andreas Dilger added a comment (edited) -

            Patch v2_15_58-57-g4ba4976f52 build finished and its test sessions are prioritized in the queue, though it is unclear whether this will tell us anything new (it is likely to crash).
            Patch v2_15_58-52-g58a1c2c321 crashed all 9 sessions after 55-65 min, which means the suspect patch is likely older. This mostly eliminates v2_15_58-60 (cgroups).
            Patch v2_15_58-31-g5f3bc2ce7c is still running all sessions after 70-85 min, which might be an indication that it will pass.

            Next is v2_15_58-47-gaebb405e32, a repurpose of the -52 patch.

            Between -42 and -47 is v2_15_58-45-ge4d2d4ff74 "LU-13306 mgc: handle large NID formats", which Sergey previously pointed out as a likely culprit for the interop crashes.

            Note that there are two later patches with "Fixes: LU-13306" (v2_15_59-38-g8276ade19d and v2_15_60-5-g015f645651), but they reference a different patch. If necessary, we could probably merge those into the test patches, but they are fairly far removed from the recent testing, so I'm not sure they are involved...

            It may be significant that none of the recent crashes I've looked at have shown ext4-related stacks, only mount/config crashes.


            People

              scherementsev Sergey Cheremencev
              qian_wc Qian Yingjin
              Votes: 0
              Watchers: 8
