Lustre / LU-18049

ost-pools test_25, sanity-sec test_31: crash in ext4_htree_store_dirent kmalloc

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Blocker
    • Affects Version: Lustre 2.16.0
    • Fix Version: Lustre 2.16.0
    • Severity: 3

    Description

      This issue was created by maloo for Andreas Dilger <adilger@whamcloud.com>

      This issue relates to the following test suite run:
      https://testing.whamcloud.com/test_sets/3576290d-064a-4573-a087-75b59fff6df7

      test_25 failed with the following error:

      trevis-106vm10, trevis-106vm11 crashed during ost-pools test_25
      

      Test session details:
      clients: https://build.whamcloud.com/job/lustre-master/4542 - 5.14.0-362.24.1.el9_3.x86_64
      servers: https://build.whamcloud.com/job/lustre-b_es6_0/666 - 4.18.0-477.27.1.el8_lustre.ddn17.x86_64

      Two clients both crashed in ext4_htree_store_dirent() (NOT ldiskfs) in kmalloc, so it looks like some kind of client-side memory corruption?

      [27299.419062] Lustre: MGC10.240.44.44@tcp: Connection restored to  (at 10.240.44.44@tcp)
      [27299.448580] LustreError: 886364:0:(client.c:3288:ptlrpc_replay_interpret()) @@@ status 301, old was 0  req@ffff902e1402e3c0 x1802432488249280/t691489734702(691489734702) o101->lustre-MDT0000-mdc-ffff902e04449800@10.240.44.44@tcp:12/10 lens 520/608 e 0 to 0 dl 1718935667 ref 2 fl Interpret:RPQU/204/0 rc 301/301 job:'lfs.0' uid:0 gid:0
      [27300.013294] Lustre: lustre-MDT0000-mdc-ffff902e04449800: Connection restored to  (at 10.240.44.44@tcp)
      [27305.358931] Lustre: 886365:0:(client.c:2334:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1718935641/real 1718935641]  req@ffff902e547d9380 x1802432536672704/t0(0) o400->lustre-MDT0000-mdc-ffff902e04449800@10.240.44.44@tcp:12/10 lens 224/224 e 0 to 1 dl 1718935657 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
      [27306.104959] BUG: unable to handle page fault for address: ffff902ee5338778
      [27306.105697] #PF: supervisor read access in kernel mode
      [27306.106204] #PF: error_code(0x0000) - not-present page
      [27306.107607] CPU: 1 PID: 1109213 Comm: bash Kdump: loaded Tainted: G           OE     -------  ---  5.14.0-362.24.1.el9_3.x86_64 #1
      [27306.108653] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
      [27306.109209] RIP: 0010:__kmalloc+0x11b/0x370
      [27306.118475] Call Trace:
      [27306.118756]  <TASK>
      [27306.120625]  ? __die_body.cold+0x8/0xd
      [27306.121006]  ? page_fault_oops+0x134/0x170
      [27306.121437]  ? kernelmode_fixup_or_oops+0x84/0x110
      [27306.121944]  ? exc_page_fault+0xa8/0x150
      [27306.122371]  ? asm_exc_page_fault+0x22/0x30
      [27306.122806]  ? ext4_htree_store_dirent+0x36/0x100 [ext4]
      [27306.123359]  ? __kmalloc+0x11b/0x370
      [27306.123740]  ext4_htree_store_dirent+0x36/0x100 [ext4]
      [27306.124269]  htree_dirblock_to_tree+0x1ab/0x310 [ext4]
      [27306.124809]  ext4_htree_fill_tree+0x203/0x3b0 [ext4]
      [27306.125333]  ext4_dx_readdir+0x10d/0x360 [ext4]
      [27306.125817]  ext4_readdir+0x392/0x550 [ext4]
      [27306.126275]  iterate_dir+0x17c/0x1c0
      [27306.126711]  __x64_sys_getdents64+0x80/0x120
      [27306.128187]  do_syscall_64+0x5c/0x90
      

      VVVVVVV DO NOT REMOVE LINES BELOW, Added by Maloo for auto-association VVVVVVV
      ost-pools test_25 - trevis-106vm10, trevis-106vm11 crashed during ost-pools test_25

      Activity

            I believe I know what is happening. The 2.15 client is receiving mgs_nidtbl_entry records with large NIDs (20 bytes) instead of the older 64-bit NID values, so they are overflowing its memory. Old clients expect 64-bit NID values only. The question is why the MGS is sending large NIDs to the client mgc. In the master branch, mgc_process_recovery_log() fills in struct mgs_config_body to send to the MGS, and we get back struct mgs_config_res, which contains the NID info. When creating struct mgs_config_body, we set the field mcb_rec_nid_size to the NID size, which tells the MGS to send large NIDs. The catch for older clients is that mcb_rec_nid_size was originally mcb_nm_cur_pass, which should only be set for nodemap logs, not recovery logs; for recovery logs it should be zero, which tells a newer MGS to send the 64-bit NID size instead of large NIDs. Also, in master, recovery logs and nodemap logs are handled separately. That is not the case for 2.15 clients. I suspect mcb_nm_cur_pass is being set for recovery logs as well on 2.15 clients, so an MGS running master sees mcb_nm_cur_pass set for a recovery log as the signal to use large NIDs. Sergey, can you confirm this is what is happening?

            If this is happening, as a workaround we should check on the MGS that mcb_rec_nid_size == sizeof(struct lnet_nid), and if it does not match, treat it as zero. That makes the bug less likely, but not 100%. If mcb_nm_cur_pass is being set for the recovery log case on the client, we need a patch to always set it to zero there. That is the proper fix.
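            As a concrete illustration of that check (a minimal sketch: the helper function and its name are hypothetical; only the mcb_rec_nid_size/mcb_nm_cur_pass overlay and the sizeof(struct lnet_nid) comparison come from the analysis above):

                static inline __u8 mgs_req_nid_size(const struct mgs_config_body *mcb)
                {
                        /* Only an exact match for the large NID size enables the
                         * new wire format; a stale nodemap pass counter sent by a
                         * 2.15 client will not match, and the MGS falls back to
                         * the old 64-bit NID format. */
                        if (mcb->mcb_rec_nid_size == sizeof(struct lnet_nid))
                                return mcb->mcb_rec_nid_size;

                        return 0;       /* 0 = send old 64-bit lnet_nid_t entries */
                }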

            simmonsja James A Simmons added a comment

            Did you look at the +malloc debug logs to see if the corrupted address matches something in the mgc functions modified by that patch? It may be that some extra CDEBUG(D_MALLOC, ...) lines are needed to print the addresses of the fields in that function.
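            For instance (a hedged sketch; "entry" and "entry_size" are illustrative names, not the actual locals in the mgc code):

                CDEBUG(D_MALLOC, "mgc nidtbl entry %p..%p (size %u)\n",
                       entry, (char *)entry + entry_size, entry_size);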

            adilger Andreas Dilger added a comment
            scherementsev Sergey Cheremencev added a comment - - edited

            I could reproduce it on my local testing system. It has a different back trace, but I believe still the same symptoms. It failed just after several runs of sanity-sec test_31 in a cycle (client v2_15_58-46 + server 2.14).

            [51444.997787] Lustre: Unmounted lustre-client
            [51445.513711] LNet: Removed LNI 192.168.1.82@tcp999
            [51448.012522] general protection fault: 0000 [#1] SMP 
            [51448.013216] Modules linked in: lustre(OE) ofd(OE) osp(OE) lod(OE) ost(OE) mdt(OE) mdd(OE) mgs(OE) lquota(OE) lfsck(OE) obdecho(OE) mgc(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) libcfs(OE) dm_flakey loop jbd2 mbcache sunrpc iosf_mbi crc32_pclmul snd_intel8x0 ghash_clmulni_intel snd_ac97_codec ac97_bus ppdev snd_seq snd_seq_device aesni_intel lrw snd_pcm gf128mul sg glue_helper ablk_helper cryptd pcspkr snd_timer snd parport_pc parport video soundcore i2c_piix4 ip_tables xfs libcrc32c sd_mod crc_t10dif crct10dif_generic ata_generic pata_acpi vmwgfx drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ahci ttm libahci ata_piix drm libata crct10dif_pclmul crct10dif_common e1000 crc32c_intel serio_raw drm_panel_orientation_quirks dm_mirror
            [51448.019903]  dm_region_hash dm_log dm_mod [last unloaded: libcfs]
            [51448.021331] CPU: 1 PID: 9863 Comm: systemd-udevd Kdump: loaded Tainted: G           OE  ------------   3.10.0-1160.49.1.el7_lustre.x86_64 #1
            [51448.023377] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
            [51448.024086] task: ffff9991a021e300 ti: ffff99918462c000 task.ti: ffff99918462c000
            [51448.024758] RIP: 0010:[<ffffffffbbe02f88>]  [<ffffffffbbe02f88>] unlink_anon_vmas+0x58/0x1b0
            [51448.028746] RSP: 0018:ffff99918462fd18  EFLAGS: 00010202
            [51448.029501] RAX: ffff999135dfdaf0 RBX: 2c01a8c0e70301f0 RCX: 0000000000000000
            [51448.030109] RDX: 0000000000000000 RSI: ffff9991b4e76300 RDI: ffff99917d1e9540
            [51448.030835] RBP: ffff99918462fd58 R08: 00007fec8e4bd000 R09: 0000000000000000
            [51448.031478] R10: ffff99917d1e9560 R11: ffff99917d1e9b80 R12: ffff99917d1e9540
            [51448.032120] R13: ffff99918f2fbdf8 R14: ffff999135dfdaf0 R15: ffff9991b4e762d0
            [51448.032907] FS:  0000000000000000(0000) GS:ffff9991bc040000(0000) knlGS:0000000000000000
            [51448.033444] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
            [51448.034145] CR2: 00007fec8cb0e504 CR3: 00000000a389a000 CR4: 00000000000606e0
            [51448.034655] Call Trace:
            [51448.036556]  [<ffffffffbbdf0ac8>] free_pgtables+0xa8/0x120
            [51448.037670]  [<ffffffffbbdfd656>] exit_mmap+0xc6/0x1a0
            [51448.038670]  [<ffffffffbbc97b87>] mmput+0x67/0xf0
            [51448.039169]  [<ffffffffbbca1908>] do_exit+0x288/0xa30
            [51448.041924]  [<ffffffffbc390678>] ? __do_page_fault+0x238/0x500
            [51448.042947]  [<ffffffffbc395ed5>] ? system_call_after_swapgs+0xa2/0x13a
            [51448.043398]  [<ffffffffbbca212f>] do_group_exit+0x3f/0xa0
            [51448.044037]  [<ffffffffbbca21a4>] SyS_exit_group+0x14/0x20
            [51448.044449]  [<ffffffffbc395f92>] system_call_fastpath+0x25/0x2a
            [51448.044994]  [<ffffffffbc395ed5>] ? system_call_after_swapgs+0xa2/0x13a
            [51448.045429] Code: 67 f0 4c 89 7d c8 48 8d 59 f0 75 3a e9 82 00 00 00 66 90 48 8b 7d c8 e8 47 3a 1a 00 48 8b 3d b0 3d fc 00 4c 89 e6 e8 b8 3e 02 00 <48> 8b 43 10 48 8d 4b 10 48 89 4d c8 48 83 e8 10 4c 39 e9 74 53 
            [51448.046774] RIP  [<ffffffffbbe02f88>] unlink_anon_vmas+0x58/0x1b0
            [51448.047162]  RSP <ffff99918462fd18> 

            Here is the problem: in RBX there should be a valid address instead of 2c01a8c0e70301f0. It is not hard to notice that 0x2c01a8c0 is the reversed IP 192.168.1.44 (the address of my server).
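            That byte-order reading can be verified with a few lines of standalone C (nothing Lustre-specific is assumed):

                #include <stdio.h>
                #include <stdint.h>
                #include <string.h>

                int main(void)
                {
                        /* 192.168.1.44 as it sits in memory: c0 a8 01 2c */
                        uint8_t addr[4] = { 192, 168, 1, 44 };
                        uint32_t le;

                        /* read the same bytes back as a little-endian integer */
                        memcpy(&le, addr, sizeof(le));
                        printf("0x%08x\n", le);  /* prints 0x2c01a8c0 on x86 */
                        return 0;
                }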


            Patch v2_15_58-46-ge2539c0667 failed all sessions.
            Patch v2_15_58-45-ge4d2d4ff74 failed all sessions.
            Patch v2_15_58-44-g14ed4a6f8f passed all sessions.

            So it is clear the patch v2_15_58-45-ge4d2d4ff74 "LU-13306 mgc: handle large NID formats" is the source of the memory corruption.

            Sergey will see if he can reproduce this locally to improve debugging.

            I've added memory debug logging to the -45 test patch to see if this can help identify the source of the corruption.

            adilger Andreas Dilger added a comment

            While it is only a 50-50 chance right now, it looks likely that patch https://review.whamcloud.com/50750 "LU-13306 mgc: handle large NID formats" is the source of the memory corruption in this test.

            If this can be reproduced on a local test system (outside of autotest) with the slab debugging enabled, that is likely the most direct path to figuring out what is actually corrupting the memory.

            adilger Andreas Dilger added a comment

            scherementsev, I'm not sure if this is practical to do or not, but one option might be to crank up the debug log size (e.g. 1GB) and enable "+malloc" debugging to get a record of what memory is allocated/freed by Lustre, and then use this to compare against the memory address in the core dump that is corrupted.

            This would potentially allow seeing what Lustre allocation is immediately before/after the corrupted slab item, and this could tell us what code is overwriting its own allocation and clobbering the neighboring slab object.

            Note I also filed ATM-3293 to implement a mechanism to pass kernel command-line options to the system during boot, like:

            Test-Parameters: kernelopt="slab_nomerge slab_debug=FZPU,kmalloc-64"
            

            so that we can enable the slab redzone / poisoning functionality on the kmalloc-64 slab. But until that is implemented I don't think there is any slab debugging that can be enabled from within the running VM.

            adilger Andreas Dilger added a comment

            Patch v2_15_58-42-ga54a20665a has passed all sessions.
            Patch v2_15_58-44-g14ed4a6f8f is still running after 2h so it is likely to also pass.
            Patch v2_15_58-47-gaebb405e32 has failed all sessions.

            I will push the two finished patches to 45 and 46:
            v2_15_58-47-gaebb405e32 LU-10499 pcc: use foreign layout for PCCRO on server side FAIL
            v2_15_58-46-ge2539c0667 LU-17088 dom: don't create different size DOM component
            v2_15_58-45-ge4d2d4ff74 LU-13306 mgc: handle large NID formats
            v2_15_58-44-g14ed4a6f8f LU-16837 llite: handle unknown layout component PASS

            adilger Andreas Dilger added a comment

            Patch v2_15_58-57-g4ba4976f52 has crashed all sessions. I will move it to v2_15_58-44-g14ed4a6f8f (just before large NID patch).

            adilger Andreas Dilger added a comment
            adilger Andreas Dilger added a comment - - edited

            Patch v2_15_58-57-g4ba4976f52 build finished, prioritized test sessions in the queue, though it is unclear if this will tell us anything new (likely to crash).
            Patch v2_15_58-52-g58a1c2c321 crashed all 9 sessions after 55-65 min, which means the suspect patch is likely older. Mostly eliminates v2_15_58-60 (cgroups).
            Patch v2_15_58-31-g5f3bc2ce7c is still running all sessions after 70-85 min, might be an indication that it will pass.

            Next is v2_15_58-47-gaebb405e32, a repurpose of the -52 patch.

            Between -42 and -47 is v2_15_58-45-ge4d2d4ff74 "LU-13306 mgc: handle large NID formats" which Sergey previously pointed out as a likely culprit for interop crashing.

            Note that there are two later patches with "Fixes: LU-13306" (v2_15_59-38-g8276ade19d and v2_15_60-5-g015f645651), but they reference a different patch. If necessary, we could probably merge those into the test patches, but they are far enough away from the recent testing that I'm not sure they are involved...

            It may be significant that none of the recent crashes I've looked at have shown ext4-related stacks, only mount/config crashes.


            Patch on v2_15_58-62-g1aec91ac64 crashed all sessions after 45-90 minutes (up to 6 iterations on one session) with stacks like LU-18230, but none with ext4 in them.
            Patch on v2_15_58-52-g58a1c2c321 has just started testing a few minutes ago.
            Patch on v2_15_58-42-ga54a20665a is still in the test queue, though I have prioritized those test sessions.
            Patch on v2_15_58-31-g5f3bc2ce7c just started testing 20 minutes ago as well, so too early to know.
            Patch on v2_15_58-57-g4ba4976f52 just rebased, but at least the build queue is now empty (test queue has over 180 sessions in it).

            adilger Andreas Dilger added a comment

            green, while we are trying to find the issue by bisection, I've also looked into several vmcores.
            As far as I can see, it always fails here (at least in the 3 vmcores I had a chance to look into):

            static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_lru *lru,
                            gfp_t gfpflags, int node, unsigned long addr, size_t orig_size)
            {
                    ....
                    } else {
                            void *next_object = get_freepointer_safe(s, object);

            It is clear that the object here is broken. get_freepointer_safe() adds s->offset to the object pointer, so now we have a fast way to find which slab was affected just by looking at RAX and RCX.

            Example:

            https://testing.whamcloud.com/test_sets/d1dea7be-129c-4b3d-ba7e-e76a19e90d7a
            RAX: 2c7e5a86dcc759a8 RBX: 0000000000000dc0 RCX: 2c7e5a86dcc759c8
            0x2c7e5a86dcc759c8 - 0x2c7e5a86dcc759a8 = 32 // kmalloc-64
            
            https://testing.whamcloud.com/test_sets/1a9f2b90-4ad5-4124-aa7e-abdb6f7992df
            RAX: ffff968f67c95519 RBX: 0000000000000dc0 RCX: ffff968f67c95539
            0xffff968f67c95539 - 0xffff968f67c95519 = 32 // kmalloc-64
            
            https://testing.whamcloud.com/test_sets/4d16dcfd-a86b-407c-8cd4-2281bcc25872
            RAX: ffff8c0101f81c2b RBX: 0000000000000d40 RCX: ffff8c0101f81c4b
            0xffff8c0101f81c4b - 0xffff8c0101f81c2b = 32 // kmalloc-64
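             The rule can be packaged as a small standalone helper (a sketch, under the assumption from the crash sites above that RAX holds the object pointer and RCX holds object + s->offset, i.e. the freelist pointer location; the offset-to-cache mapping matches kernels that keep the freelist pointer in the middle of the object):

                #include <stdio.h>

                /* map RCX - RAX (== s->offset) back to the slab cache name */
                static const char *slab_from_offset(unsigned long rax, unsigned long rcx)
                {
                        switch (rcx - rax) {
                        case 16: return "kmalloc-32";
                        case 32: return "kmalloc-64";
                        case 64: return "kmalloc-128";
                        default: return "unknown";
                        }
                }

                int main(void)
                {
                        /* registers from the first example above */
                        printf("%s\n", slab_from_offset(0x2c7e5a86dcc759a8UL,
                                                        0x2c7e5a86dcc759c8UL));
                        return 0;       /* prints "kmalloc-64" */
                }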

            This algorithm definitely works for at least the 2 back traces below.

            [  631.425995] RIP: 0010:__kmalloc+0x11b/0x370
            [  631.426441] Code: 8b 51 08 48 8b 01 48 83 79 10 00 48 89 04 24 0f 84 b8 01 00 00 48 85 c0 0f 84 af 01 00 00 41 8b 4c 24 28 49 8b 3c 24 48 01 c1 <48> 8b 19 48 89 ce 49 33 9c 24 b8 00 00 00 48 8d 8a 00 20 00 00 48
            [  631.427963] RSP: 0018:ffffba54812bbbe0 EFLAGS: 00010202
            [  631.428453] RAX: 2c7e5a86dcc759a8 RBX: 0000000000000dc0 RCX: 2c7e5a86dcc759c8
            [  631.429089] RDX: 000000004fbb8000 RSI: 0000000000000dc0 RDI: 00000000000370a0
            [  631.429714] RBP: 0000000000000000 R08: ffffba54812bbc78 R09: 00000000fd89642e
            [  631.430358] R10: 00000000636f6262 R11: 000000000b0b0b0b R12: ffff949b41042500
            [  631.431000] R13: ffffffffc0627316 R14: 0000000000000dc0 R15: 000000000000003c
            [  631.431622] FS:  00007f4c2cf3f740(0000) GS:ffff949bffc00000(0000) knlGS:0000000000000000
            [  631.432327] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
            [  631.432858] CR2: 000055c0dc40eaf8 CR3: 0000000006276001 CR4: 00000000000606f0
            [  631.433482] Call Trace:
            [  631.433755]  <TASK>
            [  631.438189]  ext4_htree_store_dirent+0x36/0x100 [ext4]
            [  631.438708]  htree_dirblock_to_tree+0x1ab/0x310 [ext4]
            [  631.439249]  ext4_htree_fill_tree+0xca/0x3b0 [ext4]
            [  631.440218]  ext4_dx_readdir+0x10d/0x360 [ext4]
            [  631.440692]  ext4_readdir+0x392/0x550 [ext4]
            [  631.441156]  iterate_dir+0x17c/0x1c0
            [  631.441534]  __x64_sys_getdents64+0x80/0x120
            [  631.442342]  do_syscall_64+0x5c/0x90
            [  631.443994]  entry_SYSCALL_64_after_hwframe+0x72/0xdc
            
            [  439.455143] RIP: 0010:kmem_cache_alloc_trace+0xfe/0x300
            [  439.455749] Code: 8b 51 08 48 8b 01 48 83 79 10 00 48 89 04 24 0f 84 91 01 00 00 48 85 c0 0f 84 88 01 00 00 41 8b 4c 24 28 49 8b 3c 24 48 01 c1 <48> 8b 19 48 89 ce 49 33 9c 24 b8 00 00 00 48 8d 8a 00 20 00 00 48
            [  439.457506] RSP: 0018:ffff9cd7c073fd20 EFLAGS: 00010286
            [  439.458054] RAX: ffff8c0101f81c2b RBX: 0000000000000d40 RCX: ffff8c0101f81c4b
            [  439.458790] RDX: 000000005017c001 RSI: 0000000000000d40 RDI: 00000000000370a0
            [  439.459512] RBP: 0000000000000000 R08: ffff8c01c3370000 R09: ffff8c02c5f5378c
            [  439.460229] R10: ffffffffffffffff R11: 000000000000000f R12: ffff8c01c1042500
            [  439.460958] R13: ffffffffc0b8011f R14: 0000000000000d40 R15: 0000000000000038
            [  439.461693] FS:  0000000000000000(0000) GS:ffff8c027fd00000(0000) knlGS:0000000000000000
            [  439.462498] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
            [  439.463091] CR2: ffff8c0101f81c4b CR3: 0000000004b3a001 CR4: 00000000001706e0
            [  439.463825] Call Trace:
            [  439.464122]  <TASK>
            [  439.469393]  ksocknal_send_hello_v2+0x2f/0x550 [ksocklnd]
            [  439.469957]  ksocknal_create_conn+0x2ad/0x1940 [ksocklnd]
            [  439.471100]  ksocknal_connect+0x18f/0x6d0 [ksocklnd]
            [  439.471632]  ksocknal_connd+0x51b/0x7a0 [ksocklnd]
            [  439.473271]  kthread+0xe0/0x100
            [  439.474056]  ret_from_fork+0x2c/0x50

            Some vmcores even report the free-pointer problem directly when showing basic kmalloc information, but not always:

            crash> kmem -s | grep bad
            kmem: kmalloc-64: slab: fffff3624016b380 invalid freepointer: 2c7e5a86dcc759c8 

             

            However, there are still a number of sanity-sec test_31 failures with other back traces (I saw at least 4 different ones) without ext4 or kmalloc in them. It seems the problem randomly corrupts memory.

            scherementsev Sergey Cheremencev added a comment

            People

              scherementsev Sergey Cheremencev
              qian_wc Qian Yingjin
              Votes: 0
              Watchers: 8