Details

    • Bug
    • Resolution: Duplicate
    • Minor
    • None
    • Lustre 2.14.0
    • None
    • 3
    • 9223372036854775807

    Description

       

      [  117.007016] BUG: unable to handle kernel NULL pointer dereference at 0000000000000102
      [  117.008158] IP: [<ffffffffc0ab0356>] nid_keycmp+0x6/0x30 [obdclass]
      [  117.009044] PGD 0 
      [  117.009334] Oops: 0000 [#1] SMP 
      [  117.009796] Modules linked in: mdd(OE) lod(OE) mdt(OE) osp(OE) ofd(OE) lfsck(OE) ost(OE) mgs(OE) mgc(OE) osd_ldiskfs(OE) ldiskfs(OE) lquota(OE) lustre(OE) lmv(OE) mdc(OE) lov(OE) osc(OE) fid(OE) fld(OE) ko2iblnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rdma_ucm(OE) rdma_cm(OE) iw_cm(OE) ib_ipoib(OE) ib_cm(OE) ib_umad(OE) sunrpc iTCO_wdt iTCO_vendor_support ppdev nfit libnvdimm iosf_mbi kvm_intel kvm irqbypass crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd i2c_i801 lpc_ich joydev pcspkr parport_pc sg i6300esb parport ip_tables ext4 mbcache jbd2 sr_mod sd_mod cdrom crc_t10dif crct10dif_generic mlx5_ib(OE) ib_uverbs(OE) ib_core(OE) bochs_drm drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm virtio_net net_failover ahci virtio_blk failover
      [  117.020496]  virtio_scsi(OE) mlx5_core(OE) libahci drm bnxt_en libata mlxfw(OE) psample ptp crct10dif_pclmul crct10dif_common pps_core crc32c_intel auxiliary(OE) mlx_compat(OE) virtio_pci serio_raw devlink virtio_ring virtio drm_panel_orientation_quirks dm_mirror dm_region_hash dm_log dm_mod
      [  117.024227] CPU: 6 PID: 22239 Comm: ll_ost01_001 Kdump: loaded Tainted: G           OE  ------------ T 3.10.0-1160.71.1.el7_lustre.ddn17.x86_64 #1
      [  117.025941] Hardware name: DDN SFA400NVX2E, BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
      [  117.028520] task: ffff888cb9f08000 ti: ffff888c00d88000 task.ti: ffff888c00d88000
      [  117.030935] RIP: 0010:[<ffffffffc0ab0356>]  [<ffffffffc0ab0356>] nid_keycmp+0x6/0x30 [obdclass]
      [  117.033549] RSP: 0018:ffff888c00d8bb10  EFLAGS: 00010202
      [  117.035649] RAX: 00000000000000b8 RBX: ffff886a7532ae50 RCX: 00000000b94fb39a
      [  117.037936] RDX: 00000000000000ba RSI: 0000000000000002 RDI: ffff888c00d8bb48
      [  117.040214] RBP: ffff888c00d8bb88 R08: ffff886a7532ae50 R09: 0000000000000038
      [  117.042573] R10: ffffffffc0a96efd R11: fffff1404d579e00 R12: ffff888baa120cb8
      [  117.045048] R13: ffff888b783ac400 R14: ffff888bda00f800 R15: ffff888bda5f0000
      [  117.047322] FS:  0000000000000000(0000) GS:ffff888db1180000(0000) knlGS:0000000000000000
      [  117.049650] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [  117.051704] CR2: 0000000000000102 CR3: 0000002250072000 CR4: 0000000000760fe0
      [  117.053922] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [  117.056107] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      [  117.058390] PKRU: 00000000
      [  117.059938] Call Trace:
      [  117.063537]  [<ffffffffc0d8165a>] target_handle_connect+0x22ba/0x2c80 [ptlrpc]
      [  117.065718]  [<ffffffffc0e2bf4a>] tgt_request_handle+0x72a/0x18c0 [ptlrpc]
      ptlrpc_nrs_req_get_nolock0+0xd1/0x170 [ptlrpc]
      [  117.071850]  [<ffffffffc0dcdf83>] ptlrpc_server_handle_request+0x253/0xc40 [ptlrpc]
      [  117.075869]  [<ffffffffc0dd2d7a>] ptlrpc_main+0xc4a/0x1cb0 [ptlrpc]
      [  117.083399]  [<ffffffff8a6c5f91>] kthread+0xd1/0xe0
      [  117.087007]  [<ffffffff8ad99ddd>] ret_from_fork_nospec_begin+0x7/0x21
      [  117.090630] Code: c1 ce 10 29 f0 89 c6 31 c2 c1 ce 1c 29 f2 31 d1 c1 ca 12 29 d1 31 c8 c1 c9 08 29 c8 5b 5d c3 66 0f 1f 44 00 00 0f 1f 44 00 00 55 <48> 8b 96 00 01 00 00 48 8b 47 08 48 89 e5 5d 48 8b 00 48 39 42
      [  117.096140] RIP  [<ffffffffc0ab0356>] nid_keycmp+0x6/0x30 [obdclass]
      [  117.097932]  RSP <ffff888c00d8bb10>
      [  117.099278] CR2: 0000000000000102 

          Activity

            [LU-16930] BUG: nid_keycmp+0x6

            scherementsev Sergey Cheremencev added a comment -

            The rhashtable was corrupted due to a bug in the Quota Pools code - see LU-17034.

            scherementsev Sergey Cheremencev added a comment - - edited

            I believe that all kernel panics described in this ticket are caused by LU-17034. So please read the description in LU-17034 before continuing.

            I've analysed at least 10 different vmcores with corrupted rhashtables. In 80% of all cases I saw that the value stored in the low byte (usually 3 or 7) was increased by 7 (to 0xA or 0xE). If we suppose that 2 buckets in the bucket table are overlaid by a struct lqe_glbl_entry, the value 0x3 corresponds to lge_edquot==1 and lge_qunit_set==1, and 0x7 to lge_edquot==1, lge_qunit_set==1 and lge_qunit_nu==1.

            struct lqe_glbl_entry {
                    __u64                    lge_qunit;
                    unsigned long            lge_edquot:1,
                                             /* true when minimum qunit is set */
                                             lge_qunit_set:1,
                                             /* qunit or edquot is changed - need
                                             * to send glimpse to appropriate slave */
                                             lge_qunit_nu:1,
                                             lge_edquot_nu:1;
            }; 

            I've written a simple user mode program (test.c in the attachment) that partially copies the code from qmt_seed_glbe_all to show how 0x3 and 0x7 could be changed to 0xA and 0xE.
            See the results:

            [root@vm2 ~]# ./a.out 
            rhashtable bkt 0x3(lge_edquot==1, lge_qunit_set==1), edquot==0
            rhashtable bkt 0xa
            
            rhashtable bkt 0x3(lge_edquot==1, lge_qunit_set==1), edquot==1
            rhashtable bkt 0x3
            
            rhashtable bkt 0x7(lge_edquot==1, lge_qunit_set==1), edquot==0
            rhashtable bkt 0xe
            
            rhashtable bkt 0x7(lge_edquot==1, lge_qunit_set==1), edquot==1
            rhashtable bkt 0x7
            

            This explains why we often saw values increased by 7. However, the value could be increased not only by 7 - I've just provided an example of the most common case.

            Note that sometimes lqe_qunit could also be set outside the array, corrupting 8 bytes of the neighbouring memory region with something like 1024, 4096, ...
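
            The attached test.c is not reproduced here; the following is a minimal user-space sketch of the same idea (the union overlay, the sample values and the update logic are illustrative assumptions - only the struct layout comes from this ticket). It shows how clearing lge_edquot while setting lge_edquot_nu turns a bucket word of 0x3 into 0xA:

            #include <stdio.h>

            /* User-space mirror of struct lqe_glbl_entry (lge_qunit is __u64 in the
             * kernel). With GCC on x86_64: bit 0 = lge_edquot, bit 1 = lge_qunit_set,
             * bit 2 = lge_qunit_nu, bit 3 = lge_edquot_nu. */
            struct lqe_glbl_entry {
                    unsigned long long lge_qunit;
                    unsigned long      lge_edquot:1,
                                       lge_qunit_set:1,
                                       lge_qunit_nu:1,
                                       lge_edquot_nu:1;
            };

            /* Overlay: the quota code believes it is updating an array entry, but the
             * memory actually belongs to an rhashtable bucket table. */
            union overlay {
                    unsigned long         bucket[2];   /* bucket[1] holds a nulls marker */
                    struct lqe_glbl_entry lge;
            };

            int main(void)
            {
                    /* 0x3 == lge_edquot=1, lge_qunit_set=1, as seen in the dumps. */
                    union overlay o = { .bucket = { 0xdeadbeef, 0x3 } };
                    int edquot = 0;   /* new edquot state computed by the quota code */

                    /* Update pattern similar in spirit to qmt_seed_glbe_all(): when
                     * edquot changes, store it and flag lge_edquot_nu. */
                    if (o.lge.lge_edquot != edquot) {
                            o.lge.lge_edquot = edquot;   /* clears bit 0: -1 */
                            o.lge.lge_edquot_nu = 1;     /* sets bit 3:  +8 */
                    }

                    /* The bucket word went from 0x3 to 0xa, i.e. it grew by 7. */
                    printf("bucket slot: 0x%lx\n", o.bucket[1]);
                    return 0;
            }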


            scherementsev Sergey Cheremencev added a comment -

            One more occurrence of the same problem:

            [117224.294543] BUG: unable to handle kernel NULL pointer dereference at 000000000000002a
            [117224.295928] IP: [<ffffffffb13b7476>] rht_deferred_worker+0x226/0x430
            ...
            [117224.342181]  [<ffffffffb10c32ef>] process_one_work+0x17f/0x440
            [117224.343597]  [<ffffffffb10c4436>] worker_thread+0x126/0x3c0
            [117224.346515]  [<ffffffffb10cb621>] kthread+0xd1/0xe0
            [117224.349319]  [<ffffffffb17c61dd>] ret_from_fork_nospec_begin+0x7/0x21
            [117224.352394] Code: 8d 04 0b 48 8b 00 a8 01 0f 85 1d 01 00 00 48 8b 18 f6 c3 01 0f 85 06 01 00 00 49 89 c6 eb 0c 66 0f 1f 44 00 00 49 89 de 4c 89 e3 <4c> 8b 23 41 f6 c4 01 74 f1 41 0f b7 57 ce 49 8b 47 e8 48 89 df 
            [117224.357641] RIP  [<ffffffffb13b7476>] rht_deferred_worker+0x226/0x430
            [117224.359263]  RSP <ffff9843e3977da0>
            [117224.360509] CR2: 000000000000002a 
            
            crash> obd_export.exp_nid_hash ffff9862543a6800
              exp_nid_hash = {
                rhead = {
                  next = 0x2a
                },
                next = 0x0
              },
            crash> l *(rht_deferred_worker+0x226)
            0xffffffffb13b7476 is in rht_deferred_worker (lib/rhashtable.c:275).
            270     
            271             err = -ENOENT;
            272     
            273             rht_for_each(entry, old_tbl, old_hash) {
            274                     err = 0;
            275                     next = rht_dereference_bucket(entry->next, old_tbl, old_hash);
            276     
            277                     if (rht_is_a_nulls(next))
            278                             break;
            

            It failed again due to a wrong rhead.next value == 0x2a. It should be 0x23, and again the difference is 7.
            This time the logs show that there was the final put for this export but no destroy_export. Probably the destroy message was simply lost.

            00000020:00000080:18.0:1691676633.094263:0:25873:0:(genops.c:985:class_export_put()) final put ffff9862543a6800/d3484309-ce72-216b-074d-759e96edefd8
            00000020:00080000:5.0:1691684259.028627:0:15635:0:(obd_config.c:264:obd_nid_add()) scratch-OST00ca: added exp ffff9862543a6800 nid 1407377771022124: rc = 0 
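
            For reference, the reason 0x2a is dereferenced at all: rhashtable terminates a bucket chain with an odd "nulls" value, and rht_is_a_nulls() only tests bit 0 of the stored word. Below is a user-space restatement of that check (the helper name is local to this sketch; the real check lives in include/linux/rhashtable.h):

            #include <stdbool.h>
            #include <stdio.h>

            /* Same test as rht_is_a_nulls(): a chain ends at a "nulls" marker,
             * an odd scalar stored where a pointer would normally be. */
            static bool is_a_nulls(unsigned long ptr)
            {
                    return ptr & 1;
            }

            int main(void)
            {
                    /* 0x23 (expected) is odd: the walk stops there.
                     * 0x2a (corrupted) is even: rht_for_each() treats it as a real
                     * struct rhash_head pointer and dereferences it -> fault at 0x2a. */
                    printf("0x23 ends the chain: %d\n", is_a_nulls(0x23));
                    printf("0x2a ends the chain: %d\n", is_a_nulls(0x2a));
                    return 0;
            }
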
            scherementsev Sergey Cheremencev added a comment - - edited

            neilb, we can't reproduce this issue. Just one of our customers periodically hits this kind of failure.

            I also think that with high probability this could be caused by bad code outside Lustre. It is interesting that in some of the failures the bucket table itself is not corrupted at the moment of the crash; instead, exp_nid_hash.rhead.next is corrupted. This could be the result of reading a wrong value from the table and using it as a nulls marker in a bucket.

            neilb Neil Brown added a comment -

            When you are testing with that patch do you get the corruption of the rhashtable chain?

            If you do, then I cannot guess what is happening.

            If you don't - that would make sense.  The patch only tests for use-after-free in lustre code.  I don't think the use-after-free is happening in the lustre code.  I think it is happening in some OTHER code that happens to be running on the same machine and lustre is being corrupted by a bug somewhere else.

             


            bzzz Alex Zhuravlev added a comment -

            I'm constantly testing Lustre with https://review.whamcloud.com/c/fs/lustre-release/+/51245, forwarding different memory allocations via alloc_page(), which are then mapped to a dedicated address space, but I can't hit a use-after-free.

            neilb Neil Brown added a comment -

            This looks a lot like a use-after-free bug.  There is no way that the rhead.next field would ever be set to the values it is getting, so it must be getting those values because some code thinks that byte of memory is not in rhead.next.

            So either lustre is continuing to read from it after freeing the memory, or some other code is continuing to write after freeing their memory.  In either case, that other code appears to be performing a "+7" operation!

            I found 2 possible places where lustre can read the memory after freeing it.

            1/ the memory is freed directly rather than using RCU.  So it could already be free when some other thread is walking past the memory in the rhash table.

            2/ ldlm_flock_lookup_cb does a "refcount_inc()" (in class_export_get()), but the final ref might already have been put, and the memory might be about to be freed.  It should do refcount_inc_not_zero()

            These should both be fixed, but I don't see how that can contribute to the observed symptoms.  In particular the details provided in the "more complicated" case show (if I'm interpreting them properly) that the object is still in the rhashtable.  The above two issues would not leave it there.

            So my hypothesis is that some other code or driver on the crashing machine is corrupting data used by lustre.  One way to test this would be to change the code which allocates a 'struct obd_export' to allocate twice as much memory.  This should cause it to be allocated from a different slab, and so reduce the risk of reusing memory that the other driver is abusing.

            But that is just a guess.
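
            For point 2 above, a rough user-space model of the two patterns being contrasted (hypothetical names; in the kernel these are refcount_inc() and refcount_inc_not_zero() on the export refcount, and this is only a sketch of the idea, not the Lustre code):

            #include <stdatomic.h>
            #include <stdbool.h>
            #include <stdio.h>

            struct export {
                    atomic_int refcount;   /* stands in for the kernel's refcount_t */
            };

            /* Unsafe for lookups: blindly bumps the count even if it already reached
             * zero, i.e. the object may be in the middle of being freed. */
            static void export_get(struct export *exp)
            {
                    atomic_fetch_add(&exp->refcount, 1);
            }

            /* Lookup-safe: only takes a reference if at least one is still held,
             * mirroring refcount_inc_not_zero(). */
            static bool export_get_not_zero(struct export *exp)
            {
                    int old = atomic_load(&exp->refcount);

                    while (old != 0) {
                            if (atomic_compare_exchange_weak(&exp->refcount,
                                                             &old, old + 1))
                                    return true;
                    }
                    return false;   /* caller must treat the entry as already gone */
            }

            int main(void)
            {
                    struct export dying = { .refcount = 0 };   /* final put already done */

                    printf("safe lookup took a ref: %d\n", export_get_not_zero(&dying));
                    export_get(&dying);   /* "resurrects" the dying export */
                    printf("refcount after unsafe get: %d\n", atomic_load(&dying.refcount));
                    return 0;
            }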


            "Sergey Cheremencev <scherementsev@ddn.com>" uploaded a new patch: https://review.whamcloud.com/c/fs/lustre-release/+/51780
            Subject: LU-16930 obd: trigger a panic if rhltable corrupted
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: a0df4cab734895af784e3826876183b939ae3e2d

            scherementsev Sergey Cheremencev added a comment - - edited

            A more complicated case, but with the same symptoms.

            crash> bt
            PID: 22239  TASK: ffff888cb9f08000  CPU: 6   COMMAND: "ll_ost01_001"
             #0 [ffff888c00d8b770] machine_kexec at ffffffff8a6662f4
             #1 [ffff888c00d8b7d0] __crash_kexec at ffffffff8a722b62
             #2 [ffff888c00d8b8a0] crash_kexec at ffffffff8a722c50
             #3 [ffff888c00d8b8b8] oops_end at ffffffff8ad91798
             #4 [ffff888c00d8b8e0] no_context at ffffffff8a675d14
             #5 [ffff888c00d8b930] __bad_area_nosemaphore at ffffffff8a675fe2
             #6 [ffff888c00d8b980] bad_area_nosemaphore at ffffffff8a676104
             #7 [ffff888c00d8b990] __do_page_fault at ffffffff8ad94750
             #8 [ffff888c00d8ba00] trace_do_page_fault at ffffffff8ad94a26
             #9 [ffff888c00d8ba40] do_async_page_fault at ffffffff8ad93fa2
            #10 [ffff888c00d8ba60] async_page_fault at ffffffff8ad907a8
                [exception RIP: nid_keycmp+6]
                RIP: ffffffffc0ab0356  RSP: ffff888c00d8bb10  RFLAGS: 00010202
                RAX: 00000000000000b8  RBX: ffff886a7532ae50  RCX: 00000000b94fb39a
                RDX: 00000000000000ba  RSI: 0000000000000002  RDI: ffff888c00d8bb48
                RBP: ffff888c00d8bb88   R8: ffff886a7532ae50   R9: 0000000000000038
                R10: ffffffffc0a96efd  R11: fffff1404d579e00  R12: ffff888baa120cb8 // obd_export
                R13: ffff888b783ac400  R14: ffff888bda00f800  R15: ffff888bda5f0000
                ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
            #11 [ffff888c00d8bb18] obd_nid_add at ffffffffc0ab2b8e [obdclass]
            #12 [ffff888c00d8bb90] target_handle_connect at ffffffffc0d8165a [ptlrpc]
            #13 [ffff888c00d8bca0] tgt_request_handle at ffffffffc0e2bf4a [ptlrpc]
            #14 [ffff888c00d8bd30] ptlrpc_server_handle_request at ffffffffc0dcdf83 [ptlrpc]
            #15 [ffff888c00d8bde8] ptlrpc_main at ffffffffc0dd2d7a [ptlrpc]
            #16 [ffff888c00d8bec8] kthread at ffffffff8a6c5f91
            #17 [ffff888c00d8bf50] ret_from_fork_nospec_begin at ffffffff8ad99ddd
            
            target_handle_connect adds the following export:
            crash> obd_export.exp_client_uuid ffff888b783ac400 // R13
              exp_client_uuid = {
                uuid = "694aac0a-4105-5656-4ce4-51df4a388c1a\000\000\000"
              },
            crash> p ((struct obd_export *)0xffff888b783ac400)->exp_connection.c_peer.nid
            $7 = 1407377771021236
            
            crash> obd_device.obd_nid_hash ffff888bda5f0000 | grep tbl
                  tbl = 0xffff888bda00f800,
            
            crash> obd_export.exp_nid_hash ffff888baa120c00
              exp_nid_hash = {
                rhead = {
                  next = 0xba
                },
                next = 0xffff888c1e67f4b8
              },  
            crash> obd_export.exp_nid_hash 0xffff888c1e67f400
              exp_nid_hash = { 
                rhead = { 
                  next = 0xb3
                },  
                next = 0x0
              },
            
            looking for 0xb3:
            crash> rd 0xffff888bda00f880 128 | grep 000000b1
            ffff888bda00fb40:  00000000000000b1 ffff888c1915ecb8   ................
            crash>
            crash> obd_export.exp_nid_hash ffff888c1915ec00
              exp_nid_hash = {
                rhead = {
                  next = 0xffff888baa120cb8
                },
                next = 0x0
              },
            Let's go again from the beginning:
            crash> obd_export.exp_nid_hash ffff888c1915ec00
              exp_nid_hash = {
                rhead = {
                  next = 0xffff888baa120cb8
                },  
                next = 0x0 
              },
            crash> p ((struct obd_export *)0xffff888c1915ec00)->exp_connection.c_peer.nid
            $3 = 1407377771020507
            
            crash> obd_export.exp_nid_hash 0xffff888baa120c00
              exp_nid_hash = { 
                rhead = { 
                  next = 0xba
                },  
                next = 0xffff888c1e67f4b8
              },
            crash> p ((struct obd_export *)0xffff888baa120c00)->exp_connection.c_peer.nid
            $4 = 1407377771020299
            
            crash> obd_export.exp_nid_hash 0xffff888c1e67f400
              exp_nid_hash = { 
                rhead = { 
                  next = 0xb3
                },  
                next = 0x0 
              },
            crash> p ((struct obd_export *)0xffff888c1e67f400)->exp_connection.c_peer.nid
            $5 = 1407377771020299
            

            As can be seen above, again we have 0xba instead of 0xb3. Thus the walk doesn't stop iterating and dereferences 0xba as an address.

            One more thing from the 2 latest investigations. In 2 cases the next value is larger than it should be by 7, i.e.:

            0xba instead of 0xb3
            0x1be instead of 0xb7
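
            These values line up with the lqe_glbl_entry bitfield analysis elsewhere in this ticket: clearing bit 0 (lge_edquot = 0) while setting bit 3 (lge_edquot_nu = 1) on a word whose bit 0 was set and bit 3 clear adds exactly 7. A tiny sketch over the values seen in the dumps (for 0x1be the low byte still follows the pattern, 0xb7 -> 0xbe, but the extra high bit is not explained by this arithmetic alone):

            #include <stdio.h>

            int main(void)
            {
                    /* Values observed before corruption in this ticket. */
                    unsigned long nulls[] = { 0xb3, 0x23, 0x3, 0x7 };

                    /* Clear bit 0, set bit 3: -1 + 8 = +7. Prints 0xba, 0x2a, 0xa, 0xe. */
                    for (int i = 0; i < 4; i++)
                            printf("0x%lx -> 0x%lx\n",
                                   nulls[i], (nulls[i] & ~0x1UL) | 0x8UL);
                    return 0;
            }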

            bzzz Alex Zhuravlev added a comment -

            scherementsev can you please attach the full set of backtraces for the last case? A Lustre log?

            People

              Assignee: scherementsev Sergey Cheremencev
              Reporter: scherementsev Sergey Cheremencev
              Votes: 0
              Watchers: 7
