[LU-4380] data corruption when copy a file to a new directory (sles11sp2 only) Created: 12/Dec/13  Updated: 13/Feb/14  Resolved: 14/Jan/14

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.4.1
Fix Version/s: None

Type: Bug Priority: Critical
Reporter: Jay Lan (Inactive) Assignee: Bob Glossman (Inactive)
Resolution: Cannot Reproduce Votes: 0
Labels: None
Environment:

server: centos 2.1.5 server OR centos 2.4.1 server
client: sles11sp2 2.4.1 client

Source can be found at github.com/jlan/lustre-nas. The tag for the client is 2.4.1-1nasC.


Attachments: Text File LU-4380-debug.patch     File LU4380.dbg.20121230.resend.tgz     File LU4380.dbg.20121230.tgz     File LU4380.dbg.20131224    
Issue Links:
Duplicate
duplicates LU-3219 FIEMAP does not sync data or return c... Resolved
Severity: 3
Rank (Obsolete): 12006

 Description   

Users reported a data corruption problem. We have a test script to reproduce the problem.

When run in a Lustre file system with a sles11sp2 host as the remote host, the script fails (sum reports 00000). It works if the remote host is running sles11sp1 or CentOS.

— cut here for test5.sh —
#!/bin/sh

host=${1:-endeavour2}
rm -fr zz hosts
cp /etc/hosts hosts
#fsync hosts
ssh $host "cd $PWD && mkdir -p zz && cp hosts zz/"
sum hosts zz/hosts
— cut here —

Good result:
./test5.sh r301i0n0
61609 41 hosts
61609 41 zz/hosts

Bad result:
./test5.sh r401i0n2
61609 41 hosts
00000 41 zz/hosts

Notes:

  • If the copied file is small enough (e.g., /etc/motd), the script succeeds.
  • If you uncomment the fsync, the script succeeds.
  • When it fails, stat reports no blocks have been allocated to the zz/hosts file:

$ stat zz/hosts
File: `zz/hosts'
Size: 41820 Blocks: 0 IO Block: 2097152 regular file
Device: 914ef3a8h/2437870504d Inode: 163153538715835056 Links: 1
Access: (0644/-rw-r--r--) Uid: (10491/dtalcott) Gid: ( 1179/ cstaff)
Access: 2013-12-12 09:24:46.000000000 -0800
Modify: 2013-12-12 09:24:46.000000000 -0800
Change: 2013-12-12 09:24:46.000000000 -0800

  • If you run in an NFS file system, the script usually succeeds, but sometimes reports a no such file error on the sum of zz/hosts. After a few seconds, though, the file appears, with the correct sum. (Typical NFS behavior.)
  • Acts the same on nbp7 and nbp8.


 Comments   
Comment by Jay Lan (Inactive) [ 12/Dec/13 ]

More data points. These clients worked correctly:
centos: 2.4.0-3nasC tag.
sles11sp1: 2.1.5-1nasC tag.

Comment by Peter Jones [ 12/Dec/13 ]

Bob

Could you please try and reproduce this issue?

Thanks

Peter

Comment by Andreas Dilger [ 14/Dec/13 ]

Is the "cp" on SLES11SP2 using the FIEMAP ioctl to determine if the file has data in it? This sounds like an old bug (LU-2580 and LU-3219) that was already fixed. Is http://review.whamcloud.com/6585 included on this client?
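[For reference, a minimal sketch of the kind of FIEMAP query a FIEMAP-aware cp can make to see whether a file has any mapped extents. This is an illustration only, not the coreutils implementation; it is a standalone helper that takes the file name as an argument.]

— cut here for fiemap_probe.c (sketch only) —
/* Minimal sketch of the FIEMAP query a FIEMAP-aware cp can use to see
 * whether a file has any mapped extents. Illustration only, not the
 * coreutils implementation. Build: gcc -o fiemap_probe fiemap_probe.c
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>       /* FS_IOC_FIEMAP */
#include <linux/fiemap.h>   /* struct fiemap, FIEMAP_MAX_OFFSET */

int main(int argc, char **argv)
{
    struct fiemap fm;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    memset(&fm, 0, sizeof(fm));
    fm.fm_start = 0;
    fm.fm_length = FIEMAP_MAX_OFFSET; /* map the whole file */
    fm.fm_extent_count = 0;           /* count extents only, fetch none */

    if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) {
        perror("FS_IOC_FIEMAP");
        close(fd);
        return 1;
    }

    /* If dirty data is still cached on another client and not yet on the
     * OST, this count (like st_blocks) can understate the real data --
     * the behaviour addressed by LU-3219. */
    printf("%s: %u mapped extent(s)\n", argv[1], fm.fm_mapped_extents);
    close(fd);
    return 0;
}
— cut here —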

Comment by Bob Glossman (Inactive) [ 16/Dec/13 ]

Looking at the tagged source 2.4.1-1nasC from the github.com/jlan/lustre-nas repository mentioned above, it appears the fix for LU-3219, http://review.whamcloud.com/6377, is present in the client version being used.

So far I haven't been able to reproduce this in a small test environment. I suspect the /etc/hosts file I have isn't nearly big enough to show the problem. I think I will need a bigger test file.

It seems to me a text file like /etc/hosts should be full of contiguous text and wouldn't trigger the FIEMAP problems related to sparse files. I wouldn't expect such a file to be sparse.

Comment by Bob Glossman (Inactive) [ 16/Dec/13 ]

Went to a 2.4.1 client; still can't reproduce the problem.

You specify that the remote client, the one your script ssh's to, is sles11sp2. You never mention what version of Lustre or kernel the local client, the one your script starts out on and does the local cp on, is running.

Comment by Jay Lan (Inactive) [ 17/Dec/13 ]

Hi Bob,

Here is more input from our admin, who came up with the reproducer. In the testing below he used adjacent sles11sp2 nodes as the local and remote hosts.

== quote on ==
I'm not surprised they could not reproduce.

I used various hosts as local or remote targets. It always failed when the remote host was sles11sp2 and succeeded in other cases.

I have one additional failure variant:

$ ssh r401i0n0 "cd $PWD && ./test5.sh" r401i0n1
61393 1670 hosts
48836 1670 zz/hosts

$ cmp -l hosts zz/hosts | head
1048577 151 0
1048578 61 0
1048579 156 0
1048580 66 0
1048581 55 0
1048582 151 0
1048583 142 0
1048584 60 0
1048585 40 0
1048586 162 0

Here, the local and remote hosts are adjacent sles11sp2 nodes. Instead of the second copy of the file being completely empty, the missing blocks start after exactly 1 MiB. I tried tweaking the stripe sizes of the source and destination directories, but the result was the same.

I then used a bigger file. The result is that all of the 1 MiB chunks except the last, partial one get copied okay. But remember that if the source file is very small, it also gets copied completely okay.
== quote off ==

Comment by Andreas Dilger [ 18/Dec/13 ]

Jay, could you please run strace as part of your reproducer and attach the output from a failed run, to see whether the cp is using FIEMAP, and what results it gets back. It may be that cp is not even trying to copy the data if it gets back a result that indicates the file is sparse or something.

Comment by Bob Glossman (Inactive) [ 18/Dec/13 ]

Continued efforts to reproduce the problem locally haven't panned out. I went to a bigger test file, went to 2.4.1 clients, and went to launching from one sles11sp2 client onto another as described. All cases succeeded; no failures seen. I must be doing something significantly different, but I'm not sure what.

Comment by Jay Lan (Inactive) [ 18/Dec/13 ]

I passed along Andreas' request to the tester.

Comment by Jay Lan (Inactive) [ 18/Dec/13 ]

Data from our admin:

== quote on ==
This gets very interesting. Here is the good stuff from the strace from the cp that happens on the remote host:

stat("zz/", {st_dev=makedev(3827, 595112), st_ino=163703357997847957, st_mode=S_IFDIR|0755, st_nlink=2, st_uid=10491, st_gid=1179, st_blksize=4096, st_blocks=8, st_size=4096, st_atime=2013/12/18-11:30:20, st_mtime=2013/12/18-11:30:20, st_ctime=2013/12/18-11:30:20}) = 0
stat("hosts", {st_dev=makedev(3827, 595112), st_ino=163571199807331126, st_mode=S_IFREG|0644, st_nlink=1, st_uid=10491, st_gid=1179, st_blksize=4194304, st_blocks=14336, st_size=8037670, st_atime=2013/12/18-11:30:20, st_mtime=2013/12/18-11:30:20, st_ctime=2013/12/18-11:30:20}) = 0
stat("zz/hosts", 0x7fffffffe6c0) = -1 ENOENT (No such file or directory)
open("hosts", O_RDONLY) = 3
fstat(3, {st_dev=makedev(3827, 595112), st_ino=163571199807331126, st_mode=S_IFREG|0644, st_nlink=1, st_uid=10491, st_gid=1179, st_blksize=4194304, st_blocks=14336, st_size=8037670, st_atime=2013/12/18-11:30:20, st_mtime=2013/12/18-11:30:20, st_ctime=2013/12/18-11:30:20}) = 0
open("zz/hosts", O_WRONLY|O_CREAT|O_EXCL, 0644) = 4
fstat(4, {st_dev=makedev(3827, 595112), st_ino=163703357997847959, st_mode=S_IFREG|0644, st_nlink=1, st_uid=10491, st_gid=1179, st_blksize=4194304, st_blocks=0, st_size=0, st_atime=2013/12/18-11:30:20, st_mtime=2013/12/18-11:30:20, st_ctime=2013/12/18-11:30:20}) = 0
mmap(NULL, 4202496, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fffec518000
ioctl(3, 0xc020660b, 0x7fffffffd390) = 0
read(3, "\37\213\10\0\373\353\202R\2\3\324<\375W\333\306\262\375\25\375\25\33!\32p\20\376H \201\3249"..., 4194304) = 4194304
write(4, "\37\213\10\0\373\353\202R\2\3\324<\375W\333\306\262\375\25\375\25\33!\32p\20\376H \201\3249"..., 4194304) = 4194304
read(3, "r\342B\316~\206g\324\35dn\263P\324.\302QAn\205\352\267\3640\370G\205L\222\17\242\327"..., 3145728) = 3145728
write(4, "r\342B\316~\206g\324\35dn\263P\324.\302QAn\205\352\267\3640\370G\205L\222\17\242\327"..., 3145728) = 3145728
ftruncate(4, 8037670) = 0
close(4) = 0
close(3) = 0

Now, if you study this, you see that cp did a read/write of 4 MiB and then a read/write of 3 MiB, and then used ftruncate to set the size of the destination file to the st_size reported by the fstat(3, ...) call. Where did cp come up with 7 MiB as the amount to copy? From the st_blocks field in the fstat call. Apparently, the sles11sp2 cp has been "upgraded" to pay attention to st_blocks rather than just doing the copy.

== quote off ==
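[The st_blocks check described above can be illustrated with a short sketch. This is an approximation of the heuristic, not the coreutils source: st_blocks is in 512-byte units, so when st_blocks * 512 falls short of st_size the file looks sparse to a "smart" cp.]

— cut here for sparse_guess.c (sketch only) —
/* Approximation of the st_blocks heuristic described above; not the actual
 * coreutils logic. st_blocks is counted in 512-byte units, so the 14336
 * blocks in the strace above correspond to exactly the 7 MiB cp copied.
 */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat st;
    long long allocated;

    if (argc != 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }
    if (stat(argv[1], &st) < 0) {
        perror("stat");
        return 1;
    }

    allocated = (long long)st.st_blocks * 512;
    printf("st_size=%lld allocated=%lld\n", (long long)st.st_size, allocated);

    if (allocated < st.st_size)
        /* On the failing client this fires even for a fully written file,
         * because the writer's cached data has not yet reached the OST. */
        printf("looks sparse: a st_blocks-aware cp may stop short\n");
    else
        printf("fully allocated: a plain full copy is expected\n");
    return 0;
}
— cut here —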

Comment by Bob Glossman (Inactive) [ 18/Dec/13 ]

What is the specific kernel version of your sles11sp2? I ask because over time there have been several. I'm wondering whether different VFS-level changes in the version you are using could explain why I'm not seeing the problem reproduce.

Comment by Jay Lan (Inactive) [ 18/Dec/13 ]

The sles11sp2 version we are running in production is 3.0.74-0.6.6.2.

Comment by Bob Glossman (Inactive) [ 18/Dec/13 ]

I believe the ioctl(3, 0xc020660b, 0x7fffffffd390) shown in the strace output is a FS_IOC_FIEMAP ioctl. I suspect that is where cp is getting the sizes to read/write. It is interesting that the amount matches the file allocation size (st_blocks in 512-byte blocks reported by the stat call) and is smaller than the st_size reported in the stat call.
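[That identification can be double-checked with a one-liner; sketch only:]

— cut here for fiemap_ioctl_check.c (sketch only) —
/* Confirming that 0xc020660b is FS_IOC_FIEMAP: the ioctl number encodes the
 * read/write direction (0xc0...), sizeof(struct fiemap) = 0x20, the 'f'
 * type byte (0x66) and command number 11 (0x0b).
 */
#include <stdio.h>
#include <linux/fs.h>       /* FS_IOC_FIEMAP */
#include <linux/fiemap.h>   /* struct fiemap */

int main(void)
{
    /* prints 0xc020660b, matching the ioctl seen in the strace */
    printf("FS_IOC_FIEMAP = %#x\n", (unsigned int)FS_IOC_FIEMAP);
    return 0;
}
— cut here —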

Comment by Bob Glossman (Inactive) [ 18/Dec/13 ]

All the sles versions I'm testing with are newer than that. I have some on 3.0.93-0.5, the most recent version of sles11sp2 we build and ship on, and some on 3.0.101-05, the most recent kernel update version for sles11sp2.

Comment by Bob Glossman (Inactive) [ 18/Dec/13 ]

Just to collect some additional data, could you please add the --sparse=never option to your cp commands, see if that avoids the failures, and again get straces of the cp.

Comment by Jay Lan (Inactive) [ 18/Dec/13 ]

Here is the second part of Dale's reply in response to Andreas' strace request. I did not include the second part in the first attempt. He actually did try with --sparse=never.

== quote on ==
So, there are two bugs here. First, Lustre did not update st_blocks for the source file soon enough. Second, sles11sp2's cp is too "smart" for its own good.

FWIW:

  • I used the sles11sp1 version of cp under sles11sp2 and it produced a correct copy, in spite of the bad st_blocks value.
  • I tried adding the --sparse=never option to cp to see if I could get it to ignore st_blocks. That made it even stupider: It copied the 7 MiB as before, then explicitly filled the rest of st_size with zeros.
== quote off ==
Comment by Jay Lan (Inactive) [ 19/Dec/13 ]

Hi Bob,

The '/bin/cp' command is packaged in coreutils on sles11sp2.
My version is coreutils-8.12-6.23.1. What version is yours?

Comment by Niu Yawei (Inactive) [ 19/Dec/13 ]

This looks like the same problem as LU-2580.

Some data of the source file 'hosts' is still cached on the client and not yet flushed back to the OST, so the st_blocks reported by stat is less than the actual file size. 'cp' then thinks it is a sparse file and tries to copy only the extents returned by the fiemap ioctl.

So what we need to figure out is whether the 'cp' in sles11sp2 calls fiemap with the FIEMAP_FLAG_SYNC flag to make sure all the cached data is flushed back before it gets the extents.

Comment by Niu Yawei (Inactive) [ 19/Dec/13 ]

I checked the source code of coreutils-8.12 from gnu.org; it looks like FIEMAP_FLAG_SYNC is always set when reading extents. I'm not sure if the cp shipped with sles11sp2 differs (and not sure where to get the coreutils source for sles11sp2).
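[For reference, setting that flag is one extra line in the request; a sketch of the pattern being discussed, not the coreutils extent-scan code. The count_extents helper name is made up for this illustration.]

— cut here for fiemap_sync_check.c (sketch only) —
/* Sketch: issue the same extent-count FIEMAP query twice, once without and
 * once with FIEMAP_FLAG_SYNC, the flag being discussed here (it asks the
 * filesystem to flush dirty data before reporting extents). Illustration
 * only, not the coreutils extent-scan code.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

static int count_extents(int fd, __u32 flags)
{
    struct fiemap fm;

    memset(&fm, 0, sizeof(fm));
    fm.fm_start = 0;
    fm.fm_length = FIEMAP_MAX_OFFSET;
    fm.fm_flags = flags;
    fm.fm_extent_count = 0;   /* count only, don't fetch the extents */
    if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0)
        return -1;
    return (int)fm.fm_mapped_extents;
}

int main(int argc, char **argv)
{
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    printf("no sync : %d extent(s)\n", count_extents(fd, 0));
    printf("sync    : %d extent(s)\n", count_extents(fd, FIEMAP_FLAG_SYNC));
    close(fd);
    return 0;
}
— cut here —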

Bob, I guess your coreutils version isn't the same as Jay's; that's why you can't reproduce the problem. Could you try coreutils-8.12-6.23.1?

Comment by Bob Glossman (Inactive) [ 19/Dec/13 ]

I have coreutils-8.12-6.25.29.1 on sles11sp2.

Comment by Bob Glossman (Inactive) [ 19/Dec/13 ]

Tried backing down to the -6.23.1 coreutils version. Still couldn't make the problem happen. The cp binary looks identical between the two versions anyway; I checked. The package differences must be elsewhere.

Comment by Jay Lan (Inactive) [ 19/Dec/13 ]

Niu, LU-2580 referred to the fixes for LU-2267 and LU-2286. We have both patches in our 2.4.1 branch.

Comment by Niu Yawei (Inactive) [ 24/Dec/13 ]

Jay, could you try to reproduce with the D_TRACE log enabled? Let's see from the Lustre log whether the sync flag is specified in the fiemap call:

  • echo +trace > /proc/sys/lnet/debug
  • lctl debug_daemon start $tmpfile 300
  • lctl mark "=== cp test ==="
  • cp test
  • lctl mark "=== cp test end ==="
  • lctl debug_daemon stop
  • lctl debug_file $tmpfile $logfile
  • attach the $logfile to this ticket.
Comment by Niu Yawei (Inactive) [ 24/Dec/13 ]

It's better to have this patch applied when collecting debug logs.

Comment by Jay Lan (Inactive) [ 24/Dec/13 ]

Attached is the debug output Niu requested. I did not run the test with Niu's patch, though, since I need to get authorization to put a new binary onto a production system.

Comment by Jay Lan (Inactive) [ 24/Dec/13 ]

I was asked to check with you whether having Lustre not implement the FIEMAP ioctl would be a good quick workaround.

Note that in our case, the writer is on one host and the reader is on a different one. Is this why FIEMAP_FLAG_SYNC has no effect: The _SYNC flag is on the reader host, but the cached data are on the writer host?
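[For what it's worth, the writer-side workaround already hinted at by the commented-out "fsync hosts" step in test5.sh is plain POSIX; a minimal sketch of such a helper, with a made-up name, is below.]

— cut here for fsync_file.c (sketch only) —
/* Minimal writer-side workaround sketch: fsync the source file on the
 * client that wrote it before another client copies it, which is what the
 * commented-out "fsync hosts" step in test5.sh is meant to do. Plain POSIX,
 * nothing Lustre-specific; the helper name is made up for this sketch.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDONLY);   /* fsync works on a read-only fd */
    if (fd < 0 || fsync(fd) < 0) {
        perror(argv[1]);
        return 1;
    }
    close(fd);
    return 0;
}
— cut here —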

Comment by Niu Yawei (Inactive) [ 25/Dec/13 ]


Note that in our case, the writer is on one host and the reader is on a different one. Is this why FIEMAP_FLAG_SYNC has no effect: The _SYNC flag is on the reader host, but the cached data are on the writer host?

Ah, I was thinking it was on the same client, but the fix for LU-3219 should force the writer to flush before the reader calls fiemap (Andreas & Bob mentioned it above).
Then we may need logs on both clients and the OST. Could you rerun the test and collect logs on the two clients and the OSTs the objects are striped on? Please enable D_TRACE, D_DLMTRACE and D_CACHE this time.

Comment by Jay Lan (Inactive) [ 30/Dec/13 ]

The tar gz file contains
LU4380.dbg.rd.20131230
LU4380.dbg.wr.20131230
Hmm, I hope I did not get it wrong. The rd file is from the local host where the command was executed, and the wr file was meant to be from the remote host where the file was created.

I did not include a debug trace for the OSS. The 'lfs getstripe zz/hosts' showed:
zz/hosts
lmm_magic: 0x0BD10BD0
lmm_seq: 0x247f48d69
lmm_object_id: 0x69e
lmm_stripe_count: 1
lmm_stripe_size: 1048576
lmm_stripe_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 80
obdidx objid objid group
80 3494638 0x3552ee 0

and there are 26 OSTs on that fs. So, does it fall on oss3, if it starts from oss1?

Do I have to turn on +trace +dlmtrace +cache on the complete oss?

Comment by Jay Lan (Inactive) [ 30/Dec/13 ]

I tried to run the test again, with debugging on the OSS. The debug output did not contain the lctl marks; the 300M specified in debug_daemon was not big enough.

Comment by Jay Lan (Inactive) [ 30/Dec/13 ]

I tried to run the test on a Lustre filesystem that uses older hardware but has much less activity. I set the debug file size to 2G.

The problem was that "lctl debug_daemon stop" hung until the 2G ran out, so the debug file missed most of the test. The same thing happened when I specified 1G.

Comment by Niu Yawei (Inactive) [ 31/Dec/13 ]

Thank you Jay.

I guess wr is the client which executed the test script (which cp's the hosts file from /etc/hosts)? Because I see the blocking AST in the wr log:

00010000:00000001:28.0F:1388433400.002345:0:95598:0:(ldlm_lockd.c:1694:ldlm_handle_bl_callback()) Process entered
00000100:00000001:22.0:1388433400.002345:0:4805:0:(events.c:407:reply_out_callback()) Process leaving
00000100:00000001:29.0:1388433400.002346:0:54104:0:(service.c:1571:ptlrpc_server_hpreq_fini()) Process entered
00000100:00000001:29.0:1388433400.002346:0:54104:0:(service.c:1582:ptlrpc_server_hpreq_fini()) Process leaving
00000100:00000001:29.0:1388433400.002347:0:54104:0:(service.c:2078:ptlrpc_server_handle_request()) Process leaving (rc=1 : 1 : 1)
00000400:00000001:29.0:1388433400.002348:0:54104:0:(watchdog.c:448:lc_watchdog_disable()) Process entered
00010000:00010000:28.0:1388433400.002348:0:95598:0:(ldlm_lockd.c:1696:ldlm_handle_bl_callback()) ### client blocking AST callback handler ns: nbp8-OST0050-osc-ffff8807e657ec00 lock: ffff880aa9efb480/0x11c2317b4b200a63 lrc: 3/0,0 mode: PW/PW res: [0x3552ee:0x0:0x0].0 rrc: 1 type: EXT [0->18446744073709551615] (req 40960->18446744073709551615) flags: 0x420000000000 nid: local remote: 0x42e20173aa80c345 expref: -99 pid: 63286 timeout: 0 lvb_type: 1
00000400:00000001:29.0:1388433400.002349:0:54104:0:(watchdog.c:456:lc_watchdog_disable()) Process leaving
00010000:00010000:28.0:1388433400.002354:0:95598:0:(ldlm_lockd.c:1709:ldlm_handle_bl_callback()) Lock ffff880aa9efb480 already unused, calling callback (ffffffffa08f79e0)
00000020:00000001:28.0:1388433400.002372:0:95598:0:(cl_lock.c:357:cl_lock_get_trust()) acquiring trusted reference: 0 ffff88089dfea238 18446744072108337004
00000020:00000001:28.0:1388433400.002374:0:95598:0:(cl_lock.c:150:cl_lock_trace0()) got mutex: ffff88089dfea238@(1 ffff880f181f8340 1 5 0 0 0 0)(ffff880404adca70/1/1) at cl_lock_mutex_tail():668
00000020:00000001:28.0:1388433400.002377:0:95598:0:(cl_lock.c:1839:cl_lock_cancel()) Process entered
00000020:00010000:28.0:1388433400.002378:0:95598:0:(cl_lock.c:150:cl_lock_trace0()) cancel lock: ffff88089dfea238@(1 ffff880f181f8340 1 5 0 0 0 0)(ffff880404adca70/1/1) at cl_lock_cancel():1840
00000020:00000001:28.0:1388433400.002381:0:95598:0:(cl_lock.c:804:cl_lock_cancel0()) Process entered
00000008:00000001:28.0:1388433400.002382:0:95598:0:(osc_lock.c:1305:osc_lock_flush()) Process entered
00000008:00000001:28.0:1388433400.002383:0:95598:0:(osc_cache.c:2827:osc_cache_writeback_range()) Process entered
00000008:00000001:28.0:1388433400.002386:0:95598:0:(osc_cache.c:2770:osc_cache_wait_range()) Process entered
00000008:00000020:28.0:1388433400.002387:0:95598:0:(osc_cache.c:2807:osc_cache_wait_range()) obj ffff88105cc78408 ready 0|-|- wr 0|-|- rd 0|- sync file range.
00000008:00000001:28.0:1388433400.002388:0:95598:0:(osc_cache.c:2808:osc_cache_wait_range()) Process leaving (rc=0 : 0 : 0)
00000008:00000020:28.0:1388433400.002389:0:95598:0:(osc_cache.c:2923:osc_cache_writeback_range()) obj ffff88105cc78408 ready 0|-|- wr 0|-|- rd 0|- pageout [0, 18446744073709551615], 0.

I think obj ffff88105cc78408 should be the source hosts file on Lustre, and when the remote client tries to cp it to the zz directory, a blocking AST should be sent to the local client.

The interesting thing is that I didn't see fiemap calls on the remote client (the 'rd' client); maybe it did the copy with normal reads. Anyway, I didn't see anything wrong in the log. Did the test succeed or not?

Since the remote client didn't call fiemap, we don't need the OST log for now, thank you.

Comment by Jay Lan (Inactive) [ 31/Dec/13 ]

Please discard LU4380.dbg.20121230.tgz. The two files contained in that tarball had confusing names (besides, it should say 2013). Here is the new tarball: LU4380.dbg.20131230.resend.tgz. It holds the same two files with new names:
LU4380.dbg.local.20131230
LU4380.dbg.remote.20131230

The fiemap actually happened on the remote client, the one which did the file creation and content copying.

I had a problem in that I was not able to stop the debug_daemon until the good data had been flushed out of the debug file on the OST side. You will need to tell me how to address that problem so that I can produce an OST log for you.

Comment by Niu Yawei (Inactive) [ 02/Jan/14 ]

I had a problem in that I was not able to stop the debug_daemon until the good data had been flushed out of the debug file on the OST side. You will need to tell me how to address that problem so that I can produce an OST log for you.

You can try to execute 'lctl clear' on OSS to clear the debug buffer before testing.

Comment by Jay Lan (Inactive) [ 07/Jan/14 ]

I was wrong in saying that the reproducer could be run against a 2.4.1 CentOS server. It was actually a 2.4.0 server with patches. The branch was nas-2.4.0-1 and the tag was 2.4.0-3nasS.

We recently updated the 2.4.0 mds (for testing LU-4403). Well, I am not able to reproduce the problem any more. The patches I picked up were:
LU-4179 mdt: skip open lock enqueue during resent
LU-3992 libcfs: Fix NUMA emulated mode
LU-4139 quota: improve write performance when over softlimit
LU-4336 quota: improper assert in osc_quota_chkdq()
LU-4403 mds: extra lock during resend lock lookup
LU-4028 quota: improve lfs quota output

Which of the above could have changed the outcome?

Also, do you expect it to work correctly when running 2.4.1 client against 2.1.5 server? I am still able to reproduce against 2.1.5 server.

Comment by Niu Yawei (Inactive) [ 08/Jan/14 ]

Which of the above could have changed the outcome?

None of those patches seems related to this problem, and I don't see why upgrading the MDS would change the outcome (I think this is a problem involving only the client and OST). Could you verify the client and OSS versions? Do they all have patch 58444c4e9bc58e192f0bc0c163a5d51d42ba4255 (LU-3219)?

Also, do you expect it to work correctly when running 2.4.1 client against 2.1.5 server? I am still able to reproduce against 2.1.5 server.

Does the 2.1.5 server have the patch 58444c4e9bc58e192f0bc0c163a5d51d42ba4255 applied?

Comment by Mahmoud Hanafi [ 10/Jan/14 ]

I have gathered clean debug logs from the local client, remote client, and OSS. The files are too large to attach here. I have uploaded them to your ftp site 'ftp://ftp.whamcloud.com/uploads'.

The filename is "LU_4380.debug.tgz"


$ tar tzvf LU_4380.debug.tgz
-rw-r--r-- root/root 215807901 2014-01-10 09:45 lu-4380.out.LOCALHOST
-rw-r--r-- root/root 1198791 2014-01-10 09:45 lu-4380.out.OSS
-rw-r--r-- root/root 135327548 2014-01-10 09:45 lu-4380.out.REMOTEHOST


Comment by Niu Yawei (Inactive) [ 13/Jan/14 ]
00000010:00000001:0.0:1389375898.914586:0:15540:0:(ost_handler.c:1261:ost_get_info()) Process leaving (rc=0 : 0 : 0)

Mahmoud, it looks like your OST is running 2.1.5 and doesn't have patch 58444c4e9bc58e192f0bc0c163a5d51d42ba4255 (LU-3219) applied, so the data corruption is expected.

Comment by Jay Lan (Inactive) [ 14/Jan/14 ]

We tested the 2.1.5 server with the LU-3219 patch and the problem went away.

Since somehow we are no longer able to reproduce the problem with our 2.4.0 server (yes, LU-3219 was included in the 2.4.0 release), we can close this ticket. Thanks for your help!

Comment by Peter Jones [ 14/Jan/14 ]

ok - thanks Jay!
