Only store uintmax as the number of entries in the regular end record
when the real count doesn't fit. p7zip 16.02 rejects zip files where the
number of entries in the regular end record is larger than the number of
entries counted in the central directory.
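A minimal sketch of the intended behavior; putEntryCount and its
arguments are illustrative names, not the code in this change:

    import "encoding/binary"

    // putEntryCount writes the entry-count field of the regular
    // end-of-central-directory record. It falls back to the 0xffff
    // sentinel, which tells readers to consult the zip64 end record,
    // only when the real count doesn't fit in 16 bits.
    func putEntryCount(field []byte, entries int) {
        count := entries
        if count > 0xffff {
            count = 0xffff
        }
        binary.LittleEndian.PutUint16(field, uint16(count))
    }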
Fixes: 187485108
Test: TestZip64P7ZipRecords
Change-Id: I0d116e228a0ee26e4e95bb3f35771da236a056eb
If the length of a stored file is more than 2^32 and a data descriptor
is not being used, then a ZIP64 extra is required in the file header to
store the uncompressed and compressed lengths.
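A minimal sketch of the rule, assuming a zip.FileHeader with the usual
64-bit size fields; the function name and the placement of the check are
illustrative, not the code in this change:

    import (
        "archive/zip"
        "encoding/binary"
    )

    const zip64ExtraID = 0x0001

    // addZip64ExtraIfNeeded appends a zip64 extra block carrying the
    // real sizes when they don't fit in the 32-bit header fields, and
    // stores the 0xffffffff sentinel in those fields.
    func addZip64ExtraIfNeeded(fh *zip.FileHeader) {
        if fh.UncompressedSize64 < 1<<32 && fh.CompressedSize64 < 1<<32 {
            return
        }
        var buf [20]byte
        binary.LittleEndian.PutUint16(buf[0:2], zip64ExtraID)
        binary.LittleEndian.PutUint16(buf[2:4], 16) // payload length
        binary.LittleEndian.PutUint64(buf[4:12], fh.UncompressedSize64)
        binary.LittleEndian.PutUint64(buf[12:20], fh.CompressedSize64)
        fh.Extra = append(fh.Extra, buf[:]...)
        fh.UncompressedSize = 0xffffffff
        fh.CompressedSize = 0xffffffff
    }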
Bug: 175055267
Test: TestCopyFromZip64
Change-Id: Id414b4c04f48aefabfd835bd8339333d36576375
When CopyFrom writes a zip entry, it strips extra fields and generates
data descriptors. When writing data descriptors, it only writes 64-bit
values if the relevant sizes are >4GB. In some cases, the sizes are <4GB
but the 32-bit size fields are set to -1. In this situation, CopyFrom
writes an incorrect local file header, resulting in a zip file that
can't be parsed by standard zip tools.
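(-1 stored in a uint32 field is the 0xffffffff sentinel.) A sketch of
the idea behind the fix, normalizing the sentinel from the 64-bit fields
before the local file header is written; names are illustrative, not the
exact code in this change:

    import "archive/zip"

    const uint32Max = 0xffffffff

    // fixSizes repopulates 32-bit size fields that carry the 0xffffffff
    // sentinel even though the real sizes fit in 32 bits, so the local
    // file header and data descriptor are written consistently.
    func fixSizes(fh *zip.FileHeader) {
        if fh.CompressedSize == uint32Max && fh.CompressedSize64 < uint32Max {
            fh.CompressedSize = uint32(fh.CompressedSize64)
        }
        if fh.UncompressedSize == uint32Max && fh.UncompressedSize64 < uint32Max {
            fh.UncompressedSize = uint32(fh.UncompressedSize64)
        }
    }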
Test: Unit Tests
Bug: 161922066
Change-Id: I64319a80647013eaf7693cf8bf5c6120016913a3
The extended-timestamp extra block differs between the Local File Header
and the Central Directory. We have to strip it out to avoid filling it
in incorrectly during zip2zip.
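A sketch of the stripping, with a hypothetical stripExtra helper; the
extended-timestamp block uses header ID 0x5455:

    import "encoding/binary"

    // stripExtra returns the extra field with every block matching id
    // removed. An extra field is a sequence of (ID, size, payload)
    // blocks with little-endian 16-bit ID and size.
    func stripExtra(extra []byte, id uint16) []byte {
        var out []byte
        for len(extra) >= 4 {
            size := 4 + int(binary.LittleEndian.Uint16(extra[2:4]))
            if size > len(extra) {
                break // malformed; keep the remainder as-is
            }
            if binary.LittleEndian.Uint16(extra[0:2]) != id {
                out = append(out, extra[:size]...)
            }
            extra = extra[size:]
        }
        return append(out, extra...)
    }

Used as: fh.Extra = stripExtra(fh.Extra, 0x5455)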
Bug: b/65455145
Test: add unit-test.
Change-Id: I17e3f6c10fd6a068019620b4426f6042f6fac317
Also filter out the META-INF/TRANSITIVE dir, and report warnings when
merge_zips sees duplicate entries with different CRC hashes.
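A sketch of both checks; skipEntry and the map of seen CRCs are
hypothetical names, not the code in this change:

    import (
        "archive/zip"
        "fmt"
        "os"
        "strings"
    )

    // skipEntry reports whether an entry should be dropped: anything
    // under META-INF/TRANSITIVE/, and duplicates of names already seen,
    // warning when the duplicate's contents differ.
    func skipEntry(seen map[string]uint32, f *zip.File) bool {
        if strings.HasPrefix(f.Name, "META-INF/TRANSITIVE/") {
            return true
        }
        if crc, ok := seen[f.Name]; ok {
            if crc != f.CRC32 {
                fmt.Fprintf(os.Stderr,
                    "warning: duplicate entry %q with different CRC\n", f.Name)
            }
            return true
        }
        seen[f.Name] = f.CRC32
        return false
    }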
Bug: b/65455145
Test: m clean && m -j java (locally)
Change-Id: I47172ffa27df71f3280f35f6b540a7b5a0c14550
This was blindly copying the zip64 extra fields from the central
directory of the original zip file to the new one. The zip64 extra
fields depend on the contents of the header it's attached to, and in
this case we were copying the zip64 file header offset from the central
directory entry into the destination local file header, which makes no
sense, especially since the offset changed in the new file.
So strip all zip64 extra entries, and we'll create them as necessary
when writing out the new file.
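Sketched with a filter like the stripExtra helper shown for the
extended-timestamp change above; zip64 extras use header ID 0x0001:

    // Strip copied zip64 extras when reading; the writer regenerates
    // them from the entry's sizes and its offset in the new file.
    fh.Extra = stripExtra(fh.Extra, 0x0001)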
Bug: 34704111
Test: zip2zip on the original target-files -> img that was broken
Test: m -j blueprint_tools (new android_test.go)
Change-Id: Ie3c0540b13d3afcf42f3d47fff319065952b126f
Fix an error when building the zip tests with go test ./...:
android/soong/third_party/zip/zip_test.go:13:2: use of internal package not allowed
Test: go test ./...
Change-Id: I4fd7317401fd3d9c95c6f11799c94c1eff25523e
This compresses multiple files in parallel, and will split up larger
files (5MB+) into smaller chunks (1MB) to compress in parallel.
There is a small size overhead to recombine the chunks, but it's only a
few bytes per chunk, so for a 1MB chunk, it's minimal.
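The recombination works because each chunk can be compressed as a raw
DEFLATE stream that is byte-aligned but not terminated; a sketch
assuming compress/flate is used directly (compressChunk is an
illustrative name, not the code in this change):

    import (
        "bytes"
        "compress/flate"
    )

    // compressChunk produces a raw DEFLATE stream for one chunk.
    // Streams built this way can be concatenated in order; only the
    // last chunk is Closed, which emits the final-block marker.
    func compressChunk(chunk []byte, last bool) ([]byte, error) {
        var buf bytes.Buffer
        fw, err := flate.NewWriter(&buf, flate.DefaultCompression)
        if err != nil {
            return nil, err
        }
        if _, err := fw.Write(chunk); err != nil {
            return nil, err
        }
        if last {
            err = fw.Close()
        } else {
            err = fw.Flush() // byte-aligns with an empty stored block
        }
        if err != nil {
            return nil, err
        }
        return buf.Bytes(), nil
    }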
Rough numbers: with everything in the page cache, this can compress
~4GB (1000 files) down to 1GB in 6.5 seconds, instead of 120 seconds with
the non-parallel soong_jar and 150 seconds with zip.
Go's DEFLATE algorithm is still a bit worse than zip's -- about 3.5%
larger file sizes, but for most of our "dist" targets that is fine.
Change-Id: Ie4886c7d0f954ace46e599156e35fea7e74d6dd7
This doesn't do any decompression / recompression, but just copies over
the already compressed contents. So it's similar to zip -U, but allows
rewriting of the paths.
The first expected use case is to replace img_from_target_files during
the build, since it does the equivalent of this:
zip2zip -i <target-files.zip> -o <img.zip> OTA/android-info.txt:android-info.txt IMAGES/*:.
Except it decompresses and recompresses the images, which takes over a
minute instead of a few seconds.
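A sketch of the raw copy using the OpenRaw/CreateRaw APIs of modern
archive/zip; the change itself implements the same idea inside the
forked zip package:

    import (
        "archive/zip"
        "io"
    )

    // copyEntry copies one already-compressed entry into the output,
    // optionally under a new path, without inflating it.
    func copyEntry(w *zip.Writer, f *zip.File, newName string) error {
        fh := f.FileHeader // shallow copy of the header
        fh.Name = newName
        raw, err := f.OpenRaw()
        if err != nil {
            return err
        }
        out, err := w.CreateRaw(&fh)
        if err != nil {
            return err
        }
        _, err = io.Copy(out, raw)
        return err
    }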
Change-Id: I88d0df188635088783223873f78e193272dbdf1c
In preparation for patching in some custom functionality (parallel
compression and zero-decompress zip-to-zip copying)
Change-Id: I96a36efc09c715f6b55290af40ebfdde9ae72e33