Upload the package zip in chunks for GHA #1043
base: main
```diff
@@ -561,8 +561,7 @@ namespace vcpkg
                            StringView url,
                            const std::vector<std::string>& secrets,
                            View<std::string> headers,
-                           const Path& file,
-                           StringView method)
+                           const Path& file)
     {
         static constexpr StringLiteral guid_marker = "9a1db05f-a65d-419b-aa72-037fb4d0672e";
```
```diff
@@ -589,7 +588,7 @@ namespace vcpkg
             return std::move(maybe_res).error();
         }

-        auto http_cmd = Command{"curl"}.string_arg("-X").string_arg(method);
+        auto http_cmd = Command{"curl"}.string_arg("-X").string_arg("PUT");
         for (auto&& header : headers)
         {
             http_cmd.string_arg("-H").string_arg(header);
```
```diff
@@ -618,6 +617,61 @@ namespace vcpkg
         return res;
     }

+    ExpectedL<Unit> patch_file(const Filesystem& fs,
+                               StringView url,
+                               View<std::string> headers,
+                               const Path& file,
+                               std::size_t file_size,
+                               std::size_t chunk_size)
+    {
+        static constexpr StringLiteral guid_marker = "9a1db05f-a65d-419b-aa72-037fb4d0672e";
+
+        Command base_cmd;
+        base_cmd.string_arg("curl").string_arg("-X").string_arg("PATCH").string_arg("-w").string_arg(
+            "\\n" + guid_marker.to_string() + "%{http_code}\n");
+        for (auto&& header : headers)
+        {
+            base_cmd.string_arg("-H").string_arg(header);
+        }
+        base_cmd.string_arg(url);
+
+        auto file_ptr = fs.open_for_read(file, VCPKG_LINE_INFO);
+        std::vector<char> buffer(chunk_size);
+        std::size_t bytes_read = 0;
+        for (std::size_t i = 0; i < file_size; i += bytes_read)
+        {
+            bytes_read = file_ptr.read(buffer.data(), sizeof(decltype(buffer)::value_type), chunk_size);
```

Review comment on the `file_ptr.read` call above:

I think a whole curl process launch per chunk like this is kind of a problem. I don't see reasonable ways to achieve the effect this PR wants without linking with libcurl.

I suppose it could still be done like this, but the chunk sizes would have to be bigger than makes sense for a single contiguous memory buffer; there should be more than one read/write per curl launch, etc., and that sounds like a lot more work than linking with libcurl.
The rest of the new `patch_file` in the diff:

```diff
+
+            if (!bytes_read)
+            {
+                return msg::format_error(
+                    msgFileReadFailed, msg::path = file, msg::byte_offset = i, msg::count = chunk_size);
+            }
+
+            auto cmd = base_cmd;
+            cmd.string_arg("-H")
+                .string_arg(fmt::format("Content-Range: bytes {}-{}/{}", i, i + bytes_read - 1, file_size))
+                .string_arg("--data-binary")
+                .string_arg("@-");
+
+            int code = 0;
+            RedirectedProcessLaunchSettings launch_settings;
+            launch_settings.stdin_content = {buffer.data(), bytes_read};
+            auto res = cmd_execute_and_stream_lines(cmd, launch_settings, [&code](StringView line) {
+                if (Strings::starts_with(line, guid_marker))
+                {
+                    code = std::strtol(line.data() + guid_marker.size(), nullptr, 10);
+                }
+            });
+            if (!res.get() || *res.get() != 0 || (code >= 100 && code < 200) || code >= 300)
+            {
+                return msg::format_error(
+                    msgCurlFailedToPutHttp, msg::exit_code = res.value_or(-1), msg::url = url, msg::value = code);
+            }
+        }
+
+        return Unit{};
+    }
+
     std::string format_url_query(StringView base_url, View<std::string> query_params)
     {
         auto url = base_url.to_string();
```
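To make the chunk arithmetic concrete, here is a hypothetical call site. Only `patch_file`'s signature comes from the diff above; `zip_path`, `upload_url`, `headers`, the `fs.file_size` helper, and the 64 MiB figure are illustrative stand-ins, not this PR's defaults.

```cpp
// Hypothetical call site for patch_file; names other than patch_file are assumed.
constexpr std::size_t chunk_size = 64 * 1024 * 1024; // 64 MiB per PATCH request
const std::size_t file_size = fs.file_size(zip_path, VCPKG_LINE_INFO); // assumed helper
// A 450 MiB (471859200-byte) zip becomes 8 requests; the first one carries
// "Content-Range: bytes 0-67108863/471859200" and the last sends a short tail chunk.
auto result = patch_file(fs, upload_url, headers, zip_path, file_size, chunk_size);
if (!result)
{
    return result.error(); // msgFileReadFailed or msgCurlFailedToPutHttp, per the diff
}
```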
Review comment:

I see that the default size is 450 MB, and there are no limits. Is there an alternative to reading the data into a buffer first just to forward it to another command's stdin? (Remembering all those Raspi people who struggle to build vcpkg due to low memory...)

Reply:

The original way was to split the file on disk, but that's pretty hacky, I think. I can decrease the buffer size, though. I'm not sure what you mean by a limit. Are people really running a GitHub Runner server on a Raspi? lmao 😄
Reply:

Well, this is only the tool uploading the artifacts. Caching large artifacts is more important when build machine power is low.
Reply:

Ah okay. What buffer size do you think I should use? I can't make it really small, or else the upload will be way slower than it would otherwise be.
Reply:

I don't know; I see the trade-offs and barriers. Changing curl (the tool) is out of scope here. If the interface remains running curl instead of calling into libcurl, then it would be best to fix the second point above (doing more than one read/write per curl launch). If that is too intrusive, it might be helpful to give the user a way to change the buffer size, or at least to turn off the buffering in case of trouble.
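A minimal sketch of that last suggestion, assuming an environment-variable override; the variable name and helper are made up for illustration and are not part of this PR or of vcpkg:

```cpp
#include <cstddef>
#include <cstdlib>

// Sketch only: VCPKG_CHUNKED_UPLOAD_SIZE is a hypothetical variable, not one
// vcpkg defines. Returns the user's override when it parses as a nonzero byte
// count, otherwise the built-in default.
static std::size_t choose_chunk_size(std::size_t default_chunk_size)
{
    if (const char* env = std::getenv("VCPKG_CHUNKED_UPLOAD_SIZE"))
    {
        char* end = nullptr;
        const unsigned long long parsed = std::strtoull(env, &end, 10);
        if (end != env && *end == '\0' && parsed != 0)
        {
            return static_cast<std::size_t>(parsed);
        }
    }
    return default_chunk_size;
}
```

Setting the override to the file size would effectively restore whole-file buffering, while a small value trades memory for more curl launches, which is exactly the tension discussed in this thread.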