actions/upload-artifact
License: MIT License
The baseline behavior of the zip utility on Linux and macOS is to retain file permissions. However, when the upload-artifact action zips a directory, it loses those permissions, which subsequently breaks the artifacts for users and downstream tools.
Expected behavior: the permissions applied to assets in prior steps should be retained by the upload-artifact zipper and should be present in the resulting asset zip file.
@JayFoxRox This issue was tracking a way to get a link to an uploaded artifact. If you want some way to fetch the latest artifact, please file a new issue.
#27 (comment)
Hey there! I am opening a new issue as requested by @joshmgross. It would be incredibly helpful to have some way to fetch the latest version of an artifact.
For example, in my use case, I am using LaTeX for my resume and GitHub Actions to compile and create the PDF. It would be incredibly helpful to have a URL that fetches the most recent version. As mentioned in that thread, there is a workaround of pushing it to a release, which I will be using in the meantime, but this would be a much more elegant solution.
Thanks!
While it was my fault that my GitHub Action started to make empty zipballs, I would have preferred to be alerted to my bug much earlier. I therefore suggest adding an option allow-empty-zipball, and unless that is explicitly set to true, letting the upload fail if the zipball would turn out empty.
I have a 900MB node_modules directory with 86k files. Using actions/upload-artifact as-is takes about an hour and a half; the logs appear to indicate that files are uploaded individually.
If I tar/gzip the directory first, the tar takes about 30 seconds, reducing it to 175MB, then the upload takes about 25 seconds.
Example of how actions/cache does it: https://github.com/actions/cache/blob/master/src/save.ts
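The workaround described above can be sketched as a pair of shell commands (paths are illustrative; in a workflow these would be run steps before a single-file upload):

```shell
# Sketch: pack the directory into one gzipped tarball so the action
# uploads a single file instead of tens of thousands. The directory
# contents here are placeholders standing in for a real node_modules.
mkdir -p node_modules/some-pkg
echo 'module.exports = {}' > node_modules/some-pkg/index.js

# One tarball; upload this instead of the raw directory.
tar -czf node_modules.tar.gz node_modules

# On the consuming side, restore with: tar -xzf node_modules.tar.gz
ls -l node_modules.tar.gz
```

The upload step then points its path at node_modules.tar.gz, and a download step in a later job extracts it.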
Uploading from a macOS GitHub runner with v2-is-almost-here gives me:
Received http 503 during chunk upload, will retry at offset 0 after 10 seconds. Received http 503 during chunk upload, will retry at offset 0 after 10 seconds. ##[error]read ECONNRESET
Hi folks! I would like to request a feature.
Here is an example from an issue we are currently having, where we want to upload artifacts on a certain condition. We want to upload screenshots that are generated with cypress but only when the test run has failed (cypress manages the folder creation if it needs to create screenshots).
Currently, we have to add another step to verify that the folder exists and, based on the output of that step, perform the upload. This makes the workflow a bit difficult to manage: on a run where everything is successful, the step that checks for the screenshot folder will be in a failure state, which can be confusing.
- name: Check if screenshots folder exists
  if: always()
  run: test -d cypress/screenshots
- name: When present, upload screenshots of test failures
  uses: actions/upload-artifact@v1
  if: success()
  with:
    name: cypress-screenshots
    path: cypress/screenshots
Would it be possible to have an implicit check, not as the default behaviour but as an option to skip the upload on a given condition? Something like:
- name: When present, upload screenshots of test failures
  uses: actions/upload-artifact@v1
  with:
    name: cypress-screenshots
    path: cypress/screenshots
    condition: folder-exists
folder-exists could map to an internal check equivalent to test -d, or similar. Different conditions could be implemented.
Maybe what I am asking is out of scope for this action, but I thought I would take my chances.
Cheers!
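For what it's worth, on runners where the hashFiles() expression function is available in step-level if conditions, the existence check can be folded into the upload step itself, so no separate step ends up in a red failure state (a sketch under that assumption):

```yaml
# Sketch: skip the upload when the screenshots folder has no files.
# hashFiles() returns an empty string when the glob matches nothing.
- name: When present, upload screenshots of test failures
  uses: actions/upload-artifact@v1
  if: failure() && hashFiles('cypress/screenshots/**') != ''
  with:
    name: cypress-screenshots
    path: cypress/screenshots
```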
When upload-artifact is provided an existing zip file via its path: argument:
with:
  name: asset-package
  path: asset-package.zip
It double-zips the zip, resulting in a zip within a zip.
"You're using it wrong, don't provide a zip file!" might be a counter-argument; however, zips might be preferred because:
"We can't trust your zipfile; what if it's some other file in disguise!" could be another argument: a content-type check could be applied to confirm the file is a bona fide zip. If you still don't trust it, the zip could be decompressed to /dev/null and confirmed good, as opposed to double-zipping it.
Yes, a workaround would be to unzip our resulting zip to a temporary directory and then provide that to the asset uploader; however, we are then wasting three rounds of work: initial zip, unzip, re-zip.
GitHub already gives developers full control over the content that they're uploading, so using a zipfile straight-away is just another means to provide the same content without a second layer of packaging.
It would be nice to have a way to exclude some files/directories from uploading. I would think about something like this:
steps:
  - uses: actions/checkout@v1
  - run: mkdir -p path/to/artifact
  - run: echo hello > path/to/artifact/hello.txt
  - run: echo hello > path/to/artifact/world.txt
  - uses: actions/upload-artifact@v1
    with:
      name: my-artifact
      path: path/to/artifact
      exclude: .*rld.*
So only hello.txt will be archived/uploaded.
Maybe ANT pattern syntax would be a good fit.
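For reference, later releases of the action (v2 onward) accept a multi-line path input with ! exclusion patterns (glob-based rather than regex or ANT), which covers this example:

```yaml
# Sketch assuming v2 or later: everything under the folder is
# uploaded except entries matched by the ! patterns.
- uses: actions/upload-artifact@v2
  with:
    name: my-artifact
    path: |
      path/to/artifact
      !path/to/artifact/world.txt
```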
I am creating my own actions in NodeJS/JavaScript and I am wondering how to get the build run number, so I can use it to append/amend version numbers for artefacts.
Uploading artifact 'package-artifacts' from 'd:\a\Take-Out-The-Trash\Take-Out-The-Trash\output' for run #45
As this repo does not show the source for this (it is handled by the runner's publish plugin, see actions/toolkit#69), can you give me a pointer on where we can use this information, please?
/cc @TingluoHuang
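For reference, the run number shown in that log line is exposed in the workflow expression context as github.run_number, so it can be interpolated into an artifact name without any extra API call (the path below is illustrative):

```yaml
# github.run_number is the counter shown as "run #45" in the logs.
- uses: actions/upload-artifact@v1
  with:
    name: package-artifacts-${{ github.run_number }}
    path: output
```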
I have files in /home/runner/work/.../src/MyProject/bin/Release. I tried copying this path, but I get an error:
##[error]Value cannot be null. (Parameter 'name')
##[error]Exit code 1 returned from process: file name '/home/runner/runners/2.162.0/bin/Runner.PluginHost', arguments 'action "GitHub.Runner.Plugins.Artifact.PublishArtifact, Runner.Plugins"'.
Is there any way to programmatically retrieve the link to download the artifact?
e.g. https://github.com/SiliconJungles/app/suites/574454195/artifacts/5483355
When I used CircleCI, I would build an Android APK, retrieve the download link & download token, then post the link onto discord so other members of the team could directly download the APK file without having to open the CI webpage and without having to log in.
Is there any way to achieve this with GitHub Actions?
I have a file permission problem with zip.
Here's what I have in the GitHub Actions log:
And unzipping it with unzip myflow.zip -d . on my Ubuntu 16.04 server removes the x (execute permission) from the file myflow:
-rw-rw-r-- 1 ubuntu ubuntu 4.4K Sep 7 15:45 myflow
while the umask -S result is:
u=rwx,g=rwx,o=rx
It seems the umask is not respected by the unzip command, which is quite annoying.
The upload is successful but files with colons in the name are missing in the artifact.
https://github.com/nickelc/azure-test/runs/311758340#step:5:8
Run actions/upload-artifact@v1
with:
  name: docs
  path: docs
Uploading artifact 'docs' from '/home/runner/work/azure-test/azure-test/docs' for run #80
Uploading 125 files
Total file: 125 ---- Processed file: 2 (1%)
Fail to upload '/home/runner/work/azure-test/azure-test/docs/modio::Filehash.md' due to 'TF10123: The path 'docs/modio::Filehash.md' contains the character ':'. Remove the ':' and try again.'.
GitHub.Services.Common.VssServiceException: TF10123: The path 'docs/modio::Filehash.md' contains the character ':'. Remove the ':' and try again.
The files are coming from a cloned wiki repository of a GitHub repository.
OG GitHub is not compatible with MS GitHub
Sometimes it would be useful to publish an artifact without archiving it, e.g. when the artifact is a single file, or when I have created the archive myself (for example, I want to archive to .tar.gz on Ubuntu/macOS jobs and to .zip on Windows).
Not sure if this is an appropriate place for this issue, but basically once you upload an artifact with this action, it will appear in the artifacts list. The problem is that it shows the file size uncompressed, despite being packed in an archive.
Hello,
I have a workflow YAML using a container to run steps:
runs-on: ubuntu-18.04
container:
  image: my-container-on-docker-hub:latest
  volumes:
    - /github/home/.folder:/github/home/.folder
steps:
  - name: ls
    run: ls -la $HOME
  - uses: actions/upload-artifact@master
    with:
      name: my-artifact
      path: /github/home/.folder
Inside the container, if I ls -la /github/home I get:
total 16
drwxr-xr-x 4 1001 115 4096 Sep 12 23:47 .
drwxr-xr-x 4 root root 4096 Sep 12 23:47 ..
drwxr-xr-x 3 root root 4096 Sep 12 23:47 .folder
drwxr-xr-x 2 root root 4096 Sep 12 23:47 .ssh
However, the upload-artifact step returns:
Run actions/upload-artifact@master
with:
  name: my-artifact
  path: /github/home/.folder
##[error]Path does not exist /github/home/.folder
##[error]Exit code 1 returned from process: file name '/home/runner/runners/2.157.3/bin/Runner.PluginHost', arguments 'action "GitHub.Runner.Plugins.Artifact.PublishArtifact, Runner.Plugins"'.
I suspect that upload-artifact runs outside of the container, which is why I tried using volumes to make the data accessible on both sides, but it didn't help.
Is this an issue with volumes or with upload-artifact?
Thanks
Hi,
This suggestion has been discussed here in the GitHub community forum. It can be considered as an extension of #45.
The current retention time for artifacts is 90 days. I suggest a more flexible artifact retention duration. I have set up nightly builds on my open source project, and I certainly do not need those nightly builds to be retained for 90 days; keeping the last 5 days (or so) is sufficient.
The current retention time of 90 days should be a default one and/or a maximum one. But a shorter period should be allowed, either in the settings of the repo or on an artifact basis, for instance something like this:
- name: Upload build
  uses: actions/upload-artifact@master
  with:
    name: installer
    path: installer.exe
    retention-days: 5
I had a matrix build doing this:
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [pypy2, pypy3, 3.7]
    steps:
      - uses: actions/checkout@v1
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v1
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip setuptools setuptools_scm wheel
      - name: Create packages
        run: python setup.py sdist bdist_wheel
      - uses: actions/upload-artifact@master
        with:
          name: dist
          path: dist
i.e. building an artifact with each of 3 Python versions, where all 3 versions were creating the same file and uploading it under the same name.
Might be nice for this to error out saying an artifact with the given name was already uploaded during this run of the workflow.
This is a feature request to support $HOME in the path.
I'm trying to use the upload-artifact action as a step in a CI build. Most of my other steps are saving and storing files based on paths relative to $HOME, so it makes sense to be able to do the same here.
The workaround currently is to hardcode the path.
So currently I do something like this:
- uses: actions/upload-artifact@v1
  with:
    name: artifacts.zip
    path: /home/runner/my_file
But would like to do this:
- uses: actions/upload-artifact@v1
  with:
    name: artifacts.zip
    path: ${HOME}/my_file
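One workaround, assuming a runner that supports environment files: export $HOME into the job environment in a run step, then reference it with expression syntax (HOME_DIR is an arbitrary name chosen here):

```yaml
# Sketch: env vars are not expanded inside `with:`, but values
# written to $GITHUB_ENV are available to later steps via ${{ env }}.
- run: echo "HOME_DIR=$HOME" >> "$GITHUB_ENV"
- uses: actions/upload-artifact@v1
  with:
    name: artifacts.zip
    path: ${{ env.HOME_DIR }}/my_file
```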
We just started getting Cannot upload artifacts. You may have exceeded your spending limit for storing artifacts with Actions or Packages.
We have an artifact that takes about 5 minutes to produce, which is fed into a job that spins up multiple containers that run in parallel. The artifacts are only needed to feed these boxes and can be removed afterwards.
After 1 day, we started getting the message I wrote above. I have deleted every artifact I could find manually. Then I ran this action here that says it purged all my artifacts: https://github.com/kolpav/purge-artifacts-action
Clicking through as many previous workflows as I could, I found no examples of existing artifacts.
I continue to get the above message, and all merging has stopped on our project.
I need to know how to manage my artifacts so this doesn't happen again, and get my pipeline back up asap.
Thanks!
It would be nice to open HTML files in the browser instead of having to download and unzip the file. CircleCI does this. It is useful for test results, lint results, and other types of reports.
Hello,
Some of my artifacts are transient. They transmit data between jobs but I don't want to keep them.
Ideally, I would like to use actions/upload-artifact like this:
- name: Upload artifact
  uses: actions/upload-artifact@v2
  with:
    name: myartifact
    path: mypath
    temporary: true
The temporary (or transient) flag would mark the artifact as transient and automatically delete it when the workflow is done.
First of all, thank you for this action! It is very useful!
Secondly, I have a feature request.
actions/upload-artifact
- uses: actions/upload-artifact@v1
  with:
    name: all_reports
    path: all_reports
I would like to be able to change the name based on the GITHUB_REF environment variable.
I tried this, but it won't work:
- uses: actions/upload-artifact@v1
  with:
    name: all_reports_${GITHUB_REF}
    path: all_reports
I have also tried
name: all_reports_${{ GITHUB_REF }}
with no luck.
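Context values are referenced in workflow expressions as ${{ github.ref }} rather than a bare variable name. Note, though, that github.ref contains slashes (refs/heads/...), which are not allowed in artifact names; newer runners expose github.ref_name, the short ref without the refs/heads/ prefix, which avoids that. A sketch assuming ref_name is available:

```yaml
# github.ref_name is e.g. "main" or "my-branch" (no slashes for
# branch builds), so it is safe to embed in an artifact name.
- uses: actions/upload-artifact@v1
  with:
    name: all_reports_${{ github.ref_name }}
    path: all_reports
```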
It is now required to provide a path for every artifact you upload. I have a repository where the artifacts end up in the root of the git repository (it is a fairly small project). I would expect the default path to be the git repository, with any override interpreted relative to it, but this might not be true; it is not documented that clearly. In any case, if the artifacts to upload are in a default path, specifying the path is redundant.
Hi,
This suggestion has been discussed here in the GitHub community forum.
In a GitHub Actions job, after an upload-artifact step, I would like to get the URL of the published artifact in a subsequent step.
The idea is a job using the following steps:
How would you get the URL of the artifact in a subsequent step?
I know that there is an Actions API currently in development. But here, the question is about passing information from the upload-artifact step to the next step.
It could be something like this:
- name: Upload build
  uses: actions/upload-artifact@master
  with:
    name: installer
    path: installer.exe
    env-url: FOOBAR
- name: Use URL for something
  run: echo "${{ env.FOOBAR }}"
The last command would display something like:
https://github.com/user/repo/suites/123456/artifacts/789123
I am currently facing this issue where I want the name of the PR in the name of the artifact, but:
##[error]Artifact name is not valid: WeakAuras-Companion-PR-fix/discord-channel-Node10. It cannot contain '\', '/', "', ':', '<', '>', '|', '*', and '?'
Any chance you guys can escape this?
Are there size limits on artifacts? Are there quotas for total number of artifacts? Do these count towards my 100GB hard limit on the size of my repo? (And, if you know, a similar question about pushing docker images to Github package registry?)
I'm looking to port a Buildkite build to GitHub Actions. At the moment it generates some largish artifacts when it archives node_modules (to save time doing npm ci from one parallel task to the next). I was thinking about pushing a dev docker image instead, but right now our image is an astonishing 1GB, which seems like an excessively large thing to push to either artifacts or to a package registry on each build. :P
I'm working with two workflows on different platforms (A and B), and workflow B needs an artifact from workflow A. Also, I'm trying to avoid external services such as S3 in order to keep it simple.
Is there a way to get a link to the artifacts generated in the last build (latestbuild) or in the last passed build (lastsuccessfulbuild)?
Maybe something like this: https://github.com/{account}/{repo}/workflows/{workflowName}/builds/latest/artifacts/file.zip
Is it possible to remove an uploaded artifact?
The v2-preview of upload-artifact is out and we need your help!
You can try it out by using actions/upload-artifact@v2-preview
Any associated code and documentation can be found here: https://github.com/actions/upload-artifact/tree/v2-preview
This issue is for general feedback and to report any bugs/issues during the preview.
There is also a v2-preview for download-artifact, see: actions/download-artifact#23
Warning: At any time during the preview, there may be unannounced changes that can cause things to break. It is recommended not to use the preview of this action in critical workflows
The v1 versions of upload-artifact (and download-artifact) are plugins that are executed by the runner. The code for v1 can be found here: https://github.com/actions/runner/tree/master/src/Runner.Plugins/Artifact
The v1 code is written in C# and is tightly coupled to the runner; it also uses special APIs that only the runner can use to interact with artifacts. If any changes or updates related to artifacts had to be made, they had to be done on the runner, and a new release had to roll out, which took a significant amount of time. With v2, there is no dependency on the runner, so it will be much easier and faster to make changes and accept community contributions (until now that was pretty much impossible).
The v2-preview of upload-artifact has been rewritten from scratch in TypeScript with a new set of APIs that allow it to interact with artifacts (previously only the runner could do this). There is a new NPM package called @actions/artifact that contains the core functionality for interacting with artifacts (which this action uses for the most part). This NPM package is hosted in the actions/toolkit repo, so anyone can use it to interact with artifacts when developing actions. You can find the package here (lots of documentation and extra info):
https://www.npmjs.com/package/@actions/artifact
https://github.com/actions/toolkit/tree/master/packages/artifact
Since v2-preview is effectively a total rewrite from v1, there is huge potential for bugs, so it needs to be tested thoroughly before creating an actual v2 release. We need help testing the core functionality, which includes:
There will be no new features added as part of the v2-preview; we need to test the core functionality first. Once a v2 release is out, we can start chipping away at the issues/features that we have been unable to address with v1. For the moment, please don't submit PRs for any new features to the v2-preview branch.
Some initial observations from internal testing show that v2-preview is slightly slower than v1 (around 10-15%). A large part of this discrepancy can be attributed to porting from C# to Node. In C#, the runner would upload artifacts with async tasks and true multi-threading. With Node, the HTTP pipeline is not as efficient (we would love some help over in the toolkit repo on @actions/artifact if anyone is really good with TS, async/await, and HTTP pipelining, to hopefully make this better).
One of the really useful things about uploading artifacts is caching logs, coverage reports, et cetera. Those often have timestamps or UUIDs as names.
You can hack around it with a compress-to-a-known-name step, but uploading and downloading artifacts by wildcard (or even just a download-all) would be a huge benefit.
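For reference, v2 of the action later accepts glob patterns in path, which handles timestamped or UUID names without the compress-to-a-known-name hack (the pattern below is illustrative):

```yaml
# Sketch assuming v2 or later: globs match files whose names are
# not known in advance, e.g. coverage-2020-01-01T12:00:00.xml.
- uses: actions/upload-artifact@v2
  with:
    name: coverage-reports
    path: reports/**/coverage-*
```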
I tried to retrieve the list of artifacts in a second job using GET /repos/:owner/:repo/actions/runs/:run_id/artifacts after uploading some artifacts with actions/upload-artifact@v1 in the first job, but the artifacts won't show up. The API call always returns { "total_count": 0, "artifacts": [ ] }.
The second job depends on the first job.
Is there a solution to this?
Ok, I'll admit this is a strange one...
My build uses the upload-artifact action, which creates a zip.
I am able to unzip that file with the macOS Finder and the Ubuntu 19.04 file manager UI.
However, if I try to extract the same zip file with the Ubuntu 18.04.2 LTS file manager (Nautilus) it fails:
The message reads: "There was an error extracting "socket-connect-bpf.zip". "Error opening file "{file}": Not a directory"
If I unzip the same file with:
unzip socket-connect-bpf.zip
it works as well...
I could reproduce this behavior on a second Ubuntu 18.04.2 install.
I don't know if the upload-artifact action could zip the files differently?
$ zipinfo Slides.zip
Archive: /Users/zw/Downloads/Slides.zip
Zip file size: 30757126 bytes, number of entries: 2
-rw---- 2.0 fat 0 b- stor 19-Dec-01 06:14 Slides/
-rw---- 2.0 fat 31608616 bl defN 19-Dec-01 06:14 Slides/Slides.pdf
2 files, 31608616 bytes uncompressed, 30756888 bytes compressed: 2.7%
Some extraction tools ignore the 0600 (for example, Archive Utility on the Mac), while others take it literally (funnily enough, for example, Safari's "Open 'safe' files after downloading"). If it is kept, file managers may prevent access, which is annoying. For example, here's a directory at 0600:
I've also reproduced the same problem when downloading the artifact on Ubuntu.
This is not related to the chmod at the time of upload; for instance, I added a chmod -R 0777 to the output before uploading it, and it had no effect.
The zips should be created without capturing permissions, or at least with a permission of 0644 if one is being set somehow.
Where does the upload go to? Is there any meta information returned on success/failure? Can we control where the artifact is uploaded?
Sorry for the silly questions, but the readme is a little on the slim side :)
Are multiple paths supported?
I'm not sure whether this is the proper place for this issue. Please let me know where to forward it if needed.
The official docs for the Actions API say the artifact download URL expires in 1 minute.
Why is there such an expiration, and why is it so short?
What if someone needs a fixed, lasting URL without an expiration?
Thanks.
It is surprising that this repo (the scripts and documentation in this project) is licensed under MIT, since there is little to no relevant content to be licensed. However, I'm concerned about this from a practical point of view. I believe that most of the currently open issues (especially #3) could be fixed by the community if we had any code to look at. It would be really useful if the glue logic that gathers the artifacts and names them were open sourced, along with some minimal docs about the backend API that this action uses.
We used to copy the URL from https://github.com/actions/upload-artifact#where-does-the-upload-go to our website for nightly builds (which have short retention and frequent updates, and don't warrant a GitHub Releases push).
We intended to replace this with programmable download URLs, which have been discussed in many issues on these repositories (either a latest/release.zip URL or a third-party service that asks the Actions API for the latest URL and redirects the user).
This stopped working very recently.
Until recently, the actions tab was only viewable for logged in users (confusing 404 HTTP error for guests), but the artifact download URLs were still public (working for logged in users and also guests).
Likewise, when the Actions API was released, API requests worked without any authentication. One could simply query the API for an artifact download URL and redirect the guests to it. - All of that worked without GitHub account and could have worked from JavaScript or a small lightweight webservice which redirects the end-user to the latest artifact download through HTTP redirects (I wrote https://github.com/JayFoxRox/GitHub-artifact-URL for this purpose).
However, within the last days these artifact download URLs were suddenly made private - they only work for registered GitHub users now. Everyone clicking the download button on our website (who isn't logged into GitHub) gets a confusing 404 error for direct artifact download URLs now.
Even my tool to redirect users doesn't work anymore because the Actions API also requires the API client to be authenticated now (also getting a confusing "not found" error otherwise). See JayFoxRox/GitHub-artifact-URL#4 ; even if I implemented authentication now, the download URL (we redirect to) would likely not work for guest users (such as end-users of our software, who don't have a GitHub account).
It really starts to feel like we are working against how GitHub Actions is intended to work (now and in the future, none of which seems to be documented very well). It is clearly different from any other CI I have ever worked with: Travis and AppVeyor had public artifacts with simple URLs, which could easily be linked from our website (compare AppVeyor).
The GitHub documentation for artifacts says
Artifacts allow you to share data between jobs in a workflow and store data once that workflow has completed.
The first part is obvious, but the second part is really vague. Who is meant to access this stored data, and why? My thinking has been (from experience with other CI systems): to share temporary builds with users for early testing ("continuous integration"), without pushing a release.
- name: publish
  uses: actions/upload-artifact@master
  with:
    name: mypackage-$GITHUB_SHA
    path: ./OUT/mypackage-$GITHUB_SHA
Say I wish to upload a screenshot made with Puppeteer: would it be possible to upload just the generated image rather than a zip file containing only that image?
I am using a self-hosted runner on a Windows box behind a corporate firewall, and the runner is running as a service account. I am able to run actions/checkout, but I get the error below when trying to upload an artifact.
Thoughts?
SSL issues in my case typically stem from the proxy server in some way, but I have tried everything I know of; I am not sure how to verify whether upload-artifact is using my proxy or if it is just me.
System.Net.Http.HttpRequestException: The SSL connection could not be established, see inner exception.
---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host..
---> System.Net.Sockets.SocketException (10054): An existing connection was forcibly closed by the remote host.
--- End of inner exception stack trace ---
at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken)
at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.GetResult(Int16 token)
at System.Net.FixedSizeReader.ReadPacketAsync(Stream transport, AsyncProtocolRequest request)
at System.Net.Security.SslStream.ThrowIfExceptional()
at System.Net.Security.SslStream.InternalEndProcessAuthentication(LazyAsyncResult lazyResult)
at System.Net.Security.SslStream.EndProcessAuthentication(IAsyncResult result)
at System.Net.Security.SslStream.EndAuthenticateAsClient(IAsyncResult asyncResult)
at System.Net.Security.SslStream.<>c.<AuthenticateAsClientAsync>b__65_1(IAsyncResult iar)
at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization)
--- End of stack trace from previous location where exception was thrown ---
at System.Net.Http.ConnectHelper.EstablishSslConnectionAsyncCore(Stream stream, SslClientAuthenticationOptions sslOptions, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at GitHub.Services.Common.VssHttpRetryMessageHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.HttpClient.FinishSendAsyncBuffered(Task`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts)
at GitHub.Services.WebApi.VssHttpClientBase.SendAsync(HttpRequestMessage message, HttpCompletionOption completionOption, Object userState, CancellationToken cancellationToken)
at GitHub.Services.FileContainer.Client.FileContainerHttpClient.UploadFileAsync(Int64 containerId, String itemPath, Stream fileStream, Byte[] contentId, Int64 fileLength, Boolean isGzipped, Guid scopeIdentifier, CancellationToken cancellationToken, Int32 chunkSize, Int32 chunkRetryTimes, Boolean uploadFirstChunk, Object userState)
at GitHub.Services.FileContainer.Client.FileContainerHttpClient.UploadFileAsync(Int64 containerId, String itemPath, Stream fileStream, Guid scopeIdentifier, CancellationToken cancellationToken, Int32 chunkSize, Boolean uploadFirstChunk, Object userState, Boolean compressStream)
at GitHub.Runner.Plugins.Artifact.FileContainerServer.UploadAsync(RunnerActionPluginExecutionContext context, Int32 uploaderId, CancellationToken token)
File upload complete.
Uploaded '0' bytes from 'D:\actions-runner\_work\bdw-mart\bdw-mart\build\BDW.dacpac' to server
##[error]The SSL connection could not be established, see inner exception.
##[error]Exit code 1 returned from process: file name 'D:\actions-runner\bin\Runner.PluginHost.exe', arguments 'action "GitHub.Runner.Plugins.Artifact.PublishArtifact, Runner.Plugins"'.
Now that GitHub has a strict quota and charges for storage usage, the ability to remove artifacts is an essential feature. This is covered in #5; however, since Actions are automated, manual removal is too tedious.
I'd like to suggest supporting retention policies instead, where after a period of time (specified in the job step) artifacts automatically clear themselves out to avoid wasting storage space.
For example, when sharing artifacts between jobs, they're only needed for an hour maximum, and for debugging tests they may not be needed for more than a day.
Here is a configuration example where expires is the retention policy given in seconds.
- uses: actions/upload-artifact@v1
  with:
    name: my-artifact-for-one-hour
    path: path/to/artifact
    expires: 3600
- uses: actions/upload-artifact@v1
  with:
    name: my-artifact-for-one-day
    path: path/to/another/artifact
    expires: 86400
Without this, the entire idea of "Shared Storage" in GitHub Actions is unusable for Teams and Individuals with fixed budgets as the costs will just keep growing.
#5 will still be needed to remove artifacts that don't have a retention policy.
During the beta of GitHub Actions, a lot of documentation (including the README for this action) showed example YAML with master being referenced:
uses: actions/upload-artifact@master
We currently encourage users to not use master as a reference whenever an action is being used. Instead, one of the available tags should be used, such as:
uses: actions/upload-artifact@v1
Tags are used alongside semantic versioning to provide a stable experience: https://help.github.com/en/actions/automating-your-workflow-with-github-actions/about-actions#versioning-your-action
The master branch can abruptly change without notice during development which can cause workflows to unexpectedly fail. If you want to avoid unexpected changes or failures, you should use a tag when referring to a specific version of an action. Tags are added to stable versions that have undergone significant testing and should not change.
The v2 versions of download-artifact and upload-artifact are currently in development. Expect changes to start showing up in the v2-preview branch. These new changes will eventually be merged into master (we will communicate about this in the future), which has the potential to break your workflows if you are using @master, so you will have to react by updating your YAML to use a tag.
Our telemetry indicates a significant number of users are using @master. Good practice for all actions is to use a tag instead of referencing @master.
I only recently started getting Fail to upload '/home/runner/work/WeakAuras-Companion/WeakAuras-Companion/build/WeakAuras Companion-1.2.3.AppImage' due to 'Blob is incomplete (missing block). Blob: b92268561adee911b5e92818784a82c1, Expected Offset: 0, Actual Offset: 8388608'. GitHub.Services.WebApi.VssServiceResponseException: Blob is incomplete (missing block). Blob: b92268561adee911b5e92818784a82c1, Expected Offset: 0, Actual Offset: 8388608 ---> System.InvalidOperationException: Blob is incomplete (missing block). Blob: b92268561adee911b5e92818784a82c1, Expected Offset: 0, Actual Offset: 8388608
errors on my project https://github.com/WeakAuras/WeakAuras-Companion/runs/232817440
Any ideas?
Hi,
Thank you for this plugin. Everything was working correctly during the beta; however, uploading on Windows appears to fail due to a project path being duplicated. I suspect the path should be d:\a\PDO\build\coverage with only one PDO folder.
Run actions/upload-artifact@master
with:
  name: coverage-report
  path: build/coverage
##[error]Path does not exist d:\a\PDO\PDO\build\coverage
##[error]Exit code 1 returned from process: file name 'c:\runners\2.162.0\bin\Runner.PluginHost.exe', arguments 'action "GitHub.Runner.Plugins.Artifact.PublishArtifact, Runner.Plugins"'.
This is sort of similar to #3, but perhaps has a slightly easier path.
Right now, the action only supports uploading a folder. This is not ideal, but I'm able to work with it. However, when zipping up the content, it includes the enclosing directory itself, resulting in a structure like this when downloading the zip file (from the UI, or via the download action):
some-folder.zip
└── some-folder
    ├── nested-folder
    │   └── some-nested-file.txt
    ├── some-file.txt
    └── another-file.txt
This is unnecessary for my use case. Ideally, I would want the zip file to have this structure:
some-folder.zip
├── nested-folder
│   └── some-nested-file.txt
├── some-file.txt
└── another-file.txt
The difference being that the enclosing folder ("some-folder") is no longer included in the zip file.
In my use case, this is important. I'm using GH Actions to build a Chrome/Firefox extension. Their submission forms accept a zip file with a known structure. If I can omit the enclosing folder, I can always get what I want by restructuring the content of the folder I'm passing to this action. With the current setup, there is no way to accomplish that, since there will always be an extraneous folder at the root of the zip file, which is not allowed.
Instead, I'll have to create the zip file in the format I wanted, put it into a folder, pass it to this action, then later download the zip-file-containing-a-folder-containing-a-zip-file from the UI, then upload that to the submission form.
Of course, it would be even nicer if this action could take an arbitrary list of files, including glob patterns, etc. But even without any of that, if we can omit the enclosing folder at the root, it is sufficient to create arbitrary artifact archives (by making an additional temporary folder and copying things around).
In my testing it appears that this action sets executable permissions on all files in the directory that is uploaded. This is undesirable, in general.