
gcs-resource's People

Contributors

aemengo, belljustin, bsnchan, davewalter, dsharp-pivotal, frodenas, guidowb, jhvhs, ljfranklin, lubronzhan, robertjsullivan, shinji62

gcs-resource's Issues

Print helpful information on `out` / put failure

Currently, when an upload fails, the error message is just: error running command: googleapi: Error 404: Not Found, notFound

It would be nice to know the name of the bucket, the regexp, and the file that we are trying to upload – in other words, the equivalent gsutil cp command – so that I could try to reproduce the issue locally.

Network unreachable

We seem to be getting network errors, both in the resource check and in puts to the resource. The first error here is about a failure to authenticate, but it seems to just be wrapping a network error.

I'm testing this with the Concourse quickstart, on version 4.2.3, running in Docker 18 on a Mac.

It looks like this could be related to IPv6 – some GitHub issues suggest it's not fully supported on Docker Desktop yet. If that is the case, though, I'm not sure why gcs-resource would be getting an IPv6 address to use in the first place.

Would appreciate any help you might be able to provide. I'm not certain if this is a gcs-resource problem specifically, but so far this is the first time I've seen anything like this issue with any Concourse resources, so I thought this would be the best place to start.

~/concourse > fly -t thread cr -r thread/schema.build
error: check failed with exit status '1':
error running command: Get https://www.googleapis.com/storage/v1/b/thread-build-artifacts?alt=json: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: dial tcp [<ipv6-address>]:443: connect: network is unreachable

The config looks like this (trimmed unnecessary jobs and tasks for clarity):

---
resource_types:
  - name: gcs-resource
    type: docker-image
    source:
      repository: frodenas/gcs-resource

resources:
  - name: backend.git
    type: git
    source:
      uri: [email protected]:owner/repo.git
      branch: master
      private_key: ((github_private_key))

  - name: api-schema.build
    type: gcs-resource
    source:
      bucket: ((gcs_bucket))
      json_key: ((gcs_key))
      versioned_file: api-schema.build/local.graphql.json

jobs:
  - name: backend
    plan:
      - get: backend.git
        trigger: true
      - task: backend-build
        image: backend-build.docker
        config:
          platform: linux
          inputs:
            - name: backend.git
          outputs:
            - name: api-schema.build
          run:
            path: backend.git/scripts/output-schema.sh
      - put: api-schema.build
        params:
          file: api-schema.build/local.graphql.json

I'm fairly new to Concourse, so if there's a particular thing that I can provide to help debug this, please do let me know.

Using unpack in `get` task

Hi,

I have been using unpack as shown below for the get task:

- get: tar-ball
  params:
    unpack: true

However, I don't see the tarball being unpacked/untarred. Am I missing something here?

Thanks,
Goutam Tadi

Issue with pulling/pushing to Google Cloud Storage

Concourse Version: v2.2.1

When pulling from or pushing to GCS, I receive the following error:

json: cannot unmarshal number into Go value of type string

Resource config:

resource_types:
  - name: gcs-resource
    type: docker-image
    source:
      repository: frodenas/gcs-resource

resources:
  - name: plan
    type: gcs-resource
    source:
      bucket: cibucket
      json_key: {{gcs_service_account}}
      versioned_file: terraform.tfplan

This issue does not occur when using regexp instead of versioned_file.
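The resource itself is written in Go; "json: cannot unmarshal number into Go value of type string" means a version payload contained a bare JSON number (likely the object generation used in versioned_file mode) where the code expects a string. A rough Python analogue of that strict decoding — the field name matches versioned_file mode, but the validator itself is illustrative, not the resource's actual code:

```python
import json

def parse_version(payload: str) -> dict:
    """Decode a Concourse version object, rejecting non-string fields,
    mimicking a Go struct whose fields are all typed `string`."""
    version = json.loads(payload)
    for key, value in version.items():
        if not isinstance(value, str):
            raise TypeError(
                f"cannot unmarshal {type(value).__name__} "
                f"into string field {key!r}"
            )
    return version

parse_version('{"generation": "1598352097430055"}')  # quoted string: OK

try:
    parse_version('{"generation": 1598352097430055}')  # bare number: rejected
except TypeError as err:
    print(err)
```

This would be consistent with the report that regexp mode (whose versions are path strings) works while versioned_file (whose versions are numeric generations) fails.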

Tag releases please

Hi,

Can you please consider tagging releases of this gcs-resource? That would allow us to safely test changes to the resource without risking breaking all of our production pipelines.

Thanks ...

Accessing a file by name in a GCS bucket seems broken

Context: I need to modify an existing pipeline to grab a specific version of a file, in this case kubo-pipeline-store/dev-builds/kubo-deployment-0.37.0-dev.5.tgz.

After looking at the README for in, I thought this would work:

- name: kubo-deployment
  type: gcs
  source:
    json_key: ((gcs-json-key))
    bucket: kubo-pipeline-store
    filename: dev-builds/kubo-deployment-0.37.0-dev.5.tgz

However, it did not. Here is my workaround:

- name: kubo-deployment
  type: gcs
  source:
    json_key: ((gcs-json-key))
    bucket: kubo-pipeline-store
    regexp: dev-builds/kubo-deployment-(.*).tgz
    version: 0.37.0-dev.5

error running command: stream error: stream ID 5; INTERNAL_ERROR

We have seen this error a few times in our pipeline while downloading files from GCS; it worked after re-triggering the builds. I can find a similar issue in https://github.com/googleapis/google-cloud-go/commits/987caa2d3895c188c40784ad94fe1dd681a8cf68, which is fixed by https://github.com/googleapis/google-cloud-go/commits/d19004dbb.

However, this resource uses the deprecated package google.golang.org/api/storage/v1, and the new one, cloud.google.com/go/storage, is not backward compatible, so we won't get the same fix just by bumping the package.

Any chance of switching to cloud.google.com/go/storage?

Version grabs latest, not specified version

We expect gcs-resource to pull in the version specified with the following code.

- name: harbor-tile
  type: gcs-resource
  source:
    regexp: harbor-container-registry-(.*).pivotal

...

- get: harbor-tile
  params:
    version: 1.6.0-build.35

When there are two files in the bucket, however, the latest semver is pulled in instead:

harbor-container-registry-1.6.0-build.35.pivotal
harbor-container-registry-1.6.3-build.3.pivotal

Here is a workaround:

- name: harbor-tile
  type: gcs-resource
  source:
    regexp: harbor-container-registry-(1.6.0-build.35).pivotal

...

- get: harbor-tile
  params:
    version: 1.6.0-build.35
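The workaround relies on the capture group itself: once the version is hard-coded into the pattern, only one object in the bucket can match, so the resource cannot resolve to the newer file. A rough Python illustration, using the two bucket objects from the report (dots are escaped here, unlike the configs above, for regex correctness):

```python
import re

objects = [
    "harbor-container-registry-1.6.0-build.35.pivotal",
    "harbor-container-registry-1.6.3-build.3.pivotal",
]

# Original pattern: both objects match, so the resource resolves to the
# highest version (1.6.3-build.3) regardless of the get's version param.
broad = re.compile(r"harbor-container-registry-(.*)\.pivotal")
print([o for o in objects if broad.fullmatch(o)])   # both objects

# Workaround pattern: the pinned capture group matches exactly one
# object, so the get can only ever fetch 1.6.0-build.35.
pinned = re.compile(r"harbor-container-registry-(1\.6\.0-build\.35)\.pivotal")
print([o for o in objects if pinned.fullmatch(o)])  # only the pinned file
```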

[Question] Object Delete

Hi,

I'm using this plugin to send intermediate artifacts to GCS, however, I'd love to be able to delete them once the pipeline finishes, I'm thinking something like this:

- put: artifact
  params:
    file: path/to/artifact
    delete: true

This would effectively delete the artifact from the bucket for cleanup. I see that the GCSClient already has this functionality built in, but I have zero experience with Concourse resources, so I don't even know where to start.

Can this be done without much modification to the resource?

skip_download on a get question..

Hi, this is not so much an issue as a question; I'm trying to make sense of what I'm seeing.
I've been trying to use skip_download on a get as documented, and it never seemed to work.
Is the new release, version 0.6, the first release to enable this feature?
And why is the field of type string?
Thanks in advance. / eitan

Unpack does not restore symlinks

I use this resource to cache my node_modules. node_modules contains a .bin folder with symlinks to all binaries. When using the unpack: true parameter, these are all extracted as 0-byte files rather than symlinks.
I work around this now by manually running tar xf node_modules.tar in a task, but it would be really nice if gcs-resource could do this for me.
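For what it's worth, the symlink information is present in an ordinary tar archive, and plain extraction restores it; the sketch below (stock Python tarfile, not gcs-resource code) shows a symlink surviving a pack/unpack round trip, which suggests the loss happens in the resource's unpack step rather than in the archive:

```python
import os
import tarfile
import tempfile

def symlink_survives_roundtrip() -> bool:
    """Pack a tree containing a symlink into a tar, re-extract it,
    and report whether the symlink is still a symlink."""
    with tempfile.TemporaryDirectory() as workdir:
        # A tiny tree mimicking node_modules/.bin: a real file plus a
        # relative symlink pointing at it.
        src = os.path.join(workdir, "pkg")
        os.makedirs(os.path.join(src, ".bin"))
        with open(os.path.join(src, "cli.js"), "w") as f:
            f.write("#!/usr/bin/env node\n")
        os.symlink("../cli.js", os.path.join(src, ".bin", "cli"))

        archive = os.path.join(workdir, "pkg.tar")
        with tarfile.open(archive, "w") as tar:
            tar.add(src, arcname="pkg")

        # Extract and check that the link was restored as a link.
        dest = os.path.join(workdir, "out")
        with tarfile.open(archive) as tar:
            tar.extractall(dest)
        return os.path.islink(os.path.join(dest, "pkg", ".bin", "cli"))

print(symlink_survives_roundtrip())  # True
```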

ARM64 Support

Hi Team,
I am trying to use the “gcs-resource” image on ARM64, but it seems it is not available.

I have updated the following files to build and release an arm64 image:

In the Makefile, I used buildx to build and push the image for both platforms, added an install target that installs the buildx binary, and created and switched to a new builder.
In .travis.yml, I added “dist: focal” and updated the Go version to “1.12.6”.

Commit: odidev@72f4538

After the above changes, the image was successfully uploaded to Docker Hub.

Docker Hub: https://hub.docker.com/repository/docker/odidev/gcs-resource

It would be helpful if an arm64 image were released. If there is interest, I will raise a PR with the above changes.

400 error

I am receiving the following error message when trying to use the put step:

1.86 MiB / 1.86 MiB [===================================] 100.00% 4.17 MiB/s 0s
error running command: googleapi: Error 400: Required, required

Here is my configuration:

- name: m2-bucket
  type: google-cloud-storage
  source:
    regex: m2_tar/m2/m2.tar.gz
    bucket: {{bucket_name}}
    json_key: {{bucket_private_key}}

- name: google-cloud-storage
  type: docker-image
  source:
    repository: frodenas/gcs-resource

...
- put: m2-bucket
  params:
    file: ./m2_tar/m2/m2.tar.gz

Any ideas on what could be wrong? Thank you in advance!

check failing

Failing with this error... not sure what to make of it – whether I've configured something wrongly, or whether there's a backwards-compatibility issue. Running via BOSH, v2.4.0.

resource script '/opt/resource/check []' failed: exit status 1

stderr:
error building GCS client: invalid character '\n' in string literal
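"invalid character '\n' in string literal" is what Go's JSON decoder reports when a raw newline appears inside a quoted string, which typically means the json_key was interpolated with unescaped line breaks (e.g. the multi-line private_key of a service-account key). The same input fails in any strict JSON parser; a quick Python check with a shortened, made-up key value:

```python
import json

# A service-account key whose private_key field contains a raw,
# unescaped newline: invalid JSON in every strict parser.
raw = '{"private_key": "-----BEGIN PRIVATE KEY-----\nMIIE..."}'
try:
    json.loads(raw)
except json.JSONDecodeError as err:
    print(err)  # Invalid control character ...

# With the newline escaped as \n inside the string, it parses fine.
escaped = '{"private_key": "-----BEGIN PRIVATE KEY-----\\nMIIE..."}'
print(json.loads(escaped)["private_key"].splitlines()[0])
```

So the fix is usually to make sure the pipeline variable holds the key JSON with literal \n escapes, not real line breaks.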

Allow for timestamp version numbers including microseconds

I received this error after adding microseconds to my timestamp:

panic: version number was not valid: Expected component '20190310162427027000' from version segment '20190310162427027000' to be a parseable integer: strconv.Atoi: parsing "20190310162427027000": value out of range

I was using this Python snippet to generate the version from my timestamp:

datetime.datetime.now().strftime("%Y%m%d%H%M%S%f")

https://docs.python.org/2/library/datetime.html#strftime-strptime-behavior
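The panic is an integer overflow: Concourse parses each version segment with strconv.Atoi, and a %Y%m%d%H%M%S%f timestamp has 20 digits, which exceeds the 19-digit range of Go's int on 64-bit platforms. Dropping %f (or truncating to milliseconds) keeps the segment parseable; a quick check of the arithmetic:

```python
import datetime

INT64_MAX = 2**63 - 1  # largest value Go's strconv.Atoi accepts on 64-bit

micro = 20190310162427027000          # %Y%m%d%H%M%S%f -> 20 digits
print(micro > INT64_MAX)              # True: overflows, hence the panic

# Seconds-precision timestamps stay well inside the range.
seconds = int(datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
print(seconds <= INT64_MAX)           # True: 14 digits fit comfortably
```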

Updating versioned_file resource in two jobs at the same time results as same generation number

Hi, in our pipeline we have two jobs that update the same GCS resource. The resource is a versioned_file:

- name: gcs-cluster-info
  type: gcs
  source:
    json_key: ((gcs-json-key))
    bucket: ((pks.pipeline_store))
    versioned_file: ((pks.version))/cluster/info.yml

resource_types:
- name: gcs
  type: docker-image
  source:
    repository: frodenas/gcs-resource

- name: job1
  plan:
  - put: gcs-cluster-info
    params:
      file: cluster/info.yml

- name: job2
  plan:
  - put: gcs-cluster-info
    params:
      file: cluster/info.yml

In these two pipeline jobs, gcs-cluster-info has the same generation: gs://pipeline-store/1.9.x/cluster/info.yml#1598352097430055
https://pks.ci.cf-app.com/teams/bosh-lifecycle-dev/pipelines/pks-api-1.9.x-bump-kubernetes-1.18.8/jobs/create-cluster-test-gcp/builds/1
https://pks.ci.cf-app.com/teams/bosh-lifecycle-dev/pipelines/pks-api-1.9.x-bump-kubernetes-1.18.8/jobs/create-cluster-test-gcp-ha/builds/1

Has anyone seen this?

Version 0.5.0 failed with "No such file or directory"

Hi,

we just noticed that our gcs resources using latest (0.5.0) started failing their checks with the following output:

Backend error: Exit status: 500, message: {"Type":"","Message":"runc exec: exit status 1: exec failed: container_linux.go:345: starting container process caused \"no such file or directory\"\n","Handle":"","ProcessID":"","Binary":""}

Thanks,

cc @davewalter

Q: How to retrieve the file name in a task

Hi,
I am new to Concourse and don't have much experience with resources. I am using the gcs-resource. Is there a way to retrieve the (file) name in my job?

This might not be the right place to ask the question but any help would be great.

Thanks!

regexp not using latest version

When using regexp, it does not pull the latest version of the file. The issue persists whether or not bucket versioning is enabled. Would it be possible to default to using the latest file version when using regexp? Or to provide a new resource parameter enabling that behavior?

For example, given a resource

  - name: saved_cluster_env_files
    type: gcs
    source:
      json_key: ((gcs-service-account-key))
      bucket: some-bucket
      regexp: sub-bucket/cluster_env_files(.*).tar.gz

the first job in my pipeline pushes a file called cluster_env_files_MyBranchName.tar.gz. Subsequent jobs then pull that same resource. However, if the pipeline gets re-flown or the first job is re-run, the following jobs match the correct filename but pull down a previous version of it. This causes those jobs to fail.

`skip_download` param in the implicit get when put

When putting a large object to GCS, it is wasteful to download the object again from the bucket because of the implicit get. Could we provide a param, e.g. skip_download, to skip downloading the file and just retrieve its metadata (e.g. generation and url)?

Use version from previous get-resource

We use this repo to version our npm cache. The package.json file does not change with every push to the repository, so we would like to check the package.json resource version and fetch that one. Using the value of .git/short_ref from the git-resource would be very useful for us.

I imagine something like this:

resources:
- name: app-npm-cache
  type: gcs-resource
  source:
    regexp: app-npm-cache/node-modules-v(.*).tar

- name: package-json-version
  type: git
  source:
    paths:
      - package.json

jobs:
- name: build
  plan:
  - get: github-source
    trigger: true # Perform on changes of the source code
  - get: package-json-version
  - get: app-npm-cache
    version: { version_from_file: "package-json-version/.git/short_ref" }

Let's say the short_ref has the value 1234; then gcs-resource would search for the file app-npm-cache/node-modules-v1234.tar.
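The lookup being proposed is essentially a string substitution into the regexp's capture position; a hypothetical sketch (version_from_file is a proposed parameter, not something gcs-resource implements, and the template string is taken from the example above):

```python
def resolve_version(short_ref: str,
                    template: str = "app-npm-cache/node-modules-v{}.tar") -> str:
    """Substitute the git short_ref into the cache filename pattern.

    Hypothetical helper illustrating the proposed version_from_file
    behaviour; the real resource would read short_ref from the
    package-json-version/.git/short_ref input file.
    """
    return template.format(short_ref)

print(resolve_version("1234"))  # app-npm-cache/node-modules-v1234.tar
```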
