atc's People

Contributors

aeijdenberg, amitkgupta, andrewedstrom, chendrix, clarafu, databus23, dylangriffith, edtan, evashort, fmy, jmcarp, jmelchio, joshzarrabi, jtarchie, luan, mainephd, mariash, mhuangpivotal, nwdenton, pivotal-ahirji, robdimsdale, shyx0rmz, tanglisha, tanzeeb, timsimmons, tobocop, vito, xenophex, xoebus, zachgersh

atc's Issues

atc crashes with "too many open files"

Hi,

we are running Concourse 0.74, and our atc jobs on the "web" nodes frequently crash with:

failed to enable connection keepalive: file tcp 10.1.6.0:33296->10.1.6.12:7777: fcntl: too many open files
failed to enable connection keepalive: file tcp 10.1.6.0:33297->10.1.6.12:7777: fcntl: too many open files
failed to enable connection keepalive: file tcp 10.1.6.0:33299->10.1.6.12:7777: fcntl: too many open files
failed to enable connection keepalive: file tcp 10.1.6.0:33303->10.1.6.12:7777: fcntl: too many open files
failed to enable connection keepalive: file tcp 10.1.6.0:33314->10.1.6.12:7777: fcntl: too many open files
failed to enable connection keepalive: file tcp 10.1.6.0:35453->10.1.6.12:7777: fcntl: too many open files
failed to enable connection keepalive: file tcp 10.1.6.0:35455->10.1.6.12:7777: fcntl: too many open files
failed to enable connection keepalive: file tcp 10.1.6.0:35456->10.1.6.12:7777: fcntl: too many open files
failed to enable connection keepalive: file tcp 10.1.6.0:35457->10.1.6.12:7777: fcntl: too many open files
failed to enable connection keepalive: file tcp 10.1.6.0:35458->10.1.6.12:7777: fcntl: too many open files

As a workaround, we call "monit restart atc". Do you know the root cause of this problem? Please tell us if you need more system information for analysis.
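
As a quick diagnostic (this is only a sketch, not part of atc, and the limits are whatever our "web" VM happens to have configured), "fcntl: too many open files" usually means the process has exhausted its RLIMIT_NOFILE soft limit, which a small Go program run on the node can confirm:

package main

import (
    "fmt"
    "syscall"
)

func main() {
    // Print the soft and hard open-file limits, for comparison against the
    // number of established connections atc is holding (e.g. per lsof).
    var lim syscall.Rlimit
    if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
        panic(err)
    }
    fmt.Printf("open file limit: soft=%d hard=%d\n", lim.Cur, lim.Max)
}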

Thanks and Best Regards,

Jochen.

Collapsing sections of build output

It'd be great to have the equivalent of this Buildkite feature.

This would let us specify logical stages of our build output that could be collapsed. For example:

echo '--- bundling'
bundle check || bundle install
<lots of bundler output>
.
.
.
echo '--- preparing DB'
service mysql start
bundle exec rake db:migrate
<lots of migration output>
.
.
.
echo '--- running tests'
bundle exec rspec
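
A rough sketch of how the marker convention could be interpreted (purely illustrative; the "--- " prefix is borrowed from Buildkite and none of this is existing atc code), scanning the build output and treating any line that starts with "--- " as the beginning of a new collapsible section:

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

func main() {
    scanner := bufio.NewScanner(os.Stdin)
    for scanner.Scan() {
        line := scanner.Text()
        if strings.HasPrefix(line, "--- ") {
            // a new collapsible section begins; everything until the next
            // marker belongs to it
            fmt.Printf(">>> section: %q\n", strings.TrimPrefix(line, "--- "))
            continue
        }
        fmt.Printf("    %s\n", line)
    }
}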

Resource icons

It would be nice to have icons associated with resources. That way it's easy to see at a glance which type of resource is being input or output, and it avoids having to include the type in the resource name.

No timestamps in ATC error logs

No timestamps when looking at atc.stderr.log, e.g.:

failed to set keepalive count: setsockopt: bad file descriptor
failed to set keepalive interval: setsockopt: bad file descriptor

Timestamps are in atc.stdout.log, e.g.:

{"timestamp":"1442945883.213434696","source":"atc","message":"atc.p-redis-deployments:radar.scanner-failed","log_level":2,"data":{"error":"multiple containers found, expected one: 0o59l29o642, 0o546uc66si","member":"p-redis-deployments:bosh-lite-site-changes","session":"1493318"}}

Resource returning empty response causes build log to keep 'loading'

When an 'out' resource returns an empty object {}, the currently open build view renders it just fine. But after reloading the build view, the build status overview only shows a bar with 'loading' and a spinner.

The JavaScript console throws this error:

failed to fetch plan: UnexpectedPayload ("expecting an object but got null"): ({ build = { id = 106, name = "35", job = Just { name = "build", pipelineName = "pylib" }, status = Succeeded, duration = { startedAt = Just {}, finishedAt = Just {} } }, steps = Nothing, errors = Nothing, state = StepsLoading, context = { events = Address <function>, buildStatus = Address <function> }, eventSource = Nothing, eventSourceOpened = False },None)

When the resource returns an object with an empty 'version' ({"version": {}}), this problem does not happen.
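
A minimal sketch of why the two payloads behave differently (the struct below is illustrative, not atc's actual type): an empty object leaves the version field as a nil map, which re-serializes as JSON null, exactly the value the UI rejects, while {"version": {}} yields an empty but non-null object.

package main

import (
    "encoding/json"
    "fmt"
)

// versionResult loosely mirrors the payload an out script emits.
type versionResult struct {
    Version map[string]string `json:"version"`
}

func main() {
    for _, raw := range []string{`{}`, `{"version": {}}`} {
        var result versionResult
        if err := json.Unmarshal([]byte(raw), &result); err != nil {
            panic(err)
        }
        // A nil map re-marshals as null; an empty map stays {}.
        out, _ := json.Marshal(result.Version)
        fmt.Printf("%-18s -> version re-marshals as %s\n", raw, out)
    }
}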

Differentiate failing (no version) inputs in pipeline flow diagram

We had an issue today where we had misconfigured our pipelines in such a way that they were checking OK (the git repository they pointed at existed) but were failing to find any refs/versions (the paths we told Concourse to watch were bogus). The only way to find these half-broken inputs was to click on each input (and there were about a dozen total) and see which ones said "checked successfully" but did not list any found versions.

It would be helpful in cases like these if the web UI behaved as follows:

  1. On input pages that have never received a version, put a note to that effect, rather than just displaying nothing.
  2. On the pipeline diagram, modify the display of inputs that have no versions, either by making them red, pulsating, etc. Combined with point 1, clicking on the "failing" input would make the root cause of the problem more apparent.

Option to make pipeline drawer expanded by default

Currently the collapsed pipeline 'menu' effectively hides the other pipelines from newcomers to the CI until they discover it. It would be nice to be able to have the menu shown by default (keeping the hidden/shown toggle state in browser storage), or to set a pipeline overview as the starting page instead of the main pipeline.

Get error "could not read file from tar"

I set up Concourse on AWS using the standalone binaries (http://concourse.ci/binaries.html) and I am getting the error 'could not read file from tar' when executing my pipeline.

If I run the same pipeline against a Concourse set up in a local Vagrant VM, it works.

This is my configured image resource:

image_resource:
  type: docker-image
  source: {repository: ubuntu}

There is no further error message in the logs, and from the source code it seems the original error is swallowed.

What is the cause of this error, so that I can get Concourse working?

Alternate to Cmd-R to always refresh to the current active/finished job

Currently the only way to refresh the page is Cmd-R on a Mac, or clicking a build number at the top of the screen.

But if a job has finished, and there are new builds running/finished, they don't show up automatically and cannot be selected. So you still need to Cmd-R first. Then click on a new build number.

Instead, could we have a keybinding to "refresh to current active or last finished build" please?

Bonus: have the keybinding redirect to a URL endpoint, e.g. jobs/foobar/latest, that could be placed in a dashboard (the dashboard could go to this URL every minute to ensure it's showing the latest build).

scheduling of pending builds should be independent from enqueueing builds

Enqueueing builds can be slow because it takes one big, expensive query to determine candidate inputs. Unfortunately, when it's slow it also prevents pending builds from being scheduled, because we run the two operations one after the other on a single interval (currently 10sec).

We should ultimately fix the slowness, but until then a cheap improvement is to split these two operations into independently running loops with their own intervals (still 10sec). That way, if one loop is blocked, the other at least keeps working.

see https://github.com/concourse/atc/blob/5edf3322c6ed9d3141cb282f6bb5c4f12fdf4ba0/scheduler/runner.go#L99
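
A minimal sketch of the proposed split (illustrative only, not the actual runner.go): two loops on independent tickers, so a slow candidate-input query can no longer starve the scheduling of builds that are already pending.

package main

import "time"

func main() {
    interval := 10 * time.Second

    // Loop 1: schedule builds that are already pending. This is cheap, so it
    // should never be starved by the expensive enqueueing work.
    go func() {
        for range time.Tick(interval) {
            scheduleAllPendingBuilds()
        }
    }()

    // Loop 2: determine candidate inputs and enqueue new builds. This is the
    // expensive part, and now it can only delay itself.
    for range time.Tick(interval) {
        enqueueNewBuilds()
    }
}

func scheduleAllPendingBuilds() { /* kick pending builds for every job */ }
func enqueueNewBuilds()         { /* run the candidate-input query per job */ }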

UI bug(?) using ensure with a do block

We're testing out the ensure feature and found some UI inconsistencies when using it with a do block. It works fine and looks like we'd expect when both tasks in the do block fail, or when only the first task fails, but the case where the last task fails and the first passes looks odd in the UI.

screen shot 2015-07-17 at 11 57 59 am

Here's the YAML for the pipeline:

---
resources:

jobs:
  - name: make-it-happen
    plan:
      - do: 
        - task: beep
          config:
            platform: linux
            image: docker:///ubuntu
            run:
              path: /bin/bash
              args: ["-c", "exit 0"]
        - task: boop
          privileged: true
          config:
            platform: linux
            image: docker:///ubuntu
            run:
              path: /bin/bash
              args: ["-c", "exit 1"]
        ensure:
          task: moop
          config:
              platform: linux
              image: docker:///ubuntu
              run:
                path: /bin/bash
                args: ["-c", "ls"]

Add a favicon

I pin a Concourse tab in all my browsers because CONCOURSE IS LIFE.

But a tab for an app without a favicon looks a little sad 😿

A favicon would be really useful. I would offer to submit a PR but we all know what happens to my PRs.

Support runtime configuration passing between out/in on resources

There are times when you want to specify runtime configuration during the out phase of a resource. This configuration typically belongs in the params rather than in the resource's source. Because the resource's in is performed directly after the out, this runtime configuration from the params is lost.

One example in particular is the bosh-deployment resource, where the BOSH target can be specified with target_file in the params. Because this runtime configuration is scoped only to the out, a decision was made to include the target from the target_file in the version output so it is available to the subsequent in. This seems counter to the intent of what makes up a version. Instead, I believe it makes sense to pass along the params (or a new piece of data called runtime) as well as the version between these actions. This would allow the configuration to be passed separately from the version, which is less confusing.
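
A sketch of the proposed shape (the type and function names are illustrative, not atc's actual ones): the out step's result would carry the runtime configuration alongside the version, and the implicit get that follows a put would receive both.

package putget

// PutResult is what an out step would hand back under this proposal.
type PutResult struct {
    Version map[string]string // what actually identifies the artifact
    Runtime map[string]string // e.g. the target read from target_file during out
}

// implicitGet sketches the get performed directly after a put: today only the
// version reaches /opt/resource/in, so runtime data has to be smuggled into
// the version; here it travels separately.
func implicitGet(result PutResult) {
    runIn(result.Version, result.Runtime)
}

func runIn(version, runtime map[string]string) {
    // invoke /opt/resource/in with both payloads
}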

build pages timing out on (new elm-based) UI

This is on 0.70.1-rc.4

Our build pages for complex builds (many colored dots/lots of output) are timing out. Attached are screenshots of network timings from two different loads. The first 'events' entry takes ~43 seconds, and the second one times out at about a minute or so.

screen shot 2015-12-29 at 10 55 46 am
screen shot 2015-12-29 at 10 54 04 am

Viewing an old build doesn't correctly show newer builds

When I'm on build 120/163 the top bar shows up like this:

screen shot 2015-08-25 at 5 53 52 pm

When I move to build 140 it changes to this:
screen shot 2015-08-25 at 5 54 00 pm

It seems like the first one is broken, since it makes it look like 121 is the most recent build. I don't know what the best solution would be, since you won't be able to fit all those builds on the screen, but it should be clear which build you are on and where it falls in the history of builds.

Task inputs/outputs copy/move actions

A common pattern is a simple task command which builds artifacts inside its source directory. Because inputs and outputs can't be nested, these artifacts then need to be copied or moved out of the source directory into a separate output directory. This adds the need to modify or wrap the simple command to account for the CI directory layout, making it less CI-agnostic.

This also applies to inputs where the command needs to be (made) aware of the CI directory structure to find the input artifact.

An antipattern I now apply is building tasks like this:

---
platform: linux

image_resource:
  type: docker-image
  source:
    repository: ubuntu
    tag: '14.04'

inputs:
  - name: my-repo
  - name: version

outputs:
  - name: dist

run:
  path: /bin/sh
  args:
      - -c
      - cp -v version/version dist/version; /usr/bin/make -C src test build;mv -v src/dist/*.tar.gz dist/

My suggestion is to add actions to the inputs/outputs which override the default directory-nesting error and allow files to be copied or moved out.

Example:

---
platform: linux

image_resource:
  type: docker-image
  source:
    repository: ubuntu
    tag: '14.04'

inputs:
- name: my-repo

outputs:
- name: artifact
  copy: my-repo/dist/

run:
  path: my-repo/scripts/test

my-repo/scripts/test would generate an artifact my-repo/dist/build.tar.gz, which would then be copied to artifact/build.tar.gz.

Or possibly support globbing to allow artifact extraction without subdirectories:

---
platform: linux

image_resource:
  type: docker-image
  source:
    repository: ubuntu
    tag: '14.04'

inputs:
- name: my-repo

outputs:
- name: artifact
  copy: my-repo/*.tar.gz

run:
  path: my-repo/scripts/test

where my-repo/scripts/test would generate an artifact my-repo/build.tar.gz, which would then be copied to artifact/build.tar.gz.
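
For illustration, this is roughly what the proposed copy action would have to do once the task finishes (the package and function names are made up; this is a sketch of the idea, not an atc implementation):

package taskoutputs

import (
    "io"
    "os"
    "path/filepath"
)

// copyGlobToOutput globs inside the source/input directory and copies every
// match into the declared output directory.
func copyGlobToOutput(pattern, outputDir string) error {
    matches, err := filepath.Glob(pattern) // e.g. "my-repo/*.tar.gz"
    if err != nil {
        return err
    }
    for _, src := range matches {
        dst := filepath.Join(outputDir, filepath.Base(src)) // e.g. artifact/build.tar.gz
        if err := copyFile(src, dst); err != nil {
            return err
        }
    }
    return nil
}

func copyFile(src, dst string) error {
    in, err := os.Open(src)
    if err != nil {
        return err
    }
    defer in.Close()
    out, err := os.Create(dst)
    if err != nil {
        return err
    }
    defer out.Close()
    _, err = io.Copy(out, in)
    return err
}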

deal more gracefully with workers dropping out of the pool temporarily

The BOSH team ran into an issue where their builds started erroring immediately with "multiple containers found: handle-a, handle-b". This happened when running the checks for all input resources. If we find multiple containers when searching by what should be uniquely identifying information (in this case, roughly pipeline name + resource name + resource config), we intentionally blow up, as something weird must have happened.

In this case, it ultimately came down to the worker taking too long to heartbeat:

Jul 21 13:09:27 worker-3 groundcrew:  2015/07/21 20:09:26 heartbeat took 1.111563ms
Jul 21 13:09:57 worker-3 groundcrew:  2015/07/21 20:09:56 heartbeat took 20.089841ms
Jul 21 13:10:27 worker-3 groundcrew:  2015/07/21 20:10:26 heartbeat took 6.510488ms
Jul 21 13:10:57 worker-3 groundcrew:  2015/07/21 20:10:56 heartbeat took 6.690761ms
Jul 21 13:12:33 worker-3 groundcrew:  2015/07/21 20:12:32 heartbeat took 1m5.971438976s
Jul 21 13:13:03 worker-3 groundcrew:  2015/07/21 20:13:02 heartbeat took 1.260327ms
Jul 21 13:13:32 worker-3 groundcrew:  2015/07/21 20:13:32 heartbeat took 118.188899ms
Jul 21 13:14:02 worker-3 groundcrew:  2015/07/21 20:14:02 heartbeat took 1.113901ms
Jul 21 13:14:32 worker-3 groundcrew:  2015/07/21 20:14:32 heartbeat took 1.112643ms

The botched heartbeat happened at 13:12:33, during which the worker would have dropped out of the pool (its TTL having expired). If we run a check during that window, we won't find the containers on that worker and will create another. When the worker comes back, we'll have two containers.

How should we handle this? Nuke one of the containers? Prevent action if all workers aren't in the pool? (Which we can't know ahead of time.)

Attempting to enqueue a build when asked to log back in fails

If you are logged out and press (+), you are redirected to log back in; the redirect back then loses the method (POST or PUT) of the original request and does a GET instead. This lands you on a page that says "method not allowed". This can be confusing, as it may make users think they can't log in.

Sidebar Scrolling broken in 0.76.1+ (incl. 1.0.1)

Seems the header is interfering with the ability to see the bottom n pixels of the pipeline sidebar in scrolling mode.

To reproduce:

  1. go to http://ci.concourse.ci/
  2. resize the browser viewport so that queue-once and area51 are no longer visible
  3. try to scroll down to access the area51 pipeline (last in the list)

UI is unusable with too many / too long groups

If you have too many groups and they wrap, they overlay the menu at a higher z-index, and items on the menu are unclickable.

I've created a bookmarklet with some styling hacks that helps some. Before and after screenshots are attached.

Styling probably needs a holistic look, but perhaps at least the z-index/menu positioning fixes could be incorporated sooner rather than later to fix the menu usability issues.

javascript:(function() {
  /* jQuery's .css() ignores a third argument, so apply !important via
     style.setProperty instead */
  var setImportant = function(selector, props) {
    $(selector).each(function() {
      var el = this;
      $.each(props, function(prop, value) {
        el.style.setProperty(prop, value, 'important');
      });
    });
  };
  setImportant('.nav-item, nav .groups li a', {
    'font-size': '12px',
    'font-family': 'sans-serif',
    'max-height': '40px',
    'max-width': '150px',
    'line-height': '40px',
    'overflow': 'hidden',
    'white-space': 'nowrap',
    'text-overflow': 'ellipsis',
    'padding': '0 2px'
  });
  setImportant('div .nav-container', { 'z-index': '4', 'top': '40px' });
  setImportant('.pipelines li', { 'margin-top': '-34px' });
})();

screen shot 2015-12-29 at 12 14 33 am
screen shot 2015-12-29 at 12 14 13 am

Support for Pipeline Group Separators

When you get to having a lot of pipelines, grouping them becomes a way to draw your eye directly to the section of the side panel your pipeline resides in.

We've done this in a hacky way and it looks like this:

screen shot 2016-03-31 at 11 29 28 am

It would be nice to officially support this so the separator doesn't have to be a paused, blank pipeline.

Better information when resource returns empty version list

We have been trying to use the Concourse S3 resource to hold some state and have been running into some issues that proved hard to diagnose. We think that when a resource returns an empty array of versions, atc gets confused and the job is shown as pending.

We have a pipeline containing:

  • an S3 resource pointing to a bucket where we keep our terraform state
  • a job which runs the terraform command and writes the state to the S3 bucket

resources:
- name: s3-bucket
  type: s3
  source:
    access_key_id: {{aws_key}}
    bucket: piotr-state
    region_name: eu-west-1
    secret_access_key: {{secret_key}}
    versioned_file: vpc.tfstate
jobs:
- name: test-s3
  serial: true
  plan:
  - get: s3-bucket
  - task: echo
    config:
      image: docker:///governmentpaas/bosh-init
      inputs:
      - name: s3-bucket
        path: ""
      run:
        path: date

This is a cut-down version of our pipeline, which runs on concourse-lite and is
used to bootstrap our real concourse installation.

When we run this for the first time, the S3 bucket contains no files matching the versioned_file parameter passed to the S3 resource, i.e. the resource is "empty" and has no versions.

  • There is no obvious error when clicking on the S3 resource in the UI and the
    job is shown as pending.
  • There is no output from the resource visible in the UI, nothing to indicate
    what is going wrong.

We did a bit of debugging and found that the call to check passed in a null
version, as might be expected from reading
http://concourse.ci/implementing-resources.html.

The output from the S3 resource check executable contained an empty array of versions.

When a resource is empty, how should it respond to a check? With an empty array
of versions or a single null version?

If a resource does respond in a way that's unexpected or not allowed by
atc/concourse, we think it would be helpful to make the job show as failed
along with a reason. We spent a fair amount of time thinking that the job
itself was hanging.
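
As a sketch of the behaviour we are asking for (names are illustrative, not current atc code): parse the check output strictly and surface anything unexpected, or the "no versions yet" case, somewhere visible instead of leaving the job silently pending.

package checkparse

import (
    "encoding/json"
    "fmt"
)

type version map[string]string

// parseCheckOutput returns an error that could be shown on the resource/job
// page when the check script's output cannot be interpreted.
func parseCheckOutput(raw []byte) ([]version, error) {
    var versions []version
    if err := json.Unmarshal(raw, &versions); err != nil {
        return nil, fmt.Errorf("malformed check output: %s", err)
    }
    if len(versions) == 0 {
        // a legal "resource is empty" answer, but worth recording visibly
        fmt.Println("check succeeded but returned no versions")
    }
    return versions, nil
}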

Expose scheduler info for checks

It would be nice to be able to see on the check page for a resource:

  • when the check was last updated/failed
  • when the check will run next
  • whether a check is currently running

Deadlock detected acquiring build tracking lease

{"timestamp":"1444779382.015635014","source":"atc","message":"atc.build-tracker.track.failed-to-get-lease","log_level":2,"data":{"build":8302,"error":"pq: deadlock detected","session":"5.139138"}}

possibly related to memory contention on our vSphere cluster - @vito

plenty of other weirdness:

(several screenshots from 2015-10-13 attached)

ginkgo -r -race results in data race in resource suite

This is repeatable every time I run ginkgo -r -race on the resource directory:

[1430594457] Resource Suite - 64/64 specs •••••••••••••••••••••••==================
WARNING: DATA RACE
Read by goroutine 47:
  github.com/concourse/atc/resource_test.func·085()
      ~/workspace/concourse/src/github.com/concourse/atc/resource/resource_out_test.go:57 +0x4c
  github.com/cloudfoundry-incubator/garden/fakes.(*FakeProcess).Wait()
      ~/workspace/concourse/src/github.com/cloudfoundry-incubator/garden/fakes/fake_process.go:71 +0x220
  github.com/concourse/atc/resource.func·003()
      ~/workspace/concourse/src/github.com/concourse/atc/resource/run_script.go:134 +0x6b

Previous write by goroutine 7:
  github.com/concourse/atc/resource_test.func·086()
      ~/workspace/concourse/src/github.com/concourse/atc/resource/resource_out_test.go:51 +0x352
  github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/internal/leafnodes/runner.go:104 +0x11b
  github.com/onsi/ginkgo/internal/leafnodes.(*runner).run()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/internal/leafnodes/runner.go:63 +0xd3
  github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:14 +0x78
  github.com/onsi/ginkgo/internal/spec.(*Spec).runSample()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/internal/spec/spec.go:149 +0x362
  github.com/onsi/ginkgo/internal/spec.(*Spec).Run()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/internal/spec/spec.go:118 +0x1a9
  github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:144 +0x2fa
  github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:61 +0xb8
  github.com/onsi/ginkgo/internal/suite.(*Suite).Run()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/internal/suite/suite.go:59 +0x35b
  github.com/onsi/ginkgo.RunSpecsWithCustomReporters()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/ginkgo_dsl.go:203 +0x38f
  github.com/onsi/ginkgo.RunSpecs()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/ginkgo_dsl.go:184 +0x100
  github.com/concourse/atc/resource_test.TestResource()
      ~/workspace/concourse/src/github.com/concourse/atc/resource/resource_suite_test.go:30 +0xa8
  testing.tRunner()
      /usr/local/Cellar/go/1.4.2/libexec/src/testing/testing.go:447 +0x133

Goroutine 47 (running) created at:
  github.com/concourse/atc/resource.func·004()
      ~/workspace/concourse/src/github.com/concourse/atc/resource/run_script.go:140 +0xa66
  github.com/tedsuo/ifrit.RunFunc.Run()
      ~/workspace/concourse/src/github.com/tedsuo/ifrit/runner.go:36 +0x56
  github.com/concourse/atc/resource.func·002()
      ~/workspace/concourse/src/github.com/concourse/atc/resource/resource_out.go:33 +0x35c
  github.com/tedsuo/ifrit.RunFunc.Run()
      ~/workspace/concourse/src/github.com/tedsuo/ifrit/runner.go:36 +0x56
  github.com/concourse/atc/resource.(*versionedSource).Run()
      <autogenerated>:14 +0x97
  github.com/tedsuo/ifrit.(*process).run()
      ~/workspace/concourse/src/github.com/tedsuo/ifrit/process.go:71 +0x97

Goroutine 7 (running) created at:
  testing.RunTests()
      /usr/local/Cellar/go/1.4.2/libexec/src/testing/testing.go:555 +0xd4e
  testing.(*M).Run()
      /usr/local/Cellar/go/1.4.2/libexec/src/testing/testing.go:485 +0xe0
  main.main()
      github.com/concourse/atc/resource/_test/_testmain.go:54 +0x28c
==================
•••••••••••••••••••••••••••••••••==================
WARNING: DATA RACE
Read by goroutine 30:
  github.com/concourse/atc/resource_test.func·024()
      ~/workspace/concourse/src/github.com/concourse/atc/resource/resource_in_test.go:56 +0x4c
  github.com/cloudfoundry-incubator/garden/fakes.(*FakeProcess).Wait()
      ~/workspace/concourse/src/github.com/cloudfoundry-incubator/garden/fakes/fake_process.go:71 +0x220
  github.com/concourse/atc/resource.func·003()
      ~/workspace/concourse/src/github.com/concourse/atc/resource/run_script.go:134 +0x6b

Previous write by goroutine 7:
  github.com/concourse/atc/resource_test.func·025()
      ~/workspace/concourse/src/github.com/concourse/atc/resource/resource_in_test.go:50 +0x3fc
  github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/internal/leafnodes/runner.go:104 +0x11b
  github.com/onsi/ginkgo/internal/leafnodes.(*runner).run()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/internal/leafnodes/runner.go:63 +0xd3
  github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:14 +0x78
  github.com/onsi/ginkgo/internal/spec.(*Spec).runSample()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/internal/spec/spec.go:149 +0x362
  github.com/onsi/ginkgo/internal/spec.(*Spec).Run()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/internal/spec/spec.go:118 +0x1a9
  github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:144 +0x2fa
  github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:61 +0xb8
  github.com/onsi/ginkgo/internal/suite.(*Suite).Run()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/internal/suite/suite.go:59 +0x35b
  github.com/onsi/ginkgo.RunSpecsWithCustomReporters()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/ginkgo_dsl.go:203 +0x38f
  github.com/onsi/ginkgo.RunSpecs()
      ~/workspace/concourse/src/github.com/onsi/ginkgo/ginkgo_dsl.go:184 +0x100
  github.com/concourse/atc/resource_test.TestResource()
      ~/workspace/concourse/src/github.com/concourse/atc/resource/resource_suite_test.go:30 +0xa8
  testing.tRunner()
      /usr/local/Cellar/go/1.4.2/libexec/src/testing/testing.go:447 +0x133

Goroutine 30 (running) created at:
  github.com/concourse/atc/resource.func·004()
      ~/workspace/concourse/src/github.com/concourse/atc/resource/run_script.go:140 +0xa66
  github.com/tedsuo/ifrit.RunFunc.Run()
      ~/workspace/concourse/src/github.com/tedsuo/ifrit/runner.go:36 +0x56
  github.com/concourse/atc/resource.(*versionedSource).Run()
      <autogenerated>:14 +0x97
  github.com/tedsuo/ifrit.(*process).run()
      ~/workspace/concourse/src/github.com/tedsuo/ifrit/process.go:71 +0x97

Goroutine 7 (running) created at:
  testing.RunTests()
      /usr/local/Cellar/go/1.4.2/libexec/src/testing/testing.go:555 +0xd4e
  testing.(*M).Run()
      /usr/local/Cellar/go/1.4.2/libexec/src/testing/testing.go:485 +0xe0
  main.main()
      github.com/concourse/atc/resource/_test/_testmain.go:54 +0x28c
==================
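
For illustration only (this is not the actual fake or test code): the pattern the detector is flagging is the spec goroutine reconfiguring a fake while the goroutine spawned by run_script.go is still reading it through Wait(). Configuring the fake before the code under test starts, or guarding the shared state as below, removes the race.

package fakeprocess

import "sync"

// FakeProcess stands in for a test double whose behaviour is changed from the
// spec while another goroutine may already be calling Wait().
type FakeProcess struct {
    mu         sync.Mutex
    waitResult int
}

func (f *FakeProcess) SetWaitResult(code int) {
    f.mu.Lock()
    defer f.mu.Unlock()
    f.waitResult = code
}

func (f *FakeProcess) Wait() int {
    f.mu.Lock()
    defer f.mu.Unlock()
    return f.waitResult
}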

API authentication without using basic auth

It would be nice to be able to authenticate to the Concourse API in scripts without using basic auth. The current OAuth workflow can only happen in a browser that is logged into GitHub.

It was mentioned in Slack that there could be another auth type like personal access tokens. There are also other OAuth flows that might make sense.
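
As a sketch of what a token-based flow could look like for scripts (the CONCOURSE_TOKEN variable and the Bearer header are hypothetical; no such auth type exists today, the point is only that a token avoids both basic auth and the browser OAuth dance):

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "os"
)

func main() {
    req, err := http.NewRequest("GET", "https://ci.example.com/api/v1/pipelines", nil)
    if err != nil {
        panic(err)
    }
    // Hypothetical personal access token, issued once and stored by the script.
    req.Header.Set("Authorization", "Bearer "+os.Getenv("CONCOURSE_TOKEN"))

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := ioutil.ReadAll(resp.Body)
    fmt.Println(string(body))
}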

how to use github webhooks for triggering

Hi,

Is it possible, or planned, for GitHub webhooks to be used instead of polling for new commits on git repos? Our pipelines' polling creates far too much traffic against certain GitHub repos whose content rarely changes.
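
For illustration, a rough sketch of the idea (entirely hypothetical; no such endpoint exists in atc today): an HTTP handler that, on receiving a GitHub push webhook, immediately triggers a check of the matching resource instead of waiting for the next polling interval.

package main

import (
    "fmt"
    "net/http"
)

func main() {
    http.HandleFunc("/hooks/github", func(w http.ResponseWriter, r *http.Request) {
        resource := r.URL.Query().Get("resource") // e.g. ?resource=my-git-repo
        triggerResourceCheck(resource)
        w.WriteHeader(http.StatusOK)
        fmt.Fprintf(w, "check queued for %s\n", resource)
    })
    http.ListenAndServe(":8080", nil)
}

// triggerResourceCheck would tell the resource scanner to run a check for
// this resource right away rather than on its usual interval.
func triggerResourceCheck(name string) {}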

Best regards, Stephan Weber

Concourse UI should indicate that the connection to the server is down

If you are using Concourse on a wall display and the server dies, the page will continue to show the same view. Build animations keep running, so there is no way to tell that it will never update.

I suggest opening a modal that says "Communication with the server has been interrupted" when the server connection is down.

Support for non-container workloads

It would be nice if Concourse could run tests outside containers (which seem to be the main focus right now).

As an example, we have a Jenkins server that connects to a slave and starts a custom-built Vagrant VM that runs tests on GNOME 3 and other desktop environments (e.g. Windows). Sometimes these tests must be able to do things that only exist in a virtual machine or even a real desktop computer (e.g. audio).

Since Concourse relies on Garden heavily, and Garden seems to work exclusively with workers, it's not clear how to migrate our CI pipeline to Concourse.

A job with complicated `passed` constraints takes a very long time to determine its inputs

Today we execute a pretty expensive query to determine the candidate versions for a build's get steps, based on their passed constraints. If the constraints are complicated enough (either passed: [a0, a1, ... aN] or many inputs with correlated passed constraints), this kills Postgres, as more and more versions and more and more builds have to be scanned through.

The related database logic is (PipelineDB).GetLatestInputVersions. It already attempts some optimization (by splitting up the query to one per input, adding constraints as it goes along), but it's still very expensive.
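
For reference, a sketch of the per-input narrowing strategy described above (all names are illustrative, not the actual GetLatestInputVersions code): resolve one input at a time and feed the constraints found so far into the query for the next input, rather than running one giant join across every input and every upstream build.

package inputs

type constraint struct {
    job     string // upstream job named in a passed constraint
    buildID int    // build whose outputs must include the chosen version
}

// latestInputVersions accumulates constraints input by input.
func latestInputVersions(inputNames []string) map[string]int {
    chosen := map[string]int{}
    var constraints []constraint
    for _, name := range inputNames {
        versionID, extra := resolveInput(name, constraints)
        chosen[name] = versionID
        constraints = append(constraints, extra...)
    }
    return chosen
}

// resolveInput stands in for the per-input query against the versioned
// resources and builds tables.
func resolveInput(name string, constraints []constraint) (int, []constraint) {
    return 0, nil
}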

atc hijack: invalid memory address or nil pointer dereference

I am using the 0.68 concourse/lite Vagrant release.

On the web UI, my "src" resource somehow shows the following error:

checking failed
resource script '/opt/resource/check []' failed: exit status 255

stderr:
start process: container_daemon: connect to socket: unix_socket: connect to server socket: dial unix /var/vcap/data/garden/depot/6fav943qhns/run/wshd.sock: connection refused

If I run fly hijack while in that state, I get the following error in atc.error.log:

2015/11/25 16:51:23 http: panic serving 192.168.100.1:60867: runtime error: invalid memory address or nil pointer dereference
goroutine 163213 [running]:
net/http.(*conn).serve.func1(0xc8205400b0, 0x7f06fd07e4c0, 0xc8200285e0)
        /usr/local/go/src/net/http/server.go:1287 +0xb5
github.com/concourse/atc/api/containerserver.(*Server).hijack(0xc8202b6990, 0x7f06fd0ba548, 0xc820540160, 0xc8201ca986, 0xb, 0xc8204b7ef0, 0x4, 0x0, 0x0, 0x0, ...)
        /var/vcap/packages/atc/src/github.com/concourse/atc/api/containerserver/hijack.go:75 +0x180f
github.com/concourse/atc/api/containerserver.(*Server).HijackContainer(0xc8202b6990, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
        /var/vcap/packages/atc/src/github.com/concourse/atc/api/containerserver/hijack.go:49 +0x618
github.com/concourse/atc/api/containerserver.(*Server).HijackContainer-fm(0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
        /var/vcap/packages/atc/src/github.com/concourse/atc/api/handler.go:160 +0x3e
net/http.HandlerFunc.ServeHTTP(0xc8201b13d0, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
        /usr/local/go/src/net/http/server.go:1422 +0x3a
github.com/concourse/atc/auth.checkAuthHandler.ServeHTTP(0x7f06fd07a000, 0xc8201b13d0, 0x7f06fff0e640, 0xfd1130, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
        /var/vcap/packages/atc/src/github.com/concourse/atc/auth/check_auth_handler.go:22 +0x6b
github.com/concourse/atc/auth.(*checkAuthHandler).ServeHTTP(0xc8202bade0, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
        <autogenerated>:4 +0xb4
github.com/concourse/atc/auth.authHandler.ServeHTTP(0x7f06fff0e668, 0xc8202bade0, 0x7f06fff0e408, 0xfd1130, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
        /var/vcap/packages/atc/src/github.com/concourse/atc/auth/wrap_handler.go:28 +0x101
github.com/concourse/atc/auth.(*authHandler).ServeHTTP(0xc8202bae00, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
        <autogenerated>:16 +0xb4
github.com/bmizerany/pat.(*PatternServeMux).ServeHTTP(0xc8200280f0, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
        /var/vcap/packages/atc/src/github.com/bmizerany/pat/mux.go:109 +0x244
net/http.(*ServeMux).ServeHTTP(0xc820346c00, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
        /usr/local/go/src/net/http/server.go:1699 +0x17d
github.com/concourse/atc/auth.CookieSetHandler.ServeHTTP(0x7f06fff0ed90, 0xc820346c00, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
        /var/vcap/packages/atc/src/github.com/concourse/atc/auth/cookie_set_handler.go:35 +0x2be
github.com/concourse/atc/auth.(*CookieSetHandler).ServeHTTP(0xc82032d440, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
        <autogenerated>:5 +0xb6
github.com/gorilla/context.ClearHandler.func1(0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
        /var/vcap/packages/atc/src/github.com/gorilla/context/context.go:141 +0x85
net/http.HandlerFunc.ServeHTTP(0xc820344c40, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
        /usr/local/go/src/net/http/server.go:1422 +0x3a
net/http.serverHandler.ServeHTTP(0xc8202f3380, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
        /usr/local/go/src/net/http/server.go:1862 +0x19e
net/http.(*conn).serve(0xc8205400b0)
        /usr/local/go/src/net/http/server.go:1361 +0xbee
created by net/http.(*Server).Serve
        /usr/local/go/src/net/http/server.go:1910 +0x3f6

Description in pipeline web page

It would be nice to set a (multiline) description on a pipeline which would appear on the pipeline page. This would allow providing context about the pipeline as well as links to the GitHub page, documentation, etc.
