
builder's People

Contributors

aboyett, aledbf, apsops, arschles, bregor, cryptophobia, duanhongyi, felixbuenemann, helgi, jeroenvisser101, jgmize, joshua-anderson, kmala, krancour, krisnova, lshemesh, mattk42, mboersma, monaka, n0n0x, robinmonjo, slack, technosophos, vdice, zinuzoid


builder's Issues

Idea: Deployment Keys

From @deis-admin on January 19, 2017 23:54

From @scottrobertson on June 16, 2015 8:17

Hey,

It would be quite cool if we could add deployment keys to a project. The workaround right now is to create a deployment user, but that is a bit cumbersome.

When adding deployment keys, we should have the ability to supply our own, or have deis generate a keypair automatically (and return the public key so we can add it to CI).
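
For reference, a minimal sketch of the deployment-user workaround mentioned above, assuming a dedicated Workflow account for CI (the account name, controller URL, and key path are illustrative):

# generate a keypair dedicated to CI deploys (hypothetical file name)
ssh-keygen -t ed25519 -N "" -f ./deis_deploy_key
# log in as the dedicated deployment user and register its public key
deis login http://deis.example.com --username ci-deployer
deis keys:add ./deis_deploy_key.pub
# grant that user push access to the app
deis perms:create ci-deployer -a myapp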

Copied from original issue: deis/deis#3875

Copied from original issue: deis/builder#472

fatal: The remote end hung up unexpectedly

From @antoinedc on April 7, 2017 11:37

Hi,

I'm trying to push a codebase that's ~140 MB, and I keep getting the fatal: The remote end hung up unexpectedly error.

The idle timeout on my ELB is set to 3600 seconds (the maximum), and looking at other threads I couldn't find any solution other than raising this value.

Is there any other way to push the code with Deis?
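
For what it's worth, the usual workaround on a classic ELB is to raise the idle timeout, and 3600 seconds is indeed the ceiling. A sketch with the AWS CLI, using a placeholder load balancer name:

# raise the ELB idle timeout to its maximum of 3600 seconds
aws elb modify-load-balancer-attributes \
  --load-balancer-name my-deis-elb \
  --load-balancer-attributes '{"ConnectionSettings":{"IdleTimeout":3600}}'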

Copied from original issue: deis/builder#506

[META] Future build pipeline

From @arschles on January 21, 2016 17:56

This issue holds a discussion about future features to add to the build pipeline in the builder component.

Whiteboard discussion notes from 1/21/2016:

(whiteboard photo attachment not captured)

Copied from original issue: deis/builder#113

Deis does a duplicate build after pushing, uses different registry

From @bcokert on July 20, 2017 19:57

When I deploy my app in Deis (a Dockerfile app), it builds twice. The first time succeeds, and the second time fails. This doesn't seem to affect the resulting application (since it fails on step 1 of the Dockerfile), but it is still quite annoying.

> git push deis BRANCH
... regular output
Step 11 : CMD
 ---> Running in da8aa48f52c6
 ---> 39fc0d76f49b
Removing intermediate container da8aa48f52c6
Successfully built 39fc0d76f49b
Pushing to registry
Build complete.->
Launching App...
...
Done, enlightning-staging:v5 deployed to Workflow

Use 'deis open' to view this application in your browser

To learn more, use 'deis help' or visit https://deis.com/

Starting build... but first, coffee!
Step 1 : FROM companyRegistry/base/ubuntu-14.04:latest
Get https://companyRegistry/v1/_ping: dial tcp 52.0.154.17:443: i/o timeout
remote: 2017/07/20 18:41:14 Error running git receive hook [Build pod exited with code 1, stopping build.]
To ssh://internalDeisBuilder:2222/enlightning-staging.git
! [remote rejected] DEPLOY-PRODUCTION-2.0.1 -> master (pre-receive hook declined)
 ! [remote rejected] v0.2.0 -> v0.2.0 (pre-receive hook declined)
 ! [remote rejected] v0.2.1 -> v0.2.1 (pre-receive hook declined)
 ! [remote rejected] v0.2.2 -> v0.2.2 (pre-receive hook declined)
 ! [remote rejected] v0.2.3 -> v0.2.3 (pre-receive hook declined)
 ! [remote rejected] v0.3.0 -> v0.3.0 (pre-receive hook declined)
 ! [remote rejected] v0.4.0 -> v0.4.0 (pre-receive hook declined)
 ! [remote rejected] v0.4.1 -> v0.4.1 (pre-receive hook declined)
 ! [remote rejected] v0.4.2 -> v0.4.2 (pre-receive hook declined)
 ! [remote rejected] v0.4.3 -> v0.4.3 (pre-receive hook declined)
 ! [remote rejected] v0.4.4 -> v0.4.4 (pre-receive hook declined)
 ! [remote rejected] v0.4.5 -> v0.4.5 (pre-receive hook declined)
 ! [remote rejected] v0.5.0 -> v0.5.0 (pre-receive hook declined)
 ! [remote rejected] v0.6.0 -> v0.6.0 (pre-receive hook declined)
 ! [remote rejected] v0.7.0 -> v0.7.0 (pre-receive hook declined)
 ! [remote rejected] v0.8.0 -> v0.8.0 (pre-receive hook declined)
 ! [remote rejected] v0.8.1 -> v0.8.1 (pre-receive hook declined)
 ! [remote rejected] v0.8.2 -> v0.8.2 (pre-receive hook declined)
 ! [remote rejected] v0.9.0 -> v0.9.0 (pre-receive hook declined)
 ! [remote rejected] v0.9.1 -> v0.9.1 (pre-receive hook declined)
 ! [remote rejected] v0.9.2 -> v0.9.2 (pre-receive hook declined)
 ! [remote rejected] v0.9.3 -> v0.9.3 (pre-receive hook declined)
 ! [remote rejected] v1.0.0 -> v1.0.0 (pre-receive hook declined)
 ! [remote rejected] v1.1.0 -> v1.1.0 (pre-receive hook declined)
 ! [remote rejected] v1.2.0 -> v1.2.0 (pre-receive hook declined)
 ! [remote rejected] v1.3.0 -> v1.3.0 (pre-receive hook declined)
 ! [remote rejected] v1.3.1 -> v1.3.1 (pre-receive hook declined)
 ! [remote rejected] v1.3.2 -> v1.3.2 (pre-receive hook declined)
 ! [remote rejected] v1.3.3 -> v1.3.3 (pre-receive hook declined)
 ! [remote rejected] v1.3.4 -> v1.3.4 (pre-receive hook declined)
 ! [remote rejected] v1.3.5 -> v1.3.5 (pre-receive hook declined)
 ! [remote rejected] v1.4.0 -> v1.4.0 (pre-receive hook declined)
 ! [remote rejected] v1.5.0 -> v1.5.0 (pre-receive hook declined)
 ! [remote rejected] v1.6.0 -> v1.6.0 (pre-receive hook declined)
 ! [remote rejected] v1.6.1 -> v1.6.1 (pre-receive hook declined)
 ! [remote rejected] v1.6.2 -> v1.6.2 (pre-receive hook declined)
 ! [remote rejected] v1.6.3 -> v1.6.3 (pre-receive hook declined)
 ! [remote rejected] v1.7.0 -> v1.7.0 (pre-receive hook declined)
 ! [remote rejected] v1.7.1 -> v1.7.1 (pre-receive hook declined)
 ! [remote rejected] v1.7.2 -> v1.7.2 (pre-receive hook declined)
 ! [remote rejected] v1.8.0 -> v1.8.0 (pre-receive hook declined)
 ! [remote rejected] v1.8.1 -> v1.8.1 (pre-receive hook declined)
 ! [remote rejected] v1.8.2 -> v1.8.2 (pre-receive hook declined)
 ! [remote rejected] v1.8.3 -> v1.8.3 (pre-receive hook declined)
 ! [remote rejected] v1.9.0 -> v1.9.0 (pre-receive hook declined)
 ! [remote rejected] v1.9.1 -> v1.9.1 (pre-receive hook declined)
 ! [remote rejected] v1.9.2 -> v1.9.2 (pre-receive hook declined)
 ! [remote rejected] v1.9.3 -> v1.9.3 (pre-receive hook declined)
 ! [remote rejected] v1.9.4 -> v1.9.4 (pre-receive hook declined)
 ! [remote rejected] v1.9.5 -> v1.9.5 (pre-receive hook declined)
 ! [remote rejected] v2.0.0 -> v2.0.0 (pre-receive hook declined)
 ! [remote rejected] v2.0.1 -> v2.0.1 (pre-receive hook declined)
error: failed to push some refs to 'ssh://git@internalDeisBuilder:2222/enlightning.git'

Looking at the second build, you can see that it tries to use the company registry for the base image, which is where it fails. The successful build does not do this:

Step 1 : FROM ubuntu:14.04
 ---> 54333f1de4ed
Step 2 : MAINTAINER Brandon Okert <[email protected]>
 ---> Using cache
 ---> 9598ac5bd5dd
...

I do have both registries on my development machine:

> cat ~/.docker/config.json
{
	"auths": {
		"companyRegistryUrl": {
			"auth": "xxx",
			"email": "xxx"
		},
		"https://index.docker.io/v1/": {
			"auth": "xxx",
			"email": "xxx"
		}
	}
}

But I don't think this would affect Deis.

Also, it's strange that it reports all those tags, as if it's considering building each one. There's only the one Dockerfile in the branch that I'm pushing.

Cheers!

Copied from original issue: deis/builder#521

Builder checks /tmp/env vars before building and fails

We're trying to create new apps and the first push with the cache enabled gives the following:

Starting build... but first, coffee!
-----> Restoring cache...
       No cache file found. If this is the first deploy, it will be created now.
cp: cannot stat '/tmp/env/*': No such file or directory
remote: 2018/03/13 18:03:06 Error running git receive hook [Build pod exited with code 1, stopping build.]

With the cache disabled it works, but then deploys take upwards of 4 or 5 minutes due to the sheer number of Python deps. Thoughts?

"There's an if statement in the builder script that checks if the directory is empty, then it does that cp. I don't have it in front of me--90% of this investigation was yesterday--but I suspect it's failing to detect this case. And of course 99% of the time you've got env vars."

"I don't know if that's the underlying bug, just that that's what fixed it. I suspect DEIS_DISABLE_CACHE worked because those are passed through to the app, not because there's a problem with caching."

Google PROTOCOL_ERROR

From @scottrobertson on March 19, 2017 2:25

Hey

Getting the following error when trying to deploy:

2017/03/19 02:22:48 Post https://www.googleapis.com/storage/v1/b?alt=json stream error: stream ID 3; PROTOCOL_ERROR

This is intermittent. Not sure if this is Google's issue or something on my end. Any ideas?

Deployments have been working fine. This error started to happen last night.

Copied from original issue: deis/builder#497

Cannot use AWS IAM roles instead of AWS access keys

From @Akshaykapoor on February 16, 2017 14:35

I upgraded my cluster from Workflow v2.10.0 to v2.11.0. For this upgrade I changed the storage backend to be off-cluster on S3.

My values.yaml looks something like below. I've also given full S3 access to the nodes. Nothing failed during installation, except that my registry and builder components are in CrashLoopBackOff with the following errors:

registry-logs

2017/02/16 13:58:53 INFO: Starting registry...
2017/02/16 13:58:53 INFO: using s3 as the backend
2017/02/16 13:58:53 open /var/run/secrets/deis/registry/creds/accesskey: no such file or directory

Builder-logs

2017/02/16 13:58:29 Running in debug mode
2017/02/16 13:58:29 Error creating storage driver (AccessDenied: Access Denied
	status code: 403, request id: DEB87202BB385735)

Is there a way to explicitly tell Workflow not to use accessKey and secretKey in the values.yaml file?

The YAML file says that if you leave them blank it will use IAM roles. I'm not sure it's actually using IAM roles, because the registry log shows it trying to open the credentials directory.

Am I missing something, or is the only way to go about this to provide accessKey and secretKey?
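
One way to verify, independently of Workflow, that the nodes actually expose IAM role credentials is to query the EC2 instance metadata service from a node; a debugging sketch:

# from an EC2 node: list the instance-profile role, then fetch its temporary credentials
ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/${ROLE}"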

values.yaml

# This is the global configuration file for Workflow

global:
  # Set the storage backend
  #
  # Valid values are:
  # - s3: Store persistent data in AWS S3 (configure in S3 section)
  # - azure: Store persistent data in Azure's object storage
  # - gcs: Store persistent data in Google Cloud Storage
  # - minio: Store persistent data on in-cluster Minio server
  storage: s3

  # Set the location of Workflow's PostgreSQL database
  #
  # Valid values are:
  # - on-cluster: Run PostgreSQL within the Kubernetes cluster (credentials are generated
  #   automatically; backups are sent to object storage
  #   configured above)
  # - off-cluster: Run PostgreSQL outside the Kubernetes cluster (configure in database section)
  database_location: "off-cluster"

  # Set the location of Workflow's logger-specific Redis instance
  #
  # Valid values are:
  # - on-cluster: Run Redis within the Kubernetes cluster
  # - off-cluster: Run Redis outside the Kubernetes cluster (configure in loggerRedis section)
  logger_redis_location: "on-cluster"

  # Set the location of Workflow's influxdb cluster
  #
  # Valid values are:
  # - on-cluster: Run Influxdb within the Kubernetes cluster
  # - off-cluster: Influxdb is running outside of the cluster and credentials and connection information will be provided.
  influxdb_location: "on-cluster"
  # Set the location of Workflow's grafana instance
  #
  # Valid values are:
  # - on-cluster: Run Grafana within the Kubernetes cluster
  # - off-cluster: Grafana is running outside of the cluster
  grafana_location: "on-cluster"

  # Set the location of Workflow's Registry
  #
  # Valid values are:
  # - on-cluster: Run registry within the Kubernetes cluster
  # - off-cluster: Use registry outside the Kubernetes cluster (example: dockerhub,quay.io,self-hosted)
  # - ecr: Use Amazon's ECR
  # - gcr: Use Google's GCR
  registry_location: "on-cluster"
  # The host port to which registry proxy binds to
  host_port: 5555
  # Prefix for the imagepull secret created when using private registry
  secret_prefix: "private-registry"


s3:
  # Your AWS access key. Leave it empty if you want to use IAM credentials.
  accesskey: ""
  # Your AWS secret key. Leave it empty if you want to use IAM credentials.
  secretkey: ""
  # Any S3 region
  region: "us-east-1"
  # Your buckets.
  registry_bucket: "REDACTED"
  database_bucket: "REDACTED"
  builder_bucket: "REDACTED"

azure:
  accountname: "YOUR ACCOUNT NAME"
  accountkey: "YOUR ACCOUNT KEY"
  registry_container: "your-registry-container-name"
  database_container: "your-database-container-name"
  builder_container: "your-builder-container-name"

gcs:
  # key_json is expanded into a JSON file on the remote server. It must be
  # well-formatted JSON data.
  key_json: <base64-encoded JSON data>
  registry_bucket: "your-registry-bucket-name"
  database_bucket: "your-database-bucket-name"
  builder_bucket: "your-builder-bucket-name"

swift:
  username: "Your OpenStack Swift Username"
  password: "Your OpenStack Swift Password"
  authurl: "Swift auth URL for obtaining an auth token"
  # Your OpenStack tenant name if you are using auth version 2 or 3.
  tenant: ""
  authversion: "Your OpenStack swift auth version"
  registry_container: "your-registry-container-name"
  database_container: "your-database-container-name"
  builder_container: "your-builder-container-name"

# Set the default (global) way of how Application (your own) images are
# pulled from within the Controller.
# This can be configured per Application as well in the Controller.
#
# This affects pull apps and git push (slugrunner images) apps
#
# Valid values are:
# - Always
# - IfNotPresent
controller:
  app_pull_policy: "IfNotPresent"
  # Possible values are:
  # enabled - allows for open registration
  # disabled - turns off open registration
  # admin_only - allows for registration by an admin only.
  registration_mode: "enabled"

database:
  # The username and password to be used by the on-cluster database.
  # If left empty they will be generated using randAlphaNum
  username: ""
  password: ""
  # Configure the following ONLY if using an off-cluster PostgreSQL database
  postgres:
    name: "database name"
    username: "database username"
    password: "database password"
    host: "database host"
    port: "database port"

redis:
  # Configure the following ONLY if using an off-cluster Redis instance for logger
  db: "0"
  host: "redis host"
  port: "redis port"
  password: "redis password" # "" == no password

fluentd:
  syslog:
    # Configure the following ONLY if using Fluentd to send log messages to both
    # the Logger component and external syslog endpoint
    # external syslog endpoint url
    host: ""
    # external syslog endpoint port
    port: ""

monitor:
  grafana:
    user: "admin"
    password: "admin"
  # Configure the following ONLY if using an off-cluster Influx database
  influxdb:
    url: "my.influx.url"
    database: "kubernetes"
    user: "user"
    password: "password"

registry-token-refresher:
  # Time in minutes after which the token should be refreshed.
  # Leave it empty to use the default provider time.
  token_refresh_time: ""
  off_cluster_registry:
    hostname: ""
    organization: ""
    username: ""
    password: ""
  ecr:
    # Your AWS access key. Leave it empty if you want to use IAM credentials.
    accesskey: ""
    # Your AWS secret key. Leave it empty if you want to use IAM credentials.
    secretkey: ""
    # Any S3 region
    region: "us-west-2"
    registryid: ""
    hostname: ""
  gcr:
    key_json: <base64-encoded JSON data>
    hostname: ""

router:
  dhparam: ""
  # Any custom router annotations(https://github.com/deis/router#annotations)
  # which need to be applied can be specified as key-value pairs under "deployment_annotations"
  deployment_annotations:
    #<example-key>: <example-value>

  # Any custom annotations for k8s services like http://kubernetes.io/docs/user-guide/services/#ssl-support-on-aws
  # which need to be applied can be specified as key-value pairs under "service_annotations"
  service_annotations:
    #<example-key>: <example-value>

  # Enable to pin router pod hostPort when using minikube or vagrant
  host_port:
    enabled: true

  # Service type default to LoadBalancer
  # service_type: LoadBalancer

workflow-manager:
  versions_api_url: https://versions.deis.com
  doctor_api_url: https://doctor.deis.com

Copied from original issue: deis/builder#486

Ability to configure global `prefix/` for Deis-created Docker repositories

From @mariusmarais on June 2, 2017 10:31

We're using the AWS ECR as our registry, of which you can have only one per account.

We want to run multiple Kubernetes clusters (with Deis on top), which causes trouble because the repository names inside the registry are all flat and cause collisions when DeisA and DeisB both have the same app name. (DeisA and DeisB are running completely separately, on different clusters with separate storage and domains.)

Is it possible to create a namespace or prefix/ for Deis-created repositories, so that app1 becomes deisA/app1 ? If that works correctly, it should also be possible to use IAM to segregate access to the repositories based on the prefix.

Copied from original issue: deis/builder#519

Idea: Asynchronous/detached builds

From @ineu on February 2, 2017 12:55

Sometimes builds can take quite a while, depending on the project size, dependencies, network speed, general cluster utilization, etc. All this time the connection opened for git push must stay open, which is not very convenient.
It would be nice to have a way to run builds in the background. It could work this way:

  1. User sets some option on the app to make deploys async. Maybe an ENV var.
  2. User performs git push
  3. After the push is finished, receive hooks/scripts do not run the build but rather create some kind of background task which performs a build (k8s job?)
  4. The deis command should have some additional subcommand to track the deployment process. It will return the deployment status and exit, so no long-running connections.
  5. It can also have some kind of 'watch' command to connect to the controller and wait for the deploy to finish.

Another option is to allow detaching from the build process as Heroku does: https://devcenter.heroku.com/articles/git#detach-from-build-process. This approach avoids step 1 above, but to me it looks somewhat confusing.

Sorry if this question has been asked before, didn't find it with a quick search.

Copied from original issue: deis/builder#476

Resource limits for buildPod?

From @chexxor on July 19, 2017 22:15

My Kubernetes cluster's nodes are relatively small. I SSH into a node on which my builder pods run and, using top, I see the "node" process (building a Node.js app) at 100% for a while. I suspect this is the reason my builder pod is being evicted. Can we pass some configuration options to the slugbuilder pod that deis/builder spawns to limit the resources it can use?

Copied from original issue: deis/builder#520

ssh_exchange_identification: Connection closed by remote host

From @pfeodrippe on March 30, 2017 19:38

I was pushing my app to production and

...
...
...
...
...
Done, example-web-production:v7 deployed to Workflow

Use 'deis open' to view this application in your browser

To learn more, use 'deis help' or visit https://deis.com/

fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly

Now I can't push more releases

ssh_exchange_identification: Connection closed by remote host
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

Copied from original issue: deis/builder#503

git push errors could be more helpful than "permission denied"

From @deis-admin on January 19, 2017 23:51

From @ianblenke on September 4, 2014 20:33

This is a recurring problem for everyone. For various reasons, either no SSH key is available for the push that matches a user (the user forgot to add their key, their key isn't on their agent keyring, etc.), or the ssh-agent presents a key for another user that doesn't have access to push to an app.

While it is fine to talk people through "ssh-add -l" and "ssh-add -D" and "ssh-add ~/.ssh/id_rsa-deis", that's just the first layer. The second layer is telling people to use "deis keys:list" and "deis keys:add" to add the key for their user.

It would help to return two possible error messages during a git push that is rejected:

No user was found for any of your presented keys. Please try "deis keys:list" and "deis keys:add"

and

You presented a key for jsmith, but that user does not have shared permissions for the app you are pushing to.

These two things would help users realize what is going on and maybe take action themselves without having to resort to asking for support.
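
Until better messages exist, a quick diagnostic sketch for checking which key the client actually presents versus which keys Workflow knows about:

# keys currently loaded in the local ssh-agent
ssh-add -l
# keys registered for your Workflow user
deis keys:list
# verbose SSH output shows which key the builder accepts or rejects
GIT_SSH_COMMAND="ssh -v" git push deis master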

Copied from original issue: deis/deis#1772

Copied from original issue: deis/builder#469

Allow namespacing of private registry images

Right now, if the ECR registry is namespaced in values.yml like this:

    registryid: "373147738271"
    hostname: "373147738271.dkr.ecr.us-east-1.amazonaws.com/team_hephy"

The deis-builder still creates the image repo in ECR without the namespace.

Error:

Successfully built 2641d630d4e2
Pushing to registry
name unknown: The repository with name 'team_hephy/slack' does not exist in the registry with id '318742728771'
remote: 2018/07/17 16:15:49 Error running git receive hook [Build pod exited with code 1, stopping build.]

The deis-builder will not be able to push because it has no notion of namespacing the images right now. This needs to be fixed in the code.
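
As a possible stopgap, the namespaced repository could be created by hand before pushing, since ECR requires repositories to exist up front; a sketch using the AWS CLI and the repository name from the error above:

# pre-create the namespaced ECR repository so the builder's push can succeed
aws ecr create-repository \
  --repository-name team_hephy/slack \
  --region us-east-1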

Proposal: send logs directly from build pods back to the builder

From @arschles on February 29, 2016 19:35

Note: I believe others have suggested a similar or identical solution to this problem in the past. Hopefully this issue solidifies those ideas.

Rel deis/builder#185
Rel deis/builder#199
Rel #298

Problem Statement

As of this writing, the builder does the following to do a build:

  1. Launch a builder pod (slugbuilder or dockerbuilder)
  2. Poll the k8s API for the pod's existence
  3. Begin streaming pod logs after the pod exists

We've found issues with this approach, all of which stem from the fact that the pod may not be reported as running during any polling event. This is a race condition, from which so far we've found the following symptoms:

  1. The pod has started & completed inside of one polling interval
    1. Attempted solution in deis/builder#185. Note that this will not address the problem laid out in (2)
  2. The pod has started, completed and been garbage collected inside of one polling interval
    1. Temporary fix that relies on internal k8s GC implementation at: deis/builder#206

Solution Details

Because of this race condition, we can't rely on polling, and even if we successfully use the event stream (#185), k8s GC doesn't guarantee that pod logs will still be available after the pod is done. This proposal calls for the builder pod to stream its logs back to the builder that launched it.

Here are the following changes (as of this writing) that would need to happen to make this work:

  1. Each git-receive hook process runs a websocket server (on a unique port, assigned by the builder SSH server) that accepts incoming logs from the builder pod. It uses these logs for the following purposes:
    1. Writes them to STDOUT (for the builder to write back to the SSH connection)
    2. Look for a FINISHED message that indicates the builder pod is done
  2. Each git-receive process launches builder pods with its "phone-home" IP and port, which is the websocket server that they should write their logs to
  3. The builder pods now include a program that launches the builder logic (a shell script for slugbuilder and a Python program for dockerbuilder). This program's purpose is to:
    1. Stream STDOUT & STDERR via a websocket connection to the phone-home address
    2. Send a FINISHED message when the builder logic exits

After the builder's git-receive hook receives the FINISHED message, or after a generous timeout, it can shut down the websocket server and continue with the logic it already has. The builder would no longer need to rely on polling the k8s API if this proposal were implemented.

Copied from original issue: deis/builder#207

git submodules hook/support ala Heroku

From @deis-admin on January 19, 2017 23:35

From @azurewraith on June 2, 2014 20:18

I attempted to reproduce some Heroku'esque workflow involving a vendored Redmine plugin and found this:

https://devcenter.heroku.com/articles/git-submodules

90% sure it probably wouldn't have worked for me because git submodules are a PITA but it seems like Heroku handles this in their post-receive hook before delegating to the buildpack...

Copied from original issue: deis/deis#1094

Copied from original issue: deis/builder#468

Builder fails to resolve host

After git push, during the Docker build, the build fails to resolve deb.debian.org:

 ---> Using cache
 ---> 5e8494e15701
Step 4/19 : RUN apt-get update &&     apt-get install -y apt-transport-https ca-certificates vim &&     curl -sS https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - &&     echo "deb http://apt.postgresql.org/pub/repos/apt/ stretch-pgdg main" | tee /etc/apt/sources.list.d/pgdg.list &&     rm -rf /var/lib/apt/lists/*
 ---> Running in cad63ad42fd8
Err:1 http://deb.debian.org/debian buster InRelease
  Temporary failure resolving 'deb.debian.org'
Err:2 http://security.debian.org/debian-security buster/updates InRelease
  Temporary failure resolving 'security.debian.org'
Err:3 http://deb.debian.org/debian buster-updates InRelease
  Temporary failure resolving 'deb.debian.org'
Reading package lists...
W: Failed to fetch http://deb.debian.org/debian/dists/buster/InRelease  Temporary failure resolving 'deb.debian.org'
W: Failed to fetch http://security.debian.org/debian-security/dists/buster/updates/InRelease  Temporary failure resolving 'security.debian.org'
W: Failed to fetch http://deb.debian.org/debian/dists/buster-updates/InRelease  Temporary failure resolving 'deb.debian.org'
W: Some index files failed to download. They have been ignored, or old ones used instead.
Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package vim
The command '/bin/sh -c apt-get update &&     apt-get install -y apt-transport-https ca-certificates vim &&     curl -sS https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - &&     echo "deb http://apt.postgresql.org/pub/repos/apt/ stretch-pgdg main" | tee /etc/apt/sources.list.d/pgdg.list &&     rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100
remote: 2020/04/17 13:42:22 Error running git receive hook [Build pod exited with code 1, stopping build.]
To ssh://deis-builder.x.x.x.x.nip.io:2222/demo-server.git
 ! [remote rejected]   ENGG-3881 -> ENGG-3881 (pre-receive hook declined)
error: failed to push some refs to 'ssh://[email protected]:2222/demo-server.git'

There are no errors in the builder pod logs:


deis deis-builder-57cf7db484-64x99 deis-builder Accepted connection.
deis deis-builder-57cf7db484-64x99 deis-builder Starting ssh authentication
deis deis-builder-57cf7db484-64x99 deis-builder Channel type: session
deis deis-builder-57cf7db484-64x99 deis-builder
deis deis-builder-57cf7db484-64x99 deis-builder Key='LANG', Value='C.UTF-8'
deis deis-builder-57cf7db484-64x99 deis-builder
deis deis-builder-57cf7db484-64x99 deis-builder Key='LC_ALL', Value='en_US.UTF-8'
deis deis-builder-57cf7db484-64x99 deis-builder
deis deis-builder-57cf7db484-64x99 deis-builder Key='LC_CTYPE', Value='UTF-8'
deis deis-builder-57cf7db484-64x99 deis-builder
deis deis-builder-57cf7db484-64x99 deis-builder receiving git repo name: demo-server.git, operation: git-receive-pack, fingerprint: ee:02:70:18:75:c4:23:6c:38:d6:11:13:81:4e:6a:c8, user: test
deis deis-builder-57cf7db484-64x99 deis-builder creating repo directory /home/git/demo-server.git
deis deis-builder-57cf7db484-64x99 deis-builder writing pre-receive hook under /home/git/demo-server.git
deis deis-builder-57cf7db484-64x99 deis-builder git-shell -c git-receive-pack 'demo-server.git'
deis deis-builder-57cf7db484-64x99 deis-builder Waiting for git-receive to run.
deis deis-builder-57cf7db484-64x99 deis-builder Waiting for deploy.
deis deis-builder-57cf7db484-64x99 deis-builder Deploy complete.

If I SSH into the builder pod and try to resolve it, it works:

root@deis-builder-57cf7db484-64x99:/# host deb.debian.org
deb.debian.org is an alias for debian.map.fastly.net.
debian.map.fastly.net has address 151.101.158.133
debian.map.fastly.net has IPv6 address 2a04:4e42:24::645
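
Since the builder pod itself resolves the name fine, the failure is likely happening inside the containers that docker build spawns, which use the Docker daemon's DNS settings rather than the pod's. A debugging sketch of things worth checking on the node (file paths depend on the node image):

# resolver the Docker daemon hands to build containers (run on the node)
cat /etc/resolv.conf
# any explicit DNS override configured for the daemon
cat /etc/docker/daemon.json 2>/dev/null
# confirm the cluster DNS pods are healthy
kubectl -n kube-system get pods -l k8s-app=kube-dns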

Builder proceeds if slugrunner pod is evicted

From @chexxor on March 16, 2017 18:53

My slugrunner pod is quite often evicted due to low compute resources on the node.

An example log of a git push of a buildpack build.

[chexxor@fedora myapp]$ git push ssh://[email protected]:2222/myapp-master.git master
Counting objects: 4129, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3757/3757), done.
Writing objects: 100% (4129/4129), 5.39 MiB | 4.75 MiB/s, done.
Total 4129 (delta 2821), reused 474 (delta 288)
remote: Resolving deltas: 100% (2821/2821), done.
Starting build... but first, coffee!
-----> Restoring cache...
       Done!
-----> Node.js app detected
       
-----> Creating runtime environment
       
       NPM_CONFIG_LOGLEVEL=error
       NPM_CONFIG_PRODUCTION=true
       NODE_ENV=production
       NODE_MODULES_CACHE=false
       
-----> Installing binaries
       engines.node (package.json):  4.6.0
       engines.npm (package.json):   2.15.x
       
       Downloading and installing node 4.6.0...
       Resolving npm version 2.15.x via semver.io...
       Downloading and installing npm 2.15.11 (replacing version 2.15.9)...
       
-----> Restoring cache
       Skipping cache restore (disabled by config)
       
-----> Building dependencies
       Running heroku-prebuild
       
       > [email protected] heroku-prebuild /tmp/build
       > echo "Prebuild steps running..."
       
       Installing node modules (package.json)
Build complete.
Launching App...
...
...
...
...
...ote: 
...
...
...
...ote: 
...
...
...
...
...
...
...
...
...
...
...
...
Done, myapp-master:v76 deployed to Workflow

Note that the slugbuilder pod was evicted while executing the -----> Building dependencies step. I believe this because the logs produced by this step should be hundreds of lines, and the following buildpack steps don't appear, like "-----> Caching build" and "-----> Build succeeded!".

Despite this slugbuilder pod failing, the builder process continues and prints "Build complete.", skipping the failed pod check.

I upgraded my workflow just a few days ago, so I believe I have the latest versions of these components.

Copied from original issue: deis/builder#496

Ruby buildpack failures after updating to workflow 2.13.0

From @nathansamson on April 10, 2017 14:20

This weekend we updated to workflow 2.13, mainly to fix the problem of a crashing builder every so often.

But this had the unintended side effect that deploying our Rails app did not work anymore, as it seems to ignore environment variables during building.
Let me explain:

We need the CURL_TIMEOUT environment variable to be set, as for some reason our European cluster has a very bad download time to Amazon S3 overseas and will always fail with the default of 30s.
This should still work according to: https://github.com/heroku/heroku-buildpack-ruby/blob/master/lib/language_pack/fetcher.rb#L38

The only solution to get it working for us is to manually pin BUILDPACK_URL to v149 (it's very well possible that v150, for example, might also work, but I believe v149 is the one we used before the Workflow update).

I can't really imagine this is a bug in deis builder, as the v149 buildpack clearly respects my settings, but on the other hand I am not sure why the env variable wouldn't work anymore in the newer buildpack...

Any ideas/suggestions?
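
For anyone hitting the same thing, the pinning workaround described above looks roughly like this (the timeout value and buildpack tag are examples, not confirmed-good values):

# raise the fetch timeout and pin the Ruby buildpack to a known-good tag
deis config:set CURL_TIMEOUT=120 \
  BUILDPACK_URL=https://github.com/heroku/heroku-buildpack-ruby.git#v149 \
  -a myapp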

Copied from original issue: deis/builder#508

Pushing any app without env variables fails

It was reported again today, on the Hephy slack by user @edison, that the teamhephy/example-python-django example fails when the basic instructions are followed.

Starting build... but first, coffee!
-----> Restoring cache...
      No cache file found. If this is the first deploy, it will be created now.
cp: cannot stat ‘/tmp/env/*’: No such file or directory
remote: 2018/09/03 02:43:49 Error running git receive hook [Build pod exited with code 1, stopping build.]

This was confirmed after testing to be the same issue we've seen with any app that gets pushed and is lacking any set environment variables. The telling error message is:

cp: cannot stat ‘/tmp/env/*’: No such file or directory

What that error is telling you is that it tried to enumerate the environment variables from the config and didn't find any trace of them. The workaround is deis config:set FOO=bar (e.g. set any environment variable, even if your app doesn't need one; it doesn't matter what it is). Then push a release again and wait for the builder to finish.

Hopefully this helps until we can fix it properly and get it into a release! Users should not be required to set up dummy env variables just to make the builder happy. This must have been a regression in one of the final Deis Workflow releases; I don't remember it always being a problem, and none of the example apps recommend adding such dummy environment variables.

Feature Request: Ability to skip Dockerfile builds

From @scottrobertson on April 28, 2017 18:56

Apologies if this is not in the correct format for a feature request.

Recently I have been trying to deploy an open source project to Deis that contains a Dockerfile. I would love a way to have Deis skip the Dockerfile deployment and just revert back to standard buildpacks. There are a few reasons for this:

  • Mainly because the buildpack deploys tend to be much faster for me
  • My k8s cluster is set up via Stackpoint, and I am having a lot of issues getting the registry working

My hacky solution right now is just to create a deis branch, remove the Dockerfile, and push that to Deis, merging master into that branch as I need to update (sketched below).
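
The branch-based workaround, written out as a sketch (branch and remote names are illustrative):

# keep a Dockerfile-less branch that Workflow will build with buildpacks
git checkout -b deis-deploy master
git rm Dockerfile
git commit -m "Remove Dockerfile so the builder falls back to buildpacks"
git push deis deis-deploy:master
# later: pick up new changes from master and redeploy
git checkout deis-deploy && git merge master && git push deis deis-deploy:master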

Copied from original issue: deis/builder#512

List of issues with running 2 or more builder pods in a cluster

From @arschles on January 7, 2016 22:10

This issue tracks the current limitations of builder as a clustered system

  • It uses a filesystem based lockfile to prevent 2 or more git pushes at the same time
    • In a multi-builder setup, this means that multiple git pushes can in fact happen at the same time
    • To solve, the lock will need to be distributed
  • It keeps git repositories on disk and runs pre-receive hooks on them to deploy
    • In a multi-builder setup, this means that one git push may only have to push a few refs, while another identical git push may have to push all refs. Not the end of the world
    • To solve, the refs list will need to be moved off of local disk to a shared location (object storage?)

Copied from original issue: deis/builder#86

Feature request: support for deploying directly from a git remote

From @deis-admin on January 19, 2017 21:5

From @olalonde on March 24, 2016 20:25

It would be nice if the controller supported deploying apps by specifying a git remote URL. e.g. deis pull https://github.com/deis/example-nodejs-express.git or deis deploy https://github.com/deis/example-nodejs-express.git. That would make it easier to eventually have a "Deploy to Deis" button like Heroku.

Copied from original issue: deis/deis#4994

Copied from original issue: deis/builder#466

Feature Request: Add SSH key to slugbuilders via kubernetes secrets

From @ROYBOTNIK on May 11, 2017 12:21

We're able to provide an SSH key to slugbuilder pods created via the deis builder by setting an SSH_KEY variable for the app. This works well for things like bundling private github repos during the build, but the downside is that anyone who has access to the app has access to the SSH private key.

This isn't very secure. For example, if someone leaves an organization and they grabbed the SSH key at some point, they would still have access to whatever that SSH key is used for. In many cases this will give read-only access to something like github. To ensure that their access has been revoked, we would need to rotate this key for each app that uses it.

It would be much better if we could use a kubernetes secret to provide the key. It could be specified in values.yaml and passed as part of the slugbuilder env when builder creates one. This would give better access control and make it so we don't have to set the SSH_KEY variable for each app that needs to use it.

I can work on a PR if this sounds like a good idea.
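
For illustration, the kind of secret such a feature would consume could be created like this; the secret name and key file are hypothetical, and builder does not read this today:

# create a namespaced secret holding the build-time SSH private key
kubectl --namespace deis create secret generic builder-ssh-key \
  --from-file=ssh-privatekey=./id_rsa_deploy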

Copied from original issue: deis/builder#515

Use build args to capture build time data

From @jchauncey on November 10, 2016 17:52

Acceptance Criteria:

  • Use the --build-arg flag when running docker build to pass in the following items (and more if needed):
  • BUILD_DATE
  • VERSION

You will need to do the following in the Dockerfile to persist the data into the image:

ARG VERSION
ARG BUILD_DATE
ENV VERSION $VERSION
ENV BUILD_DATE $BUILD_DATE
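
On the docker build side this amounts to something like the following (the values shown are examples):

# pass build metadata into the image at build time
docker build \
  --build-arg VERSION=v2.4.1 \
  --build-arg BUILD_DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  -t myorg/myapp:v2.4.1 .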

Copied from original issue: deis/builder#450

[ERROR] Failed handshake: read tcp ...

Accepted connection.
---> [ERROR] Failed handshake: read tcp 10.244.0.91:2223->10.244.0.1:49157: read: connection reset by peer
---> [ERROR] Failed handshake: read tcp 10.244.0.91:2223->10.240.0.4:51527: read: connection reset by peer
Accepted connection.
---> [ERROR] Failed handshake: read tcp 10.244.0.91:2223->10.240.0.5:62983: read: connection reset by peer
Accepted connection.

In deis/deis#4431 (V1 PaaS) this message was supposedly demoted to a debug warning, but in the current release of builder it appears to be pretty clearly labeled as a scary [ERROR]

I had done a complete wipe of my deis namespace/Helm release and a rotation of all credentials and keys, was up and running with a database restored from bucket storage, and an unrelated DNS/hosts problem was preventing me from reaching the builder. This error led me down a rabbit hole of trying to figure out whether there was a database record of the public key somewhere and whether some controller component that needed to communicate with the builder couldn't.

It would be nice to clarify somehow in these messages that they are really just health checks hitting the builder pod. I'm not sure if this is a regression from V1 PaaS or if it looks exactly how it did after deis/deis#4431 had been merged, or what, but it seems like the behavior is in an undesirable state.

git push doesn't show slug build logs

From @gemoya on May 8, 2017 19:0

Hi,

I have a Kubernetes v1.5.0 provided by rancher:v1.4.3 with Deis Workflow 2.14

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5+", GitVersion:"v1.5.0-115+611cbb22703182", GitCommit:"611cbb22703182611863beda17bf9f3e90afa148", GitTreeState:"clean", BuildDate:"2017-01-13T18:03:00Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

My problem is: when I run 'git push deis', I get stuck at:

Counting objects: 102, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (58/58), done.
Writing objects: 100% (102/102), 22.81 KiB | 0 bytes/s, done.
Total 102 (delta 39), reused 102 (delta 39)
remote: Resolving deltas: 100% (39/39), done.
Starting build... but first, coffee!

The slugbuilder pod is launched and completes successfully; you can manually view the logs that were supposed to be streamed (kubectl logs slugbuild-xxxxxx).

$ kubectl -n deis logs slugbuild-rara-e91bdc46-8c9e375b
-----> Restoring cache...
       No cache file found. If this is the first deploy, it will be created now.
-----> Go app detected
-----> Fetching jq... done
-----> Checking Godeps/Godeps.json file.
-----> Installing go1.7.5
-----> Fetching go1.7.5.linux-amd64.tar.gz... done
-----> Running: go install -v -tags heroku .
       github.com/deis/example-go
-----> Discovering process types
       Procfile declares types -> web
-----> Checking for changes inside the cache directory...
       Files inside cache folder changed, uploading new cache...
       Done: Uploaded cache (82M)
-----> Compiled slug size is 1.9M

Finally the app is deployed and works, but the git push command never ends, so the client cannot know whether the app is ready or not.

On the other hand, the builder logs are:

receiving git repo name: rara.git, operation: git-receive-pack, fingerprint: 82:b4:09:7c:b9:ac:e1:b1:4b:0d:f3:7e:79:3f:ad:bb, user: admin
creating repo directory /home/git/rara.git
writing pre-receive hook under /home/git/rara.git
git-shell -c git-receive-pack 'rara.git'
Waiting for git-receive to run.
Waiting for deploy.
---> ---> ---> ---> ---> ---> ---> ---> [ERROR] Failed git receive: Failed to run git pre-receive hook:  (signal: broken pipe)
Cleaner deleting cache home/rara/cache for app rara
Cleaner deleting slug /home/rara:git-e91bdc46 for app rara

And if I go inside the builder pod and try to debug it, I see this in the pod's process list:

root       300  0.0  0.0  91316  3512 ?        S    04:01   0:00 git receive-pack asdf.git
root       308  0.0  0.0  18104  2872 ?        S    04:01   0:00  \_ /bin/bash hooks/pre-receive
root       309  0.1  0.4 167260 35388 ?        Sl   04:01   0:59      \_ boot git-receive
root       310  0.0  0.0  18108   336 ?        S    04:01   0:00      \_ /bin/bash hooks/pre-receive
root       311  0.0  0.0  15428  1108 ?        S    04:01   0:00          \_ sed s/^/.[1G/

So my idea is: the builder isn't receiving the log stream from wherever it should come from, and then I get a broken pipe because the builder keeps listening forever.
I don't know exactly which component it uses to get the logs. I think fluentd takes the log output of all containers, but I don't know how the builder requests the slugbuilder logs.

Deis Workflow is deployed entirely on-cluster; with off-cluster Redis/object storage the problem also persists.

An output of my deis workflow pods

$ kubectl -n deis get pods
NAME                                     READY     STATUS    RESTARTS   AGE
deis-builder-3550604618-14fq8            1/1       Running   0          16h
deis-controller-3566093518-gs23q         1/1       Running   3          16h
deis-database-223698169-qqwn7            1/1       Running   0          16h
deis-logger-343314728-9jsr7              1/1       Running   2          16h
deis-logger-fluentd-vhhbd                1/1       Running   0          16h
deis-logger-redis-394109792-tj6fv        1/1       Running   0          16h
deis-minio-676004970-144jz               1/1       Running   0          16h
deis-monitor-grafana-740719322-pvd02     1/1       Running   0          16h
deis-monitor-influxdb-2881832136-7xd6c   1/1       Running   0          16h
deis-monitor-telegraf-wgzfv              1/1       Running   1          16h
deis-nsqd-3764030276-rqbs7               1/1       Running   0          16h
deis-registry-245622726-c9p9c            1/1       Running   1          16h
deis-registry-proxy-2c7tv                1/1       Running   0          16h
deis-router-2483473170-c375l             1/1       Running   0          16h
deis-workflow-manager-1893365363-v3rfv   1/1       Running   0          16h

extra info:
My kubelet running options:

kubelet --kubeconfig=/etc/kubernetes/ssl/kubeconfig --api_servers=https://kubernetes.kubernetes.rancher.internal:6443 --allow-privileged=true --register-node=true --cloud-provider=rancher --healthz-bind-address=0.0.0.0 --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --network-plugin=cni --network-plugin-dir=/etc/cni/managed.d --authorization-mode=AlwaysAllow --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0

The app logs can be viewed using the deis CLI:

$ deis logs -a rara
2017-05-08T18:13:59+00:00 deis[controller]: INFO config rara-1e3114e updated
2017-05-08T18:13:59+00:00 deis[controller]: INFO admin created initial release
2017-05-08T18:13:59+00:00 deis[controller]: INFO appsettings rara-1aaf45d updated
2017-05-08T18:13:59+00:00 deis[controller]: INFO domain rara added
2017-05-08T18:19:02+00:00 deis[controller]: INFO build rara-52988eb created
2017-05-08T18:19:02+00:00 deis[controller]: INFO admin deployed e91bdc4

Any idea how to get this working properly?
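
As a workaround while the stream is broken, the build output can be followed directly from the slugbuild pod while the push hangs (the pod name changes per build):

# find the in-flight build pod and tail its logs
kubectl -n deis get pods | grep slugbuild
kubectl -n deis logs -f slugbuild-rara-e91bdc46-8c9e375b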

Copied from original issue: deis/builder#514

Use a cheaper way to ping the Kubernetes API in the health check endpoint

From @arschles on February 12, 2016 18:3

After #149, the health check server (/healthz) will be listing all namespaces in the cluster as a way to ping the Kubernetes API to determine whether it's available. This method of checking availability is more heavyweight than it has to be, since the API server should also have a healthz endpoint. Some code similar to the following should be used to check that endpoint:

// kubeClient is a *(k8s.io/kubernetes/pkg/client/unversioned).Client
res := kubeClient.Get().AbsPath("/healthz").Do()
if res.Error() != nil {
  // not possible to reach the server
}

This idea was first proposed by @aledbf in https://github.com/deis/builder/pull/149/files#r52692363

Copied from original issue: deis/builder#180

[META] Future runner capabilities

From @arschles on January 21, 2016 17:57

This issue holds a discussion about future features and capabilities to add to apps that run on the Deis platform. Discussions herein may relate to deis/workflow, deis/slugbuilder, deis/slugrunner and others.

Whiteboard discussion notes from 1/21/2016:

(whiteboard photo attachment not captured)

Copied from original issue: deis/builder#114

Builder pods not removed after deploy

From @felixbuenemann on February 20, 2017 21:44

Currently (as of deis-builder v2.7.1) the slugbuild and dockerbuild pods are not deleted after a successful or failed build.

This means that the pod (e.g. slugbuild-example-e24fafeb-b31237bb) will continue to exist in state "Completed" or state "Error", and the Docker container associated with the pod can never be garbage collected by Kubernetes, causing the node to quickly run out of disk space.

Example:

On a k8s node with an uptime of 43 days and 95 GB of disk storage for Docker, there were 249 completed (or erred) slugbuild and dockerbuild pods whose Docker images accounted for 80 GB of disk storage, while the deployed apps and Deis services only required 15 GB.

Expected Behavior:

The expected behavior for the builder would be that it automatically deletes the build pod after it has completed or erred, so that the K8s garbage collection can remove the Docker containers, which frees the disk space allocated to them.
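
Until the builder cleans up after itself, a manual sweep along these lines reclaims the space (a sketch using GNU xargs; review what the filter selects before deleting in your own cluster):

# delete finished slugbuild/dockerbuild pods so Kubernetes can GC their containers
kubectl -n deis get pods --no-headers \
  | awk '/^(slugbuild|dockerbuild)-/ && ($3 == "Completed" || $3 == "Error") {print $1}' \
  | xargs -r kubectl -n deis delete pod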

Copied from original issue: deis/builder#487

Allow git pushes to respond with an informative failure message

From @arschles on February 16, 2016 21:37

If a downstream dependency of builder is not reachable, git pushes should respond with an informative error message to the user. Since #149 adds probes, and those probes check downstream dependencies, the behavior immediately after that PR is that builder will be shut down by Kubernetes.

The implementation to close this issue will likely be to move the actual health check logic into the builder. There of course may be others.

Ref deis/builder#149 (comment)

Copied from original issue: deis/builder#183

Protect pushes from wrong branch

From @nathansamson on January 13, 2017 10:0

TL;DR: I want to protect certain apps so they can only be pushed from a certain branch, and preferably disallow forced pushes (similar to GitLab's protected branches: https://about.gitlab.com/2014/11/26/keeping-your-code-protected/).

This is to prevent accidental pushes to my production application.

Note: as suggested in comments on the original report, this can also be achieved with a good CI/CD policy, but some protection at the Workflow level would also be a nice addition.

Long story.

Let's say I have an app, and I have different environments (test, preprod, production, various short-lived test branches, ...) for this app. Each of these environments is linked to a branch:

test -> master
preprod -> stable
production -> (also) stable
feature-x -> feature-x
you get the idea...

To deploy a new version I just do git push deis-production stable and all is well.
Another developer/ops person takes an old version of stable, makes an emergency commit plus deploy, and does git push deis-production stable --force as well. (In theory they should have checked why they needed to force, but sometimes in the heat of the moment you don't think too clearly.)

Alternatively (and this does not require --force, so it is easier to do accidentally), one of the deployment people does git push deis-production master (either intending to deploy to test, or intending to deploy another branch).

If there were an option to say deis apps:protect branch-name, to only allow pushes to that application from that branch name and to reject --force pushes, this would prevent these errors. (A client-side stopgap is sketched below.)
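
Until something like that exists server-side, a client-side stopgap is a git pre-push hook that refuses to push anything but the allowed branch to the production remote (a sketch; remote and branch names are examples, and it does not catch forced pushes of the right branch):

#!/bin/sh
# .git/hooks/pre-push on the client: only allow the stable branch to go to deis-production
remote="$1"
if [ "$remote" = "deis-production" ]; then
  while read local_ref local_sha remote_ref remote_sha; do
    if [ "$local_ref" != "refs/heads/stable" ]; then
      echo "Refusing to push $local_ref to $remote; only stable may be deployed to production." >&2
      exit 1
    fi
  done
fi
exit 0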

Blatant copy paste from deis/deis#4460

Copied from original issue: deis/builder#463

Builder fails to cleanup object store slugs

From @markkwasnick on March 10, 2017 16:38

Destroying an app via the Deis CLI does not clean up the build slugs when the app name fails to match the regex.

Builder Version: 2.7.1
Storage Driver: Swift
Steps to reproduce:
deis destroy --app=my-app
kubectl logs BUILDER --namespace=deis

The only log line written is for clearing the cache:
Cleaner deleting cache home/my-app/cache for app my-app

The log lines from the code at #445 are missing:

log.Info("Cleaner deleting slug %s for app %s", obj, app)

my-app had over 100 builds.

Issue is potentially the regex matching:

gitRegex, err := regexp.Compile("^/" + fmt.Sprintf(gitreceive.GitKeyPattern, app, ".{8}") + "$")

Copied from original issue: deis/builder#494

Deis builder should do best effort to keep git checkout

From @deis-admin on January 19, 2017 23:52

From @nathansamson on March 22, 2015 15:0

I understand that deis builder will throw away all git checkouts it has on a reboot / node movement.

The problem is that when pushing a branch with a fair amount of history, this might take a while every time.

I think deis builder should make a best effort (better than currently) to keep a quick checkout of branches.
I am thinking of a new container / volume that holds all repos, from which they can be fetched.
Of course, in case that is down or the repo isn't available there, it must keep the current behaviour and just re-push the branch from scratch.

This will be even more important when deis builder is scalable (I hope there are plans for this) in large clusters.

Heroku displays "fetching branch" when pushing to an existing app, so I assume they push branches to a central (but distributed) location, fetch them locally within the datacenter (which should be fast), and then accept new pushes from there on.
I hope we can achieve something similar...

Copied from original issue: deis/deis#3355

Copied from original issue: deis/builder#471
