
kube-airflow's Introduction

kube-airflow (Celery Executor)


kube-airflow provides a set of tools to run Airflow in a Kubernetes cluster. This is useful when you want:

  • Easy high availability of the Airflow scheduler
  • Easy parallelism of task executions
    • The common way to scale out workers in Airflow is to utilize Celery. However, managing an H/A backend database and Celery workers just for parallelising task executions sounds like a hassle. This is where Kubernetes comes into play, again: if you already have a K8S cluster, just let K8S manage them for you.
    • If you have ever considered avoiding Celery for task parallelism, K8S can still help you for a while. Just keep using LocalExecutor instead of CeleryExecutor and delegate the actual tasks to Kubernetes by calling e.g. kubectl run --restart=Never ... from your tasks. This works until the concurrent kubectl run executions (up to the concurrency implied by the scheduler's max_threads and LocalExecutor's parallelism; see this SO question for gotchas) consume all the resources a single airflow-scheduler pod provides, which should take a pretty long time.
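As an illustration of the delegation pattern above, here is a minimal, hypothetical helper that builds the `kubectl run --restart=Never` invocation a task could shell out to. The function name, image, and flags are illustrative assumptions, not part of this project:

```python
import shlex

def kubectl_run_cmd(pod_name, image, command):
    """Build a one-off `kubectl run --restart=Never` invocation as an argv list.

    Hypothetical sketch: an Airflow task under LocalExecutor could pass this
    to subprocess.run() to delegate the real work to a Kubernetes pod.
    """
    return [
        "kubectl", "run", pod_name,
        "--restart=Never",   # run a single Pod, not a Deployment
        "--image", image,
        "--",                # everything after this is the container command
    ] + shlex.split(command)

cmd = kubectl_run_cmd("etl-task", "python:3.11-slim", "python -c 'print(42)'")
print(" ".join(cmd))
```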

This repository contains:

  • Dockerfile(.template) of airflow for the Docker images published to the public Docker Hub Registry.
  • airflow.all.yaml for manually creating the Kubernetes services and deployments that run Airflow on Kubernetes
  • Helm Chart in ./airflow for deployments using Helm

Information

Manual Installation

Create all the deployments and services to run Airflow on Kubernetes:

kubectl create -f airflow.all.yaml

It will create deployments for:

  • postgres
  • rabbitmq
  • airflow-webserver
  • airflow-scheduler
  • airflow-flower
  • airflow-worker

and services for:

  • postgres
  • rabbitmq
  • airflow-webserver
  • airflow-flower

Helm Deployment (recommended)

Ensure Helm is installed and initialized; you may need to set TILLER_NAMESPACE as an environment variable.

Deploy to Kubernetes using:

make helm-install NAMESPACE=yournamespace HELM_VALUES=/path/to/your/values.yaml

Helm ingresses

The chart provides ingress configuration, allowing you to customize the installation by adapting config.yaml to your setup.

Prefix

This Helm chart allows using a "prefix" string that is prepended to every Kubernetes resource name. This allows instantiating several independent Airflow clusters in the same namespace.
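A minimal sketch of the idea (not the chart's actual template logic): prepending the prefix keeps resource names from colliding across clusters sharing a namespace.

```python
def prefixed(prefix, name):
    """Prepend the cluster prefix to a Kubernetes resource name."""
    return f"{prefix}{name}" if prefix else name

# Two Airflow clusters in the same namespace no longer collide:
print(prefixed("team-a-", "airflow-scheduler"))
print(prefixed("team-b-", "airflow-scheduler"))
```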

Note:

Do NOT use characters such as " (double quote), ' (single quote), / (slash) or \ (backslash)
in your passwords or prefix, and keep the prefix as short as possible.

DAGs deployment: embedded DAGs or git-sync

This chart basically provides two ways of deploying DAGs in your Airflow installation:

  • embedded DAGs
  • Git-Sync

This Helm chart provides support for Persistent Storage but not for a sidecar git-sync pod. If you are willing to contribute, do not hesitate to open a Pull Request!

Using embedded Git-Sync

Git-sync is the easiest way to automatically update your DAGs. It simply checks a given branch of a Git project periodically (by default, every minute) and checks out the new version when available. The scheduler and workers see changes almost in real time. There is no need for any other tool or a complex rolling-update procedure.

While it is extremely cool to see a DAG appear in Airflow 60 seconds after it is merged into the project, you should be aware of some limitations Airflow has with dynamic DAG updates:

If the scheduler reloads a dag in the middle of a dagrun then the dagrun will actually start
using the new version of the dag in the middle of execution.

This is a known issue with Airflow, and it means it is generally unsafe to use a git-sync-like solution with Airflow without:

  • using explicit locking, i.e. never pulling down a new DAG while a DAG run is in progress
  • making DAGs immutable: never modify a DAG, always create a new one

Also keep in mind that git-sync may not scale at all in production if you have lots of DAGs. The best way to deploy your DAGs is to build a new Docker image containing all the DAGs and their dependencies. To do so, fork this project.

Airflow.cfg as ConfigMap

By default, we use the configuration file airflow.cfg hardcoded in the Docker image. This file uses a custom templating system to apply some environment variables and feed them to the Airflow processes (basically, it is just some sed).
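The sed-based templating described above can be approximated in a few lines of Python. This is a rough sketch of the mechanism, not the image's actual script; only the {{ POSTGRES_CREDS }} placeholder name comes from this README:

```python
import re

def render(template, env):
    """Substitute {{ NAME }} placeholders with values from `env`,
    mimicking the image's sed-based templating of airflow.cfg."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: env[m.group(1)], template)

cfg = "sql_alchemy_conn = postgresql+psycopg2://{{ POSTGRES_CREDS }}@postgres/airflow"
print(render(cfg, {"POSTGRES_CREDS": "airflow:airflow"}))
```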

If you want to use your own airflow.cfg file without having to rebuild a complete Docker image, for example when testing new settings, you can define this file in a Kubernetes ConfigMap:

  • define your own values file and feed it to Helm with helm install -f myvalue.yaml
  • enable the node airflow.airflow_cfg.enable: true
  • store the content of your airflow.cfg in the node airflow.airflow_cfg.data; see airflow/myvalue-with-airflowcfg-configmap.yaml for an example of how to set it in your config.yaml file
  • note that it is important to keep the custom templating placeholders in your airflow.cfg (e.g. {{ POSTGRES_CREDS }}), or at least keep them aligned with the configuration applied in your Kubernetes cluster
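Based on the keys named above (airflow.airflow_cfg.enable and airflow.airflow_cfg.data), a values file might look roughly like the following; the exact structure is an assumption here, so treat airflow/myvalue-with-airflowcfg-configmap.yaml in the repository as the authoritative example:

```yaml
airflow:
  airflow_cfg:
    enable: true
    data: |
      [core]
      airflow_home = /usr/local/airflow
      # keep the custom templating placeholder intact:
      sql_alchemy_conn = postgresql+psycopg2://{{ POSTGRES_CREDS }}@postgres/airflow
```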

Worker StatefulSet

As you can see, Celery workers use a StatefulSet instead of a Deployment. The StatefulSet freezes each worker's DNS name via a Kubernetes Headless Service, allowing the webserver to request logs from each worker individually. This requires exposing a port (8793) and ensuring each pod's DNS name is reachable from the webserver pod, which is what the StatefulSet is for.
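As an illustration of the mechanism (not the chart's actual manifest, whose names and labels may differ), a headless Service that gives each worker pod a stable DNS name and exposes the log port could look like:

```yaml
# Headless Service: clusterIP is None, so DNS resolves to individual pod
# names (e.g. worker-0.worker.<namespace>.svc), letting the webserver
# fetch logs from each worker on port 8793.
apiVersion: v1
kind: Service
metadata:
  name: worker
spec:
  clusterIP: None
  selector:
    app: airflow-worker
  ports:
    - name: worker-logs
      port: 8793
```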

Embedded DAGs

If you want more control over the way you deploy your DAGs, you can use embedded DAGs, where the DAGs are baked into the Docker image deployed as scheduler and workers.

Be aware that this requires heavier tooling than git-sync, especially if you use CI/CD:

  • your CI/CD should be able to build a new Docker image each time your DAGs are updated
  • your CI/CD should be able to control the deployment of this new image to your Kubernetes cluster

Example of procedure:

  • Fork this project
  • Place your DAGs inside the dags folder of this project; update requirements-dags.txt to install new dependencies if needed (see below)
  • Add build script connected to your CI that will build the new docker image
  • Deploy on your Kubernetes cluster

You can avoid forking this project by:

  • keep a Git project dedicated to storing only your DAGs plus a dedicated requirements.txt

  • gate any change to DAGs in your CI (unit tests, pip install -r requirements-dags.txt, ...)

  • have your CI/CD build a new Docker image after each successful merge, using:

    DAG_PATH=$PWD
    cd /path/to/kube-airflow
    make EMBEDDED_DAGS_LOCATION=$DAG_PATH
    
  • trigger the deployment of this new image on your Kubernetes infrastructure

Python dependencies

If you want to add specific Python dependencies for use in your DAGs, simply declare them in the requirements-dags.txt file. They will be automatically installed inside the container during the build, so you can use these libraries directly in your DAGs.

To use another file, call:

make REQUIREMENTS_TXT_LOCATION=/path/to/your/dags/requirements.txt

Please note this requires you to set up the same tooling environment in your CI/CD as when using embedded DAGs.

Helm configuration customization

Helm allows overriding the configuration to adapt it to your environment. For instance, you probably want to specify your own ingress configuration.

Build Docker image

git clone this repository and then just run:

make build

Run with minikube

You can browse the Airflow dashboard by running:

minikube start
make browse-web

and the Flower dashboard by running:

make browse-flower

If you want to use the Ad Hoc Query feature, make sure you've configured the connections: go to Admin -> Connections, edit "mysql_default", and set these values (equivalent to the values in config/airflow.cfg):

  • Host : mysql
  • Schema : airflow
  • Login : airflow
  • Password : airflow

Check Airflow Documentation

Run the test "tutorial"

    kubectl exec web-<id> --namespace airflow-dev -- airflow backfill tutorial -s 2015-05-01 -e 2015-06-01

Scale the number of workers

For now, update the value of the replicas field of the deployment you want to scale, and then:

    make apply

Wanna help?

Fork, improve and PR. ;-)

kube-airflow's People

Contributors

chribsen, gsemet, j450h1, jpds, mumoshu, sedward, vyper


kube-airflow's Issues

What is the difference between make helm-install and just using helm-install

I'm super new to kubernetes and helm, so please excuse my stupid question.

The guide under Helm Deployment mentions:

Deploy to Kubernetes using:

make helm-install NAMESPACE=yournamespace HELM_VALUES=/path/to/you/values.yaml

As I'm learning helm, I can only find helm-install and can't figure out what make is supposed to do.

Sorry again for the stupid question, just trying to figure this out (google search doesn't really bring anything up on this)
Thank You!

Error when accessing logfile

I'm getting this error when trying to view the logfile in the webui for a dag run.

From my webui container, I was not able to resolve the pod name - worker-6d569f7867-l5gvg

Thanks

*** Log file does not exist: /usr/local/airflow/logs/example_kubernetes_operator/task/2018-08-24T15:56:35.159271+00:00/1.log
*** Fetching from: http://worker-6d569f7867-l5gvg:8793/log/example_kubernetes_operator/task/2018-08-24T15:56:35.159271+00:00/1.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='worker-6d569f7867-l5gvg', port=8793): Max retries exceeded with url: /log/example_kubernetes_operator/task/2018-08-24T15:56:35.159271+00:00/1.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1bbb4f0f50>: Failed to establish a new connection: [Errno -2] Name or service not known',))

Python dependency management

Let's imagine we have DAGs that do some math or connect to a database. These DAGs will probably use a Python dependency, which must be fetched and installed in the DAGs' Python environment.
If the scheduler + workers + web UI do not have these dependencies, they will crash (I think the scheduler can handle that).

My PR #16 uses git-sync to automatically synchronize the DAGs with a git repository. But it will only work until a new Python dependency is needed.

So there are two solutions:

  • force users to build their own Docker image, and hope they do the rolling update properly to deploy to their workers.
  • allow installation on the workers (from a requirements.txt defined in the DAG folders) in entrypoint.sh.

Could not translate host name "postgres" to address: Name or service not known

I tried to deploy kube-airflow on a Kubernetes cluster running Ubuntu 14.04. When I run the example airflow command "airflow backfill tutorial -s ---", I get the following error:

File "/usr/local/lib/python2.7/dist-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "postgres" to address: Name or service not known.

I checked the pods and the postgres-2280134284-e2j1r pod seems to be running. Not sure what's wrong; any help is much appreciated.

Note that I did not install Kubernetes using minikube; I installed Kubernetes manually on Ubuntu using this guide

Access to web too slow

I ran the airflow.all.yaml in my Kubernetes cluster, but when I try to access the web interface, it takes too long. Some minutes after, I can navigate, but it is still too slow.

The cluster is full of resources (CPU/memory). No errors were found in any pod, just these warnings in 'web':

[2017-10-30 13:51:17,936] [53] {models.py:167} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2017-10-30 13:51:18 +0000] [32] [INFO] Handling signal: ttou
[2017-10-30 13:51:47 +0000] [49] [INFO] Worker exiting (pid: 49)
[2017-10-30 13:52:17 +0000] [32] [INFO] Handling signal: ttin
[2017-10-30 13:52:17 +0000] [54] [INFO] Booting worker with pid: 54
/usr/local/lib/python2.7/dist-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
  .format(x=modname), ExtDeprecationWarning
[2017-10-30 13:52:17,995] [54] {models.py:167} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2017-10-30 13:52:18 +0000] [32] [INFO] Handling signal: ttou
[2017-10-30 13:52:18 +0000] [50] [INFO] Worker exiting (pid: 50)
[2017-10-30 13:52:49 +0000] [32] [INFO] Handling signal: ttin
[2017-10-30 13:52:49 +0000] [55] [INFO] Booting worker with pid: 55
/usr/local/lib/python2.7/dist-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
  .format(x=modname), ExtDeprecationWarning

Do you have any idea? Has anyone tried to run it on a real Kubernetes cluster?

How do you handle logs [Question]

Hey there,

It looks like the logs keep growing.
Do you know if Airflow trims or rotates logs at some point?
How do you handle that in k8s?

Best,

Nolan

Run the test "tutorial"

Here is the corrected command instead:
kubectl exec -it web- --namespace airflow-dev -- airflow backfill tutorial -s 2019-02-18 -e 2019-02-19

`make build` fails due to `ImportError: No module named pyparsing`

$ make build
cd build/1.7.1.3-1.3.0-0.9 && docker build -t mumoshu/kube-airflow:1.7.1.3-1.3.0-0.9 . && docker tag mumoshu/kube-airflow:1.7.1.3-1.3.0-0.9 mumoshu/kube-airflow:1.7.1.3-1.3.0
Sending build context to Docker daemon 15.87 kB
Step 1 : FROM debian:jessie
jessie: Pulling from library/debian
6d827a3ef358: Pull complete
Digest: sha256:72f784399fd2719b4cb4e16ef8e369a39dc67f53d978cd3e2e7bf4e502c7b793
Status: Downloaded newer image for debian:jessie
 ---> 8cedef9d7368
Step 2 : MAINTAINER Yusuke KUOKA <[email protected]>
 ---> Running in b16ffe29a556
 ---> 93201adc6386
Removing intermediate container b16ffe29a556
Step 3 : ENV DEBIAN_FRONTEND noninteractive
 ---> Running in 0a955029afce
 ---> f5244957cde2
Removing intermediate container 0a955029afce
Step 4 : ENV TERM linux
 ---> Running in aeccd2226091
 ---> b278f2851daa
Removing intermediate container aeccd2226091
Step 5 : ARG AIRFLOW_VERSION=1.7.1.3
 ---> Running in 3273ab7cb801
 ---> f1002e43f9b2
Removing intermediate container 3273ab7cb801
Step 6 : ENV AIRFLOW_HOME /usr/local/airflow
 ---> Running in 966cd1c04a95
 ---> a7278110b5bc
Removing intermediate container 966cd1c04a95
Step 7 : ENV LANGUAGE en_US.UTF-8
 ---> Running in 3e4b5bb29d70
 ---> 80be5bbeead8
Removing intermediate container 3e4b5bb29d70
Step 8 : ENV LANG en_US.UTF-8
 ---> Running in 66f0778f6876
 ---> 7b178c712418
Removing intermediate container 66f0778f6876
Step 9 : ENV LC_ALL en_US.UTF-8
 ---> Running in baaf6028dff7
 ---> 3a30f7826114
Removing intermediate container baaf6028dff7
Step 10 : ENV LC_CTYPE en_US.UTF-8
 ---> Running in 4c30572d4d0d
 ---> a1b7a9f26326
Removing intermediate container 4c30572d4d0d
Step 11 : ENV LC_MESSAGES en_US.UTF-8
 ---> Running in 96ee2f1a086d
 ---> d013b74eb358
Removing intermediate container 96ee2f1a086d
Step 12 : ENV LC_ALL en_US.UTF-8
 ---> Running in 4aea419e1e95
 ---> 290dfd57dae3
Removing intermediate container 4aea419e1e95
Step 13 : RUN set -ex     && buildDeps='         python-pip         python-dev         libkrb5-dev         libsasl2-dev         libssl-dev         libffi-dev         build-essential         libblas-dev         liblapack-dev     '     && echo "deb http://http.debian.net/debian jessie-backports main" >/etc/apt/sources.list.d/backports.list     && apt-get update -yqq     && apt-get install -yqq --no-install-recommends         $buildDeps         apt-utils         curl         netcat         locales     && apt-get install -yqq -t jessie-backports python-requests libpq-dev     && sed -i 's/^# en_US.UTF-8 UTF-8$/en_US.UTF-8 UTF-8/g' /etc/locale.gen     && locale-gen     && update-locale LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8     && useradd -ms /bin/bash -d ${AIRFLOW_HOME} airflow     && pip install pytz==2015.7     && pip install cryptography     && pip install pyOpenSSL     && pip install ndg-httpsclient     && pip install pyasn1     && pip install psycopg2     && pip install airflow[celery,postgresql,hive]==$AIRFLOW_VERSION     && apt-get remove --purge -yqq $buildDeps libpq-dev     && apt-get clean     && rm -rf         /var/lib/apt/lists/*         /tmp/*         /var/tmp/*         /usr/share/man         /usr/share/doc         /usr/share/doc-base
 ---> Running in 781401ab87a8
+ buildDeps=         python-pip         python-dev         libkrb5-dev         libsasl2-dev         libssl-dev         libffi-dev         build-essential         libblas-dev         liblapack-dev
+ echo deb http://http.debian.net/debian jessie-backports main
+ apt-get update -yqq
+ apt-get install -yqq --no-install-recommends python-pip python-dev libkrb5-dev libsasl2-dev libssl-dev libffi-dev build-essential libblas-dev liblapack-dev apt-utils curl netcat locales
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = "en_US.UTF-8",
	LC_ALL = "en_US.UTF-8",
	LC_CTYPE = "en_US.UTF-8",
	LC_MESSAGES = "en_US.UTF-8",
	LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package libapt-inst1.5:amd64.
(Reading database ... 7562 files and directories currently installed.)
Preparing to unpack .../libapt-inst1.5_1.0.9.8.4_amd64.deb ...
Unpacking libapt-inst1.5:amd64 (1.0.9.8.4) ...
Selecting previously unselected package libgdbm3:amd64.
Preparing to unpack .../libgdbm3_1.8.3-13.1_amd64.deb ...
Unpacking libgdbm3:amd64 (1.8.3-13.1) ...
Selecting previously unselected package libssl1.0.0:amd64.
Preparing to unpack .../libssl1.0.0_1.0.1t-1+deb8u6_amd64.deb ...
Unpacking libssl1.0.0:amd64 (1.0.1t-1+deb8u6) ...
Selecting previously unselected package libgmp10:amd64.
Preparing to unpack .../libgmp10_2%3a6.0.0+dfsg-6_amd64.deb ...
Unpacking libgmp10:amd64 (2:6.0.0+dfsg-6) ...
Selecting previously unselected package libnettle4:amd64.
Preparing to unpack .../libnettle4_2.7.1-5+deb8u2_amd64.deb ...
Unpacking libnettle4:amd64 (2.7.1-5+deb8u2) ...
Selecting previously unselected package libhogweed2:amd64.
Preparing to unpack .../libhogweed2_2.7.1-5+deb8u2_amd64.deb ...
Unpacking libhogweed2:amd64 (2.7.1-5+deb8u2) ...
Selecting previously unselected package libffi6:amd64.
Preparing to unpack .../libffi6_3.1-2+b2_amd64.deb ...
Unpacking libffi6:amd64 (3.1-2+b2) ...
Selecting previously unselected package libp11-kit0:amd64.
Preparing to unpack .../libp11-kit0_0.20.7-1_amd64.deb ...
Unpacking libp11-kit0:amd64 (0.20.7-1) ...
Selecting previously unselected package libtasn1-6:amd64.
Preparing to unpack .../libtasn1-6_4.2-3+deb8u2_amd64.deb ...
Unpacking libtasn1-6:amd64 (4.2-3+deb8u2) ...
Selecting previously unselected package libgnutls-deb0-28:amd64.
Preparing to unpack .../libgnutls-deb0-28_3.3.8-6+deb8u4_amd64.deb ...
Unpacking libgnutls-deb0-28:amd64 (3.3.8-6+deb8u4) ...
Selecting previously unselected package libkeyutils1:amd64.
Preparing to unpack .../libkeyutils1_1.5.9-5+b1_amd64.deb ...
Unpacking libkeyutils1:amd64 (1.5.9-5+b1) ...
Selecting previously unselected package libkrb5support0:amd64.
Preparing to unpack .../libkrb5support0_1.12.1+dfsg-19+deb8u2_amd64.deb ...
Unpacking libkrb5support0:amd64 (1.12.1+dfsg-19+deb8u2) ...
Selecting previously unselected package libk5crypto3:amd64.
Preparing to unpack .../libk5crypto3_1.12.1+dfsg-19+deb8u2_amd64.deb ...
Unpacking libk5crypto3:amd64 (1.12.1+dfsg-19+deb8u2) ...
Selecting previously unselected package libkrb5-3:amd64.
Preparing to unpack .../libkrb5-3_1.12.1+dfsg-19+deb8u2_amd64.deb ...
Unpacking libkrb5-3:amd64 (1.12.1+dfsg-19+deb8u2) ...
Selecting previously unselected package libgssapi-krb5-2:amd64.
Preparing to unpack .../libgssapi-krb5-2_1.12.1+dfsg-19+deb8u2_amd64.deb ...
Unpacking libgssapi-krb5-2:amd64 (1.12.1+dfsg-19+deb8u2) ...
Selecting previously unselected package libgssrpc4:amd64.
Preparing to unpack .../libgssrpc4_1.12.1+dfsg-19+deb8u2_amd64.deb ...
Unpacking libgssrpc4:amd64 (1.12.1+dfsg-19+deb8u2) ...
Selecting previously unselected package libidn11:amd64.
Preparing to unpack .../libidn11_1.29-1+deb8u2_amd64.deb ...
Unpacking libidn11:amd64 (1.29-1+deb8u2) ...
Selecting previously unselected package libkadm5clnt-mit9:amd64.
Preparing to unpack .../libkadm5clnt-mit9_1.12.1+dfsg-19+deb8u2_amd64.deb ...
Unpacking libkadm5clnt-mit9:amd64 (1.12.1+dfsg-19+deb8u2) ...
Selecting previously unselected package libkdb5-7:amd64.
Preparing to unpack .../libkdb5-7_1.12.1+dfsg-19+deb8u2_amd64.deb ...
Unpacking libkdb5-7:amd64 (1.12.1+dfsg-19+deb8u2) ...
Selecting previously unselected package libkadm5srv-mit9:amd64.
Preparing to unpack .../libkadm5srv-mit9_1.12.1+dfsg-19+deb8u2_amd64.deb ...
Unpacking libkadm5srv-mit9:amd64 (1.12.1+dfsg-19+deb8u2) ...
Selecting previously unselected package libsasl2-modules-db:amd64.
Preparing to unpack .../libsasl2-modules-db_2.1.26.dfsg1-13+deb8u1_amd64.deb ...
Unpacking libsasl2-modules-db:amd64 (2.1.26.dfsg1-13+deb8u1) ...
Selecting previously unselected package libsasl2-2:amd64.
Preparing to unpack .../libsasl2-2_2.1.26.dfsg1-13+deb8u1_amd64.deb ...
Unpacking libsasl2-2:amd64 (2.1.26.dfsg1-13+deb8u1) ...
Selecting previously unselected package libldap-2.4-2:amd64.
Preparing to unpack .../libldap-2.4-2_2.4.40+dfsg-1+deb8u2_amd64.deb ...
Unpacking libldap-2.4-2:amd64 (2.4.40+dfsg-1+deb8u2) ...
Selecting previously unselected package libsqlite3-0:amd64.
Preparing to unpack .../libsqlite3-0_3.8.7.1-1+deb8u2_amd64.deb ...
Unpacking libsqlite3-0:amd64 (3.8.7.1-1+deb8u2) ...
Selecting previously unselected package perl-modules.
Preparing to unpack .../perl-modules_5.20.2-3+deb8u6_all.deb ...
Unpacking perl-modules (5.20.2-3+deb8u6) ...
Selecting previously unselected package perl.
Preparing to unpack .../perl_5.20.2-3+deb8u6_amd64.deb ...
Unpacking perl (5.20.2-3+deb8u6) ...
Selecting previously unselected package libpython2.7-minimal:amd64.
Preparing to unpack .../libpython2.7-minimal_2.7.9-2+deb8u1_amd64.deb ...
Unpacking libpython2.7-minimal:amd64 (2.7.9-2+deb8u1) ...
Selecting previously unselected package python2.7-minimal.
Preparing to unpack .../python2.7-minimal_2.7.9-2+deb8u1_amd64.deb ...
Unpacking python2.7-minimal (2.7.9-2+deb8u1) ...
Selecting previously unselected package python-minimal.
Preparing to unpack .../python-minimal_2.7.9-1_amd64.deb ...
Unpacking python-minimal (2.7.9-1) ...
Selecting previously unselected package mime-support.
Preparing to unpack .../mime-support_3.58_all.deb ...
Unpacking mime-support (3.58) ...
Selecting previously unselected package libexpat1:amd64.
Preparing to unpack .../libexpat1_2.1.0-6+deb8u3_amd64.deb ...
Unpacking libexpat1:amd64 (2.1.0-6+deb8u3) ...
Selecting previously unselected package libpython2.7-stdlib:amd64.
Preparing to unpack .../libpython2.7-stdlib_2.7.9-2+deb8u1_amd64.deb ...
Unpacking libpython2.7-stdlib:amd64 (2.7.9-2+deb8u1) ...
Selecting previously unselected package python2.7.
Preparing to unpack .../python2.7_2.7.9-2+deb8u1_amd64.deb ...
Unpacking python2.7 (2.7.9-2+deb8u1) ...
Selecting previously unselected package libpython-stdlib:amd64.
Preparing to unpack .../libpython-stdlib_2.7.9-1_amd64.deb ...
Unpacking libpython-stdlib:amd64 (2.7.9-1) ...
Setting up libpython2.7-minimal:amd64 (2.7.9-2+deb8u1) ...
Setting up python2.7-minimal (2.7.9-2+deb8u1) ...
Setting up python-minimal (2.7.9-1) ...
Selecting previously unselected package python.
(Reading database ... 9799 files and directories currently installed.)
Preparing to unpack .../python_2.7.9-1_amd64.deb ...
Unpacking python (2.7.9-1) ...
Selecting previously unselected package libasan1:amd64.
Preparing to unpack .../libasan1_4.9.2-10_amd64.deb ...
Unpacking libasan1:amd64 (4.9.2-10) ...
Selecting previously unselected package libatomic1:amd64.
Preparing to unpack .../libatomic1_4.9.2-10_amd64.deb ...
Unpacking libatomic1:amd64 (4.9.2-10) ...
Selecting previously unselected package libcilkrts5:amd64.
Preparing to unpack .../libcilkrts5_4.9.2-10_amd64.deb ...
Unpacking libcilkrts5:amd64 (4.9.2-10) ...
Selecting previously unselected package libisl10:amd64.
Preparing to unpack .../libisl10_0.12.2-2_amd64.deb ...
Unpacking libisl10:amd64 (0.12.2-2) ...
Selecting previously unselected package libcloog-isl4:amd64.
Preparing to unpack .../libcloog-isl4_0.18.2-1+b2_amd64.deb ...
Unpacking libcloog-isl4:amd64 (0.18.2-1+b2) ...
Selecting previously unselected package librtmp1:amd64.
Preparing to unpack .../librtmp1_2.4+20150115.gita107cef-1_amd64.deb ...
Unpacking librtmp1:amd64 (2.4+20150115.gita107cef-1) ...
Selecting previously unselected package libssh2-1:amd64.
Preparing to unpack .../libssh2-1_1.4.3-4.1+deb8u1_amd64.deb ...
Unpacking libssh2-1:amd64 (1.4.3-4.1+deb8u1) ...
Selecting previously unselected package libcurl3:amd64.
Preparing to unpack .../libcurl3_7.38.0-4+deb8u5_amd64.deb ...
Unpacking libcurl3:amd64 (7.38.0-4+deb8u5) ...
Selecting previously unselected package libquadmath0:amd64.
Preparing to unpack .../libquadmath0_4.9.2-10_amd64.deb ...
Unpacking libquadmath0:amd64 (4.9.2-10) ...
Selecting previously unselected package libgfortran3:amd64.
Preparing to unpack .../libgfortran3_4.9.2-10_amd64.deb ...
Unpacking libgfortran3:amd64 (4.9.2-10) ...
Selecting previously unselected package libgomp1:amd64.
Preparing to unpack .../libgomp1_4.9.2-10_amd64.deb ...
Unpacking libgomp1:amd64 (4.9.2-10) ...
Selecting previously unselected package libitm1:amd64.
Preparing to unpack .../libitm1_4.9.2-10_amd64.deb ...
Unpacking libitm1:amd64 (4.9.2-10) ...
Selecting previously unselected package liblsan0:amd64.
Preparing to unpack .../liblsan0_4.9.2-10_amd64.deb ...
Unpacking liblsan0:amd64 (4.9.2-10) ...
Selecting previously unselected package libmpfr4:amd64.
Preparing to unpack .../libmpfr4_3.1.2-2_amd64.deb ...
Unpacking libmpfr4:amd64 (3.1.2-2) ...
Selecting previously unselected package libpython2.7:amd64.
Preparing to unpack .../libpython2.7_2.7.9-2+deb8u1_amd64.deb ...
Unpacking libpython2.7:amd64 (2.7.9-2+deb8u1) ...
Selecting previously unselected package libc-dev-bin.
Preparing to unpack .../libc-dev-bin_2.19-18+deb8u7_amd64.deb ...
Unpacking libc-dev-bin (2.19-18+deb8u7) ...
Selecting previously unselected package linux-libc-dev:amd64.
Preparing to unpack .../linux-libc-dev_3.16.39-1+deb8u2_amd64.deb ...
Unpacking linux-libc-dev:amd64 (3.16.39-1+deb8u2) ...
Selecting previously unselected package libc6-dev:amd64.
Preparing to unpack .../libc6-dev_2.19-18+deb8u7_amd64.deb ...
Unpacking libc6-dev:amd64 (2.19-18+deb8u7) ...
Selecting previously unselected package libexpat1-dev:amd64.
Preparing to unpack .../libexpat1-dev_2.1.0-6+deb8u3_amd64.deb ...
Unpacking libexpat1-dev:amd64 (2.1.0-6+deb8u3) ...
Selecting previously unselected package libpython2.7-dev:amd64.
Preparing to unpack .../libpython2.7-dev_2.7.9-2+deb8u1_amd64.deb ...
Unpacking libpython2.7-dev:amd64 (2.7.9-2+deb8u1) ...
Selecting previously unselected package libtsan0:amd64.
Preparing to unpack .../libtsan0_4.9.2-10_amd64.deb ...
Unpacking libtsan0:amd64 (4.9.2-10) ...
Selecting previously unselected package libubsan0:amd64.
Preparing to unpack .../libubsan0_4.9.2-10_amd64.deb ...
Unpacking libubsan0:amd64 (4.9.2-10) ...
Selecting previously unselected package libmpc3:amd64.
Preparing to unpack .../libmpc3_1.0.2-1_amd64.deb ...
Unpacking libmpc3:amd64 (1.0.2-1) ...
Selecting previously unselected package apt-utils.
Preparing to unpack .../apt-utils_1.0.9.8.4_amd64.deb ...
Unpacking apt-utils (1.0.9.8.4) ...
Selecting previously unselected package netcat-traditional.
Preparing to unpack .../netcat-traditional_1.10-41_amd64.deb ...
Unpacking netcat-traditional (1.10-41) ...
Selecting previously unselected package bzip2.
Preparing to unpack .../bzip2_1.0.6-7+b3_amd64.deb ...
Unpacking bzip2 (1.0.6-7+b3) ...
Selecting previously unselected package locales.
Preparing to unpack .../locales_2.19-18+deb8u7_all.deb ...
Unpacking locales (2.19-18+deb8u7) ...
Selecting previously unselected package patch.
Preparing to unpack .../patch_2.7.5-1_amd64.deb ...
Unpacking patch (2.7.5-1) ...
Selecting previously unselected package xz-utils.
Preparing to unpack .../xz-utils_5.1.1alpha+20120614-2+b3_amd64.deb ...
Unpacking xz-utils (5.1.1alpha+20120614-2+b3) ...
Selecting previously unselected package binutils.
Preparing to unpack .../binutils_2.25-5_amd64.deb ...
Unpacking binutils (2.25-5) ...
Selecting previously unselected package cpp-4.9.
Preparing to unpack .../cpp-4.9_4.9.2-10_amd64.deb ...
Unpacking cpp-4.9 (4.9.2-10) ...
Selecting previously unselected package cpp.
Preparing to unpack .../cpp_4%3a4.9.2-2_amd64.deb ...
Unpacking cpp (4:4.9.2-2) ...
Selecting previously unselected package libgcc-4.9-dev:amd64.
Preparing to unpack .../libgcc-4.9-dev_4.9.2-10_amd64.deb ...
Unpacking libgcc-4.9-dev:amd64 (4.9.2-10) ...
Selecting previously unselected package gcc-4.9.
Preparing to unpack .../gcc-4.9_4.9.2-10_amd64.deb ...
Unpacking gcc-4.9 (4.9.2-10) ...
Selecting previously unselected package gcc.
Preparing to unpack .../gcc_4%3a4.9.2-2_amd64.deb ...
Unpacking gcc (4:4.9.2-2) ...
Selecting previously unselected package libstdc++-4.9-dev:amd64.
Preparing to unpack .../libstdc++-4.9-dev_4.9.2-10_amd64.deb ...
Unpacking libstdc++-4.9-dev:amd64 (4.9.2-10) ...
Selecting previously unselected package g++-4.9.
Preparing to unpack .../g++-4.9_4.9.2-10_amd64.deb ...
Unpacking g++-4.9 (4.9.2-10) ...
Selecting previously unselected package g++.
Preparing to unpack .../g++_4%3a4.9.2-2_amd64.deb ...
Unpacking g++ (4:4.9.2-2) ...
Selecting previously unselected package make.
Preparing to unpack .../make_4.0-8.1_amd64.deb ...
Unpacking make (4.0-8.1) ...
Selecting previously unselected package libtimedate-perl.
Preparing to unpack .../libtimedate-perl_2.3000-2_all.deb ...
Unpacking libtimedate-perl (2.3000-2) ...
Selecting previously unselected package libdpkg-perl.
Preparing to unpack .../libdpkg-perl_1.17.27_all.deb ...
Unpacking libdpkg-perl (1.17.27) ...
Selecting previously unselected package dpkg-dev.
Preparing to unpack .../dpkg-dev_1.17.27_all.deb ...
Unpacking dpkg-dev (1.17.27) ...
Selecting previously unselected package build-essential.
Preparing to unpack .../build-essential_11.7_amd64.deb ...
Unpacking build-essential (11.7) ...
Selecting previously unselected package openssl.
Preparing to unpack .../openssl_1.0.1t-1+deb8u6_amd64.deb ...
Unpacking openssl (1.0.1t-1+deb8u6) ...
Selecting previously unselected package ca-certificates.
Preparing to unpack .../ca-certificates_20141019+deb8u2_all.deb ...
Unpacking ca-certificates (20141019+deb8u2) ...
Selecting previously unselected package curl.
Preparing to unpack .../curl_7.38.0-4+deb8u5_amd64.deb ...
Unpacking curl (7.38.0-4+deb8u5) ...
Selecting previously unselected package comerr-dev.
Preparing to unpack .../comerr-dev_2.1-1.42.12-2+b1_amd64.deb ...
Unpacking comerr-dev (2.1-1.42.12-2+b1) ...
Selecting previously unselected package krb5-multidev.
Preparing to unpack .../krb5-multidev_1.12.1+dfsg-19+deb8u2_amd64.deb ...
Unpacking krb5-multidev (1.12.1+dfsg-19+deb8u2) ...
Selecting previously unselected package libblas-common.
Preparing to unpack .../libblas-common_1.2.20110419-10_amd64.deb ...
Unpacking libblas-common (1.2.20110419-10) ...
Selecting previously unselected package libblas3.
Preparing to unpack .../libblas3_1.2.20110419-10_amd64.deb ...
Unpacking libblas3 (1.2.20110419-10) ...
Selecting previously unselected package libblas-dev.
Preparing to unpack .../libblas-dev_1.2.20110419-10_amd64.deb ...
Unpacking libblas-dev (1.2.20110419-10) ...
Selecting previously unselected package libffi-dev:amd64.
Preparing to unpack .../libffi-dev_3.1-2+b2_amd64.deb ...
Unpacking libffi-dev:amd64 (3.1-2+b2) ...
Selecting previously unselected package liblapack3.
Preparing to unpack .../liblapack3_3.5.0-4_amd64.deb ...
Unpacking liblapack3 (3.5.0-4) ...
Selecting previously unselected package liblapack-dev.
Preparing to unpack .../liblapack-dev_3.5.0-4_amd64.deb ...
Unpacking liblapack-dev (3.5.0-4) ...
Selecting previously unselected package libpython-dev:amd64.
Preparing to unpack .../libpython-dev_2.7.9-1_amd64.deb ...
Unpacking libpython-dev:amd64 (2.7.9-1) ...
Selecting previously unselected package libsasl2-dev.
Preparing to unpack .../libsasl2-dev_2.1.26.dfsg1-13+deb8u1_amd64.deb ...
Unpacking libsasl2-dev (2.1.26.dfsg1-13+deb8u1) ...
Selecting previously unselected package zlib1g-dev:amd64.
Preparing to unpack .../zlib1g-dev_1%3a1.2.8.dfsg-2+b1_amd64.deb ...
Unpacking zlib1g-dev:amd64 (1:1.2.8.dfsg-2+b1) ...
Selecting previously unselected package libssl-dev:amd64.
Preparing to unpack .../libssl-dev_1.0.1t-1+deb8u6_amd64.deb ...
Unpacking libssl-dev:amd64 (1.0.1t-1+deb8u6) ...
Selecting previously unselected package python-pkg-resources.
Preparing to unpack .../python-pkg-resources_5.5.1-1_all.deb ...
Unpacking python-pkg-resources (5.5.1-1) ...
Selecting previously unselected package python-chardet.
Preparing to unpack .../python-chardet_2.3.0-1_all.deb ...
Unpacking python-chardet (2.3.0-1) ...
Selecting previously unselected package python-colorama.
Preparing to unpack .../python-colorama_0.3.2-1_all.deb ...
Unpacking python-colorama (0.3.2-1) ...
Selecting previously unselected package python2.7-dev.
Preparing to unpack .../python2.7-dev_2.7.9-2+deb8u1_amd64.deb ...
Unpacking python2.7-dev (2.7.9-2+deb8u1) ...
Selecting previously unselected package python-dev.
Preparing to unpack .../python-dev_2.7.9-1_amd64.deb ...
Unpacking python-dev (2.7.9-1) ...
Selecting previously unselected package python-distlib.
Preparing to unpack .../python-distlib_0.1.9-1_all.deb ...
Unpacking python-distlib (0.1.9-1) ...
Selecting previously unselected package python-six.
Preparing to unpack .../python-six_1.8.0-1_all.deb ...
Unpacking python-six (1.8.0-1) ...
Selecting previously unselected package python-html5lib.
Preparing to unpack .../python-html5lib_0.999-3_all.deb ...
Unpacking python-html5lib (0.999-3) ...
Selecting previously unselected package python-urllib3.
Preparing to unpack .../python-urllib3_1.9.1-3_all.deb ...
Unpacking python-urllib3 (1.9.1-3) ...
Selecting previously unselected package python-requests.
Preparing to unpack .../python-requests_2.4.3-6_all.deb ...
Unpacking python-requests (2.4.3-6) ...
Selecting previously unselected package python-setuptools.
Preparing to unpack .../python-setuptools_5.5.1-1_all.deb ...
Unpacking python-setuptools (5.5.1-1) ...
Selecting previously unselected package python-pip.
Preparing to unpack .../python-pip_1.5.6-5_all.deb ...
Unpacking python-pip (1.5.6-5) ...
Selecting previously unselected package libkrb5-dev.
Preparing to unpack .../libkrb5-dev_1.12.1+dfsg-19+deb8u2_amd64.deb ...
Unpacking libkrb5-dev (1.12.1+dfsg-19+deb8u2) ...
Selecting previously unselected package netcat.
Preparing to unpack .../netcat_1.10-41_all.deb ...
Unpacking netcat (1.10-41) ...
Setting up libapt-inst1.5:amd64 (1.0.9.8.4) ...
Setting up libgdbm3:amd64 (1.8.3-13.1) ...
Setting up libssl1.0.0:amd64 (1.0.1t-1+deb8u6) ...
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
Setting up libgmp10:amd64 (2:6.0.0+dfsg-6) ...
Setting up libnettle4:amd64 (2.7.1-5+deb8u2) ...
Setting up libhogweed2:amd64 (2.7.1-5+deb8u2) ...
Setting up libffi6:amd64 (3.1-2+b2) ...
Setting up libp11-kit0:amd64 (0.20.7-1) ...
Setting up libtasn1-6:amd64 (4.2-3+deb8u2) ...
Setting up libgnutls-deb0-28:amd64 (3.3.8-6+deb8u4) ...
Setting up libkeyutils1:amd64 (1.5.9-5+b1) ...
Setting up libkrb5support0:amd64 (1.12.1+dfsg-19+deb8u2) ...
Setting up libk5crypto3:amd64 (1.12.1+dfsg-19+deb8u2) ...
Setting up libkrb5-3:amd64 (1.12.1+dfsg-19+deb8u2) ...
Setting up libgssapi-krb5-2:amd64 (1.12.1+dfsg-19+deb8u2) ...
Setting up libgssrpc4:amd64 (1.12.1+dfsg-19+deb8u2) ...
Setting up libidn11:amd64 (1.29-1+deb8u2) ...
Setting up libkadm5clnt-mit9:amd64 (1.12.1+dfsg-19+deb8u2) ...
Setting up libkdb5-7:amd64 (1.12.1+dfsg-19+deb8u2) ...
Setting up libkadm5srv-mit9:amd64 (1.12.1+dfsg-19+deb8u2) ...
Setting up libsasl2-modules-db:amd64 (2.1.26.dfsg1-13+deb8u1) ...
Setting up libsasl2-2:amd64 (2.1.26.dfsg1-13+deb8u1) ...
Setting up libldap-2.4-2:amd64 (2.4.40+dfsg-1+deb8u2) ...
Setting up libsqlite3-0:amd64 (3.8.7.1-1+deb8u2) ...
Setting up perl-modules (5.20.2-3+deb8u6) ...
Setting up perl (5.20.2-3+deb8u6) ...
update-alternatives: using /usr/bin/prename to provide /usr/bin/rename (rename) in auto mode
Setting up mime-support (3.58) ...
Setting up libexpat1:amd64 (2.1.0-6+deb8u3) ...
Setting up libpython2.7-stdlib:amd64 (2.7.9-2+deb8u1) ...
Setting up python2.7 (2.7.9-2+deb8u1) ...
Setting up libpython-stdlib:amd64 (2.7.9-1) ...
Setting up python (2.7.9-1) ...
Setting up libasan1:amd64 (4.9.2-10) ...
Setting up libatomic1:amd64 (4.9.2-10) ...
Setting up libcilkrts5:amd64 (4.9.2-10) ...
Setting up libisl10:amd64 (0.12.2-2) ...
Setting up libcloog-isl4:amd64 (0.18.2-1+b2) ...
Setting up librtmp1:amd64 (2.4+20150115.gita107cef-1) ...
Setting up libssh2-1:amd64 (1.4.3-4.1+deb8u1) ...
Setting up libcurl3:amd64 (7.38.0-4+deb8u5) ...
Setting up libquadmath0:amd64 (4.9.2-10) ...
Setting up libgfortran3:amd64 (4.9.2-10) ...
Setting up libgomp1:amd64 (4.9.2-10) ...
Setting up libitm1:amd64 (4.9.2-10) ...
Setting up liblsan0:amd64 (4.9.2-10) ...
Setting up libmpfr4:amd64 (3.1.2-2) ...
Setting up libpython2.7:amd64 (2.7.9-2+deb8u1) ...
Setting up libc-dev-bin (2.19-18+deb8u7) ...
Setting up linux-libc-dev:amd64 (3.16.39-1+deb8u2) ...
Setting up libc6-dev:amd64 (2.19-18+deb8u7) ...
Setting up libexpat1-dev:amd64 (2.1.0-6+deb8u3) ...
Setting up libpython2.7-dev:amd64 (2.7.9-2+deb8u1) ...
Setting up libtsan0:amd64 (4.9.2-10) ...
Setting up libubsan0:amd64 (4.9.2-10) ...
Setting up libmpc3:amd64 (1.0.2-1) ...
Setting up apt-utils (1.0.9.8.4) ...
Setting up netcat-traditional (1.10-41) ...
update-alternatives: using /bin/nc.traditional to provide /bin/nc (nc) in auto mode
Setting up bzip2 (1.0.6-7+b3) ...
Setting up locales (2.19-18+deb8u7) ...
Generating locales (this might take a while)...
Generation complete.
Setting up patch (2.7.5-1) ...
Setting up xz-utils (5.1.1alpha+20120614-2+b3) ...
update-alternatives: using /usr/bin/xz to provide /usr/bin/lzma (lzma) in auto mode
Setting up binutils (2.25-5) ...
Setting up cpp-4.9 (4.9.2-10) ...
Setting up cpp (4:4.9.2-2) ...
Setting up libgcc-4.9-dev:amd64 (4.9.2-10) ...
Setting up gcc-4.9 (4.9.2-10) ...
Setting up gcc (4:4.9.2-2) ...
Setting up libstdc++-4.9-dev:amd64 (4.9.2-10) ...
Setting up g++-4.9 (4.9.2-10) ...
Setting up g++ (4:4.9.2-2) ...
update-alternatives: using /usr/bin/g++ to provide /usr/bin/c++ (c++) in auto mode
Setting up make (4.0-8.1) ...
Setting up libtimedate-perl (2.3000-2) ...
Setting up libdpkg-perl (1.17.27) ...
Setting up dpkg-dev (1.17.27) ...
Setting up build-essential (11.7) ...
Setting up openssl (1.0.1t-1+deb8u6) ...
Setting up ca-certificates (20141019+deb8u2) ...
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
/usr/sbin/update-ca-certificates: [--verbose] [--fresh]
Setting up curl (7.38.0-4+deb8u5) ...
Setting up comerr-dev (2.1-1.42.12-2+b1) ...
Setting up krb5-multidev (1.12.1+dfsg-19+deb8u2) ...
Setting up libblas-common (1.2.20110419-10) ...
Setting up libblas3 (1.2.20110419-10) ...
update-alternatives: using /usr/lib/libblas/libblas.so.3 to provide /usr/lib/libblas.so.3 (libblas.so.3) in auto mode
Setting up libblas-dev (1.2.20110419-10) ...
update-alternatives: using /usr/lib/libblas/libblas.so to provide /usr/lib/libblas.so (libblas.so) in auto mode
Setting up libffi-dev:amd64 (3.1-2+b2) ...
Setting up liblapack3 (3.5.0-4) ...
update-alternatives: using /usr/lib/lapack/liblapack.so.3 to provide /usr/lib/liblapack.so.3 (liblapack.so.3) in auto mode
Setting up liblapack-dev (3.5.0-4) ...
update-alternatives: using /usr/lib/lapack/liblapack.so to provide /usr/lib/liblapack.so (liblapack.so) in auto mode
Setting up libpython-dev:amd64 (2.7.9-1) ...
Setting up libsasl2-dev (2.1.26.dfsg1-13+deb8u1) ...
Setting up zlib1g-dev:amd64 (1:1.2.8.dfsg-2+b1) ...
Setting up libssl-dev:amd64 (1.0.1t-1+deb8u6) ...
Setting up python-pkg-resources (5.5.1-1) ...
Setting up python-chardet (2.3.0-1) ...
Setting up python-colorama (0.3.2-1) ...
Setting up python2.7-dev (2.7.9-2+deb8u1) ...
Setting up python-dev (2.7.9-1) ...
Setting up python-distlib (0.1.9-1) ...
Setting up python-six (1.8.0-1) ...
Setting up python-html5lib (0.999-3) ...
Setting up python-urllib3 (1.9.1-3) ...
Setting up python-requests (2.4.3-6) ...
Setting up python-setuptools (5.5.1-1) ...
Setting up python-pip (1.5.6-5) ...
Setting up libkrb5-dev (1.12.1+dfsg-19+deb8u2) ...
Setting up netcat (1.10-41) ...
Processing triggers for libc-bin (2.19-18+deb8u7) ...
Processing triggers for ca-certificates (20141019+deb8u2) ...
Updating certificates in /etc/ssl/certs... 174 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d....done.
+ apt-get install -yqq -t jessie-backports python-requests libpq-dev
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = "en_US.UTF-8",
	LC_ALL = "en_US.UTF-8",
	LC_CTYPE = "en_US.UTF-8",
	LC_MESSAGES = "en_US.UTF-8",
	LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
Selecting previously unselected package libpq5:amd64.
(Reading database ... 15070 files and directories currently installed.)
Preparing to unpack .../libpq5_9.6.2-1~bpo8+1_amd64.deb ...
Unpacking libpq5:amd64 (9.6.2-1~bpo8+1) ...
Selecting previously unselected package libpq-dev.
Preparing to unpack .../libpq-dev_9.6.2-1~bpo8+1_amd64.deb ...
Unpacking libpq-dev (9.6.2-1~bpo8+1) ...
Preparing to unpack .../python-urllib3_1.16-1~bpo8+1_all.deb ...
Unpacking python-urllib3 (1.16-1~bpo8+1) over (1.9.1-3) ...
Preparing to unpack .../python-requests_2.11.1-1~bpo8+1_all.deb ...
Unpacking python-requests (2.11.1-1~bpo8+1) over (2.4.3-6) ...
Setting up libpq5:amd64 (9.6.2-1~bpo8+1) ...
Setting up libpq-dev (9.6.2-1~bpo8+1) ...
Setting up python-urllib3 (1.16-1~bpo8+1) ...
Setting up python-requests (2.11.1-1~bpo8+1) ...
Processing triggers for libc-bin (2.19-18+deb8u7) ...
+ sed -i s/^# en_US.UTF-8 UTF-8$/en_US.UTF-8 UTF-8/g /etc/locale.gen
+ locale-gen
Generating locales (this might take a while)...
  en_US.UTF-8... done
Generation complete.
+ update-locale LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8
+ useradd -ms /bin/bash -d /usr/local/airflow airflow
+ pip install pytz==2015.7
Downloading/unpacking pytz==2015.7
Installing collected packages: pytz
Successfully installed pytz
Cleaning up...
+ pip install cryptography
Downloading/unpacking cryptography
  Running setup.py (path:/tmp/pip-build-tjhsvH/cryptography/setup.py) egg_info for package cryptography

    no previously-included directories found matching 'docs/_build'
    warning: no previously-included files matching '*' found under directory 'vectors'
Downloading/unpacking idna>=2.1 (from cryptography)
Downloading/unpacking asn1crypto>=0.21.0 (from cryptography)
Downloading/unpacking packaging (from cryptography)
  Downloading packaging-16.8-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): six>=1.4.1 in /usr/lib/python2.7/dist-packages (from cryptography)
Downloading/unpacking setuptools>=11.3 (from cryptography)
Downloading/unpacking enum34 (from cryptography)
  Downloading enum34-1.1.6-py2-none-any.whl
Downloading/unpacking ipaddress (from cryptography)
  Downloading ipaddress-1.0.18-py2-none-any.whl
Downloading/unpacking cffi>=1.4.1 (from cryptography)
  Running setup.py (path:/tmp/pip-build-tjhsvH/cffi/setup.py) egg_info for package cffi

Downloading/unpacking pyparsing (from packaging->cryptography)
Downloading/unpacking appdirs>=1.4.0 (from setuptools>=11.3->cryptography)
  Downloading appdirs-1.4.3-py2.py3-none-any.whl
Downloading/unpacking pycparser (from cffi>=1.4.1->cryptography)
  Running setup.py (path:/tmp/pip-build-tjhsvH/pycparser/setup.py) egg_info for package pycparser

    warning: no previously-included files matching 'yacctab.*' found under directory 'tests'
    warning: no previously-included files matching 'lextab.*' found under directory 'tests'
    warning: no previously-included files matching 'yacctab.*' found under directory 'examples'
    warning: no previously-included files matching 'lextab.*' found under directory 'examples'
Installing collected packages: cryptography, idna, asn1crypto, packaging, setuptools, enum34, ipaddress, cffi, pyparsing, appdirs, pycparser
  Running setup.py install for cryptography

    Installed /tmp/pip-build-tjhsvH/cryptography/cffi-1.10.0-py2.7-linux-x86_64.egg
    Searching for pycparser
    Reading https://pypi.python.org/simple/pycparser/
    Best match: pycparser 2.17
    Downloading https://pypi.python.org/packages/be/64/1bb257ffb17d01f4a38d7ce686809a736837ad4371bcc5c42ba7a715c3ac/pycparser-2.17.tar.gz#md5=ca98dcb50bc1276f230118f6af5a40c7
    Processing pycparser-2.17.tar.gz
    Writing /tmp/easy_install-nRNo9R/pycparser-2.17/setup.cfg
    Running pycparser-2.17/setup.py -q bdist_egg --dist-dir /tmp/easy_install-nRNo9R/pycparser-2.17/egg-dist-tmp-9PV91_
    warning: no previously-included files matching 'yacctab.*' found under directory 'tests'
    warning: no previously-included files matching 'lextab.*' found under directory 'tests'
    warning: no previously-included files matching 'yacctab.*' found under directory 'examples'
    warning: no previously-included files matching 'lextab.*' found under directory 'examples'
    zip_safe flag not set; analyzing archive contents...
    pycparser.ply.ygen: module references __file__
    pycparser.ply.yacc: module references __file__
    pycparser.ply.yacc: module MAY be using inspect.getsourcefile
    pycparser.ply.yacc: module MAY be using inspect.stack
    pycparser.ply.lex: module references __file__
    pycparser.ply.lex: module MAY be using inspect.getsourcefile

    Installed /tmp/pip-build-tjhsvH/cryptography/pycparser-2.17-py2.7.egg

    no previously-included directories found matching 'docs/_build'
    warning: no previously-included files matching '*' found under directory 'vectors'
    generating cffi module 'build/temp.linux-x86_64-2.7/_padding.c'
    generating cffi module 'build/temp.linux-x86_64-2.7/_constant_time.c'
    generating cffi module 'build/temp.linux-x86_64-2.7/_openssl.c'
    building '_openssl' extension
    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/include/python2.7 -c build/temp.linux-x86_64-2.7/_openssl.c -o build/temp.linux-x86_64-2.7/build/temp.linux-x86_64-2.7/_openssl.o
    x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/build/temp.linux-x86_64-2.7/_openssl.o -lssl -lcrypto -o build/lib.linux-x86_64-2.7/cryptography/hazmat/bindings/_openssl.so
    building '_constant_time' extension
    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/include/python2.7 -c build/temp.linux-x86_64-2.7/_constant_time.c -o build/temp.linux-x86_64-2.7/build/temp.linux-x86_64-2.7/_constant_time.o
    x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/build/temp.linux-x86_64-2.7/_constant_time.o -o build/lib.linux-x86_64-2.7/cryptography/hazmat/bindings/_constant_time.so
    building '_padding' extension
    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/include/python2.7 -c build/temp.linux-x86_64-2.7/_padding.c -o build/temp.linux-x86_64-2.7/build/temp.linux-x86_64-2.7/_padding.o
    x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/build/temp.linux-x86_64-2.7/_padding.o -o build/lib.linux-x86_64-2.7/cryptography/hazmat/bindings/_padding.so
  Found existing installation: setuptools 5.5.1
    Not uninstalling setuptools at /usr/lib/python2.7/dist-packages, owned by OS
  Running setup.py install for cffi
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/usr/local/lib/python2.7/dist-packages/setuptools/__init__.py", line 12, in <module>
        import setuptools.version
      File "/usr/local/lib/python2.7/dist-packages/setuptools/version.py", line 1, in <module>
        import pkg_resources
      File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 72, in <module>
        import packaging.requirements
      File "/usr/local/lib/python2.7/dist-packages/packaging/requirements.py", line 9, in <module>
        from pyparsing import stringStart, stringEnd, originalTextFor, ParseException
    ImportError: No module named pyparsing
    Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-tjhsvH/cffi/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-cF3ol1-record/install-record.txt --single-version-externally-managed --compile:
    Traceback (most recent call last):

  File "<string>", line 1, in <module>

  File "/usr/local/lib/python2.7/dist-packages/setuptools/__init__.py", line 12, in <module>

    import setuptools.version

  File "/usr/local/lib/python2.7/dist-packages/setuptools/version.py", line 1, in <module>

    import pkg_resources

  File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 72, in <module>

    import packaging.requirements

  File "/usr/local/lib/python2.7/dist-packages/packaging/requirements.py", line 9, in <module>

    from pyparsing import stringStart, stringEnd, originalTextFor, ParseException

ImportError: No module named pyparsing

----------------------------------------
Cleaning up...
Command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-tjhsvH/cffi/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-cF3ol1-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip-build-tjhsvH/cffi
Storing debug log for failure in /root/.pip/pip.log
The command '/bin/sh -c set -ex     && buildDeps='         python-pip         python-dev         libkrb5-dev         libsasl2-dev         libssl-dev         libffi-dev         build-essential         libblas-dev         liblapack-dev     '     && echo "deb http://http.debian.net/debian jessie-backports main" >/etc/apt/sources.list.d/backports.list     && apt-get update -yqq     && apt-get install -yqq --no-install-recommends         $buildDeps         apt-utils         curl         netcat         locales     && apt-get install -yqq -t jessie-backports python-requests libpq-dev     && sed -i 's/^# en_US.UTF-8 UTF-8$/en_US.UTF-8 UTF-8/g' /etc/locale.gen     && locale-gen     && update-locale LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8     && useradd -ms /bin/bash -d ${AIRFLOW_HOME} airflow     && pip install pytz==2015.7     && pip install cryptography     && pip install pyOpenSSL     && pip install ndg-httpsclient     && pip install pyasn1     && pip install psycopg2     && pip install airflow[celery,postgresql,hive]==$AIRFLOW_VERSION     && apt-get remove --purge -yqq $buildDeps libpq-dev     && apt-get clean     && rm -rf         /var/lib/apt/lists/*         /tmp/*         /var/tmp/*         /usr/share/man         /usr/share/doc         /usr/share/doc-base' returned a non-zero code: 1
make: *** [build] Error 1
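The root cause visible in the log is the `ImportError: No module named pyparsing` raised while building cffi: pip 1.5.6 installs the collected packages in an order that upgrades setuptools before setuptools' own new dependency, pyparsing, is in place, so the very next `setup.py install` step imports the upgraded setuptools and fails. A minimal workaround sketch for the Dockerfile (this is an assumption based on the traceback, not the repository's official fix; the exact package set may differ for other pip versions):

```dockerfile
# Hypothetical workaround sketch: install setuptools' runtime
# dependencies before anything that pulls in a setuptools upgrade,
# so later `setup.py install` steps can import the new setuptools.
RUN pip install --upgrade pip setuptools \
 && pip install pyparsing appdirs packaging six \
 && pip install cryptography
```

Alternatively, a newer pip (which installs dependencies before dependents) avoids the ordering problem entirely.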

RabbitMQ not reachable

Hi,

I am facing the issue below while running the YAML file: the web and scheduler pods are unable to reach RabbitMQ, even though the RabbitMQ pod itself is running fine.
Is there a workaround for this issue?

Logs of web & scheduler

Sun Aug  2 09:28:52 UTC 2020 - waiting for RabbitMQ... 1/10
Sun Aug  2 09:29:17 UTC 2020 - waiting for RabbitMQ... 2/10
Sun Aug  2 09:29:43 UTC 2020 - waiting for RabbitMQ... 3/10
Sun Aug  2 09:30:08 UTC 2020 - waiting for RabbitMQ... 4/10
Sun Aug  2 09:30:34 UTC 2020 - waiting for RabbitMQ... 5/10
Sun Aug  2 09:30:59 UTC 2020 - waiting for RabbitMQ... 6/10
Sun Aug  2 09:31:25 UTC 2020 - waiting for RabbitMQ... 7/10
Sun Aug  2 09:31:50 UTC 2020 - waiting for RabbitMQ... 8/10
Sun Aug  2 09:32:16 UTC 2020 - waiting for RabbitMQ... 9/10
Sun Aug  2 09:32:41 UTC 2020 - rabbitmq still not reachable, giving up

Logs of rabbitmq

2020-08-02 09:28:36.615 [debug] <0.283.0> Lager installed handler error_logger_lager_h into error_logger
2020-08-02 09:28:36.636 [debug] <0.322.0> Lager installed handler lager_forwarder_backend into rabbit_log_upgrade_lager_event
2020-08-02 09:28:36.636 [debug] <0.289.0> Lager installed handler lager_forwarder_backend into rabbit_log_lager_event
2020-08-02 09:28:36.636 [debug] <0.286.0> Lager installed handler lager_forwarder_backend into error_logger_lager_event
2020-08-02 09:28:36.637 [debug] <0.295.0> Lager installed handler lager_forwarder_backend into rabbit_log_connection_lager_event
2020-08-02 09:28:36.637 [debug] <0.298.0> Lager installed handler lager_forwarder_backend into rabbit_log_feature_flags_lager_event
2020-08-02 09:28:36.637 [debug] <0.292.0> Lager installed handler lager_forwarder_backend into rabbit_log_channel_lager_event
2020-08-02 09:28:36.637 [debug] <0.301.0> Lager installed handler lager_forwarder_backend into rabbit_log_federation_lager_event
2020-08-02 09:28:36.637 [debug] <0.304.0> Lager installed handler lager_forwarder_backend into rabbit_log_ldap_lager_event
2020-08-02 09:28:36.637 [debug] <0.307.0> Lager installed handler lager_forwarder_backend into rabbit_log_mirroring_lager_event
2020-08-02 09:28:36.637 [debug] <0.310.0> Lager installed handler lager_forwarder_backend into rabbit_log_prelaunch_lager_event
2020-08-02 09:28:36.637 [debug] <0.316.0> Lager installed handler lager_forwarder_backend into rabbit_log_ra_lager_event
2020-08-02 09:28:36.637 [debug] <0.313.0> Lager installed handler lager_forwarder_backend into rabbit_log_queue_lager_event
2020-08-02 09:28:36.637 [debug] <0.319.0> Lager installed handler lager_forwarder_backend into rabbit_log_shovel_lager_event
2020-08-02 09:28:36.663 [info] <0.268.0> HiPE disabled: no modules were natively recompiled.
2020-08-02 09:28:36.787 [info] <0.268.0>
 Starting RabbitMQ 3.8.5 on Erlang 23.0.3
 Copyright (c) 2007-2020 VMware, Inc. or its affiliates.
 Licensed under the MPL 1.1. Website: https://rabbitmq.com

  ##  ##      RabbitMQ 3.8.5
  ##  ##
  ##########  Copyright (c) 2007-2020 VMware, Inc. or its affiliates.
  ######  ##
  ##########  Licensed under the MPL 1.1. Website: https://rabbitmq.com

  Doc guides: https://rabbitmq.com/documentation.html
  Support:    https://rabbitmq.com/contact.html
  Tutorials:  https://rabbitmq.com/getstarted.html
  Monitoring: https://rabbitmq.com/monitoring.html

  Logs: <stdout>

  Config file(s): /etc/rabbitmq/rabbitmq.conf

  Starting broker...2020-08-02 09:28:36.789 [info] <0.268.0>
 node           : rabbit@rabbitmq-56b7bb7886-v99cn
 home dir       : /var/lib/rabbitmq
 config file(s) : /etc/rabbitmq/rabbitmq.conf
 cookie hash    : YOqc07y6MH3LQm5Xg39w1w==
 log(s)         : <stdout>
 database dir   : /var/lib/rabbitmq/mnesia/rabbit@rabbitmq-56b7bb7886-v99cn
2020-08-02 09:28:37.115 [debug] <0.279.0> Lager installed handler lager_backend_throttle into lager_event
2020-08-02 09:28:38.650 [info] <0.268.0> Running boot step pre_boot defined by app rabbit
2020-08-02 09:28:38.650 [info] <0.268.0> Running boot step rabbit_core_metrics defined by app rabbit
2020-08-02 09:28:38.651 [info] <0.268.0> Running boot step rabbit_alarm defined by app rabbit
2020-08-02 09:28:38.655 [info] <0.350.0> Memory high watermark set to 6359 MiB (6668245401 bytes) of 15898 MiB (16670613504 bytes) total
2020-08-02 09:28:38.660 [info] <0.352.0> Enabling free disk space monitoring
2020-08-02 09:28:38.660 [info] <0.352.0> Disk free limit set to 50MB
2020-08-02 09:28:38.664 [info] <0.268.0> Running boot step code_server_cache defined by app rabbit
2020-08-02 09:28:38.664 [info] <0.268.0> Running boot step file_handle_cache defined by app rabbit
2020-08-02 09:28:38.664 [info] <0.355.0> Limiting to approx 1048479 file handles (943629 sockets)
2020-08-02 09:28:38.664 [info] <0.356.0> FHC read buffering:  OFF
2020-08-02 09:28:38.664 [info] <0.356.0> FHC write buffering: ON
2020-08-02 09:28:38.665 [info] <0.268.0> Running boot step worker_pool defined by app rabbit
2020-08-02 09:28:38.665 [info] <0.342.0> Will use 4 processes for default worker pool
2020-08-02 09:28:38.665 [info] <0.342.0> Starting worker pool 'worker_pool' with 4 processes in it
2020-08-02 09:28:38.666 [info] <0.268.0> Running boot step database defined by app rabbit
2020-08-02 09:28:38.666 [info] <0.268.0> Node database directory at /var/lib/rabbitmq/mnesia/rabbit@rabbitmq-56b7bb7886-v99cn is empty. Assuming we need to join an existing cluster or initialise from scratch...
2020-08-02 09:28:38.666 [info] <0.268.0> Configured peer discovery backend: rabbit_peer_discovery_classic_config
2020-08-02 09:28:38.666 [info] <0.268.0> Will try to lock with peer discovery backend rabbit_peer_discovery_classic_config
2020-08-02 09:28:38.666 [info] <0.268.0> Peer discovery backend does not support locking, falling back to randomized delay
2020-08-02 09:28:38.666 [info] <0.268.0> Peer discovery backend rabbit_peer_discovery_classic_config does not support registration, skipping randomized startup delay.
2020-08-02 09:28:38.666 [info] <0.268.0> All discovered existing cluster peers:
2020-08-02 09:28:38.666 [info] <0.268.0> Discovered no peer nodes to cluster with. Some discovery backends can filter nodes out based on a readiness criteria. Enabling debug logging might help troubleshoot.
2020-08-02 09:28:38.669 [info] <0.44.0> Application mnesia exited with reason: stopped
2020-08-02 09:28:38.756 [info] <0.268.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2020-08-02 09:28:38.757 [info] <0.268.0> Successfully synced tables from a peer
2020-08-02 09:28:38.779 [info] <0.268.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2020-08-02 09:28:38.779 [info] <0.268.0> Successfully synced tables from a peer
2020-08-02 09:28:38.779 [info] <0.268.0> Feature flag `implicit_default_bindings`: supported, attempt to enable...
2020-08-02 09:28:38.780 [info] <0.268.0> Feature flag `implicit_default_bindings`: mark as enabled=state_changing
2020-08-02 09:28:38.788 [info] <0.268.0> Feature flags: list of feature flags found:
2020-08-02 09:28:38.788 [info] <0.268.0> Feature flags:   [~] implicit_default_bindings
2020-08-02 09:28:38.788 [info] <0.268.0> Feature flags:   [ ] quorum_queue
2020-08-02 09:28:38.788 [info] <0.268.0> Feature flags:   [ ] virtual_host_metadata
2020-08-02 09:28:38.788 [info] <0.268.0> Feature flags: feature flag states written to disk: yes
2020-08-02 09:28:38.799 [info] <0.268.0> Waiting for Mnesia tables for 30000 ms, 0 retries left
2020-08-02 09:28:38.800 [info] <0.268.0> Successfully synced tables from a peer
2020-08-02 09:28:38.800 [info] <0.268.0> Feature flag `implicit_default_bindings`: mark as enabled=true
2020-08-02 09:28:38.810 [info] <0.268.0> Feature flags: list of feature flags found:
2020-08-02 09:28:38.810 [info] <0.268.0> Feature flags:   [x] implicit_default_bindings
2020-08-02 09:28:38.810 [info] <0.268.0> Feature flags:   [ ] quorum_queue
2020-08-02 09:28:38.810 [info] <0.268.0> Feature flags:   [ ] virtual_host_metadata
2020-08-02 09:28:38.810 [info] <0.268.0> Feature flags: feature flag states written to disk: yes
2020-08-02 09:28:38.822 [info] <0.268.0> Feature flag `quorum_queue`: supported, attempt to enable...
2020-08-02 09:28:38.823 [info] <0.268.0> Feature flag `quorum_queue`: mark as enabled=state_changing
2020-08-02 09:28:38.831 [info] <0.268.0> Feature flags: list of feature flags found:
2020-08-02 09:28:38.831 [info] <0.268.0> Feature flags:   [x] implicit_default_bindings
2020-08-02 09:28:38.831 [info] <0.268.0> Feature flags:   [~] quorum_queue
2020-08-02 09:28:38.831 [info] <0.268.0> Feature flags:   [ ] virtual_host_metadata
2020-08-02 09:28:38.831 [info] <0.268.0> Feature flags: feature flag states written to disk: yes
2020-08-02 09:28:38.843 [info] <0.268.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2020-08-02 09:28:38.843 [info] <0.268.0> Successfully synced tables from a peer
2020-08-02 09:28:38.843 [info] <0.268.0> Feature flag `quorum_queue`:   migrating Mnesia table rabbit_queue...
2020-08-02 09:28:38.855 [info] <0.268.0> Feature flag `quorum_queue`:   migrating Mnesia table rabbit_durable_queue...
2020-08-02 09:28:38.866 [info] <0.268.0> Feature flag `quorum_queue`:   Mnesia tables migration done
2020-08-02 09:28:38.866 [info] <0.268.0> Feature flag `quorum_queue`: mark as enabled=true
2020-08-02 09:28:38.875 [info] <0.268.0> Feature flags: list of feature flags found:
2020-08-02 09:28:38.875 [info] <0.268.0> Feature flags:   [x] implicit_default_bindings
2020-08-02 09:28:38.875 [info] <0.268.0> Feature flags:   [x] quorum_queue
2020-08-02 09:28:38.875 [info] <0.268.0> Feature flags:   [ ] virtual_host_metadata
2020-08-02 09:28:38.875 [info] <0.268.0> Feature flags: feature flag states written to disk: yes
2020-08-02 09:28:38.886 [info] <0.268.0> Feature flag `virtual_host_metadata`: supported, attempt to enable...
2020-08-02 09:28:38.887 [info] <0.268.0> Feature flag `virtual_host_metadata`: mark as enabled=state_changing
2020-08-02 09:28:38.895 [info] <0.268.0> Feature flags: list of feature flags found:
2020-08-02 09:28:38.895 [info] <0.268.0> Feature flags:   [x] implicit_default_bindings
2020-08-02 09:28:38.895 [info] <0.268.0> Feature flags:   [x] quorum_queue
2020-08-02 09:28:38.895 [info] <0.268.0> Feature flags:   [~] virtual_host_metadata
2020-08-02 09:28:38.895 [info] <0.268.0> Feature flags: feature flag states written to disk: yes
2020-08-02 09:28:38.908 [info] <0.268.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2020-08-02 09:28:38.908 [info] <0.268.0> Successfully synced tables from a peer
2020-08-02 09:28:38.919 [info] <0.268.0> Feature flag `virtual_host_metadata`: mark as enabled=true
2020-08-02 09:28:38.928 [info] <0.268.0> Feature flags: list of feature flags found:
2020-08-02 09:28:38.929 [info] <0.268.0> Feature flags:   [x] implicit_default_bindings
2020-08-02 09:28:38.929 [info] <0.268.0> Feature flags:   [x] quorum_queue
2020-08-02 09:28:38.929 [info] <0.268.0> Feature flags:   [x] virtual_host_metadata
2020-08-02 09:28:38.929 [info] <0.268.0> Feature flags: feature flag states written to disk: yes
2020-08-02 09:28:38.939 [info] <0.268.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2020-08-02 09:28:38.940 [info] <0.268.0> Successfully synced tables from a peer
2020-08-02 09:28:38.966 [info] <0.268.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2020-08-02 09:28:38.966 [info] <0.268.0> Successfully synced tables from a peer
2020-08-02 09:28:38.966 [info] <0.268.0> Peer discovery backend rabbit_peer_discovery_classic_config does not support registration, skipping registration.
2020-08-02 09:28:38.966 [info] <0.268.0> Running boot step database_sync defined by app rabbit
2020-08-02 09:28:38.966 [info] <0.268.0> Running boot step feature_flags defined by app rabbit
2020-08-02 09:28:38.967 [info] <0.268.0> Running boot step codec_correctness_check defined by app rabbit
2020-08-02 09:28:38.967 [info] <0.268.0> Running boot step external_infrastructure defined by app rabbit
2020-08-02 09:28:38.967 [info] <0.268.0> Running boot step rabbit_registry defined by app rabbit
2020-08-02 09:28:38.967 [info] <0.268.0> Running boot step rabbit_auth_mechanism_cr_demo defined by app rabbit
2020-08-02 09:28:38.967 [info] <0.268.0> Running boot step rabbit_queue_location_random defined by app rabbit
2020-08-02 09:28:38.967 [info] <0.268.0> Running boot step rabbit_event defined by app rabbit
2020-08-02 09:28:38.967 [info] <0.268.0> Running boot step rabbit_auth_mechanism_amqplain defined by app rabbit
2020-08-02 09:28:38.968 [info] <0.268.0> Running boot step rabbit_auth_mechanism_plain defined by app rabbit
2020-08-02 09:28:38.968 [info] <0.268.0> Running boot step rabbit_exchange_type_direct defined by app rabbit
2020-08-02 09:28:38.968 [info] <0.268.0> Running boot step rabbit_exchange_type_fanout defined by app rabbit
2020-08-02 09:28:38.968 [info] <0.268.0> Running boot step rabbit_exchange_type_headers defined by app rabbit
2020-08-02 09:28:38.968 [info] <0.268.0> Running boot step rabbit_exchange_type_topic defined by app rabbit
2020-08-02 09:28:38.968 [info] <0.268.0> Running boot step rabbit_mirror_queue_mode_all defined by app rabbit
2020-08-02 09:28:38.969 [info] <0.268.0> Running boot step rabbit_mirror_queue_mode_exactly defined by app rabbit
2020-08-02 09:28:38.969 [info] <0.268.0> Running boot step rabbit_mirror_queue_mode_nodes defined by app rabbit
2020-08-02 09:28:38.969 [info] <0.268.0> Running boot step rabbit_priority_queue defined by app rabbit
2020-08-02 09:28:38.969 [info] <0.268.0> Priority queues enabled, real BQ is rabbit_variable_queue
2020-08-02 09:28:38.969 [info] <0.268.0> Running boot step rabbit_queue_location_client_local defined by app rabbit
2020-08-02 09:28:38.970 [info] <0.268.0> Running boot step rabbit_queue_location_min_masters defined by app rabbit
2020-08-02 09:28:38.970 [info] <0.268.0> Running boot step kernel_ready defined by app rabbit
2020-08-02 09:28:38.970 [info] <0.268.0> Running boot step rabbit_sysmon_minder defined by app rabbit
2020-08-02 09:28:38.970 [info] <0.268.0> Running boot step rabbit_epmd_monitor defined by app rabbit
2020-08-02 09:28:38.971 [info] <0.582.0> epmd monitor knows us, inter-node communication (distribution) port: 25672
2020-08-02 09:28:38.971 [info] <0.268.0> Running boot step guid_generator defined by app rabbit
2020-08-02 09:28:38.974 [info] <0.268.0> Running boot step rabbit_node_monitor defined by app rabbit
2020-08-02 09:28:38.974 [info] <0.586.0> Starting rabbit_node_monitor
2020-08-02 09:28:38.974 [info] <0.268.0> Running boot step delegate_sup defined by app rabbit
2020-08-02 09:28:38.975 [info] <0.268.0> Running boot step rabbit_memory_monitor defined by app rabbit
2020-08-02 09:28:38.975 [info] <0.268.0> Running boot step core_initialized defined by app rabbit
2020-08-02 09:28:38.976 [info] <0.268.0> Running boot step upgrade_queues defined by app rabbit
2020-08-02 09:28:38.996 [info] <0.268.0> message_store upgrades: 1 to apply
2020-08-02 09:28:38.996 [info] <0.268.0> message_store upgrades: Applying rabbit_variable_queue:move_messages_to_vhost_store
2020-08-02 09:28:38.997 [info] <0.268.0> message_store upgrades: No durable queues found. Skipping message store migration
2020-08-02 09:28:38.997 [info] <0.268.0> message_store upgrades: Removing the old message store data
2020-08-02 09:28:38.997 [info] <0.268.0> message_store upgrades: All upgrades applied successfully
2020-08-02 09:28:39.020 [info] <0.268.0> Running boot step rabbit_connection_tracking defined by app rabbit
2020-08-02 09:28:39.020 [info] <0.268.0> Running boot step rabbit_connection_tracking_handler defined by app rabbit
2020-08-02 09:28:39.020 [info] <0.268.0> Running boot step rabbit_exchange_parameters defined by app rabbit
2020-08-02 09:28:39.021 [info] <0.268.0> Running boot step rabbit_mirror_queue_misc defined by app rabbit
2020-08-02 09:28:39.021 [info] <0.268.0> Running boot step rabbit_policies defined by app rabbit
2020-08-02 09:28:39.023 [info] <0.268.0> Running boot step rabbit_policy defined by app rabbit
2020-08-02 09:28:39.023 [info] <0.268.0> Running boot step rabbit_queue_location_validator defined by app rabbit
2020-08-02 09:28:39.023 [info] <0.268.0> Running boot step rabbit_quorum_memory_manager defined by app rabbit
2020-08-02 09:28:39.023 [info] <0.268.0> Running boot step rabbit_vhost_limit defined by app rabbit
2020-08-02 09:28:39.023 [info] <0.268.0> Running boot step recovery defined by app rabbit
2020-08-02 09:28:39.024 [info] <0.268.0> Running boot step definition_import_worker_pool defined by app rabbit
2020-08-02 09:28:39.024 [info] <0.342.0> Starting worker pool 'definition_import_pool' with 4 processes in it
2020-08-02 09:28:39.025 [info] <0.268.0> Running boot step load_core_definitions defined by app rabbit
2020-08-02 09:28:39.025 [info] <0.268.0> Running boot step empty_db_check defined by app rabbit
2020-08-02 09:28:39.025 [info] <0.268.0> Adding vhost 'airflow' (description: 'Default virtual host')
2020-08-02 09:28:39.042 [info] <0.629.0> Making sure data directory '/var/lib/rabbitmq/mnesia/rabbit@rabbitmq-56b7bb7886-v99cn/msg_stores/vhosts/27758KT3R9N8CPIU1D26LR8E6' for vhost 'airflow' exists
2020-08-02 09:28:39.046 [info] <0.629.0> Starting message stores for vhost 'airflow'
2020-08-02 09:28:39.046 [info] <0.633.0> Message store "27758KT3R9N8CPIU1D26LR8E6/msg_store_transient": using rabbit_msg_store_ets_index to provide index
2020-08-02 09:28:39.047 [info] <0.629.0> Started message store of type transient for vhost 'airflow'
2020-08-02 09:28:39.048 [info] <0.637.0> Message store "27758KT3R9N8CPIU1D26LR8E6/msg_store_persistent": using rabbit_msg_store_ets_index to provide index
2020-08-02 09:28:39.048 [warning] <0.637.0> Message store "27758KT3R9N8CPIU1D26LR8E6/msg_store_persistent": rebuilding indices from scratch
2020-08-02 09:28:39.049 [info] <0.629.0> Started message store of type persistent for vhost 'airflow'
2020-08-02 09:28:39.053 [info] <0.268.0> Created user 'airflow'
2020-08-02 09:28:39.055 [info] <0.268.0> Successfully set user tags for user 'airflow' to [administrator]
2020-08-02 09:28:39.056 [info] <0.268.0> Successfully set permissions for 'airflow' in virtual host 'airflow' to '.*', '.*', '.*'
2020-08-02 09:28:39.056 [info] <0.268.0> Running boot step rabbit_looking_glass defined by app rabbit
2020-08-02 09:28:39.056 [info] <0.268.0> Running boot step rabbit_core_metrics_gc defined by app rabbit
2020-08-02 09:28:39.056 [info] <0.268.0> Running boot step background_gc defined by app rabbit
2020-08-02 09:28:39.057 [info] <0.268.0> Running boot step connection_tracking defined by app rabbit
2020-08-02 09:28:39.060 [info] <0.268.0> Setting up a table for connection tracking on this node: 'tracked_connection_on_node_rabbit@rabbitmq-56b7bb7886-v99cn'
2020-08-02 09:28:39.064 [info] <0.268.0> Setting up a table for per-vhost connection counting on this node: 'tracked_connection_per_vhost_on_node_rabbit@rabbitmq-56b7bb7886-v99cn'
2020-08-02 09:28:39.064 [info] <0.268.0> Running boot step routing_ready defined by app rabbit
2020-08-02 09:28:39.065 [info] <0.268.0> Running boot step pre_flight defined by app rabbit
2020-08-02 09:28:39.065 [info] <0.268.0> Running boot step notify_cluster defined by app rabbit
2020-08-02 09:28:39.065 [info] <0.268.0> Running boot step networking defined by app rabbit
2020-08-02 09:28:39.067 [info] <0.684.0> started TCP listener on [::]:5672
2020-08-02 09:28:39.067 [info] <0.268.0> Running boot step cluster_name defined by app rabbit
2020-08-02 09:28:39.067 [info] <0.268.0> Initialising internal cluster ID to 'rabbitmq-cluster-id-rCjq6tnep6mYmM4Z5_ZcYg'
2020-08-02 09:28:39.069 [info] <0.268.0> Running boot step direct_client defined by app rabbit
2020-08-02 09:28:39.516 [info] <0.687.0> Feature flags: list of feature flags found:
2020-08-02 09:28:39.516 [info] <0.687.0> Feature flags:   [ ] drop_unroutable_metric
2020-08-02 09:28:39.516 [info] <0.687.0> Feature flags:   [ ] empty_basic_get_metric
2020-08-02 09:28:39.516 [info] <0.687.0> Feature flags:   [x] implicit_default_bindings
2020-08-02 09:28:39.516 [info] <0.687.0> Feature flags:   [x] quorum_queue
2020-08-02 09:28:39.516 [info] <0.687.0> Feature flags:   [x] virtual_host_metadata
2020-08-02 09:28:39.516 [info] <0.687.0> Feature flags: feature flag states written to disk: yes
2020-08-02 09:28:39.679 [info] <0.687.0> Running boot step rabbit_mgmt_db_handler defined by app rabbitmq_management_agent
2020-08-02 09:28:39.679 [info] <0.687.0> Management plugin: using rates mode 'basic'
2020-08-02 09:28:39.783 [info] <0.687.0> Running boot step rabbit_mgmt_reset_handler defined by app rabbitmq_management
2020-08-02 09:28:39.783 [info] <0.687.0> Running boot step rabbit_management_load_definitions defined by app rabbitmq_management
2020-08-02 09:28:39.821 [info] <0.752.0> Management plugin: HTTP (non-TLS) listener started on port 15672
2020-08-02 09:28:39.821 [info] <0.858.0> Statistics database started.
2020-08-02 09:28:39.821 [info] <0.857.0> Starting worker pool 'management_worker_pool' with 3 processes in it
2020-08-02 09:28:40.133 [info] <0.687.0> Server startup complete; 3 plugins started.
 * rabbitmq_management
 * rabbitmq_web_dispatch
 * rabbitmq_management_agent
 completed with 3 plugins.

(ImportError: No module named 'MySQLdb') celery executor issue

Hi,

I am facing the issue below while running DAGs with airflow backfill <DAG_id> -s <start_date> -e <end_date>:

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/airflow/executors/celery_executor.py", line 94, in sync
    state = task.state
  File "/usr/local/lib/python3.5/dist-packages/celery/result.py", line 431, in state
    return self._get_task_meta()['status']
  File "/usr/local/lib/python3.5/dist-packages/celery/result.py", line 370, in _get_task_meta
    return self._maybe_set_cache(self.backend.get_task_meta(self.id))
  File "/usr/local/lib/python3.5/dist-packages/celery/backends/base.py", line 352, in get_task_meta
    meta = self._get_task_meta_for(task_id)
  File "/usr/local/lib/python3.5/dist-packages/celery/backends/database/__init__.py", line 53, in _inner
    return fun(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/celery/backends/database/__init__.py", line 122, in _get_task_meta_for
    session = self.ResultSession()
  File "/usr/local/lib/python3.5/dist-packages/celery/backends/database/__init__.py", line 99, in ResultSession
    **self.engine_options)
  File "/usr/local/lib/python3.5/dist-packages/celery/backends/database/session.py", line 58, in session_factory
    engine, session = self.create_session(dburi, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/celery/backends/database/session.py", line 44, in create_session
    engine = self.get_engine(dburi, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/celery/backends/database/session.py", line 41, in get_engine
    return create_engine(dburi, poolclass=NullPool)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/__init__.py", line 391, in create_engine
    return strategy.create(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/strategies.py", line 80, in create
    dbapi = dialect_cls.dbapi(**dbapi_args)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/dialects/mysql/mysqldb.py", line 110, in dbapi
    return __import__('MySQLdb')
ImportError: No module named 'MySQLdb'
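The traceback shows SQLAlchemy resolving a mysql:// URL for Celery's result backend, so it tries to import the MySQLdb driver, which isn't installed in the image. Two possible fixes, assuming you don't actually need MySQL: install the driver (pip install mysqlclient, which provides the MySQLdb module on Python 3), or point the result backend at the Postgres database this setup already deploys, e.g. in airflow.cfg:

```ini
[celery]
; Option name varies by Airflow version: "celery_result_backend" in
; the 1.8 era, "result_backend" in later releases.
celery_result_backend = db+postgresql://airflow:airflow@postgres:5432/airflow
```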

Pinning rabbitmq without a minor version broke when a default timeout was added to rabbitmq

I am submitting this mostly as a protip in case anybody is using this older example implementation - In rabbitmq/rabbitmq-server#2990, a default timeout was added to rabbitmq. When this timeout is hit by a job running for more than 15 minutes, Airflow workers will get into a bad state and stop accepting new work.

This affects this example because it pins only a major version, and not a minor version.

A short term fix is to pin rabbitmq to a specific minor version, such as 3.8.14, which does not have this default behavior. A better fix is to configure the timeout to something that makes sense for your own Airflow workflows.
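A sketch of the configuration fix, assuming the manifest pulls the stock image (pinning would be e.g. image: rabbitmq:3.8.14-management). The value is in milliseconds; pick one that exceeds your longest-running task:

```
# rabbitmq.conf: raise the consumer timeout to 2 hours (milliseconds)
consumer_timeout = 7200000
```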

Scheduler pod is in CrashLoopBackOff mode

Every time we deploy through "kubectl create -f airflow.all.yaml --namespace airflow", the scheduler pod crashes after 5 restarts, and we observed that neither the Airflow home (/usr/local/airflow) nor the DAG path (/usr/local/airflow/dags) is getting created on the worker node. The DAG runs aren't yielding any results either. Not sure if we are missing any prerequisites before running kubectl create. Any help will be appreciated.

Updating dags

How are you handling the updating of DAGs without restarting the other containers, i.e. distributing the DAGs to all the different containers (dashboard, workers, schedulers)?
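One pattern that avoids restarts entirely (a hypothetical sketch, not necessarily something this chart ships; the repo URL and image tag are placeholders) is a git-sync sidecar in each pod that needs DAGs, keeping a shared emptyDir volume up to date:

```yaml
# Sidecar container added next to the webserver/scheduler/worker container;
# both containers mount the same emptyDir volume named "dags".
- name: git-sync
  image: k8s.gcr.io/git-sync/git-sync:v3.6.3
  env:
    - name: GIT_SYNC_REPO
      value: https://github.com/example/dags.git   # placeholder repo
    - name: GIT_SYNC_ROOT
      value: /dags
  volumeMounts:
    - name: dags
      mountPath: /dags
```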

SU pwd?

Hi!
Can you share the su password for the container?
Thanks!

helm installation fail

helm upgrade -f /home/pathto/airflow/values.yaml \
    --install \
    --debug \
    airflow \
    ./airflow
[debug] Created tunnel using local port: '43327'

[debug] SERVER: "127.0.0.1:43327"

Error: found in requirements.yaml, but missing in charts/ directory: postgresql, redis
Makefile:35: recipe for target 'helm-install' failed
make: *** [helm-install] Error 1
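The error means the subcharts listed in requirements.yaml were never fetched into the charts/ directory; pulling them before installing usually resolves it:

```shell
# Fetch the postgresql and redis subcharts declared in requirements.yaml
helm dependency update ./airflow
```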

Embedded dags - handling deployments

I have just got Airflow working on K8 through the help of this great project, thanks!

I had a question about the embedded DAGs approach. In the event of a deployment or a pod being rescheduled by K8s, how would a running DAG survive the process?

Or is the answer quite simply it wouldn't and it would need to retry?

Any other wise words on running a reliable Airflow on K8s are welcome.

Log does not show when browsing Logs on web UI

I am very new to Kubernetes and Airflow, please bear with me if I ask some very simple questions. I had kube-airflow running on the Kubernetes cluster, but when I click "Browse" "Logs" for some DAG, I saw the message:
*** Log file isn't local.
*** Fetching here: http://worker-797654bfc5-wnpvn:8793/log/tuto_test/print_date_test/2018-02-11T00:00:00
*** Failed to fetch log file from worker.

*** Reading remote logs...
*** Unsupported remote log location.
There is a folder /usr/local/airflow/logs in the worker pod, so why can't the log be fetched?

Any suggestion is appreciated. And thank you in advance.
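Worker pods are ephemeral, so the webserver often can't reach the worker that wrote a given log. One common workaround (assuming you have an object store and an Airflow connection for it; the bucket name and connection id below are hypothetical) is to ship logs to remote storage, e.g. in airflow.cfg:

```ini
[core]
; Hypothetical S3 bucket and Airflow connection id
remote_base_log_folder = s3://my-airflow-logs
remote_log_conn_id = s3_logs
```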

where is the dags folder?

I want to know where I can create a new DAG, and which pod the dags folder is in.
I tried to put my DAGs into the dags folder and restarted the pod, but nothing changed.

make build fails

trying to run make build fails with:

mkdir -p build/1.7.1.3-1.3.0-0.9
sed -e 's/%%KUBECTL_VERSION%%/'"1.3.0"'/g;' -e 's/%%AIRFLOW_VERSION%%/'"1.7.1.3"'/g;' Dockerfile.template > build/1.7.1.3-1.3.0-0.9/Dockerfile
cp -R rootfs build/1.7.1.3-1.3.0-0.9/rootfs
cp: rootfs: No such file or directory
make: *** [build/1.7.1.3-1.3.0-0.9/rootfs] Error 1

Does this rootfs folder need to be pulled from somewhere else? Maybe there is a dependency missing?

Using an external database

I want to use an external database like RDS for Airflow, but the connection seems hardcoded to postgres:5432. Is there a way to override this?
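Even when a connection string is baked into the image's airflow.cfg, Airflow also reads configuration from AIRFLOW__<SECTION>__<KEY> environment variables, so a possible sketch (the RDS hostname and credentials below are placeholders) is to inject the override into each Airflow container:

```yaml
# Added to each Airflow container spec (webserver, scheduler, worker)
env:
  - name: AIRFLOW__CORE__SQL_ALCHEMY_CONN
    value: postgresql+psycopg2://airflow:CHANGEME@my-rds-host.example.com:5432/airflow
```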

how to access airflow web

As mentioned in the readme, you use make web-browse to access the Airflow web interface. But when I checked the Makefile, this just runs a minikube command, which confuses me:
minikube service web -n $(NAMESPACE)

So my question is: how can I access the Airflow web interface from a remote machine, outside a minikube environment? Thank you.
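Outside minikube there are two common options: kubectl port-forward service/web 8080:8080 -n <namespace> for ad-hoc access, or exposing the service on every node. A sketch of the NodePort route (port numbers and labels are assumptions; match them to the web service in airflow.all.yaml):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: airflow
    tier: web
  ports:
    - port: 8080        # webserver port inside the cluster
      nodePort: 30080   # reachable on every node's IP
```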

Scheduler Not Queueing up Tasks

I'm using a slightly different configuration of this repo but I noticed that my Scheduler is not queueing up tasks even though my CeleryExecutors are able to fire off tasks through the CLI or Web UI.

I'm wondering if there is a good reason to hardcode the Celery broker URL and backend to amqp://airflow:airflow@rabbitmq:5672/airflow rather than using something like amqp://$RABBITMQ_CREDS@$RABBITMQ_PORT_15672_TCP_ADDR:$RABBITMQ_SERVICE_PORT_NODE/$RABBITMQ_DEFAULT_VHOST, where the host is dynamic. I've tried both methods, and my scheduler is not queuing up tasks, although it is constantly sending heartbeats.
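A hypothetical sketch of the env-driven approach (the variable names are assumptions modeled on Kubernetes' service environment variables, not what this repo ships): assemble the broker URL at container start instead of hardcoding it.

```python
import os

def broker_url() -> str:
    """Build the Celery broker URL from environment variables,
    falling back to the values hardcoded in this repo."""
    creds = os.environ.get("RABBITMQ_CREDS", "airflow:airflow")
    host = os.environ.get("RABBITMQ_SERVICE_HOST", "rabbitmq")
    port = os.environ.get("RABBITMQ_SERVICE_PORT", "5672")
    vhost = os.environ.get("RABBITMQ_DEFAULT_VHOST", "airflow")
    return f"amqp://{creds}@{host}:{port}/{vhost}"

print(broker_url())  # amqp://airflow:airflow@rabbitmq:5672/airflow when no overrides are set
```

One thing worth double-checking with the dynamic form: per the RabbitMQ startup log above, 15672 is the management plugin's HTTP port; the AMQP listener a broker URL needs is 5672.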

Helm?

Have you considered authoring a Helm chart for making this turn key? I'm happy to help.

Scheduler defaults to 5 runs, so it goes into a CrashLoopBackoff when deployed

args: ["scheduler", "-n", "5"]

This causes the file read loop to happen five times, then the scheduler exits. It seems like a strange default setup.

I'm a bit confused why this is set this way - shouldn't the scheduler be looping indefinitely? I'm also seeing the scheduler failing to queue up tasks, same as #19, and I wonder if this is the cause in that case, or something else.
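If an indefinite loop is what you want, dropping the flag makes the scheduler run until the process is killed (in Airflow, scheduler -n / --num_runs limits how many scheduling loops execute before the process exits):

```yaml
# Remove the "-n 5" run limit so the scheduler loops indefinitely
args: ["scheduler"]
```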

enable daemon in scheduler?

Hi,

Is there any way to keep the scheduler container from restarting every day, e.g. by enabling a daemon mode? Any parameter would be appreciated.

Thanks.

Issues with AirFlow version

Hey there, I'm building kube-airflow on Circle, and when the docker build step got to && pip install airflow[celery,postgresql,hive]==$AIRFLOW_VERSION \ it failed with 'no such version'. I had to remove the version requirement, even though AIRFLOW_VERSION was set in my environment. I'm not familiar with the %%AIRFLOW_VERSION%% syntax, but I presume that's grabbing AIRFLOW_VERSION from the environment?

Also needed to add Cython to the pip steps of the Dockerfile; it looks like a pandas dependency that isn't getting built before pandas:

    && pip install pyasn1 \
    && pip install psycopg2 \
    && pip install Cython \
    && pip install airflow \

How do i add dags

I am trying to add dags but I get the below error

error: error validating "airflow.yaml": error validating data: ValidationError(Deployment.spec.template.spec.containers[0]): unknown field "volumes" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false

My YAML snippet:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: scheduler
  namespace: dev
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: airflow
        tier: scheduler
    spec:
      restartPolicy: Always
      containers:
        - name: scheduler
          image: mumoshu/kube-airflow:1.8.0.0-1.6.1
          volumes:
            - /home/admin/dags/:/usr/local/airflow/dags
          env:
            - name: AIRFLOW_HOME
              value: "/usr/local/airflow"
          args: ["scheduler", "-n", "5"]
---

I tried to adjust the indent of volumes to be under containers but when I do that I see the following:

error: error converting YAML to JSON: yaml: line 19: mapping values are not allowed in this context

Can you please provide an example of how to add/delete DAGs?
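The validation error occurs because volumes is a pod-level field; containers reference a named volume through volumeMounts. A corrected sketch of the snippet above (hostPath assumes /home/admin/dags exists on every node, which is only one of several ways to distribute DAGs):

```yaml
    spec:
      restartPolicy: Always
      containers:
        - name: scheduler
          image: mumoshu/kube-airflow:1.8.0.0-1.6.1
          volumeMounts:
            - name: dags
              mountPath: /usr/local/airflow/dags
          env:
            - name: AIRFLOW_HOME
              value: "/usr/local/airflow"
          args: ["scheduler", "-n", "5"]
      volumes:
        - name: dags
          hostPath:
            path: /home/admin/dags
```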

Allow DNS to resolve workers

Is there a way of allowing the webserver to resolve the workers through their names?

Currently, this stack puts workers into StatefulSets. I think the barrier to using a Deployment is that before log files are put in GCS, they fail to resolve:

*** Fetching from: http://airflow-worker-6dd9f5bc74-h97vf:8793/log/[***]
*** Failed to fetch log file from worker. HTTPConnectionPool(host='airflow-worker-6dd9f5bc74-h97vf', port=8793): Max retries exceeded with url: /log/[***] (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f142598b210>: Failed to establish a new connection: [Errno -2] Name or service not known',))

If we could surmount that, we could use a Deployment, which means we could run on preemptible machines and generally allow moving pods around.
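For reference, per-pod DNS names of the form <pod>.<service>.<namespace>.svc only come from a StatefulSet paired with a headless Service; Deployment pods don't get hostname records, which is why those log-fetch URLs fail to resolve. A minimal sketch of the headless Service side (labels and port are assumptions; match them to the worker pods):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: airflow-worker
spec:
  clusterIP: None      # headless: DNS records point directly at the pods
  selector:
    app: airflow
    tier: worker
  ports:
    - port: 8793       # the log-serving port workers expose
```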
