
willhallonline / docker-ansible

Ansible inside Docker containers: Alpine, Ubuntu, Rocky & Debian with Ansible 2.16, 2.15, 2.14, 2.13, 2.12, 2.11, 2.10 and 2.9 + Mitogen

Home Page: https://www.willhallonline.co.uk/project/docker/docker-ansible/

License: MIT License


docker-ansible's Introduction


Ansible inside Docker, for consistent runs of Ansible on your local machine or in your CI/CD system. See the CHANGELOG to understand what has changed recently.


Current Ansible Core Versions

These are the latest Ansible Core versions running within the containers:

Supported tags and respective Dockerfile links

Immutable Images

A number of immutable images are also published. To find a specific version of Ansible, look within the Docker Hub Tags. Each container follows a similar tag pattern: {ansible-version}-{base-os}-{os-version}.
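As a quick sketch of that tag pattern (the version values below are just examples; check Docker Hub Tags for the tags that actually exist):

```shell
# Compose an immutable image tag from the pattern
# {ansible-version}-{base-os}-{os-version}; values here are illustrative.
ansible_version="2.16"
base_os="alpine"
os_version="3.19"
tag="willhallonline/ansible:${ansible_version}-${base_os}-${os_version}"
echo "$tag"   # willhallonline/ansible:2.16-alpine-3.19
```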

Ansible Core (2.13, 2.14, 2.15, 2.16)

This includes:

Base Image (↓) \ Ansible Version (→) Dockerfile 2.13 2.14 2.15 2.16
Latest Dockerfile latest
Alpine Dockerfile alpine
Ubuntu Dockerfile ubuntu
Alpine 3.15 Dockerfile 2.13-alpine-3.15 2.14-alpine-3.15 2.15-alpine-3.15
Alpine 3.16 Dockerfile 2.13-alpine-3.16 2.14-alpine-3.16 2.15-alpine-3.16 2.16-alpine-3.16
Alpine 3.17 Dockerfile 2.13-alpine-3.17 2.14-alpine-3.17 2.15-alpine-3.17 2.16-alpine-3.17
Alpine 3.18 Dockerfile 2.13-alpine-3.18 2.14-alpine-3.18 2.15-alpine-3.18 2.16-alpine-3.18
Alpine 3.19 Dockerfile 2.13-alpine-3.19 2.14-alpine-3.19 2.15-alpine-3.19 2.16-alpine-3.19
Bullseye (Debian 11) Dockerfile 2.14-bullseye 2.15-bullseye
Bullseye Slim (Debian 11) Dockerfile 2.14-bullseye-slim 2.15-bullseye-slim
Bookworm (Debian 12) Dockerfile 2.14-bookworm 2.15-bookworm 2.16-bookworm
Bookworm Slim (Debian 12) Dockerfile 2.14-bookworm-slim 2.15-bookworm-slim 2.16-bookworm-slim
Rocky Linux 9 Dockerfile 2.14-rockylinux-9 2.15-rockylinux-9
Ubuntu 22.04 Dockerfile 2.14-ubuntu-22.04 2.15-ubuntu-22.04 2.16-ubuntu-22.04

ARM Releases

There is some support for the Arm architecture:

  • linux/arm64 (Apple Silicon MacBooks and AWS Graviton) for the latest and alpine image tags.
  • linux/arm/v7 and linux/arm/v6 for the arm image tag (Raspberry Pi). Experimental!

Older releases

  • Ansible 2.12 (2.12.10) includes ansible-core + ansible. This also requires Python >=3.8
  • Ansible 2.11 (2.11.12) includes ansible-core + ansible. This also requires Python 3.
  • Ansible 2.10 (2.10.17) includes ansible-base.
  • Ansible 2.9 (2.9.27) includes ansible.
  • All versions also include ansible-lint.

These are no longer updated or maintained; however, they remain available for users running older workloads.

Base Image (↓) \ Ansible Version (→) 2.12 2.11 2.10 2.9
Alpine 3.14 2.12-alpine-3.14 Dockerfile 2.11-alpine-3.14 Dockerfile 2.10-alpine-3.14 Dockerfile 2.9-alpine-3.14 Dockerfile
Alpine 3.15 2.12-alpine-3.15 Dockerfile 2.11-alpine-3.15 Dockerfile 2.10-alpine-3.15 Dockerfile 2.9-alpine-3.15 Dockerfile
Alpine 3.16 2.12-alpine-3.16 Dockerfile 2.11-alpine-3.16 Dockerfile 2.10-alpine-3.16 Dockerfile 2.9-alpine-3.16 Dockerfile
Alpine 3.17 2.12-alpine-3.17 Dockerfile 2.11-alpine-3.17 Dockerfile 2.10-alpine-3.17 Dockerfile 2.9-alpine-3.17 Dockerfile
Bullseye (Debian 11) 2.12-bullseye Dockerfile 2.11-bullseye Dockerfile 2.10-bullseye Dockerfile 2.9-bullseye Dockerfile
Bullseye Slim (Debian 11) 2.12-bullseye-slim Dockerfile 2.11-bullseye-slim Dockerfile 2.10-bullseye-slim Dockerfile 2.9-bullseye-slim Dockerfile
Buster (Debian 10) 2.12-buster Dockerfile 2.11-buster Dockerfile 2.10-buster Dockerfile 2.9-buster Dockerfile
Buster Slim (Debian 10) 2.12-buster-slim Dockerfile 2.11-buster-slim Dockerfile 2.10-buster-slim Dockerfile 2.9-buster-slim Dockerfile
Centos 7 2.12-centos-7 Dockerfile 2.11-centos-7 Dockerfile 2.10-centos-7 Dockerfile 2.9-centos-7 Dockerfile
Rocky Linux 8 2.12-rockylinux-8 Dockerfile 2.11-rockylinux-8 Dockerfile 2.10-rockylinux-8 Dockerfile 2.9-rockylinux-8 Dockerfile
Rocky Linux 9 2.12-rockylinux-9 Dockerfile 2.11-rockylinux-9 Dockerfile 2.10-rockylinux-9 Dockerfile 2.9-rockylinux-9 Dockerfile
Ubuntu 18.04 2.12-ubuntu-18.04 Dockerfile 2.11-ubuntu-18.04 Dockerfile 2.10-ubuntu-18.04 Dockerfile 2.9-ubuntu-18.04 Dockerfile
Ubuntu 20.04 2.12-ubuntu-20.04 Dockerfile 2.11-ubuntu-20.04 Dockerfile 2.10-ubuntu-20.04 Dockerfile 2.9-ubuntu-20.04 Dockerfile
Ubuntu 22.04 2.12-ubuntu-22.04 Dockerfile 2.11-ubuntu-22.04 Dockerfile 2.10-ubuntu-22.04 Dockerfile 2.9-ubuntu-22.04 Dockerfile

Using Mitogen

All images include Mitogen, mainly for the performance improvements it affords. You can read more about it in the Mitogen for Ansible documentation. To use Mitogen to accelerate your playbook runs, add the configuration below to your ansible.cfg.

First, find the location of ansible_mitogen in your container (it differs per container). You can do this via:

your_container="ansible:latest"
docker run --rm -it "willhallonline/${your_container}" /bin/sh -c "find / -type d | grep 'ansible_mitogen/plugins' | sort | head -n 1"

and then configure your own ansible.cfg like:

[defaults]
strategy_plugins = /usr/local/lib/python3.{python-version}/site-packages/ansible_mitogen/plugins/
strategy = mitogen_linear
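Putting the two steps together, a small script can discover the plugin directory and write the ansible.cfg for you. This is a sketch to run inside the container: the search root and the empty-result handling are assumptions, and the resulting path differs per image.

```shell
# Locate the ansible_mitogen plugins directory (path varies per container)
# and generate an ansible.cfg that enables the mitogen_linear strategy.
mitogen_dir=$(find /usr -type d -path '*ansible_mitogen/plugins' 2>/dev/null | sort | head -n 1)

cat > ansible.cfg <<EOF
[defaults]
strategy_plugins = ${mitogen_dir}/
strategy = mitogen_linear
EOF

cat ansible.cfg
```

If the find returns nothing, widen the search root (the README's own example searches from /).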

Running

You will likely need to mount the required directories into your container to make it run (or build on top of what is here).

Simple

$~   docker run --rm -it willhallonline/ansible:latest /bin/sh

Mount local directory and ssh key

$~  docker run --rm -it -v $(pwd):/ansible -v ~/.ssh/id_rsa:/root/id_rsa willhallonline/ansible:latest /bin/sh

Injecting commands

$~  docker run --rm -it -v $(pwd):/ansible -v ~/.ssh/id_rsa:/root/id_rsa willhallonline/ansible:latest ansible-playbook playbook.yml

Bash Alias

You can put these aliases inside your dotfiles (~/.bashrc or ~/.zshrc) to keep them handy.

alias docker-ansible-cli='docker run --rm -it -v $(pwd):/ansible -v ~/.ssh/id_rsa:/root/.ssh/id_rsa --workdir=/ansible willhallonline/ansible:latest /bin/sh'
alias docker-ansible-cmd='docker run --rm -it -v $(pwd):/ansible -v ~/.ssh/id_rsa:/root/.ssh/id_rsa --workdir=/ansible willhallonline/ansible:latest '

use with:

$~  docker-ansible-cli ansible-playbook playbook.yml
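If you prefer shell functions to aliases, a small wrapper can build the docker command and print it for inspection before you run it. This is a sketch; the image tag and mount paths are the same assumptions as in the aliases above.

```shell
# Build the docker run command for a given ansible invocation and print it,
# so it can be reviewed (or eval'd) rather than executed blindly.
docker_ansible() {
  printf '%s\n' "docker run --rm -it -v $(pwd):/ansible -v ${HOME}/.ssh/id_rsa:/root/.ssh/id_rsa --workdir=/ansible willhallonline/ansible:latest $*"
}

docker_ansible ansible-playbook playbook.yml
```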

Maintainer

docker-ansible's People

Contributors

badochov, bartdorlandt, christ-re, dimon222, eill, larsuhartmann, pavelpikta, poeschl, renovate-bot, rezabojnordi, sdktr, willhallonline


docker-ansible's Issues

Wrong Docker image tags for Alpine versions

Some Docker images tagged as a specific Alpine version actually ship a different one: for example, 2.12-alpine-3.14 and 2.12-alpine-3.15 are actually Alpine 3.16.

❯ docker run --rm -it willhallonline/ansible:2.12-alpine-3.14 sh

/ansible # cat /etc/issue
Welcome to Alpine Linux 3.16
Kernel \r on an \m (\l)

/ansible # apk add vim
fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/community/x86_64/APKINDEX.tar.gz
...

This breaks automated workflows that expect a specific version when the base Alpine image is a different release.

Version 2.15-ubuntu-22.04 unable to run playbook

When run in 2.15-ubuntu-22.04, some errors occur.

$ docker run --rm -it -v $(pwd):/ansible willhallonline/ansible:2.15-ubuntu-22.04 ansible-playbook playbook.yaml

PLAY [Change to multi user target] *******************************************
ERROR! Unexpected Exception, this is probably a bug: can't start new thread
to see the full traceback, use -vvv

It works fine when run in 2.15.2-alpine-3.16.

$ docker run --rm -it -v $(pwd):/ansible willhallonline/ansible:2.15.2-alpine-3.16 ansible-playbook playbook.yaml

PLAY [Change to multi user target] *******************************************

TASK [Gathering Facts] 
...

ARM64v8 images

Hi Will

I just got an M2 MacBook Air, and wanted to try running this container, but unfortunately discovered that it does not have any arm64v8 images. I see you use gitlab-ci, but I have never worked with that, so I am not sure which changes would be needed to include arm64v8 images as well - if you would be interested in that 😄

Would it be feasible to add ARM images as well?

https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/

Thanks!

WORKDIR /ansible in the Dockerfile

Hello, William!

The documentation for your image recommends mounting the playbook directory at the /ansible path, but the workdir in the Docker image is set to /, so to execute a playbook we need to call "ansible-playbook /ansible/playbook.yml".

Unfortunately, Ansible looks for config files in $(pwd) and predefined config dirs, so we need to explicitly mount all configuration files (like ansible.cfg) into /etc/ansible or another default search directory. What do you think about changing WORKDIR in the Dockerfiles to /ansible?

I think it may be useful, but I'm not sure how it would affect the existing pipelines, so I'm not submitting a PR yet.
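In the meantime, a run-time workaround is possible: override the working directory when starting the container, as the README's own aliases already do with --workdir. A sketch (image tag and playbook name are illustrative):

```shell
# Override the container workdir at run time so both ansible-playbook and
# ansible.cfg resolution happen in the mounted /ansible directory.
cmd='docker run --rm -it -v "$(pwd)":/ansible --workdir=/ansible willhallonline/ansible:latest ansible-playbook playbook.yml'
printf '%s\n' "$cmd"
```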

Wrong 'password_hash' module result for 'bcrypt' algorithm using willhallonline/ansible:2.12-bullseye image

I tried to use 'password_hash' module in my playbook, and I got misbehavior when using 'bcrypt' algorithm with the module.

Example playbook:

$ cat _test18.yml 
- hosts: all
  tasks:
  - name: get hash
    debug:
      msg: "{{ 'asdfasdf' | password_hash('bcrypt', rounds=10, ident='2a') }}"

Actual behavior:
Hash value is not meaningful.

$ podman run --rm -it -v $(pwd):/ansible \
docker.io/willhallonline/ansible:2.12-bullseye \
bash -c 'ansible-playbook -l localhost _test18.yml'

PLAY [all] ********************************************************

TASK [Gathering Facts] ********************************************
ok: [localhost]

TASK [get hash] ***************************************************
ok: [localhost] => {
    "msg": "*0"
}

PLAY RECAP ********************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Expected behavior:
Meaningful hash value (something that starts from $2a$10$).

Suggested solution to the problem:
Install the python3-passlib package in the Dockerfile.
Adding it to the docker/podman run command also solves the problem:

$ podman run --rm -it -v $(pwd):/ansible \
docker.io/willhallonline/ansible:2.12-bullseye \
bash -c 'apt-get -yq update; apt-get -yq install python3-passlib; ansible-playbook -l localhost _test18.yml'
Get:1 http://security.debian.org/debian-security bullseye-security InRelease [44.1 kB]
Get:2 http://deb.debian.org/debian bullseye InRelease [116 kB]
Get:3 http://deb.debian.org/debian bullseye-updates InRelease [39.4 kB]
Get:4 http://security.debian.org/debian-security bullseye-security/main amd64 Packages [164 kB]
Get:5 http://deb.debian.org/debian bullseye/main amd64 Packages [8182 kB]
Get:6 http://deb.debian.org/debian bullseye-updates/main amd64 Packages [2592 B]
Fetched 8548 kB in 3s (3410 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
  python3-passlib
0 upgraded, 1 newly installed, 0 to remove and 4 not upgraded.
Need to get 368 kB of archives.
After this operation, 2097 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian bullseye/main amd64 python3-passlib all 1.7.4-1 [368 kB]
Fetched 368 kB in 0s (1557 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package python3-passlib.
(Reading database ... 19434 files and directories currently installed.)
Preparing to unpack .../python3-passlib_1.7.4-1_all.deb ...
Unpacking python3-passlib (1.7.4-1) ...
Setting up python3-passlib (1.7.4-1) ...

PLAY [all] ********************************************************

TASK [Gathering Facts] ********************************************
ok: [localhost]

TASK [get hash] ***************************************************
ok: [localhost] => {
    "msg": "$2a$10$l0AJxfvJ34moWsRtJGeFR.fdDbwU60mmxEX6tV/OK6.dHHcPYfVDS"
}

PLAY RECAP ********************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Environment:

$ grep -P '^(NAME|VERSION)=' /etc/os-release 
NAME="Fedora Linux"
VERSION="36 (KDE Plasma)"
$ uname -a
Linux admin200 5.17.14-300.fc36.x86_64 #1 SMP PREEMPT Thu Jun 9 13:41:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
$ podman --version
podman version 4.1.0
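A derived image is another way to apply the suggested fix above without patching every run command. This Dockerfile fragment is an untested sketch that preinstalls passlib on top of the Bullseye image:

```dockerfile
FROM willhallonline/ansible:2.12-bullseye
# passlib provides the bcrypt backend that password_hash('bcrypt') needs
RUN apt-get update -yq \
    && apt-get install -yq --no-install-recommends python3-passlib \
    && rm -rf /var/lib/apt/lists/*
```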

Git clone via SSH within a container, using an SSH key pair

hi again @willhallonline ,

  • As mentioned earlier, I am currently using your willhallonline/ansible:2.8-centos image
  • I am not sure whether it is due to your image design, related to the underlying CentOS, or simply my own ignorance, but I tested and re-tested a couple dozen times until I was sure:
    • What I wanted to do is git clone my Ansible playbook into the container, to execute it.
    • I have an SSH key pair inside a folder at /path/to/my/secrets/.ssh, where /path/ is a folder that does not exist in the CentOS filesystem layout. So I just create it and put my key pair into it.
    • I use GIT_SSH_COMMAND to tell git clone where the private key is: export GIT_SSH_COMMAND='ssh -i /path/to/my/secrets/.ssh/my_private_key_file'.
    • And here is what I made sure of:
      • unless I export GIT_SSH_COMMAND='ssh -i ~/.ssh/id_rsa',
      • ssh -Tvvvai /path/to/my/secrets/.ssh/my_private_key_file [email protected] always succeeds (no SSH problem),
      • git clone $SSH_URI_TO_MY_REPO always fails with 'permission denied',
      • and the git clone still fails even if I:
chmod 700 -R /path/to/my/secrets/.ssh
chmod 644 -R /path/to/my/secrets/.ssh/my_public_key_file.pub
chmod 600 -R /path/to/my/secrets/.ssh/my_private_key_file

Don't hesitate to ask me for any additional info,

And thank you for your work.

ps: worth mentioning, I don't use this repo directly; I just docker pull willhallonline/ansible:2.8-centos and build my own image FROM yours.

Add the possibility to set the Ansible minor version

It would be nice to be able to pin the Ansible minor version.

For example, for Ansible 2.10 and based on the Ansible docs, something like this could be implemented:

...
# set ansible version
ENV ANSIBLE_VERSION 2.10.4
...
RUN pip3 install "ansible-base==${ANSIBLE_VERSION}"
...

The key point is to replace ansible with ansible-base (see the referenced doc above); otherwise the latest minor version (currently 2.10.13) will always be installed regardless of what you set in the line pip3 install ansible==2.10.7 && \.

What do you think?
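One way to implement this suggestion is a build argument, so the pin can be overridden at build time. A Dockerfile sketch (the ARG name and default are illustrative, and the fragment assumes it sits after a FROM line with pip3 available):

```dockerfile
# Pin the full ansible-base version; override with --build-arg at build time.
ARG ANSIBLE_VERSION=2.10.4
RUN pip3 install "ansible-base==${ANSIBLE_VERSION}"
```

It could then be built with, for example, docker build --build-arg ANSIBLE_VERSION=2.10.7 .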

Mitogen version

Greetings,

Running ansible-playbook with willhallonline/ansible:alpine is giving the following:

ERROR! Your Ansible version (2.10.1) is too recent. The most recent version
supported by Mitogen for Ansible is (2, 9).x. Please check the Mitogen
release notes to see if a new version is available, otherwise
subscribe to the corresponding GitHub issue to be notified when
support becomes available.

also:

ansible --version
ansible 2.10.1
config file = /eis-ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.2 (default, Jul 18 2020, 19:35:03) [GCC 9.2.0]

It is showing 2.10.1, and the docs say it should be 2.10.3.

Am I doing something wrong?

Thanks!

Config = none

Hello !
When the container finishes running, I see these logs:

ansible-playbook 2.7.9

  config file = None

  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']

  ansible python module location = /usr/lib/python2.7/site-packages/ansible

  executable location = /usr/bin/ansible-playbook

  python version = 2.7.15 (default, Aug 16 2018, 14:17:09) [GCC 6.4.0]

Is this ok? I think Ansible can't find my files: inventory, .cfg, or my playbook .yml file.

Plz help :-)

ansible-galaxy requires resolvelib<0.9.0,>=0.5.3

Background

I absolutely love the effort you put into this project, thank you! I am using this project on another open source project and I'm running into issues when using the latest alpine version.

Affected images

  • willhallonline/ansible:latest
  • willhallonline/ansible:alpine

Special Notes

Note

This does not happen on willhallonline/ansible:ubuntu

Steps to reproduce

Run a container:

docker run --rm --pull always -it willhallonline/ansible:alpine sh

Once in the container, try to run an Ansible Galaxy install:

ansible-galaxy collection install geerlingguy.mac

Expected Results

  • I am expecting my Ansible Galaxy collection to install and succeed

Actual Results


Text version of error

/ansible # ansible-galaxy collection install geerlingguy.mac
Starting galaxy collection install process
Process install dependency map
ERROR! ansible-galaxy requires resolvelib<0.9.0,>=0.5.3


container restarting loop

Hello, today I tried to bring up a container with Ansible 2.10, but it is stuck in a restarting loop (I tried 2.12 as well, same result).

docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
859de8ee5a7a willhallonline/ansible:2.10-alpine-3.15 "ansible-playbook --…" 9 minutes ago Restarting (0) 23 seconds ago ansible210

Any ideas or suggestions about this issue?
Regards.

ansible-galaxy collection list is rather old

Many of the included collections are really old (possibly years old, in some cases).
I support the decision to include them, but updated collections should be shipped at some point.

And yes, of course, I can install them by hand, but the whole point of having a packaged Ansible distro is dispensing with most of that.

ansible-galaxy requires resolvelib <0.6.0, >=0.5.3

Summary

Some images with 2.12 and Alpine 3.14 (and above) are throwing the error:

ERROR! ansible-galaxy requires resolvelib<0.6.0,>=0.5.3

This seems to have started yesterday (Feb 20, 2023), after the image update on Docker Hub.

All tags seem to run ansible core 2.13.1

Example:

docker run --rm -it willhallonline/ansible:2.10-bullseye

ansible-playbook [core 2.13.1]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
  jinja version = 3.1.2
  libyaml = True

Add git for ansible-galaxy

Hi,

Thank you for these great images!

What do you think of adding git so that we can use ansible-galaxy with git?

Thanks a lot!

Regards,

Raphaël

Alpine 3.17 with Ansible-lint issue with `packaging`

Currently, there is a conflict between the Alpine 3.17 base image and the packaging dependency used by ansible-core (Ansible 2.11, 2.12 and 2.13). I have dropped ansible-lint from the latest image; however, it is needed long-term and will be brought back (probably in a couple of weeks, when Alpine 3.17 changes).

SSH-KEY

When attempting to run a playbook, I am unable to SSH into the hosts; all of them have root login enabled:

/ansible # ssh-copy-id root@ip
/usr/bin/ssh-copy-id: ERROR: No identities found
/ansible # ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:k5DL7cRJYsUnaYTFFgZEL9MeoRI1xKsJqNgNbffZlGM root@7b0947e308c0
The key's randomart image is:
+---[RSA 3072]----+
| .*BBBo |

| |
| |
+----[SHA256]-----+
/ansible # ls
hosts playbook.yml
/ansible # ssh-copy-id root@ip
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
expr: warning: '^ERROR: ': using '^' as the first character
of a basic regular expression is not portable; it is ignored
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ip's password:
Permission denied, please try again.
root@ip's password:

Include pymssql python and ansible collection community-general dependencies

I couldn't find any way to use the module community.general.mssql_script in your Docker image apart from adding commands to docker run to install its dependencies on every playbook run.

Please include the pymssql Python package and the community.general Ansible collection dependencies in your images.

Request: Immutable tags on docker hub

I use this image for a GitLab CI/CD pipeline in conjunction with matrix_docker_ansible_deploy, a comprehensive playbook for installing a Matrix homeserver.
The last update of this image bumped the Ansible version in willhallonline/ansible:2.10-ubuntu20.04 from 2.10.12 to 2.10.15, which broke the execution of this playbook, and I have no way to use an older version of this image, since there are no immutable tags I could pin the Docker image to.
Therefore, I request that immutable tags be added, like "2.10.15-ubuntu20.04", in addition to the usual "2.10-ubuntu20.04".

Kerberos Support?

Would we be able to get Kerberos support in the container? It would help tremendously when trying to manage Windows systems via Ansible.

New OS Base

Need new OS base for:

  1. Alpine-3.12
  2. Alpine-3.13
  3. Centos-8

Error for local steps with ansible:2.9-alpine-3.12 container

Hello,

context

I am using your containers in a CI/CD process with GitLab CI.
Since yesterday, my CI process has failed on each run. It seems to be related to the last container update, done a few days ago: https://hub.docker.com/layers/willhallonline/ansible/2.10-alpine-3.12/images/sha256-79839d6061d896f023e0cf977f089a3af124432802ec9bcab2ea9048477419c1?context=explore.

I am using this container:

  • willhallonline/ansible:2.9-alpine-3.12
But I also tried this one, with the same result:
  • willhallonline/ansible:2.10-alpine-3.12

task in error

- name: Generate HAProxy configuration (for Gitlab artifacts).
  template:
    src: haproxy.cfg.j2
    dest: haproxy.cfg
    mode: 0644
  delegate_to: localhost

error logs

TASK [include_role : unamur.haproxy] *******************************************
task path: /builds/siu-infra/services/haproxy/haproxy-deployment/haproxy-test.yml:6
TASK [unamur.haproxy : Generate HAProxy configuration (for Gitlab artifacts).] ***
task path: /builds/siu-infra/services/haproxy/haproxy-deployment/roles/unamur.haproxy/tasks/test.yml:2
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: admin
<localhost> EXEC /bin/sh -c 'echo ~admin && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo ~admin/.ansible/tmp `"&& mkdir "` echo ~admin/.ansible/tmp/ansible-tmp-1654012894.3296542-42-10278076990972 `" && echo ansible-tmp-1654012894.3296542-42-10278076990972="` echo ~admin/.ansible/tmp/ansible-tmp-1654012894.3296542-42-10278076990972 `" ) && sleep 0'
fatal: [proxyweb1.srv.unamur.be -> localhost]: UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to create temporary directory. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo ~admin/.ansible/tmp `\"&& mkdir \"` echo ~admin/.ansible/tmp/ansible-tmp-1654012894.3296542-42-10278076990972 `\" && echo ansible-tmp-1654012894.3296542-42-10278076990972=\"` echo ~admin/.ansible/tmp/ansible-tmp-1654012894.3296542-42-10278076990972 `\" ), exited with result 1, stderr output: mkdir: can't create directory '~admin': File exists\n",
    "unreachable": true
}

It seems that it is not possible for the container to create the local Ansible tmp directory in the home directory.
Do you have any idea what could be the cause of this error?

Thanks,

Alexandre

ansible.cfg

My ansible.cfg file:

# config file for ansible -- http://ansible.com/
# ==============================================

# nearly all parameters can be overridden in ansible-playbook
# or with command line flags. ansible will read ANSIBLE_CONFIG,
# ansible.cfg in the current working directory, .ansible.cfg in
# the home directory or /etc/ansible/ansible.cfg, whichever it
# finds first

[defaults]

# some basic default values...
inventory = ./environments/dev
#inventory      = /etc/ansible/hosts
#library        = /usr/share/my_modules/
#remote_tmp     = ~/.ansible/tmp
#local_tmp      = ~/.ansible/tmp
#forks          = 5
#poll_interval  = 15
#sudo_user      = root
#ask_sudo_pass = True
#ask_pass      = True
#transport      = smart
#remote_port    = 22
#module_lang    = C
#module_set_locale = False

# plays will gather facts by default, which contain information about
# the remote system.
#
# smart - gather by default, but don't regather if already gathered
# implicit - gather by default, turn off with gather_facts: False
# explicit - do not gather by default, must say gather_facts: True
#gathering = implicit

# by default retrieve all facts subsets
# all - gather all subsets
# network - gather min and network facts
# hardware - gather hardware facts (longest facts to retrieve)
# virtual - gather min and virtual facts
# facter - import facts from facter
# ohai - import facts from ohai
# You can combine them using comma (ex: network,virtual)
# You can negate them using ! (ex: !hardware,!facter,!ohai)
# A minimal set of facts is always gathered.
#gather_subset = all

# some hardware related facts are collected
# with a maximum timeout of 10 seconds. This
# option lets you increase or decrease that
# timeout to something more suitable for the
# environment. 
# gather_timeout = 10

# additional paths to search for roles in, colon separated
#roles_path    = /etc/ansible/roles
roles_path = ../roles:roles:../roles.galaxy:roles.galaxy

# uncomment this to disable SSH key host checking
host_key_checking = False

# change the default callback
#stdout_callback = skippy
# enable additional callbacks
callback_whitelist = timer, mail, profile_tasks

# Determine whether includes in tasks and handlers are "static" by
# default. As of 2.0, includes are dynamic by default. Setting these
# values to True will make includes behave more like they did in the
# 1.x versions.
#task_includes_static = True
#handler_includes_static = True

# Controls if a missing handler for a notification event is an error or a warning
#error_on_missing_handler = True

# change this for alternative sudo implementations
#sudo_exe = sudo

# What flags to pass to sudo
# WARNING: leaving out the defaults might create unexpected behaviours
#sudo_flags = -H -S -n

# SSH timeout
#timeout = 10

# default user to use for playbooks if user is not specified
# (/usr/bin/ansible will use current user as default)
#remote_user = root

# logging is off by default unless this path is defined
# if so defined, consider logrotate
#log_path = /var/log/ansible.log

# default module name for /usr/bin/ansible
#module_name = command

# use this shell for commands executed under sudo
# you may need to change this to bin/bash in rare instances
# if sudo is constrained
#executable = /bin/sh

# if inventory variables overlap, does the higher precedence one win
# or are hash values merged together?  The default is 'replace' but
# this can also be set to 'merge'.
#hash_behaviour = replace

# by default, variables from roles will be visible in the global variable
# scope. To prevent this, the following option can be enabled, and only
# tasks and handlers within the role will see the variables there
#private_role_vars = yes

# list any Jinja2 extensions to enable here:
#jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n

# if set, always use this private key file for authentication, same as
# if passing --private-key to ansible or ansible-playbook
#private_key_file = /path/to/file

# If set, configures the path to the Vault password file as an alternative to
# specifying --vault-password-file on the command line.
#vault_password_file = /path/to/vault_password_file

# format of string {{ ansible_managed }} available within Jinja2
# templates indicates to users editing templates files will be replaced.
# replacing {file}, {host} and {uid} and strftime codes with proper values.
#ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
# {file}, {host}, {uid}, and the timestamp can all interfere with idempotence
# in some situations so the default is a static string:
#ansible_managed = Ansible managed

# by default, ansible-playbook will display "Skipping [host]" if it determines a task
# should not be run on a host.  Set this to "False" if you don't want to see these "Skipping"
# messages. NOTE: the task header will still be shown regardless of whether or not the
# task is skipped.
#display_skipped_hosts = True

# by default, if a task in a playbook does not include a name: field then
# ansible-playbook will construct a header that includes the task's action but
# not the task's args.  This is a security feature because ansible cannot know
# if the *module* considers an argument to be no_log at the time that the
# header is printed.  If your environment doesn't have a problem securing
# stdout from ansible-playbook (or you have manually specified no_log in your
# playbook on all of the tasks where you have secret information) then you can
# safely set this to True to get more informative messages.
#display_args_to_stdout = False

# by default (as of 1.3), Ansible will raise errors when attempting to dereference
# Jinja2 variables that are not set in templates or action lines. Uncomment this line
# to revert the behavior to pre-1.3.
#error_on_undefined_vars = False

# by default (as of 1.6), Ansible may display warnings based on the configuration of the
# system running ansible itself. This may include warnings about 3rd party packages or
# other conditions that should be resolved if possible.
# to disable these warnings, set the following value to False:
#system_warnings = True

# by default (as of 1.4), Ansible may display deprecation warnings for language
# features that should no longer be used and will be removed in future versions.
# to disable these warnings, set the following value to False:
#deprecation_warnings = True

# (as of 1.8), Ansible can optionally warn when usage of the shell and
# command module appear to be simplified by using a default Ansible module
# instead.  These warnings can be silenced by adjusting the following
# setting or adding warn=yes or warn=no to the end of the command line
# parameter string.  This will for example suggest using the git module
# instead of shelling out to the git command.
# command_warnings = False


# set plugin path directories here, separate with colons
#action_plugins     = /usr/share/ansible/plugins/action
#cache_plugins      = /usr/share/ansible/plugins/cache
#callback_plugins   = /usr/share/ansible/plugins/callback
#connection_plugins = /usr/share/ansible/plugins/connection
#lookup_plugins     = /usr/share/ansible/plugins/lookup
#inventory_plugins  = /usr/share/ansible/plugins/inventory
#vars_plugins       = /usr/share/ansible/plugins/vars
#filter_plugins     = /usr/share/ansible/plugins/filter
#test_plugins       = /usr/share/ansible/plugins/test
#strategy_plugins   = /usr/share/ansible/plugins/strategy

# by default callbacks are not loaded for /bin/ansible, enable this if you
# want, for example, a notification or logging callback to also apply to
# /bin/ansible runs
#bin_ansible_callbacks = False


# don't like cows?  that's unfortunate.
# set to 1 if you don't want cowsay support or export ANSIBLE_NOCOWS=1
#nocows = 1

# set which cowsay stencil you'd like to use by default. When set to 'random',
# a random stencil will be selected for each task. The selection will be filtered
# against the `cow_whitelist` option below.
#cow_selection = default
#cow_selection = random

# when using the 'random' option for cowsay, stencils will be restricted to this list.
# it should be formatted as a comma-separated list with no spaces between names.
# NOTE: line continuations here are for formatting purposes only, as the INI parser
#       in python does not support them.
#cow_whitelist=bud-frogs,bunny,cheese,daemon,default,dragon,elephant-in-snake,elephant,eyes,\
#              hellokitty,kitty,luke-koala,meow,milk,moofasa,moose,ren,sheep,small,stegosaurus,\
#              stimpy,supermilker,three-eyes,turkey,turtle,tux,udder,vader-koala,vader,www

# don't like colors either?
# set to 1 if you don't want colors, or export ANSIBLE_NOCOLOR=1
#nocolor = 1

# if set to a persistent type (not 'memory', for example 'redis') fact values
# from previous runs in Ansible will be stored.  This may be useful when
# wanting to use, for example, IP information from one group of servers
# without having to talk to them in the same playbook run to get their
# current IP information.
#fact_caching = memory


# retry files
# When a playbook fails by default a .retry file will be created in ~/
# You can disable this feature by setting retry_files_enabled to False
# and you can change the location of the files by setting retry_files_save_path

#retry_files_enabled = False
#retry_files_save_path = ~/.ansible-retry

# squash actions
# Ansible can optimise actions that call modules with list parameters
# when looping. Instead of calling the module once per with_ item, the
# module is called once with all items at once. Currently this only works
# under limited circumstances, and only with parameters named 'name'.
#squash_actions = apk,apt,dnf,homebrew,package,pacman,pkgng,yum,zypper

# prevents logging of task data, off by default
#no_log = False

# prevents logging of tasks, but only on the targets, data is still logged on the master/controller
#no_target_syslog = False

# controls whether Ansible will raise an error or warning if a task has no
# choice but to create world readable temporary files to execute a module on
# the remote machine.  This option is False by default for security.  Users may
# turn this on to have behaviour more like Ansible prior to 2.1.x.  See
# https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user
# for more secure ways to fix this than enabling this option.
#allow_world_readable_tmpfiles = False

allow_world_readable_tmpfiles = True

# controls the compression level of variables sent to
# worker processes. At the default of 0, no compression
# is used. This value must be an integer from 0 to 9.
#var_compression_level = 9

# controls what compression method is used for new-style ansible modules when
# they are sent to the remote system.  The compression types depend on having
# support compiled into both the controller's python and the client's python.
# The names should match with the python Zipfile compression types:
# * ZIP_STORED (no compression. available everywhere)
# * ZIP_DEFLATED (uses zlib, the default)
# These values may be set per host via the ansible_module_compression inventory
# variable
#module_compression = 'ZIP_DEFLATED'

# This controls the cutoff point (in bytes) on --diff for files
# set to 0 for unlimited (RAM may suffer!).
#max_diff_size = 1048576

[privilege_escalation]
#become=True
#become_method=sudo
#become_user=root
#become_ask_pass=False

[paramiko_connection]

# uncomment this line to cause the paramiko connection plugin to not record new host
# keys encountered.  Increases performance on new host additions.  Setting works independently of the
# host key checking setting above.
record_host_keys = False

# by default, Ansible requests a pseudo-terminal for commands executed under sudo. Uncomment this
# line to disable this behaviour.
#pty=False

[ssh_connection]

# ssh arguments to use
# Leaving off ControlPersist will result in poor performance, so use
# paramiko on older platforms rather than removing it, -C controls compression use
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null

# The path to use for the ControlPath sockets. This defaults to
# "%(directory)s/ansible-ssh-%%h-%%p-%%r", however on some systems with
# very long hostnames or very long path names (caused by long user names or
# deeply nested home directories) this can exceed the character limit on
# file socket names (108 characters for most platforms). In that case, you
# may wish to shorten the string below.
#
# Example:
# control_path = %(directory)s/%%h-%%r
#control_path = %(directory)s/ansible-ssh-%%h-%%p-%%r
#control_path = %(directory)s/ansible-ssh-%%C

# Enabling pipelining reduces the number of SSH operations required to
# execute a module on the remote server. This can result in a significant
# performance improvement when enabled, however when using "sudo:" you must
# first disable 'requiretty' in /etc/sudoers
#
# By default, this option is disabled to preserve compatibility with
# sudoers configurations that have requiretty (the default on many distros).
#
#pipelining = False

# Control the mechanism for transferring files
#   * smart = try sftp and then try scp [default]
#   * True = use scp only
#   * False = use sftp only
#scp_if_ssh = smart

# if False, sftp will not use batch mode to transfer files. This may make some
# types of file transfer failures impossible to catch, however, and should
# only be disabled if your sftp version has problems with batch mode
#sftp_batch_mode = False

[accelerate]
#accelerate_port = 5099
#accelerate_timeout = 30
#accelerate_connect_timeout = 5.0

# The daemon timeout is measured in minutes. This time is measured
# from the last activity to the accelerate daemon.
#accelerate_daemon_timeout = 30

# If set to yes, accelerate_multi_key will allow multiple
# private keys to be uploaded to it, though each user must
# have access to the system via SSH to add a new key. The default
# is "no".
#accelerate_multi_key = yes

[selinux]
# file systems that require special treatment when dealing with security context
# the default behaviour that copies the existing context or uses the user default
# needs to be changed to use the file system dependent context.
#special_context_filesystems=nfs,vboxsf,fuse,ramfs

# Set this to yes to allow libvirt_lxc connections to work without SELinux.
#libvirt_lxc_noseclabel = yes

[colors]
#highlight = white
#verbose = blue
#warn = bright purple
#error = red
#debug = dark gray
#deprecate = purple
#skip = cyan
#unreachable = red
#ok = green
#changed = yellow
#diff_add = green
#diff_remove = red
#diff_lines = cyan

Wrong ansible versions in the ubuntu- and alpine- based ansible-2.11 images

It looks like ansible 2.11 is not available in any of the Ubuntu- or Alpine-based images; Ansible 2.13.0 is included in these images instead.

For example:

docker run --rm willhallonline/ansible:2.11-alpine-3.14 ansible --version
ansible [core 2.13.0]

docker run --rm willhallonline/ansible:2.11-ubuntu-20.04 ansible --version
ansible [core 2.13.0]

docker run --rm willhallonline/ansible:2.11-alpine-3.15 ansible --version
ansible [core 2.13.0]
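Until the tags are rebuilt, one possible workaround is a derived image that pins ansible-core explicitly rather than trusting the tag. A minimal sketch (the 2.11.12 pin and the base tag are illustrative examples, not recommendations):

```dockerfile
# Hypothetical workaround image: re-pin ansible-core to a real 2.11.x release
# on top of the published tag, rather than trusting the tag's contents.
FROM willhallonline/ansible:2.11-alpine-3.15
RUN pip3 install --no-cache-dir "ansible-core==2.11.12" && \
    ansible --version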

New architecture for image with latest tag on Docker Hub

Thank you for providing a great image. We experienced some problems after today's update of the Docker image on Docker Hub: the processor architecture for the image with the latest tag changed from amd64 to arm64. Was this on purpose, or was it a mistake? Should I update my scripts to use some other tag from now on?

Best regards
// Mattias Johansson

building failed from alpine image

This is on the master branch, building docker-ansible/ansible211/alpine314. It seems to be caused by "cryptography"; the build fails with:

File "/tmp/pip-build-env-p3651d2o/overlay/lib/python3.9/site-packages/setuptools_rust/setuptools_ext.py", line 103, in run
      build_rust.run()
    File "/tmp/pip-build-env-p3651d2o/overlay/lib/python3.9/site-packages/setuptools_rust/command.py", line 52, in run
      self.run_for_extension(ext)
    File "/tmp/pip-build-env-p3651d2o/overlay/lib/python3.9/site-packages/setuptools_rust/build.py", line 92, in run_for_extension
      dylib_paths = self.build_extension(ext)
    File "/tmp/pip-build-env-p3651d2o/overlay/lib/python3.9/site-packages/setuptools_rust/build.py", line 131, in build_extension
      metadata = json.loads(check_output(metadata_command))
    File "/usr/lib/python3.9/subprocess.py", line 424, in check_output
      return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
    File "/usr/lib/python3.9/subprocess.py", line 528, in run
      raise CalledProcessError(retcode, process.args,
  subprocess.CalledProcessError: Command '['cargo', 'metadata', '--manifest-path', 'src/rust/Cargo.toml', '--format-version', '1']' returned non-zero exit status 101.
  ----------------------------------------
  ERROR: Failed building wheel for cryptography
Failed to build cryptography
ERROR: Could not build wheels for cryptography which use PEP 517 and cannot be installed directly
The command '/bin/sh -c apk --no-cache add         sudo         python3        py3-pip         openssl         ca-certificates         sshpass         openssh-client         rsync         git &&     apk --no-cache add --virtual build-dependencies         python3-dev         libffi-dev         musl-dev         gcc         cargo         openssl-dev         libressl-dev         build-base &&     pip3 install --upgrade pip wheel &&     pip3 install --upgrade cryptography cffi &&     pip3 install ansible-core==2.11.3 &&     pip3 install mitogen ansible-lint jmespath &&     pip3 install --upgrade pywinrm &&     apk del build-dependencies &&     rm -rf /var/cache/apk/* &&     rm -rf /root/.cache/pip &&     rm -rf /root/.cargo' returned a non-zero code: 1
command failed with status: 1

thanks.
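For context, this looks like the well-known cryptography >= 3.4 issue: those releases build a Rust extension, which frequently fails on Alpine/musl. A hedged sketch of the workarounds commonly used at the time (the version bounds are illustrative, not an endorsement of staying on old cryptography):

```dockerfile
# Option 1 (illustrative): pin cryptography below 3.4, which still ships a
# pure C build and needs no Rust toolchain at all.
RUN pip3 install --no-cache-dir "cryptography<3.4"

# Option 2 (3.4.x series only): keep the newer release but opt out of the
# Rust extension via the escape hatch that series honoured.
# ENV CRYPTOGRAPHY_DONT_BUILD_RUST=1
```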

Dockerfile for ansible 2.9 takes the latest mitogen version

pip3 install mitogen ansible-lint jmespath && \

from mitogen site:

v0.3.0 (unreleased)
This release separates itself from the v0.2.X releases. Ansible’s API changed too much to support backwards compatibility so from now on, v0.2.X releases will be for Ansible < 2.10 and v0.3.X will be for Ansible 2.10+. See here for details.

how can we use a specific version of mitogen with this ansible image?

    - name: ansible
      image: willhallonline/ansible:2.9-alpine-3.13
      command: [cat]
      tty: true
      imagePullPolicy: Always
      env:
        - name: ANSIBLE_FORKS
          value: "20"
        - name: ANSIBLE_STRATEGY
          value: mitogen_linear
        - name: ANSIBLE_PIPELINING
          value: "True"
        - name: ANSIBLE_STRATEGY_PLUGINS
          value: /usr/lib/python3.8/site-packages/ansible_mitogen/plugins/strategy
        - name: ANSIBLE_SSH_PIPELINING
          value: "True"
        - name: ANSIBLE_HOST_KEY_CHECKING
          value: "False"
        - name: ANSIBLE_GATHERING
          value: smart
        - name: ANSIBLE_CACHE_PLUGIN_CONNECTION
          value: /tmp/facts_cache
        - name: ANSIBLE_CACHE_PLUGIN
          value: jsonfile
        - name: ANSIBLE_CACHE_PLUGIN_TIMEOUT
          value: "7200"
        - name: ANSIBLE_SSH_RETRIES
          value: "3"
        - name: ANSIBLE_TIMEOUT
          value: "60"
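Per the mitogen changelog quoted above, the 0.2.x line is the one that supports Ansible < 2.10, so one option is a small derived image that pins it. A sketch, assuming the same base tag as the pod spec above:

```dockerfile
# Hypothetical derived image that forces the mitogen 0.2.x line, which is
# the one compatible with Ansible < 2.10 per the mitogen changelog.
FROM willhallonline/ansible:2.9-alpine-3.13
RUN pip3 install --no-cache-dir --force-reinstall "mitogen<0.3"
```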

python 2.7 reaches End of Life in 2 months

First, thank you for this image! Second, could you update it, or create a new tag, for a python3 variant? For example, when I build your Dockerfile here, I see this output:

DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support

I managed to get python 3 installed in Alpine 3.10 by following this Dockerfile, but adding your suggested mitogen, ansible-lint, and pywinrm packages:

    pip3 install --no-cache mitogen ansible-lint ; \
    pip3 install --no-cache --upgrade pywinrm ; \

Final working version:

FROM alpine:latest

# LABEL maintainer="Johannes Denninger"
# Details: https://github.com/joxz/alpine-ansible-py3/blob/master/Dockerfile
# This is mostly Johannes' work, but with Will Hall's 2 modifications and bash, git, jq and delete __pycache__, .pyc
# and no entrypoint.sh or su-exec. See https://github.com/cytopia/docker-ansible/blob/master/Dockerfile-tools

RUN set -euxo pipefail ;\
    sed -i 's/http\:\/\/dl-cdn.alpinelinux.org/https\:\/\/alpine.global.ssl.fastly.net/g' /etc/apk/repositories ;\
    apk add --no-cache --update python3 ca-certificates openssh-client sshpass dumb-init bash git jq ;\
    apk add --no-cache --update --virtual .build-deps python3-dev build-base libffi-dev openssl-dev ;\
    pip3 install --no-cache --upgrade pip ;\
    pip3 install --no-cache --upgrade setuptools ansible ;\
    pip3 install --no-cache mitogen ansible-lint ; \
    pip3 install --no-cache --upgrade pywinrm ; \
    apk del --no-cache --purge .build-deps ;\
    rm -rf /var/cache/apk/* ;\
    rm -rf /root/.cache ;\
    ln -s /usr/bin/python3 /usr/bin/python ;\
    mkdir -p /etc/ansible/ ;\
    /bin/echo -e "[local]\nlocalhost ansible_connection=local" > /etc/ansible/hosts ;\
    ssh-keygen -q -t ed25519 -N '' -f /root/.ssh/id_ed25519 ;\
    mkdir -p ~/.ssh && echo "Host *" > ~/.ssh/config && echo " StrictHostKeyChecking no" >> ~/.ssh/config ;\
    adduser -s /bin/ash -u 1000 -D -h /ansible ansible

COPY ./entrypoint.sh /usr/local/bin
RUN chmod +x /usr/local/bin/entrypoint.sh

# Details: https://stackoverflow.com/a/41386937/1231693
RUN find . -regex '^.*\(__pycache__\|\.py[co]\)$' -delete

ENTRYPOINT ["/usr/bin/dumb-init","--"]

USER ansible
WORKDIR /ansible

Results:

$ docker exec -ti app_ansible_1 python --version
Python 3.7.5


Some tags are missing

Hello. The docs state that tags 2.10-alpine, 2.10-buster and 2.10-stretch exist. But when I tried to fetch them using the command (applicable to every listed tag):

docker pull willhallonline/ansible:2.10-alpine

I got the error:

Error response from daemon: manifest for willhallonline/ansible:2.10-alpine not found: manifest unknown: manifest unknown

The "tags" page on the Docker Hub shows that those tags don't exist, too.

There's either a problem in the documentation or some misconfiguration.

Btw, thank you for your repository!

mitogen path in ubuntu 18.04

At least on Ubuntu 18.04, mitogen is installed at
/usr/local/lib/python2.7/dist-packages/ansible_mitogen/plugins/strategy

Instead of the documented

/usr/lib/python2.7/site-packages/ansible_mitogen/plugins/strategy

This is just a note.

If you plan to move everything to python 3 soon, I guess no need for action here.
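Rather than documenting a fixed path per distro, the plugin path can be derived at runtime. A hedged sketch (it assumes ansible_mitogen is importable by the interpreter you invoke):

```shell
# Print where pip actually installed the mitogen strategy plugins, instead of
# hard-coding site-packages vs dist-packages (the layout differs between
# Alpine, Debian/Ubuntu and Python versions).
python3 -c 'import ansible_mitogen.plugins.strategy as s, os; print(os.path.dirname(s.__file__))'
```

The printed directory is the value to use for ANSIBLE_STRATEGY_PLUGINS.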
