
ansible-django-stack's Introduction

ansible-django-stack

Build Status

An Ansible playbook designed for environments running a Django app. It can install and configure the applications commonly used in production Django deployments:

  • Nginx
  • Gunicorn
  • PostgreSQL
  • Supervisor
  • Virtualenv
  • Memcached
  • Celery
  • RabbitMQ

Default settings are stored in roles/role_name/defaults/main.yml. Environment-specific settings are in the group_vars directory.

A certbot role is also included for automatically generating and renewing trusted SSL certificates with Let's Encrypt.

Tested with OS: Ubuntu 22.04 LTS (64-bit), Ubuntu 20.04 LTS (64-bit).

Tested with Cloud Providers: Digital Ocean, AWS, Rackspace

Getting Started

A quick way to get started is with Vagrant.

Requirements

It's recommended to use the version of Ansible specified in requirements.txt, although any version newer than Ansible 2.7 should work with this repository. When choosing an Ansible version, consider:

  • Ansible only issues security fixes for the last three major releases.
  • The included version of Molecule has requirements on the Ansible version (currently, Molecule requires Ansible 2.5 or later, and the 2.23 release will require Ansible 2.7 or greater).

Ansible has been configured to use Python 3 inside the remote machine when provisioning it. In Ubuntu 16.04 LTS, compatible Ansible versions are not in the main package repositories, but can be installed from the Ansible PPA by running these commands:

sudo add-apt-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible

Configuring your application

The main settings to change are in the group_vars/[environment_name]/vars.yml file, where you can configure the location of your Git project, the project name, and the application name which will be used throughout the Ansible configuration.
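For illustration, a minimal vars.yml might look like the following. The variable names here (git_repo, project_name, application_name) reflect common settings in this playbook, but treat them as assumptions and check the files under group_vars for the authoritative names:

```yaml
# group_vars/[environment_name]/vars.yml -- illustrative values only
git_repo: git@github.com:example/myproject.git   # repository Ansible will check out
project_name: myproject                          # top-level directory containing manage.py
application_name: myapp                          # used in paths such as /webapps/{{ application_name }}
```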

Note that the default values in the playbooks assume that your project structure looks something like this:

myproject
├── manage.py
├── myapp
│   ├── apps
│   │   └── __init__.py
│   ├── __init__.py
│   ├── settings
│   │   ├── base.py
│   │   ├── __init__.py
│   │   ├── local.py
│   │   └── production.py
│   ├── templates
│   │   ├── 403.html
│   │   ├── 404.html
│   │   ├── 500.html
│   │   └── base.html
│   ├── urls.py
│   └── wsgi.py
├── README.md
└── requirements.txt

The main things to note are the locations of the manage.py and wsgi.py files. If your project's structure is a little different, you may need to change the values in these two files:

  • roles/web/tasks/setup_django_app.yml
  • roles/web/templates/gunicorn_start.j2

Also, if your app needs additional system packages installed, you can add them in roles/web/tasks/install_additional_packages.yml.

Creating the machine

Type this command from the project root directory:

vagrant up

(To use Docker instead of VirtualBox, add the flag --provider=docker to the command above. Note that extra configuration may be required first on your host for Docker to run systemd in a container.)

Wait a few minutes for the magic to happen. Access the app by going to this URL: https://my-cool-app.local

Yup, exactly, you just provisioned a completely new server and deployed an entire Django stack in 5 minutes with two words :).

Additional vagrant commands

SSH to the box

vagrant ssh

Re-provision the box to apply the changes you made to the Ansible configuration

vagrant provision

Reboot the box

vagrant reload

Shutdown the box

vagrant halt

Pulling from a private git repository using SSH agent forwarding

If your code is in a private repository, you must use an SSH connection with a key so Ansible can check out the code. HTTPS connections and SSH connections with a username and password do not work because Ansible cannot handle interactive logins.

Using SSH agent forwarding we can get the authentication request from the repository sent to the machine where the playbook is run. That keeps the private key to the repository as safe as possible. To set this up you need to:

  • Add a public key to the server hosting your repo.
  • Make sure ssh-agent is running on your local machine
  • Add the public key to ssh-agent

Connecting to GitHub with SSH has all the information on key generation, adding keys to the server, setting up ssh-agent and troubleshooting any problems.

Your server SSH configuration should work out-of-the-box. The "Server-Side Configuration Options" section in SSH Essentials: Working with SSH Servers, Clients, and Keys has good advice on locking down who can access the server over SSH connections.

Getting the playbook to use agent forwarding

The first thing you need to do is set the ssh_forward_agent flag in env_vars/base.yml to true:

ssh_forward_agent: true

This flag is used when configuring sudoers so that any user you become on the remote server will also use the same socket connection when requesting to unlock keys.

To enable SSH agent forwarding on the Vagrant box, change the following flag in the Vagrantfile and set it to true:

config.ssh.forward_agent = true

When running a playbook to provision a server, you enable SSH agent forwarding using the --ssh-extra-args option on the command line:

ansible-playbook --ssh-extra-args=-A -i production site.yml

This is a little clunky, but it does not prevent you from setting other SSH options if you need to.

Security

NOTE: Do not run the Security role without understanding what it does. Improper configuration could lock you out of your machine.

Security role tasks

The security role performs several basic server-hardening tasks, inspired by a blog post on the subject:

  • Updates apt
  • Performs aptitude safe-upgrade
  • Adds a user specified by the server_user variable, found in roles/base/defaults/main.yml
  • Adds authorized key for the new user
  • Installs sudo and adds the new user to sudoers with the password specified by the server_user_password variable found in roles/security/defaults/main.yml
  • Installs and configures various security packages:
      • Restricts connections to the server to the SSH and HTTP(S) ports
      • Limits su access to the sudo group
      • Disallows password authentication (be careful!)
      • Disallows root SSH access (you will only SSH to your machine as your new user and use a password for sudo access)
      • Restricts SSH access to the new user specified by the server_user variable
      • Deletes the root password

Security role configuration

  • Change the server_user from root to something else in roles/base/defaults/main.yml
  • Change the sudo password in group_vars/[environment_name]/vars.yml
  • Change variables in ./roles/security/vars/ per your desired configuration by overriding them in group_vars/[environment_name]/vars.yml

Running the Security role

The security role can be run via security.yml:

ansible-playbook -i development security.yml

Running the Ansible Playbook to provision servers

NOTE: to enable the Security role, complete the steps above before following the steps below.

Create an inventory file for the environment, for example:

# development

[webservers]
webserver1.example.com
webserver2.example.com

[dbservers]
dbserver1.example.com

Next, create a playbook for the server type. See webservers.yml for an example.

Run the playbook:

ansible-playbook -i development webservers.yml [-K]

You can also provision an entire site by combining multiple playbooks. For example, I created a playbook called site.yml that includes both the webservers.yml and dbservers.yml playbooks.

A few notes here:

  • The dbservers.yml playbook will only provision servers in the [dbservers] section of the inventory file.
  • The webservers.yml playbook will only provision servers in the [webservers] section of the inventory file.
  • The -K flag prompts for the sudo password you created for a new sudoer in the Security role (if applicable)

You can then provision the entire site with this command:

ansible-playbook -i development site.yml [-K]

If you're testing with vagrant, you can use this command:

ansible-playbook -i vagrant_ansible_inventory_default --private-key=~/.vagrant.d/insecure_private_key vagrant.yml [-K]

Using Ansible for Django Deployments

When doing deployments, you can use the --tags option to run only the tasks carrying a given tag.

For example, you can add the tag deploy to certain tasks that you want to execute as part of your deployment process and then run this command:

ansible-playbook -i stage webservers.yml --tags="deploy"

This repo already has deploy tags specified for tasks that are likely needed to run during deployment in most Django environments.
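As a sketch of how this works (the variable names git_repo and project_path are assumptions, not necessarily the ones used in this repo), a task tagged for deployment might look like:

```yaml
# Runs during a full provision and also when invoked with --tags="deploy"
- name: Pull the latest application code
  git:
    repo: "{{ git_repo }}"
    dest: "{{ project_path }}"
    version: master
  tags: deploy
```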

Advanced Options

Changing the Ubuntu release

The Vagrantfile uses the Ubuntu 22.04 LTS Vagrant box for a 64-bit PC that is published by Canonical in HashiCorp Atlas. To use Ubuntu 20.04 LTS instead, change the config.vm.box setting to ubuntu/focal64.

Changing the Python version used by your application

Python 3 is used by default in the virtualenv. To use Python 2 instead, just override the virtualenv_python_version variable and set it to python.

It is possible to install other versions of Python from an unofficial PPA by Felix Krull (see disclaimer). To use this PPA, override the enable_deadsnakes_ppa variable and set it to yes. Then the virtualenv_python_version variable can be set to the name of a Python package from this PPA, such as python3.6.

Changing the Python version used by Ansible

To use Python 2 as the interpreter for Ansible, override the ansible_python_interpreter variable and set it to /usr/bin/python. This allows a machine without Python 3 to be provisioned.

Creating a swap file

By default, the playbook won't create a swap file. To create/enable swap, simply change the values in roles/base/defaults/main.yml.

You can also override these values in the main playbook, for example:

---

...

  roles:
    - { role: base, create_swap_file: true, swap_file_size_kb: 1024 }
    - db
    - rabbitmq
    - web
    - celery

This will create and mount a 1GB swap. Note that block size is 1024, so the size of the swap file will be 1024 x swap_file_size_kb.

Automatically generating and renewing Let's Encrypt SSL certificates with the certbot client

A certbot role has been added to automatically install the certbot client and generate a Let's Encrypt SSL certificate.

Requirements:

  • A DNS "A" or "CNAME" record must exist for the host the certificate will be issued to.
  • The --standalone option is being used, so port 80 or 443 must not be in use (the playbook will automatically check if Nginx is installed and will stop and start the service automatically).

In roles/nginx/defaults/main.yml, override the nginx_use_letsencrypt variable and set it to yes/true so the Nginx template references the Let's Encrypt certificate and key.

In roles/certbot/defaults/main.yml, you may want to override the certbot_admin_email variable.

A cron job to automatically renew the certificate will run daily. Note that if a certificate is due for renewal (expiring in less than 30 days), Nginx will be stopped before the certificate can be renewed and then started again once renewal is finished. Otherwise, nothing will happen so it's safe to leave it running daily.
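A renewal cron entry could be defined with Ansible's cron module along these lines; the hook commands and job name below are assumptions, so consult roles/certbot for the actual implementation:

```yaml
# Daily renewal; certbot only renews certificates expiring within 30 days,
# and the hooks stop/start Nginx only around an actual renewal.
- name: Schedule daily Let's Encrypt certificate renewal
  cron:
    name: certbot-renew
    special_time: daily
    job: "certbot renew --pre-hook 'service nginx stop' --post-hook 'service nginx start'"
```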

Maintenance mode

The playbook includes a maintenance page option. roles/web/templates/maintenance_off.html is the provided maintenance template. To activate maintenance mode, rename the template to maintenance_on.html so that nginx serves it. This can be done manually, or you can add a step to the playbook that renames the file while the site requires downtime, and renames it back once the operations requiring downtime are complete.
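A playbook step for toggling the page could be sketched like this (the template's deployed location is an assumption; adjust chdir to wherever the playbook renders the file):

```yaml
# Hypothetical task: enable maintenance mode by renaming the rendered template
- name: Enable the maintenance page
  command: mv maintenance_off.html maintenance_on.html
  args:
    chdir: "{{ nginx_static_dir }}"   # assumed location; verify in the web role
```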


Contributing

Contributions are welcome! Please make sure any PR passes the test suite.

Running the test suite locally:

The test suite uses a Docker container - make sure Docker is installed and configured before running the following commands:

pip install -r requirements-dev.txt
molecule test

ansible-django-stack's People

Contributors

adrianmoisey, autn, bhardin, conrado, darylyu, davidcain, dependabot[bot], dpward, iceraj, isedwards, jbants, jcalazan, jpmjpmjpm, mprat, pedro-nonfree, pedrospdc, stuartmackay, ypcrumble



ansible-django-stack's Issues

Fatal: {'msg': "One or more undefined variables: 'omit' is undefined", 'failed': True}

Any ideas what might be causing it?

==> default: TASK: [web | Create the virtualenv postactivate script to set environment variables] ***
==> default: fatal: [localhost] => {'msg': "One or more undefined variables: 'omit' is undefined", 'failed': True}
==> default: fatal: [localhost] => {'msg': "One or more undefined variables: 'omit' is undefined", 'failed': True}
==> default:
==> default: FATAL: all hosts have already failed -- aborting
==> default:

Thanks

Add task/option to auto-generate cert with Let's Encrypt

Also add an option to include a cron job to auto-renew the cert.

Currently thinking of doing something like:

(before Nginx installation)

  1. Install certbot
  2. Run standalone command, pull fqdn from an env var.
  3. In the Nginx template, if option nginx_use_lets_encrypt is true, simply reference the cert and key to the default location created by certbot.
  4. Cron job to auto-renew monthly (cert is good for 3 months).

If someone has done this before, would love to hear your solution (or open a pull request :D)!
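The steps above could be roughly sketched as follows; the variable names (certbot_admin_email, nginx_server_name) are assumptions for illustration:

```yaml
# Rough sketch of the proposed flow
- name: Install certbot
  apt:
    name: certbot
    state: present

- name: Generate the certificate in standalone mode
  command: >
    certbot certonly --standalone --non-interactive --agree-tos
    -m {{ certbot_admin_email }} -d {{ nginx_server_name }}
  args:
    creates: /etc/letsencrypt/live/{{ nginx_server_name }}   # skip if already issued
```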

Provide guidance on `settings.py` values

After resolving #40, I was able to load the page. However, I noticed that no CSS files had been loaded. This was because I had defined STATIC_ROOT in my settings.py file and the value differed from the location used in configuring nginx. It would be good to provide a little guidance about which variables should/shouldn't be declared in settings.

An alternative would be to parse the value in the provided settings file, but this would obviously involve a lot more work.

Add Celery role

Add a role to run Celery workers and the Celery camera in the background, managed by Supervisor.

ERROR! environment must be a dictionary, received django_environment

I am using ansible 2.0.0.2, and I am receiving this error during provisioning:

TASK [web : Run Django database migrations] ************************************
[DEPRECATION WARNING]: Using bare variables for environment is deprecated.
Update your playbooks so that the environment value uses the full variable
syntax ('{{foo}}'). This feature will be removed in a future release.
Deprecation warnings can be disabled by setting deprecation_warnings=False in
ansible.cfg.
fatal: [default]: FAILED! => {"failed": true, "msg": "ERROR! environment must be a dictionary, received django_environment (<class 'ansible.parsing.yaml.objects.AnsibleUnicode'>)"}

How do you redirect www traffic to non-www?

Usually I use a slightly modified nginx.conf like the last example in this stackoverflow answer. This way nginx redirects to https and the non-www version of my url.

Do you do something different, i.e., maybe using a CNAME DNS record? I'm not sure what the best approach is - would love to hear your opinion. I'd also be happy to submit a PR for a modified nginx.conf that includes the redirect to non-www (or perhaps either way with a flag) if that's helpful.

ffmpeg repository error on provisioning

I ran into the following when I attempted to use this playbook. The repository pointed to doesn't exist and I am not able to find a suitable replacement.

TASK: [web | Add a custom repository for ffmpeg] ****************************** 
failed: [default] => {"failed": true, "parsed": false}
BECOME-SUCCESS-kipofcbjoqsbuaicrsiwoqujiigbueas
Traceback (most recent call last):
  File "/home/vagrant/.ansible/tmp/ansible-tmp-1434537365.12-129649650050245/apt_repository", line 2524, in <module>
    main()
  File "/home/vagrant/.ansible/tmp/ansible-tmp-1434537365.12-129649650050245/apt_repository", line 436, in main
    cache.update()
  File "/usr/lib/python2.7/dist-packages/apt/deprecation.py", line 98, in deprecated_function
    return func(*args, **kwds)
  File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 418, in update
    raise FetchFailedException(e)
apt.cache.FetchFailedException: W:Failed to fetch http://ppa.launchpad.net/jon-severinsson/ffmpeg/ubuntu/dists/precise/main/binary-amd64/Packages  404  Not Found
, W:Failed to fetch http://ppa.launchpad.net/jon-severinsson/ffmpeg/ubuntu/dists/precise/main/binary-i386/Packages  404  Not Found
, E:Some index files failed to download. They have been ignored, or old ones used instead.

Repository does not exist when provisioning a second time

Hi,

When provisioning a second time, the task 'Setup the git repo' fails with the following error:

failed: [default] => {"cmd": "/usr/bin/git ls-remote '' -h refs/heads/HEAD", "failed": true, "rc": 128}
stderr: Warning: Permanently added the RSA host key for IP address '131.103.20.168' to the list of known hosts.
conq: repository does not exist.
fatal: The remote end hung up unexpectedly

If I go on the VM to /webapps/ and rm -rf myapp/ then it works.

Any idea why this might be?

Regards

Clean .pyc files after pulling latest code

This is useful during those times when you deleted a package and the remaining .pyc files could break the app.

A good spot to do this is right after pulling the latest code from the git repo.

find . -name "*.pyc" -delete
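As an Ansible task, this could be sketched as follows (project_path is an assumed variable name for the checkout directory):

```yaml
# Remove stale bytecode right after the git checkout
- name: Clean stale .pyc files
  command: find . -name "*.pyc" -delete
  args:
    chdir: "{{ project_path }}"
  tags: deploy
```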

Supporting Ubuntu 16.04

I tried hard to make the playbook work with Ubuntu 16.04, but Gunicorn wasn't cooperating and I couldn't figure out the problem from the logs. It would be nice if someone could give it a shot. One small advantage is that 16.04 is LTS and comes with Python 3.5 by default.

User of nginx?

Currently nginx runs as the default user:

e.g. the Debian nginx.conf:
user = www-data

So when nginx tries to access 500.html:

2016/01/18 13:02:09 [error] 10841#0: *12 open() "/srv/myapp/app-project/myapp/templates/500.html" failed (13: Permission denied)

Maybe we should change the nginx user to
user = {{gunicorn_user}}

?

Install Supervisor

I think the "Install Supervisor" task should not be in the celery role.
I don't use Celery, and I got an error when configuring Gunicorn with Supervisor because Supervisor was not installed.

Maybe it should be placed in the base role or a separate role?

wrong wsgi path

In ansible-django-stack/roles/web/templates/gunicorn_start.j2, according to the README directory structure, shouldn't the last line, {{ application_name }}.wsgi, be {{ project_name }}.wsgi? (since wsgi.py is in the project path, not the apps dir)

Add redis role

This seems to be a pretty popular alternative to RabbitMQ as a message broker for Celery as it's simpler and can be used for other things.

Trouble with Django collectstatic

Hi there,

Thank you for making this thorough playbook available. It's been an incredible asset as I've figured out how to deploy a Django app.

I am, however, having an issue: the "Run Django collectstatic" task in the playbook continues to fail with the error:

You're using the staticfiles app without having set the STATIC_ROOT setting to a filesystem path.

It seems like an intuitive enough error message, but I can't seem to diagnose the problem:

  • My STATIC_ROOT variable is defined exactly the way it is in your repo:

    STATIC_ROOT: "{{ nginx_static_dir }}"

  • I added a task as follows to sanity check that the env variable is set:

    - name: Sanity check env vars
      shell: echo $STATIC_ROOT
      environment: django_environment

That task correctly outputs the path "/webapps/appname/static/"

  • I even modified manage.py to output the env variable so it definitely knows that it's set and what it's set to. This (again) returns the correct path as expected.
  • Finally, I tried setting the variable in my Django settings file instead. If I do that, then the collectstatic command works correctly and my static resources are loaded correctly.

At this point, I'm totally stumped on why I get the error message when I use the playbook configuration as opposed to setting this env variable in the Django settings directly; I can only assume the error message Django is returning is incorrect, but I don't know why.

Any ideas?

The SECRET_KEY setting must not be empty

Hi

I've added the SECRET_KEY variable in the web/template/virtualenv_postactivate.j2 file, but I am still getting an error.

virtualenv_postactivate.j2 file

#!/bin/sh
export SECRET_KEY='xxxxxxxx'

{% for variable_name, value in django_environment.iteritems() %}
export {{ variable_name }}="{{ value }}"
{% endfor %}

Error

    raise ImproperlyConfigured("The SECRET_KEY setting must not be empty.")
django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must not be empty.

Can you please help?
Thanks

Proposal to use different tag scheme

Using this current scheme:

- name: Run the Django syncdb command
  django_manage:
    command: syncdb
    app_path: '{{ application_path }}'
    virtualenv: '{{ virtualenv_path }}'
    settings: '{{ django_settings_file }}'
  environment: django_environment
  when: run_django_syncdb
  tags: django

- name: Run Django South migrations
  django_manage:
    command: migrate
    app_path: '{{ application_path }}'
    virtualenv: '{{ virtualenv_path }}'
    settings: '{{ django_settings_file }}'
  environment: django_environment
  when: run_django_south_migration
  tags: django

- name: Run Django collectstatic
  django_manage:
    command: collectstatic
    app_path: '{{ application_path }}'
    virtualenv: '{{ virtualenv_path }}'
    settings: '{{ django_settings_file }}'
  environment: django_environment
  when: run_django_collectstatic
  tags: django

Is there any way one could run just one of these tasks from the command line? The following command runs all tasks with true values for the when clause as determined by the environment:

$ ansible-playbook development.yml --tags "django"

For example, it would be nice to migrate the app without running django's syncdb:

$ ansible-playbook development.yml --tags "django_migrate"

If I'm missing a way to do this without modifying the tags, please let me know. I can see the logic of being able to group tasks; I just want to run one conditionally from the command line, and was hoping there was a way around verbosely supplying lots of variable overrides, as migrations are much more common than syncdb, etc.
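One untested option is to give each task both the coarse django tag and a finer per-command tag, since Ansible tasks accept a list of tags:

```yaml
# Sketch: the migrate task from above with an extra fine-grained tag
- name: Run Django South migrations
  django_manage:
    command: migrate
    app_path: '{{ application_path }}'
    virtualenv: '{{ virtualenv_path }}'
    settings: '{{ django_settings_file }}'
  environment: django_environment
  when: run_django_south_migration
  tags: [django, django.migrate]
```

With that in place, ansible-playbook development.yml --tags "django.migrate" would run only the migration task, while --tags "django" still runs the whole group.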

Include periodic PostgreSQL backup and restore scripts

@jcalazan have you considered adding backup and restore scripts? In particular, one that also cleans the backup directory so that it doesn't grow too large would be ideal. Let me know if it would be helpful to add a PR. I'm not sure what your perspective is on having too many roles in this Ansible playbook. I'm interested in adding various tasks that I use on my personal projects to get community code review and make sure I'm not doing anything silly :).
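A minimal sketch of such a backup job using cron entries (all job names and paths below are assumptions for illustration; note that % must be escaped as \% in crontab lines):

```yaml
# Nightly dump plus simple rotation of dumps older than 30 days
- name: Schedule a nightly PostgreSQL dump
  cron:
    name: pg-backup
    special_time: daily
    user: postgres
    job: "pg_dump {{ db_name }} | gzip > /var/backups/{{ db_name }}-$(date +\\%F).sql.gz"

- name: Prune old database backups
  cron:
    name: pg-backup-prune
    special_time: daily
    job: "find /var/backups -name '{{ db_name }}-*.sql.gz' -mtime +30 -delete"
```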

ssl_key_password isn't used anywhere

In the env_vars files there is a variable called ssl_key_password but I can't figure out what it does.
I've been looking through the rest of the files (grepping for ssl_key_password) and I can't find it in use anywhere.
Is it actually required?

requirements.txt

Hi,

I got an issue on:

TASK [web : Install packages required by the Django app inside virtualenv] ***** task path: /Users/username/vagrant/ansible-django-stack/roles/web/tasks/setup_django_app.yml:3 fatal: [default]: FAILED! => {"changed": false, "cmd": "/webapps/testdjango/bin/pip install -r /webapps/testdjango/test-django/requirements.txt", "failed": true, "msg": "\n:stderr: Could not open requirements file: [Errno 2] No such file or directory: '/webapps/testdjango/test-django/requirements.txt'\n"}

by the way thank you for providing the set up.

Add option to create swap file

Ran into an issue recently where pip-installing a package used up all 512MB of RAM on my VPS (lxml module).

  • in base/roles
  • look for var create_swap_file is true before running the task
  • other vars
    • swap_file_path (default /swapfile)
    • swap_file_size_kb (default 1024, note that block size is 1024, so this would create a 1GB swap)

Provide option to install Python 3 in virtualenv

Add a var with a default value of python2.7 that can be replaced with python3.

# roles/web/defaults.yml
virtualenv_python_version: python2.7

In the task that creates the virtualenv:

virtualenv -p {{ virtualenv_python_version }} {{ virtualenv_path }} --no-site-packages

Environment variables are not set after creating postactivate file

Hello

The playbook runs without errors and creates the postactivate file, but it doesn't actually export the environment variables. I noticed this when I tried to create a superuser manually and always got an error that it couldn't connect to the database because no credentials were supplied.

Task for postactivate

- name: Create the virtualenv postactivate script to set environment variables
  template: src=virtualenv_postactivate.j2
            dest={{ virtualenv_path }}/bin/postactivate
            owner={{ gunicorn_user }}
            group={{ gunicorn_group }}
            mode=0640
            backup=yes
  tags: deploy

Environment variables

django_environment:
  DJANGO_SETTINGS_MODULE: "{{ django_settings_file }}"
  DJANGO_SECRET_KEY: "{{ django_secret_key }}"
  MEDIA_ROOT: "{{ nginx_media_dir }}"
  STATIC_ROOT: "{{ nginx_static_dir }}"
  DATABASE_NAME: "{{ db_name }}"
  DATABASE_USER: "{{ db_user }}"
  DATABASE_PASSWORD: "{{ db_pass }}"

Database settings

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': os.getenv('DATABASE_NAME'),
        'USER': os.getenv('DATABASE_USER'),
        'PASSWORD': os.getenv('DATABASE_PASSWORD'),
        'HOST': os.getenv('DATABASE_HOST'),
        # 'PORT': os.getenv('DATABASE_PORT', '5432'),
    }
}

How can I ensure that the Django variables are exported?

Refactor project structure to follow best practices

Mainly the Playbooks:

  • rename production.yml to site.yml
  • always use inventory files
  • to run the playbook against production servers for example, we can do ansible-playbook -i production site.yml

Where site.yml has includes to group-specific playbooks (such as webservers.yml and dbservers.yml with the host: option in these playbooks set to the proper group).

Kill Apache httpd if already running?

I "stacked" both the db and web roles on an Ubuntu 14.04 installation. On browsing to the IP address of the box, I noticed that Apache served the page instead of nginx. I killed the Apache service and restarted the nginx server, and all was OK (well, mostly - more on that in another issue).

I don't know if a running Apache instance is the default on all Ubuntu installations or if it was just my particular instance, but checking for any running services that may conflict with our installation (i.e. ports 80, 443) sounds like a good thing.
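A guarded check could be sketched as a task like this (whether the playbook should do this by default is a separate question):

```yaml
# Stop and disable Apache if it happens to be present; skip hosts without it
- name: Stop and disable Apache
  service:
    name: apache2
    state: stopped
    enabled: no
  ignore_errors: yes   # the service may not exist on every host
```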

Add python3 support

I think we need python3 support.

Maybe something like this (not tested):

vars:

#python3
python_version="3"
#python2
python_version="" 

task (install):

- python{{python_version}}
- python{{python_version}}-pip
- python{{python_version}}-pycurl
- python{{python_version}}-psycopg2
etc

and create virtualenv with

virtualenv -p python{{python_version}}

Task order in the web role may break the build when adding new requirements

Hi, I've used your playbook (awesome, btw) to deploy an app successfully, but then I added a new dependency to requirements.txt and imported the new Python module in my settings. It seems that if you run the setup_supervisor task before the setup_django_app task, the supervisor restart breaks and the following message ends up in the logs:

Traceback (most recent call last):
File "/webapps/dispatcher/bin/gunicorn", line 11, in
sys.exit(run())
File "/webapps/dispatcher/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run
WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
File "/webapps/dispatcher/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 185, in run
super(Application, self).run()
File "/webapps/dispatcher/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 71, in run
Arbiter(self).run()
File "/webapps/dispatcher/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 196, in run
self.halt(reason=inst.reason, exit_status=inst.exit_status)
File "/webapps/dispatcher/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 292, in halt
self.stop()
File "/webapps/dispatcher/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 343, in stop
time.sleep(0.1)
File "/webapps/dispatcher/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 209, in handle_chld
self.reap_workers()
File "/webapps/dispatcher/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 459, in reap_workers
raise HaltServer(reason, self.WORKER_BOOT_ERROR)
gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3>

I've changed the order of the setup_django_app and setup_supervisor (so that new requirements are installed first) and it worked.

django makemigrations permissions denied

Hello,

Impossible to deploy my models with migrate command.

application_name
|- manage.py
|- project
|--- settings.py
|--- models
|----- init.py
|----- object1.py
|----- object2.py

- name: Run Django database migrations
  django_manage:
    command: migrate
    app_path: "{{ project_path }}"
    virtualenv: "{{ virtualenv_path }}"
  environment:
    PATH: "{{ virtualenv_path }}/bin:{{ lookup('env', 'PATH') }}"
  when: run_django_db_migrations is defined and run_django_db_migrations
  tags: django.migrate

The same problem occurs when I try to migrate directly on the server.
After activating the virtualenv, I try ./manage.py makemigrations project and get a Permission denied error when it tries to create the migrations directory.

I have tried changing permissions on all the files, but nothing changed.
Finally, I managed to run the makemigrations command like this:
sudo /virtualenv_path/bin/python ./manage.py makemigrations project
But that is not how it should be done.

Any help will be appreciated.
Regards

pip freeze inside the virtualenv:
amqp==1.4.9
anyjson==0.3.3
billiard==3.3.0.23
celery==3.1.23
Django==1.9.8
django-push-notifications==1.4.1
djangorestframework==3.4.0
djoser==0.5.0
gunicorn==19.6.0
kombu==3.0.35
psycopg2==2.6.2
pytz==2016.6.1
Unidecode==0.4.19

Add Jenkins role

Really nice to have when you have multiple projects to manage.

For digital nomads, it's also a good idea to run your deployment scripts from a reliable VPS instead of running it from your laptop when you're working somewhere with unreliable connection.

Ansible Tower is too expensive :(

License

Could you add a license for this project, please?

I would be in favor of MIT, but it's your choice.

Postgres error when provisioning

Hi there,

Has anyone encountered this error while provisioning?

On the task:

- name: Ensure database is created
  sudo_user: postgres
  postgresql_db: name={{ db_name }}
                 encoding='UTF-8'
                 lc_collate='en_US.UTF-8'
                 lc_ctype='en_US.UTF-8'
                 template='template0'
                 state=present

Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

While doing some tests on the local machine, I was able to work around this temporarily by logging in, running apt-get remove --purge postgres, and then installing postgres-9.1 in the Install PostgreSQL task (note the explicit version).

However, this is not reliable. Have you encountered this error?

Cheers!
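That "Is the server running locally…" message usually means PostgreSQL is not yet running (or is listening on a different cluster's socket) when the playbook reaches the database task. A hedged sketch of a guard task to run beforehand, assuming the standard Ubuntu service name:

```yaml
# Make sure the PostgreSQL service is up before any postgresql_db /
# postgresql_user tasks run (service name assumed to be "postgresql")
- name: Ensure PostgreSQL is running before creating databases
  service:
    name: postgresql
    state: started
    enabled: yes
  become: yes
```

If the service is up and the error persists, check that only one PostgreSQL version is installed and that its cluster owns /var/run/postgresql.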

Use handlers for the nginx role

Just like the memcached role uses a handler to restart the service in case the config file changes, it would be good practice for the nginx role to do the same. Also, since both packages support it, they should just use reload instead of restart.
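A minimal sketch of that pattern, mirroring the memcached role (task and handler names here are illustrative, not the repository's exact ones):

```yaml
# roles/nginx/tasks/main.yml (fragment)
- name: Create the Nginx site configuration file
  template:
    src: site_config.j2
    dest: "/etc/nginx/sites-available/{{ application_name }}"
  notify: reload nginx

# roles/nginx/handlers/main.yml
- name: reload nginx
  service:
    name: nginx
    state: reloaded
```

Using state: reloaded instead of restarted means config changes are picked up without dropping in-flight connections.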

Best practices for pulling from private git repo?

Hi, just wondering if you have thoughts on the best way to get this working when pulling from a private repo. My initial thought is:

  1. create a passphrase-less SSH key pair just for GitHub
  2. add the public key in the GitHub admin UI
  3. within roles/web/tasks/setup_git_repo.yml, have Ansible add the private key to the user account on the target machine.

Does that sound like an ok plan?
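That plan matches common practice (a per-repo deploy key). A hedged sketch of step 3, assuming the key file ships alongside the playbook and the clone uses Ansible's git module; the paths and variable names are illustrative:

```yaml
- name: Install the GitHub deploy key for the app user
  copy:
    src: files/github_deploy_key          # hypothetical location in the playbook
    dest: "{{ virtualenv_path }}/.ssh/id_rsa"
    owner: "{{ gunicorn_user }}"
    mode: "0600"
  become: yes

- name: Check out the application from the private repository
  git:
    repo: "{{ project_repo }}"
    dest: "{{ project_path }}"
    key_file: "{{ virtualenv_path }}/.ssh/id_rsa"
    accept_hostkey: yes
```

Keeping the private key out of version control (e.g. via ansible-vault) is advisable.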

Help with usage on multiple servers

This is more of a question on how to use this project rather than a specific issue.

This playbook works great on a local Vagrant VM but I'm not sure how to use it on a cloud provider[1]. Here is what I've tried:

  • I set up four servers: webserver-1, webserver-2, dbserver, and memcachedserver (mirroring the server inventory in local)
  • I created an inventory file called development which pointed to these 4 servers under the appropriate sections
  • I ran: ansible-playbook -i development -v development.yml

As it runs, I notice all roles being applied to all servers (which seems to be reflected in deployment.yml, which lists hosts as all and mentions all roles). How do I ensure that only the base role plus the appropriate role is applied to each server? Or have I misunderstood this entirely?

And thank you for this excellent project, I've learnt quite a bit from it as I've been learning Ansible and using it for my own Django apps.

[1] I'm attempting this with Google Cloud, but that's not relevant to the question.
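The usual way to get per-server roles is to split the playbook into one play per inventory group instead of a single hosts: all play. A hedged sketch, assuming group names that match the inventory described above (adapt the group and role names to yours):

```yaml
# development.yml — one play per server group (sketch)
- hosts: webservers
  roles:
    - base
    - web

- hosts: dbservers
  roles:
    - base
    - db

- hosts: memcachedservers
  roles:
    - base
    - memcached
```

With the inventory's [webservers], [dbservers], and [memcachedservers] sections listing the matching hosts, the same ansible-playbook -i development development.yml invocation then applies only the relevant roles to each machine.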
