
Ansible Role: Nexus 3 OSS

This role installs and configures Nexus Repository Manager OSS version 3.x.

All configuration can be updated by re-running the role, except for the blobstores related settings, which are immutable in nexus.

This role's CI proudly uses OSS credits allocated by https://travis.com

Table of Contents

Note: TOC links do not work when viewed on the Ansible Galaxy site. View this document on GitHub instead.

(Created with gh-md-toc)

History / Credits

This role is a fork of ansible-nexus3-oss by @savoirfairelinux, created after they announced the end of maintenance. See the relevant tickets in the original repository for details.

We would like to thank the original authors for the work done.

In Memoriam Lionel Lecha (note from main author):

This work would never have reached the community as an open-source project without the unconditional trust of Lionel Lecha, director of SMAP APPUI @La Poste, when I started to automate the deployment of nexus for his unit in 2018 as an external contractor. Lionel died too early, on the 17th of February 2023, at the age of 60. Thanks for your unfailing good mood and your confidence.

Requirements

  • A fairly up-to-date version of Ansible. We follow Ansible versions during maintenance/development and will take advantage of new features if needed (and update meta/main.yml for the minimum version)
  • A compatible OS. This role is tested through molecule on Travis CI for CentOS 8, Ubuntu Bionic (18.04), and Debian buster. Other molecule scenarios can be run locally for CentOS 7, Ubuntu Xenial (16.04), and Debian stretch
  • rsync has to be installed on the target machine (it is not needed on the host running ansible, if different)
  • The jmespath library needs to be installed on the host running the playbook (needed for the json_query filter). See requirements.txt and the quick install example after this list
  • Java 8 (mandatory)
  • Apache HTTPD (optional)
    • Used to setup a SSL reverse-proxy
    • The following modules must be enabled in your configuration: mod_ssl, mod_rewrite, mod_proxy, mod_proxy_http, mod_headers.

(see Dependencies section below for matching roles on galaxy)
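To quickly satisfy the controller-side python requirements mentioned above (a sketch assuming a pip-based environment):

pip install -r requirements.txt   # or just the json_query dependency: pip install jmespath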

Role Variables

Ansible variables, along with their default values (see defaults/main.yml):

General variables

    nexus_version: ''
    nexus_timezone: 'UTC'
    nexus_download_url: "http://download.sonatype.com/nexus/3"
    # nexus_download_ssl_verify: <unset>
    # nexus_version_running: <unset>

The role installs the latest available nexus version by default. You may fix the version by setting the nexus_version variable. See available versions at https://www.sonatype.com/download-oss-sonatype. When downloading through a slow pull-through proxy, retries can be useful to prevent timeouts. You can add retries to the download by setting these variables:

    nexus_download_retries: 3 # 0 by default
    nexus_download_delay: 15

If you fix the version and later change it to a different one, the role will try to upgrade your installation. Make sure to change to a later version in the release history. Downgrading will fail (unless you re-install from scratch using the nexus_purge special var).

If you don't fix the version and play the role on an existing installation, the currently installed version will be used (detected from the target of {{ nexus_installation_dir }}/nexus-latest). If you want to upgrade nexus, you will have to pass the special var nexus_upgrade=true on the ansible-playbook command line. See Upgrade nexus to latest version.

If you use an older version of nexus than the latest, you should make sure you do not use features which are not available in the installed release (e.g. yum hosted repositories for nexus < 3.8.0, git lfs repos for nexus < 3.3.0, etc.)

nexus_timezone is a Java timezone name and can be useful in combination with the nexus_scheduled_tasks cron expressions below.

You may change the download site for packages by tuning nexus_download_url (e.g. closed environment, proxy/cache on your network...). In this case, the automatic detection of the latest version will most likely fail and you will have to fix the version to download. If you still want to take advantage of automatic latest version detection, a call to <your_custom_location>/latest-unix.tar.gz must return an HTTP 302 redirect to the latest available version in your cache/proxy. If your download location uses https with a self-signed certificate (or one from a private PKI), you are having trouble getting it validated (i.e. download errors in the role), and you fully trust the target, you can set nexus_download_ssl_verify: false.

nexus_version_running is a variable used internally. As such, it should never be set directly. It will exist only if nexus is currently installed on the host and will register the current version prior to running the role. It can be used later in your playbook if needed (e.g. for an upgrade notification email).
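For example, you could use it in a hypothetical post-role task to report an upgrade (a sketch; the task name and message are illustrative):

    - name: Report nexus upgrade
      debug:
        msg: "Nexus was upgraded from {{ nexus_version_running }} to {{ nexus_version }}"
      when:
        # nexus_version_running is only defined when nexus was already installed before the role ran
        - nexus_version_running is defined
        - nexus_version_running != nexus_version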

Download dir for nexus package

    nexus_download_dir: '/tmp'

Directory on target where the nexus package will be downloaded.

Important note: if you intend to run the role periodically to maintain/provision your nexus install, you should make sure the downloaded files persist between runs. On RHEL/CentOS specifically, you should change this dir to a location that is not cleaned up automatically. If the package file does not persist, it will be downloaded again, which might cause an unnecessary restart of nexus.

Local tmp dir on controller

    nexus_local_tmp_dir: /tmp

This directory is used to create a local archive of groovy scripts prior to sending them to the target. On a shared ansible controller, you should change this path to one you own (e.g. /home/<user>/tmp). Important: this directory must exist.
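If needed, you can ensure it exists with a pre_tasks entry in your playbook, such as this sketch (the mode is an example to adapt):

    - name: Ensure the local tmp dir exists on the controller
      file:
        path: "{{ nexus_local_tmp_dir }}"
        state: directory
        mode: "0700"
      delegate_to: localhost
      run_once: true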

Nexus port, context path and listening IP

    nexus_default_port: 8081
    nexus_application_host: '{{ httpd_setup_enable | ternary("127.0.0.1", "0.0.0.0") }}'
    nexus_default_context_path: '/'

Listening port/ip, and context path of the java nexus process.

  • the listening IP/interface (i.e. nexus_application_host) depends by default on the httpd_setup_enable setting. Nexus will listen only on localhost (127.0.0.1) if the reverse proxy is enabled, or on all configured IPs (0.0.0.0) if not. You can change this setting to your actual need (e.g. don't install the proxy and still bind to 127.0.0.1 only if you install your own proxy)
  • nexus_default_context_path has to keep the trailing slash when set, e.g. nexus_default_context_path: '/nexus/'.

Nexus OS user and group

    nexus_os_group: 'nexus'
    nexus_os_gid: 1000
    nexus_os_user: 'nexus'
    nexus_os_uid: 1000

User and group used to own the nexus files and run the service; these will be created by the role if absent. If defined, the uid and gid will be used upon creation.

    nexus_os_user_home_dir: '/home/nexus'

Allows changing the nexus user's default home directory.

Nexus instance directories

    nexus_installation_dir: '/opt'
    nexus_data_dir: '/var/nexus'
    nexus_tmp_dir: "{{ (ansible_os_family == 'RedHat') | ternary('/var/nexus-tmp', '/tmp/nexus') }}"

Nexus directories.

  • nexus_installation_dir contains the installed executable(s)
  • nexus_data_dir contains all configuration, repositories and uploaded artifacts. Custom blobstores paths outside of nexus_data_dir can be configured, see nexus_blobstores below.
  • nexus_tmp_dir contains all temporary files. Default path for redhat has been moved out of /tmp to overcome potential problems with automatic cleaning procedures. See #168.

Nexus JVM setting

    nexus_min_heap_size: "1200M"
    nexus_max_heap_size: "{{ nexus_min_heap_size }}"
    nexus_max_direct_memory: "2G"

These are the defaults for Nexus. Please do not modify those values unless you have read the memory section of nexus system requirements and you understand what you are doing.

As a second warning, here is an extract from the above document:

Increasing the JVM heap memory larger than recommended values in an attempt to improve performance is not recommended. This actually can have the opposite effect, causing the operating system to thrash needlessly.

    nexus_custom_jvm_settings: []

Additional settings to pass to the JVM. This list is empty by default and should not contain any option related to the memory settings above (i.e. anything starting with Xms, Xmx, or XX:MaxDirectMemorySize=). Each option should be set as an item of the list without the leading dash (-).

Here is an example to change the garbage collector to G1 and set up GC logs with rotation:

    nexus_custom_jvm_settings:
      - XX:+UseG1GC
      - XX:+PrintGCDetails
      - Xloggc:{{ nexus_installation_dir }}/log/gc.log
      - XX:+UseGCLogFileRotation
      - XX:NumberOfGCLogFiles=10
      - XX:GCLogFileSize=50m

Plugin installation

    nexus_plugin_urls: []

Provide a list of URLs pointing to plugins built for your nexus version. Only *.kar bundles can be installed this way.
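For example (the URL below is hypothetical; point it to a .kar bundle built for your nexus version):

    nexus_plugin_urls:
      - "https://repo.example.com/plugins/some-plugin-1.0.0.kar"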

Onboarding Wizard

    nexus_onboarding_wizard: false

Controls whether the nexus onboarding wizard runs when the admin user logs in for the first time

Admin password

    nexus_admin_password: 'changeme'

The 'admin' account password to set up. By default, this works only on first-time install. Please see Change admin password after first install if you want to change it later with the role.

It is strongly advised that you do not keep your password in clear text in your playbook; use ansible-vault encryption instead (either inline or in a separate file loaded with include_vars, for example).
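For example, you can generate an inline vault-encrypted value on your controller with:

ansible-vault encrypt_string 's3cr3t' --name 'nexus_admin_password'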

Default anonymous access

    nexus_anonymous_access: false

Allow anonymous access to nexus.

Public hostname

    nexus_public_hostname: 'nexus.vm'
    nexus_public_scheme: https

The fully qualified domain name and scheme under which the nexus instance will be accessible to its clients.

API access for this role

    nexus_api_hostname: localhost
    nexus_api_scheme: http
    nexus_api_validate_certs: "{{ nexus_api_scheme == 'https' }}"
    nexus_api_context_path: "{{ nexus_default_context_path }}"
    nexus_api_port: "{{ nexus_default_port }}"
    nexus_api_timeout: 60

These vars control how the role connects to the nexus API for provisioning. For advanced usage only; you most probably do not want to change these default settings.

Note: nexus_api_timeout was added in v2.4.19 and overrides the default uri module timeout of 30s for all calls to the API.

Branding capabilities

    nexus_branding_header: ""
    nexus_branding_footer: "Last provisionned {{ ansible_date_time.iso8601 }}"

Header and footer branding, those can contain HTML.

Audit capability

    nexus_audit_enabled: false

The auditing capability of nexus is off by default. You can turn it on by switching this to true. Please note that the audit data is stored in the nexus db, persists across restarts, and is not automatically rotated/cleared.

Log4j Visualizer

    nexus_log4j_visualizer_enabled: false

The log4j visualizer is disabled by default. You can enable it by switching this to true, which will add the log4j-visualizer capability to your Nexus instance.

Reverse proxy setup

    httpd_setup_enable: false
    httpd_server_name: "{{ nexus_public_hostname }}"
    httpd_default_admin_email: "[email protected]"
    httpd_ssl_certificate_file: 'files/nexus.vm.crt'
    httpd_ssl_certificate_key_file: 'files/nexus.vm.key'
    # httpd_ssl_certificate_chain_file: "{{ httpd_ssl_certificate_file }}"
    httpd_copy_ssl_files: true

Sets up an SSL reverse proxy. This needs httpd installed. Note: when httpd_setup_enable is set to true, nexus binds by default to 127.0.0.1:8081 and is thus not directly accessible on HTTP port 8081 from an external IP. (If you want to change this, you can explicitly set nexus_application_host: 0.0.0.0)

The default hostname used is nexus_public_hostname. If you need different names for whatever reason, you can set httpd_server_name to a different value.

With httpd_copy_ssl_files: true (the default), the above certs must exist in your playbook dir; they will be copied to the server and configured in apache. httpd_ssl_certificate_chain_file is optional and must be left unset if you do not want to configure a chain file.

If you want to use existing certificates on the server, set httpd_copy_ssl_files: false and provide the following variables:

    # These tell the vhost where to find the certificate files
    # on the remote server filesystem.
    httpd_ssl_cert_file_location: "/etc/pki/tls/certs/wildcard.vm.crt"
    httpd_ssl_cert_key_location: "/etc/pki/tls/private/wildcard.vm.key"
    # httpd_ssl_cert_chain_file_location: "{{ httpd_ssl_cert_file_location }}"

httpd_ssl_cert_chain_file_location is optional and must be left unset if you do not want to configure a chain file

    httpd_default_admin_email: "[email protected]"

Set httpd default admin email address

LDAP configuration

LDAP connections and the LDAP security realm are disabled by default:

    nexus_ldap_realm: false
    ldap_connections: []

LDAP connection(s) setup; each item goes as follows:

    nexus_ldap_realm: true
    ldap_connections:
      - ldap_name: 'My Company LDAP' # used as a key to update the ldap config
        ldap_protocol: 'ldaps' # ldap or ldaps
        ldap_hostname: 'ldap.mycompany.com'
        ldap_port: 636
        ldap_use_trust_store: false # Whether or not to use certs in the nexus trust store
        ldap_search_base: 'dc=mycompany,dc=net'
        ldap_auth: 'none' # or simple
        ldap_auth_username: 'username' # if auth = simple
        ldap_auth_password: 'password' # if auth = simple
        ldap_user_base_dn: 'ou=users'
        ldap_user_filter: '(cn=*)' # (optional)
        ldap_user_object_class: 'inetOrgPerson'
        ldap_user_id_attribute: 'uid'
        ldap_user_real_name_attribute: 'cn'
        ldap_user_email_attribute: 'mail'
        ldap_user_subtree: false
        ldap_map_groups_as_roles: false
        ldap_group_base_dn: 'ou=groups'
        ldap_group_object_class: 'posixGroup'
        ldap_group_id_attribute: 'cn'
        ldap_group_member_attribute: 'memberUid'
        ldap_group_member_format: '${username}'
        ldap_group_subtree: false

Example LDAP config for anonymous authentication (anonymous bind); this is also the "minimal" config:

    nexus_ldap_realm: true
    ldap_connections:
      - ldap_name: 'Simplest LDAP config'
        ldap_protocol: 'ldaps'
        ldap_hostname: 'annuaire.mycompany.com'
        ldap_search_base: 'dc=mycompany,dc=net'
        ldap_port: 636
        ldap_use_trust_store: false
        ldap_user_id_attribute: 'uid'
        ldap_user_real_name_attribute: 'cn'
        ldap_user_email_attribute: 'mail'
        ldap_user_object_class: 'inetOrgPerson'

Example LDAP config for simple authentication (using a DSA account):

    nexus_ldap_realm: true
    ldap_connections:
      - ldap_name: 'LDAP config with DSA'
        ldap_protocol: 'ldaps'
        ldap_hostname: 'annuaire.mycompany.com'
        ldap_port: 636
        ldap_use_trust_store: false
        ldap_auth: 'simple'
        ldap_auth_username: 'cn=mynexus,ou=dsa,dc=mycompany,dc=net'
        ldap_auth_password: "{{ vault_ldap_dsa_password }}" # better keep passwords in an ansible vault
        ldap_search_base: 'dc=mycompany,dc=net'
        ldap_user_base_dn: 'ou=users'
        ldap_user_object_class: 'inetOrgPerson'
        ldap_user_id_attribute: 'uid'
        ldap_user_real_name_attribute: 'cn'
        ldap_user_email_attribute: 'mail'
        ldap_user_subtree: false

Example LDAP config for simple authentication (using a DSA account) + groups mapped as roles:

    nexus_ldap_realm: true
    ldap_connections:
      - ldap_name: 'LDAP config with DSA'
        ldap_protocol: 'ldaps'
        ldap_hostname: 'annuaire.mycompany.com'
        ldap_port: 636
        ldap_use_trust_store: false
        ldap_auth: 'simple'
        ldap_auth_username: 'cn=mynexus,ou=dsa,dc=mycompany,dc=net'
        ldap_auth_password: "{{ vault_ldap_dsa_password }}" # better keep passwords in an ansible vault
        ldap_search_base: 'dc=mycompany,dc=net'
        ldap_user_base_dn: 'ou=users'
        ldap_user_object_class: 'inetOrgPerson'
        ldap_user_id_attribute: 'uid'
        ldap_user_real_name_attribute: 'cn'
        ldap_user_email_attribute: 'mail'
        ldap_map_groups_as_roles: true
        ldap_group_base_dn: 'ou=groups'
        ldap_group_object_class: 'groupOfNames'
        ldap_group_id_attribute: 'cn'
        ldap_group_member_attribute: 'member'
        ldap_group_member_format: 'uid=${username},ou=users,dc=mycompany,dc=net'
        ldap_group_subtree: false

Example LDAP config for simple authentication (using a DSA account) + groups mapped as roles dynamically:

    nexus_ldap_realm: true
    ldap_connections:
      - ldap_name: 'LDAP config with DSA'
        ldap_protocol: 'ldaps'
        ldap_hostname: 'annuaire.mycompany.com'
        ldap_port: 636
        ldap_use_trust_store: false
        ldap_auth: 'simple'
        ldap_auth_username: 'cn=mynexus,ou=dsa,dc=mycompany,dc=net'
        ldap_auth_password: "{{ vault_ldap_dsa_password }}" # better keep passwords in an ansible vault
        ldap_search_base: 'dc=mycompany,dc=net'
        ldap_user_base_dn: 'ou=users'
        ldap_user_object_class: 'inetOrgPerson'
        ldap_user_id_attribute: 'uid'
        ldap_user_real_name_attribute: 'cn'
        ldap_user_email_attribute: 'mail'
        ldap_map_groups_as_roles: true
        ldap_map_groups_as_roles_type: 'dynamic'
        ldap_user_memberof_attribute: 'memberOf'

@nliebelt proposed a configuration, with explanations, in an issue for configuring nexus with Active Directory.

Privileges

    nexus_privileges:
      - name: all-repos-read # used as key to update a privilege
        # type: <one of application, repository-admin, repository-content-selector, repository-view, script or wildcard>
        description: 'Read & Browse access to all repos'
        repository: '*'
        actions: # can be add, browse, create, delete, edit, read or  * (all)
          - read
          - browse
        # pattern: pattern
        # domain: domain
        # script_name: name

List of the privileges to set up. Please see the documentation and GUI to check which variables should be set depending on the type of privilege.

These items are combined with the following default values:

    _nexus_privilege_defaults:
      type: repository-view
      format: maven2
      actions:
        - read

Roles

    nexus_roles:
      - id: Developpers # can map to a LDAP group id, also used as a key to update a role
        name: developers
        description: All developers
        privileges:
          - nx-search-read
          - all-repos-read
        roles: [] # references to other role names

List of the roles to set up. Besides creating roles, it is also possible to define a default role which will be applied to users and anonymous requests when Nexus cannot find or map a matching role. The default role can be defined using:

    nexus_default_role: "developers" # applies the 'developers' role to all users/requests without an explicitly assigned role. Default: ""

Users

    nexus_local_users: []
      # - username: jenkins # used as key to update
      #   state: present # default value if omitted, use 'absent' to remove the user
      #   first_name: Jenkins
      #   last_name: CI
      #   email: [email protected]
      #   password: "s3cr3t"
      #   roles:
      #     - developers # role ID

Local (non-LDAP) users/accounts to create in nexus. State absent will remove the user if it exists.

    nexus_ldap_users: []
      # - username: j.doe
      #   state: present
      #   roles:
      #     - "nx-admin"

LDAP users/roles mappings. State absent will remove roles from the existing user if already present. LDAP users are not removed. Trying to set roles on a non-existing user will result in an error.

Content selectors

    nexus_content_selectors:
      - name: docker-login
        description: Selector for docker login privilege
        search_expression: format=="docker" and path=~"/v2/"

For more info on content selectors, see the documentation.

To use a content selector, add a new privilege with type: repository-content-selector and a proper contentSelector:

    - name: docker-login-privilege
      type: repository-content-selector
      contentSelector: docker-login
      description: 'Login to Docker registry'
      repository: '*'
      actions:
        - read
        - browse

Cleanup policies

    nexus_repos_cleanup_policies:
    #   - name: mvn_cleanup
    #     format: maven2
    #     mode:
    #     notes: ""
    #     criteria:
    #       lastBlobUpdated: 60
    #       lastDownloaded: 120
    #       preRelease: RELEASES
    #       regexKey: "foo.*"

Cleanup policy definitions. These can be added to repo definitions with the cleanup_policies option, as in the sketch below.
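For example, attaching the mvn_cleanup policy sketched above to a proxy repository:

    nexus_repos_maven_proxy:
      - name: central
        remote_url: 'https://repo1.maven.org/maven2/'
        cleanup_policies:
          - mvn_cleanup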

Blobstores and repositories

    nexus_delete_default_repos: false

Delete the repositories from the nexus install initial default configuration. This step is only executed on first-time install (when nexus_data_dir has been detected empty).

    nexus_delete_default_blobstore: false

Delete the default blobstore from the nexus install initial default configuration. This can be done only if nexus_delete_default_repos: true and all configured repositories (see below) have an explicit blob_store: custom. This step is only executed on first-time install (when nexus_data_dir has been detected empty).

    nexus_blobstores: []
    # example blobstore item :
    # - name: separate-storage
    #   type: file
    #   path: /mnt/custom/path
    # - name: s3-blobstore
    #   type: S3
    #   config:
    #     bucket: s3-blobstore
    #     accessKeyId: "{{ VAULT_ENCRYPTED_KEY_ID }}"
    #     secretAccessKey: "{{ VAULT_ENCRYPTED_ACCESS_KEY }}"

Blobstores to create. A blobstore path and a repository blobstore cannot be updated after initial creation (any update here will be ignored on re-provisioning).

Configuring blobstores on S3 is provided as a convenience and is not part of the automated tests we run on travis. Please note that storing on S3 is only recommended for instances deployed on AWS.

    nexus_repos_maven_proxy:
      - name: central
        remote_url: 'https://repo1.maven.org/maven2/'
        layout_policy: permissive
        # cleanup_policies:
        #    - mvn_cleanup
        # maximum_component_age: -1
        # maximum_metadata_age: 1440
        # negative_cache_enabled: true
        # negative_cache_ttl: 1440
        # Content disposition is only supported for raw and maven2 proxies and can be set to attachment or inline. Inline is Nexus default, even when the property is not set explicitly.
        # content_disposition: inline
      - name: jboss
        remote_url: 'https://repository.jboss.org/nexus/content/groups/public-jboss/'
        # cleanup_policies:
        #    - mvn_cleanup
        # maximum_component_age: -1
        # maximum_metadata_age: 1440
        # negative_cache_enabled: true
        # negative_cache_ttl: 1440
        # Content disposition is only supported for raw and maven2 proxies and can be set to attachment or inline. Inline is Nexus default, even when the property is not set explicitly.
        # content_disposition: inline
    # example with a login/password :
    # - name: secret-remote-repo
    #   remote_url: 'https://company.com/repo/secure/private/go/away'
    #   remote_username: 'username'
    #   remote_password: 'secret'
    #   # maximum_component_age: -1
    #   # maximum_metadata_age: 1440
    #   # negative_cache_enabled: true
    #   # negative_cache_ttl: 1440
    # Content disposition is only supported for raw and maven2 proxies and can be set to attachment or inline. Inline is Nexus default, even when the property is not set explicitly.
    # To set HTTP request settings:
    #   # enable_circular_redirects: true
    #   # enable_cookies: true

Maven proxy repositories configuration.

    nexus_repos_maven_hosted:
      - name: private-release
        version_policy: release
        write_policy: allow_once  # one of "allow", "allow_once" or "deny"
        # cleanup_policies:
        #    - mvn_cleanup

Maven hosted repositories configuration. Negative cache config is optional and will default to the above values if omitted.

    nexus_repos_maven_group:
      - name: public
        member_repos:
          - central
          - jboss

Maven group repositories configuration.

All three repository types are combined with the following default values:

    _nexus_repos_maven_defaults:
      blob_store: default # Note : cannot be updated once the repo has been created
      strict_content_validation: true
      version_policy: release # release, snapshot or mixed
      layout_policy: strict # strict or permissive
      write_policy: allow_once # one of "allow", "allow_once" or "deny"
      maximum_component_age: -1  # Nexus gui default. For proxies only
      maximum_metadata_age: 1440  # Nexus gui default. For proxies only
      negative_cache_enabled: true # Nexus gui default. For proxies only
      negative_cache_ttl: 1440 # Nexus gui default. For proxies only

Docker repositories

    nexus_repos_docker_group:
      - name: some-docker-group
        sub_domain: hub-proxy # When set, this will expose a subdomain url, e.g. https://hub-proxy.your-nexus-instance.com
        writable_member_repo: docker-hosted-repo
        blob_store: docker-blob
        v1_enabled: false
        member_repos:
          - docker-hosted-repo
    nexus_repos_docker_hosted:
      - name: some-docker-repo
        blob_store: docker-blob
        v1_enabled: false
        write_policy: allow_once # Values: "allow", "allow_once" or "deny"
        # When set, this ignores the defined write_policy and allows redeploying container images with the tag 'latest' only
        allow_redeploy_latest: true

Maven, Pypi, Docker, Raw, Rubygems, Bower, NPM, Git-LFS, yum, apt, helm, r, p2, conda and go repository types: see defaults/main.yml for these options. For historical reasons and to keep backward compatibility, maven is configured by default.

      nexus_config_maven: true
      nexus_config_pypi: false
      nexus_config_docker: false
      nexus_config_raw: false
      nexus_config_rubygems: false
      nexus_config_bower: false
      nexus_config_npm: false
      nexus_config_gitlfs: false
      nexus_config_yum: false
      nexus_config_apt: false
      nexus_config_helm: false
      nexus_config_r: false
      nexus_config_p2: false
      nexus_config_conda: false
      nexus_config_go: false

These are all false unless you override them from a playbook / group_vars / the CLI; they all use the same mechanism as maven.

Note that you might need to enable certain security realms if you want to use repository types other than maven. These are false by default:

    nexus_nuget_api_key_realm: false
    nexus_npm_bearer_token_realm: false
    nexus_docker_bearer_token_realm: false  # required for docker anonymous access
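For example, a minimal sketch combining flags documented in this README to serve docker repositories with anonymous pulls:

    nexus_config_docker: true
    nexus_docker_bearer_token_realm: true
    nexus_anonymous_access: true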

The Remote User Realm can also be enabled with

    nexus_rut_auth_realm: true

and the header can be configured by defining

    nexus_rut_auth_header: "CUSTOM_HEADER"

Scheduled tasks

These are quick examples and instructions for setting up scheduled tasks. For in-depth information on available task types and schedule types, please refer to the specific section in the repo wiki.

    nexus_scheduled_tasks: []
    #  #  Example task to compact blobstore :
    #  - name: compact-docker-blobstore
    #    cron: '0 0 22 * * ?'
    #    typeId: blobstore.compact
    #    task_alert_email: [email protected]  # optional
    #    taskProperties:
    #      blobstoreName: {{ nexus_blob_names.docker.blob }} # all task attributes are stored as strings by nexus internally
    #  #  Example task to purge maven snapshots
    #  - name: Purge-maven-snapshots
    #    cron: '0 50 23 * * ?'
    #    typeId: repository.maven.remove-snapshots
    #    task_alert_email: [email protected]  # optional
    #    taskProperties:
    #      repositoryName: "*"  # * for all repos. Change to a repository name if you only want a specific one
    #      minimumRetained: "2"
    #      snapshotRetentionDays: "2"
    #      gracePeriodInDays: "2"
    #    booleanTaskProperties:
    #      removeIfReleased: true
    #  #  Example task to purge unused docker manifest and images
    #  - name: Purge unused docker manifests and images
    #    cron: '0 55 23 * * ?'
    #    typeId: "repository.docker.gc"
    #    task_alert_email: [email protected]  # optional
    #    taskProperties:
    #      repositoryName: "*"  # * for all repos. Change to a repository name if you only want a specific one
    #  #  Example task to purge incomplete docker uploads
    #  - name: Purge incomplete docker uploads
    #    cron: '0 0 0 * * ?'
    #    typeId: "repository.docker.upload-purge"
    #    task_alert_email: [email protected]  # optional
    #    taskProperties:
    #      age: "24"

Scheduled tasks to set up. typeId and the task-specific taskProperties/booleanTaskProperties can be guessed:

  • from the java type hierarchy of org.sonatype.nexus.scheduling.TaskDescriptorSupport
  • by inspecting the task creation html form in your browser
  • from peeking at the browser AJAX requests while manually configuring a task.

Task properties must be declared in the correct yaml block depending on their type:

  • taskProperties for all string properties (i.e. repository names, blobstore names, time periods...).
  • booleanTaskProperties for all boolean properties (i.e. mainly checkboxes in nexus create task GUI).

Backups

      nexus_backup_configure: false
      nexus_backup_schedule_type: cron
      nexus_backup_cron: '0 0 21 * * ?'  # See cron expressions definition in nexus create task gui
      # nexus_backup_start_date_time: "yyyy-MM-dd'T'HH:mm:ss"
      # nexus_backup_weekly_days: ['MON', 'TUE', 'WED', 'THU', 'FRI', 'SAT']
      # nexus_backup_monthly_days: {{ range(1,32) | list + [999] }}
      nexus_backup_dir: '/var/nexus-backup'
      nexus_backup_dir_create: true
      nexus_restore_log: '{{ nexus_backup_dir }}/nexus-restore.log'
      nexus_backup_rotate: false
      nexus_backup_rotate_first: false
      nexus_backup_keep_rotations: 4  # Keep 4 backup rotation by default (current + last 3)

Backup will not be configured unless you switch nexus_backup_configure: true. In this case, a script task will be configured in nexus.

The script task schedule is set as cron by default and runs every day at 21:00. You can define whatever schedule you like by setting the variables nexus_backup_schedule_type, nexus_backup_cron, nexus_backup_start_date_time, nexus_backup_weekly_days and nexus_backup_monthly_days accordingly. To understand their usage depending on the type of schedule you choose, please see Scheduled tasks.

See the groovy template for this task for details. This scheduled task is independent from the other nexus_scheduled_tasks you declare in your playbook

If you want to rotate backups, set nexus_backup_rotate: true and adjust the number of rotations you would like to keep with nexus_backup_keep_rotations (defaults to 4).

When using rotation, if you want to save extra disk space during the backup process, you can set nexus_backup_rotate_first: true. This will configure a pre-rotation rather than the default post-rotation. Please note that in this case, old backup(s) are removed before the current one is done and successful.

If you want to back up to a mounted directory (like s3fs), you can set nexus_backup_dir_create to false.
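A minimal backup setup could therefore look like this (a sketch using only the variables above):

    nexus_backup_configure: true
    nexus_backup_cron: '0 0 21 * * ?'
    nexus_backup_rotate: true
    nexus_backup_keep_rotations: 4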

Restore procedure

Run your playbook with the parameter -e nexus_restore_point=<YYYY-MM-dd-HH-mm-ss>.
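For example, to restore a backup taken on the 17th of December 2017 at 21h00m00s:

ansible-playbook -i your/inventory.ini your_playbook.yml -e nexus_restore_point=2017-12-17-21-00-00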

Possible limitations

Blobstore copies are made directly from nexus by the script scheduled task. This has only been tested on rather small blobstores (less than 50GB) and should be used with caution and tested carefully on larger installations before moving to production. In any case, you are free to implement your own backup scenario outside of this role.

Special maintenance/debug variables

These are not present in defaults/main.yml and are meant to be used on the command line only for maintenance/debug reasons.

Purge nexus

**Warning: this will completely erase the current data. Make a backup first if needed.**

Use the nexus_purge variable if you need to restart from scratch and re-install a blank instance of nexus.

ansible-playbook -i your/inventory.ini your_nexus_playbook.yml -e nexus_purge=true

Force groovy scripts registration

This one is safe and will only make the playbook run longer if it wasn't needed

For performance's sake, we use a little trick with several rsync calls to detect which maintenance groovy scripts need to be registered in Nexus. On some occasions (e.g. bad admin password, recovering a backup from a previous nexus instance with unregistered scripts...), this can lead to situations where the role fails when attempting to run the needed groovy scripts.

The symptom: you get HTTP 404 errors when the role tries to run scripts, as in the following example (use the -v option for ansible-playbook):

fatal: [nexus3-oss]: FAILED! => {"changed": false, "connection": "close", "content": "", "date": "Tue, 11 Sep 2018 07:57:44 GMT", "msg": "Status code was 404 and not [200, 204]: HTTP Error 404: Not Found", "redirected": false, "server": "Nexus/3.13.0-01 (OSS)", "status": 404, "url": "http://localhost:8081/service/rest/v1/script/update_admin_password/run", "x_content_type_options": "nosniff", "x_siesta_faultid": "914acef2-f644-4bd6-9a7d-ce19255ea3dd"}

In such cases, you can force the (re-)registration of the groovy scripts with the nexus_force_groovy_scripts_registration variable:

ansible-playbook -i your/inventory.ini your_playbook.yml -e nexus_force_groovy_scripts_registration=true

Change admin password after first install

    nexus_default_admin_password: 'admin123'

This should not be changed in your playbook. This var is filled with the default nexus admin password on first install and ensures we can change the admin password to nexus_admin_password.

If you want to change your admin password after first install, you can temporarily change this to your old password from the command line. After changing nexus_admin_password in your playbook, you can run:

ansible-playbook -i your/inventory.ini your_playbook.yml -e nexus_default_admin_password=oldPassword

Upgrade nexus to latest version

    nexus_upgrade: true

This variable has no effect if nexus_version is fixed in your vars

Unless you set this variable, the role will keep the current installed nexus version when running against an already provisioned host. Passing this extra var will trigger automatic latest nexus version detection and upgrade if a newer version is available.

Setting this var as part of your playbook breaks idempotence (i.e. your playbook will make changes to your system if a new version is available although no parameters have changed)

We strongly suggest using this variable only as an extra var on the ansible-playbook command line:

ansible-playbook -i your/inventory.ini your_playbook.yml -e nexus_upgrade=true
Fix upgrade failing on timeout waiting for nexus port

If you have a large nexus repository, you may occasionally see an error message like this when upgrading:

RUNNING HANDLER [nexus3-oss : wait-for-nexus-port] *************
fatal: [nexushost]: FAILED! => {"changed": false, "elapsed": 300, "msg": "Timeout when waiting for 127.0.0.1:8081"}

This is most likely because the nexus upgrade process (i.e. migrating the internal orientdb) is taking longer than the default 300 seconds. You can overcome this by setting a custom timeout in seconds and/or a number of retries for the handler task:

ansible-playbook -i your/inventory.ini your_playbook.yml \
-e nexus_upgrade=true \
-e nexus_wait_for_port_timeout=600 \
-e nexus_wait_for_port_retries=2

Skip provisioning tasks

    nexus_run_provisionning: false

This var is unset by default and defaults to true. Setting it to false will cause the role to skip all of the provisioning tasks, and will therefore not create/update:

  • ldap configurations
  • content selectors
  • privileges
  • roles
  • users (except checking/updating admin password)
  • blobstores
  • repositories
  • tasks (backup will still be configured if enabled)

This can save time if you have lots of configured repositories/users/roles... and you want to run the role simply to check that nexus is correctly installed, restore a backup, or upgrade the nexus version.

We strongly suggest using this variable only as an extra var on the ansible-playbook command line:

ansible-playbook -i your/inventory.ini your_playbook.yml -e nexus_run_provisionning=false

Force recursive ownership check of blobstores directories

Introduced in version 2.4.9

    nexus_blobstores_recurse_owner: true

In versions prior to 2.4.9, the task creating the blobstore directories recursively checked the ownership of all files. This was not a problem on creation (when the dir is empty) or on installations with small blobstores, but it could lead to extremely long delays for large blobstores with lots of files.

Recursive checking of ownership has been turned off by default to prevent this extra delay. If for some reason you need to make sure all files in the blobstore directories are owned by the nexus user, you can force the check:

ansible-playbook -i your/inventory.ini your_playbook.yml -e nexus_blobstores_recurse_owner=true

Dependencies

The java and httpd requirements can be fulfilled with the following galaxy roles:

Feel free to use them or implement your own install scenario at your convenience.

Example Playbook

---
- name: Nexus
  hosts: nexus
  become: yes

  vars:
    nexus_timezone: 'Canada/Eastern'
    nexus_admin_password: "{{ vault_nexus_admin_password }}"
    nexus_public_hostname: 'nexus.vm'
    httpd_setup_enable: true
    httpd_ssl_certificate_file: "{{ vault_httpd_ssl_certificate_file }}"
    httpd_ssl_certificate_key_file: "{{ vault_httpd_ssl_certificate_key_file }}"
    ldap_connections:
      - ldap_name: 'Company LDAP'
        ldap_protocol: 'ldaps'
        ldap_hostname: 'ldap.company.com'
        ldap_port: 636
        ldap_search_base: 'dc=company,dc=net'
        ldap_user_base_dn: 'ou=users'
        ldap_user_object_class: 'inetOrgPerson'
        ldap_user_id_attribute: 'uid'
        ldap_user_real_name_attribute: 'cn'
        ldap_user_email_attribute: 'mail'
        ldap_group_base_dn: 'ou=groups'
        ldap_group_object_class: 'posixGroup'
        ldap_group_id_attribute: 'cn'
        ldap_group_member_attribute: 'memberUid'
        ldap_group_member_format: '${username}'
    nexus_privileges:
      - name: all-repos-read
        description: 'Read & Browse access to all repos'
        repository: '*'
        actions:
          - read
          - browse
      - name: company-project-deploy
        description: 'Deployments to company-project'
        repository: company-project
        actions:
          - add
          - edit
    nexus_roles:
      - id: Developpers # maps to the LDAP group
        name: developers
        description: All developers
        privileges:
          - nx-search-read
          - all-repos-read
          - company-project-deploy
        roles: []
    nexus_local_users:
      - username: jenkins # used as key to update
        first_name: Jenkins
        last_name: CI
        email: [email protected]
        password: "s3cr3t"
        roles:
          - Developpers # role ID here
    nexus_blobstores:
      - name: company-artifacts
        path: /var/nexus/blobs/company-artifacts
    nexus_scheduled_tasks:
      - name: compact-blobstore
        cron: '0 0 22 * * ?'
        typeId: blobstore.compact
        taskProperties:
          blobstoreName: 'company-artifacts'
    nexus_repos_maven_proxy:
      - name: central
        remote_url: 'https://repo1.maven.org/maven2/'
        layout_policy: permissive
      - name: alfresco
        remote_url: 'https://artifacts.alfresco.com/nexus/content/groups/private/'
        remote_username: 'secret-username'
        remote_password: "{{ vault_alfresco_private_password }}"
      - name: jboss
        remote_url: 'https://repository.jboss.org/nexus/content/groups/public-jboss/'
      - name: vaadin-addons
        remote_url: 'https://maven.vaadin.com/vaadin-addons/'
      - name: jaspersoft
        remote_url: 'https://jaspersoft.artifactoryonline.com/jaspersoft/jaspersoft-repo/'
        version_policy: mixed
    nexus_repos_maven_hosted:
      - name: company-project
        version_policy: mixed
        write_policy: allow
        blob_store: company-artifacts
    nexus_repos_maven_group:
      - name: public
        member_repos:
          - central
          - jboss
          - vaadin-addons
          - jaspersoft
    nexus_repos_docker_group:
       - name: some-docker-group
         sub_domain: hub-proxy
         writable_member_repo: docker-hosted-repo
         blob_store: docker-blob
         v1_enabled: False
         member_repos:
           - docker-hosted-repo
    nexus_repos_npm_proxy:
      - name: npm-proxy-name
        blob_store: company-artifacts
        blocked: false # Default is false
        auto_block: true # Default is true
        connection_timeout: 200 # Default is unset
        connection_retries: 5 # Default is unset
        user_agent_suffix: custom-agent # Default is unset
        remote_url: https://some-private-registry.dev/
        remote_username: 'secret-username'
        remote_password: "{{ vault_alfresco_secret_password }}"
        # You can use a Preemptive Bearer Token as well by defining the bearerToken property
        # bearerToken: "{{ vault_alfresco_secret_bearertoken }}"

  roles:
    - { role: geerlingguy.java, vars: See role doc for your distribution/version }
    # Debian/Ubuntu only
    # - { role: geerlingguy.apache, apache_create_vhosts: no, apache_mods_enabled: ["proxy.load", "proxy_http.load", "headers.load", "ssl.load", "rewrite.load"], apache_remove_default_vhost: true, tags: ["geerlingguy.apache"] }
    # RedHat/CentOS only
    - { role: geerlingguy.apache, apache_create_vhosts: no, apache_remove_default_vhost: true, tags: ["geerlingguy.apache"] }
    - { role: ansible-thoteam.nexus3-oss, tags: ['ansible-thoteam.nexus3-oss'] }

Development, Contribution and Testing

Contributions

All contributions to this role are welcome, either for bugfixes, new features or documentation.

If you wish to contribute:

  • Fork the repo under your own name/organisation through the github interface
  • Create a branch in your own repo with a meaningful name. We suggest the following naming convention:
    • feat/<someFeature> for features
    • fix/<someBugFix> for bug fixes
    • docfix/<someDocFix> for documentation only fixes
  • If starting an important feature change, open a pull request early describing what you want to do so we can discuss it if needed. This will prevent you from doing a lot of hard work on a lot of code for changes that we ultimately cannot merge.
  • If there are build errors on your pull request, have a look at the travis log and fix the relevant errors.

Moreover, if you have time to devote to code reviews, merges for releases, etc., drop an email to [email protected] to get in touch.

Testing

This role includes tests and CI integration through travis. At the time being, we test:

  • groovy scripts syntax
  • yaml syntax and coding standard (yamllint)
  • ansible good practices (ansible lint)
  • a set of basic deployments on 2 different linux platforms
    • Rockylinux 9 (as a close relative of RHEL products, since CentOS is deprecated)
    • Debian 12 Bookworm

Other tests are available for older/different platforms but not played on CI for performance reasons:

  • Rockylinux 8
  • Debian 11 bullseye
  • Ubuntu 20.04 Focal
  • Ubuntu 22.04 Jammy

Groovy syntax

This role contains a set of groovy files used to provision nexus.

If you submit changes to groovy files, please run the groovy syntax check locally before pushing your changes:

./tests/test_groovySyntax.sh

This will ensure you push groovy files with correct syntax, limiting the number of check errors on travis.

You will need the groovy package installed locally to run this test.

Molecule default-xxxx scenarii

The role is tested on travis with molecule. You can run these tests locally. The best way to achieve this is through a python virtualenv. You can find some more details in requirements.txt.

# Note: the following path should be outside the working dir
virtualenv /path/to/some/pyenv
. /path/to/some/pyenv/bin/activate
pip install -r requirements.txt
molecule [create|converge|destroy|test] -s <scenario name>
deactivate

Please have a look at molecule documentation (a good start is molecule --help) for further usage.

The currently proposed scenarii refer to the tested platforms (see the molecule/ directory). If you launch a scenario and leave the container running (i.e. using converge for a simple deploy), you can access the running instance from your browser at https://localhost: followed by the scenario's configured port. See the molecule/<scenario>/molecule.yml file for details on the port configured for each scenario.

To speed up tests, molecule uses prebuilt docker hub images.

Note that these images are built and pushed on a best-effort basis whenever required by changes on this repo.

License

GNU GPLv3

Author Information

See: https://github.com/ansible-ThoTeam


nexus3-oss's Issues

DEPRECATION WARNING

Currently I observe message like the following during run:

[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using `result|changed` instead use `result is changed`. This feature will be removed in...

Currently I'm running on Ansible 2.5.2....

SSL proxy does not configure docker endpoints

I tried switching on docker support using the switch:

nexus_config_docker: true

That successfully created a docker group, docker hosted and docker proxy repos, listening on ports 9080, 9081 and 9082.

However, I cannot access the docker repos using docker client without SSL. It seems we either have to enable SSL on the nexus server directly, or configure Apache to forward docker requests.

As far as I can tell, the httpd config for the vhost has not changed at all to accommodate docker repos.

Am I missing something, or does the httpd configuration need to be augmented for docker support?

Note, I have found documentation for a similar setup using HAproxy: http://www.sonatype.org/nexus/2016/06/29/using-nexus-3-as-a-private-docker-registry/

Nexus 3.8 will have a new URL for Rest API calls

We should be aware of the planned change, described in this PR sonatype/nexus-book-examples#20 and in NEXUS-14940 ticket.

It seems to be targeted for some Nexus 3.7.x version (later than 3.7.0); the URL for the REST API will change from service/siesta/... to service/rest/...

I believe this will only affect the following:

url: "http://localhost:{{ nexus_default_port }}{{ nexus_default_context_path }}service/siesta/rest/v1/script/{{ script_name }}/run"

url: "http://localhost:{{ nexus_default_port }}{{ nexus_default_context_path }}service/siesta/rest/v1/script/{{ item }}"

url: "http://localhost:{{ nexus_default_port }}{{ nexus_default_context_path }}service/siesta/rest/v1/script"

Backups can only be run once a day

The current restore point naming convention for backups (YY-MM-dd) allows for backups only once a day. #16 currently allows configuring the backup time and will delegate the full backup task to a single scheduled task in nexus (rather than a task for the nexus db export and a cron for the blobstore copy).

Running this task more than once a day can confuse the restore script.

We should allow for as many backups as needed so that users can choose the frequency and even launch the task manually from the GUI more than once a day.

Too many repos are created.

Currently, maven repositories are always created.

It's also not possible to create only a proxy repo, for instance.

flush_handlers in nexus_purge.yml is not skipped when purge!=true

I have a role where I do things before and after I include this nexus3-oss role.
I have a notify in the before part for a handler, which should run after nexus3-oss role.
But this line:

- meta: flush_handlers

is executed every time; it doesn't respect the when statement, causing my handler to run too early.

TASK [ansible-ThoTeam.nexus3-oss : Check if SystemD service is installed] **********************************************************************************************************************************
ok: [10.x.x.x]

TASK [ansible-ThoTeam.nexus3-oss : Make sure nexus is stopped] *********************************************************************************************************************************************
skipping: [10.x.x.x]

RUNNING HANDLER [nexus : Remove RSA key] *******************************************************************************************************************************************************************
changed: [10.x.x.x]

TASK [ansible-ThoTeam.nexus3-oss : Purge Nexus] ************************************************************************************************************************************************************
skipping: [10.x.x.x] => (item=/nexus-data) 
skipping: [10.x.x.x] => (item=/opt/nexus-3.13.0-01) 

Unfortunately this is an ansible bug (or, as they say, a feature):
ansible/ansible#41313
but maybe there is a way to change the code somehow to provide the same functionality and respect the when as well.

Support s3 blob stores

S3-backed blob stores were added in nexus 3.12.
Are you able to create an s3 backed blob store given the current implementation?

Cannot rsync without ssh args

Hi there,

First of all, thanks for your work on this role.

I run Ansible with the variable ANSIBLE_SSH_ARGS where I explicitly set a ssh configuration file.
However, I ran into a failure on the Upload new scripts task, where rsync is performed without loading my ssh config, so the connection cannot be made:

TASK [vendor.nexus3 : Upload new scripts] *****************************************************************************************
fatal: [int-nexus01]: FAILED! => {"changed": false, "cmd": "/usr/bin/rsync --delay-updates -F --compress --delete-after --checksum --recursive --rsh=/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null --rsync-path=sudo rsync --out-format=<<CHANGED>>%i %n%L /home/user/Workspace/git/infrastructure/vendor/roles/vendor.nexus3/files/groovy/ [email protected]:/var/nexus/groovy-raw-scripts/new/", "msg": "ssh: connect to host 10.132.0.15 port 22: Connection timed out\r\nrsync: connection unexpectedly closed (0 bytes received so far) [sender]\nrsync error: unexplained error (code 255) at io.c(226) [sender=3.1.1]\n", "rc": 255}

Would it be possible to add the use_ssh_args option to the task in order to load ssh arguments (if any are set)?

The following modification solved my problem:

  - name: Upload new scripts
    synchronize:
      archive: no
      checksum: yes
      recursive: yes
      delete: yes
      mode: push
+     use_ssh_args: yes
      src: "files/groovy/"
      dest: "{{ nexus_data_dir }}/groovy-raw-scripts/new/"

Stuck at handler "wait-for-nexus"

Hello.

I'm currently having an issue where the process is stuck at "wait-for-nexus" handler.

This is my playbook (it's simple as I'm just testing first)

 ---
 - name: Nexus
   hosts: nexus
   become: yes
 
   vars:
     nexus_admin_password: "admin"
 
   roles:
     - { role: ansible-ThoTeam.nexus3-oss, tags: ['savoirfairelinux.nexus3-oss'] }

And the output where it gets stuck:

RUNNING HANDLER [ansible-ThoTeam.nexus3-oss : wait-for-nexus] ************************************************************************************************
 Monday 16 April 2018  16:48:37 -0300 (0:00:10.410)       0:04:19.080 **********
 Using module_utils file /Library/Python/2.7/site-packages/ansible/module_utils/_text.py
 Using module_utils file /Library/Python/2.7/site-packages/ansible/module_utils/basic.py
 Using module_utils file /Library/Python/2.7/site-packages/ansible/module_utils/six/__init__.py
 Using module_utils file /Library/Python/2.7/site-packages/ansible/module_utils/parsing/convert_bool.py
 Using module_utils file /Library/Python/2.7/site-packages/ansible/module_utils/parsing/__init__.py
 Using module_utils file /Library/Python/2.7/site-packages/ansible/module_utils/pycompat24.py
 Using module file /Library/Python/2.7/site-packages/ansible/modules/utilities/logic/wait_for.py
 <10.1.1.85> ESTABLISH SSH CONNECTION FOR USER: ubuntu
 <10.1.1.85> SSH: ansible.cfg set ssh_args: (-o)(StrictHostKeyChecking=no)(-o)(ControlPath=none)
 <10.1.1.85> SSH: ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set: (-o)(IdentityFile="../terraform/ssh_keys/awstools-key")
 <10.1.1.85> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
 <10.1.1.85> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ubuntu)
 <10.1.1.85> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
 <10.1.1.85> SSH: PlayContext set ssh_common_args: (-o)(ProxyCommand=ssh -i ../terraform/ssh_keys/awstools-key -W %h:%p -q ubuntu@<public-ip-removed> -o StrictHostKeyChecking=no UserKnownHostsFile=/dev/null)
 <10.1.1.85> SSH: EXEC ssh -vvv -o StrictHostKeyChecking=no -o ControlPath=none -o 'IdentityFile="../terraform/ssh_keys/awstools-key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o 'ProxyCommand=ssh -i ../terraform/ssh_keys/awstools-key -W %h:%p -q ubuntu@<public-ip-removed> -o StrictHostKeyChecking=no UserKnownHostsFile=/dev/null' 10.1.1.85 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-nxnsrkoiykmcfgazabmiiztixljtcpaz; /usr/bin/python'"'"'"'"'"'"'"'"' && sleep 0'"'"''

All I can see in the handler file provided by you is:

 - name: wait-for-nexus
   wait_for:
     path: "{{ nexus_data_dir }}/log/nexus.log"
     search_regex: "Started Sonatype Nexus OSS .*"
     timeout: 1800

Tried to find that nexus.log but couldn't locate it.

Thank you in advance,

Add documentation for molecule selinux scenario

While crafting v2.0, a new selinux test scenario was added to molecule.

This scenario uses vagrant and virtualbox to deploy nexus (instead of docker for default) and is meant for local testing.

Add some doc for usage.

Calling Groovy script update_admin_password step fails when upgrading to 3.10.0-04

Perhaps this is already known, but I just updated to 3.10.0-04 and there is a failure in:

Calling Groovy script update_admin_password

HTTP Error 405: HTTP method POST is not supported by this URL", "pragma": "no-cache", "redirected": false, "server": "Nexus/3.10.0-04 (OSS)", "status": 405, "url": "http://localhost:8081/service/siesta/rest/v1/script/update_admin_password/run", "x_content_type_options": "nosniff"

Either this step should be skipped or run successful.

When the default version, i.e. nexus_version: '3.9.0-01', is used again, the script succeeds.
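As a temporary workaround, pinning the known-good version in the playbook vars avoids the failure:

    nexus_version: '3.9.0-01'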

Provisioning fails on Debian when using the nexus3-oss role with Packer

Hello,

I am getting an error running the nexus3-oss role. I am trying to install nexus repository on an EC2 instance provisioned using packer.

Error Details

The error I'm getting is coming from the invocation of this task that is part of the nexus3-oss role. Below is the dump of error output:

amazon-ebs: TASK [ansible-ThoTeam.nexus3-oss : Removing (potential) previously declared Groovy script create_repo_yum_proxy] ***
amazon-ebs: fatal: [default]: FAILED! => {"msg": "failed to resolve remote temporary directory from ansible-tmp-1547681805.35-1997090464384: `( umask 77 && mkdir -p \"` echo /home/admin/.ansible/tmp/ansible-tmp-1547681805.35-1997090464384 `\" && echo ansible-tmp-1547681805.35-1997090464384=\"` echo /home/admin/.ansible/tmp/ansible-tmp-1547681805.35-1997090464384 `\" )` returned empty string"}
amazon-ebs: 	to retry, use: --limit @/home/ubuntu/platformation/ansible/nexus.retry
amazon-ebs:
amazon-ebs: PLAY RECAP *********************************************************************
amazon-ebs: default                    : ok=155  changed=22   unreachable=0    failed=

I want to note that this is not the first time the task in question is invoked during the installation process; the previous calls succeeded before the failure happens. Rerunning the installation causes the failure to happen at a different invocation of the same task.

System Details

I have Ubuntu 18.04.1 LTS (Bionic Beaver) on my machine (host) that is running Ansible V2.7.5 and Packer V0.12.0. The provisioned EC2 instance (remote) on which nexus is being installed has Debian GNU/Linux 9 (stretch) as its OS.

Code

Here is a link to my nexus playbook as well as a link to my packer file.

Nexus homedir path is hard coded

Hi,

In the file nexus_install.yml, the Nexus home directory is hard coded and breaks if the nexus user's home is not under /home:

- name: Set NEXUS_HOME for the service user
  lineinfile:
    dest: "/home/{{ nexus_os_user }}/.bashrc"
    regexp: "^export NEXUS_HOME=.*"
    line: "export NEXUS_HOME={{ nexus_installation_dir }}/nexus-latest"
  notify:
    - nexus-service-stop
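A minimal sketch of a possible fix, assuming a getent lookup is acceptable in the role: resolve the user's actual home directory instead of assuming /home:

    - name: Look up the service user's home directory
      getent:
        database: passwd
        key: "{{ nexus_os_user }}"

    - name: Set NEXUS_HOME for the service user
      lineinfile:
        # getent_passwd field 4 is the user's home directory
        dest: "{{ getent_passwd[nexus_os_user][4] }}/.bashrc"
        regexp: "^export NEXUS_HOME=.*"
        line: "export NEXUS_HOME={{ nexus_installation_dir }}/nexus-latest"
      notify:
        - nexus-service-stop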

Error with Nexus 3.12.0.01 deployment

Hello,

First, thanks for taking over the maintenance of this role.

I've tried to install the latest Nexus version with the latest release of the role, but I get this error:

...
TASK [ansible-ThoTeam.nexus3-oss__2.1.2 : Removing (potential) previously declared Groovy script create_blobstore] *******************************
fatal: [ci-dev01]: FAILED! => {"changed": false, "connection": "close", "content": "", "date": "Mon, 28 May 2018 08:28:56 GMT", "failed": true, "msg": "Status code was not [204, 404]: HTTP Error 405: Http method DELETE is not supported by this URL", "redirected": false, "server": "Nexus/3.12.0-01 (OSS)", "status": 405, "url": "http://localhost:8085/service/siesta/rest/v1/script/create_blobstore", "x_content_type_options": "nosniff"}
...

Run against kubernetes deployed Nexus

Hi,

we are using the official docker image of nexus3 to run nexus on Openshift. As this only solves the "installation" of Nexus3 and not the configuration, I would like to use this playbook to set up nexus via groovy only.

Is this possible (similar to nexus_run_provisionning: false, but the other way around)?

Synchronizing groovy scripts silently fails if rsync is not installed on the target machine

I'm using this role to build an AMI. When using the latest Amazon Linux 2 AMI as base, synchronizing the groovy scripts fails, because rsync is not installed by default.

    amazon-ebs: TASK [ansible-thoteam.nexus3-oss : Sync new scripts to old and get differences] ***
    amazon-ebs: ok: [default] => {"changed": false, "cmd": "rsync -ric /var/nexus/groovy-raw-scripts/new/ /var/nexus/groovy-raw-scripts/current/ | cut -d\" \" -f 2 | sed \"s/\\.groovy//g\"", "delta": "0:00:00.003462", "end": "2018-10-19 01:26:19.553477", "rc": 0, "start": "2018-10-19 01:26:19.550015", "stderr": "/bin/sh: rsync: command not found", "stderr_lines": ["/bin/sh: rsync: command not found"], "stdout": "", "stdout_lines": []}

Unfortunately the sync task does not fail and provisioning continues without groovy scripts. Eventually the execution fails with 404 HTTP responses.

    amazon-ebs: TASK [ansible-thoteam.nexus3-oss : Calling Groovy script update_admin_password] ***
    amazon-ebs: fatal: [default]: FAILED! => {"changed": false, "connection": "close", "content": "", "date": "Fri, 19 Oct 2018 01:26:21 GMT", "msg": "Status code was 404 and not [200, 204]: HTTP Error 404: Not Found", "redirected": false, "server": "Nexus/3.14.0-04 (OSS)", "status": 404, "url": "http://localhost:8080/service/rest/v1/script/update_admin_password/run", "x_content_type_options": "nosniff", "x_siesta_faultid": "3c9f488d-febc-49e5-b3b8-176488f167dd"}

I can of course work around this by installing rsync beforehand.

Nonetheless I think it does make sense to ensure that required tooling is present on the target.
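A minimal sketch of that workaround, as a pre-task in the calling playbook:

    - name: Ensure rsync is installed on the target
      become: true
      package:
        name: rsync
        state: present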

No suitable Java Virtual Machine could be found on your system.

Hello!
I tried to install nexus with the role:

RUNNING HANDLER [ansible-ThoTeam.nexus3-oss : wait-for-nexus] ***********************************************************************************
fatal: [nexus]: FAILED! => {
    "changed": false, 
    "elapsed": 1800
}

MSG:

Timeout when waiting for search string Started Sonatype Nexus OSS .* in /var/nexus/log/nexus.log


RUNNING HANDLER [ansible-ThoTeam.nexus3-oss : wait-for-nexus-port] ******************************************************************************

NO MORE HOSTS LEFT ******************************************************************************************************************************
	to retry, use: --limit @/home/user/nexus.retry

PLAY RECAP **************************************************************************************************************************************
nexus                      : ok=62   changed=25   unreachable=0    failed=1   

 systemctl status nexus
โ— nexus.service - nexus service
   Loaded: loaded (/etc/systemd/system/nexus.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Sat 2018-12-08 07:37:28 EST; 31min ago
  Process: 5225 ExecStart=/opt/nexus-latest/bin/nexus start (code=exited, status=83)

Dec 08 07:37:27 localhost.localdomain systemd[1]: Starting nexus service...
Dec 08 07:37:28 localhost.localdomain systemd[1]: nexus.service: control process exited, code=exited status=83
Dec 08 07:37:28 localhost.localdomain systemd[1]: Failed to start nexus service.
Dec 08 07:37:28 localhost.localdomain systemd[1]: Unit nexus.service entered failed state.
Dec 08 07:37:28 localhost.localdomain systemd[1]: nexus.service failed.
[root@localhost ~]# /opt/nexus-latest/bin/nexus start 
No suitable Java Virtual Machine could be found on your system.
The version of the JVM must be at least 1.8 and at most 1.8.
Please define INSTALL4J_JAVA_HOME to point to a suitable JVM.
[root@localhost ~]# java --version
java 11.0.1 2018-10-16 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.1+13-LTS)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.1+13-LTS, mixed mode)
[root@localhost ~]# cat /etc/redhat-release 
CentOS Linux release 7.5.1804 (Core) 
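The launcher output above shows that Nexus 3.x requires exactly Java 8, while Java 11 is installed. A minimal sketch for CentOS 7, assuming the OpenJDK build is acceptable:

    - name: Install Java 8 for Nexus
      yum:
        name: java-1.8.0-openjdk
        state: present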

Provisioning fails when using the nexus3-oss role with Packer from macOS

Hello,

I am getting an error running the nexus3-oss role. I am trying to install nexus repository on an EC2 instance provisioned using packer.

Error Details

The error I'm getting is coming from the invocation of this task that is part of the nexus3-oss role. Below is the dump of error output:

amazon-ebs: TASK [ansible-ThoTeam.nexus3-oss : Archive scripts] ****************************
amazon-ebs: fatal: [default -> localhost]: FAILED! => {"changed": false, "module_stderr": "sudo: a password is required\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

It seems that the task is trying to package all the groovy scripts that ship with the role into a tar file so that they can be uploaded to the remote machine where nexus is being installed. To this end it's trying to write the tar file to /tmp/. In macOS 10.14.2, /tmp is a symlink with the following permissions:

0 lrwxr-xr-x@ 1 root  wheel    11B 17 Dec 15:43 /tmp -> private/tmp

Since I'm running ansible role through packer, it fails writing to this directory.

Workaround

I can run packer with sudo and it succeeds, but I believe I shouldn't need to use sudo.

System Details

I have macOS X 10.14.2 (18C54) on my machine (host) that is running Ansible V2.7.5 and Packer V1.3.3. The provisioned EC2 instance (remote) on which nexus is being installed has Debian GNU/Linux 9 (stretch) as its OS.

Code

Here is a link to my nexus playbook as well as a link to my packer file.

Fix yaml files to obey yamllint coding standards

Since molecule was put in place for the test, we are checking yaml syntax and coding style on each build on travis

Yamllint is currently reporting warnings on yaml syntax.

  • These warnings should be fixed.
  • We should fail the build in molecule if coding standards are not obeyed. This will prevent merging PRs with unsatisfactory yaml style in the future (a minimal config sketch follows this list).
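A minimal .yamllint sketch (the rule values are hypothetical and would need discussion):

    extends: default
    rules:
      line-length:
        max: 120
      truthy: disable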

Make nexus backup time configurable.

For the time being, the scheduled task to export nexus config/metadata and the cron task to back up blobstores are configured with a time hard coded in the role. This should be configurable.

Moreover, we should introduce a short delay between the scheduled task in nexus and the final backup. Running them at the exact same minute might lead to unusable restore points.
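For illustration, the defaults could look like this (both variable names and formats are hypothetical; nexus scheduled tasks and system cron use different expression formats):

    nexus_backup_task_schedule: '0 0 21 * * ?'   # hypothetical: nexus export scheduled task (Quartz-style)
    nexus_backup_cron_schedule: '15 21 * * *'    # hypothetical: blobstore backup cron, 15 minutes later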

Role should not update to latest nexus version by default

Since v2.2.0, this role is able to detect and install the latest version of nexus by default if no specific nexus_version is set in your playbook.

In case of a brand new install, installing the latest version is the expected result.

When running the role against an existing install, automatic upgrade breaks idempotence: running the role a second time without changing any parameters can still perform changes on the system.

When no nexus_version is set, the role should keep the already installed version by default, unless we specifically indicate that we want to upgrade (i.e. an extra var on the command line).
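A sketch of the intended usage once implemented, assuming the extra var ends up being named nexus_upgrade:

    ansible-playbook -i inventory nexus.yml -e nexus_upgrade=true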

Review the example playbook

The example playbook has been left untouched since forking the role and adding new features. Some quirks have already been spotted in there.

Review and fix.

new scripts path different for latest nexus version

Hi,

I get the following error for nexus 3.13.x:
fatal: [nexus01]: FAILED! => {"changed": false, "connection": "close", "content": "", "date": "Tue, 11 Sep 2018 06:46:25 GMT", "msg": "Status code was 401 and not [204, 404]: HTTP Error 401: Unauthorized", "redirected": false, "server": "Nexus/3.13.0-01 (OSS)", "status": 401, "url": "http://localhost:8081/service/rest/v1/script/setup_capability", "www_authenticate": "BASIC realm=\"Sonatype Nexus Repository Manager\"", "x_content_type_options": "nosniff"}

Similar errors are present for all scripts; the reason is that the API path changed after version 3.8.x:
https://help.sonatype.com/repomanager3/rest-and-integration-api/script-api/managing-and-running-scripts

Can you please parameterise the API path so it can be changed from vars?
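For illustration, the base path could be exposed as a variable (the name is hypothetical) whose value depends on the installed version:

    nexus_api_scripts_endpoint: "/service/rest/v1/script"           # nexus >= 3.8.x
    # nexus_api_scripts_endpoint: "/service/siesta/rest/v1/script"  # nexus < 3.8.x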

Thank you.
Cosmin

Playbook hangs when running an install of Nexus PRO

Hi,

The handler "wait-for-nexus" matches on the string "Started Sonatype Nexus OSS .*";
unfortunately, the valid string when running against a Nexus PRO is "Started Sonatype Nexus PRO .*".

Changing the test to "Started Sonatype Nexus .*" will allow supporting both versions.
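Applied to the handler shown earlier, the proposed change would give:

    - name: wait-for-nexus
      wait_for:
        path: "{{ nexus_data_dir }}/log/nexus.log"
        search_regex: "Started Sonatype Nexus .*"
        timeout: 1800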
Cheers,
Nicolas

Default blobstore not deleted for yum

This is a very minor side issue.

When nexus_delete_default_blobstore is set to true, the default blobstore for yum is not taken into account. It is currently set to default, which is the same as for other repository types, so it "works".

Still, this should be fixed as a matter of good practice.

Note to self: check if the issue is present for nuget.

Poll: Which ansible version are you using?

Hi users, and thanks for your interest!

For future releases, I'm wondering which minimum ansible version we should support for this role. My overall view is that it is quite easy to get the latest ansible version via pip install, either globally or through a python virtualenv. If it were only up to me, I would move to the latest version.

Meanwhile, some of you might not be able to run ansible 2.4, either because of server access restrictions or because other roles you are using require an older version.

In order to make an informed decision, could you please answer the following questions:

  • Which ansible version are you currently using?
  • If the answer above is either 2.2 or 2.3, would you be OK with switching the requirement to the latest available ansible 2.4?
  • If the answer above is no, would you be OK with switching the requirement to ansible 2.3?

Thanks in advance for your answers.

Nexus is not restarted when upgrading to a previously installed/extracted version.

Problem: if you upgrade to a pre-extracted version of Nexus, the role will not restart the service and leaves the previous version running. The nexus-latest symlink is updated correctly, but you need to restart nexus manually for the change to take effect.

How to reproduce:

  1. install latest version of nexus with the role
  2. install a previous version of nexus with a purge (-e nexus_version=x.x.x -e nexus_purge=true)
  3. change anything in the install (or recover from a backup for example).
  4. upgrade to the latest version which was already extracted in step 1 by running the role again.
  5. check the nexus-latest symlink in nexus_installation_dir. It points to the correct nexus version.
  6. connect to your nexus instance with your browser: it is still running the previous nexus version.

Expected result:
nexus is upgraded and restarted correctly by the role even if the new version was previously downloaded/extracted

Implement rotation for backups

Backups are currently kept forever unless you configure a rotation yourself.

Include backup rotation, based on a maximum number of backups to keep, in the new backup strategy introduced in v2.0.
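A minimal sketch of such a rotation (nexus_backup_keep is hypothetical), assuming one timestamped directory per backup under nexus_backup_dir:

    - name: Rotate nexus backups, keeping the most recent ones
      # list backup dirs newest first, then delete everything past the first nexus_backup_keep entries
      shell: ls -1dt {{ nexus_backup_dir }}/*/ | tail -n +{{ nexus_backup_keep + 1 }} | xargs -r rm -rf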

Support SSLCertificateChainFile

I used this role to configure my nexus instance, secured with a letsencrypt certificate

However, when trying to deploy, I got errors similar to:

PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

Note: I'm using java 8 (1.8.0_171), which does support letsencrypt certificates.

It turns out the issue was that the apache proxy needs to be configured with SSLCertificateChainFile.

A Stackoverflow post describes the solution, and a related letsencrypt community post confirms which file to use.

As described in those posts, the fix was simple to apply: I just edited my apache2 config at /etc/apache2/sites-enabled/{mySite} as follows:

  SSLCertificateFile /etc/letsencrypt/live/mysite/cert.pem
  SSLCertificateKeyFile /etc/letsencrypt/live/mysite/privkey.pem
  SSLCertificateChainFile /etc/letsencrypt/live/mysite/chain.pem <--- Added this line
  ServerName mysite
  ServerAdmin [email protected]

However, now my server is a snowflake.

It would be awesome if I could specify this along with the other certs in the role, i.e.:

    httpd_ssl_cert_file_location: "/etc/letsencrypt/live/{{public_hostname}}/cert.pem"
    httpd_ssl_cert_key_location: "/etc/letsencrypt/live/{{public_hostname}}/privkey.pem"
    httpd_ssl_cert_chain_location: "/etc/letsencrypt/live/{{public_hostname}}/chain.pem"
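In the role's vhost template, this could translate to an optional directive along these lines (a sketch, assuming the new variable stays undefined by default):

      SSLCertificateFile {{ httpd_ssl_cert_file_location }}
      SSLCertificateKeyFile {{ httpd_ssl_cert_key_location }}
    {% if httpd_ssl_cert_chain_location is defined %}
      SSLCertificateChainFile {{ httpd_ssl_cert_chain_location }}
    {% endif %}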

Enhancement needed for reprovisioning

This role already takes reprovisioning of nexus into account. However, when you just want to push some provisioning changes (add a repository, change a user password, add a new nexus role...), the nexus server will go down even though it is not always necessary.

Also, groovy scripts are systematically removed/redeclared even if they didn't change.

This overall process should be strengthened by removing unnecessary stops/starts/script updates when you just need to reprovision.

Note: this enhancement should fix a good part of what is needed for #7

Scheduled tasks should be modified when they exist

When the role is replayed for provisioning, scheduled tasks are systematically deleted and recreated.

They should instead be modified when they already exist.

A step further: we should only modify them when their attributes have changed.

A step into the future: the api call should return "changed" when a task is created/updated and "ok" when the task was left untouched.

Nexus scripts transfer creates content in {{ role_dir }}

This was introduced with #116

As part of the change, nexus custom groovy scripts are now uploaded as a tar.gz archive and unarchived on the target, rather than pushed with synchronize (better support for non-ssh ansible connection methods).

The role needs to dynamically create the archive, which is stored in the directory containing the role (and git-ignored).

Although this works in most situations, it might cause issues when running in Tower or AWX, or simply on a system where the role is installed in a directory that is not writable by the user launching the playbook.

The archive should be written to a location generally writable by that user. Given the size of the scripts, /tmp is the best candidate.
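A sketch of what the fixed task could look like (the exact paths are illustrative):

    - name: Archive scripts
      # build the archive on the controller, in a user-writable location
      archive:
        path: "{{ role_path }}/files/groovy/"
        dest: "/tmp/nexus3-oss-groovy-scripts.tar.gz"
        format: gz
      delegate_to: localhost
      become: false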
