
jfrog-cloud-installers's Introduction

JFrog-Cloud-Installers

Templates to deploy/manage a JFrog Artifactory enterprise cluster on various cloud providers.

jfrog-cloud-installers's People

Contributors

aayush-sood94, adambrauns, alexhung, anupteal, bbaassssiiee, brucechen94539, chukka, danielmkn, diginc, edpeixoto1, gavmain, giri-vsr, jainishshah17, jefferyfry, lafrenierejm, logeshwarsn, madotis, maheshjfrog, oumkale, peters95, robinino, serienmorder, shahiinn, vasukinjfrog, vinayagg, vlinevych, yahavi, zwarebear


jfrog-cloud-installers's Issues

[ansible/postgres] Default values are defined in vars/ instead of defaults/

Is this a request for help?: No


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Which installer: Ansible

Which product and version (eg: ansible & collection version - 1.1.2): Ansible 2.9.24, collection version 7.23.3

Which operating system and version(eg: ubuntu & version - 20.4): Ubuntu 20.04.2

What happened:
Values are always pulled from roles/postgres/vars/Debian.yml first because of Ansible's variable precedence.
That is expected Ansible behavior, but it prevents me from setting the values myself in my group_vars.

What you expected to happen:
Values for the following variables should be pulled from my group_vars file:
postgresql_data_dir
postgresql_bin_path
postgresql_config_path
postgresql_daemon
postgresql_external_pid_file
postgres_apt_key_url
postgres_apt_key_id
postgres_apt_repository_repo

How to reproduce it (as minimally and precisely as possible):
Use postgres role and define variables in group_vars

Anything else we need to know:
Suggestion: put the default values in roles/postgres/defaults/main.yml instead, like in the other roles.
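
A minimal sketch of the suggested fix (values are illustrative, not the role's actual defaults, and postgres_version is a hypothetical variable): move the definitions into roles/postgres/defaults/main.yml, where they have the lowest precedence and can be overridden from group_vars:

# roles/postgres/defaults/main.yml -- illustrative values only
postgresql_data_dir: "/var/lib/postgresql/{{ postgres_version }}/main"
postgresql_bin_path: "/usr/lib/postgresql/{{ postgres_version }}/bin"
postgresql_config_path: "/etc/postgresql/{{ postgres_version }}/main"
postgresql_daemon: postgresql
postgres_apt_key_url: "https://www.postgresql.org/media/keys/ACCC4CF8.asc"
postgres_apt_repository_repo: "deb http://apt.postgresql.org/pub/repos/apt/ {{ ansible_distribution_release }}-pgdg main"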

[ansible/nginx-ssl] Support for Docker registries via subdomain

Is this a request for help?:

No.


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

IMO it's a BUG REPORT.

Which installer:

Ansible

Which product and version (eg: ansible & collection version - 1.1.2):

Artifactory. Collection version 7.19.8, Artifactory 7.19.4

Which operating system and version(eg: ubuntu & version - 20.4):

Ubuntu 20.04

What happened:

The JFrog collection is missing a facility to properly configure the Docker Registry using the subdomain method.

https://www.jfrog.com/confluence/display/JFROG/Getting+Started+with+Artifactory+as+a+Docker+Registry#GettingStartedwithArtifactoryasaDockerRegistry-TheSubdomainMethod

This typically requires 2 bits of nginx config (as provided by Artifactory's own Nginx config generator):

  • setting server name to handle wildcard domains, e.g.
    server_name ~(?<repo>.+)\.artifactory.example.com artifactory.example.com;
    
  • adding a redirect:
    rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2;
    

Setting "server_name" is currently already possible by overriding server_name variable of the role (although
the variable name should probably be called nginx_server_name as it's very generic`

Setting up the required redirect, however, is currently not possible with the artifactory_nginx_ssl role,
which results in a broken Docker Registry setup.

What you expected to happen:

Ideally - and more flexibly - I would like to see the template support any custom rule.

For example:

  server_name {{ artifactory_nginx_ssl_server_name | default(server_name) }};
  {{ artifactory_nginx_ssl_config_custom }}

  if ($http_x_forwarded_proto = '') {
    set $http_x_forwarded_proto $scheme;
  }

Alternatively - and maybe clearer to the role's end users - this entry could be added based on a boolean flag (e.g. artifactory_docker_registry_subdomain: true):

  rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2;
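
A sketch of how the role's nginx template could gate this (a hypothetical snippet, using the flag name suggested above; not the role's actual template):

  {% if artifactory_docker_registry_subdomain | default(false) %}
  rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2;
  {% endif %}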

How to reproduce it (as minimally and precisely as possible):

Install Artifactory using example roles. Enable Docker registry and try to configure it using subdomain method documented here: https://www.jfrog.com/confluence/display/JFROG/Getting+Started+with+Artifactory+as+a+Docker+Registry

Anything else we need to know:

[ansible/xray] Running only the Xray installation, cannot access the server

Which installer:
Ansible, postgres and xray roles ONLY
playbook:
  collections:
    - jfrog.platform
  roles:
    - postgres
    - xray

Which product and version (eg: ansible & collection version - 1.1.2):
Ansible 2.9.6
Xray latest (following the base Galaxy collection)

Which operating system and version(eg: ubuntu & version - 20.4):
Ubuntu 20.4

What happened:
In the docs: "The xray role will install Xray software onto the host. An Artifactory server and Postgres database are required."
I already have a server dedicated to Artifactory, so I defined the variable:
jfrog_url: https://myartifactory

I ran a playbook to install only postgres and xray. The Ansible installation works, but I can't access the Xray portal. For example:
http://myservername -> nothing
http://myservername:8000 -> nothing

netstat -tunlp
(nothing listening on ports 8000 or 80)
service running:
● xray.service - JFrog Xray service
Loaded: loaded (/lib/systemd/system/xray.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2021-05-12 21:13:37 GMT; 44s ago
Process: 416158 ExecStart=/opt/jfrog/xray/app/bin/xrayManage.sh start (code=exited, status=0/SUCCESS)
Main PID: 418184 (jf-server)
Tasks: 0 (limit: 7031)
Memory: 1.0M
CGroup: /system.slice/xray.service
‣ 418184 /opt/jfrog/xray/app/server/bin/jf-server

May 12 21:13:12 TisXRYx21 systemd[1]: Starting JFrog Xray service...
May 12 21:13:12 TisXRYx21 xrayManage.sh[416158]: Resolved .shared.user (xray) from /opt/jfrog/xray/var/etc/system.yaml
May 12 21:13:12 TisXRYx21 xrayManage.sh[416158]: Resolved .shared.group (xray) from /opt/jfrog/xray/var/etc/system.yaml
May 12 21:13:12 TisXRYx21 su[416308]: (to xray) root on none
May 12 21:13:12 TisXRYx21 su[416308]: pam_unix(su:session): session opened for user xray by (uid=0)
May 12 21:13:37 TisXRYx21 systemd[1]: Started JFrog Xray service.

install.log -> no errors
console.log ->
2021-05-12T21:13:39.119Z [jfrou] [FATAL] [15f760333f212368] [bootstrap.go:105 ] [main ] - Could not join access, err: cluster join: Failed joining the cluster; Error: Error response from service registry, status code: 404; message: HTTP 404 Not Found

...

2021-05-12T21:15:36.246Z [jfxr ] [INFO ] [ ] [access_client_bootstrap:167 ] [main ] (--wrapper--)Cluster join: Retry 120: Service registry ping failed, will retry. Error: Error while trying to connect to local router at address 'http://localhost:8046/access': Get "http://localhost:8046/access/api/v1/system/ping": dial tcp 127.0.0.1:8046: connect: connection refused
root@TisXRYx21:/opt/jfrog/xray/var/log#

systendiagnostics.log -> all good here, only DEBUG lines

What you expected to happen:

To be able to access the Xray server.

Anything else we need to know:

Ansible role fails in --check mode

Is this a request for help?:
Yeah.

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Which installer: 7.18.6

Which product and version (eg: ansible & collection version - 1.1.2): 7.18.6

Which operating system and version(eg: ubuntu & version - 20.4): ubuntu 18.04

What happened:
Ran the Artifactory installation from the collection jfrog.platform:

TASK [jfrog.platform.artifactory : Wait for artifactory to be fully deployed] *****************************************************************************************
fatal: [artifactory.mydomain]: FAILED! =>
msg: 'The conditional check ''result.status == 200'' failed. The error was: error while evaluating conditional (result.status == 200): ''dict object'' has no attribute ''status'''

What you expected to happen:
I expect it to go through.

How to reproduce it (as minimally and precisely as possible):
Run the collection installation in --check mode

Anything else we need to know:
This is required for our CI to pass the --check mode.
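
A sketch of how the role's wait task could be made check-mode safe (assuming it is based on the uri module, per the health URL seen in other reports; not the collection's actual code):

- name: Wait for artifactory to be fully deployed
  ansible.builtin.uri:
    url: http://127.0.0.1:8082/router/api/v1/system/health
    status_code: 200
  register: result
  until: result.status == 200
  retries: 25
  delay: 10
  # uri does not execute in check mode, so 'result' has no 'status' attribute;
  # skip the task instead of failing the until-conditional.
  when: not ansible_check_mode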

[ansible/postgres] database creation may fail due to locale mismatch

Is this a request for help?:

It's a bug report.

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

BUG REPORT

Which installer:

postgres

Which product and version:

Artifactory - latest (unspecified); Postgres version - unspecified (role defaults).

Ansible collection of installers - 1.1.2

What happened:

The Ansible task to create a database fails because of a mismatch between the database encoding and the OS locale.

TASK [jfrog.installers.postgres : initialize PostgreSQL database cluster] ***********************************************************************************************************************************
[WARNING]: Using world-readable permissions for temporary files Ansible needs to create when becoming an unprivileged user. This may be insecure. For information on securing this, see
https://docs.ansible.com/ansible/user_guide/become.html#risks-of-becoming-an-unprivileged-user
ok: [artifactory.ourdomain.io]

TASK [jfrog.installers.postgres : install postgres configuration] *******************************************************************************************************************************************
[WARNING]: Using world-readable permissions for temporary files Ansible needs to create when becoming an unprivileged user. This may be insecure. For information on securing this, see
https://docs.ansible.com/ansible/user_guide/become.html#risks-of-becoming-an-unprivileged-user
changed: [artifactory.ourdomain.io] => (item=pg_hba.conf)
changed: [artifactory.ourdomain.io] => (item=postgresql.conf)

TASK [jfrog.installers.postgres : enable postgres service] **************************************************************************************************************************************************
ok: [artifactory.ourdomain.io]

TASK [jfrog.installers.postgres : Hold until Postgresql is up and running] **********************************************************************************************************************************
ok: [artifactory.ourdomain.io]

TASK [jfrog.installers.postgres : Create users] *************************************************************************************************************************************************************
[WARNING]: Using world-readable permissions for temporary files Ansible needs to create when becoming an unprivileged user. This may be insecure. For information on securing this, see
https://docs.ansible.com/ansible/user_guide/become.html#risks-of-becoming-an-unprivileged-user
changed: [artifactory.ourdomain.io] => (item=None)
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.postgres : Create a database] ********************************************************************************************************************************************************
[WARNING]: Using world-readable permissions for temporary files Ansible needs to create when becoming an unprivileged user. This may be insecure. For information on securing this, see
https://docs.ansible.com/ansible/user_guide/become.html#risks-of-becoming-an-unprivileged-user
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: DETAIL:  The chosen LC_CTYPE setting requires encoding "LATIN1".
failed: [artifactory.ourdomain.io] (item={'db_name': 'artifactory', 'db_owner': 'artifactory'}) => {"ansible_loop_var": "item", "changed": false, "item": {"db_name": "artifactory", "db_owner": "artifactory"}, "msg": "Database query failed: encoding \"UTF8\" does not match locale \"en_US\"\nDETAIL:  The chosen LC_CTYPE setting requires encoding \"LATIN1\".\n"}

PLAY RECAP **************************************************************************************************************************************************************************************************
artifactory.ourdomain.io : ok=22   changed=8    unreachable=0    failed=1    skipped=44   rescued=0    ignored=0

What you expected to happen:

Artifactory database created.

How to reproduce it (as minimally and precisely as possible):

- name: Install and configure Artifactory
  hosts: artifactory_test
  become: true
  tasks:
    - include_role:
        name: jfrog.installers.postgres
      vars:
        db_users:
          - db_user: "artifactory"
            db_password: "mypassword"
        dbs:
          - db_name: "artifactory"
            db_owner: "artifactory"      
    - include_role:
        name: jfrog.installers.artifactory

Anything else we need to know:

The OS locale is:

root@artifactory:~# localectl
   System Locale: LANG=en_US
                  LANGUAGE=en_US:
       VC Keymap: n/a
      X11 Layout: us
       X11 Model: pc105
root@artifactory:~# cat /etc/default/locale
LANG="en_US"
LANGUAGE="en_US:"

Although, since this role overrides locales, this should not matter - perhaps it just wasn't tested in this setup.
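
A possible fix in the role (a sketch using the postgresql_db module; the locale values are illustrative): create the database from template0 with an explicit encoding and locale so it does not depend on the cluster default:

- name: Create a database
  community.postgresql.postgresql_db:
    name: "{{ item.db_name }}"
    owner: "{{ item.db_owner }}"
    encoding: UTF-8
    lc_collate: en_US.UTF-8
    lc_ctype: en_US.UTF-8
    template: template0   # required when overriding encoding/locale
  loop: "{{ dbs }}"
  become: true
  become_user: postgres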

Terraform template_file limit hit when using several licenses

The Terraform template_file output used as user_data is limited to 16,384 characters (the EC2 user_data limit). The default userdata.sh file is itself 6.5k characters, and a single E+ license can be about 4k characters. In addition, SSL certs and other variables push the character count up further. This causes the terraform apply action to fail:

expected length of user_data to be in the range (0 - 16384), got #!/bin/bash ....

[ansible/artifactory] installation fails due to undefined variable

Is this a request for help?:

No

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

BUG REPORT

Which installer:

Ansible 7.18.6

Which product and version (eg: ansible & collection version - 1.1.2):

Artifactory, 7.18.6

Which operating system and version(eg: ubuntu & version - 20.4):

Ubuntu 20.04

What happened:

TASK [jfrog.platform.artifactory : Create required directories] ******************************************************************************************************************************************
ok: [artifactory.example.com] => (item=/data/jfrog-filestore)
ok: [artifactory.example.com] => (item=/opt/jfrog/artifactory/var/data)
ok: [artifactory.example.com] => (item=/opt/jfrog/artifactory/var/etc)
ok: [artifactory.example.com] => (item=/opt/jfrog/artifactory/var/etc/security/)
ok: [artifactory.example.com] => (item=/opt/jfrog/artifactory/var/etc/artifactory/info/)

TASK [jfrog.platform.artifactory : Configure systemyaml] *************************************************************************************************************************************************
changed: [artifactory.example.com]

TASK [jfrog.platform.artifactory : Configure master key] *************************************************************************************************************************************************
ok: [artifactory.example.com]

TASK [jfrog.platform.artifactory : Configure join key] ***************************************************************************************************************************************************
changed: [artifactory.example.com]

TASK [jfrog.platform.artifactory : Configure installer info] *********************************************************************************************************************************************
fatal: [artifactory.example.com]: FAILED! => changed=false
  msg: 'AnsibleUndefinedVariable: ''platform_collection_version'' is undefined'

What you expected to happen:

Successful installation.

How to reproduce it (as minimally and precisely as possible):

- name: Install and configure Artifactory
  hosts: artifactory_test
  become: true
  tasks:
    - include_role:
        name: jfrog.platform.artifactory

Anything else we need to know:

The workaround is relatively simple - define this value in the inventory.
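
For example, in group_vars (the variable name comes from the error above; the value should match the installed collection version):

# group_vars/artifactory_test.yml
platform_collection_version: 7.18.6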

[ansible/artifactory] providing Artifactory license has no effect

Is this a request for help?:

No.


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Bug report.

Which installer:

Ansible installer.

Which product and version (eg: ansible & collection version - 1.1.2):

Artifactory 7.10.2

Ansible collection 1.1.2 from Galaxy.

Which operating system and version(eg: ubuntu & version - 20.4):

Ubuntu 18.04.5 LTS

What happened:

Initially I ran the installation of Artifactory with basic settings (see details in #76) and it came up with the Getting Started dialog, asking me to change the password and apply the license.

I need the installation to be fully automated, so I specified the license for this instance:

artifactory_is_primary: true
artifactory_license1: "this is the license"

but it had no effect. I can see that the value is used:

TASK [jfrog.installers.artifactory : use license file] ******************************************************************************************************************************************************
task path: /Users/myuser/git/auto/galaxy-roles/ansible_collections/jfrog/installers/roles/artifactory/tasks/install.yml:167
skipping: [artifactory.lab.mydomain.com] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [jfrog.installers.artifactory : use license strings] ***************************************************************************************************************************************************
task path: /Users/myuser/git/auto/galaxy-roles/ansible_collections/jfrog/installers/roles/artifactory/tasks/install.yml:174
redirecting (type: lookup) ansible.builtin.hashi_vault to community.general.hashi_vault
changed: [artifactory.lab.mydomain.com] => {"changed": true, "checksum": "82f4c40b55b899e7b9ba9667722b59344b880427", "dest": "/opt/jfrog/artifactory/var/etc/artifactory/artifactory.cluster.license", "gid": 0, "group": "root", "md5sum": "b6777008a3f2d35a27b7b67b917afed5", "mode": "0644", "owner": "root", "size": 933, "src": "/tmp/.ansible-adminuser/ansible-tmp-1613384191.057653-5889-7286596822962/source", "state": "file", "uid": 0}

but the UI still prompts me to enter the license afterwards.

What you expected to happen:

After specifying the license in Ansible variables, I expect the license to be applied.

How to reproduce it (as minimally and precisely as possible):

- name: Install and configure Artifactory
  hosts: artifactory_test
  become: true
  tasks:
    - include_role:
        name: jfrog.installers.postgres
    - include_role:
        name: jfrog.installers.artifactory

variables:

artifactory_is_primary: true
artifactory_license1: "this is the license"

# postgres role:
db_users:
  - db_user: "artifactory"
    db_password: "mypassword"

# artifactory role:
server_name: "{{ ansible_fqdn }}"
artifactory_version: 7.10.2

artifactory_is_primary: true
db_download_url: "https://jdbc.postgresql.org/download/postgresql-42.2.18.jar"
db_type: "postgresql"
db_driver: "org.postgresql.Driver"
db_url: "jdbc:postgresql://localhost:5432/artifactory"
db_user: "artifactory"
db_password: "mypassword"

Anything else we need to know:

  1. Arguably this may be not a bug but a new feature request. The documentation for the Artifactory role isn't very clear:
    https://github.com/jfrog/JFrog-Cloud-Installers/tree/master/Ansible/ansible_collections/jfrog/installers/roles/artifactory#primary-vars-vars-used-by-the-primary-artifactory-server
    From the end user's perspective: if the installer asks me to provide the license and I do, it's a reasonable expectation for it to be applied; otherwise, what is the point?
  2. https://www.jfrog.com/confluence/display/JFROG/Artifactory+Bootstrap+YAML+File suggests creating a bootstrap config, which is what I'll be trying to do on my own; presumably the Ansible installer collection should also generate such a bootstrap YAML out of the specified artifactory_license1-5 variables (or the input file).

new platform version fails on Artifactory restart

Is this a request for help?: YES


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Which installer: 7.19.4

Which product and version (eg: ansible & collection version - 1.1.2): 7.19.4

Which operating system and version(eg: ubuntu & version - 20.4): 18.04

What happened:
I ran the platform the same way I ran the older 7.18.6 version (on a clean machine), and this time Artifactory doesn't start correctly: I got 502 Bad Gateway. Also:

TASK [jfrog.platform.artifactory : Wait for artifactory to be fully deployed] ****************************************************************************
FAILED - RETRYING: Wait for artifactory to be fully deployed (25 retries left).
...
FAILED - RETRYING: Wait for artifactory to be fully deployed (1 retries left).
fatal: [artichoke.core.speechmatics.io]: FAILED! => changed=false
  attempts: 25
  elapsed: 0
  msg: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno 111] Connection refused>'
  redirected: false
  status: -1
  url: http://127.0.0.1:8082/router/api/v1/system/health

PLAY RECAP ***********************************************************************************************************************************************
artichoke.core.speechmatics.io : ok=50   changed=0    unreachable=0    failed=1    skipped=50   rescued=0    ignored=0

What you expected to happen:
I would expect Artifactory to restart successfully

How to reproduce it (as minimally and precisely as possible):
Clean the server of the old Artifactory, also drop the database (I use postgres), then start the installation playbook.

Anything else we need to know:

7.18.6 runs both nginx roles when ssl enabled.

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Which installer: 7.18.6

Which product and version (eg: ansible & collection version - 1.1.2): 7.18.6

Which operating system and version(eg: ubuntu & version - 20.4): ubuntu 18.04

What happened:
Both artifactory_nginx and artifactory_nginx_ssl roles run, but it should be just one of them.

According to the defaults:

# Set this to true when SSL is enabled (to use the artifactory_nginx_ssl role); defaults to false (implies artifactory uses the artifactory_nginx role)
artifactory_nginx_ssl_enabled: false

# Set this to false when nginx is disabled; defaults to true (implies artifactory uses the artifactory_nginx role)
artifactory_nginx_enabled: true

What you expected to happen:

I expect only artifactory_nginx_ssl to run when
artifactory_nginx_ssl_enabled: true

How to reproduce it (as minimally and precisely as possible):

Set:
artifactory_nginx_ssl_enabled: true

then run installation.

Anything else we need to know:
artifactory_nginx_enabled should be used to enable/disable nginx completely, even if artifactory_nginx_ssl_enabled is true.

artifactory_nginx_ssl_enabled should select the SSL variant instead of the standard nginx role when true.
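
A sketch of the selection logic the playbook could apply (illustrative, not the collection's current tasks):

- ansible.builtin.include_role:
    name: artifactory_nginx
  when: artifactory_nginx_enabled and not artifactory_nginx_ssl_enabled

- ansible.builtin.include_role:
    name: artifactory_nginx_ssl
  when: artifactory_nginx_enabled and artifactory_nginx_ssl_enabled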

Create Upgrade.yaml version that supports using docker-compose

Is this a request for help?:

No, we have received help via email, this is an enhancement request.

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

This is a feature request. Currently the wiki page on upgrading Mission Control (https://www.jfrog.com/confluence/display/JFROG/Upgrading+Mission+Control#UpgradingMissionControl-UpgradingfromVersion4.xto4.x) covers upgrading via Docker Compose, RPM, and Debian. As with many orgs, we are set up to use Docker Compose. The Ansible playbook does not use docker-compose for its processes, so we will have to manually modify steps in the playbook files to stay in line with our installation method. This request asks for options based on how you want to upgrade: for example, upgrade-debian.yaml, upgrade-rpm.yaml, and upgrade-docker-compose.yaml files to choose from, or parameters we can pass to choose which path of the upgrade.yaml role is used.

Which installer:
Mission Control (though Distribution and XRay would also be great)

Which product and version (eg: ansible & collection version - 1.1.2):
Ansible 2.9, Mission Control 4.7.11

Which operating system and version(eg: ubuntu & version - 20.4):
RHEL 7.8

What happened:
We need to upgrade and wish to automate the process; the published playbook does not use the docker-compose method.

What you expected to happen:
N/A

How to reproduce it (as minimally and precisely as possible):
N/A

Anything else we need to know:
We have received manual steps from a JFrog Enterprise Solution Lead, so we are not holding up our upgrade because of this; we would simply like this to be available for future upgrades/installations of Mission Control and other JFrog products.

[ansible/artifactory] installation fails on systemd unit creation

Is this a request for help?:

No


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

BUG REPORT

Which installer:

Ansible; collection 1.1.2 (latest)

Which product and version:

Artifactory 7.12.5

Verified (and reproduced the same way) also on 7.10.2 which is listed as certified in README.md

OS: Ubuntu 18.04

What happened:

Ansible fails at the 'jfrog.installers.artifactory : create artifactory service' task:

... REDACTED FOR BREVITY...

TASK [jfrog.installers.postgres : install postgres configuration] **************
[WARNING]: Using world-readable permissions for temporary files Ansible needs
to create when becoming an unprivileged user. This may be insecure. For
information on securing this, see
https://docs.ansible.com/ansible/user_guide/become.html#risks-of-becoming-an-
unprivileged-user
changed: [artifactory.ourdomain.io] => (item=pg_hba.conf)
changed: [artifactory.ourdomain.io] => (item=postgresql.conf)

TASK [jfrog.installers.postgres : enable postgres service] *********************
ok: [artifactory.ourdomain.io]

TASK [jfrog.installers.postgres : Hold until Postgresql is up and running] *****
ok: [artifactory.ourdomain.io]

TASK [jfrog.installers.postgres : Create users] ********************************
[WARNING]: Using world-readable permissions for temporary files Ansible needs
to create when becoming an unprivileged user. This may be insecure. For
information on securing this, see
https://docs.ansible.com/ansible/user_guide/become.html#risks-of-becoming-an-
unprivileged-user
changed: [artifactory.ourdomain.io] => (item=None)
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.postgres : Create a database] ***************************

TASK [jfrog.installers.postgres : Grant privs on db] ***************************

TASK [jfrog.installers.postgres : restart postgres] ****************************
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.postgres : debug] ***************************************
ok: [artifactory.ourdomain.io] => {
    "msg": "Restarted postgres service [email protected]"
}

TASK [include_role : jfrog.installers.artifactory] *****************************

TASK [jfrog.installers.artifactory : Check to see if artifactory has a service and stop it] ***
fatal: [artifactory.ourdomain.io]: FAILED! => {"changed": false, "msg": "Could not find the requested service artifactory: host"}

TASK [jfrog.installers.artifactory : Check to see if artifactory has a service and stop it] ***
fatal: [artifactory.ourdomain.io]: FAILED! => {"changed": false, "msg": "Could not find the requested service artifactory: host"}

TASK [jfrog.installers.artifactory : perform installation] *********************
included: /Users/waldekm/git/auto/galaxy-roles/ansible_collections/jfrog/installers/roles/artifactory/tasks/install.yml for artifactory.ourdomain.io

TASK [jfrog.installers.artifactory : debug] ************************************
ok: [artifactory.ourdomain.io] => {
    "msg": "Performing installation of Artifactory..."
}

TASK [install nginx] ***********************************************************

TASK [jfrog.installers.artifactory_nginx : debug] ******************************
ok: [artifactory.ourdomain.io] => {
    "msg": "Attempting nginx installation without dependencies for potential offline mode."
}

TASK [jfrog.installers.artifactory_nginx : install nginx without dependencies] ***
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory_nginx : configure main nginx conf file.] ****
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory_nginx : configure the artifactory nginx conf] ***
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory_nginx : restart nginx] **********************
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : create group for artifactory] *************
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : create user for artifactory] **************
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : ensure jfrog_home_directory exists] *******
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : Local Copy artifactory] *******************
skipping: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : download artifactory] *********************
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : Create artifactory home folder] ***********
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : Create Symlinks for var folder] ***********
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : Create Symlinks for app folder] ***********
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : ensure artifactory_file_store_dir exists] ***
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : ensure data exists] ***********************
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : ensure etc exists] ************************
ok: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : use specified system yaml] ****************
skipping: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : configure system yaml template] ***********
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : ensure /opt/jfrog/artifactory/var/etc/security/ exists] ***
ok: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : configure master key] *********************
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : configure join key] ***********************
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : ensure /opt/jfrog/artifactory/var/etc/artifactory/info/ exists] ***
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : configure installer info] *****************
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : use specified binary store] ***************
skipping: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : use default binary store] *****************
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : use license file] *************************
skipping: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : use license strings] **********************
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : Copy local database driver] ***************
skipping: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : download database driver] *****************
changed: [artifactory.ourdomain.io]

TASK [jfrog.installers.artifactory : create artifactory service] ***************
fatal: [artifactory.ourdomain.io]: FAILED! => {"changed": true, "cmd": "/opt/jfrog/artifactory/app/bin/installService.sh", "delta": "0:00:01.104974", "end": "2021-01-07 11:25:17.556423", "msg": "non-zero return code", "rc": 1, "start": "2021-01-07 11:25:16.451449", "stderr": "", "stderr_lines": [], "stdout": "\nInstalling artifactory as a Unix service that will run as user artifactory and group artifactory\nInstalling artifactory with home /opt/jfrog/artifactory/app\nCreating user artifactory...already exists... DONE\nCreating Group artifactory...already exists... DONE\nModifying environment file /opt/jfrog/artifactory/app/bin/artifactory.default... DONE\n\u001b[33m** INFO: Please create/edit system.yaml file in /opt/jfrog/artifactory/var/etc to set the correct environment\u001b[0m\n\u001b[33m         Templates with information can be found in the same directory\u001b[0m\nInitializing artifactory.service service with systemctl... DONE\nRemoving old custom drivers : /opt/jfrog/artifactory/app/artifactory/tomcat/lib/jf_*\nCopying /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib/postgresql-42.2.18.jar to /opt/jfrog/artifactory/app/artifactory/tomcat/lib/jf_postgresql-42.2.18.jar\nCopying /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib/README.md to /opt/jfrog/artifactory/app/artifactory/tomcat/lib/jf_README.md\n\nSetting file permissions...\nChecking permissions on /opt/jfrog/artifactory/var\n/opt/jfrog/artifactory/var is already owned by artifactory:artifactory.\n\u001b[38;5;197m[ERROR\u001b[0m\u001b[38;5;197m] \u001b[0mFailed to set create shared.user with artifactory in /opt/jfrog/artifactory/var/etc/system.yaml, \\ncommand used : /opt/jfrog/artifactory/app/bin/../third-party/yq/yq w -i \"/opt/jfrog/artifactory/var/etc/system.yaml\" \"shared.user\" \"artifactory\"", "stdout_lines": ["", "Installing artifactory as a Unix service that will run as user artifactory and group artifactory", "Installing artifactory with home /opt/jfrog/artifactory/app", "Creating user artifactory...already exists... DONE", "Creating Group artifactory...already exists... DONE", "Modifying environment file /opt/jfrog/artifactory/app/bin/artifactory.default... DONE", "\u001b[33m** INFO: Please create/edit system.yaml file in /opt/jfrog/artifactory/var/etc to set the correct environment\u001b[0m", "\u001b[33m         Templates with information can be found in the same directory\u001b[0m", "Initializing artifactory.service service with systemctl... DONE", "Removing old custom drivers : /opt/jfrog/artifactory/app/artifactory/tomcat/lib/jf_*", "Copying /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib/postgresql-42.2.18.jar to /opt/jfrog/artifactory/app/artifactory/tomcat/lib/jf_postgresql-42.2.18.jar", "Copying /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib/README.md to /opt/jfrog/artifactory/app/artifactory/tomcat/lib/jf_README.md", "", "Setting file permissions...", "Checking permissions on /opt/jfrog/artifactory/var", "/opt/jfrog/artifactory/var is already owned by artifactory:artifactory.", "\u001b[38;5;197m[ERROR\u001b[0m\u001b[38;5;197m] \u001b[0mFailed to set create shared.user with artifactory in /opt/jfrog/artifactory/var/etc/system.yaml, \\ncommand used : /opt/jfrog/artifactory/app/bin/../third-party/yq/yq w -i \"/opt/jfrog/artifactory/var/etc/system.yaml\" \"shared.user\" \"artifactory\""]}

PLAY RECAP *********************************************************************
artifactory.ourdomain.io : ok=50   changed=31   unreachable=0    failed=2    skipped=51   rescued=1    ignored=0

What you expected to happen:

Ansible successfully installing the unit file and moving on with the installation.

How to reproduce it (as minimally and precisely as possible):

Standard playbook based on
https://github.com/jfrog/JFrog-Cloud-Installers/blob/master/Ansible/examples/playbook-rt.yml

Ran against a Ubuntu 18.04.5 LTS VM.

- name: Install and configure Artifactory
  hosts: artifactory_test
  become: true
  tasks:
    - include_role:
        name: jfrog.installers.postgres
    - include_role:
        name: jfrog.installers.artifactory

Additional group vars (in artifactory_test.yml)

# postgres role:
db_users:
  - db_user: "artifactory"
    db_password: "mypassword"

# artifactory role:
server_name: "{{ ansible_fqdn }}"
artifactory_version: 7.10.2

artifactory_is_primary: true
db_download_url: "https://jdbc.postgresql.org/download/postgresql-42.2.18.jar"
db_type: "postgresql"
db_driver: "org.postgresql.Driver"
db_url: "jdbc:postgresql://localhost:5432/artifactory"
db_user: "artifactory"
db_password: "mypassword"

Anything else we need to know:

I did some investigation; looking at this task has shown me it's running the /opt/jfrog/artifactory/app/bin/installService.sh script.

I executed it manually and the output is as follows:

root@artifactory:/opt/jfrog/artifactory/app/bin# ./installService.sh

Installing artifactory as a Unix service that will run as user artifactory and group artifactory
Installing artifactory with home /opt/jfrog/artifactory/app
Creating user artifactory...already exists... DONE
Creating Group artifactory...already exists... DONE
Modifying environment file /opt/jfrog/artifactory/app/bin/artifactory.default... DONE
** INFO: Please create/edit system.yaml file in /opt/jfrog/artifactory/var/etc to set the correct environment
         Templates with information can be found in the same directory
Stopping the artifactory service...
Initializing artifactory.service service with systemctl... DONE
Removing old custom drivers : /opt/jfrog/artifactory/app/artifactory/tomcat/lib/jf_postgresql-42.2.18.jar /opt/jfrog/artifactory/app/artifactory/tomcat/lib/jf_README.md
Copying /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib/postgresql-42.2.18.jar to /opt/jfrog/artifactory/app/artifactory/tomcat/lib/jf_postgresql-42.2.18.jar
Copying /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib/README.md to /opt/jfrog/artifactory/app/artifactory/tomcat/lib/jf_README.md

Setting file permissions...
Checking permissions on /opt/jfrog/artifactory/var
/opt/jfrog/artifactory/var is already owned by artifactory:artifactory.
[ERROR] Failed to set create shared.user with artifactory in /opt/jfrog/artifactory/var/etc/system.yaml, \ncommand used : /opt/jfrog/artifactory/app/bin/../third-party/yq/yq w -i "/opt/jfrog/artifactory/var/etc/system.yaml" "shared.user" "artifactory"
root@artifactory:/opt/jfrog/artifactory/app/bin#

Content of this file afterwards is:

root@artifactory:/opt/jfrog/artifactory/app/bin# ls -al /opt/jfrog/artifactory/var/etc/system.yaml
-rw-r--r-- 1 root root 1314 Jan  7 11:25 /opt/jfrog/artifactory/var/etc/system.yaml
root@artifactory:/opt/jfrog/artifactory/app/bin# cat /opt/jfrog/artifactory/var/etc/system.yaml
## @formatter:off
## JFROG ARTIFACTORY SYSTEM CONFIGURATION FILE
## HOW TO USE: comment-out any field and keep the correct yaml indentation by deleting only the leading '#' character.
configVersion: 1

## NOTE: JFROG_HOME is a place holder for the JFrog root directory containing the deployed product, the home directory for all JFrog products.
## Replace JFROG_HOME with the real path! For example, in RPM install, JFROG_HOME=/opt/jfrog

## NOTE: Sensitive information such as passwords and join key are encrypted on first read.
## NOTE: The provided commented key and value is the default.

## SHARED CONFIGURATIONS
## A shared section for keys across all services in this config
shared:

  ## Node Settings
  node:
    ## A unique id to identify this node.
    ## Default: auto generated at startup.
    id: c679d8c16d38498c9598c8e8c84ba759

    ## Sets this node as primary in HA installation
    primary: True

    ## Sets this node as part of HA installation
    haEnabled: True

  ## Database Configuration
  database:
    ## One of: mysql, oracle, mssql, postgresql, mariadb
    ## Default: Embedded derby

    ## Example for mysql/postgresql
    type: "postgresql"
driver: "org.postgresql.Driver"
     url: "jdbc:postgresql://localhost:5432/artifactory"
     username: "artifactory"
password: "mypassword"

Note the invalid indentation of the driver and password lines in the YAML file.
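
For reference, a correctly indented version of that database block would presumably be:

  database:
    type: "postgresql"
    driver: "org.postgresql.Driver"
    url: "jdbc:postgresql://localhost:5432/artifactory"
    username: "artifactory"
    password: "mypassword"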

Sticky Session for ELB & Health Check not using doc's recommendations

My team is spinning up a Terraform HA stack, and when testing an upload to Artifactory we noticed it was bouncing between our two different Artifactory node ids between the upload and deploy requests. The JFrog documentation on how to set up an HA cluster specifically mentions needing sticky sessions. A quick search of the repo for 'sticky' didn't show any hits, so this may apply to CFN/Azure too?

Another small difference: for the LB health check, the documentation recommends hitting the /artifactory/api/system/ping URI instead of /webapp/#/login, which is what is currently configured in the Terraform.

[ansible/postgres] [ansible/artifactory] roles are not idempotent

Is this a request for help?:

Bug report.


Which installer:

Ansible

Which product and version:

Artifactory 7.12.5

What happened:

Running the playbook twice against an existing host restarts postgres and nginx despite no changes.

On second run of the playbook the following happens:

Restarted postgresql:

TASK [include_role : jfrog.installers.postgres] *************************************************************************************************************************************************************

TASK [jfrog.installers.postgres : define distribution-specific variables] ***********************************************************************************************************************************
ok: [artifactory.lab.speechmatics.io]

TASK [jfrog.installers.postgres : create directory for bind mount if necessary] *****************************************************************************************************************************
skipping: [artifactory.lab.speechmatics.io]

TASK [jfrog.installers.postgres : perform bind mount if necessary] ******************************************************************************************************************************************
skipping: [artifactory.lab.speechmatics.io]

TASK [jfrog.installers.postgres : perform installation] *****************************************************************************************************************************************************
included: /Users/waldekm/git/auto/galaxy-roles/ansible_collections/jfrog/installers/roles/postgres/tasks/Debian.yml for artifactory.lab.speechmatics.io

TASK [jfrog.installers.postgres : install python2 psycopg2] *************************************************************************************************************************************************
ok: [artifactory.lab.speechmatics.io]

TASK [jfrog.installers.postgres : install python3 psycopg2] *************************************************************************************************************************************************
ok: [artifactory.lab.speechmatics.io]

TASK [jfrog.installers.postgres : add postgres apt key] *****************************************************************************************************************************************************
ok: [artifactory.lab.speechmatics.io]

TASK [jfrog.installers.postgres : register APT repository] **************************************************************************************************************************************************
ok: [artifactory.lab.speechmatics.io]

TASK [jfrog.installers.postgres : install postgres packages] ************************************************************************************************************************************************
ok: [artifactory.lab.speechmatics.io]

TASK [jfrog.installers.postgres : extend path] **************************************************************************************************************************************************************
ok: [artifactory.lab.speechmatics.io]

TASK [jfrog.installers.postgres : initialize PostgreSQL database cluster] ***********************************************************************************************************************************
[WARNING]: Using world-readable permissions for temporary files Ansible needs to create when becoming an unprivileged user. This may be insecure. For information on securing this, see
https://docs.ansible.com/ansible/user_guide/become.html#risks-of-becoming-an-unprivileged-user
ok: [artifactory.lab.speechmatics.io]

TASK [jfrog.installers.postgres : install postgres configuration] *******************************************************************************************************************************************
[WARNING]: Using world-readable permissions for temporary files Ansible needs to create when becoming an unprivileged user. This may be insecure. For information on securing this, see
https://docs.ansible.com/ansible/user_guide/become.html#risks-of-becoming-an-unprivileged-user
ok: [artifactory.lab.speechmatics.io] => (item=pg_hba.conf)
ok: [artifactory.lab.speechmatics.io] => (item=postgresql.conf)

TASK [jfrog.installers.postgres : enable postgres service] **************************************************************************************************************************************************
ok: [artifactory.lab.speechmatics.io]

TASK [jfrog.installers.postgres : Hold until Postgresql is up and running] **********************************************************************************************************************************
ok: [artifactory.lab.speechmatics.io]

TASK [jfrog.installers.postgres : Create users] *************************************************************************************************************************************************************
[WARNING]: Using world-readable permissions for temporary files Ansible needs to create when becoming an unprivileged user. This may be insecure. For information on securing this, see
https://docs.ansible.com/ansible/user_guide/become.html#risks-of-becoming-an-unprivileged-user
ok: [artifactory.lab.speechmatics.io] => (item=None)
ok: [artifactory.lab.speechmatics.io]

TASK [jfrog.installers.postgres : Create a database] ********************************************************************************************************************************************************

TASK [jfrog.installers.postgres : Grant privs on db] ********************************************************************************************************************************************************

TASK [jfrog.installers.postgres : restart postgres] *********************************************************************************************************************************************************
changed: [artifactory.lab.speechmatics.io]

TASK [jfrog.installers.postgres : debug] ********************************************************************************************************************************************************************
ok: [artifactory.lab.speechmatics.io] => {
    "msg": "Restarted postgres service [email protected]"
}

and later on nginx:

TASK [install nginx] ****************************************************************************************************************************************************************************************

TASK [jfrog.installers.artifactory_nginx : debug] ***********************************************************************************************************************************************************
ok: [artifactory.lab.speechmatics.io] => {
    "msg": "Attempting nginx installation without dependencies for potential offline mode."
}

TASK [jfrog.installers.artifactory_nginx : install nginx without dependencies] ******************************************************************************************************************************
ok: [artifactory.lab.speechmatics.io]

TASK [jfrog.installers.artifactory_nginx : configure main nginx conf file.] *********************************************************************************************************************************
ok: [artifactory.lab.speechmatics.io]

TASK [jfrog.installers.artifactory_nginx : configure the artifactory nginx conf] ****************************************************************************************************************************
ok: [artifactory.lab.speechmatics.io]

TASK [jfrog.installers.artifactory_nginx : restart nginx] ***************************************************************************************************************************************************
changed: [artifactory.lab.speechmatics.io]

What you expected to happen:

I'd expect this playbook to be idempotent, i.e. not take any actions if it's not making any changes to postgres or nginx - especially an action as intrusive as a service restart.

How to reproduce it (as minimally and precisely as possible):

https://github.com/jfrog/JFrog-Cloud-Installers/blob/master/Ansible/examples/playbook-rt.yml

Run it twice:

ansible-playbook -i inventory.yml playbook-rt.yml

Anything else we need to know:

n/a
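
One common pattern that would make these restarts conditional (a sketch, not the collection's current code): have the configuration tasks notify a handler instead of restarting unconditionally, so the service only restarts when a config file actually changed:

# roles/postgres/tasks/main.yml (sketch; destination path illustrative)
- name: install postgres configuration
  ansible.builtin.template:
    src: "{{ item }}.j2"
    dest: "/etc/postgresql/{{ postgres_version }}/main/{{ item }}"
  loop:
    - pg_hba.conf
    - postgresql.conf
  notify: restart postgres

# roles/postgres/handlers/main.yml (sketch)
- name: restart postgres
  ansible.builtin.service:
    name: postgresql
    state: restarted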

artifactory requires net-tools, which is missing on ubuntu 20.04. Please install it.

Which installer: Ansible

Which product and version: ansible 2.11.2 jfrog-platform collection 7.21.12

Which operating system and version : ubuntu - 20.04

What happened: I tried to install on a clean Ubuntu VM using the artifactory role. Artifactory would fail to start.

What you expected to happen: Artifactory to start

How to reproduce it (as minimally and precisely as possible): try molecule and tests?

Anything else we need to know: After hours of troubleshooting, I pinpointed the issue: Ubuntu 20.04 doesn't ship with net-tools by default. I reverted the VM, installed net-tools and it all worked.
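
A pre-task that would avoid this on Debian-family hosts (package name taken from the report above):

- name: Ensure net-tools is present (Artifactory fails to start without it on Ubuntu 20.04)
  ansible.builtin.apt:
    name: net-tools
    state: present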

Unable to install artifactory with ssl enabled

Is this a request for help?: No


Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Which installer:
artifactory_nginx_ssl

Which product and version (eg: ansible & collection version - 1.1.2):
ansible collection latest version

Which operating system and version(eg: ubuntu & version - 20.4):
Ubuntu 20.04

What happened:

TASK [install nginx with SSL] *******************************************************************************************************************************************************************************

TASK [jfrog.platform.artifactory_nginx_ssl : Configure the artifactory nginx conf] **************************************************************************************************************************
fatal: [XXX]: FAILED! => {"changed": false, "checksum": "2782e7940a7e9e1255537bb79a10d1a4d2a276ae", "msg": "Destination directory /etc/nginx/conf.d does not exist"}

What you expected to happen:
nginx to be installed

How to reproduce it (as minimally and precisely as possible):
Try to install artifactory with artifactory_nginx_enabled set to false and artifactory_nginx_ssl_enabled set to true on a fresh system installation.

Anything else we need to know:
The artifactory_nginx_ssl role does not install and configure nginx the way artifactory_nginx does. It is impossible to run the role without previously running artifactory_nginx or installing nginx separately.
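
Until the role installs nginx itself, a workaround is to make sure nginx (and hence /etc/nginx/conf.d) exists before running it (a sketch):

- name: Install nginx so that /etc/nginx/conf.d exists
  ansible.builtin.apt:
    name: nginx
    state: present

- name: Configure SSL termination
  ansible.builtin.include_role:
    name: jfrog.platform.artifactory_nginx_ssl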

[ansible/missioncontrol] Elasticsearch cluster status is yellow

Is this a request for help?: no


Is this a BUG REPORT or FEATURE REQUEST? (choose one): bug report

Which installer: Ansible

Which product and version (eg: ansible & collection version - 1.1.2): 7.24.3

Which operating system and version(eg: ubuntu & version - 20.4): Ubuntu 20.04.2

What happened: After installing mission control via the collection role, Elasticsearch cluster health API reports cluster status "yellow" and only one cluster node.

What you expected to happen: Elasticsearch cluster status being green, with 3 nodes.

How to reproduce it (as minimally and precisely as possible): Install mission control on a Ubuntu server and curl http://localhost:9200/_cluster/health

Anything else we need to know:
curl http://localhost:9200/_cluster/health output:
{"cluster_name":"elasticsearch","status":"yellow","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":10,"active_shards":10,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":9,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":52.63157894736842}

Artifactory and xray postgres users with incorrect permissions

Which operating system and version(eg: ubuntu & version - 20.4):
Ubuntu 20.04

What happened:
After running the Ansible playbook for Artifactory on a clean install, I got multiple failures when starting the service.

postgres=# \du
                                   List of roles
  Role name  |                         Attributes                          | Member of
-------------+-------------------------------------------------------------+-----------
 artifactory |                                                             | {}
 mc          |                                                             | {}
 postgres    | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
 xray        |                                                             | {}

How to reproduce it (as minimally and precisely as possible):
Install the Artifactory or Xray role with postgres on a new server.

Anything else we need to know:
To fix the issue I needed to grant ownership and additional permissions on the artifactory database to the artifactory user, and the same for the Xray user.
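
A sketch of the kind of fix applied (using the community.postgresql collection; the same pattern applies to the xray database):

- name: Ensure the artifactory user owns the artifactory database
  community.postgresql.postgresql_db:
    name: artifactory
    owner: artifactory
  become: true
  become_user: postgres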

[ansible/nginx] Unable to restart service nginx: Job for nginx.service failed because...

New issue: this time the last task, restarting nginx, is failing because an SELinux permission is not set.

TASK [jfrog.installers.artifactory_nginx_ssl : restart nginx] ****************************************************************
fatal: [art2]: FAILED! => {"changed": false, "msg": "Unable to restart service nginx: Job for nginx.service failed because the control process exited with error code. See \"systemctl status nginx.service\" and \"journalctl -xe\" for details.\n"}
fatal: [art1]: FAILED! => {"changed": false, "msg": "Unable to restart service nginx: Job for nginx.service failed because the control process exited with error code. See \"systemctl status nginx.service\" and \"journalctl -xe\" for details.\n"}

Checking journalctl -xe, I can see the issue: the SELinux context is not correct for nginx to read the certificate file.

Nov 12 12:39:11 atfapp01 systemd[1]: Failed to start The nginx HTTP and reverse proxy server.
-- Subject: Unit nginx.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit nginx.service has failed.
--
-- The result is failed.
Nov 12 12:39:11 atfapp01 systemd[1]: Unit nginx.service entered failed state.
Nov 12 12:39:11 atfapp01 systemd[1]: nginx.service failed.
Nov 12 12:39:12 atfapp01 dbus[716]: [system] Successfully activated service 'org.fedoraproject.Setroubleshootd'
Nov 12 12:39:13 atfapp01 setroubleshoot[13181]: failed to retrieve rpm info for /var/opt/jfrog/nginx/ssl/cert.pem
Nov 12 12:39:14 atfapp01 setroubleshoot[13181]: SELinux is preventing /usr/sbin/nginx from read access on the file /var/opt/jf
Nov 12 12:39:14 atfapp01 python[13181]: SELinux is preventing /usr/sbin/nginx from read access on the file /var/opt/jfrog/ngin

Looking at the artifactory_nginx_ssl role, it appears to be missing the SELinux restorecon step required here. Is this a known bug, and is it fixed in the upcoming release?

If the bug is known and a revised version of the role exists in another branch, can you share it so I can update my local copy? This Artifactory build is a prerequisite for the X-Ray build I need to complete with this collection.
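
In the meantime, a manual workaround along these lines should unblock nginx (a sketch, assuming the certificate path from the logs and that semanage is available, e.g. from policycoreutils-python):

# Label the cert directory so nginx may read it, then restore contexts and retry.
sudo semanage fcontext -a -t httpd_config_t '/var/opt/jfrog/nginx(/.*)?'
sudo restorecon -Rv /var/opt/jfrog/nginx
sudo systemctl restart nginx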

[xray][rabbitmq] install of db5.3-util and db-util fails: "Failed to satisfy all dependencies (broken cache)"

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one): bug

Which installer: ansible

Which product and version (eg: ansible & collection version - 1.1.2): x-ray collection version 7.19.8

Which operating system and version(eg: ubuntu & version - 20.4): ubuntu 20.04

What happened: when installing the xray role, the tasks "Install db5-util package" and "Install db-util package" fail with FAILED! => {"changed": false, "msg": "Failed to satisfy all dependencies (broken cache)"}

What you expected to happen: the third-party db5-util and db-util packages to install successfully

How to reproduce it (as minimally and precisely as possible): install the xray role

Anything else we need to know:
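
For what it's worth, this "broken cache" error from the apt module usually clears after refreshing the package metadata; a manual workaround sketch:

# Refresh apt metadata, then retry the third-party packages the xray role installs.
sudo apt-get clean
sudo apt-get update
sudo apt-get install -y db5.3-util db-util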

Potential problems on Openshift 4.4+

Openshift 4.4+ deals differently with modified or mutated SCCs.
At every update the default SCCs are restored to their defaults, meaning that all additions (users) are removed and all modifications are undone.
See https://bugzilla.redhat.com/show_bug.cgi?id=1818893 for details.
Since you require adding a service account to the anyuid SCC and patching the restricted SCC, the installation will break after the first Openshift update.
The recommendation is to create your own SCCs and grant access to them via RBAC, as sketched below.
A much better way, of course, would be to create images that are able to deal with dynamic, unprivileged uids and not rely on SCCs at all :-)
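
A sketch of that pattern, assuming a custom SCC named jfrog-scc (created beforehand, e.g. as a copy of anyuid) and a service account jfrog-sa in a jfrog namespace; all of these names are placeholders:

# Grant "use" of the custom SCC through RBAC instead of mutating default SCCs.
oc create role use-jfrog-scc --verb=use \
  --resource=securitycontextconstraints --resource-name=jfrog-scc -n jfrog
oc create rolebinding use-jfrog-scc --role=use-jfrog-scc \
  --serviceaccount=jfrog:jfrog-sa -n jfrog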

[ansible/postgres] No package matching 'python3-psycopg2' on RHEL 7.8

Hi

Using collection - jfrog.installers:1.1.2

Trying to use the collection in a local VirtualBox lab on our RHEL 7.8 VM, the install runs to the point below and fails trying to install a package that is not available. Is this a known error in the latest collection release?

TASK [jfrog.installers.postgres : install python3 psycopg2] ******************************************************************
fatal: [postgres]: FAILED! => changed=false
  msg: No package matching 'python3-psycopg2' found available, installed or updated
  rc: 126
  results:
  - No package matching 'python3-psycopg2' found available, installed or updated
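
A possible workaround, assuming EPEL is acceptable on the host (on EL7 the package is published there as python36-psycopg2):

# RHEL 7 base repos do not ship python3-psycopg2; EPEL carries an equivalent.
sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum install -y python36-psycopg2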

[Ansible/all] Documentation should describe ports used by service

Which installer:

All

Which product and version (eg: ansible & collection version - 1.1.2):

All

Which operating system and version(eg: ubuntu & version - 20.4):

All

What happened:

Setting up the service failed due to port issues.

What you expected to happen:

Documentation (README) should describe which ports must be opened for the services to run. The default in any modern cloud environment is to lock everything down, so a service should never assume that ports will be open.
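
To illustrate the kind of guidance needed, a firewalld sketch assuming the Artifactory defaults (8082 for the UI/router, 8081 for direct Artifactory traffic; adjust if system.yaml overrides them):

# Open the default Artifactory 7 ports and reload the firewall.
sudo firewall-cmd --permanent --add-port=8081/tcp --add-port=8082/tcp
sudo firewall-cmd --reload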

Solution needed for private subnet support

I am trying to use this CFT in a private subnet. The following section of the CFT has a hard dependency on a public IP; can you please suggest how this will work in a private subnet?

"set_artifactory_context" : {
"command" : { "Fn::Join" : ["", ["sed -i -e "s/127.0.0.1/$(curl http://169.254.169.254/latest/meta-data/public-ipv4)/" /var/opt/jfrog/artifactory/etc/ha-node.properties"]]}
},
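
One possible approach: the instance metadata service also exposes the private address at local-ipv4, so the substitution could read (a sketch of the resulting command, not a tested CFT change):

# In a private subnet, resolve the node address from the private metadata path.
sed -i -e "s/127.0.0.1/$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)/" \
  /var/opt/jfrog/artifactory/etc/ha-node.properties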

Database latency

Hi Guys,

Thanks for putting the cloudformation json together it's much appreciated. Quick question on RDS though, the documentation calls out the following:

latency well below 1 ms

https://www.jfrog.com/confluence/display/RTF/System+Requirements

which isn't a performance figure I'd expect from RDS. How are your test servers performing on RDS under load, and is this considered a supported deployment architecture?

Failed Upgrade when only using 1 member node.

Upgrade process fails when only using 1 member node.

2019-03-29 15:47:42,542 [art-init] [INFO ] (o.a.v.c.v.ConanV2DefaultLayoutConverter:58) - Conan default repository layout v2 conversion finished successfully
2019-03-29 15:47:42,676 [art-init] [ERROR] (o.a.c.ConvertersManagerImpl:216) - Conversion failed. You should analyze the error and retry launching Artifactory. Error is: unstable environment: Found one or more servers with different version Config Reload denied.
2019-03-29 15:47:42,679 [art-init] [ERROR] (o.a.w.s.ArtifactoryContextConfigListener:96) - Application could not be initialized: unstable environment: Found one or more servers with different version Config Reload denied.
java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener.configure(ArtifactoryContextConfigListener.java:211)
at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener.access$200(ArtifactoryContextConfigListener.java:67)
at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener$1.run(ArtifactoryContextConfigListener.java:92)
Caused by: java.lang.RuntimeException: unstable environment: Found one or more servers with different version Config Reload denied.
at org.artifactory.converter.ConvertersManagerImpl.handleException(ConvertersManagerImpl.java:223)
at org.artifactory.converter.ConvertersManagerImpl.serviceConvert(ConvertersManagerImpl.java:171)
at org.artifactory.spring.ArtifactoryApplicationContext.refresh(ArtifactoryApplicationContext.java:257)
at org.artifactory.spring.ArtifactoryApplicationContext.<init>(ArtifactoryApplicationContext.java:144)
... 7 common frames omitted

Access/Secret Key

Is the file '/var/opt/jfrog/artifactory/etc/binarystore.xml' only used for S3 access? If so, it should probably use an IAM role instead of access keys, for security reasons.

[ansible/artifactory] missed dependency on net-tools package (netstat)

Is this a request for help?:

No


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

BUG REPORT

Which installer:

Ansible

Which product and version (eg: ansible & collection version - 1.1.2):

  • Artifactory: 7.10.2
  • jfrog.installers: 1.1.2

Which operating system and version(eg: ubuntu & version - 20.4):

Ubuntu 18.04

What happened:

The standard Ansible playbook fails

...
TASK [jfrog.installers.artifactory : Copy local database driver] ********************************************************************************************************************************************
skipping: [artifactory.mydomain.io]

TASK [jfrog.installers.artifactory : download database driver] **********************************************************************************************************************************************
changed: [artifactory.mydomain.io]

TASK [jfrog.installers.artifactory : create artifactory service] ********************************************************************************************************************************************
changed: [artifactory.mydomain.io]

TASK [jfrog.installers.artifactory : Ensure permissions are correct] ****************************************************************************************************************************************
changed: [artifactory.mydomain.io]

TASK [jfrog.installers.artifactory : start and enable the primary node] *************************************************************************************************************************************
fatal: [artifactory.mydomain.io]: FAILED! => {"changed": false, "msg": "Unable to start service artifactory: Job for artifactory.service failed because the control process exited with error code.\nSee \"systemctl status artifactory.service\" and \"journalctl -xe\" for details.\n"}

PLAY RECAP **************************************************************************************************************************************************************************************************
artifactory.mydomain.io : ok=53   changed=33   unreachable=0    failed=2    skipped=51   rescued=1    ignored=0

After logging into the machine and checking, syslog presents:

Jan  8 16:06:09 artifactory systemd[1]: Starting Artifactory service...
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:09.246Z [shell] #033[38;5;69m[INFO ]#033[0m [] [systemYamlHelper.sh:512       ] [main] - Resolved shared.user (artifactory) from /opt/jfrog/artifactory/var/etc/system.yaml
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:09.279Z [shell] #033[38;5;69m[INFO ]#033[0m [] [systemYamlHelper.sh:512       ] [main] - Resolved shared.group (artifactory) from /opt/jfrog/artifactory/var/etc/system.yaml
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: /opt/jfrog/artifactory/app/bin/artifactoryCommon.sh: line 395: netstat: command not found
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:09.454Z [shell] #033[38;5;69m[INFO ]#033[0m [] [artifactoryManage.sh:68       ] [main] - Starting Artifactory tomcat as user artifactory...
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:09.468Z [shell] #033[38;5;69m[INFO ]#033[0m [] [installerCommon.sh:1432       ] [main] - Checking open files and processes limits
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:09.482Z [shell] #033[38;5;69m[INFO ]#033[0m [] [installerCommon.sh:1435       ] [main] - Current max open files is 1024
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:09.497Z [shell] #033[38;5;69m[INFO ]#033[0m [] [installerCommon.sh:1446       ] [main] - Current max open processes is 31683
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: [DEBUG] Resolved system configuration file path: /opt/jfrog/artifactory/var/etc/system.yaml
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: userYaml decode failed at node .shared.database
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: yaml validation failed
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:09.525Z [shell] #033[1;31m[WARN ]#033[0m [] [installerCommon.sh:706        ] [main] - System.yaml validation failed
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: [DEBUG] Resolved system configuration file path: /opt/jfrog/artifactory/var/etc/system.yaml
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: Database connection successful
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:09.578Z [shell] #033[38;5;69m[INFO ]#033[0m [] [artifactoryCommon.sh:112      ] [main] - Final command: -server -Xms512m -Xmx2g -Xss256k -XX:+UseG1GC -XX:OnOutOfMemoryError="kill -9 %p" --add-opens java.base/java.util=ALL-UNNAMED --add-opens java.base/java.lang.reflect=ALL-UNNAMED --add-opens java.base/java.lang.invoke=ALL-UNNAMED --add-opens java.base/java.text=ALL-UNNAMED --add-opens java.base/java.nio=ALL-UNNAMED --add-opens java.desktop/java.awt.font=ALL-UNNAMED -Dfile.encoding=UTF8 -Djruby.compile.invokedynamic=false -Djruby.bytecode.version=1.8 -Dorg.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true -Djava.security.egd=file:/dev/./urandom -Dartdist=zip -Djf.product.home=/opt/jfrog/artifactory
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:09.612Z [shell] #033[38;5;69m[INFO ]#033[0m [] [installerCommon.sh:3290       ] [main] - Found explicit shared.node.id in system.yaml with value : cc553e9684464b9fb9f97d425d99b24f
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:09.651Z [shell] #033[38;5;69m[INFO ]#033[0m [] [installerCommon.sh:3285       ] [main] - Setting JF_SHARED_NODE_IP to 127.0.1.1
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:09.686Z [shell] #033[38;5;69m[INFO ]#033[0m [] [installerCommon.sh:3285       ] [main] - Setting JF_SHARED_NODE_NAME to artifactory
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:09.779Z [shell] #033[38;5;69m[INFO ]#033[0m [] [artifactoryCommon.sh:157      ] [main] - Using Tomcat template to generate : /opt/jfrog/artifactory/app/artifactory/tomcat/conf/server.xml
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:09.862Z [shell] #033[38;5;69m[INFO ]#033[0m [] [artifactoryCommon.sh:797      ] [main] - Resolved ${artifactory.port||8081} to default value : 8081
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:09.899Z [shell] #033[38;5;69m[INFO ]#033[0m [] [artifactoryCommon.sh:797      ] [main] - Resolved ${artifactory.tomcat.connector.maxThreads||200} to default value : 200
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:09.958Z [shell] #033[38;5;69m[INFO ]#033[0m [] [artifactoryCommon.sh:797      ] [main] - Resolved ${artifactory.tomcat.maintenanceConnector.port||8091} to default value : 8091
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:09.995Z [shell] #033[38;5;69m[INFO ]#033[0m [] [artifactoryCommon.sh:797      ] [main] - Resolved ${artifactory.tomcat.maintenanceConnector.maxThreads||5} to default value : 5
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:10.031Z [shell] #033[38;5;69m[INFO ]#033[0m [] [artifactoryCommon.sh:797      ] [main] - Resolved ${artifactory.tomcat.maintenanceConnector.acceptCount||5} to default value : 5
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:10.097Z [shell] #033[38;5;69m[INFO ]#033[0m [] [artifactoryCommon.sh:797      ] [main] - Resolved ${access.http.port||8040} to default value : 8040
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:10.134Z [shell] #033[38;5;69m[INFO ]#033[0m [] [artifactoryCommon.sh:797      ] [main] - Resolved ${access.tomcat.connector.maxThreads||50} to default value : 50
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:10.217Z [shell] #033[38;5;69m[INFO ]#033[0m [] [systemYamlHelper.sh:512       ] [main] - Resolved JF_PRODUCT_HOME (/opt/jfrog/artifactory) from environment variable
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:10.345Z [shell] #033[38;5;69m[INFO ]#033[0m [] [artifactoryCommon.sh:797      ] [main] - Resolved ${shared.tomcat.workDir||/opt/jfrog/artifactory/var/work/artifactory/tomcat} to default value : /opt/jfrog/artifactory/var/work/artifactory/tomcat
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: tr: write error: Broken pipe
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: tr: write error
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:10.477Z [shell] #033[38;5;69m[INFO ]#033[0m [] [artifactoryCommon.sh:305      ] [main] - Removing old custom drivers : /opt/jfrog/artifactory/app/artifactory/tomcat/lib/jf_postgresql-42.2.18.jar /opt/jfrog/artifactory/app/artifactory/tomcat/lib/jf_README.md
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:10.496Z [shell] #033[38;5;69m[INFO ]#033[0m [] [artifactoryCommon.sh:311      ] [main] - Copying /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib/postgresql-42.2.18.jar to /opt/jfrog/artifactory/app/artifactory/tomcat/lib/jf_postgresql-42.2.18.jar
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:10.516Z [shell] #033[38;5;69m[INFO ]#033[0m [] [artifactoryCommon.sh:311      ] [main] - Copying /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib/README.md to /opt/jfrog/artifactory/app/artifactory/tomcat/lib/jf_README.md
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: ========================
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: JF Environment variables
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: ========================
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: JF_SHARED_NODE_NAME                 : artifactory
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: JF_SYSTEM_YAML                      : /opt/jfrog/artifactory/var/etc/system.yaml
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: JF_ARTIFACTORY_PID                  : /var/run/artifactory.pid
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: JF_PRODUCT_HOME                     : /opt/jfrog/artifactory
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: JF_ROUTER_TOPOLOGY_LOCAL_REQUIREDSERVICETYPES : jfrt,jfac,jfmd,jffe,jfevt
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: JF_SHARED_NODE_IP                   : 127.0.1.1
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: JF_SHARED_NODE_ID                   : cc553e9684464b9fb9f97d425d99b24f
Jan  8 16:06:11 artifactory systemd[1]: Started Session c16 of user artifactory.
Jan  8 16:06:11 artifactory artifactoryManage.sh[10301]: Max number of open files: 1024
Jan  8 16:06:11 artifactory artifactoryManage.sh[10301]: Using JF_PRODUCT_HOME: /opt/jfrog/artifactory
Jan  8 16:06:11 artifactory artifactoryManage.sh[10301]: Using JF_ARTIFACTORY_PID: /var/run/artifactory.pid
Jan  8 16:06:11 artifactory artifactoryManage.sh[10301]: Tomcat started.
Jan  8 16:06:11 artifactory artifactoryManage.sh[10301]: /opt/jfrog/artifactory/app/bin/artifactoryCommon.sh: line 395: netstat: command not found
Jan  8 16:07:10 artifactory artifactoryManage.sh[10301]: message repeated 59 times: [ /opt/jfrog/artifactory/app/bin/artifactoryCommon.sh: line 395: netstat: command not found]
Jan  8 16:07:10 artifactory systemd[1]: Started Session c17 of user artifactory.
Jan  8 16:07:15 artifactory artifactoryManage.sh[10301]: #033[31m** ERROR: Artifactory Tomcat server did not start in 60 seconds, tomcat will be stopped. This timeout can be modified by setting shared.script.serviceStartTimeout (default: 60) in /opt/jfrog/artifactory/var/etc/system.yaml. #033[0m
Jan  8 16:07:15 artifactory systemd[1]: artifactory.service: Control process exited, code=exited status=1
Jan  8 16:07:15 artifactory systemd[1]: artifactory.service: Failed with result 'exit-code'.
Jan  8 16:07:15 artifactory systemd[1]: Failed to start Artifactory service.

Note the following error messages

Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: userYaml decode failed at node .shared.database
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: yaml validation failed
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: 2021-01-08T16:06:09.525Z [shell] #033[1;31m[WARN ]#033[0m [] [installerCommon.sh:706        ] [main] - System.yaml validation failed
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: [DEBUG] Resolved system configuration file path: /opt/jfrog/artifactory/var/etc/system.yaml
Jan  8 16:06:09 artifactory artifactoryManage.sh[10301]: Database connection successful
...
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: tr: write error: Broken pipe
Jan  8 16:06:10 artifactory artifactoryManage.sh[10301]: tr: write error

...
Jan  8 16:07:10 artifactory artifactoryManage.sh[10301]: message repeated 59 times: [ /opt/jfrog/artifactory/app/bin/artifactoryCommon.sh: line 395: netstat: command not found]

What you expected to happen:

Successful installation and start.

How to reproduce it (as minimally and precisely as possible):

  • Make sure the base Ubuntu 18.04 system does not contain the netstat command. This is quite typical; you get the ss utility instead (from the iproute2 package) in a default setup. If not, just run sudo apt-get purge net-tools -y.
  • Run the default Ansible installer against the VM, e.g. the playbook from #76

Anything else we need to know:

The workaround is pretty simple: install the net-tools package before running the Artifactory role, for example as sketched below.
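
For example (a sketch; on Ubuntu the package comes from the standard repositories):

# Install netstat before running the Artifactory role.
sudo apt-get update
sudo apt-get install -y net-tools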

[ansible/missioncontrol] Ansible missioncontrol_version variable typo

Is this a request for help?: no
Is this a BUG REPORT or FEATURE REQUEST? (choose one): bug report
Which installer: mission control
Which product and version (eg: ansible & collection version - 1.1.2): ansible - latest
Which operating system and version(eg: ubuntu & version - 20.4): all
What happened: missioncontrol_version is set, but the role also expects missionControl_version to be set
What you expected to happen: to only have to set missioncontrol_version
How to reproduce it (as minimally and precisely as possible): set just missioncontrol_version and run the missioncontrol role

Anything else we need to know: the following files and lines have the typos:
Ansible/ansible_collections/jfrog/platform/roles/missioncontrol/defaults/main.yml | 15
Ansible/ansible_collections/jfrog/platform/roles/missioncontrol/defaults/main.yml | 19
Ansible/ansible_collections/jfrog/platform/roles/missioncontrol/templates/installer-info.json.j2 | 3

License? Contribution?

FEATURE REQUEST: open source license and instructions for contributing

This code base is posted, but there is no open source license associated with it. I'd like to expand on what's here by contributing code, but I need contribution instructions and an open source license to know how to proceed. If the policy is that no external contributions outside of JFrog are accepted, that would be nice to know as well.

[ansible/missioncontrol] unable to start elasticsearch via missioncontrol role

Is this a request for help?:
No

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Bug

Which installer: Ansible

Which product and version (eg: ansible & collection version - 1.1.2): ansible & platform collection version - 7.18.5

Which operating system and version(eg: ubuntu & version - 20.4): all

What happened: Unable to start elasticsearch

What you expected to happen: start elasticsearch via playbook

How to reproduce it (as minimally and precisely as possible): run missioncontrol role

Anything else we need to know:

TASK [missioncontrol : Start elasticsearch] ******************************************************************************************************************************************************************************************************************
task path: ansible/ansible_collections/jfrog/platform/roles/missioncontrol/tasks/setup-elasticsearch.yml:154
changed: [missionControl-1] => {
    "changed": true,
    "cmd": "/usr/share/elasticsearch/bin/elasticsearch -d",
    "delta": "0:00:02.005063",
    "end": "2021-05-03 09:59:39.645185",
    "rc": 0,
    "start": "2021-05-03 09:59:37.640122"
}

TASK [missioncontrol : Wait for elasticsearch to start] ******************************************************************************************************************************************************************************************************
task path: ansible/ansible_collections/jfrog/platform/roles/missioncontrol/tasks/setup-elasticsearch.yml:164
Pausing for 15 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [missionControl-1] => {
    "changed": false,
    "delta": 15,
    "echo": true,
    "rc": 0,
    "start": "2021-05-03 15:29:40.237961",
    "stop": "2021-05-03 15:29:55.239143",
    "user_input": ""
}

STDOUT:

Paused for 15.0 seconds

TASK [missioncontrol : Init searchguard plugin] **************************************************************************************************************************************************************************************************************
task path: ansible/ansible_collections/jfrog/platform/roles/missioncontrol/tasks/setup-elasticsearch.yml:168
fatal: [missionControl-1]: FAILED! => {
    "changed": true,
    "cmd": "./sgadmin.sh -p 9300 -cacert root-ca.pem -cert sgadmin.pem -key sgadmin.key -cd /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/ -nhnv -icl\n",
    "delta": "0:00:00.317357",
    "end": "2021-05-03 09:59:59.160270",
    "rc": 255,
    "start": "2021-05-03 09:59:58.842913"
}

STDOUT:

Search Guard Admin v7
Will connect to localhost:9300
ERR: Seems there is no Elasticsearch running on localhost:9300 - Will exit
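
The fixed 15-second pause looks like the culprit; waiting on the transport port would be more robust. A sketch, assuming bash and the 9300 port from the logs:

# Wait up to ~2 minutes for the Elasticsearch transport port before running sgadmin.sh.
for i in $(seq 1 60); do
  timeout 1 bash -c '</dev/tcp/localhost/9300' 2>/dev/null && break
  sleep 2
done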

Custom binarystore.xml is not installed in the correct directory and not read by Artifactory

Is this a request for help?: no


BUG REPORT

Which installer: ansible collection 1.1.2

Which product and version (eg: ansible & collection version - 1.1.2): version 1.1.2

Which operating system and version(eg: ubuntu & version - 20.4): Ubuntu 18.04, kernel 4.15.0-140-generic

What happened:
A custom binarystore.xml file (the variable binary_store_file points to a file) is available and is copied to $ARTIFACTORYHOME/var/etc/binarystore.xml.
However, Artifactory ignores it.
When I copied that file into $ARTIFACTORYHOME/var/etc/artifactory/binarystore.xml,
it got picked up; however, Artifactory did not create the directory structure in the correct path.

What you expected to happen:
I expect this file to be placed in the correct path, so that when Artifactory is initialized it creates the directory structure in the place I want it to.

How to reproduce it (as minimally and precisely as possible):
Create a custom binarystore.xml file and provide it via the variable.
Then run the playbook

Anything else we need to know:
This forces me to use the default installation filestore, which is not what I require; see the workaround sketch below.
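
A manual workaround sketch until the role places the file correctly, assuming the default JFROG_HOME layout:

# Move the custom filestore config to the path Artifactory actually reads,
# then restart so it creates the configured directory structure.
sudo mv /opt/jfrog/artifactory/var/etc/binarystore.xml \
        /opt/jfrog/artifactory/var/etc/artifactory/binarystore.xml
sudo systemctl restart artifactory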

[ansible/xray] MV untar directory to xray home exists

Installer: Ansible Galaxy

JFrog Ansible Collection: jfrog.installers v1.1.2

What happened:
Executing a basic play to install X-Ray failed; I fixed the issue and re-executed the play, as is normal for an Ansible playbook given its idempotency.

What you expected to happen:
X-Ray to be installed without issue, and the play to succeed.

How to reproduce it (as minimally and precisely as possible):
Create your inventory host file as per the example
Create a basic playbook to install X-Ray as per the example
Execute the playbook; it will error the first time due to the known Postgres initdb -D error under issue #36
Fix the error by manually initializing Postgres as prescribed in issue #36
Re-execute the playbook; it will now fail on the "MV untar directory to xray home" task

Anything else we need to know:
The error displayed when it fails because the destination already exists:

TASK [jfrog.installers.xray : MV untar directory to xray home] **********************************************************************
task path: /root/.ansible/collections/ansible_collections/jfrog/installers/roles/xray/tasks/install.yml:37
fatal: [xry1]: FAILED! => {"changed": true, "cmd": ["mv", "/opt/jfrog/jfrog-xray-3.3.0-linux", "/opt/jfrog/xray"], "delta": "0:00:00.012358", "end": "2020-11-19 08:00:28.848768", "msg": "non-zero return code", "rc": 1, "start": "2020-11-19 08:00:28.836410", "stderr": "mv: cannot move ‘/opt/jfrog/jfrog-xray-3.3.0-linux’ to ‘/opt/jfrog/xray/jfrog-xray-3.3.0-linux’: File exists", "stderr_lines": ["mv: cannot move ‘/opt/jfrog/jfrog-xray-3.3.0-linux’ to ‘/opt/jfrog/xray/jfrog-xray-3.3.0-linux’: File exists"], "stdout": "", "stdout_lines": []}

Looking at the code, the issue is that the task uses a raw command to perform the move and thus cannot establish idempotency. It would be better to untar the tarball directly into the destination folder the mv is trying to create, or at least guard the move, as sketched below.
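
A minimal guard along these lines would make the step re-runnable (a sketch of the idea, not the role's actual code):

# Only move the untarred release into place if the target does not exist yet.
if [ ! -d /opt/jfrog/xray ]; then
  mv /opt/jfrog/jfrog-xray-3.3.0-linux /opt/jfrog/xray
fi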

Request License

Please add a license file to make it easier to know how you want your work shared. If you don't have a preference, ISC is business friendly.

[Mission Control][Elasticsearch] searchguard plugin anonymous_auth_enabled configuration

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Which installer: Ansible

Which product and version (eg: ansible & collection version - 1.1.2): Mission Control

Which operating system and version(eg: ubuntu & version - 20.4): all

What happened: the searchGuard plugin is not configured to allow anonymous HTTP requests. The file /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/sg_config.yml has the default config:
sg_config:
  dynamic:
    http:
      anonymous_auth_enabled: false

What you expected to happen: when the playbook runs, the searchGuard plugin is configured to allow anonymous requests. The file /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/sg_config.yml must have this configuration:
sg_config:
  dynamic:
    http:
      anonymous_auth_enabled: true

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know: when the mission control deployment uses the helm chart, a script "initializeSearchGuard.sh" is run.
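
After editing sg_config.yml the change still has to be pushed into the Search Guard index; a sketch reusing the sgadmin.sh invocation seen elsewhere in this tracker, assuming the usual tools directory:

# Apply the updated sgconfig (anonymous_auth_enabled: true) to the running cluster.
cd /usr/share/elasticsearch/plugins/search-guard-7/tools
./sgadmin.sh -p 9300 -cacert root-ca.pem -cert sgadmin.pem -key sgadmin.key \
  -cd /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/ -nhnv -icl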

xray role changes owner of artifactory folder, causing error in artifactory

Which installer: Ansible

Which product and version: ansible 2.11.2 jfrog-platform collection 7.21.12

Which operating system and version: Ubuntu 20.04

What happened:

I used this playbook to install artifactory and xray on a single server

- hosts: postgres_servers
  roles:
    - jfrog.platform.postgres

- hosts: artifactory_servers
  roles:
    - jfrog.platform.artifactory

- hosts: xray_servers
  roles:
    - jfrog.platform.xray

While artifactory started working for a while, when the xray role ran, it chowned /opt/artifactory to user xray:xray!
I had to chown -R artifactory:artifactory artifactory artifactory-pro-7.21.12 jfrog-artifactory-pro-7.21.12-linux.tar.gz

What you expected to happen:

artifactory and xray to work without spending more than a day troubleshooting

Change where Artifactory logs are written to?

Is this a request for help?:
Yeah.

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
FEATURE REQUEST

In the standalone Artifactory installer it was rather easy to change the location of the log directory.
I would like to be able to do the same with jfrog.platform; however, I have not found any variable that would allow me to change the directory for logs.
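
One workaround in the meantime, assuming the default JFROG_HOME layout, is to relocate the log directory and leave a symlink behind (a sketch, not an official knob; /data/artifactory-logs is a placeholder):

# Relocate Artifactory logs and point the default location at the new path.
sudo systemctl stop artifactory
sudo mv /opt/jfrog/artifactory/var/log /data/artifactory-logs
sudo ln -s /data/artifactory-logs /opt/jfrog/artifactory/var/log
sudo systemctl start artifactory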

[OPENSHIFT] Upgrade Path From 7.7.3 to 7.9.0 broken

The upgrade path is currently broken in the Openshift operators from v1.0.3 to v1.1.0 due to changes in the underlying base charts, which we are trying to resolve. For now customers are recommended not to install v1.1.0 unless it is a new install.

[ansible] Add variable for PostgreSQL repo URLs

Is this a request for help?: no


Is this a BUG REPORT or FEATURE REQUEST? (choose one): FEATURE REQUEST

Which installer: Ansible

Which product and version (eg: ansible & collection version - 1.1.2): Ansible & collection version 7.18.6

Which operating system and version(eg: ubuntu & version - 20.4): Ubuntu 20.04.2

What happened: Postgres role fails to install, throwing "connection refused" when apt_key and apt_repository try to add the repository at https://www.postgresql.org

What you expected to happen: have a variable for the key and repo URL

How to reproduce it (as minimally and precisely as possible): run the Postgres role in an air-gapped environment

Anything else we need to know: I suggest adding variables for the PostgreSQL repo and repo-key URLs. The defaults can contain the internet URLs, which can be overridden in an air-gapped environment; see the sketch below.
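
Something along these lines would do it; the values below mirror the upstream PGDG key and repo as an illustration, and the variable names are only a suggestion:

# Hypothetical overridable defaults; point them at an internal mirror when air-gapped.
cat <<'EOF' >> group_vars/postgres_servers.yml
postgres_apt_key_url: "https://www.postgresql.org/media/keys/ACCC4CF8.asc"
postgres_apt_repository_repo: "deb http://apt.postgresql.org/pub/repos/apt {{ ansible_distribution_release }}-pgdg main"
EOF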

[ansible/artifactory] AnsibleError: template error while templating string: tag name expected.

When running the Artifactory play the artifactory role errors with:

TASK [jfrog.installers.artifactory : configure system yaml template] *********************************************************
task path: /root/.ansible/collections/ansible_collections/jfrog/installers/roles/artifactory/tasks/install.yml:112
fatal: [art1]: FAILED! => changed=false
  msg: |-
    AnsibleError: template error while templating string: tag name expected. String: ## @formatter:off
    ## JFROG ARTIFACTORY SYSTEM CONFIGURATION FILE
    ## HOW TO USE: comment-out any field and keep the correct yaml indentation by deleting only the leading '#' character.
    configVersion: 1

This is caused by a badly formed Jinja2 template, system.yaml.j2 in the artifactory role's templates directory; specifically the if-else-endif block at the bottom:

## Example for mysql/postgresql
    type: "{{ db_type }}"
{%+ if db_type == 'derby' -%}
#    driver: "{{ db_driver }}"
#    url: "{{ db_url }}"
#    username: "{{ db_user }}"
{%+ else -%}
     driver: "{{ db_driver }}"
     url: "{{ db_url }}"
     username: "{{ db_user }}"
{%+ endif -%}
    password: "{{ db_password }}"

This can be fixed by removing the Jinja2 if-else-endif block, which does not appear to contain logic that changes anything anyway:

## Example for mysql/postgresql
    type: "{{ db_type }}"
    driver: "{{ db_driver }}"
    url: "{{ db_url }}"
    username: "{{ db_user }}"
    password: "{{ db_password }}"

[ansible/postgres] RHEL 7.8 - 'postgres_server_initdb_become' is undefined

Hi

Using collection - jfrog.installers:1.1.2

Trying to use the collection in a local VirtualBox lab on our RHEL 7.8 VM, the install runs to the "initialize PostgreSQL database cluster" task and then errors as below.

TASK [jfrog.installers.postgres : initialize PostgreSQL database cluster] ****************************************************
task path: /root/.ansible/collections/ansible_collections/jfrog/installers/roles/postgres/tasks/main.yml:33
fatal: [postgres]: FAILED! =>
  msg: 'The field ''become'' has an invalid value, which includes an undefined variable. The error was: ''postgres_server_initdb_become'' is undefined'

On executing with -vv verbose I can see the variables being created by the collections postgres role as follows:

TASK [jfrog.installers.postgres : define distribution-specific variables] ****************************************************
task path: /root/.ansible/collections/ansible_collections/jfrog/installers/roles/postgres/tasks/main.yml:2
ok: [postgres] => changed=false
  ansible_facts:
    postgres_server_cmd_initdb: /usr/pgsql-{{ postgres_server_version }}/bin/postgresql{{ postgres_server_pkg_version }}-setup initdb -D
    postgres_server_config_data_directory: null
    postgres_server_config_external_pid_file: null
    postgres_server_config_hba_file: null
    postgres_server_config_ident_file: null
    postgres_server_config_location: '{{ postgres_server_data_location }}'
    postgres_server_data_location: /var/lib/pgsql/{{ postgres_server_version }}/data
    postgres_server_service_name: postgresql-{{ postgres_server_version }}
  ansible_included_var_files:
  - /root/.ansible/collections/ansible_collections/jfrog/installers/roles/postgres/vars/RedHat.yml

As you can see, it appears the required postgres_server_initdb_become var is missing from /postgres/vars/RedHat.yml, although I have checked and it is included in the RedHat_pg-9.6.yml vars file. Should that one have been used and pulled through the play instead, or as well?
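
As a stopgap, defining the missing variable in the inventory or group_vars unblocks the run (a sketch; the value true is an assumption, since initdb has to run as the postgres user):

# Workaround: supply the variable the role fails to load (value assumed).
cat <<'EOF' >> group_vars/postgres.yml
postgres_server_initdb_become: true
EOF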

[ansible/artifactory] artifactory role is still not idempotent

Is this a request for help?:

Bug report.

This is a continuation of #75, which was only partially fixed.

See #75 (comment)

One more observation - which was probably already spotted during internal testing, but just in case: there is an idempotence issue with generating system.yaml file as well. Namely, there are 3 situations when it's being updated:

  1. by this Ansible role when generating it for the 1st time
  2. by the installService.sh script run from this Ansible role (probably one time only as well, but I'm not sure if that's so), adding the user.shared
  3. by Artifactory itself when it's encrypting the DB key

This means that re-running Ansible overwrites the changes made by (2) and (3).


Which installer:

Ansible

Which product and version:

Artifactory 7.18.5

What happened:

Running the playbook twice against an existing host restarts artifactory despite no changes.

On second run of the playbook the following happens:

TASK [jfrog.platform.artifactory : Configure systemyaml] ************************************************************************************************************
changed: [artifacts.host]
 
TASK [jfrog.platform.artifactory : Configure join key] **************************************************************************************************************
changed: [artifacts.host]
 
TASK [jfrog.platform.artifactory : Create artifactory service] ******************************************************************************************************
changed: [artifacts.host]
 
RUNNING HANDLER [jfrog.platform.artifactory : restart artifactory] **************************************************************************************************
changed: [artifacts.host]

What you expected to happen:

Idempotency is a crucial rule in Ansible.

I'd expect this playbook to be idempotent, i.e. not take any actions if it's not making any actual changes, especially an action as intrusive as a service restart.

How to reproduce it (as minimally and precisely as possible):

https://github.com/jfrog/JFrog-Cloud-Installers/blob/master/Ansible/examples/playbook-rt.yml

Run it twice:

ansible-playbook -i inventory.yml playbook-rt.yml

Anything else we need to know:

n/a
