uselagoon / lagoon
Lagoon, the developer-focused application delivery platform
Home Page: https://docs.lagoon.sh/
License: Apache License 2.0
When building the centos7-mariadb10 image, the build fails with:
warning: /var/cache/yum/x86_64/7/mariadb/packages/MariaDB-10.2.8-centos7-x86_64-common.rpm: Header V4 DSA/SHA1 Signature, key ID 1bb943db: NOKEY
Public key for MariaDB-10.2.8-centos7-x86_64-common.rpm is not installed
I solved the build locally by adding
RUN rpm --import https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
Will push an MR to fix that issue in a bit
/bastian
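In context, the fix could look like this in the centos7-mariadb10 Dockerfile (a sketch; the yum install line and package names around it are assumptions):

```dockerfile
# Import the MariaDB GPG key first so yum can verify the RPM signatures
RUN rpm --import https://yum.mariadb.org/RPM-GPG-KEY-MariaDB \
 && yum install -y MariaDB-server MariaDB-client \
 && yum clean all
```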
The api service is currently not using the yarn workspace system, so while building the api Docker image we re-download packages that have already been downloaded, which slows down the build.
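Enabling the yarn workspace system would roughly mean a root package.json like this (a sketch; the exact workspace globs are assumptions):

```json
{
  "private": true,
  "workspaces": [
    "services/*"
  ]
}
```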
In order to do that we probably need to (just off the top of my head) move
dependencies
and devDependencies
from https://github.com/amazeeio/lagoon/blob/master/services/api/package.json into https://github.com/amazeeio/lagoon/blob/master/package.json
a lagoon-packages-builder build image, see example here: https://github.com/amazeeio/lagoon/blob/master/services/openshiftdeploy/Dockerfile

Running make build on macOS 10.12.6 crashes with the error "ssh-keygen: /lib64/libcrypto.so.10: version `OPENSSL_1.0.2' not found (required by ssh-keygen)"
Steps to reproduce:
Clone the lagoon repo and run make build
Wait a while; while building auth-ssh the process stops at
docker build --quiet --build-arg IMAGE_REPO=lagoon -t lagoon/auth-ssh -f services/auth-ssh/Dockerfile .
Step 12 in the process is where it crashes.
See attached console log
It looks like an upstream issue with the "${IMAGE_REPO:-amazeeiolagoon}/centos7:" image.
Trying to figure out how to fix it, because I think it also happened on a 'regular' CentOS 7 server I have in production, but that was when installing Apache 2.4.27 on a clean install.
Looks like the drush role cannot access siteHost:
curl 'http://localhost:3000/graphql?' -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzc2hLZXkiOiJBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFDQVFDNWI5QXdiNml3V1J6MXorUUR3TGxTdnRweGcvN3BSN1JmK2tKVDNraFFHaUx1WDRRazJhRENydDFmS25kanlVdWo1NW9jUmM4T2xFeHJBOEZQZlpicG5mYU5YZlZoOVRseWJoaklXVUh0TGJjOWdpQ29EdWlWMlRFYjZaVy9lc0tsVnRXb216Zlkvbmp3UEF1WEVVMmkzajVZeFZRdVoweStBTjR5Y2E0VjF5N3kyQmxVT3ZwNXFTdnovOFBjQkplZ2FvZXNTU3VGcFFXQ1I3ODgvaTBzVVJKaWFHNit0Wk4rYkdWU25KZ3RiWFFKdmMwcjQ1YXUzUEw1anlNY0FwYkhPQ0RRVUlpV3lDNmlwVzJFSVlUVUJWTzgwUVZFanltbUJacFpFUWwvMERRUEV6QmRxQ2k0WDR3bXdob2FuN2hRSlE3ZE5kcW1STCtXNEw3NXlQTnluZ05nSlZOVGxJWE9IWGV2cE5za0gvL0hVbHdFcDRXUEh1VDc2QlpUM2NxMVJDYXRuNmlNc0Zpd3BFU2s5eHlEYkVWUGZ4S0FyNzdjUnBGSUh1SmQ3YW1EalRrNy9LUDBQVGxpTzJWaUN5akR1ZGlhandOaDdYbVkwWnFZcmhISTE2ZWJUU0VTRHNsaDBhMEpsWnZXbFlhNHhEQkt0S3dmSU5ScmNyWW11WVV6U0Y3d243RjFvNDVjSFoya3VmdGlvT1FoTE5neDBMcXp0Uk1uM1JCb2VHM1FIYnhtdjlXZk8vMGhIYTRGZGtRTEVyRWR6RU05SlI0V2V3Um5oMDRkY2plalgxZGFXS2JvMlg4bmRhQ2MwQnZkTkRML0hwU2dxZkNuSjJObHE0cE02UG9ta3ZkcDV3VFdwWndDakRsMDhVZ3R0QUZyUXdqWW9CeTlVcFVtRFE9PSIsImlzcyI6ImF1dGgtc2VydmVyLmRldiIsInJvbGUiOiJkcnVzaCIsImF1ZCI6ImFwaS5kZXYiLCJpYXQiOjE1MDQxNzYyMDh9.MabPlSIRC-heC35MRZrYXdRDvUsPuBCjOWNAufq1k1c' -H 'Origin: http://localhost:3000' -H 'Accept-Encoding: gzip, deflate, br' -H 'Accept-Language: en' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36' -H 'Content-Type: application/json' -H 'Accept: application/json' -H 'Referer: http://localhost:3000/graphql' -H 'Cookie: _ga=GA1.1.1402557004.1486995190; ACEGI_SECURITY_HASHED_REMEMBER_ME_COOKIE=YWRtaW46MTUwNTM2NDQ3MjU3Mjo4ZjY4OThkNDBiZDhjM2ZkYzkzNjgxNjg5MmYzNmFiYzgwMjlkODdkM2IzZTdhYjM3NjFjODNlYzg2OTU3Yjcz' -H 'Connection: keep-alive' --data-binary '{"query":"{\n siteGroup:siteGroupByName(name: \"credentialtest\") {\n gitUrl\n slack {\n webhook\n channel\n informStart\n informChannel\n }\n client {\n clientName\n }\n sites {\n siteName\n 
siteBranch\n siteEnvironment\n siteHost\n serverInfrastructure\n serverIdentifier\n serverNames\n deployStrategy\n webRoot\n domains\n jumpHost\n }\n }\n}","variables":null,"operationName":null}' --compressed
result:
{
"data": {
"siteGroup": {
"gitUrl": "git@git:/git/credentialtest.git",
"slack": {
"webhook": "https://hooks.slack.com/services/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",
"channel": "amazeeio-testing",
"informStart": null,
"informChannel": null
},
"client": null,
"sites": [
{
"siteName": "credentialtest_branch2",
"siteBranch": "branch2",
"siteEnvironment": "development",
"siteHost": null,
"serverInfrastructure": "compact",
"serverIdentifier": "credentialtest",
"serverNames": [
"credentialtest.compact"
],
"deployStrategy": null,
"webRoot": null,
"domains": [
"credentialtest"
],
"jumpHost": null
}
]
}
}
}
curl 'http://localhost:3000/graphql?' -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzc2hLZXkiOiJBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFDQVFDNWI5QXdiNml3V1J6MXorUUR3TGxTdnRweGcvN3BSN1JmK2tKVDNraFFHaUx1WDRRazJhRENydDFmS25kanlVdWo1NW9jUmM4T2xFeHJBOEZQZlpicG5mYU5YZlZoOVRseWJoaklXVUh0TGJjOWdpQ29EdWlWMlRFYjZaVy9lc0tsVnRXb216Zlkvbmp3UEF1WEVVMmkzajVZeFZRdVoweStBTjR5Y2E0VjF5N3kyQmxVT3ZwNXFTdnovOFBjQkplZ2FvZXNTU3VGcFFXQ1I3ODgvaTBzVVJKaWFHNit0Wk4rYkdWU25KZ3RiWFFKdmMwcjQ1YXUzUEw1anlNY0FwYkhPQ0RRVUlpV3lDNmlwVzJFSVlUVUJWTzgwUVZFanltbUJacFpFUWwvMERRUEV6QmRxQ2k0WDR3bXdob2FuN2hRSlE3ZE5kcW1STCtXNEw3NXlQTnluZ05nSlZOVGxJWE9IWGV2cE5za0gvL0hVbHdFcDRXUEh1VDc2QlpUM2NxMVJDYXRuNmlNc0Zpd3BFU2s5eHlEYkVWUGZ4S0FyNzdjUnBGSUh1SmQ3YW1EalRrNy9LUDBQVGxpTzJWaUN5akR1ZGlhandOaDdYbVkwWnFZcmhISTE2ZWJUU0VTRHNsaDBhMEpsWnZXbFlhNHhEQkt0S3dmSU5ScmNyWW11WVV6U0Y3d243RjFvNDVjSFoya3VmdGlvT1FoTE5neDBMcXp0Uk1uM1JCb2VHM1FIYnhtdjlXZk8vMGhIYTRGZGtRTEVyRWR6RU05SlI0V2V3Um5oMDRkY2plalgxZGFXS2JvMlg4bmRhQ2MwQnZkTkRML0hwU2dxZkNuSjJObHE0cE02UG9ta3ZkcDV3VFdwWndDakRsMDhVZ3R0QUZyUXdqWW9CeTlVcFVtRFE9PSIsInN1YiI6ImFuc2libGUtdGVzdCIsImlzcyI6ImF1dGgtc2VydmVyLmRldiIsInJvbGUiOiJhZG1pbiIsImF1ZCI6ImFwaS5kZXYiLCJpYXQiOjE1MDM1MDI3MDB9.PGr-w3Wicb3X1ggF71emPnPbps3Zyh0DgKsmNxUAVoc' -H 'Origin: http://localhost:3000' -H 'Accept-Encoding: gzip, deflate, br' -H 'Accept-Language: en' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36' -H 'Content-Type: application/json' -H 'Accept: application/json' -H 'Referer: http://localhost:3000/graphql' -H 'Cookie: _ga=GA1.1.1402557004.1486995190; ACEGI_SECURITY_HASHED_REMEMBER_ME_COOKIE=YWRtaW46MTUwNTM2NDQ3MjU3Mjo4ZjY4OThkNDBiZDhjM2ZkYzkzNjgxNjg5MmYzNmFiYzgwMjlkODdkM2IzZTdhYjM3NjFjODNlYzg2OTU3Yjcz' -H 'Connection: keep-alive' --data-binary '{"query":"{\n siteGroup:siteGroupByName(name: \"credentialtest\") {\n gitUrl\n slack {\n webhook\n channel\n informStart\n informChannel\n }\n client {\n 
clientName\n }\n sites {\n siteName\n siteBranch\n siteEnvironment\n siteHost\n serverInfrastructure\n serverIdentifier\n serverNames\n deployStrategy\n webRoot\n domains\n jumpHost\n }\n }\n}","variables":null,"operationName":null}' --compressed
result:
{
"data": {
"siteGroup": {
"gitUrl": "git@git:/git/credentialtest.git",
"slack": {
"webhook": "https://hooks.slack.com/services/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",
"channel": "amazeeio-testing",
"informStart": null,
"informChannel": null
},
"client": {
"clientName": "credentialtestclient"
},
"sites": [
{
"siteName": "credentialtest_branch2",
"siteBranch": "branch2",
"siteEnvironment": "development",
"siteHost": "credentialtest.compact",
"serverInfrastructure": "compact",
"serverIdentifier": "credentialtest",
"serverNames": [
"credentialtest.compact"
],
"deployStrategy": null,
"webRoot": null,
"domains": [
"credentialtest"
],
"jumpHost": null
}
]
}
}
}
"siteHost": "credentialtest.compact",
also for role drush
For Drush we need to find a new role that allows READ access to graphql queries like this one:
{
siteGroup:siteGroupByName(name: "amazee_io") {
gitUrl
slack {
webhook
channel
informStart
informChannel
}
sites {
siteName
siteBranch
siteEnvironment
siteHost
serverInfrastructure
serverIdentifier
serverNames
deployStrategy
webRoot
domains
jumpHost
}
}
}
I'm perfectly fine with generating one token with a long lifetime which is then hardcoded here: https://github.com/amazeeio/lagoon/blob/master/helpers/drush-alias/web/aliases.drushrc.php.stub. We will slowly convert drush calls to real authed calls via the CLI over time, so it's just a temporary solution.
UPDATE:
For now, our credential system is not capable of attribute-based read permissions... so we will limit the drush role to only access Site / SiteGroup information. Access to Client information will be denied.
Alpine Image with Varnish 5
In #72 we send service logs to Elasticsearch; find a way to alert based on that.
During DrupalCon Vienna we had a BOF about Docker and Drupal; together with @zaporylie we discussed that it would make sense to combine efforts for preconfigured Drupal Images.
The idea would be that the Drupal Images that are built within Lagoon are donated to https://github.com/drupal-docker and maintained over there.
There are a couple of questions left:
.amazeeio.yml -> .lagoon.yml
.amazeeio.env.$BRANCH -> .lagoon.env.$BRANCH
com.amazeeio -> lagoon
Currently requests to the API are authorized via JWT tokens. For intra-service communication we create an admin token that is then used by every service to talk to the API.
Unfortunately this makes bootstrapping a new Lagoon very hard, as you need the Lagoon system running in order to create a new JWT token, but in the very first stages we don't have Lagoon running yet :)
So my idea would be to also allow communication to the API via the JWTSECRET directly. As we distribute the JWTSECRET to all services already, they could just use it to talk to the API.
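A minimal sketch of what "talking to the API via the JWTSECRET directly" could mean: each service mints its own HS256 token from the shared secret. The claim names and the openssl-based signing shown here are assumptions, not the existing auth-server code:

```shell
#!/bin/sh
# Sketch: mint an HS256 JWT directly from the shared JWTSECRET,
# instead of asking a running Lagoon for a token.
b64url() { base64 | tr -d '=\n' | tr '+/' '-_'; }

make_jwt() {
  secret="$1"; issuer="$2"
  header=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
  payload=$(printf '{"role":"admin","iss":"%s","aud":"api.dev"}' "$issuer" | b64url)
  # Sign "header.payload" with HMAC-SHA256 using the shared secret
  sig=$(printf '%s.%s' "$header" "$payload" \
    | openssl dgst -sha256 -hmac "$secret" -binary | b64url)
  printf '%s.%s.%s\n' "$header" "$payload" "$sig"
}

make_jwt "super-secret" "auth-server.dev"
```

The service would then send this as Authorization: Bearer <token>, and the API can verify it with the same JWTSECRET.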
.lagoon.yml
Development Environments should have some special headers (x-robots, etc.).
We need a system that automatically injects them (probably via Lua?).
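As a sketch, the injected snippet (whether templated or added via Lua) could be as simple as this; the exact header value is an assumption:

```nginx
# Only added on development environments: keep them out of search indexes
add_header X-Robots-Tag "noindex, nofollow" always;
```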
BLACKFIRE_SERVER_ID and BLACKFIRE_SERVER_TOKEN env variables are existing

For common tasks (redirects, basic auth) it would be nice to tell developers to define some ENV variables instead of writing full nginx configs.
These ENV variables are read by an entrypoint script, which auto-generates nginx configs based on them.
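A sketch of such an entrypoint helper; the variable names NGINX_REDIRECT_FROM/NGINX_REDIRECT_TO are assumptions, not an agreed convention:

```shell
#!/bin/sh
# Sketch: render an nginx snippet from simple ENV variables so developers
# don't have to write full nginx configs for a redirect.
generate_nginx_conf() {
  out="$1"
  : > "$out"
  if [ -n "$NGINX_REDIRECT_FROM" ] && [ -n "$NGINX_REDIRECT_TO" ]; then
    cat >> "$out" <<EOF
server {
  server_name $NGINX_REDIRECT_FROM;
  return 301 \$scheme://$NGINX_REDIRECT_TO\$request_uri;
}
EOF
  fi
}

NGINX_REDIRECT_FROM=www.example.com
NGINX_REDIRECT_TO=example.com
generate_nginx_conf /tmp/redirects.conf
cat /tmp/redirects.conf
```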
Figure out how to run New Relic inside a fully dockerized environment (maybe run the NR Agent only once and let the php new relic modules talk to that agent?)
Should be a new key in the API project storage, which allows defining which environment (aka branch name) should be used as the production environment.
Instead of telling developers to overwrite the whole nginx config if they just need to change a small piece of the config (like blocking IPs, special redirects, etc.), the main nginx config should include some files in a known directory.
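For example, the main config could end with an include of a known drop-in directory (the path is an assumption), so a project only ships small extra files:

```nginx
# Project-specific snippets (blocking IPs, redirects, ...) live here
include /etc/nginx/conf.d/extra/*.conf;
```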
remove AMAZEEIO_ from all environment variables
keep LAGOON_AVAILABILITY_CLASS
keep LAGOON_LOCATION (add name of openshift)
AMAZEEIO_SITE_BRANCH -> LAGOON_GIT_BRANCH
AMAZEEIO_SITE_ENVIRONMENT -> LAGOON_ENVIRONMENT_TYPE
AMAZEEIO_SITE_GROUP -> LAGOON_PROJECT
remove AMAZEEIO_SITE_NAME
AMAZEEIO_SITE_URL: remove for now (will be added later with LAGOON_ROUTES, see #79)
AMAZEEIO_TMP_PATH: replace with just /tmp
remove AMAZEEIO_WEBROOT
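The renames above could be captured in a small transition shim, e.g. for entrypoints that still export the old names (a sketch; only the mappings listed above are encoded):

```shell
#!/bin/sh
# Sketch: map old AMAZEEIO_* variable names to their LAGOON_* replacements.
# Variables without a mapping are removed (empty result).
map_env_name() {
  case "$1" in
    AMAZEEIO_SITE_BRANCH)      echo "LAGOON_GIT_BRANCH" ;;
    AMAZEEIO_SITE_ENVIRONMENT) echo "LAGOON_ENVIRONMENT_TYPE" ;;
    AMAZEEIO_SITE_GROUP)       echo "LAGOON_PROJECT" ;;
    *)                         echo "" ;;
  esac
}

map_env_name AMAZEEIO_SITE_GROUP
```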
The image oc-build-deploy-dind is mostly responsible for checking out git code, running Docker builds, creating OpenShift resources and monitoring deployments.
It is all implemented in Bash and we hit limits in terms of handling special cases, etc.
So the idea would be to reimplement it in Go, using Kompose as inspiration.
Current idea:
We don't want our users to install oc on their systems. Instead we would like to have an SSH server that, when connected to, runs a forced command with oc rsh that connects to the wished container.
We already have a system that can dynamically look at incoming ssh keys and figure out which sites an ssh key has access to: https://github.com/amazeeio/lagoon/blob/develop/services/auth-ssh/sshd_config#L15
So the idea of the flow is:
The user connects with ssh [email protected] to the SSH server endpoint that is running at server.com (amazeeio is the sitegroup, prod is the site to connect to)
The forced command extracts amazeeio and prod and runs oc rsh -n amazeeio-prod dc/cli, which connects the user to the cli container of the OpenShift project amazeeio-prod
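The forced command could be sketched like this; the username format and the dc/cli target follow the flow above, while the validation pattern is an assumption:

```shell
#!/bin/sh
# Sketch: build the `oc rsh` command from the SSH username
# "<sitegroup>-<site>", which maps directly to the OpenShift project name.
build_rsh_command() {
  user="$1"
  case "$user" in
    ""|-*|*-|*[!a-z0-9-]*)
      echo "invalid user: $user" >&2
      return 1 ;;
  esac
  printf 'oc rsh -n %s dc/cli\n' "$user"
}

build_rsh_command amazeeio-prod
# prints: oc rsh -n amazeeio-prod dc/cli
```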
Your docs refer to Lagoon as being for "Openshift & Kubernetes". Is it possible to run Lagoon outside of OpenShift on another Kubernetes provider?
I ask because Azure provides $5000 of free credit p.a. for non-profits which is very tempting.
While moving slackin over to lagoon, the first push of master w/ the lagoon changes failed to deploy, showing this error in the openshiftdeploy log
2017-08-31T22:23:29.633Z - silly: Error from server (Forbidden): User "system:serviceaccount:amze-amazeeio:jenkins" cannot list rolebindings in project "amze-amazeeio-slackin-master"
I saw similar behavior when pushing the develop branch for the first time as well. A subsequent push had the deployment succeed each time.
Is there any documentation, or a best way to start using it?
Let's say I want to set up a new drupal site to work on and have it deployed somewhere, say digital ocean, linode or whatever.
Thanks.. I am very curious to give this a try.
XDEBUG_ENABLE env variable is existing

AC: figure out how to purge varnish in systems that have multiple varnishes
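One possible shape (a sketch; the PURGE method, host list, and ports are assumptions about the varnish setup): iterate over all varnish instances of a site and purge on each. DRY_RUN=1 only prints the commands:

```shell
#!/bin/sh
# Sketch: purge a path on every varnish of a site, since a single
# purge request would only hit one instance.
purge_all() {
  path="$1"; shift
  for host in "$@"; do
    if [ "$DRY_RUN" = "1" ]; then
      echo "curl -X PURGE http://$host$path"
    else
      curl -s -X PURGE "http://$host$path" > /dev/null
    fi
  done
}

DRY_RUN=1
purge_all /about varnish-1:8080 varnish-2:8080
```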
I think we could rethink how the API stores its data. Currently we are required to store data in the Hiera YAML format as our v3 infrastructure also uses this API in order to provision servers, etc.
The idea initially was that the v4 infrastructure uses the exact same API, and then in the future, when everything is migrated, we can remove the Hiera YAML format and move to another storage system - I would call this the parallel migration process.
I think though we should rethink this idea and have the following suggestion:
Whenever we start a MySQL/MariaDB it also creates a cronjob that connects to the running mysql and dumps all databases into a persistent storage that is only mounted into the backup cronjob container.
Also check with VSHN how they do it in Appuio
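The cronjob body could be sketched like this. Host, credentials, the /backup mount path, and the database list are assumptions; list_databases is a hypothetical helper stubbed here, and DRY_RUN=1 only prints the commands:

```shell
#!/bin/sh
# Sketch: dump every database of the running MySQL/MariaDB into the
# backup volume that is only mounted into this cronjob container.
list_databases() {
  # Real job: mysql -h "$MYSQL_HOST" -N -e 'SHOW DATABASES' (minus system DBs).
  # Stubbed for the sketch:
  printf 'drupal\nmatomo\n'
}

backup_databases() {
  for db in $(list_databases); do
    cmd="mysqldump -h \$MYSQL_HOST $db > /backup/$db.sql"
    if [ "$DRY_RUN" = "1" ]; then echo "$cmd"; else eval "$cmd"; fi
  done
}

DRY_RUN=1
backup_databases
```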
As discussed in #29 we want to rebuild the Storage of the API
At the same time we also create new Objects, in this hierarchy:
ssh keys can be referenced from customer and from project
Customer object:

key | type | description | example value |
---|---|---|---|
name | String | Unique. name of the customer. | amazeeio |
ssh_keys | reference to ssh_keys object | can be referencing to multiple ssh keys, this ssh key will have access to all projects of this customer | |
comment | free text | some comment about the client | |
created | date | time of creation date of customer. Date format tbd. | |
private_key | SSH Private key | ssh private key for this specific user, will be used during the deployment to access the git repositories that should be deployed |
Project object:

key | type | description | example value |
---|---|---|---|
name | string | Unique. name of the project | awesomewebsite |
client | reference to client | reference to the client of this project | |
ssh_keys | reference to ssh_keys object | can be referencing to multiple ssh keys, used to allow specific ssh keys only access to a single project | |
git_url | string | git url of the project, needs to be in ssh format | [email protected]:amazeeio/awesomewebsite.git |
slack | reference to slack object | can be either existing (if slack enabled) or not (no slack notifications) | |
active_systems_deploy | String | Name of the active system for deployment | lagoon_openshiftDeploy |
active_systems_remove | String | Name of the active system for removals | lagoon_openshiftRemove |
branches | String | Regex of branches to be deployed, default is all branches | ^(master|staging)$ or .* |
pullrequests | Bool | Enable or disable pull request builds, default: false | true, false |
openshift | Reference to OpenShift Object | Used to define which OpenShift this project should be deployed to |
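The branches regex in the project object above could be applied like this when a webhook comes in (a sketch; grep -E stands in for whatever regex engine the deploy system uses):

```shell
#!/bin/sh
# Sketch: decide whether a pushed branch should be deployed, based on
# the `branches` regex stored on the project.
should_deploy() {
  branch="$1"; pattern="$2"
  printf '%s\n' "$branch" | grep -Eq "$pattern"
}

should_deploy master '^(master|staging)$' && echo "deploy master"
should_deploy feature-x '^(master|staging)$' || echo "skip feature-x"
```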
OpenShift object:

key | type | description | example value |
---|---|---|---|
name | string | unique, name of the openshift server | |
console_url | URL | URL of the console to connect to | https://console.appuio.ch |
registry | Domain String | Domain (not full URL) of the docker registry to push to | registry.appuio.ch |
token | JSON Web Token | token of the service account to use to create openshift resources. | |
username | String | Username of the OpenShift User that should be used to create openshift resources | foo |
password | String | Password of the OpenShift user that should be used to create openshift resources | bar |
router_pattern | String | String with the router pattern that will be used on that specific OpenShift server, has two substitutions: ${project}, ${environment} that will be substituted automatically | ${project}.${environment}.appuio.amazee.io |
project_user | String | OpenShift Username that should also be given access too when creating a new project | [email protected] |
SSH key object:

key | type | description | example value |
---|---|---|---|
name | String | Unique. name of the ssh key, most probably an email address, but can be any string. | [email protected] |
key | ssh key | the actual ssh key, with no type or email address at the end | AAAAC3NzaC1lZDI1NTE5AAAAICtH4WLYkj55uZ/cLtTjnb0QbutYX1xBJbUzpRhBXeq3 |
type | ssh key type | the type of the ssh key, by default ssh-rsa | ssh-rsa, ssh-ed25519 |
Slack object:

key | type | description | example value |
---|---|---|---|
webhook | URL | URL of the Slack Incoming Webhook | https://hooks.slack.com/services/AAAAAAA/BBBBBBBB/CCCCCCCC |
channel | String | Name of the Slack Channel to send notifications to | mychannel |
Title says it all: like we have logs2slack today, we would need logs2mattermost.
Maybe Mattermost is Slack-API compatible; then we could actually use logs2slack for it.
After the initial deployment of a stateful set, nothing is done on subsequent deployments; we don't want to fail a build in this case.
No logs of cronjobs yet.
Find a system to save the logs of the cronjobs implemented in #74.
Title says it all: like we have logs2slack today, we would need logs2rocketchat.
Maybe rocketchat is Slack-API compatible; then we could actually use logs2slack for it.
I think @twardnw did some research here already?
Documentation to Update
Get your Drupal Site running on amazee.io
https://docs.amazee.io/step_by_step_guides/push-local-site-to-development-server.html
https://docs.amazee.io/step_by_step_guides/grant_amazeeio_access_to_sourcecode.html#github---webhook
https://docs.amazee.io/step_by_step_guides/golive_on_amazeeio.html
https://docs.amazee.io/local_docker_development/local_docker_development.html (make pygmy default for OS X, deprecate cachalot)
Architecture
https://docs.amazee.io/tools/xdebug.html (remove remote)
https://docs.amazee.io/customization/docker.html (full rewrite)
rsync without drush
Find way to have User Documentation inside Lagoon Git Repo
With OpenShift we cannot expose port 22; it would need to be something at 30000+.
It would be awesome to have some way to run the auth-ssh server on port 22
auth-server/blacklist GET