ome/devspace
Continuous integration tool for OME projects
When starting a fresh devspace there is no ~/.ssh/known_hosts file. This causes push builds to fail:
+ /home/omero/.local/bin/scc merge develop --no-ask --reset -S no-error --push develop/merge/trigger
2016-04-18 17:28:54,440 [ scc.merge] INFO Merging Pull Request(s) based on develop
2016-04-18 17:28:54,440 [ scc.merge] INFO Including Pull Request(s) opened by any public member of the organization
2016-04-18 17:28:54,440 [ scc.merge] INFO Including Pull Request(s) labelled as include
2016-04-18 17:28:54,440 [ scc.merge] INFO Excluding Pull Request(s) labelled as exclude or breaking
2016-04-18 17:28:54,440 [ scc.merge] INFO Excluding Pull Request(s) with either error or failure status
2016-04-18 17:30:00,271 [ scc.merge] INFO Repository: openmicroscopy/bioformats
2016-04-18 17:30:00,271 [ scc.merge] INFO Excluded PRs:
...
2016-04-18 17:30:00,272 [ scc.merge] INFO Merged PRs:
...
2016-04-18 17:30:00,273 [ scc.merge] INFO
Traceback (most recent call last):
File "/home/omero/.local/lib/python2.7/site-packages/scc/main.py", line 76, in entry_point
(UpdateSubmodules.NAME, UpdateSubmodules),
File "/home/omero/.local/lib/python2.7/site-packages/yaclifw/framework.py", line 197, in main
ns.func(ns)
File "/home/omero/.local/lib/python2.7/site-packages/scc/git.py", line 2912, in __call__
self.push(args, self.main_repo)
File "/home/omero/.local/lib/python2.7/site-packages/scc/git.py", line 1984, in push
main_repo.rpush(branch_name, remote, force=True)
File "/home/omero/.local/lib/python2.7/site-packages/scc/git.py", line 1833, in rpush
self.push_branch(branch_name, remote=full_remote, force=force)
File "/home/omero/.local/lib/python2.7/site-packages/scc/git.py", line 98, in wrapper
return func(*args, **kwargs)
File "/home/omero/.local/lib/python2.7/site-packages/scc/git.py", line 1147, in push_branch
self.call("git", "push", "-f", remote, name)
File "/home/omero/.local/lib/python2.7/site-packages/scc/git.py", line 1013, in call
return self.wrap_call(self.debugWrap, *command, **kwargs)
File "/home/omero/.local/lib/python2.7/site-packages/scc/git.py", line 1038, in wrap_call
raise Exception("rc=%s" % rc)
Exception: rc=128
Build step 'Execute shell' marked build as failure
Finished: FAILURE
This is because:
The authenticity of host 'github.com (192.30.252.123)' can't be established.
RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
Are you sure you want to continue connecting (yes/no)? yes
To solve the problem:
$ docker exec -it develop_testice35_1 bash
[root@901eecee7574 linux]# su - omero
-bash-4.2$ cd workspace/BIOFORMATS-push/src/
-bash-4.2$ /home/omero/.local/bin/scc merge develop --no-ask --reset -S no-error --push develop/merge/trigger
...
The authenticity of host 'github.com (192.30.252.123)' can't be established.
RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
Are you sure you want to continue connecting (yes/no)? yes
2016-04-18 17:31:25,958 [ scc.merge] INFO Merged branch pushed to https://github.com/snoopycrimecop/bioformats/tree/develop/merge/trigger
-bash-4.2$
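To avoid the interactive prompt entirely, the known_hosts file could be pre-seeded when the slave image is built. A minimal sketch, assuming the /home/omero layout shown in the logs above (an illustrative fragment, not the repository's actual Dockerfile):

```dockerfile
# Pre-populate known_hosts so a non-interactive `git push` does not stall
# on host-key verification; the github.com key is fetched at build time.
RUN mkdir -p /home/omero/.ssh \
    && ssh-keyscan github.com >> /home/omero/.ssh/known_hosts \
    && chown -R omero:omero /home/omero/.ssh
```

Alternatively, setting StrictHostKeyChecking accept-new in ~/.ssh/config (OpenSSH 7.6 or later) achieves the same effect without baking a fingerprint into the image.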
New container using https://hub.docker.com/r/openmicroscopy/apacheds/
Rather than using the Jenkins admin capabilities, it may be simpler to use nginx to configure a user.
This leads to permission problems (requiring USER_ID to be set), and also prevents it from running on a distributed cluster.
PHASE I (Jan 2022)
The current build is failing due to an old version of the Jenkins Docker image. The main goal of this work is to have an updated, working environment to test SCL Python 3.8 on CentOS 7.
Several repositories need to be updated/adjusted in order to get things to work. Below is the list of repositories with changes:
Infra/Build
DNS assumptions
Jobs
Three jobs were green in merge-ci and failed on a fresh installation, giving us possible false positives.
Others
Other changes in BioFormats related repositories will also be required.
mvn deploy
ZarrReader
prevents the usage of the more suitable flag altSnapshotDeploymentRepository
PHASE II (Jan 2023)
Upgrade of Jenkins to 2.375.
Infrastructure
Java 11 related issues:
The following PRs fail the build due to a Javadoc-related flag.
Gradle related issues:
Several jobs make persistent changes to the build environment, e.g. https://github.com/openmicroscopy/devspace/blob/0.10.0/home/jobs/BIOFORMATS-push/config.xml#L78 which installs scc outside of a sandboxed virtualenv.
Either these dependencies should be in the Dockerfile, or in a dedicated virtualenv. Moving all builds to docker would also solve this problem.
As reported by @gusferguson, we currently have no exe4j installed on the Jenkins slave, meaning no Windows Insight/Importer artifacts are created by the OMERO-build job.
Minimally, we might want to download https://www.ej-technologies.com/download/exe4j/files and include it in the slave/Dockerfile in the right place.
While testing the redeployment of the devspace, I realized that the slave Docker container has locale issues:
[sbesson@ome-c6220-1-4 slave]$ docker build -t slave .
Sending build context to Docker daemon 7.168 kB
Step 1 : FROM openmicroscopy/devslave-c7:0.2.1
...
Step 35 : CMD /tmp/run.sh
---> Running in e8db8a36e88d
---> 68a842d39b72
Removing intermediate container e8db8a36e88d
Successfully built 68a842d39b72
docker run -it slave bash
[sbesson@ome-c6220-1-4 slave]$ docker run -it slave bash
[omero@9c14e5352b81 linux]$ locale
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
This leads to errors, especially when running unit tests. I briefly looked at the build logs and the issues seem to originate after the reinstallation of glibc-common:
+ yum -y install zlib-devel libjpeg-devel gcc
...
+ yum -y install gcc-c++
Failed to set locale, defaulting to C
I'll work on a minimal container to reproduce.
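For reference, a common fix on CentOS 7 based images where glibc-common has been reinstalled without its locale data is to regenerate the locale explicitly. A hedged sketch (a fragment, not the actual slave/Dockerfile):

```dockerfile
# Regenerate the en_US.UTF-8 locale that the reinstalled glibc-common no
# longer ships, then export it so child processes inherit a valid setting.
RUN localedef -i en_US -f UTF-8 en_US.UTF-8
ENV LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8
```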
Currently omero-build is the primary repository for checking the build status of downstream PRs, leading it to have a more complicated travis script, see https://github.com/ome/omero-build/blob/master/travis.sh
Adding in Bio-Formats to that script would further complicate matters. Perhaps this repository is a more natural location for that. However, to do so, it will be necessary to either spin up a devspace and check its status with a single script or run parts of this repository with less overhead.
ci/ext/oracle credentials
SOURCE location
We need a workaround for:
Step 5 : RUN /usr/local/bin/plugins.sh /usr/share/jenkins/ref/plugins.txt
---> Running in f80dec721f77
Downloading swarm:2.0
Downloading git:2.4.0
curl: (7) Failed to connect to updates.jenkins-ci.org port 443: Connection timed out
ERROR: Service 'jenkins' failed to build: The command '/bin/sh -c /usr/local/bin/plugins.sh /usr/share/jenkins/ref/plugins.txt' returned a non-zero code: 7
Currently looking into orca-3:42888's web startup failures (e.g. in build 8):
Apr 17 12:28:03 a5a0365505ae systemd[1]: Failed to start OMERO.web.
Apr 17 12:28:03 a5a0365505ae systemd[1]: Unit omero-web.service entered failed state.
Apr 17 12:28:03 a5a0365505ae systemd[1]: omero-web.service failed.
[root@a5a0365505ae tmp]# cat /etc/systemd/system/omero-web.service
[Unit]
Description=OMERO.web
[Service]
User=omero
Type=forking
PIDFile=/home/omero/OMERO.server/var/django.pid
Restart=on-failure
RestartSec=10
Environment="VENVDIR=/home/omero/omero-virtualenv" "BINDIR=/home/omero/OMERO.server/bin"
ExecStart=/usr/bin/bash -c "source $VENVDIR/bin/activate; $BINDIR/omero web start"
ExecStop=/usr/bin/bash -c "source $VENVDIR/bin/activate; $BINDIR/omero web stop"
Restart=/usr/bin/bash -c "source $VENVDIR/bin/activate; $BINDIR/omero web restart"
[Install]
WantedBy=multi-user.target
[root@a5a0365505ae tmp]# su - omero
Last login: Sun Apr 17 12:27:15 UTC 2016
-bash-4.2$ VENVDIR=/home/omero/omero-virtualenv/
-bash-4.2$ BINDIR=/home/omero/OMERO.server/bin/
-bash-4.2$ bash -c "source $VENVDIR/bin/activate; $BINDIR/omero web start"
Copying '/home/omero/OMERO.server/lib/python/omeroweb/feedback/static/feedback/css/layout.css'
Traceback (most recent call last):
File "manage.py", line 56, in <module>
execute_from_command_line(sys.argv)
File "/home/omero/omero-virtualenv/lib/python2.7/site-packages/django/core/management/__init__.py", line 354, in execute_from_command_line
utility.execute()
File "/home/omero/omero-virtualenv/lib/python2.7/site-packages/django/core/management/__init__.py", line 346, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/omero/omero-virtualenv/lib/python2.7/site-packages/django/core/management/base.py", line 394, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/omero/omero-virtualenv/lib/python2.7/site-packages/django/core/management/base.py", line 445, in execute
output = self.handle(*args, **options)
File "/home/omero/omero-virtualenv/lib/python2.7/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 168, in handle
collected = self.collect()
File "/home/omero/omero-virtualenv/lib/python2.7/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 107, in collect
handler(path, prefixed_path, storage)
File "/home/omero/omero-virtualenv/lib/python2.7/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 315, in copy_file
self.storage.save(prefixed_path, source_file)
File "/home/omero/omero-virtualenv/lib/python2.7/site-packages/django/core/files/storage.py", line 63, in save
name = self._save(name, content)
File "/home/omero/omero-virtualenv/lib/python2.7/site-packages/django/core/files/storage.py", line 222, in _save
os.makedirs(directory)
File "/home/omero/omero-virtualenv/lib64/python2.7/os.py", line 150, in makedirs
makedirs(head, mode)
File "/home/omero/omero-virtualenv/lib64/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/home/omero/static/web/feedback'
-bash-4.2$ OMERO.server//bin/omero config get
omero.web.application_server=wsgi-tcp
omero.web.application_server.host=0.0.0.0
omero.web.application_server.max_requests=0
omero.web.application_server.port=4080
omero.web.caches={"default": {"BACKEND": "redis_cache.RedisCache","LOCATION": "redis:6379"}}
omero.web.prefix=/web
omero.web.server_list=[["omero", 4064, "omero"], ["slave", 4064, "slave"]]
omero.web.session_engine=django.contrib.sessions.backends.cache
omero.web.static_root=/home/omero/static/web
omero.web.static_url=/web/static/
drwxr-xr-x 3 root root 4096 Apr 8 21:37 static
As a workaround I've made static and static/web world-writeable.
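A less permissive alternative (a sketch, not what was actually deployed) would be to pre-create the static root owned by the service user instead of opening permissions. Demonstrated below against a temporary prefix standing in for the real filesystem root:

```shell
# Pre-create the static root with the right ownership rather than making it
# world-writeable. "prefix" stands in for / on the real host, where these
# commands would run as root and be followed by the chown shown in comments.
prefix=$(mktemp -d)
install -d -m 755 "$prefix/home/omero/static/web"
# real host only: chown -R omero:omero "$prefix/home/omero/static"
test -d "$prefix/home/omero/static/web" && result=created
echo "$result"
```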
Alternatively, extract helper scripts from omero-install for re-use
As of 0.16.0, this repository includes some minimal infrastructure to spin up a Nexus container and initialize a default maven-internal repository which can be used by the various Java building jobs to upload artifacts - see https://github.com/ome/devspace/blob/0.16.0/nexus-data/createRepoMavenInternal.json.
In the context of a permanent devspace, the default configuration will lead to a steady increase of the binary content under nexus-data, which will eventually fill up the disk space.
Similarly to what is done for the Jenkins jobs generating artifacts, the way to manage disk space is to implement a cleanup policy that rotates the artifacts. For instance, in https://merge-ci.openmicroscopy.org/jenkins/, a cleanup policy keeping only the last 5 snapshots is attached to the maven-internal repository. As noted in the documentation, a Compact blob store task also needs to be created and scheduled regularly to reclaim disk space for deleted artifacts (run daily in the merge-ci example described above).
Ideally, these tasks and policies should be configured when initializing the Maven data. The default Maven API does not appear to include this level of granularity. #159 investigated a new strategy based on the usage of nexus-cli to create these more advanced configurations and is likely the best starting point for managing this configuration.
Automatic builds should be configured for each of the Dockerfiles in this repo.
Currently devspace is designed to work on a dedicated VM. Due to the overhead in managing VMs, it would be a lot easier if multiple devspaces could run on a shared Docker host.
Cross server and web testing can only be done by linking containers to each other. That is currently not possible with compose file version 1; it can only be achieved using version 2 with networks.
cc @manics
The version of the Swarm plugin will need to be upgraded to 3.5 or above (currently 3.6). The client plugin should be upgraded in coordination which probably requires bumping https://github.com/openmicroscopy/devslave-c7-docker/blob/master/Dockerfile#L28 (or rebuilding the image with a different argument if possible).
15:19:31 bash: /opt/multi-config.sh: No such file or directory
Either git remote prune origin or git fetch --prune should be used so that if --shallow is removed from OMERO-push, the OMERO-build step will not fail on trying to checkout scripts.
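A self-contained demonstration of why the prune matters, using a throwaway repository (all paths and names are hypothetical): a remote-tracking branch whose upstream has been deleted lingers until --prune removes it.

```shell
# Show that `git fetch --prune` removes remote-tracking branches whose
# upstream branch has been deleted on the remote.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare upstream.git
git clone -q upstream.git work 2>/dev/null
cd work
git config user.email devspace@example.com   # throwaway identity
git config user.name devspace
git commit -q --allow-empty -m init
git push -q origin HEAD:main HEAD:stale      # publish two branches
git fetch -q origin                          # origin/stale is now tracked
git push -q origin --delete stale            # branch removed upstream
git fetch -q --prune origin                  # prune the dangling ref
git branch -r | grep -q 'origin/stale' && result=present || result=pruned
echo "$result"
```

Without the --prune flag in the final fetch, origin/stale would still be listed even though the upstream branch is gone.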
@sbesson just realized that cloning develop is not a good idea, as changes in omero-install can break the build. This should be handled by a build arg, and by default it should clone the latest tag.
Nginx has to load multiple nginx configs to test multiple web instances. This is possible by adding a custom hostname (for localhost) to extra_hosts and using proxy_pass to that host. Each web config needs to have a different virtual server name.
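A hedged sketch of what one such virtual server could look like (the hostname web1 and the 4080 port are assumptions based on the configuration elsewhere in this document):

```nginx
# One virtual server per web container; web1 resolves via extra_hosts/links.
server {
    listen 80;
    server_name web1.devspace.local;   # must differ per web config
    location / {
        proxy_pass http://web1:4080;   # OMERO.web wsgi-tcp port (assumed)
    }
}
```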
On Unix, if the UID of the owner of the devspace differs from the UID in the container (1000), this has to be manually adjusted by:
+RUN usermod -u 1234 omero
compose v2 allows passing build args
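A sketch of how that could look (service name and build context are hypothetical): the Dockerfile would declare ARG USER_ID and run usermod with it, and compose passes the host value through:

```yaml
# docker-compose.yml fragment (compose file version 2): pass the host UID as
# a build argument so the image's omero user matches the volume owner.
version: "2"
services:
  slave:
    build:
      context: ./slave
      args:
        USER_ID: "1234"
```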
Realized that the nginx Docker image is currently broken.
FROM centos:centos7
MAINTAINER OME
RUN yum -y localinstall http://nginx.org/packages/rhel/7/noarch/RPMS/nginx-release-rhel-7-0.el7.ngx.noarch.rpm \
&& yum clean all
RUN yum -y install nginx \
&& yum clean all
RUN mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.disabled
works. However, the following fails:
FROM centos:centos7
MAINTAINER OME
RUN yum -y install epel-release
RUN yum -y localinstall http://nginx.org/packages/rhel/7/noarch/RPMS/nginx-release-rhel-7-0.el7.ngx.noarch.rpm \
&& yum clean all
RUN yum -y install nginx \
&& yum clean all
RUN mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.disabled
Additional investigation might be necessary to determine whether this is related to a package upgrade or hides another, deeper issue.
Minimally to improve the build time, but also to simplify the overall repo, Dockerfiles should be migrated to their own repos:
If this works it'd be possible to supply the username and token as an environment variable instead of messing with ssh keys. It's also easy to revoke tokens if necessary.
A user coming to a devspace must currently know a number of URLs (possibly with varying ports) that can be confusing. A single landing page could be generated with the necessary information.
Looks like the base image is not starting ssh-agent, which causes issues with the SSH key and git push.
ssh-add ~/.ssh/id_gh_rsa
Could not open a connection to your authentication agent.
This causes job failures because it asks for a passphrase:
bash-4.2$ scc merge --repository-config=/home/omero/workspace/BIOFORMATS-push/bio-formats-build/scripts/repositories.yml --no-ask --reset -S success-only --update-gitmodules --push master_merge_trigger master -v
2019-01-28 10:11:21,735 [ scc.config] DEBUG Found github.token
2019-01-28 10:11:21,738 [urllib3.conn] DEBUG Starting new HTTPS connection (1): api.github.com:443
2019-01-28 10:11:22,169 [urllib3.conn] DEBUG https://api.github.com:443 "GET /user HTTP/1.1" 200 None
2019-01-28 10:11:22,307 [urllib3.conn] DEBUG https://api.github.com:443 "GET /user HTTP/1.1" 200 None
2019-01-28 10:11:22,416 [urllib3.conn] DEBUG https://api.github.com:443 "GET /rate_limit HTTP/1.1" 200 None
...
2019-01-28 10:12:41,876 [ scc.git] DEBUG Pushing branch HEAD:refs/heads/master_merge_trigger to [email protected]:olatarkowska/bio-formats-documentation.git...
2019-01-28 10:12:41,876 [ scc.git] DEBUG cd /home/omero/workspace/BIOFORMATS-push/bio-formats-build/bio-formats-documentation
2019-01-28 10:12:41,876 [ scc.git] DEBUG Calling 'git push -f [email protected]:olatarkowska/bio-formats-documentation.git HEAD:refs/heads/master_merge_trigger'
Enter passphrase for key '/home/omero/.ssh/id_gh_rsa':
This hasn't been the case before
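A sketch of a pre-push step that would avoid the prompt: make sure an agent is running before ssh-add is called (the key path is the one from the log above; whether this belongs in the image entrypoint or the job script is an open choice):

```shell
# Start an ssh-agent if one is not already available, so that ssh-add and
# subsequent `git push` calls are not prompted interactively.
if [ -z "${SSH_AUTH_SOCK:-}" ]; then
    eval "$(ssh-agent -s)" > /dev/null
fi
# ssh-add ~/.ssh/id_gh_rsa   # would now reach a live agent
[ -S "$SSH_AUTH_SOCK" ] && result=agent-up
echo "$result"
```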
Up to devspace 0.6.0, the Bio-Formats branch of devspace is limited to the following CI pipeline:
The main limitation of this pipeline compared to the reference jobs in https://ci.openmicroscopy.org/view/Bio-Formats is the absence of CI jobs running the automated tests against the data repository.
Adding support for these jobs is becoming increasingly critical as:
The following section describes a series of changes implemented in the next-ci devspace at various levels (Bio-Formats, Jenkins, configuration, data repository) to address the limitations described above.
A top-level Dockerfile has been introduced in the Bio-Formats source code repository for the integration branch (currently limited to ant jars tools) - see ome/bioformats#3013
The Bio-Formats CI queue has been modified in the next-ci devspace to run the following:
- the BIOFORMATS-image job builds a Docker image on a docker node from the integration branch created by BIOFORMATS-push
- a new BIOFORMATS-test-repo pipeline job has been introduced which performs the following steps:
  - BIOFORMATS-test-repo jobs with the format folder name as a parameter
- the BIOFORMATS-test-folder job runs the automated tests in test-suite by running the Docker image and mounting the paths to the format data and configuration folders, i.e.
$ docker run -v /path/to/data/curated/<format>:/opt/data -v /path/to/configuration/curated/<format>:/opt/config <container_name>
The proposal above modifies the default expectation in terms of testing, since every curated/<format> folder needs to be testable in complete isolation.
A few changes were necessary at the data repository layout to:
- curated/unsupported folder (see private config PR)
- openjdk based: all the JDK-dependent pixels tests need to be reviewed (private config PR)
Additional changes were required at the configuration or testing level:
- test-automated entrypoint vs creating a wrapper script with different runtime targets (e.g. tools to run the command line tools, test for the automated tests)
Once the above are resolved and this has been agreed, the next steps should be to:
- next-ci as a PR against this repository
- devspace: 0.7.0? beginning of 2018?
NB: issue description edited by @sbesson
Based on discussions in Slack the robot tests don't work in devspace.
See sonatype/docker-nexus3@bd57cee
Recreating the nexus service (likely after repulling) led to issues when trying to authenticate to populate the repository. While the password was previously assumed to be admin123 (and hardcoded in the script), I retrieved it by shelling into the container and reading the content of admin.password.
Either we can include some logic to read this password and pass it to the initialization script, or we could try to override it, possibly by creating a custom admin.password file under nexus-data.
Remove the usage of the deprecated group.
All containers should use external storage (especially PG) so that on restarting (even deleting & recreating) the containers, the pre-stop state returns.
cc: @pwalczysko
Currently web is not started in the slave (see the OMERO-start job). This would be a first step towards enabling the robot job.
- decide where to run it (the nginx or the slave container)
- run web start in the job with the appropriate configuration
Note: a later refactoring may move the SERVICE_NAME of jenkins in favor of having nginx be the main point of entry.
cc: @aleksandra-tarkowska
see https://github.com/docker-library/postgres/issues/580
A single job could loop through all repositories and push them to snoopy.
See: gh-72
After the successful changes in ome/scc#215, jobs could apply filters on startup. This should be configurable (ideally from the playbook).
When I exec into the server container, I cannot see the /uod/idr... mount from there, i.e. I cannot import in-place.
[root@idr1-slot2 pwalczysko]# docker exec -it merge-ci-omero-1 bash
bash-5.1$ ls /
afs bin dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var
When I try to perform the workflow from idr1-slot2 directly, I get:
omero import --transfer=ln_s -T Dataset:name:CDK5RAP2-C /uod/idr/filesets/idr0021-lawo-pericentriolarmaterial/Raw-files/CDK5RAP2-C/
....
Check failed: /home/omero/omero-server-data/ManagedRepository/user-3_52/Blitz-0-Ice.ThreadPool.Server-17/2024-03/21/11-33-16.477/Cep215_PCNT_gtub_20110506_Fri-1726_0_SIR_PRJ.dv cannot be modified locally!
You likely do not have access to the ManagedRepository for in-place import.
Aborting...
---------------------------------------------------
at ome.formats.importer.transfers.AbstractExecFileTransfer.failLocationCheck(AbstractExecFileTransfer.java:150)
at ome.formats.importer.transfers.AbstractExecFileTransfer.checkLocation(AbstractExecFileTransfer.java:132)
at ome.formats.importer.transfers.AbstractExecFileTransfer.transfer(AbstractExecFileTransfer.java:63)
at ome.formats.importer.ImportLibrary.uploadFile(ImportLibrary.java:531)
at ome.formats.importer.ImportLibrary$3.call(ImportLibrary.java:634)
at ome.formats.importer.ImportLibrary$3.call(ImportLibrary.java:631)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
2024-03-21 11:33:17,368 6750 [2-thread-1] ERROR ome.formats.importer.ImportLibrary - Error on import
java.lang.RuntimeException:
---------------------------------------------------
Check failed: /home/omero/omero-server-data/ManagedRepository/user-3_52/Blitz-0-Ice.ThreadPool.Server-17/2024-03/21/11-33-16.477/Cep215_PCNT_gtub_20110506_Fri-1726_0_SIR_PRJ.dv cannot be modified locally!
You likely do not have access to the ManagedRepository for in-place import.
Aborting...
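For the data to be visible inside the server container, the bind mount has to be declared on that service. A hedged compose fragment (the service name is inferred from the merge-ci-omero-1 container above; the read-only option is an assumption):

```yaml
# Bind-mount the IDR filesets into the omero service so that in-place
# (--transfer=ln_s) imports can resolve the same paths inside the container.
services:
  omero:
    volumes:
      - /uod/idr:/uod/idr:ro
```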
Makes sense & +1 for keeping things in sync. Can certainly see an upcoming round of bumping to 12 if not 13 across the board.
Originally posted by @joshmoore in #179 (comment)
From https://docs.openmicroscopy.org/omero/5.6.2/sysadmins/version-requirements.html#postgresql, bumping at least to PSQL 11 would be in line with our recommended version. No objection to testing with 12 and/or 13 in the CI environment.
("error in call","Traceback (most recent call last):
File "/home/omero/workspace/OMERO-web/OMERO.web/lib/python/omeroweb/webgateway/views.py", line 1218, in wrap
rv = f(request, *args, **kwargs)
File "/home/omero/workspace/OMERO-web/OMERO.web/lib/python/omeroweb/webgateway/views.py", line 2521, in object_table_query
tableData = _table_query(request, fileId, conn, **kwargs)
File "/home/omero/workspace/OMERO-web/OMERO.web/lib/python/omeroweb/webgateway/views.py", line 2450, in _table_query
cols = t.getHeaders()
File "/home/omero/workspace/OMERO-web/OMERO.web/lib/python/omero_Tables_ice.py", line 1029, in getHeaders
return _M_omero.grid.Table._op_getHeaders.invoke(self, ((), _ctx))
ApiUsageException: exception ::omero::ApiUsageException
{
serverStackTrace =
serverExceptionClass =
message = Not yet initialized
}
")
is https://github.com/ome/omero-install/blob/v5.2.3/linux/step01_centos7_deps.sh not enough?
devspace should have a self-hosted git repository to support running without GitHub. All jobs could push/pull from this internal git repo, and an optional separate job would push this to GitHub if required. This removes the need for a secret SSH key or github token.
In order to prevent unwanted commits as in #3 (comment), we could consider using more configuration templating as in ome/openmicroscopy#4093 to generate new files rather than modifying them in situ.
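The templating idea can be sketched in a few lines (the template placeholder syntax and the property are illustrative, not what ome/openmicroscopy#4093 actually uses): render a fresh config file from a tracked template instead of editing a generated file in place.

```shell
# Render a config file from a template; the tracked template never changes,
# so no unwanted commits are produced by the deployment step.
tmpl=$(mktemp)
printf 'omero.web.prefix=@WEB_PREFIX@\n' > "$tmpl"
rendered=$(sed 's|@WEB_PREFIX@|/web|' "$tmpl")
echo "$rendered"
```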