
Image Builder

Please see our Book for more in-depth documentation.

What is Image Builder?

Image Builder is a tool for building Kubernetes virtual machine images across multiple infrastructure providers. The resulting VM images are specifically intended to be used with Cluster API but should be suitable for other setups that rely on Kubeadm.

Useful links

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project at:

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

Goals

  • To build images for Kubernetes-conformant clusters in a consistent way across infrastructures, providers, and business needs.
    • To install all software, containers, and configuration needed by downstream tools such as Cluster API providers, to enable them to pass conformance tests.
    • To support end users' requirements to customize images for their business needs.
  • To provide assurances in the binaries and configuration in images for purposes of security auditing and operational stability.
    • Allow introspection of artifacts, software versions, and configurations in a given image.
    • Support repeatable build processes where the same inputs of requested install versions result in the same installed binaries.
  • To ensure that the creation of images is performed via well-defined phases, so that users can choose only the specific phases they need.

Non-Goals

  • To provide upgrade or downgrade semantics.
  • To provide guarantees that the software installed provides a fully functional system.
  • To prescribe the hardware architecture of the build system.

Roadmap

  • Centralize the various image builders into this repository
  • Create a versioning policy
  • Automate the building of images
  • Publish images off master to facilitate E2E testing and the removal of k/k/cluster
  • Create a bill of materials for each image and allow it to be used to recreate an image
  • Automate the testing of images for kubernetes node conformance
  • Automate the security scanning of images for CVEs
  • Publish Demo / POC images to coincide with each new patch version of Kubernetes to facilitate Cluster API adoption
  • Automate the periodic scanning of images for new CVEs
  • (Stretch Goal) Publish production-ready images with a clear support contract for handling CVEs. Due to the high level of commitment and effort required to support production images, this will only be done once all the pre-conditions are met, including:
    • Create an on-call rotation with sufficient volunteers to provide 24/7/365 coverage
    • Ensure all licensing requirements are met

image-builder's People

Contributors

ajitak, averagemarcus, codenrhoden, cornelius-keller, cpanato, detiber, eleanorrigby, figo, hrak, invidian, jepio, jessicaochen, johananl, jsturtevant, justinsb, k8s-ci-robot, karan, kkeshavamurthy, krousey, luxas, mboersma, medinatiger, mikedanese, perithompson, pipejakob, randomvariable, roberthbailey, sanikagawhane, sriramandev, voor


image-builder's Issues

Should we include provider versions in image versions?

This is mostly a follow-up issue based on a previous discussion w/ @akutz @codenrhoden @ncdc @detiber @dennisme #64 (comment)

We recently added CSI support in CAPV which required an update to our machine image because the vSphere CSI driver requires a newer VMX version than what we were using previously. Since machine images are versioned based on Kubernetes versions, we're put in an awkward position for a couple reasons:

  • If we update all machine images with the VMX update, we may break older versions of CAPV pointing at older images that were just updated.
  • If only new images get the VMX update, new CAPV versions can't deploy older versions of Kubernetes AND older CAPV versions can't use the newer versions of Kubernetes.

Ideally, if we also versioned images based on the version of the provider (e.g. for CAPV, ubuntu-1804-kube-v1.15.3-capv-v0.5.2), any Kubernetes version would work on the specified version of CAPV. However, this leaves some open questions:

  • if you upgrade CAPV, do you have to upgrade the machine image? For (semantic) minor versions, probably. But probably not required for patch versions?
  • what rules do we follow for updating the machine image on the same Kubernetes version, but a different provider version? Is it any different from updating machine images today?
  • what needs to change in our release processes to make this happen? Is it worth the effort?

CAPI images: simplify steps required to use tool

Some general feedback received, especially when building OVAs, is that a large number of steps are involved in getting an environment that can build an image.

Improvements can be made here, around documentation and tooling.

A make check step might be helpful for verifying that prereqs are in place.

The README at packer/ova/README is a bit out of date vs current capabilities as well.

Support extra repos for Photon build

I just noticed in the code that the Photon build does not use:

  • disable_public_repos
  • extra_repos
  • reenable_public_repos
  • remove_extra_repos

It should, as these are necessary to build images from internal mirrors/repositories.

/assign
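Once the Photon build honors these variables, usage might look like the other builds: a var file passed via PACKER_VAR_FILES. The variable names below come from this issue; the file name, repo path, and make target are illustrative assumptions, not verified against the repo.

```shell
# Hypothetical Packer var file pointing the build at an internal mirror.
cat > internal-repos.json <<'EOF'
{
  "disable_public_repos": "true",
  "extra_repos": "/etc/yum.repos.d/internal.repo",
  "remove_extra_repos": "true",
  "reenable_public_repos": "true"
}
EOF
# Then (assumed target name):
#   PACKER_VAR_FILES=internal-repos.json make build-ova-photon-3
```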

Should default images include NFS utilities?

The current images install support for making NFS mounts. As brought up in #64, this does leave open ports that some users may find undesirable. Rather than explicitly disabling those ports, should it instead be left to a user to customize their image by adding NFS support if they want it?

Build base images from community CIS Benchmark Images

To satisfy certain compliance controls, the CIS benchmarks are required. It would be great if the kops AMIs were built from the community CIS Debian images. This would give a certain level of free system hardening out of the box.

For example:
CIS Debian Linux 9 Benchmark v1.0.0.5 - Level 1: ami-003c5a72f53c93f63

Template error while templating string: no filter named 'search'.

My aim is to build a CAPI OVA image on my Linux system.
On running make build-ova-ubuntu1804, I get the error Build 'ubuntu-1804' errored: Error executing Ansible: Non-zero exit status: exit status 2

On inspecting the provisioning steps, there seems to be an error with file images/capi/ansible/roles/providers/tasks/main.yml

OUTPUT

Last few lines of output were like this.

ubuntu-1804: TASK [kubernetes : Install kubelet service] ************************************
ubuntu-1804: skipping: [default]
ubuntu-1804:
ubuntu-1804: TASK [kubernetes : Ensure that the kubelet is running] *************************
ubuntu-1804: ok: [default]
ubuntu-1804:
ubuntu-1804: TASK [kubernetes : Create the Kubernetes version file] *************************
ubuntu-1804: changed: [default]
ubuntu-1804:
ubuntu-1804: TASK [kubernetes : Create kubeadm config file] *********************************
ubuntu-1804: changed: [default]
ubuntu-1804:
ubuntu-1804: TASK [kubernetes : Kubeadm pull images] ****************************************
ubuntu-1804: changed: [default]
ubuntu-1804:
ubuntu-1804: TASK [kubernetes : delete kubeadm config] **************************************
ubuntu-1804: changed: [default]
ubuntu-1804:
ubuntu-1804: TASK [providers : include_tasks] ***********************************************
ubuntu-1804: skipping: [default]
ubuntu-1804:
ubuntu-1804: TASK [providers : include_tasks] ***********************************************
ubuntu-1804: fatal: [default]: FAILED! => {"msg": "The conditional check 'packer_builder_type | search('vmware')' failed. The error was: template error while templating string: no filter named 'search'. String: {% if packer_builder_type | search('vmware') %} True {% else %} False {% endif %}\n\nThe error appears to be in '/home/necuser/go/src/sigs.k8s.io/image-builder/images/capi/ansible/roles/providers/tasks/main.yml': line 18, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- include_tasks: vmware.yml\n ^ here\n"}
ubuntu-1804:
ubuntu-1804: PLAY RECAP *********************************************************************
ubuntu-1804: default : ok=25 changed=18 unreachable=0 failed=1 skipped=36 rescued=0 ignored=0
ubuntu-1804:
==> ubuntu-1804: Provisioning step had errors: Running the cleanup provisioner, if present...
==> ubuntu-1804: Stopping virtual machine...
==> ubuntu-1804: Deleting output directory...
Build 'ubuntu-1804' errored: Error executing Ansible: Non-zero exit status: exit status 2
==> Some builds didn't complete successfully and had errors:
--> ubuntu-1804: Error executing Ansible: Non-zero exit status: exit status 2
==> Builds finished but no artifacts were created.
Makefile:44: recipe for target 'build-ova-ubuntu-1804' failed
make: *** [build-ova-ubuntu-1804] Error 1
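The failing conditional uses the Jinja2 `search` filter, which newer Ansible/Jinja2 releases no longer ship; the equivalent `search` test still works. A sketch of the likely fix in images/capi/ansible/roles/providers/tasks/main.yml (untested here):

```yaml
- include_tasks: vmware.yml
  when: packer_builder_type is search('vmware')
```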

Docs and getting started

Holy cow, we need docs and a getting-started guide, and we should prune things we aren't using/supporting into a separate area.

capi image-builder broken

Looks like this is due to PR #68 being merged,

particularly

  set_fact:
    ecr: "{{ kubernetes_container_registry is regex('^[0-9]{12}.dkr.ecr.[^.]+.amazonaws.com\/') }}"

error log:

    ubuntu-1804: ERROR! Syntax Error while loading YAML.
    ubuntu-1804:   found unknown escape character
    ubuntu-1804:
    ubuntu-1804: The error appears to be in '/home/imgbuilder-ova/workspace/run-image-builder/62/image-builder/images/capi/ansible/roles/kubernetes/tasks/main.yml': line 40, column 92, but may
    ubuntu-1804: be elsewhere in the file depending on the exact syntax problem.
    ubuntu-1804:
    ubuntu-1804: The offending line appears to be:
    ubuntu-1804:
    ubuntu-1804:   set_fact:
    ubuntu-1804:     ecr: "{{ kubernetes_container_registry is regex('^[0-9]{12}.dkr.ecr.[^.]+.amazonaws.com\/') }}"
    ubuntu-1804:                                                                                            ^ here
    ubuntu-1804: We could be wrong, but this one looks like it might be an issue with
    ubuntu-1804: missing quotes. Always quote template expression brackets when they
    ubuntu-1804: start a value. For instance:

cc @aaroniscode
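The YAML loader rejects `\/` because it is not a valid escape sequence inside a double-quoted scalar, and a `/` needs no escaping in a Python regex anyway. Dropping the backslash should fix the load error (a sketch, not a tested patch):

```yaml
set_fact:
  ecr: "{{ kubernetes_container_registry is regex('^[0-9]{12}.dkr.ecr.[^.]+.amazonaws.com/') }}"
```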

Protect containerd processes from getting oomkilled

The Kubernetes kubelet's dockershim sets oom_score_adj for the docker processes to -999 to protect them from getting killed:
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/cm/container_manager_linux.go#L774-L796

However, for other CRIs like containerd, the kubelet does not know the names or PIDs of the runtime processes and hence does NOT set the oom_score_adj:
kubernetes/kubernetes#86420

The guidance from the containerd folks is for packagers/admins to do this themselves:
containerd/containerd#3901

Since we ship containerd by default and we install containerd in all our images, we should set this ourselves by default in image-builder itself.

One pattern for setting this with Ansible (found via a quick Google search, as I don't know much about Ansible, so there may be other patterns):
https://chuckyz.wordpress.com/2016/12/28/centos-7-disabling-oomkiller-for-a-process/

Let's please do this!
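Since the kubelet won't do this for containerd, one way to apply the protection by default is a systemd drop-in baked into the image. This is a sketch; the unit name and drop-in path assume containerd runs as a standard systemd service:

```ini
# /etc/systemd/system/containerd.service.d/10-oom-score.conf
[Service]
OOMScoreAdjust=-999
```

After installing the drop-in, the build would run systemctl daemon-reload so the setting takes effect on first boot.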

CAPI OVA: support starting from existing VMX

The existing build procedure for the OVA image always starts with an ISO. Some consumers of the tool may want to start with an existing image that just needs the K8s components added to it.

This would potentially allow a user to use the vmware-iso builder to create an OS image from scratch, which could later be used by the vmware-vmx builder for additional customization.

To facilitate this, the OVA build could be broken into two stages. One to generate the base/OS image, and the second to apply the K8s artifacts.

Lock default user for OVAs

The CAPV OVAs built with the image builder have always relied on cloud-init to lock the default accounts. This worked fine in CAPV v1a1 when we created our own cloud-init userdata and used the ssh_authorized_keys option to add a key to the default user. That’s not an option in CAPBK -- you must explicitly add users. Well, it turns out if you explicitly specify a user in the KubeadmConfig resource, the default account is removed from the users list in cloud-init (by cloud-init, not by virtue of the bootstrapper) and thus the default rules aren’t applied, like locking it down.

So images that were locked down when deployed with CAPV v1a1 have their default accounts (ubuntu, centos, photon) enabled for plain-text password authentication when deployed with CAPV v1a2. The same images, just different cloud-init userdata.

There were several solutions suggested:

  • Having CABPK always add the default user to the top of the users list in the cloud-init data, but @detiber was rightly worried about possible side-effects.
  • I suggested always setting /etc/ssh/sshd_config to disable clear-text password access, but @detiber again pointed out that that is an option when adding users via the KubeadmConfig resource.

This isn’t an issue for the AMIs as that build process uses a key injected into the image. @ncdc suggested the images should likely not rely on cloud-init to lock down accounts. I’m not in total agreement with that, but I definitely see his point. And because this isn’t an issue for the AMIs, I didn’t feel like pushing for @chuckha or @detiber to resolve the issue by adding an explicit - default entry to the users: key in the cloud-init userdata generated by CAPBK.

All of the above considered, I was convinced, as @ncdc advised, that the images should have accounts locked by the build process, not cloud-init.

@figo and I are going to modify the OVA packer config’s shutdown command to resemble the following, to ensure the account used by Ansible and Packer to build the OVA is locked at the end of the build process:

"shutdown_command": "echo '{{user `ssh_password`}}' | sudo -S -E sh -c 'usermod -L {{user `ssh_username`}} && {{user `shutdown_command`}}'",

This will ensure the images produced have their default accounts locked ahead of time.
The unfortunate thing is that this involves rebuilding and republishing all of the active OVAs. I already filed kubernetes-sigs/cluster-api-provider-vsphere#644 to add a provision to CAPV to indicate the published OVAs are not supported or production ready.

If I missed anything, @ncdc , @figo, @andrewsykim, @chuckha, or @detiber can probably fill in the gaps. Many thanks to all of them for helping out with this issue on the day before a pretty big milestone for CAPV.

capi image: docker image support

Some users want to use Docker as the container runtime.

I have tried using geerlingguy.docker to build a new image (AMI only) with Docker as the Kubernetes runtime.

It's working fine.

Can we consider supporting a Docker runtime option for the CAPI images?

$ kubectl  --kubeconfig guohao4  get node -o wide --all-namespaces
NAME                                       STATUS   ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
ip-10-0-0-114.us-east-2.compute.internal   Ready    <none>   3m13s   v1.15.4   10.0.0.114    <none>        Ubuntu 18.04.3 LTS   4.15.0-1051-aws   docker://19.3.3
ip-10-0-0-144.us-east-2.compute.internal   Ready    master   7m38s   v1.15.4   10.0.0.144    <none>        Ubuntu 18.04.3 LTS   4.15.0-1051-aws   docker://19.3.3
ip-10-0-0-217.us-east-2.compute.internal   Ready    master   5m28s   v1.15.4   10.0.0.217    <none>        Ubuntu 18.04.3 LTS   4.15.0-1051-aws   docker://19.3.3
ip-10-0-0-64.us-east-2.compute.internal    Ready    master   5m44s   v1.15.4   10.0.0.64     <none>        Ubuntu 18.04.3 LTS   4.15.0-1051-aws   docker://19.3.3

Add support for building CAPI images behind an HTTP proxy

As a CAPI user, I want to be able to build my own images behind a corporate HTTP proxy.

At first glance, the following changes are needed:

  • Flow http proxy vars into ansible
  • Manually download goss
  • Disable the automatic download of goss in the goss plugin

Make sure user-provided var files take precedence

When using a command of the form PACKER_VAR_FILES=... make build..., the var files provided should take precedence over what is there by default. This should be a pretty easy fix, just need to reorder how we list the files.

From the Packer documentation:

Combining the -var and -var-file flags together also works how you'd expect. Variables set later in the command override variables set earlier.

Right now, we are listing any user provided files first, but we really want to list them last.
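The intended ordering can be sketched as below: default var files first, user-provided PACKER_VAR_FILES last, so later -var-file flags override earlier ones per Packer's precedence rules. File names here are illustrative, not the repo's actual defaults.

```shell
# Build the -var-file argument list with defaults first, user files last.
PACKER_VAR_FILES="user-overrides.json"
DEFAULT_VAR_FILES="base.json kubernetes.json"
args=""
for f in $DEFAULT_VAR_FILES $PACKER_VAR_FILES; do
  args="$args -var-file=$f"
done
# The user's values now win for any variable defined in both.
echo "packer build$args packer.json"
```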

Formalize process for rev'ing versions included in images

In the last community meeting, the topic of how and when we bump versions of things included in the image was brought up. Namely, things like containerd and CNI. We should formalize how and when these packages can be bumped, rather than it being at the whim of someone opening a PR.

Verify minimum version in dependency scripts

In the dependency scripts (images/capi/hack/ensure-*), we should check that, if a required tool is already installed, its version meets the required minimum.
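A version comparison in POSIX shell could use sort -V; this is a sketch for the ensure-* scripts, and the function name and the Ansible example values are assumptions, not existing code:

```shell
# True if $1 >= $2 under version-number ordering (GNU sort -V).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

installed="2.8.5"   # e.g. parsed from `ansible --version`
required="2.8.0"
if version_ge "$installed" "$required"; then
  echo "ansible $installed meets minimum $required"
else
  echo "error: ansible $installed < required $required" >&2
  exit 1
fi
```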

Docker is enabled on boot

At the moment, images have Docker installed and enabled by default. This is not a good idea in cases where a different Docker version will be installed, or where some other container runtime like containerd will be used.

It would be a good idea to either:

  1. Disable Docker auto-start on boot (it will be enabled by Kops later)
  2. Just copy the Docker packages into the cache dir and expect Kops to install what it needs.

/cc @justinsb
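Option 1 could be a one-task change in the provisioning playbooks. A sketch, not existing code in the repo:

```yaml
- name: Disable Docker auto-start on boot (kops enables it later)
  systemd:
    name: docker
    enabled: false
    state: stopped
```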

CAPI ova: need ability to customize OVF metadata

Downstream consumers of the image-builder for CAPI that build OVAs may need to customize the metadata included in the OVF. It's time to take a look at what is currently in the metadata and devise a generic way to customize it.

This is all done in the Packer post-processing step.

/assign

Build/deploy cloud-init 19.1/2 for CentOS 7

This issue tracks the work necessary to automate the build of cloud-init 19.1/2 as an RPM for CentOS 7. More information on why this is necessary may be found at kubernetes-retired/cluster-api-bootstrap-provider-kubeadm#41.

Neither pkgs.org nor rpmfind listed cloud-init 18.4+ for CentOS or RHEL. There are RPMs for other OSs, but they link against python3 and other dependencies unavailable to even the CentOS EPEL repository.

There's a branch of the CAPV project that includes a Dockerfile for building the cloud-init RPM for CentOS 7. More information may be found at kubernetes-retired/cluster-api-bootstrap-provider-kubeadm#41 (comment).

However, during a recent call with Tim St. Clair, he mentioned possibly using Copr to automate the build and deployment of this package. As @codenrhoden has experience with Copr, this issue is being assigned to him.

/assign @codenrhoden

CAPV images for Ubuntu have unattended upgrades enabled

This must happen by default with Ubuntu, but the current images have the unattended-upgrades package installed and enabled. Furthermore, there are systemd units that are trying to do a daily apt-get update. These units are:

  • apt-daily.{service,timer}
  • apt-daily-upgrade.{service,timer}

Since the images are supposed to be immutable, we really don't want any packages changing automatically, and we don't need any daily tasks to fetch new package lists. This should all be cleaned up.

/assign

ctr cannot deal with a container image tagged with `vmware/kube-apiserver:###`

Impact:
When building an OVA with kubernetes_source_type = "http".

Steps:

  1. Set kubernetes_source_type = "http" and kubernetes_cni_source_type = "http".
  2. Start the build. In the process, it will call ctr images import to load the image tar file from the HTTP URL. If the image tar file is tagged as vmware/kube-apiserver, which is not a typical server address, ctr adds a docker.io prefix, and the image shows up as docker.io/vmware/kube-apiserver.

Expected behavior:
ctr should respect the tag contained in the image tar file and load the image as vmware/kube-apiserver.

Note: the vmware name is shown as an example.

images/capi/ova-ubuntu: cloud-init does not work in OpenStack

Hi,

I'm using the CAPI Ubuntu image in OpenStack. My problem is that cloud-init doesn't detect OpenStack correctly (my assumption is that the OVA image is meant to also be usable in OpenStack).

I'm building and deploying the image like this:

cd images/capi
make build-ova-ubuntu-1804

qemu-img convert -f vmdk -O qcow2 -c ./output/ubuntu-1804-kube-v1.16.2/ubuntu-1804.ova.vmdk ./output/ubuntu-1804-kube-v1.16.2/ubuntu-1804.ova.qcow2

openstack image create --disk-format qcow2 \
  --private \
  --container-format bare \
  --file ./output/ubuntu-1804-kube-v1.16.2/ubuntu-1804.ova.qcow2 ubuntu-1804-kube-v1.16.2

Then I'm just using Cluster API OpenStack to create a cluster.

I would expect cloud-init to detect OpenStack as the datasource and retrieve the metadata. Instead, it only tries to detect the vmware-guestinfo datasource.


A simple fix is just not installing the vmware-guestinfo datasource: https://github.com/sbueringer/image-builder/pull/2/files

I'm not sure what the OVA image is meant to be used for (local development?). But the way it's currently implemented seems to make it impossible to use on OpenStack, because the datasources are overwritten in the vmware-guestinfo install script.
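If the guestinfo datasource stays installed, a less invasive option might be to keep OpenStack in the datasource list rather than pinning it to guestinfo only. A sketch; the file name and list contents are illustrative, not what the install script currently writes:

```yaml
# /etc/cloud/cloud.cfg.d/90_datasources.cfg
datasource_list: [ OpenStack, VMwareGuestInfo, NoCloud, None ]
```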

logs for kubeadm cloud-init module

/kind bug

What steps did you take and what happened:
[A clear and concise description of what the bug is.]
Have a failed kubeadm join on a node

What did you expect to happen:
Expect to see kubeadm output in /var/log/cloud-init-output.log

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
The module doesn't capture output at the moment - should stream results out, redacting anything sensitive.

Environment:

  • Cluster-api-provider-aws version: capa-ami-ubuntu-18.04-1.15.1-00-1563545503
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

Install a newer version of Ansible

Right now our scripts install Ansible version 2.8.0. We should look at using a later version.

Especially for Ansible, I think doing this requires more automated testing, because we've seen different behavior between different versions. So we have to be careful about setting our minimum version (see #143) and verifying that an existing installation works.

Create gitbook and publish to sigs.k8s.io

As image-builder is maturing, we need to have a coherent set of docs. Since this is a k8s-sigs sponsored project, we can already have a doc page at image-builder.sigs.k8s.io. If we use gitbook and netlify, we can publish the gitbook to that URL automatically, as is done in other projects.

This issue tracks setting up a /doc folder and the initial scaffolding to publish said docs.

Support running image-builder in RHEL 7

The Makefile in images/capi/packer/ami and images/capi/packer/gce uses shasum to verify the downloaded packer plugins.

In some distributions, such as RHEL 7, the tool is named sha256sum rather than shasum. I think we can update the Makefile to detect whether shasum or sha256sum is available on the machine. If neither is there, exit with an error.
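The detection described above can be sketched in shell (variable name and error wording are assumptions, not the repo's actual Makefile code):

```shell
# Pick whichever checksum tool exists: RHEL 7 ships sha256sum,
# while macOS and Debian/Ubuntu ship shasum. Fail clearly if neither.
if command -v shasum >/dev/null 2>&1; then
  SHASUM="shasum -a 256"
elif command -v sha256sum >/dev/null 2>&1; then
  SHASUM="sha256sum"
else
  echo "error: neither shasum nor sha256sum found" >&2
  exit 1
fi
echo "using: $SHASUM"
```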

ova-photon-3 build fails with ansible cloud-init-19.1-2.ph3 update subtask

Photon 3 has been updated with cloud-init-19.1-3.ph3.noarch.rpm

The Ansible task that downloads and rpm-installs cloud-init-19.1-2.ph3 now fails the playbook and is no longer required.

    photon-3: TASK [providers : Install cloud-init packages] *********************************
    photon-3: skipping: [default]
    photon-3:
    photon-3: TASK [providers : Install cloud-init packages] *********************************
    photon-3: skipping: [default]
    photon-3:
    photon-3: TASK [providers : Install cloud-init packages] *********************************
    photon-3: changed: [default]
    photon-3:
    photon-3: TASK [providers : copy cloud-init 19.1-2 rpm] **********************************
    photon-3: [WARNING]: Consider using the get_url or uri module rather than running 'curl'.
    photon-3: changed: [default]
    photon-3: If you need to use command because get_url or uri is insufficient you can add
    photon-3: 'warn: false' to this command task or set 'command_warnings=False' in
    photon-3: ansible.cfg to get rid of this message.
    photon-3:
    photon-3: TASK [providers : Update cloud-init to 19.1-2.ph3] *****************************
    photon-3: [WARNING]: Consider using the yum, dnf or zypper module rather than running
    photon-3: 'rpm'.  If you need to use command because yum, dnf or zypper is insufficient
    photon-3: fatal: [default]: FAILED! => {"changed": true, "cmd": ["rpm", "-Uvh", "./cloud-init-19.1-2.ph3.noarch.rpm"], "delta": "0:00:00.048006", "end": "2019-12-03 17:01:39.647078", "msg": "non-zero return code", "rc": 2, "start": "2019-12-03 17:01:39.599072", "stderr": "\tpackage cloud-init-19.1-3.ph3.noarch (which is newer than cloud-init-19.1-2.ph3.noarch) is already installed", "stderr_lines": ["\tpackage cloud-init-19.1-3.ph3.noarch (which is newer than cloud-init-19.1-2.ph3.noarch) is already installed"], "stdout": "Verifying...                          ########################################\nPreparing...                          ########################################", "stdout_lines": ["Verifying...                          ########################################", "Preparing...                          ########################################"]}
    photon-3: you can add 'warn: false' to this command task or set 'command_warnings=False'
    photon-3: in ansible.cfg to get rid of this message.
    photon-3:
    photon-3: PLAY RECAP *********************************************************************
    photon-3: default                    : ok=25   changed=23   unreachable=0    failed=1    skipped=42   rescued=0    ignored=0
    photon-3:
==> photon-3: Provisioning step had errors: Running the cleanup provisioner, if present...
==> photon-3: Stopping virtual machine...
==> photon-3: Deleting output directory...
Build 'photon-3' errored: Error executing Ansible: Non-zero exit status: exit status 2

Standardise on ntpd on cloud-init based distros

Many users will need to configure ntp during the use of cluster api. Cloud-init can do this through a single ntp module, but only when ntpd is set up.

RHEL / CentOS 7 and Amazon Linux 2 default to chronyd instead of ntpd, and Photon to systemd-timesyncd so these would need to be swapped out.
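With ntpd standardized, end users could then drive NTP configuration through cloud-init's single ntp module in their userdata. A sketch; the server names are illustrative:

```yaml
ntp:
  enabled: true
  servers:
    - 0.pool.ntp.org
    - 1.pool.ntp.org
```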

centos OVA capi image can be logged into with root and a password by default

Issue description
The centos-7 CAPI image can be logged into as root with the password configured in the ks.cfg file.

Expected behavior
Login with a password should be disabled by default, for both the custom user (defined in ks.cfg) and root.

Affected images
centos-7 OVA has the problem
ubuntu-1804 OVA does not have the problem
AMI images (Amazon Linux, Ubuntu, CentOS) are yet to be tested

CAPI OVA: Allow custom repos in kickstart/preseed

The current CentOS kickstart and Debian preseed files have hardcoded references to various repos (CentOS mirror, EPEL, etc). Enterprise consumers of this tool may have internal repo mirrors that they want to use instead of reaching out to the internet.

Devise a solution for these kickstart/preseed files to reference different repos instead.
