
pxeless's Introduction

What is PXEless?

It's an automated system install and image-creation tool for situations where provisioning machines via a PXE server is not an option, or is not an option yet. It's ideal for small-scale greenfielding, proofs-of-concept, and general management of on-prem compute infrastructure in a cloud-native way without the cloud.

PXEless is based on covertsh/ubuntu-autoinstall-generator, and generates a customized Ubuntu auto-install ISO. This is accomplished using cloud-init and Ubuntu's Ubiquity installer - specifically the server variant known as Subiquity, which itself wraps Curtin.

PXEless works by:

  1. Downloading the ISO of your choice - a daily build or a release.
  2. Extracting the EFI, MBR, and file system from the ISO.
  3. Adding some kernel command-line parameters.
  4. Adding customised autoinstall and cloud-init configuration files.
  5. Copying arbitrary files to, or running scripts against, the squashfs (optional).
  6. Repacking the data into a new ISO.

The resulting product is a fully-automated Ubuntu installer. This serves as a convenient jumping-off point for configuration-management tooling like Ansible, Puppet, and Chef, or personalization tools like jessebot/onboardme. Please note that while similar in schema, the autoinstall and cloud-init portions of the user-data file do not mix. The user-data key marks the transition from autoinstall to cloud-init syntax, as seen HERE
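As an illustrative sketch of that split (all field values below are placeholders, not a complete working config), an all-in-one user-data file is shaped roughly like this:

```shell
# Sketch of the autoinstall/cloud-init split described above.
# Everything under `autoinstall:` uses autoinstall syntax; everything
# nested under its `user-data:` key uses plain cloud-init syntax.
cat > /tmp/user-data.example <<'EOF'
#cloud-config
autoinstall:
  version: 1
  identity:                 # autoinstall syntax from here...
    hostname: pxeless
    username: vmadmin
    password: "<SHA-512 password hash>"
  user-data:                # ...cloud-init syntax from here on
    package_update: true
    packages:
      - qemu-guest-agent
EOF
```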

Application Flow

Diagram showing the flow of information through the PXEless process: 1. downloading the ISO; 2. extracting the EFI, MBR, and file system from the ISO; 3. adding kernel command-line parameters; 4. adding customised autoinstall and cloud-init configuration files; 5. repacking the data into a new ISO.

Docker Quickstart

It is advised to run PXEless in a Docker container due to its reliance on Linux-only packages.

Skip steps 1 and 2 if you already have a cloud-init file

  1. Clone the repo

    git clone https://github.com/cloudymax/pxeless.git
  2. Change directory to the root of the repo

    cd pxeless
  3. Run in a Docker container:

    • Basic Usage:

      docker run --rm --volume "$(pwd):/data" \
        --user $(id -u):$(id -g) deserializeme/pxeless \
        --all-in-one  \
        --user-data user-data.basic \
        --code-name jammy \
        --use-release-iso 
    • Adding static files to the ISO

      Take note that we do not specify a user here. Adding extra files to the ISO via the -x or --extra-files flag requires root access in order to chroot into the squashfs.

      The contents of the extras directory will be copied to the /media dir of the image's filesystem. The extra files are mounted as /data/<directory> when running in a Docker container because we mount $(pwd) as /data/

      docker run --rm --volume "$(pwd):/data" deserializeme/pxeless \
        --all-in-one \
        --user-data user-data.basic \
        --code-name jammy \
        --use-release-iso \
        --extra-files /data/extras
    • Offline Installation

      You can run an offline installer script to customize the image during the build. Add a bash script to the contents of the extras directory, and pass its filename to image-create using -o or --offline-installer.

         docker run --rm --volume "$(pwd):/data" deserializeme/pxeless \
          --all-in-one \
          --user-data user-data.basic \
          --code-name jammy \
          --use-release-iso \
          --extra-files /data/extras \
          --offline-installer installer-sample.sh
  4. Writing your ISO to a USB drive

    • On macOS I recommend using Etcher

    • On Linux use dd.

      # /dev/sdb is assumed for the sake of the example
      export IMAGE_FILE="ubuntu-autoinstall.iso"
      sudo fdisk -l | grep "Disk /dev/"
      export DISK_NAME="/dev/sdb"
      sudo umount "$DISK_NAME"
      sudo dd bs=4M if="$IMAGE_FILE" of="$DISK_NAME" status=progress oflag=sync
  5. Boot your ISO file on a physical machine or VM and log in. If you used my user-data.basic file, the user is vmadmin and the password is password. You can create your own credentials by running mkpasswd --method=SHA-512 --rounds=4096 as documented on THIS page at line 49.
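The mkpasswd command mentioned above ships with the whois package. As a hedged alternative sketch, openssl can produce an equivalent SHA-512 crypt hash when mkpasswd is unavailable (the salt "pxeless" and password "password" below are illustrative only):

```shell
# Generate a SHA-512 crypt hash for the user-data password field.
# "pxeless" is an example salt; omit -salt to get a random one.
openssl passwd -6 -salt pxeless password

# Equivalent with mkpasswd (requires the whois package):
# mkpasswd --method=SHA-512 --rounds=4096 password
```

The resulting `$6$...` string goes into the password field of your user-data file.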

Command-line options

Short Long Description
-h --help Print this help and exit
-v --verbose Print script debug info
-n --code-name The Code Name of the Ubuntu release to download (bionic, focal, jammy etc...)
-a --all-in-one Bake user-data and meta-data into the generated ISO. By default you will need to boot systems with a CIDATA volume attached containing your autoinstall user-data and meta-data files. For more information see: https://ubuntu.com/server/docs/install/autoinstall-quickstart
-e --use-hwe-kernel Force the generated ISO to boot using the hardware enablement (HWE) kernel. Not supported by early Ubuntu 20.04 release ISOs.
-u --user-data Path to user-data file. Required if using -a
-m --meta-data Path to meta-data file. Will be an empty file if not specified and using the -a flag. You may read more about providing a meta-data file HERE
-x --extra-files Specifies a folder of files and folders which will be copied into the root of the ISO image. If not set, nothing is copied. Requires use of the --privileged flag when running in Docker
-k --no-verify Disable GPG verification of the source ISO file. By default SHA256SUMS- and SHA256SUMS-.gpg files in the script directory will be used to verify the authenticity and integrity of the source ISO file. If they are not present the latest daily SHA256SUMS will be downloaded and saved in the script directory. The Ubuntu signing key will be downloaded and saved in a new keyring in the script directory.
-o --offline-installer Run a bash script to customize the image, including installing packages and changing configuration. It should be used with -x, and the bash script should be available in the same extras directory.
-r --use-release-iso Use the current release ISO instead of the daily ISO. The file will be used if it already exists.
-s --source Source ISO file. By default the latest daily ISO for Ubuntu 20.04 will be downloaded and saved as <script directory>/ubuntu-original-<current date>.iso. That file will be used by default if it already exists.
-t --timeout Set the GRUB timeout. Defaults to 30.
-d --destination Destination ISO file. By default <script directory>/ubuntu-autoinstall-<current date>.iso will be created, overwriting any existing file.

Sources

This project is made possible through the open-source work of the following authors and many others. Thank you all for sharing your time, effort, and knowledge freely with us. You are the giants upon whose shoulders we stand. ❤️

Reference Author Description
ubuntu-autoinstall-generator covertsh The original project that PXEless is based on. If the original author ever becomes active again, I would love to merge these changes back.
Ubuntu Autoinstall Docs Canonical Official documentation for the Ubuntu Autoinstall process
Cloud-Init Docs Canonical The official docs for the Cloud-Init project
How-To: Make Ubuntu Autoinstall ISO with Cloud-init Dr Donald Kinghorn A great walkthrough of how to manually create an AutoInstall USB drive using Cloud-Init on Ubuntu 20.04
My Magical Adventure with Cloud-Init Xe Iaso Excellent practical example of how to manipulate cloud-init's execution order by specifying module order
Basic user-data example Cloudymax A very basic user-data file that will provision a user with a password
Advanced user-data example Cloudymax

Need something different?

PXEless currently only supports creating ISOs using Ubuntu Server (Focal and Jammy). Users whose needs are not met by PXEless may find these other FOSS projects useful:

Project Name Description
Tinkerbell A flexible bare metal provisioning engine. Open-sourced by the folks @equinixmetal; currently a sandbox project in the CNCF
Metal³ Bare Metal Host Provisioning for Kubernetes and preferred starting point for Cluster API
Metal-as-a-Service Treat physical servers like virtual machines in the cloud. MAAS turns your bare metal into an elastic cloud-like resource
Packer A tool for creating identical machine images for multiple platforms from a single source configuration.
Clonezilla Live! A partition or disk cloning tool similar to Norton Ghost®. It saves and restores only used blocks on the hard drive. Two types of Clonezilla are available: Clonezilla live and Clonezilla SE (Server Edition)

Testing your ISO with QEMU

Click to expand

You will need a VNC client (TigerVNC, Remmina, etc.) installed, as well as the following packages:

    sudo apt-get install -y qemu-kvm \
        bridge-utils \
        virtinst \
        ovmf \
        qemu-utils \
        cloud-image-utils \
        ubuntu-drivers-common \
        whois \
        git \
        guestfs-tools
  • You will need to replace my host IP (192.168.50.100) with your own.
  • Also change the path to the ISO file to match your system.
  • I have also set this VM to forward ssh over port 1234 instead of 22, feel free to change that as well.
  1. Do a fresh clone of the pxeless repo

  2. Create the iso with

    docker run --rm --volume "$(pwd):/data" --user $(id -u):$(id -g) deserializeme/pxeless -a -u user-data.basic -n jammy
  3. Create a virtual disk with

    qemu-img create -f qcow2 hdd.img 8G
  4. Create a test VM to boot the ISO files with

    sudo qemu-system-x86_64 -machine accel=kvm,type=q35 \
    -cpu host,kvm=off,hv_vendor_id=null \
    -smp 2,sockets=1,cores=1,threads=2,maxcpus=2 \
    -m 2G \
    -cdrom /home/max/repos/pxeless/ubuntu-autoinstall.iso \
    -object iothread,id=io1 \
    -device virtio-blk-pci,drive=disk0,iothread=io1 \
    -drive if=none,id=disk0,cache=none,format=qcow2,aio=threads,file=hdd.img \
    -netdev user,id=network0,hostfwd=tcp::1234-:22 \
    -device virtio-net-pci,netdev=network0 \
    -serial stdio -vga virtio -parallel none \
    -bios /usr/share/ovmf/OVMF.fd \
    -usbdevice tablet \
    -vnc 192.168.50.100:0
  5. Select "Try or install Ubuntu" from the GRUB menu

  6. Connect to the VM using VNC so we can watch the boot and install process run.

  7. After the install process completes and the VM reboots, select the "Boot from next volume" GRUB option to prevent installing again

  8. I was then able to log into the machine using vmadmin and password for the credentials

  9. Finally, I tried to SSH to the machine (since the VM I created uses SLIRP networking, I have to reach it via a forwarded port)


The most common issues I run into with this process are improperly formatted yaml in the user-data file, and errors in the process of burning the ISO to a USB drive.

In those cases, the machine will perform a partial install but instead of seeing pxeless login: as the machine name at login it will still say ubuntu login:.
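Since malformed YAML is the most common failure, one way to catch it before burning anything is to parse the user-data file locally. A minimal sketch, assuming python3 with PyYAML is installed (the file path below is illustrative; point it at your real user-data file):

```shell
# Validate user-data YAML before building the ISO.
# Writes a tiny example file first; substitute your own path.
cat > /tmp/user-data.check <<'EOF'
#cloud-config
autoinstall:
  version: 1
EOF
python3 -c "import sys, yaml; yaml.safe_load(open(sys.argv[1])); print('valid YAML')" /tmp/user-data.check
```

A non-zero exit (with a traceback pointing at the offending line) means the installer would have failed on that file.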

Contributors

cloudymax
Max!
lmunch
Lars Munch
meraj-kashi
Meraj Kashi
koenvandesande
Koen Van De Sande
snyk-bot
Snyk Bot
Poeschl
Markus Pöschl
dcd-arnold
Arnold
n0k0m3
N0k0m3
MrKinauJr
MrKinauJr
ToroNZ
Toro
webbertakken
Webber Takken
ZerNox
Null

License

MIT license.

This spin-off project adds support for El Torito + GPT images required for Ubuntu 20.10 and newer. It also keeps support for the now-deprecated isolinux + MBR image type. In addition, the process is dockerized so it can run on Mac/Windows hosts as well as Linux. Automated builds via GitHub Actions have also been created.


pxeless's Issues

Not a bug, just an FYI to anyone using subdirectories

The image-create script has been coded to take the path from which it is run and then use it as the root directory for certain other steps. This is all handled by the use of the SCRIPT_DIR variable.

This does not cause any issues if everything is placed in a single directory or the script is placed at the top level of any tree structure that is defined. It does cause a complication if the script itself is placed in a sub-directory.

In the environment I am putting together the impact is on the -d --destination switch as any path passed must be offset from the location of the script, rather than the 'root' directory. -s --source does not have this issue as the function extract_images has not been written to use SCRIPT_DIR.

The result is that the command line for image-create ends up looking something like this

bin/image-create.sh -a \
  -k \
  -n jammy \
  -u config_files/scaleway-1-network-basic \
  -s iso_store/ubuntu-22.04.1-live-server-amd64.iso \
  -d ../iso_output/ubuntu-22.04.1-test-amd64.iso

So

  • the script is found in bin/
  • config files are in config_files/
  • source iso image is in iso_store/
  • destination iso is placed in iso_output/ but must be defined as a path offset from bin/

You may be wondering why I'm making such a complex structure. My aim is to have an environment that can create a number of different ISOs for CI and production environments, which will be complicated even more by the use of ytt Carvel tools which will allow me to have a far more dynamic YAML environment.
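For anyone hitting the same behaviour: it stems from the common bash idiom for resolving a script's own directory, against which relative paths are then anchored. A minimal reproduction sketch (all paths here are illustrative, not from the repo):

```shell
# Demonstrates the SCRIPT_DIR idiom: the script resolves its own location,
# so relative paths it builds are anchored to its directory (bin/), not
# the caller's current working directory.
mkdir -p /tmp/pxeless-demo/bin
cat > /tmp/pxeless-demo/bin/demo.sh <<'EOF'
#!/bin/bash
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
echo "$SCRIPT_DIR"
EOF
chmod +x /tmp/pxeless-demo/bin/demo.sh
# Prints the bin/ path no matter where the script is invoked from:
(cd / && /tmp/pxeless-demo/bin/demo.sh)
```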

[FEATURE] Add a user account with sudo access in container

This is to address the issue uncovered while troubleshooting #33

The problem is that in order to mount the squashfs, we need sudo access.

This is problematic because we don't want to run containers as root in production.

The only compromise I can think of is to add a user account with passwordless sudo access into the container, and then run the container with the user ID and group ID of the new account.

This should allow sudo access to the squashfs while fulfilling the technical requirement for a non-root account to run the container - though it's still not ideal to have the user account with sudo access.

I imagine though that most people using this script run it as a one-shot job and don't leave the container running for long, which should mean the risks are minimal.

BUG: extra files not copied to live system during install when using `-x`

I'm trying to figure out how to take the additional files I've added to the iso and have them copied to the hard-drive of the machine being built.

I originally used -x to add a directory to the ISO, but I didn't see that directory in the / dir of the newly installed machine. Next I tried adding a cp -R /root/my_additional_stuff /root/ to see if that would copy over my additional directory, but that didn't seem to work either.

What would be the preferred method to accomplish getting some custom files copied from the iso to the new machine?

[Question] Offline package installation

Hi!
This is a question and not an issue!

Is there any way to install Debian packages before creating the ISO? Ubuntu cloud-init needs Internet access to install packages, so I am looking for a solution to inject packages during image creation.

Downloading Debian packages and injecting them through the extras file could be a solution, but managing package dependencies is not easy!

Another possible solution could be chroot, but at which step of the script should I do it?

Br,
Meraj

An issue within the readme example

Hi,

Thanks for doing so much of the heavy lifting in terms of providing a solution that allows features like cloud-init to be useful.

One thing you have the following example in your readme.md

docker build -t iso-generator . &&
docker run -it --mount type=bind,source="$(pwd)",target=/app iso-generator
ubuntu-autoinstall-generator.sh -a -u user-data.example -n jammy

The version of ubuntu-autoinstall-generator.sh over at covertsh/ubuntu-autoinstall-generator does not have an -n option.

Currently, it is not clear if you have a typo in your example or maybe an extended version of the script.

Thanks again

switch combinations

I hope you don't mind me raising issues like this as I don't have the time or really the skills to do direct modifications and pull requests.

Your script has a problem when using the -s (source) switch: you also need to define the -n (code-name) switch, as the script still tries to run the latest_release function, which fails on the first curl because it cannot retrieve a valid file without a code name.

Make sudo conditional to user/env

Noticed that this function was expanded quite a bit and "sudo" is now a dependency (which broke CI builds for me in Bitbucket). Perhaps we could make sudo optional if not running as root or in CI runners?... like:

#!/bin/bash
if [ ! "$CI" ]; then
  if [ "$USER" != "root" ]; then
    SUDO="sudo "
  fi
fi
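A slightly more robust variant of that suggestion (this is a sketch, not the repo's actual code) checks the effective UID with id -u instead of $USER, which may be unset in minimal container environments:

```shell
#!/bin/bash
# SUDO stays empty when running as root or inside a CI runner;
# privileged commands are then invoked with the prefix, e.g.
# ${SUDO}unsquashfs, ${SUDO}mksquashfs, etc.
SUDO=""
if [ -z "${CI:-}" ] && [ "$(id -u)" -ne 0 ]; then
  SUDO="sudo "
fi
echo "command prefix: '${SUDO}'"
```

With this in place, the bare sudo calls in image-create.sh would become `${SUDO}`-prefixed, so the script works unchanged as root, as a normal user, and in CI.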

I've had a quick look and they seem to be related to #27 . Though I never had issues adding extra files in Ubuntu Focal (20.04). So there must be something more to it... I hope to dig a little bit deeper on the following days, for now I will lock this submodule to the previous commit.

BTW - Kudos @cloudymax for maintaining this tool :)

pxeless/image-create.sh

Lines 387 to 398 in d814df0

sudo unsquashfs "${SQUASH_FS}"
log " - Step 3. Copy extra files to /media..."
sudo cp -R "${EXTRA_FILES_FOLDER}/." "squashfs-root/media/"
if [ -n "$OFFLINE_INSTALLER" ]; then
log " - Step 3.5. Runing offline installer script..."
sudo chroot squashfs-root/ /bin/bash "/media/${OFFLINE_INSTALLER}"
fi
log " - Step 4. Rebuilding squashfs.."
sudo mksquashfs squashfs-root/ "${SQUASH_FS}" -comp xz -b 1M -noappend

stat /data/image-create.sh: no such file or directory: unknown.

v0.0.7 and v0.0.8 are broken:
$ docker run --rm --volume "$(pwd):/data" --user $(id -u):$(id -g) deserializeme/pxeless:v0.0.7 -a -u user-data.basic -n jammy -r
Digest: sha256:7a1862ac4d38c493a9cebed3dc31a57f7cf1e843bdd0a04e42ad05b09f193826
Status: Downloaded newer image for deserializeme/pxeless:v0.0.7
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/data/image-create.sh": stat /data/image-create.sh: no such file or directory: unknown.

v0.0.6 works:
$ docker run --rm --volume "$(pwd):/data" --user $(id -u):$(id -g) deserializeme/pxeless:v0.0.6 -a -u user-data.basic -n jammy -r
Digest: sha256:0df39c1aeabdc9cb4518a648b99aa682444cd3fe8d69536ab187ae2b146c0e85
Status: Downloaded newer image for deserializeme/pxeless:v0.0.6
[2023-05-18 20:12:39] 📁 Created temporary working directory /tmp/tmp.hAZYqs7Mux
....

I assume this issue has been missed as it (probably) works if you have image-create.sh in $(pwd)

GUIDE: Testing your ISO with QEMU

I used docker run --rm --volume "$(pwd):/data" --user $(id -u):$(id -g) deserializeme/pxeless -a -u user-data.basic -n jammy to create a bootable drive. When SSH'ing into the machine I am unable to access it using the default username and password, vmadmin and password

[FEATURE] Customize boot parameters

Thanks for continuing this development.

My main goal is to add toram fsck.mode=skip to the boot parameters. I've done this locally, but would like to upstream it, soliciting feedback on a good approach.

The use-case is that we deploy remote sites where all we know is the BMC IP of the server, with no personnel on-site. So generating a server-specific ISO with an appropriate cloud-init helps us bootstrap the initial infrastructure node before we have machine provisioning established. The BMC is configured to mount and boot the ISO over the network.

Some of these sites are very remote with high latency (up to a second RTT), which poses some challenges. It all seems to work well with these two parameters added.

So before I submit a PR, how would you like these to be added? An arbitrary --kernel-args, or individual switches (--toram, --skip-fsck)?

Working examples with Ubuntu 22.04.1 Desktop?

Thanks for taking over the ubuntu-autoinstall-iso-generator repro ❤️

Is there a working example known for an automatic install of ubuntu 22.04.1?
I'm having issues on setting it up correctly.

When I'm trying to use the provided sample and execute the builder with a pre-downloaded ubuntu iso and the command

./image-create.sh --no-verify --all-in-one --user-data basic-config.yml -s ubuntu-22.04.1-desktop-amd64.iso -d install-22.iso -n jammy

the iso is successfully created.

Now, when booted from this ISO in a VM with the default GRUB entry selected, the automatic procedure does not start correctly and I'm seeing the install wizard.

I looked into the syslog and the commands in the kernel seem to be correct. Any clues where to look at?
