
installimage's People

Contributors

alex-pakalniskis, asciiprod, astronomus-gm, bastelfreak, freezingdaniel, gratuxri, lilalkor, timakro


installimage's Issues

Boot and root do not need to be outside LVM

I've had a couple of installs where I needed to patch installimage by removing

installimage/functions.sh

Lines 1118 to 1139 in cc14774

# test if /boot or / is mounted outside the LVM
if [ "$LVM" = "1" ]; then
  TMPCHECK=0
  for ((i=1; i<=PART_COUNT; i++)); do
    if [ "${PART_MOUNT[$i]}" = "/boot" ]; then
      TMPCHECK=1
    fi
  done
  if [ "$TMPCHECK" = "0" ]; then
    for ((i=1; i<=PART_COUNT; i++)); do
      if [ "${PART_MOUNT[$i]}" = "/" ]; then
        TMPCHECK=1
      fi
    done
  fi
  if [ "$TMPCHECK" = "0" ]; then
    graph_error "ERROR: /boot or / may not be a Logical Volume"
    return 1
  fi
fi

A totally working Debian 11 setup has this truncated lsblk output:

NAME                                           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
nvme0n1                                        259:0    0 476.9G  0 disk
├─nvme0n1p1                                    259:1    0    50G  0 part
│ └─md0                                          9:0    0    50G  0 raid1
│   ├─$FQDN-root 253:0    0    30G  0 lvm   /
│   └─$FQDN-boot 253:1    0     1G  0 lvm   /boot

And following snippet from grub.cfg:

        module2 /vmlinuz-5.10.0-21-amd64 placeholder root=/dev/mapper/$FQDN-root ro  nomodeset consoleblank=0

(module2 because a Xen hypervisor is being loaded, but that's irrelevant to the actual functioning of GRUB)

The only things that need to be ensured are having

insmod mdraid1x
insmod lvm

in grub.cfg. Debian 11 just natively supports this and doesn't require any special configuration here; that's the output from a stock update-grub.

So I would recommend just removing that snippet entirely, since if Debian can do it, presumably any distribution can; or, if this is supposed to be a foot-gun safety measure, at least allow disabling it with a command-line flag.
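
As a sketch of the suggested escape hatch, the check could be wrapped behind an override variable (ALLOW_LVM_ROOT is an assumed name here, not an existing installimage option):

```shell
# Sketch: the same mount-point check as in functions.sh, guarded by a
# hypothetical ALLOW_LVM_ROOT override.
# Args: LVM flag (0/1), override flag (0/1), then the non-LVM mount points.
check_lvm_boot() {
  lvm="$1"; override="$2"; shift 2
  [ "$lvm" != "1" ] && return 0      # no LVM in use, nothing to check
  [ "$override" = "1" ] && return 0  # user explicitly allows / and /boot on LVM
  for mnt in "$@"; do                # pass if /boot or / is a plain partition
    case "$mnt" in /boot|/) return 0 ;; esac
  done
  echo "ERROR: /boot or / may not be a Logical Volume" >&2
  return 1
}
```

With the override set, an all-LVM layout like the Debian 11 one above would pass validation.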

ERROR: Value for CRYPTPASSWORD is not defined

I tried to install a new server, but I get this error after editing the config file.
In the config file there is no comment for this setting.
I tried
CRYPTPASSWORD 1234 but that didn't work either. Is there a way to install the server without a crypt password?

Thanks.

Allow setting raid level/config per partition

Currently, you can only select one raid level for all partitions, even though each partition is created as a separate raid device.
It would be great to have the option of different raid levels per partition, e.g.:

  • Raid 1 for the server;
  • Raid 1 for a guest images partition/volume;
  • No raid for (duplicated) swap partitions, since these stripe by default;
  • Raid 0 for a volume dedicated to swap partitions for VMs, since those don't really benefit from raid 1 anyway.

Currently, to achieve this, we have to manually adjust the partitions after the installation is done.
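
A hypothetical config syntax for this (pure illustration; the trailing raid column is not supported by installimage today) might look like:

```shell
# NOT valid installimage syntax -- illustration of per-PART raid levels:
PART /        ext4  50G   raid1
PART /images  ext4  500G  raid1
PART swap     swap  8G    noraid
PART /vmswap  ext4  all   raid0
```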

RockyLinux 9 support

Hey, Rocky Linux 9 is already out. Is there any chance it gets added to installimage? 😉

Raid: ability to pass e.g. --chunk to mdadm

As with RAID_LAYOUT, it's desirable to be able to specify --chunk too.
I even think it would be good to have an arbitrary "extra args" field that can be passed to mdadm as is - to proactively cover other possible (and future) needs with any other extra flags.
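
A sketch of the requested "extra args" pass-through (EXTRA_MDADM_ARGS is a hypothetical config key, not an existing installimage option; --chunk itself is a real mdadm flag):

```shell
# Build the mdadm invocation, splicing in arbitrary user-supplied extra args.
build_mdadm_cmd() {
  # $1 = md device, $2 = raid level, $3 = extra args, rest = member partitions
  md="$1"; level="$2"; extra="$3"; shift 3
  echo "mdadm --create $md --level=$level --raid-devices=$# $extra $*"
}

# e.g. EXTRA_MDADM_ARGS="--chunk=512" in the config would yield:
build_mdadm_cmd /dev/md2 0 "--chunk=512" /dev/nvme0n1p3 /dev/nvme1n1p3
```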

Avoid removing partitions?

From what I see in the code, all existing partitions are always deleted in delete_partitions()?

Is it possible to keep some existing data partitions around, while reinstalling the system drive at /?

This is for the storage servers I'm running with Hetzner. With the CentOS 8 end of life change, I'm afraid I may need to reinstall them sooner than expected. 😟

If it's not possible, what would be the next-best way to preserve partitions? KVM/VNC installation?

Support for Fedora 35?

I have seen that there is now also support for rocky linux, alma linux etc.
However, we use fedora in the cloud so far and would like to use this for our dedicated server as well.
Is there a way to add this? @asciiprod I guess it's not that difficult, since Rocky, CentOS etc. are already very close to Fedora? I would also be willing to make a PR; unfortunately, I can not find documentation on how to include other operating systems, nor instructions on how to test installimage...

Extended partitions

Most disk utilities, and both LVM2 and BTRFS, support including multiple disks in one partition.

The comments in the config file seem to indicate that extended partitions will be handled transparently. However, trying to set a size that will not fit on one disk fails, telling me that I don't have enough space. In addition, the "all" size seems to only fill up the rest of a single disk.

Is there a bug preventing the proper support of extended partitions, or is it possible to add it in the future?

support for rootfs on btrfs subvolume

Is it possible to add configuration options to allow having "/" on a btrfs subvolume, i.e. something like this:

PART swap swap 4G
PART boot ext3 1G
PART btrfsvol btrfs all
PART / btrfs 20G btrfsvol rootfs
PART /home btrfs 100G btrfsvol home

or similar?
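
For comparison, this is roughly what such a config would have to do under the hood, done by hand after a plain install (device name /dev/md1 is illustrative; run from the rescue system):

```shell
# Manual equivalent of the proposed subvolume layout:
mkfs.btrfs /dev/md1                   # the "btrfsvol" volume
mount /dev/md1 /mnt
btrfs subvolume create /mnt/rootfs    # future /
btrfs subvolume create /mnt/home      # future /home
umount /mnt
mount -o subvol=rootfs /dev/md1 /mnt  # then install/copy the OS into /mnt
```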

Install to disk by ID

We are trying to install servers with 1 OS drive and several data drives. Installing with e.g. -d sda is unreliable, both because the disk which installimage sees as sda is random, and because it then installs a system whose fstab refers to partitions as /dev/sda[123]. In what appears to be a similar case, #14 (comment) says that "installimage will always use UUIDs for that reason. If UUIDs are also used to reference the second disk, the order should not be an issue". However, attempting to actually use e.g. -d wwn-0x500a075419ae8a61 results in an error, and when using -d sdX installimage does not convert to UUIDs or anything else stable when writing fstab, which results in sometimes-broken systems. What is the correct way to handle this, short of manually correcting the installed fstab from the rescue system?

Auto mode not working when USB stick attached

Even when selecting a disk that is not the USB stick, installimage complains that I'm installing on a USB stick /dev/sde and blocks waiting for user input.

Command:

installimage -a -n test -b grub -r no -l 1 -i /root/.oldroot/nfs/install/../images/Ubuntu-1604-xenial-64-minimal.tar.gz \
-p /:ext4:10G -d sdb -f no -s en -t yes

Disks:

root@rescue ~ # ls -al /dev/disk/by-id
total 0
drwxr-xr-x 2 root root 780 May  6 00:09 .
drwxr-xr-x 8 root root 160 May  6 00:08 ..
lrwxrwxrwx 1 root root   9 May  6 00:08 ata-Micron_5100_MTFDDAK480TBY_1710162FAD00 -> ../../sda
lrwxrwxrwx 1 root root   9 May  6 00:08 ata-Micron_5100_MTFDDAK480TBY_1710162FAD15 -> ../../sdb
lrwxrwxrwx 1 root root   9 May  6 00:08 ata-TOSHIBA_MG04ACA600EY_27LEK01LFTTB -> ../../sdd
lrwxrwxrwx 1 root root   9 May  6 00:08 ata-TOSHIBA_MG04ACA600EY_27LEK01PFTTB -> ../../sdc
lrwxrwxrwx 1 root root   9 May  6 00:07 usb-JetFlash_Transcend_8GB_2499536996-0:0 -> ../../sde
lrwxrwxrwx 1 root root  10 May  6 00:07 usb-JetFlash_Transcend_8GB_2499536996-0:0-part1 -> ../../sde1
lrwxrwxrwx 1 root root   9 May  6 00:08 wwn-0x500003979bd82c5e -> ../../sdd
lrwxrwxrwx 1 root root   9 May  6 00:08 wwn-0x500003979bd82c62 -> ../../sdc
lrwxrwxrwx 1 root root   9 May  6 00:08 wwn-0x500a0751162fad00 -> ../../sda
lrwxrwxrwx 1 root root   9 May  6 00:08 wwn-0x500a0751162fad15 -> ../../sdb

Alma/Rocky Linux 9 support

Hey, Rocky Linux 9 is already out. Is there any chance it gets added to installimage?
Thanks in advance.

syntax error when trying to setup LUKS

Hello, I'm trying to setup LUKS on my new cloud instance, but I've got the following message.

[22:06:31] # setting up /etc/netplan/01-netcfg.yaml
[22:06:31] :   chroot: failed to run command ‘/usr/bin/env’: Exec format error
[22:06:31] :   /root/.oldroot/nfs/install/network_config.functions.sh: line 176: ((: < 226: syntax error: operand expected (error token is "< 226")
[22:06:32] :   configuring dhcpv4 for
[22:06:32] :   configuring ipv6 addr <redacted ipv6>for
[22:06:32] :   chroot: failed to run command ‘/usr/bin/env’: Exec format error
[22:06:32] fatal: can not query netplan version
[22:06:32] => FAILED

I was following the following instruction, but with a bit of modification as my system is in UEFI mode. (I added PART /boot/efi esp 256M, not sure if I am supposed to do that.)

https://community.hetzner.com/tutorials/install-ubuntu-2004-with-full-disk-encryption

Improve support for encryption at rest.

It's 2021: we now have GDPR and other new data-protection laws which put a lot of pressure on us to encrypt user data at rest. Encryption at rest is the norm at other providers such as AWS and Google Cloud. It's becoming a security standard.

Some work was done on this here: #21

installimage should be able to set this up more automatically. Or at the very least, there should be documentation on how to do this properly. Extra post-install scripts to add and configure dropbear during the installation process would be reasonable.

Thanks!

EFI partition type

bios_grub seems to be a supported type; however, calling mkfs.bios_grub fails (not surprisingly).

Is there another way to create an EFI partition?
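
Other reports in this thread use an esp partition type for UEFI systems, e.g.:

```shell
PART /boot/efi esp 256M
```

Note that bios_grub is the small BIOS-boot partition GRUB uses on GPT disks when booting in legacy mode; it carries no filesystem at all, which is why no mkfs tool exists for it.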

Support for RHEL?

Hi there,

would it be possible to add support for rhel as well?
It might be fairly close to the current centos logic, but users would not need to

  • use centos in the image name
  • ignore errors at the end of installimage (because of the update, since it's mostly not subscribed yet)

Just checking whether you would be open to this in general, before putting work into a PR that might be rejected as unwanted :)

Random Installation disk order

I've got 2 nvme disks and I want to install the system with installimage, no RAID, here:

  • nvme0n1
  • nvme1n1

Sometimes the first disk is used, sometimes the second one; the installation disk is not predictable.

Expected behavior: The installation disk should always be nvme0n1

Autorelabel should be done for permissive SELinux

In the CentOS-specific configuration script, an autorelabel is triggered only if SELinux is configured in enforcing mode [1].

However, it should also be triggered in permissive mode. The reason is that the rescue environment does not have SELinux enabled, so the files it creates remain unlabeled; permissive mode would then report false alerts and, what is worse, there would be failures if enforcing mode is later enabled.

An example of a CentOS version that installs with SELinux set to permissive mode is the CentOS Stream 8 version available at the time this issue is being reported.

[1] - https://github.com/hetzneronline/installimage/blob/master/centos.sh#L161

Support for Fedora CoreOS

I'd like to use Fedora CoreOS (https://docs.fedoraproject.org/en-US/fedora-coreos/) with the Cluster API Provider Hetzner (https://github.com/syself/cluster-api-provider-hetzner), but it requires any setup of bare metal machines to be made with installimage, which currently doesn't support it, because it has a different filesystem structure than common Linux distributions. It uses an ostree based root filesystem for A/B-deployments and rollbacks. This requires btrfs with snapshots and a different /boot layout.

Grub works on xfs at least for Ubuntu-1804

root@Ubuntu-1804-bionic-64-minimal:~# mount|grep xfs
/dev/md0 on / type xfs (rw,relatime,attr2,inode64,noquota)
/dev/md2 on /data type xfs (rw,relatime,attr2,inode64,noquota)

So the ERROR in the install script regarding grub on xfs is probably no longer accurate.

Warning: GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub is overwritten.

Probably not actionable, but I wanted to note for users searching about this issue: your kernel options in /etc/default/grub for GRUB_CMDLINE_LINUX_DEFAULT are overwritten in /etc/default/grub.d/hetzner.cfg.

sed -i "$grubdefconf" -e "s/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT=\"${grub_linux_default}\"/"

You may be better off setting your kernel options in /etc/default/grub.d/hetzner.cfg

Some example repercussions of this:

  • Production Redis requires transparent_hugepage=never
  • Rootless Docker requires systemd.unified_cgroup_hierarchy=1 in current kernels to use cgroupsv2
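
On distributions where grub-mkconfig sources /etc/default/grub.d/*.cfg after /etc/default/grub, a later drop-in can append to the Hetzner-set value instead of losing it. A sketch (the file name zz-custom.cfg is an assumption; drop-ins are sourced in lexical order, so it must sort after hetzner.cfg):

```shell
# /etc/default/grub.d/zz-custom.cfg -- sourced after hetzner.cfg, so it wins.
# Append options rather than replacing the whole generated line:
GRUB_CMDLINE_LINUX_DEFAULT="${GRUB_CMDLINE_LINUX_DEFAULT} transparent_hugepage=never"
```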

Unsure if Hetzner can do anything about this, but just so people are aware when googling this issue.

Remember, of course, to run update-grub when changing kernel options.

Thanks.

an error should lead to a non-zero exit code if called with -a

I use installimage via terraform with the -a flag to provision my machines with the following command:

/root/.oldroot/nfs/install/installimage -a -n myname -b grub -r no -i /root/.oldroot/nfs/images/`ls /root/.oldroot/nfs/images | grep -E '^Debian.*bullseye.*minimal.tar.gz$' | sort | tail -n1` ... -t yes

As you, sadly, have new naming conventions for your images, the script hit an error. Since I called it via terraform, the script got stuck so hard that I had to kill the terraform process; even a Ctrl-X or Ctrl-C did not help.

terraform was stuck at:
(screenshot of the stuck terraform output omitted)

I had to add log output to my terraform that prints the exact command, and run it manually on the system, to finally see:

(screenshot of the error omitted)

which I traced to your new file-naming scheme.

Running installimage with -a (or maybe a new flag like --no-interaction or --batch) should never lead to a GUI showing the error, but to a non-zero exit code and an error message on stderr.

That would be really helpful and save time.

It would be really nice if you could add symlinks following the previous naming scheme. In general, a more stable naming scheme would be awesome.

   0 lrwxrwxrwx  1 root root    50 Aug 25 08:28 Debian-913-stretch-64-minimal.tar.gz -> ../images.old/Debian-913-stretch-64-minimal.tar.gz
   0 lrwxrwxrwx  1 root root    36 Aug 24 13:16 Debian-oldstable-64-minimal.tar.gz -> Debian-1010-buster-64-minimal.tar.gz
   0 lrwxrwxrwx  1 root root    36 Jul 20 08:02 Debian-stable-64-minimal.tar.gz -> Debian-1010-buster-64-minimal.tar.gz
   0 lrwxrwxrwx  1 root root    38 Aug 24 13:18 Debian-stable-amd64-base.tar.gz -> Debian-1100-bullseye-amd64-base.tar.gz

Debian buster should not be called Debian stable any more.

Something like the following would be really helpful:

Debian-latest -> Debian-bullseye-1101 
Debian-buster-latest -> Debian-buster-1011
Debian-buster-1000
Debian-buster-1001
...
Debian-buster-1011
Debian-bullseye-latest -> Debian-bullseye-1101
Debian-bullseye-1101 
Debian-bullseye-1100
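
Until the names stabilize, callers can at least select the newest matching image defensively; a sketch (the name pattern is an assumption about the current scheme):

```shell
# Select the newest image for a given codename, version-sorting the names.
pick_latest() {
  # $1 = codename (e.g. bullseye); image file names are read from stdin
  grep "$1" | grep 'minimal.*\.tar\.gz$' | sort -V | tail -n 1
}

# Usage: ls /root/.oldroot/nfs/images/ | pick_latest bullseye
```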

Root on ZFS for supported distros

Root on ZFS should work on Debian and Ubuntu; however, it's not offered in installimage, and the steps provided by OpenZFS for Ubuntu 20.04 do not result in a bootable system (instructions here for reference).

It seems, at a minimum, the live system would need to install zfsutils-linux from buster-backports, and the same would need to be installed in the new system (it's available from the default repositories in Ubuntu), as well as zfs-initramfs in the live system. There would also need to be handling of the creation of the suggested bpool and rpool (or user-defined pools) and a standard BIOS Boot partition.

The config could look like follows:

PART zfs <pool name> <size> <---- defines that user wants to use zfs

zpool <pool name> <dataset> <mountpoint> <----- user defines datasets per pool

Where all <dataset> specified is relative to <pool name>/

For example, with a basic bpool and rpool, with no particular dataset besides / and /boot. /boot can be on ZFS, but that isn't as straightforward with swraid. Keep in mind, however, zfs has native RAID support at level 0 [just uses several disks] and then what you'd expect for level 1, 5, 6, and 10. Easiest solution would be to let ZFS handle the RAID if user selects it.

PART swap swap 32G
PART zfs bpool 2G
PART zfs rpool all

zpool bpool boot /boot
zpool rpool root /

Said config is not unlike what already exists for btrfs.
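
Roughly, the config above would translate to something like the following (device names are illustrative; a real bpool also needs GRUB-compatible feature flags, omitted here for brevity):

```shell
# Mirrored boot and root pools over two disks (sketch only):
zpool create -o ashift=12 bpool mirror /dev/nvme0n1p2 /dev/nvme1n1p2
zpool create -o ashift=12 rpool mirror /dev/nvme0n1p3 /dev/nvme1n1p3
zfs create -o mountpoint=/boot bpool/boot
zfs create -o mountpoint=/     rpool/root
```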

Would it be possible to add zfs support in the future? ZFS comes with many handy features, like recovering from silent corruption, compression, and built-in RAID, amongst other things, and is suitable for most production environments.

Ubuntu22.04 not using the rescue system credentials

Hey!

So I use SSH credentials to connect to the rescue system, installed Ubuntu 22.04 with the basic config, rebooted the server, and then got asked for a username and password; the SSH key didn't work. (I only set an SSH key when ordering the server.)
I tried installing multiple times, even with params like -t or -G to take over the SSH keys, but that also didn't work.

After 5 reinstallations I went back to Ubuntu 20.04, and there it worked fine; I could connect with my SSH key right after the installation.

Add spaces to Dracut config

Getting this in CentOS 9. Doesn't cause any actual problems, but maybe good to change in the future.

/etc/dracut.conf.d/99-hetzner.conf:add_dracutmodules+="lvm mdraid"
/etc/dracut.conf.d/99-hetzner.conf:add_drivers+="raid0 raid1 raid10 raid456"

dracut: WARNING: <key>+=" <values> ": <values> should have surrounding white spaces!
dracut: WARNING: This will lead to unwanted side effects! Please fix the configuration file.
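
The fix the warning asks for is just the surrounding spaces inside the quotes:

```shell
# /etc/dracut.conf.d/99-hetzner.conf with the spacing dracut expects:
add_dracutmodules+=" lvm mdraid "
add_drivers+=" raid0 raid1 raid10 raid456 "
```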

SWAP partition size.

When I install Ubuntu 18.04 minimal and use the default 32G suggestion for the swap partition, I end up with a 128G partition. I'm using 4x NVMe in a raid 0 configuration, which seems to suggest that the 32G is being applied per disk in the raid, thus ending up as 128G.

Any chance you lads can look into fixing this?

raid0 error on EX100

hello,

we cannot use SWRAIDLEVEL 0 with the EX100 installimage partition configuration.

PART /boot/efi esp 256M
PART swap swap 4G
PART /boot ext3 1024M
PART / ext4 all

It seems the EFI ESP does not accept raid0.

This PART line is not in the AX installimage configuration, and I have no issue on AX servers.

installimage sometimes confuses one disk for another

Hello.
PX62-NVMe server has two disks: nvme0n1 and nvme1n1. I need them separately - not in a raid. I also need the first one as some partitions and the second one as a raw device.
I run installimage -a -r no -p /boot:ext3:512M,/:ext4:20G,/kafka:ext4:all -d nvme0n1 -f no -s en -i /root/.oldroot/nfs/install/../images/Ubuntu-1804-bionic-64-minimal.tar.gz
Sometimes it works as expected, and sometimes (quite often, apparently) installimage thinks that nvme0n1 is nvme1n1:

# cat /etc/fstab
proc /proc proc defaults 0 0
# /dev/nvme0n1p1 during Installation (RescueSystem)
UUID=ab20d81c-c59b-4ec6-8399-1b79c750f429 /boot ext3 defaults 0 0
# /dev/nvme0n1p2 during Installation (RescueSystem)
UUID=de1eab9c-7dea-44c6-bc77-5ce34d436755 / ext4 defaults 0 0
# /dev/nvme0n1p3 during Installation (RescueSystem)
UUID=c6278d98-3c85-4f04-bb15-86b4ee5cef7e /kafka ext4 defaults 0 0

# ls -lha /dev/disk/by-uuid
lrwxrwxrwx 1 root root  15 Apr 18 09:47 ab20d81c-c59b-4ec6-8399-1b79c750f429 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root  15 Apr 18 09:47 c6278d98-3c85-4f04-bb15-86b4ee5cef7e -> ../../nvme1n1p3
lrwxrwxrwx 1 root root  15 Apr 18 09:47 de1eab9c-7dea-44c6-bc77-5ce34d436755 -> ../../nvme1n1p2

Looks like installimage is relying somewhere on a strict device order, which can actually be random.

ERROR: ESP missing or multiple ESP found

Hey,

I am trying to run the installimage as part of https://github.com/syself/cluster-api-provider-hetzner

I have 3 dedicated servers on 2 servers the script just runs fine but on the 3rd server it fails with:

[14:23:21] # use config file /autosetup for autosetup
[14:23:21] # use post-install file /root/post-install.sh
[14:23:21] # OPT_CONFIGFILE:   /autosetup
[14:23:21] # starting installimage
[14:23:21] -------------------------------------
[14:23:21] :   Hardware data:
[14:23:21] :   CPU1: AMD Ryzen 9 5950X 16-Core Processor (Cores 32)
[14:23:21] :   Memory:  128746 MB
[14:23:21] :   Disk /dev/nvme0n1: 3840 GB (=> 3576 GiB)
[14:23:21] :   Disk /dev/nvme1n1: 3840 GB (=> 3576 GiB)
[14:23:21] :   Total capacity 7153 GiB with 2 Disks
[14:23:21] -------------------------------------
[14:23:21] # make clean config
[14:23:21] # SYSTYPE: System Product Name
[14:23:21] # SYSMFC:  ASUS
[14:23:22] # executing autosetup ...
[14:23:22] # SYSTYPE: System Product Name
[14:23:22] # SYSMFC:  ASUS
[14:23:22] # checking if the script is disabled
[14:23:22] # validating config ...
[14:23:22] :   /boot : 1024
[14:23:22] :   / : all
[14:23:22] Image info:
[14:23:22] :   DISTRIB ID:               ubuntu
[14:23:22] :   DISTRIB RELEASE/CODENAME: 2004
[14:23:22] :   Size of the first hdd is: 3840755982336
[14:23:22] :   ERROR: ESP missing or multiple ESP found
[14:23:22] cleaning up
[14:23:22] :   umount: /installimage.kiLIM/hdd: not mounted

The autosetup file looks like this:

DRIVE1 /dev/nvme1n1
DRIVE2 /dev/nvme0n1

HOSTNAME bm-main-cluster-md-1-nsqbl
SWRAID 0

PART /boot ext4 1024M
PART / ext4 all



IMAGE /root/.oldroot/nfs/install/../images/Ubuntu-2004-focal-64-minimal-hwe.tar.gz

LVM+RAID support using MD-RAID + LVM or LVM-RAID?

I would like to understand the RAID+LVM setup. For this I already noticed the example given by:
https://github.com/hetzneronline/installimage/blob/master/configs/simple-debian64-raid-lvm.

Will 'SWRAIDLEVEL 1' introduce an mdadm RAID, or will it use the LVM-RAID features? See here:
https://blog.programster.org/create-raid-with-lvm

Example for RAID1 using LVM2 (taken from link above):

VG_NAME="vg1"
LV_NAME="lvm_raid1"

sudo vgcreate $VG_NAME /dev/sd[x]1 /dev/sd[x]1

sudo lvcreate \
  --mirrors 1 \
  --type raid1 \
  -l 100%FREE \
  --nosync \
  -n $LV_NAME $VG_NAME

sudo mkfs.[ext4/xfs] /dev/$VG_NAME/$LV_NAME

Is it also possible to make installimage run custom scripts to set up partitions using LVM commands directly?

allow other editor than mcedit

For the advanced users who are able to use vim (which is installed in the Hetzner rescue system), it would be great and much easier to configure a server if you allowed using it by respecting the VISUAL/EDITOR environment variables.

Option: copy no files

I'm using this script to set up new servers. It does two things here: Prepare the disks, RAID, LVM and file systems; and copy the OS files (including making necessary changes to the config files).

When restoring a server from a file-based backup, no OS files need to be copied from a default image. I only need the disk preparation parts of this script, then I can restore all files in the file system from my backup.

It would be helpful to have an option to this script that only performs the first part and then does not copy any files into the new file system(s).

REGRESSION: encrypted btrfs root filesystem fails -- but worked before (for sure at commit 84883ef)

This simple installimage.cfg worked perfectly at commit 84883ef

DRIVE1 /dev/sda

SWRAID 0
SWRAIDLEVEL 0

BOOTLOADER grub
HOSTNAME tester

PART swap swap 4G
PART /boot ext2 1G
PART / btrfs all crypt 

IMAGE /root/.oldroot/nfs/images/archlinux-latest-64-minimal.tar.gz

SSHKEYS_URL /root/.ssh/authorized_keys

CRYPTPASSWORD :somecryptpassword

But it now fails because of wrongly constructed device-mapper paths and a wrong attempt to encrypt all partitions.

The latter relates this issue to the long-reported issue of #51 (comment)

The relevant output of debug.txt reads:

[11:19:00] # Encrypt partitions and create /etc/crypttab
[11:19:05] ! this is no valid block device:  /dev/mapper/luks-dev/dev/mapper/luks-sda1
[11:19:05] content from ls /dev/[hmsv]d*: /dev/sda
/dev/sda1
/dev/sda2
/dev/sda3
[11:19:09] ! this is no valid block device:  /dev/mapper/luks-dev/dev/mapper/luks-sda2
[11:19:09] content from ls /dev/[hmsv]d*: /dev/sda
/dev/sda1
/dev/sda2
/dev/sda3
[11:19:13] ! this is no valid block device:  /dev/mapper/luks-dev/dev/mapper/luks-sda3
[11:19:13] content from ls /dev/[hmsv]d*: /dev/sda
/dev/sda1
/dev/sda2
/dev/sda3

Please fix this showstopper soon.

Various syntax errors when trying to install from a URL

Whenever we try to install from a URL we see the following issues:

                Hetzner Online GmbH - installimage

  Your server will be installed now, this will take some minutes
             You can abort at any time with CTRL+C ...

         :  Reading configuration                           done 
         :  Loading image file variables                    done 
         :  Loading ubuntu specific functions               done 
   1/17  :  Deleting partitions                             done 
   2/17  :  Test partition size                             done 
   3/17  :  Creating partitions and /etc/fstab              busy /root/.oldroot/nfs/install/functions.sh: line 1930: ((: 23.04: syntax error: invalid arithmetic operator (error token is ".04")
/root/.oldroot/nfs/install/functions.sh: line 1922: [: 23.04: integer expression expected
/root/.oldroot/nfs/install/functions.sh: line 1930: ((: 23.04: syntax error: invalid arithmetic operator (error token is ".04")
                                                            done 
   4/17  :  Creating software RAID level 1                  busy /root/.oldroot/nfs/install/functions.sh: line 2256: [: 23.04: integer expression expected
/root/.oldroot/nfs/install/functions.sh: line 2263: [: 23.04: integer expression expected
                                                            done 
   5/17  :  Formatting partitions
         :    formatting /dev/md/0 with swap                done 
         :    formatting /dev/md/1 with ext3                done 
         :    formatting /dev/md/2 with ext4                done 
   6/17  :  Mounting partitions                             done 
   7/17  :  Sync time via ntp                               done 
   8/17  :  Downloading image (http)                        done 
         :  Importing public key for image validation       done 
   9/17  :  Validating image before starting extraction     warn 
         :  No detached signature file found!
  10/17  :  Extracting image (http)                         done 
  11/17  :  Setting up network config                       busy /root/.oldroot/nfs/install/network_config.functions.sh: line 797: ((: 23.04: syntax error: invalid arithmetic operator (error token is ".04")
/root/.oldroot/nfs/install/network_config.functions.sh: line 528: /installimage.QHrZ6/hdd/etc/network/interfaces: No such file or directory
                                                            done 
  12/17  :  Executing additional commands
         :    Setting hostname                              busy /root/.oldroot/nfs/install/functions.sh: line 3017: /installimage.QHrZ6/hdd/etc/hosts: No such file or directory
/root/.oldroot/nfs/install/functions.sh: line 3018: /installimage.QHrZ6/hdd/etc/hosts: No such file or directory
/root/.oldroot/nfs/install/functions.sh: line 3020: /installimage.QHrZ6/hdd/etc/hosts: No such file or directory
/root/.oldroot/nfs/install/functions.sh: line 3022: /installimage.QHrZ6/hdd/etc/hosts: No such file or directory
/root/.oldroot/nfs/install/functions.sh: line 3023: /installimage.QHrZ6/hdd/etc/hosts: No such file or directory
/root/.oldroot/nfs/install/functions.sh: line 3024: /installimage.QHrZ6/hdd/etc/hosts: No such file or directory
/root/.oldroot/nfs/install/functions.sh: line 3025: /installimage.QHrZ6/hdd/etc/hosts: No such file or directory
/root/.oldroot/nfs/install/functions.sh: line 3026: /installimage.QHrZ6/hdd/etc/hosts: No such file or directory
/root/.oldroot/nfs/install/functions.sh: line 3027: /installimage.QHrZ6/hdd/etc/hosts: No such file or directory
/root/.oldroot/nfs/install/functions.sh: line 3029: /installimage.QHrZ6/hdd/etc/hosts: No such file or directory
                                                            done 
         :    Generating new SSH keys                       done 
         :    Generating mdadm config                       busy sed: can't read /installimage.QHrZ6/hdd/etc/default/mdadm: No such file or directory
sed: can't read /installimage.QHrZ6/hdd/etc/default/mdadm: No such file or directory
sed: can't read /installimage.QHrZ6/hdd/etc/default/mdadm: No such file or directory
sed: can't read /installimage.QHrZ6/hdd/etc/default/mdadm: No such file or directory
                                                           failed
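
All of these look like the same class of bug: the Ubuntu release string "23.04" reaching bash integer contexts such as (( )) and [ -ge ]. A defensive sketch of extracting the integer major version first (an assumption about the cause, not a confirmed fix):

```shell
# Strip everything from the first dot, leaving only the integer major version.
major_version() {
  printf '%s\n' "${1%%.*}"
}

rel="23.04"
if [ "$(major_version "$rel")" -ge 20 ]; then
  echo "modern release"   # safe: "23" is a valid integer, "23.04" is not
fi
```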

ZFS

Where is the option to set up Debian or Proxmox on ZFS?

Confusing error message when mis-specifying image

If I mistype the image name in my autosetup config file, I get a generic error with no explanation:

[15:32:09] # starting installimage on [ xxxxx ]
[15:32:09] -------------------------------------
[15:32:10] :   Hardware data:
[15:32:10] :   CPU1: AMD Ryzen 9 3900 12-Core Processor (Cores 24)
[15:32:10] :   Memory:  128769 MB
[15:32:10] :   Disk /dev/nvme0n1: 1920 GB (=> 1788 GiB) doesn't contain a valid partition table
[15:32:10] :   Disk /dev/nvme1n1: 1920 GB (=> 1788 GiB) doesn't contain a valid partition table
[15:32:10] :   Total capacity 3576 GiB with 2 Disks
[15:32:10] -------------------------------------
[15:32:10] # make clean config
[15:32:10] # executing autosetup ...
[15:32:10] # checking if the script is disabled
[15:32:10] # validating config ...
[15:32:10] :   /boot : 1024
[15:32:10] :   lvm : all
[15:32:10] :   Size of smallest drive is 1920383410176
[15:32:10] Calculated size of array is: 1920383410176
[15:32:10] checking if hdd sizes are within tolerance. min: 1920383410176 / max: 2592517603737
[15:32:10] DRIVE1 in range
[15:32:10] :   1920383410176
[15:32:10] DRIVE2 in range
[15:32:10] :   1920383410176
[15:32:10] :   check_dos_partitions
[15:32:10] # executing installfile ...
[15:32:10] :   /boot : 1024
[15:32:10] :   lvm : all
[15:32:11] => FAILED
[15:32:11] :   report install.conf to rz-admin: 1555531
[15:32:11] :   report debug.txt to rz-admin: true
[15:32:11] cleaning up

From this, it looks like the lvm configuration is at fault. It should at least say "image not found".

Support ZFS

Any plans to support ZFS with InstallImage?

support for ClearLinux

I would like to test out the "cloudfirst" OS, ClearLinux. It does not support grub, and for this reason I'm waiting for my KVM, as I see that as the only way.

I guess their OpenStack image ("Cloud Guest") is not far from being usable?
https://clearlinux.org/downloads

I would love to see this OS as one of the supported ones...

Centos 8 fails with networkd missing

Hi

I am trying to install CentOS 8 using installimage. The installation fails with networkd, given that CentOS 8 does not use networkd but NetworkManager instead.

[18:55:25] # use config file config.txt for autosetup
[18:55:25] # OPT_CONFIGFILE: /root/config.txt
[18:55:25] # starting installimage on [ 138.201.140.245 ]
[18:55:25] -------------------------------------
[18:55:25] : Hardware data:
[18:55:25] : CPU1: Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz (Cores 16)
[18:55:25] : Memory: 64338 MB
[18:55:25] : Disk /dev/nvme0n1: 1024 GB (=> 953 GiB)
[18:55:25] : Disk /dev/nvme1n1: 1024 GB (=> 953 GiB)
[18:55:25] : Total capacity 1907 GiB with 2 Disks
[18:55:25] -------------------------------------
[18:55:25] # make clean config
[18:55:26] # executing autosetup ...
[18:55:26] # checking if the script is disabled
[18:55:26] # validating config ...
[18:55:26] : /boot : 512
[18:55:26] : lvm : all
[18:55:27] : Size of smallest drive is 1024209543168
[18:55:27] Calculated size of array is: 1024209543168
[18:55:27] : setting TAR to GNUtar
[18:55:27] : setting TAR to GNUtar
[18:55:27] checking if hdd sizes are within tolerance. min: 1024209543168 / max: 1382682883276
[18:55:27] DRIVE1 in range
[18:55:27] : 1024209543168
[18:55:27] DRIVE2 in range
[18:55:27] : 1024209543168
[18:55:27] : check_dos_partitions
[18:55:47] # executing installfile ...
[18:55:47] : /boot : 512
[18:55:47] : lvm : all
[18:55:47] : check_dos_partitions
[18:55:47] # load centos specific functions...
[18:55:47] # Deleting partitions
[18:55:48] # Deleting partitions on /dev/nvme0n1
[18:55:49] # Deleting partitions on /dev/nvme1n1
[18:55:51] # Test partition size
[18:55:51] : check_dos_partitions
[18:55:51] # Creating partitions and /etc/fstab
[18:55:51] # Creating partitions on /dev/nvme0n1
[18:55:51] : deactivate all dm-devices with dmraid and dmsetup
[18:55:51] : no block devices found
[18:55:51] : create partition: parted -s /dev/nvme0n1 mkpart primary ext3 2048s 1050623s
[18:55:51] : create partition: parted -s /dev/nvme0n1 mkpart primary ext3 1050624s 2000407215s
[18:55:52] : reread partition table after 5 seconds
[18:55:57] : deactivate all dm-devices with dmraid and dmsetup
[18:55:57] : no block devices found
[18:55:57] # Creating partitions on /dev/nvme1n1
[18:55:57] : deactivate all dm-devices with dmraid and dmsetup
[18:55:57] : no block devices found
[18:55:57] : create partition: parted -s /dev/nvme1n1 mkpart primary ext3 2048s 1050623s
[18:55:58] : create partition: parted -s /dev/nvme1n1 mkpart primary ext3 1050624s 2000407215s
[18:55:58] : reread partition table after 5 seconds
[18:56:03] : deactivate all dm-devices with dmraid and dmsetup
[18:56:03] : no block devices found
[18:56:03] # Creating software RAID level 1
[18:56:03] # create software raid array(s)
[18:56:03] : Line is: "proc /proc proc defaults 0 0"
[18:56:03] : Line is: "devpts /dev/pts devpts gid=5,mode=620 0 0"
[18:56:03] : Line is: "tmpfs /dev/shm tmpfs defaults 0 0"
[18:56:03] : Line is: "sysfs /sys sysfs defaults 0 0"
[18:56:03] : Line is: "/dev/nvme1n1p1 /boot ext4 defaults 0 0"
[18:56:03] Array RAID Level is: '1' - -
[18:56:03] Array metadata is: '--metadata=1.2'
[18:56:03] : Line is: "# /dev/nvme1n12 belongs to LVM volume group 'vg0'"
[18:56:03] Array RAID Level is: '1' - -
[18:56:03] Array metadata is: '--metadata=1.2'
[18:56:03] : mdadm: /dev/nvme0n1p2 appears to be part of a raid array:
[18:56:03] : level=raid1 devices=2 ctime=Fri Oct 11 17:16:58 2019
[18:56:03] : mdadm: /dev/nvme1n1p2 appears to be part of a raid array:
[18:56:03] : level=raid1 devices=2 ctime=Fri Oct 11 17:16:58 2019
[18:56:04] # Creating LVM volumes
[18:56:04] # Removing all Logical Volumes and Volume Groups
[18:56:04] # Removing all Physical Volumes
[18:56:04] # Creating PV /dev/md/1
[18:56:04] : /dev/md/1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
[18:56:04] : /dev/md/1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
[18:56:04] : Physical volume "/dev/md/1" successfully created.
[18:56:04] # Creating VG vg0 with PV /dev/md/1
[18:56:04] : Volume group "vg0" successfully created
[18:56:04] # Creating LV vg0/root (51200 MiB)
[18:56:04] : Wiping xfs signature on /dev/vg0/root.
[18:56:04] : Logical volume "root" created.
[18:56:04] # Creating LV vg0/swap (8192 MiB)
[18:56:05] : Logical volume "swap" created.
[18:56:05] # Creating LV vg0/tmp (30720 MiB)
[18:56:05] : Logical volume "tmp" created.
[18:56:09] # formatting /dev/md/0 with ext4
[18:56:13] # formatting /dev/vg0/root with xfs
[18:56:17] # formatting /dev/vg0/swap with swap
[18:56:18] : Setting up swapspace version 1, size = 8 GiB (8589930496 bytes)
[18:56:18] : no label, UUID=13dacd13-a3a6-4248-a2a6-d32bc08a47e2
[18:56:22] # formatting /dev/vg0/tmp with xfs
[18:56:22] # Mounting partitions
[18:56:22] # Sync time via ntp
[18:56:22] : Using standard Hetzner Online GmbH pubkey: /root/.oldroot/nfs/install/gpg/public-key.asc
[18:56:22] : Using standard Hetzner Online GmbH pubkey: /root/.oldroot/nfs/install/gpg/public-key-2018.asc
[18:56:22] : gpg: keybox '/root/.gnupg/pubring.kbx' created
[18:56:22] : gpg: /root/.gnupg/trustdb.gpg: trustdb created
[18:56:22] : gpg: key 9E03E2BEB8F0F463: public key "Hetzner Online AG, RZ-Softwareentwicklung (Signing Key 2013) [email protected]" imported
[18:56:22] : gpg: Total number processed: 1
[18:56:22] : gpg: imported: 1
[18:56:22] : gpg: key 7030DBE4387333B3: public key "Hetzner Online GmbH image signing key [email protected]" imported
[18:56:22] : gpg: Total number processed: 1
[18:56:22] : gpg: imported: 1
[18:56:22] # Validating image before starting extraction
[18:56:22] # Extracting image (local)
[18:56:30] # Setting up network config
[18:56:30] # setup network config
[18:56:30] # setup /etc/systemd/network files
[18:56:30] # setting up /etc/systemd/network/10-mainif.network
[18:56:30] # Systype: B360 HD3P-LM
[18:56:30] # Manufacturer: Gigabyte Technology Co., Ltd.
[18:56:30] # Systype: B360 HD3P-LM
[18:56:30] # Manufacturer: Gigabyte Technology Co., Ltd.
[18:56:30] # chroot: systemctl enable systemd-networkd
[18:56:30] : Failed to enable unit, unit systemd-networkd.service does not exist.
[18:56:30] => FAILED
[18:56:30] : report install.conf to rz-admin: 1267528
[18:56:31] : report debug.txt to rz-admin: ok
[18:56:31] cleaning up
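The failure at "systemctl enable systemd-networkd" could be avoided with a pre-check against the extracted image before the chroot call. The following is only a sketch of the idea; `check_networkd` and the `ROOT` mount-point argument are hypothetical names, not part of installimage:

```shell
#!/bin/sh
# Sketch: check whether the extracted image ships the systemd-networkd unit
# before running "systemctl enable systemd-networkd" in the chroot. The ROOT
# argument is the hypothetical mount point of the extracted image.
check_networkd() {
  ROOT="$1"
  if [ -e "$ROOT/usr/lib/systemd/system/systemd-networkd.service" ]; then
    echo "systemd-networkd available"
  else
    echo "systemd-networkd missing, enable NetworkManager instead"
  fi
}

# Demo against a path with no unit file, as on a CentOS 8 image:
check_networkd /nonexistent
```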

On RX220 installimage does not create /boot/efi esp partition as raid1

When installing Ubuntu 22.04 on an RX220 we specify the partitions as follows:

PART /boot/efi esp 256M
PART /boot ext3 1024M
PART lvm vg0 all

LV vg0 tmp /tmp reiserfs 20G
LV vg0 log /var/log ext4 30G
LV vg0 swap swap swap 10G
LV vg0 root / ext4 3200G
LV vg0 home /home ext4 300G

However, only /dev/md0 and /dev/md1 are created; their sizes match the /boot and LVM partitions. When installing on an AX101 with the same settings there were three /dev/mdX devices: the two mentioned above and another one matching the /boot/efi partition.

On the RX220 lsblk reports:

NAME           MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
loop0            7:0    0  3.3G  1 loop
nvme1n1        259:0    0  3.5T  0 disk
├─nvme1n1p1    259:5    0    1G  0 part
│ └─md0          9:0    0 1022M  0 raid1
├─nvme1n1p2    259:6    0  256M  0 part
└─nvme1n1p3    259:8    0  3.5T  0 part
  └─md1          9:1    0  3.5T  0 raid1
    ├─vg0-tmp  253:0    0   20G  0 lvm
    ├─vg0-log  253:1    0   30G  0 lvm
    ├─vg0-swap 253:2    0   10G  0 lvm
    ├─vg0-root 253:3    0  3.1T  0 lvm
    └─vg0-home 253:4    0  300G  0 lvm
nvme0n1        259:1    0  3.5T  0 disk
├─nvme0n1p1    259:2    0    1G  0 part
│ └─md0          9:0    0 1022M  0 raid1
├─nvme0n1p2    259:3    0  256M  0 part
└─nvme0n1p3    259:4    0  3.5T  0 part
  └─md1          9:1    0  3.5T  0 raid1
    ├─vg0-tmp  253:0    0   20G  0 lvm
    ├─vg0-log  253:1    0   30G  0 lvm
    ├─vg0-swap 253:2    0   10G  0 lvm
    ├─vg0-root 253:3    0  3.1T  0 lvm
    └─vg0-home 253:4    0  300G  0 lvm

Encrypts root despite config not specifying encryption

I tried installing Ubuntu while encrypting only a dedicated /data volume, like so:

DRIVE1 /dev/nvme0n1
DRIVE2 /dev/nvme1n1
SWRAID 1
SWRAIDLEVEL 1
HOSTNAME Ubuntu-2004-focal-64-minimal
CRYPTPASSWORD <SECRET_PASSWORD>
PART swap swap 32G
PART /boot ext3 1024M
PART / ext4 50G
PART /data ext4 all crypt
IMAGE /root/.oldroot/nfs/install/../images/Ubuntu-2004-focal-64-minimal.tar.gz

But when I mount the volume it looks like the installimage script encrypted both / and /data:

root@rescue ~ # mount /dev/dm-0 /mnt
root@rescue ~ # cat /mnt/etc/fstab 
proc /proc proc defaults 0 0
# /dev/md/0
UUID=388b8803-559a-4997-b980-5d95b8f9a0a6 none swap sw 0 0
# /dev/md/1
UUID=5f925bfa-1822-4a95-988f-52c0596d3a58 /boot ext3 defaults 0 0
/dev/mapper/luks-eb7cde07-90e6-4487-bca2-c6002e0cb9fc / ext4 defaults 0 0 # crypted
/dev/mapper/luks-b4f37da6-54bf-4a03-bb45-8a292af702b0 /data ext4 defaults 0 0 # crypted

Which is obviously wrong.
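Which mount points installimage actually encrypted can be read off the "# crypted" markers it appends to /etc/fstab. The sketch below scans an embedded sample mirroring the fstab quoted above; on a real system, pipe in the mounted volume's /etc/fstab (e.g. from /mnt/etc/fstab) instead:

```shell
#!/bin/sh
# Sketch: print the mount point of every fstab entry installimage marked as
# encrypted. The heredoc is a sample matching the output quoted above.
cat <<'EOF' | awk '/# crypted$/ {print $2}'
proc /proc proc defaults 0 0
UUID=5f925bfa-1822-4a95-988f-52c0596d3a58 /boot ext3 defaults 0 0
/dev/mapper/luks-eb7cde07-90e6-4487-bca2-c6002e0cb9fc / ext4 defaults 0 0 # crypted
/dev/mapper/luks-b4f37da6-54bf-4a03-bb45-8a292af702b0 /data ext4 defaults 0 0 # crypted
EOF
```

With the config above, only /data should appear in this list.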
