
packer-build's Introduction

packer-build

What does this do?

These Packer templates and associated files may be used to build fresh Debian and Ubuntu virtual machine images for Vagrant, VirtualBox and QEMU.

The resulting image files may be used as bootable systems on real machines, and the provided preseed files may also be used to install identical systems on bare metal.

What dependencies does this have?

These templates are tested semi-regularly on recent Linux (Debian and/or Ubuntu) hosts using recent versions of Packer and Vagrant. All testing is currently done on systems that have amd64/x86_64-family processors.

The VirtualBox and QEMU versions used for Linux testing are normally the "stock" ones provided by the official distribution repositories.

Packer officially supports QEMU as a builder but, for some reason, Vagrant does not support it as a provider. The third-party plugin named "vagrant-libvirt" provides the missing QEMU support for Vagrant. We are unable at this time to verify this due to the following errors encountered while trying to run "vagrant up":

Error while connecting to libvirt: Error making a connection to libvirt URI qemu:///system?no_verify=1&keyfile=/home/whoa/.ssh/id_rsa:
Call to virConnectOpen failed: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory

It may be possible to correct this error by installing the libvirt-daemon-system package on Debian.
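A quick way to check for this condition (a sketch; the socket path comes from the error above, and the package suggestion assumes a Debian-family host):

```shell
# Check whether the libvirtd control socket exists; the "Failed to
# connect socket" error above usually means the daemon is not running.
if [ -S /var/run/libvirt/libvirt-sock ]; then
    status="libvirtd socket present"
else
    status="libvirtd socket missing; try: sudo apt install libvirt-daemon-system"
fi
echo "$status"
```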

TODO Items

  • Fix the boot commands in the Ubuntu cloud-init UEFI templates (non-UEFI ones work fine)
  • Find out if partman-crypto will allow passphrase-crypted volumes
  • Continue investigating best way to handle non-interactive encrypted images (dropbear, likely)

Using Packer Templates

XXX FIXME TODO THIS SECTION NEEDS TO BE REWRITTEN ONCE THE HCL TEMPLATES ARE WORKING!!!

Using Vagrant Box Files

A Vagrant box file is actually a regular gzipped tar archive containing...

  • box.ovf - Open Virtualization Format XML descriptor file
  • nameofmachine-disk1.vmdk - a virtual hard drive image file
  • Vagrantfile - derived from 'Vagrantfile.template'
  • metadata.json - containing just '{ "provider": "virtualbox" }'

An OVA file is actually a regular tar archive containing identical copies of the first 2 files that you would normally see in a Vagrant box file (but the OVF file may be named nameofmachine.ovf and it must be the first file or VirtualBox will get confused).
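Since a box file is just a gzipped tar archive, its contents can be listed without unpacking it. A sketch (the archive here is a stand-in built on the spot, not a real box):

```shell
# Build a minimal stand-in box and list its members; with a real box
# you would only run the final "tar tzf" line.
printf '{ "provider": "virtualbox" }' > metadata.json
tar czf example.box metadata.json
tar tzf example.box
```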

To use a locally-built Vagrant box file without a dedicated Vagrantfile:

vagrant box add myname/bullseye \
    build/2038-01-19-03-14/base-bullseye-1.0.0.virtualbox.box
vagrant init myname/bullseye
vagrant up
vagrant ssh
...
vagrant destroy

In order to version things and self-host the box files, you will need to create a JSON file containing the following:

{
  "name": "base-bullseye",
  "description": "Base box for x86_64 Debian Bullseye 11.x",
  "versions": [
    {
      "version": "1.0.0",
      "providers": [
        {
          "name": "virtualbox",
          "url": "http://myserver/vm/base-bullseye/base-bullseye-1.0.0-virtualbox.box",
          "checksum_type": "sha256",
          "checksum": "deadbeef"
        }
      ]
    }
  ]
}

SHA-256 hashes are currently the strongest that Vagrant supports.
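To fill in the "checksum" field, hash the box file you intend to serve. A sketch (the placeholder fallback is only so the command runs even before a real box exists):

```shell
# sha256sum prints "<hash>  <file>"; keep only the hash for the JSON.
box=base-bullseye-1.0.0-virtualbox.box
[ -f "$box" ] || printf 'placeholder' > "$box"   # stand-in if no box yet
sha256sum "$box" | awk '{print $1}'
```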

Then, simply make sure you point your Vagrantfile at this version payload:

Vagrant.configure('2') do |config|
  config.vm.box = 'base-bullseye'
  config.vm.box_url = 'http://myserver/vm/base-bullseye/base-bullseye.json'

  config.vm.synced_folder '.', '/vagrant', disabled: true
end

NOTE: You must disable the synced folder as shown above or you will encounter the following error:

Vagrant was unable to mount VirtualBox shared folders. This is usually
because the filesystem "vboxsf" is not available. This filesystem is
made available via the VirtualBox Guest Additions and kernel module.
Please verify that these guest additions are properly installed in the
guest. This is not a bug in Vagrant and is usually caused by a faulty
Vagrant box. For context, the command attempted was:

mount -t vboxsf -o uid=1000,gid=1000 vagrant /vagrant

The error output from the command was:

mount: unknown filesystem type 'vboxsf'

Making Bootable Drives

For best results, use the Packer QEMU builder's "kvm" accelerator when creating bootable images for real hardware. This allows the use of the "raw" block device format, which is ideal for writing directly to USB and SATA drives. Alternatively, you may use "qemu-img convert" or "vbox-img convert" to convert an existing image in another format to raw mode:

zcat build/2038-01-19-03-14/base-bullseye.raw.gz | dd of=/dev/sdz bs=4M
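The conversion step mentioned above might look like this (a sketch; the qcow2 source image is created on the spot as a stand-in, and qemu-img ships in Debian's qemu-utils package):

```shell
# Convert qcow2 -> raw so the image can be dd'd to a physical drive.
if command -v qemu-img >/dev/null 2>&1; then
    qemu-img create -f qcow2 base-bullseye.qcow2 16M    # stand-in source
    qemu-img convert -f qcow2 -O raw base-bullseye.qcow2 base-bullseye.raw
    ls -l base-bullseye.raw
else
    echo "qemu-img not found; on Debian/Ubuntu: apt install qemu-utils"
fi
```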

... Or, if you just want to "boot" it:

qemu-system-x86_64 -m 768M -machine type=pc,accel=kvm \
    build/2038-01-19-03-14/base-bullseye.raw

Overriding Local VM Cache Location

vboxmanage setproperty machinefolder ${HOME}/vm

Disabling Hashicorp Checkpoint Version Checks

Both Packer and Vagrant contact Hashicorp with some anonymous information each time they run, for the purposes of announcing new versions and other alerts. If you would prefer to disable this feature, simply set the following environment variables:

CHECKPOINT_DISABLE=1
VAGRANT_CHECKPOINT_DISABLE=1
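For example, exporting them in your shell profile so every packer and vagrant invocation sees them:

```shell
# Disable the phone-home version checks for both tools.
export CHECKPOINT_DISABLE=1
export VAGRANT_CHECKPOINT_DISABLE=1
```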

UEFI Booting on VirtualBox

This step isn't necessary when running on real hardware; however, VirtualBox (4.3.28) seems to have a problem if you skip it.

To examine the actual contents of the file after editing it:

hexdump /boot/efi/startup.nsh

Using the EFI Shell Editor

To enter the UEFI shell text editor from the UEFI prompt:

edit startup.nsh

Type in the stuff to add to the file (the path to the UEFI blob):

FS0:\EFI\debian\grubx64.efi

To exit the UEFI shell text editor:

^S
^Q

Hex Result:

0000000 feff 0046 0053 0030 003a 005c 0045 0046
0000010 0049 005c 0064 0065 0062 0069 0061 006e
0000020 005c 0067 0072 0075 0062 0078 0036 0034
0000030 002e 0065 0066 0069
0000038

Using Any Old 'nix' Text Editor

To populate the file in a similar manner to the UEFI Shell method above:

echo 'FS0:\EFI\debian\grubx64.efi' > /boot/efi/startup.nsh

Hex Result:

0000000 5346 3a30 455c 4946 645c 6265 6169 5c6e
0000010 7267 6275 3678 2e34 6665 0a69
000001c
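Note the two hex results differ: the EFI shell editor writes UTF-16 with a leading byte-order mark, while echo writes plain ASCII with a trailing newline (both forms appear to work). If you want the UTF-16 form from a normal shell, one option is iconv (an assumption on my part; GNU iconv writes the BOM for the "UTF-16" target):

```shell
# Write startup.nsh as UTF-16 (BOM + 2 bytes per character), mirroring
# what the EFI shell editor produces; adjust the output path as needed.
if command -v iconv >/dev/null 2>&1; then
    printf '%s' 'FS0:\EFI\debian\grubx64.efi' | iconv -t UTF-16 > startup.nsh
    hexdump startup.nsh
fi
```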

Caching Debian/Ubuntu Packages

If you wish to speed up fetching lots of Debian and/or Ubuntu packages, you should probably install "apt-cacher-ng" on a machine and then add the following to each machine that should use the new cache:

echo "Acquire::http::Proxy 'http://localhost:3142';" >>\
    /etc/apt/apt.conf.d/99apt-cacher-ng

You must re-run "apt-get update" each time you add or remove a proxy. If you populate the "d-i mirror/http/proxy" value in your preseed file, all of this will already have been done for you.
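The corresponding preseed line would look something like this (the key name is the standard Debian-installer one; hostname and port are examples):

```
d-i mirror/http/proxy string http://myserver:3142/
```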


Ubuntu Live Server

To re-engage cloud-init after it has been used:

sudo rm -f /etc/machine-id
sudo cloud-init clean -s -l

Using a Headless Server

If you are using these scripts on a "headless" server (i.e., no GUI), you must set the "headless" variable to "true" or you will encounter the following error:

...
==> virtualbox: Starting the virtual machine...
==> virtualbox: Error starting VM: VBoxManage error: VBoxManage: error: The virtual machine 'base-bullseye' has terminated unexpectedly during startup because of signal 6
==> virtualbox: VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component MachineWrap, interface IMachine
...

Official ISO Files

Debian

Ubuntu

Distro Release Names

Debian Releases

  • ? (16.x); released on 2031-??-??, supported until 2036-06?-01
  • ? (15.x); released on 2029-??-??, supported until 2034-06?-01
  • Forky (14.x); released on 2027-??-??, supported until 2032?-06?-01
  • Trixie (13.x); released on 2025-??-??, supported until 2030?-06?-01
  • Bookworm (12.x); released on 2023-06-10, supported until 2028-06-01
  • Bullseye (11.x); released on 2021-08-14, supported until 2026-06-01
  • Buster (10.x); released on 2019-07-06, supported until 2024-06-30

Debian releases seem to occur every 2 years around mid-year and usually receive security support for 3 years and long-term support for 5 years.

Ubuntu Releases

  • C? C? (31.10.x); released on 2031-10-??, supported until 2032-07?-01
  • B? B? (31.04.x); released on 2031-04-??, supported until 2032-01?-01
  • A? A? (30.10.x); released on 2030-10-??, supported until 2031-07?-01
  • Z? Z? (30.04.x LTS); released on 2030-04-??, supported until 2035-04?-01 (ESM 2040-04?-01)
  • Y? Y? (29.10.x); released on 2029-10-??, supported until 2030-07?-01
  • X? X? (29.04.x); released on 2029-04-??, supported until 2030-01?-01
  • W? W? (28.10.x); released on 2028-10-??, supported until 2029-07?-01
  • V? V? (28.04.x LTS); released on 2028-04-??, supported until 2033-04?-01 (ESM 2037-04?-01)
  • U? U? (27.10.x); released on 2027-10-??, supported until 2028-07?-01
  • T? T? (27.04.x); released on 2027-04-??, supported until 2028-01?-01
  • S? S? (26.10.x); released on 2026-10-??, supported until 2027-07?-01
  • R? R? (26.04.x LTS); released on 2026-04-??, supported until 2031-04?-01 (ESM 2035-04?-01)
  • Q? Q? (25.10.x); released on 2025-10-??, supported until 2026-07?-01
  • P? P? (25.04.x); released on 2025-04-??, supported until 2026-01?-01
  • O? O? (24.10.x); released on 2024-10-??, supported until 2025-07?-01
  • Noble Numbat (24.04.x LTS); released on 2024-04-25, supported until 2029-05-31 (ESM 2034-04-25)
  • Mantic Minotaur (23.10.x); released on 2023-10-12, supported until 2024-07-01
  • Jammy Jellyfish (22.04.x LTS); released on 2022-04-21, supported until 2027-04-21 (ESM 2032-04-21)
  • Focal Fossa (20.04.x LTS); released on 2020-04-23, supported until 2025-04-23 (ESM 2030-04-23)

Ubuntu releases traditionally occur twice a year, in April and October. LTS releases typically come out in April and receive standard support for 5 years and Extended Security Maintenance for 10 years. Non-LTS releases typically receive standard support for 9 to 11 months with no extended security maintenance.

Extended Security Maintenance (ESM) support for LTS releases is available to individuals on "up to 3 machines" or up to 50 machines for officially-recognized Ubuntu community members.

  • Bionic Beaver (18.04.x LTS); released on 2018-04-26, supported until 2023-05-31 (ESM 2028-04-26)
  • Xenial Xerus (16.04.x LTS); released on 2016-04-21, supported until 2021-04-30 (ESM 2026-04-23)
  • Trusty Tahr (14.04.x LTS); released on 2014-04-17, supported until 2019-04-25 (ESM 2024-04-25)

packer-build's People

Contributors

acdn-ndostert, argilo, dhbaird, mikhirev, tylert, wheelerlaw


packer-build's Issues

Error getting SSH address 500 QEMU guest agent is not running

Packer version:

$ packer version
Packer v1.6.0

Builder: proxmox

Proxmox version:

pveversion --verbose
proxmox-ve: 6.2-1 (running kernel: 5.4.44-1-pve)
pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
pve-kernel-5.4: 6.2-3
pve-kernel-helper: 6.2-3
pve-kernel-5.4.44-1-pve: 5.4.44-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
...

I am trying to run a Packer build using the ubuntu-20.04-live-server-amd64.iso, but when the boot_command runs the build just hangs waiting to get an SSH connection to the launched instance. When the ssh_timeout is reached the build fails and the VM is destroyed.

The Packer log reports that the SSH address cannot be obtained because the QEMU guest agent isn't running. It won't be running on a fresh ISO, and I also set the communicator to ssh and disabled qemu_agent, so I'm not sure why this is happening.

...
  "builders": [
    {
      ...
      "communicator": "ssh",
...
      "qemu_agent": false,
      "ssh_handshake_attempts": "50",
      "ssh_username": "{{user `ssh_username`}}",
      "ssh_password": "{{user `ssh_password`}}",
      "ssh_pty": true,
      "ssh_timeout": "{{user `ssh_timeout`}}",

(screenshots of the packer log and of the build timing out were attached here)

I have copied the complete Packer build file and the user-data below, and there is some further detail on this thread which speaks to similar issues (I think I have explored most of the suggestions there): hashicorp/packer#9115

Any guidance on how to ensure that the Packer build uses ssh rather than the qemu agent would be much appreciated (or anything else you think might be the culprit).

Kind Regards

Fraser.

host.json:

{
  "builders": [
    {
      "boot_command": [
        "<enter><wait20><enter><f6><esc><wait>",
        "autoinstall ds=nocloud-net;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/",
        "<enter>"
      ],
      "boot_wait": "{{user `boot_wait`}}",
      "communicator": "ssh",
      "disks": [
        {
          "disk_size": "{{user `home_volume_size`}}",
          "storage_pool": "local-lvm",
          "storage_pool_type": "lvm-thin",
          "type": "scsi",
          "format": "raw"
        }
      ],
      "http_directory": "{{user `http_directory`}}",
      "insecure_skip_tls_verify": true,
      "iso_checksum": "{{user `iso_checksum_type`}}:{{user `iso_checksum`}}",
      "iso_file": "{{user `iso_file`}}",
      "memory": 2048,
      "name": "ubuntu-20-04-base",
      "network_adapters": [
        {
          "bridge": "vmbr0",
          "model": "virtio"
        }
      ],
      "node": "{{user `proxmox_target_node`}}",
      "password": "{{user `proxmox_server_pwd`}}",
      "proxmox_url": "https://{{user `proxmox_server_hostname`}}:{{user `proxmox_server_port`}}/api2/json",
      "qemu_agent": false,
      "ssh_handshake_attempts": "50",
      "ssh_username": "{{user `ssh_username`}}",
      "ssh_password": "{{user `ssh_password`}}",
      "ssh_pty": true,
      "ssh_timeout": "{{user `ssh_timeout`}}",
      "type": "proxmox",
      "unmount_iso": true,
      "username": "{{user `proxmox_server_user`}}"
    }
  ],
  "provisioners": [
    {
      "execute_command": "{{ .Vars }} sudo -E -S sh '{{ .Path }}'",
      "inline": [
        "ls /"
      ],
      "type": "shell"
    }
  ],
  "variables": {
    "boot_wait": "2s",
    "http_directory": "http",
    "iso_checksum": "caf3fd69c77c439f162e2ba6040e9c320c4ff0d69aad1340a514319a9264df9f",
    "iso_checksum_type": "sha256",
    "iso_file": "local:iso/ubuntu-20.04-live-server-amd64.iso",
    "proxmox_server_hostname": "proxmox-002",
    "proxmox_server_port": "8006",
    "proxmox_server_pwd": "xxxxxxxxx",
    "proxmox_server_user": "xxxxxxxx",
    "proxmox_target_node": "home",
    "ssh_handshake_attempts": "20",
    "ssh_password": "ubuntu",
    "ssh_username": "ubuntu",
    "ssh_timeout": "10m"
  }
}

user-data:

#cloud-config
autoinstall:
  identity:
    hostname: ubuntu-20-04-base
    password: '$6$wdAcoXrU039hKYPd$508Qvbe7ObUnxoj15DRCkzC3qO7edjH0VV7BPNRDYK4QR8ofJaEEF2heacn0QgD.f8pO8SNp83XNdWG6tocBM1'
    username: ubuntu
  keyboard:
    layout: en
    #toggle: null
    variant: 'gb'
  late-commands:
    - sed -i 's/^#*\(send dhcp-client-identifier\).*$/\1 = hardware;/' /target/etc/dhcp/dhclient.conf
    - 'sed -i "s/dhcp4: true/&\n      dhcp-identifier: mac/" /target/etc/netplan/00-installer-config.yaml'
  locale: en_GB
  network:
    network:
      version: 2
      ethernets:
        ens33:
          dhcp4: true
          dhcp-identifier: mac
  ssh:
    allow-pw: true
    authorized-keys:
    - "ssh-rsa AAAAB3NzaC1yc2..."
    install-server: true
  version: 1

Post-processor build failure on bionic

Undoubtedly I'm doing something stupid, but my attempt to generate a Bionic-based VirtualBox image is failing with the following message:
==> vbox (vagrant): Creating Vagrant box for 'virtualbox' provider
vbox (vagrant): Unpacking OVA: build/2018-05-26-18-12-14/base-bionic.ova
vbox (vagrant): Renaming the OVF to box.ovf...
vbox (vagrant): Using custom Vagrantfile: source/ubuntu/bionic/base.vagrant
vbox (vagrant): Compressing: Vagrantfile
==> vbox: Running post-processor: shell-local
==> vbox (shell-local): Post processing with local shell script: /tmp/packer-shell679910228
Build 'vbox' errored: 1 error(s) occurred:

  • Post-processor failed: archive/tar: cannot encode header: Format specifies GNU; and GNU cannot encode Uname="[email protected]"

How is packer-build determining the tar file name? Is there a variable I can set?

Thx,

-steve

Packer fix error (when running with packer 1.4.2)

When I run ./script/generate_templates.sh, packer fix (packer 1.4.2) generates this error:

Error! Fixed template fails to parse: Failed to cast root level comment value to string

This is usually caused by an error in the input template.
Please fix the error and try again.

Bionic fails to build with Packer

I appreciate that this is possibly a stupid question, but have you successfully built a Bionic image with Packer? I've been going around in circles trying to get mine to work (including pinching ideas from your configs), with no joy. I did come across a comment on this pull request which worried me:

If what you want is to automate server installs of Ubuntu for non-test purposes, that should be done with MAAS, not with subiquity.

From that, it reads like they've ditched the expert mode installer and they're trying to push MAAS for actual installs instead - if that's the case then it's going to be a massive problem for those of us who want to use preseeds!

Pretty useful

I have not used packer yet, and am on OSX, not Linux.
Would you be able to comment on the use case I have below, so I can get a high-level view of whether this code could help me?

I need to create bootable USB thumb drives of 3 types:

  • Windows 10, with some programs already installed
  • Raspberry Pi, with the same software on it as the Windows 10 box
  • Ubuntu 16.04 server, with PostgreSQL and some golang executables

It would be super awesome if I could also make vagrant/qemu boxes of the same. Then I could test in a VM, then make the bootable USB images as needed, or maybe even as part of CI.

Thanks in advance

Crash with Bionic Beaver and Packer 1.2.4

Bionic Beaver VirtualBox image with Packer 1.2.4 crashes; see the attached link for the crash log. The basic issue seems to be with the download of the ISO.

Reverting to Packer 1.2.3 succeeds but generates the following messages in the same area as the 1.2.4 crash:

vbox: Downloading or copying: http://myserver:8080/ubuntu/ubuntu-18.04-server-amd64.iso
vbox: Error downloading: Get http://myserver:8080/ubuntu/ubuntu-18.04-server-amd64.iso: dial tcp: lookup myserver on 127.0.0.53:53: server misbehaving
vbox: Downloading or copying: http://cdimage.ubuntu.com/releases/bionic/release/ubuntu-18.04-server-amd64.iso

'packer validate' throws errors

packer version 1.8.3 is throwing validation errors.

For example on Ubuntu 22.04 Jammy base.pkr.hcl:

❯ packer validate base.pkr.hcl
Error: Unsupported block type

on base.pkr.hcl line 4, in packer:
4:   required_providers {

Blocks of type "required_providers" are not expected here.


The format has changed (for virtualbox, for example) from

   required_providers {
      virtualbox = {
         source  = "github.com/hashicorp/packer-plugin-virtualbox"
         version = ">= 1.0.0, < 2.0.0"
      }
   }

to

  required_plugins {
     virtualbox = {
        source  = "github.com/hashicorp/virtualbox"
        version = ">= 1.0.0, < 2.0.0"
     }  
  }

Templating of external files is not yet complete in conversion from YAML to HCL

@tylert, first of all, thanks for providing this great collection of build scripts!
I appreciate your work and am trying to "fork" my own builds from this repo.

Unfortunately things seem not to work and I wonder if it's my fault or not.

I don't use the tool vagrant itself, so I run packer "manually" within packer-build/source/debian/11_bullseye.

I edited some variables to fit my local environment:

 git diff base.pkr.hcl
diff --git a/source/debian/11_bullseye/base.pkr.hcl b/source/debian/11_bullseye/base.pkr.hcl
index e03063c9..c2df116b 100644
--- a/source/debian/11_bullseye/base.pkr.hcl
+++ b/source/debian/11_bullseye/base.pkr.hcl
@@ -15,7 +15,7 @@ packer {
 
 variable "apt_cache_url" {
   type    = string
-  default = "http://myserver:3142"
+  default = "http://ivy.loc.oops.co.at:3142"
 }
 
 variable "boot_wait" {
@@ -111,7 +111,8 @@ variable "iso_path_external" {
 
 variable "iso_path_internal" {
   type    = string
-  default = "http://myserver:8080/debian"
+  #default = "http://myserver:8080/debian"
+  default = "/mnt/platz/isos/debian"
 }
 
 variable "keep_registered" {
@@ -156,7 +157,8 @@ variable "packer_cache_dir" {
 
 variable "preseed_file" {
   type    = string
-  default = "template/debian/11_bullseye/base.preseed"
+  #default = "template/debian/11_bullseye/base.preseed"
+  default = "base.preseed"
 }
 
 variable "qemu_binary" {
@@ -196,7 +198,7 @@ variable "ssh_file_transfer_method" {
 
 variable "ssh_fullname" {
   type    = string
-  default = "Ghost Writer"
+  default = "vagrant"
 }
 
 variable "ssh_handshake_attempts" {
@@ -211,7 +213,7 @@ variable "ssh_keep_alive_interval" {
 
 variable "ssh_password" {
   type    = string
-  default = "1ma63b0rk3d"
+  default = "vagrant"
 }
 
 variable "ssh_port" {
@@ -231,7 +233,7 @@ variable "ssh_timeout" {
 
 variable "ssh_username" {
   type    = string
-  default = "ghost"
+  default = "vagrant"
 }

I only run the vbox-build now:

PACKER_LOG=1 PACKER_LOG_PATH=packer.log packer build -only=virtualbox-iso.vbox base.pkr.hcl

The VM gets created and started, but it always fails somehow when setting the keyboard.
I fiddled with settings in the preseed file already, but set them back again.

Do you have positive results with this build currently? Might there be any changes in packer or debian maybe?

Missing qemu.sh and vbox.sh

I may be missing something pretty obvious here, but I cannot find qemu.sh and vbox.sh under .scripts/. Any idea?

Build failed running post-processor: shell-local

Hi guys,

First of all, very nicely done, you saved me lots of time.

I tried building boxes using QEMU (14.04, 16.04 and 18.04). I had been trying to build my own boxes for 4 years, which was quite painful, and these builds seem to work fine without any errors.

However, building a VirtualBox box on macOS somehow failed at the end of the process:

==> vbox: Running post-processor: shell-local
==> vbox (shell-local): Running local shell script: /var/folders/25/4ygjmyhj6ld066wbxqqd2_f80000gp/T/packer-shell007772938
    vbox (shell-local): sed: 1: "build/2018-10-31-10-29- ...": undefined label 'uild/2018-10-31-10-29-24/trusty64S-virtualbox.yaml'
Build 'vbox' errored: 1 error(s) occurred:

* Post-processor failed: Erroneous exit code 1 while executing script: /var/folders/25/4ygjmyhj6ld066wbxqqd2_f80000gp/T/packer-shell007772938

Please see output above for more information.

==> Some builds didn't complete successfully and had errors:
--> vbox: 1 error(s) occurred:

* Post-processor failed: Erroneous exit code 1 while executing script: /var/folders/25/4ygjmyhj6ld066wbxqqd2_f80000gp/T/packer-shell007772938

Please see output above for more information.

==> Builds finished but no artifacts were created.

MacOS: 10.14
Packer version: 1.3.2
Virtual box: 5.2.20

Please do let me know if you need more. I haven't tried the box yet, but it seems to have been created (I can see the .box file in the build directory).

Errors in building packer

roar-alpha@localhost:~/packer-build/source/ubuntu/22.04_jammy$ packer build base.pkr.hcl 
Error: Error in function call

  on base.pkr.hcl line 305:
  (source code not available)

with var as object with 52 attributes,
     var.user_data_location as "user-data".

Call to function "templatefile" failed: user-data:119,20-29: Unsupported
attribute; This object does not have an attribute named "username"., and 1 other
diagnostic(s).

Error: Error in function call

  on base.pkr.hcl line 364:
  (source code not available)

with var as object with 52 attributes,
     var.user_data_location as "user-data".

Call to function "templatefile" failed: user-data:119,20-29: Unsupported
attribute; This object does not have an attribute named "username"., and 1 other
diagnostic(s).

The 22.04 packer template fails to build; may I know how I should debug this?

Debian UEFI build

A general question:

Have you tested base-uefi.pkr.hcl lately? I am trying to get it working (Debian 11.3) with VirtualBox on my Fedora system and see various issues; the boot_command seems not to work anymore, etc.

Just asking if that build works for you, @tylert ... thanks!

LUKS unattended builds do not complete automatically

Hi guys,

I'm trying to build images with LUKS encryption, but when the OS reboots it asks for the passphrase to unlock the disk at boot.

I'm using QEMU builder with KVM acceleration.

Is there a way to automate this without being in front of the VNC console?

Thanks :)

Make templates more DRY

According to a hint found at hashicorp/packer#1047 (comment), packer templates can be made a lot simpler using "node anchors" in YAML to generate the large JSON blobs (DRY = Don't Repeat Yourself).

Ideally, rather than store the large, ugly JSON blobs, it may be preferable to store a smaller YAML blob instead and try to use some templating engine such as Jinja to work around the problem of packer not having a good way to nest user variables.

gpus?

In order to do GPU passthrough I need to run the virt-install as:

sudo virt-install \
    --name ubuntu-vm \
    --boot uefi \
    --machine q35 \
    --host-device 4b:00.0 --host-device 4b:00.1 \
    ...

Does this packer script support this? I am not quite sure where to look to work out what the equivalent of this virt-install invocation is.

Needed to fix "dd" command

Running the xenial build failed for me when the "dd" command eventually errors out due to lack of disk space. I had to add "|| true" or "||:" to make this work for me:

  • Before: "dd if=/dev/zero of=/ZEROFILL bs=4M",
  • After: "dd if=/dev/zero of=/ZEROFILL bs=4M ||:",

vagrant-libvirtd: possible hint

Hello,

not an issue per se, more like a possible hint in regards to this quote from your README:

Even though Packer supports QEMU as an officially-supported provider, Vagrant, for some reason, does not. The 3rd-party plugin named "vagrant-libvirt" provides the missing QEMU support for Vagrant. We are unable at this time to verify this fact due to the following errors encountered while trying to run "vagrant up":

Error while connecting to libvirt: Error making a connection to libvirt URI qemu:///system?no_verify=1&keyfile=/home/whoa/.ssh/id_rsa:
Call to virConnectOpen failed: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory

I had this issue this morning. It happens because the libvirtd daemon is not running at all.
On Debian (testing/buster), I was missing the right package (libvirt-daemon-system).
Upon installing it, the daemon will be started and this message should disappear.

My 2 cents.

Cheers,

Find out why ubuntu wily and xenial base templates fail to get past the boot command key presses

At the moment, the wily and xenial base templates fail to automate the keypresses to get past the initial boot command part so it never actually starts off the install process.

  1. extract the official server seed file from the wily and xenial ISOs to confirm they haven't changed
  2. merge any changes with the existing wily and xenial preseed files in use here
  3. if this doesn't fix it, keep applying various blunt instruments to ubuntu to make it work again

Love your work, but have you seen boxcutter/ubuntu?

I love what you've got going here; the configs are more minimal and tailored to exactly the required boot arguments on newer releases like Xenial (which still seems to suffer from a short kernel args list). I wonder if it would be worth adding some of your material to boxcutter/ubuntu as well. You seem to have a good handle on the trickier versions like 15.10 and 16.04.

[DEBUG] SSH handshake err

Any ideas why I get

2020/12/16 10:24:05 packer-builder-qemu plugin: [INFO] Attempting SSH connection to 127.0.0.1:3514...
2020/12/16 10:24:05 packer-builder-qemu plugin: [DEBUG] reconnecting to TCP connection for SSH
2020/12/16 10:24:05 packer-builder-qemu plugin: [DEBUG] handshaking with SSH
2020/12/16 10:24:15 packer-builder-qemu plugin: [DEBUG] SSH handshake err: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none password], no supported methods remain

Is there something to change here?

ssh_password: 1ma63b0rk3d
ssh_password_crypted: '$6$w5yFawT.$d51yQ513SdzariRCjomBwO9IMtMh6.TjnRwQqTBlOMwGhyyVXlJeYC9kanFp65bpoS1tn9x7r8gLP5Dg4CtEP1'

I saw some posts in various communities, but nothing relevant. I experience the same on ubuntu1804 and OSX.

Edit:
OK, it took me a while to understand that packer needs time to finish (≈40 min).
Everything seems to be OK now.

qemuargs build options

How can I populate

      "qemuargs": [
        [
          "-display",
          "none"
        ]
      ]

through variables.json as

make BUILD_OPTS='-var-file=variables.json'

Any examples would be useful.

Windows sucks ... as always ... as expected :(

I was looking for a way to build a minimal Ubuntu desktop and tried this repo. Unfortunately, all I get is the errors shown in the attached screenshots.

I am running on a Windows 7 64-bit machine with recent versions of Packer, Vagrant and VirtualBox. The command was run in cmder.

Correct generated filenames for QEMU image files

Currently, the QEMU builder generates 2 image files named, for example, base-jessie64 and base-jessie64.raw.gz.

It would be nicer to add a suffix of .img and .img.gz, respectively, to these files instead.
