
packer-ci-build's Introduction

Cilium Logo


Cilium is a networking, observability, and security solution with an eBPF-based dataplane. It provides a simple flat Layer 3 network with the ability to span multiple clusters in either a native routing or overlay mode. It is L7-protocol aware and can enforce network policies on L3-L7 using an identity based security model that is decoupled from network addressing.

Cilium implements distributed load balancing for traffic between pods and to external services, and is able to fully replace kube-proxy, using efficient hash tables in eBPF allowing for almost unlimited scale. It also supports advanced functionality like integrated ingress and egress gateway, bandwidth management and service mesh, and provides deep network and security visibility and monitoring.

A new Linux kernel technology called eBPF is at the foundation of Cilium. It supports dynamic insertion of eBPF bytecode into the Linux kernel at various integration points such as: network IO, application sockets, and tracepoints to implement security, networking and visibility logic. eBPF is highly efficient and flexible. To learn more about eBPF, visit eBPF.io.

Overview of Cilium features for networking, observability, service mesh, and runtime security

Stable Releases

The Cilium community maintains minor stable releases for the last three minor Cilium versions. Older Cilium stable versions from minor releases prior to that are considered EOL.

For upgrades to new minor releases please consult the Cilium Upgrade Guide.

Listed below are the actively maintained release branches along with their latest patch release, corresponding image pull tags and their release notes:

Branch   Release Date   Image Pull Tag                     Notes
v1.15    2024-06-10     quay.io/cilium/cilium:v1.15.6      Release Notes
v1.14    2024-06-10     quay.io/cilium/cilium:v1.14.12     Release Notes
v1.13    2024-06-10     quay.io/cilium/cilium:v1.13.17     Release Notes

Architectures

Cilium images are distributed for AMD64 and AArch64 architectures.

Software Bill of Materials

Starting with Cilium version 1.13.0, all images include a Software Bill of Materials (SBOM). The SBOM is generated in SPDX format. More information is available in Cilium SBOM.

Development

For development and testing purposes, the Cilium community publishes snapshots, early release candidates (RC), and CI container images built from the main branch. These images are not for use in production.

For testing upgrades to new development releases please consult the latest development build of the Cilium Upgrade Guide.

Listed below are branches for testing along with their snapshots or RC releases, corresponding image pull tags and their release notes where applicable:

Branch         Release Date   Image Pull Tag                       Notes
main           daily          quay.io/cilium/cilium-ci:latest      N/A
v1.16.0-rc.0   2024-06-17     quay.io/cilium/cilium:v1.16.0-rc.0   Release Candidate Notes

Functionality Overview

Protect and secure APIs transparently

Cilium can secure modern application protocols such as REST/HTTP, gRPC and Kafka. Traditional firewalls operate at Layers 3 and 4: a protocol running on a particular port is either completely trusted or blocked entirely. Cilium provides the ability to filter on individual application protocol requests such as:

  • Allow all HTTP requests with method GET and path /public/.*. Deny all other requests.
  • Allow service1 to produce on Kafka topic topic1 and service2 to consume on topic1. Reject all other Kafka messages.
  • Require the HTTP header X-Token: [0-9]+ to be present in all REST calls.

See the section Layer 7 Policy in our documentation for the latest list of supported protocols and examples on how to use it.
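As a hedged illustration of the first example above, an L7 rule is expressed as part of a CiliumNetworkPolicy; the policy name and the app=my-service selector below are placeholders, not taken from the documentation:

# Sketch only: policy name and label selector are made up for illustration.
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-public-get
spec:
  endpointSelector:
    matchLabels:
      app: my-service
  ingress:
  - toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/public/.*"
EOF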

Secure service to service communication based on identities

Modern distributed applications rely on technologies such as application containers to facilitate agility in deployment and scale out on demand. This results in a large number of application containers being started in a short period of time. Typical container firewalls secure workloads by filtering on source IP addresses and destination ports. This concept requires the firewalls on all servers to be manipulated whenever a container is started anywhere in the cluster.

To avoid this situation, which limits scale, Cilium assigns a security identity to groups of application containers which share identical security policies. The identity is then associated with all network packets emitted by the application containers, allowing the identity to be validated at the receiving node. Security identity management is performed using a key-value store.

Secure access to and from external services

Label-based security is the tool of choice for cluster-internal access control. To secure access to and from external services, traditional CIDR-based security policies for both ingress and egress are supported. This allows access to and from application containers to be limited to particular IP ranges.
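As a minimal sketch of such a rule (the label and CIDR are placeholders), an egress policy restricted to one external range could look roughly like this:

# Sketch only: 192.0.2.0/24 and app=my-service are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-egress-to-external-range
spec:
  endpointSelector:
    matchLabels:
      app: my-service
  egress:
  - toCIDR:
    - 192.0.2.0/24
EOF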

Simple Networking

A simple flat Layer 3 network with the ability to span multiple clusters connects all application containers. IP allocation is kept simple by using host scope allocators. This means that each host can allocate IPs without any coordination between hosts.

The following multi-node networking models are supported:

  • Overlay: Encapsulation-based virtual network spanning all hosts. Currently, VXLAN and Geneve are baked in but all encapsulation formats supported by Linux can be enabled.

    When to use this mode: This mode has minimal infrastructure and integration requirements. It works on almost any network infrastructure as the only requirement is IP connectivity between hosts which is typically already given.

  • Native Routing: Use of the regular routing table of the Linux host. The network must be capable of routing the IP addresses of the application containers.

    When to use this mode: This mode is for advanced users and requires some awareness of the underlying networking infrastructure. This mode works well with:

    • Native IPv6 networks
    • Cloud network routers
    • Environments that already run routing daemons
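As a rough sketch of how the mode is selected, assuming the cilium CLI in Helm mode (value names have changed across Cilium versions, so treat these as illustrative):

# Overlay (default): VXLAN or Geneve encapsulation between nodes.
cilium install --set routingMode=tunnel --set tunnelProtocol=geneve

# Native routing: the underlying network must be able to route pod IPs;
# autoDirectNodeRoutes only helps when nodes share an L2 segment.
cilium install --set routingMode=native \
  --set ipv4NativeRoutingCIDR=10.0.0.0/8 \
  --set autoDirectNodeRoutes=true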

Load Balancing

Cilium implements distributed load balancing for traffic between application containers and to external services and is able to fully replace components such as kube-proxy. The load balancing is implemented in eBPF using efficient hash tables, allowing for almost unlimited scale.

For north-south type load balancing, Cilium's eBPF implementation is optimized for maximum performance, can be attached to XDP (eXpress Data Path), and supports direct server return (DSR) as well as Maglev consistent hashing if the load balancing operation is not performed on the source host.

For east-west type load balancing, Cilium performs efficient service-to-backend translation right in the Linux kernel's socket layer (e.g. at TCP connect time) such that per-packet NAT operations overhead can be avoided in lower layers.
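A hedged sketch of enabling the kube-proxy replacement; the Helm value names follow recent releases and the API server address is a placeholder:

# Illustrative only: kubeProxyReplacement accepted different values in older releases.
cilium install \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=API_SERVER_IP \
  --set k8sServicePort=6443

# Inspect which load-balancing features (XDP acceleration, DSR, Maglev, socket LB)
# ended up active; the agent binary is called cilium-dbg in newer images.
kubectl -n kube-system exec ds/cilium -- cilium status --verbose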

Bandwidth Management

Cilium implements bandwidth management through efficient EDT-based (Earliest Departure Time) rate-limiting with eBPF for container traffic that is egressing a node. This significantly reduces transmission tail latencies for applications and avoids locking under multi-queue NICs, compared to traditional approaches such as HTB (Hierarchy Token Bucket) or TBF (Token Bucket Filter) as used in the bandwidth CNI plugin, for example.
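A minimal sketch, assuming the Helm value and the standard pod annotation the bandwidth manager consumes:

# Enable the EDT-based bandwidth manager (illustrative Helm value).
cilium install --set bandwidthManager.enabled=true

# Cap a pod's egress rate to 10 Mbit/s via the annotation.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bw-limited
  annotations:
    kubernetes.io/egress-bandwidth: "10M"
spec:
  containers:
  - name: app
    image: nginx
EOF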

Monitoring and Troubleshooting

The ability to gain visibility and troubleshoot issues is fundamental to the operation of any distributed system. While we learned to love tools like tcpdump and ping and while they will always find a special place in our hearts, we strive to provide better tooling for troubleshooting. This includes tooling to provide:

  • Event monitoring with metadata: When a packet is dropped, the tool doesn't just report the source and destination IP of the packet; it also provides the full label information of both the sender and receiver, among a lot of other information.
  • Metrics export via Prometheus: Key metrics are exported via Prometheus for integration with your existing dashboards.
  • Hubble: An observability platform specifically written for Cilium. It provides service dependency maps, operational monitoring and alerting, and application and security visibility based on flow logs.
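For example, a couple of illustrative commands, assuming the Hubble CLI is installed and the agent runs as the ds/cilium DaemonSet in kube-system:

# Show recently dropped flows with full identity/label metadata.
hubble observe --verdict DROPPED --last 20

# Follow low-level datapath drop events directly from an agent pod
# (the agent binary is called cilium-dbg in newer images).
kubectl -n kube-system exec -it ds/cilium -- cilium monitor --type drop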

Getting Started

What is eBPF and XDP?

Berkeley Packet Filter (BPF) is a Linux kernel bytecode interpreter originally introduced to filter network packets, e.g. for tcpdump and socket filters. The BPF instruction set and surrounding architecture have recently been significantly reworked with additional data structures such as hash tables and arrays for keeping state as well as additional actions to support packet mangling, forwarding, encapsulation, etc. Furthermore, a compiler back end for LLVM allows for programs to be written in C and compiled into BPF instructions. An in-kernel verifier ensures that BPF programs are safe to run and a JIT compiler converts the BPF bytecode to CPU architecture-specific instructions for native execution efficiency. BPF programs can be run at various hooking points in the kernel such as for incoming packets, outgoing packets, system calls, kprobes, uprobes, tracepoints, etc.
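Much of this machinery can be inspected with bpftool on a recent kernel, for example (requires root; program IDs will differ on your system):

# List BPF programs currently loaded into the kernel, with their types.
sudo bpftool prog show

# Dump the verifier-checked, JIT-compiled native code of one program
# (replace 42 with an ID from the listing above).
sudo bpftool prog dump jited id 42

# Confirm that the JIT compiler is enabled (1 or 2).
sysctl net.core.bpf_jit_enable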

BPF continues to evolve and gain additional capabilities with each new Linux release. Cilium leverages BPF to perform core data path filtering, mangling, monitoring and redirection, and requires BPF capabilities that are in any Linux kernel version 4.8.0 or newer (the latest current stable Linux kernel is 4.14.x).

Many Linux distributions including CoreOS, Debian, Docker's LinuxKit, Fedora, openSUSE and Ubuntu already ship kernel versions >= 4.8.x. You can check your Linux kernel version by running uname -a. If you are not yet running a recent enough kernel, check the Documentation of your Linux distribution on how to run Linux kernel 4.9.x or later.
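For example, a quick check (the config file path assumes a distribution that ships /boot/config-*):

# Kernel release in use.
uname -r

# Core BPF options Cilium relies on; both should be =y.
grep -E 'CONFIG_BPF=|CONFIG_BPF_SYSCALL=' "/boot/config-$(uname -r)"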

To read up on the necessary kernel versions to run the BPF runtime, see the section Prerequisites.

https://cdn.jsdelivr.net/gh/cilium/cilium@main/Documentation/images/bpf-overview.png

XDP is a further step in evolution and enables running a specific flavor of BPF programs from the network driver with direct access to the packet's DMA buffer. This is, by definition, the earliest possible point in the software stack, where programs can be attached to in order to allow for a programmable, high performance packet processor in the Linux kernel networking data path.
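To check whether anything is attached at the XDP hook of a device (the device name below is a placeholder):

# Show XDP and tc BPF programs attached to network devices.
sudo bpftool net show

# Per-device view; an attached program is reported in the xdp section of the output.
ip -details link show dev eth0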

Further information about BPF and XDP targeted for developers can be found in the BPF and XDP Reference Guide.

To learn more about Cilium, its extensions, and use cases around Cilium and BPF, take a look at the Further Readings section.

Community

Slack

Join the Cilium Slack channel to chat with Cilium developers and other Cilium users. This is a good place to learn about Cilium, ask questions, and share your experiences.

Special Interest Groups (SIG)

See Special Interest groups for a list of all SIGs and their meeting times.

Developer meetings

The Cilium developer community hangs out on Zoom to chat. Everybody is welcome.

eBPF & Cilium Office Hours livestream

We host a weekly community YouTube livestream called eCHO which (very loosely!) stands for eBPF & Cilium Office Hours. Join us live, catch up with past episodes, or head over to the eCHO repo and let us know your ideas for topics we should cover.

Governance

The Cilium project is governed by a group of Maintainers and Committers. How they are selected and govern is outlined in our governance document.

Adopters

A list of adopters of the Cilium project who are deploying it in production, and of their use cases, can be found in file USERS.md.

Roadmap

Cilium maintains a public roadmap. It gives a high-level view of the main priorities for the project, the maturity of different features and projects, and how to influence the project direction.

License

The Cilium user space components are licensed under the Apache License, Version 2.0. The BPF code templates are dual-licensed under the GNU General Public License, Version 2.0 (only) and the 2-Clause BSD License (you can use the terms of either license, at your option).

packer-ci-build's People

Contributors

aanm, borkmann, brb, dylandreimerink, eloycoto, gandro, gentoo-root, glibsm, ianvernon, jibi, joestringer, jrajahalme, jrfastab, leblowl, nbusseneau, nebril, pchaigno, qmonnet, rlenglet, sayboras, tgraf, ti-mo, tklauser, twpayne, vadorovsky


packer-ci-build's Issues

Reduce the size of VM and container images

The VM images we use in CI and in local tests take between 7.5GB and 9.1GB, except for the ubuntu-dev image, which takes 3.9GB. A large contributor to the VM images' sizes is the container images we prepull (except for ubuntu-dev). On limited network connections, the frequent downloads of large VM and Docker images can be a pain.

There are a number of low-hanging fruits we can investigate to reduce the sizes of these images:

  1. Automatically synchronize the list of prepulled container images with images used in Cilium's tests. Related: #83
    • Write a program to extract container image references in Cilium repo and compare to prepulled images.
    • Extend packer-ci-build's CI to run the program regularly.
  2. Update container image versions to use newer, smaller images. cilium/cilium#12579
    • google-samples/gb-xxx could switch to latest versions. Saves ~150MB.
    • istio/examples-bookinfo-ratings-v1 could switch to latest version. Saves ~57MB.
    • istio/examples-bookinfo-details-v1 could switch to latest version. Saves ~121MB.
    • memcached could switch to alpine flavor.
  3. Rebase our Dockerfiles on smaller images where possible. cilium/cilium#12583
    • cilium/python-bmemcached could use python:alpine3.12. Saves ~847MB. cilium/python-bmemcached#1
    • cilium/dnssec-client could use python:alpine3.12. Saves ~844MB. cilium/dnssec-client#1
    • cilium/cc-grpc-demo could use ubuntu:20.04. Saves ~122MB.
  4. Use our own Dockerfile where sensible.
    • istio/examples-bookinfo-productpage-v1 could be switched to the latest version with wget installed on top. Saves ~486MB.
  5. Clean files from VM installation (see the cleanup sketch after this list).
    • Remove Linux sources. Saves ~3.6GB. #228
    • Remove archive and Debian packages from Linux installation. Saves ~244MB. #228
    • Remove archives and Debian packages from K8s installation. Saves ~34MB.
  6. Remove unneeded packages and files from VM.
    • Remove old kernel images in /boot for net-next image.
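As a rough sketch of the kind of cleanup item 5 describes (the paths are assumptions based on the build logs; the provisioning scripts in this repo would be the place to hook it in):

# Drop kernel sources and build artifacts once the kernel is installed.
sudo rm -rf /home/vagrant/k /usr/src/linux-*

# Drop cached Debian packages and apt lists left over from provisioning.
sudo apt-get clean
sudo rm -rf /var/lib/apt/lists/*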

net-next build is failing

The last PR build for net-next was https://jenkins.cilium.io/job/Vagrant-PR-Boxes-Packer-Build-Next/152/ and it passed without a problem.

After merging the PR for this build (#229), the net-next build was triggered and the docker-ce installation failed while building the image (https://jenkins.cilium.io/job/Vagrant-Master-Boxes-Packer-Build-net-next/63/).

I was able to recreate this problem and inspect the VM by building the image locally with make build DISTRIBUTION=ubuntu-next ARGS="-on-error=abort" (it takes a bit of time because kernel compilation happens inside the VMs) and then SSHing into it with vagrant/vagrant credentials.

Since we are cloning git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git as part of the net-next build, I wonder if there could be changes that happened between the initial PR run, which passed, and the master build.

Some troubleshooting logs:

The log from the docker.service unit is:

Jul 30 09:44:07 vagrant systemd[1]: Starting Docker Application Container Engine...
Jul 30 09:44:07 vagrant dockerd[30414]: time="2020-07-30T09:44:07.276448937Z" level=info msg="Starting up"
Jul 30 09:44:07 vagrant dockerd[30414]: time="2020-07-30T09:44:07.277064492Z" level=info msg="detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.conf"
Jul 30 09:44:07 vagrant dockerd[30414]: time="2020-07-30T09:44:07.277732389Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 30 09:44:07 vagrant dockerd[30414]: time="2020-07-30T09:44:07.277760315Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 30 09:44:07 vagrant dockerd[30414]: time="2020-07-30T09:44:07.277787276Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Jul 30 09:44:07 vagrant dockerd[30414]: time="2020-07-30T09:44:07.277944508Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 30 09:44:07 vagrant dockerd[30414]: time="2020-07-30T09:44:07.279176959Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 30 09:44:07 vagrant dockerd[30414]: time="2020-07-30T09:44:07.279291697Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 30 09:44:07 vagrant dockerd[30414]: time="2020-07-30T09:44:07.279466188Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Jul 30 09:44:07 vagrant dockerd[30414]: time="2020-07-30T09:44:07.279593290Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 30 09:44:07 vagrant dockerd[30414]: time="2020-07-30T09:44:07.299300211Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Jul 30 09:44:07 vagrant dockerd[30414]: time="2020-07-30T09:44:07.306244865Z" level=warning msg="Your kernel does not support cgroup rt period"
Jul 30 09:44:07 vagrant dockerd[30414]: time="2020-07-30T09:44:07.306272870Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Jul 30 09:44:07 vagrant dockerd[30414]: time="2020-07-30T09:44:07.306279478Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jul 30 09:44:07 vagrant dockerd[30414]: time="2020-07-30T09:44:07.306287806Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jul 30 09:44:07 vagrant dockerd[30414]: time="2020-07-30T09:44:07.306413899Z" level=info msg="Loading containers: start."
Jul 30 09:44:07 vagrant dockerd[30414]: time="2020-07-30T09:44:07.326488447Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jul 30 09:44:07 vagrant dockerd[30414]: failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables --wait -t nat -N DOCKER: iptables: Invalid argument. Run `dmesg' for more information.
Jul 30 09:44:07 vagrant dockerd[30414]:  (exit status 1)

iptables at least seems to be able to list chains:

vagrant@vagrant:~$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

vagrant@vagrant:~$ sudo iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         

I didn't find anything sensible in dmesg:
vagrantdmesg.txt

Running the iptables command from the Docker logs indeed fails:

vagrant@vagrant:~$ sudo iptables --wait -t nat -N DOCKER
iptables: Invalid argument. Run `dmesg' for more information.

lsmod output:
lsmod.txt
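One diagnostic step worth trying (not from the original report): check whether the freshly built net-next kernel still carries the netfilter options dockerd needs to create its NAT chain, for example:

# NAT-related options dockerd depends on; a missing one would explain the failure.
grep -E 'CONFIG_NF_NAT|CONFIG_IP_NF_NAT|CONFIG_IP_NF_TARGET_MASQUERADE|CONFIG_NETFILTER_XT_MATCH_ADDRTYPE' \
  "/boot/config-$(uname -r)"

# Which iptables backend is in use (legacy vs nf_tables)?
sudo iptables --version

# Can the nat table modules be loaded on this kernel at all?
sudo modprobe iptable_nat && lsmod | grep -E 'iptable_nat|nf_nat'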

builds fail when patching `fs/overlayfs/super.c` in net-next VM builds

Example build failure: https://jenkins.cilium.io/job/Vagrant-Master-Boxes-Packer-Build-net-next/35/execution/node/23/log/

Relevant output:

09:47:52      virtualbox-iso:   LINK     bpftool
09:47:52      virtualbox-iso: + sudo make install
09:47:52      virtualbox-iso:
09:47:52      virtualbox-iso: Auto-detecting system features:
09:47:52      virtualbox-iso: ...                        libbfd: [ OFF ]
09:47:52      virtualbox-iso: ...        disassembler-four-args: [ OFF ]
09:47:52      virtualbox-iso: ...                          zlib: [ on  ]
09:47:52      virtualbox-iso:
09:47:52      virtualbox-iso: make[1]: Entering directory '/home/vagrant/k/tools/lib/bpf'
09:47:52      virtualbox-iso: Warning: Kernel ABI header at 'tools/include/uapi/linux/if_link.h' differs from latest version at 'include/uapi/linux/if_link.h'
09:47:52      virtualbox-iso: make[1]: Leaving directory '/home/vagrant/k/tools/lib/bpf'
09:47:52      virtualbox-iso:   INSTALL  bpftool
09:47:52  ==> virtualbox-iso: Provisioning with shell script: provision/ubuntu/kernel-next.sh
09:47:52      virtualbox-iso: [sudo] password for vagrant: ++ uname -r
09:47:52      virtualbox-iso: + export KCONFIG=config-4.15.0-55-generic
09:47:52      virtualbox-iso: + KCONFIG=config-4.15.0-55-generic
09:47:52      virtualbox-iso: + cd /home/vagrant/k
09:47:52      virtualbox-iso: + git apply
09:47:52      virtualbox-iso: error: patch failed: fs/overlayfs/super.c:1314
09:47:52      virtualbox-iso: error: fs/overlayfs/super.c: patch does not apply
09:47:53  ==> virtualbox-iso: Deregistering and deleting VM...
09:47:53  ==> virtualbox-iso: Deleting output directory...
09:47:53  Build 'virtualbox-iso' errored: Script exited with non-zero exit status: 1

cc: @jrfastab @borkmann @brb

unable to use VM build 128 in Cilium CI

When trying to run the Cilium CI with box version 128, the following error occurs:

20:07:47 /opt/vagrant/embedded/gems/gems/net-scp-1.2.1/lib/net/scp.rb:398:in `await_response_state': scp: error: unexpected filename: . (RuntimeError)
20:07:47 	from /opt/vagrant/embedded/gems/gems/net-scp-1.2.1/lib/net/scp.rb:365:in `block (3 levels) in start_command'
20:07:47 	from /opt/vagrant/embedded/gems/gems/net-ssh-4.1.0/lib/net/ssh/connection/channel.rb:607:in `do_close'
20:07:47 	from /opt/vagrant/embedded/gems/gems/net-ssh-4.1.0/lib/net/ssh/connection/session.rb:564:in `channel_closed'
20:07:47 	from /opt/vagrant/embedded/gems/gems/net-ssh-4.1.0/lib/net/ssh/connection/session.rb:675:in `channel_close'
20:07:47 	from /opt/vagrant/embedded/gems/gems/net-ssh-4.1.0/lib/net/ssh/connection/session.rb:540:in `dispatch_incoming_packets'
20:07:47 	from /opt/vagrant/embedded/gems/gems/net-ssh-4.1.0/lib/net/ssh/connection/session.rb:237:in `ev_preprocess'
20:07:47 	from /opt/vagrant/embedded/gems/gems/net-ssh-4.1.0/lib/net/ssh/connection/event_loop.rb:99:in `each'
20:07:47 	from /opt/vagrant/embedded/gems/gems/net-ssh-4.1.0/lib/net/ssh/connection/event_loop.rb:99:in `ev_preprocess'
20:07:47 	from /opt/vagrant/embedded/gems/gems/net-ssh-4.1.0/lib/net/ssh/connection/event_loop.rb:27:in `process'
20:07:47 	from /opt/vagrant/embedded/gems/gems/net-ssh-4.1.0/lib/net/ssh/connection/session.rb:216:in `process'
20:07:47 	from /opt/vagrant/embedded/gems/gems/net-ssh-4.1.0/lib/net/ssh/connection/session.rb:178:in `block in loop'
20:07:47 	from /opt/vagrant/embedded/gems/gems/net-ssh-4.1.0/lib/net/ssh/connection/session.rb:178:in `loop'
20:07:47 	from /opt/vagrant/embedded/gems/gems/net-ssh-4.1.0/lib/net/ssh/connection/session.rb:178:in `loop'
20:07:47 	from /opt/vagrant/embedded/gems/gems/net-ssh-4.1.0/lib/net/ssh/connection/channel.rb:269:in `wait'
20:07:47 	from /opt/vagrant/embedded/gems/gems/net-scp-1.2.1/lib/net/scp.rb:284:in `upload!'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/plugins/communicators/ssh/communicator.rb:289:in `block in upload'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/plugins/communicators/ssh/communicator.rb:685:in `block in scp_connect'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/plugins/communicators/ssh/communicator.rb:333:in `connect'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/plugins/communicators/ssh/communicator.rb:683:in `scp_connect'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/plugins/communicators/ssh/communicator.rb:286:in `upload'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/plugins/provisioners/file/provisioner.rb:42:in `block in provision'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/plugins/provisioners/file/provisioner.rb:5:in `tap'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/plugins/provisioners/file/provisioner.rb:5:in `provision'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/builtin/provision.rb:138:in `run_provisioner'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/warden.rb:95:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/warden.rb:34:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/builder.rb:116:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/runner.rb:66:in `block in run'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/util/busy.rb:19:in `busy'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/runner.rb:66:in `run'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/environment.rb:543:in `hook'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/builtin/provision.rb:126:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/builtin/provision.rb:126:in `block in call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/builtin/provision.rb:103:in `each'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/builtin/provision.rb:103:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/warden.rb:34:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/plugins/providers/virtualbox/action/check_accessible.rb:18:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/warden.rb:34:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/warden.rb:34:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/builder.rb:116:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/runner.rb:66:in `block in run'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/util/busy.rb:19:in `busy'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/runner.rb:66:in `run'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/builtin/call.rb:53:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/warden.rb:34:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/warden.rb:34:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/builder.rb:116:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/runner.rb:66:in `block in run'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/util/busy.rb:19:in `busy'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/runner.rb:66:in `run'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/builtin/call.rb:53:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/warden.rb:34:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/warden.rb:34:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/plugins/providers/virtualbox/action/check_virtualbox.rb:17:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/warden.rb:34:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/builder.rb:116:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/runner.rb:66:in `block in run'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/util/busy.rb:19:in `busy'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/action/runner.rb:66:in `run'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/machine.rb:227:in `action_raw'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/machine.rb:202:in `block in action'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/environment.rb:631:in `lock'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/machine.rb:188:in `call'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/machine.rb:188:in `action'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/plugins/commands/provision/command.rb:30:in `block in execute'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/plugin/v2/command.rb:238:in `block in with_target_vms'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/plugin/v2/command.rb:232:in `each'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/plugin/v2/command.rb:232:in `with_target_vms'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/plugins/commands/provision/command.rb:29:in `execute'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/cli.rb:42:in `execute'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/lib/vagrant/environment.rb:308:in `cli'
20:07:47 	from /opt/vagrant/embedded/gems/gems/vagrant-2.0.1/bin/vagrant:138:in `<main>'

Set GOROOT and GOPATH, add to PATH

make generate-k8s-api fails in the dev VM due to missing GOPATH:

./generate-groups.sh: line 71: GOPATH: unbound variable
Makefile:227: recipe for target 'generate-k8s-api' failed
make: *** [generate-k8s-api] Error 1

While it would be possible to fix this by modifying the Makefile, it would be better to have GOROOT and GOPATH set in the base box image itself.

Add $GOPATH to PATH, so that running dep (when installed) succeeds without an explicit path.
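A minimal sketch of what the base box could ship, assuming Go is installed under /usr/local/go and the vagrant user's workspace lives at /home/vagrant/go:

# Written once during provisioning; picked up by login shells.
cat <<'EOF' | sudo tee /etc/profile.d/go.sh
export GOROOT=/usr/local/go
export GOPATH=/home/vagrant/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
EOF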

Run 'pip install -r Documentation/requirements.txt'

Consider running pip install -r Documentation/requirements.txt when building the VM image. Without it a plain make in /home/vagrant/go/src/github.com/cilium/cilium fails with a recommendation to Run 'pip install -r Documentation/requirements.txt'.

Unattended-upgrades are still holding locks. Kill them _more_.

Unattended-upgrades running in the background hold APT locks and occasionally make the VMs fail to deploy correctly. We tried to shut them down, or even to remove the unattended-upgrades package, but as Paul reported, that wasn't enough to fix it. Still seeing:

07:55:15      k8s1-1.23: Warning: apt-key output should not be parsed (stdout is not a terminal)
07:55:16      k8s1-1.23: OK
07:55:16      k8s1-1.23: E: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 8859 (apt-get)
07:55:16      k8s1-1.23: E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?

For example at https://jenkins.cilium.io/job/Cilium-PR-K8s-1.23-kernel-net-next/527/consoleFull for a pull request that did include the new VM images.

Originally posted by @pchaigno in #313 (comment)
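A hedged sketch of a more aggressive shutdown during provisioning (the service and timer names are the ones Ubuntu usually ships and may differ per release):

# Stop and mask anything that can grab the apt/dpkg locks behind our back.
sudo systemctl stop unattended-upgrades.service apt-daily.timer apt-daily-upgrade.timer || true
sudo systemctl mask unattended-upgrades.service apt-daily.service apt-daily-upgrade.service

# Wait for any in-flight run to release the locks before provisioning continues.
while sudo fuser /var/lib/dpkg/lock-frontend /var/lib/apt/lists/lock >/dev/null 2>&1; do
  sleep 5
done

sudo apt-get -y purge unattended-upgrades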

Install libelf-dev

New dependency introduced in cilium/cilium#3267 .

Command:
sudo apt-get -y install libelf-dev

Remove command to install from cilium/cilium Vagrantfile once this is complete.

Automatically pull images used in YAML files for testing in the image build process

We have the script provision/pull-images, which has to be manually updated. We can automate this to some degree as part of building new images by scanning the YAML files in the test/k8sT/manifests directory of the Cilium repository, programmatically extracting the images used in those files, and then pulling them directly into the image. As a result, this would happen as part of every VM build in packer-ci-build.
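A rough sketch of the idea, assuming a local Cilium checkout; the grep pattern is an approximation and would need tuning against the real manifests (quoted image references, templated tags, etc.):

# Pull every image referenced by the test manifests into the VM being built.
CILIUM_DIR=${CILIUM_DIR:-$HOME/go/src/github.com/cilium/cilium}

grep -rhoE 'image:[[:space:]]*[^"[:space:]]+' "$CILIUM_DIR/test/k8sT/manifests" \
  | awk '{print $2}' | sort -u \
  | while read -r image; do
      docker pull "$image"
    done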

Feature request: Add git tag when we build a new version of the VM

At the moment we can figure out how a VM was generated by going to the pipeline page.

https://jenkins.cilium.io/view/Packer%20builds/job/Vagrant-Master-Boxes-Packer-Build-net-next/50/

Then to git build data.

https://jenkins.cilium.io/view/Packer%20builds/job/Vagrant-Master-Boxes-Packer-Build-net-next/50/git/

It'd be useful to link this info back into the git repository here as a build tag so that we can do "git log" and see which commits are included in which revisions of the dev VM box image.
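A minimal sketch of what the Jenkins job could run after a successful box upload (BUILD_NUMBER is Jenkins' standard variable; the tag format is an assumption):

# Tag the commit the box was built from and push the tag back to this repo.
git tag -a "net-next-${BUILD_NUMBER}" -m "Vagrant box built by Jenkins build ${BUILD_NUMBER}"
git push origin "net-next-${BUILD_NUMBER}"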
