
openshift-vm-playground's Introduction

OpenShift VM Playground

Table of Contents

Prerequisites

Instructions to create a VM and to ssh to it

  • Log on to your OCP cluster
  • Deploy the HyperConverged CR to enable the nested virtualization features:
kubectl apply -f resources/hyperconverged.yml
  • Create or select a project/namespace
oc new-project <NAMESPACE>
  • Create a Kubernetes secret using your public key needed to ssh to the VM
kubectl create secret generic quarkus-dev-ssh-key -n <NAMESPACE> --from-file=key=~/.ssh/<PUBLIC_KEY_FILE>.pub                  
  • Create the DataVolume (backed by a PVC) on the cluster using the Fedora Quarkus Dev VM image pushed to: quay.io/snowdrop/quarkus-dev-vm
kubectl apply -n openshift-virtualization-os-images -f resources/quay-to-pvc-datavolume.yml

NOTE: This step should be performed only once. If you have already created the DataVolume, then you can skip this step and move to the next one.

  • When done, create a VirtualMachine
kubectl delete -n <NAMESPACE> vm/quarkus-dev
kubectl apply -n <NAMESPACE> -f resources/quarkus-dev-virtualmachine.yml
  • If a load balancer is available on the platform where the cluster is running, then deploy a Service of type LoadBalancer to access the VM using an SSH client
kubectl apply -f resources/services/service.yml
...
# Wait until the service gets an external IP address
VM_IP=$(kubectl get svc/quarkus-dev-loadbalancer-ssh-service -ojson | jq -r '.status.loadBalancer.ingress[].ip')
ssh -p 22000 fedora@$VM_IP

NOTE: If you have installed the virtctl client, you can also ssh to the VM using the following command, which forwards the traffic:

virtctl ssh --local-ssh fedora@<VM_NAME>

Customizing the Fedora Cloud image

By default, the podman and socat packages are not installed within the Fedora Cloud image. They can be installed using cloud-init, but that makes the process of creating a KubeVirt VirtualMachine take more time. To avoid this, we have created a GitHub Actions workflow that customizes the Fedora Cloud image using the virt-customize tool. See: .github/workflows/build-push-podman-remote-vm.yml.
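For illustration, a minimal virt-customize invocation of the kind such a workflow could run is sketched below; the image file name and the exact options are assumptions, not the arguments used by the actual workflow.

# Sketch: pre-install podman and socat into a local Fedora Cloud qcow2 image
# (the image file name is an example, not the one used by the workflow)
virt-customize -a Fedora-Cloud-Base.qcow2 \
  --install podman,socat \
  --selinux-relabel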

Note: The workflow is not triggered for each commit; consequently, if changes are needed, you first have to push them and then launch the workflow using either the GitHub Actions UI or the gh client: gh workflow run build-push-podman-remote-vm.yml

The generated image is available on the Quay registry: quay.io/snowdrop/quarkus-dev-vm. The image can then be deployed (kubectl apply -f resources/quarkus-dev-virtualmachine.yml) within the OCP cluster using a DataVolume resource under the namespace hosting the different OS images, openshift-virtualization-os-images.

Now, you will be able to consume it for every VirtualMachine you create, provided you include this DataVolumeTemplate:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: quarkus-dev
  labels:
    app: quarkus-dev
spec:
  dataVolumeTemplates:
    - apiVersion: cdi.kubevirt.io/v1beta1
      kind: DataVolume
      metadata:
        name: quarkus-dev
      spec:
        pvc:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 11Gi
        source:
          pvc:
            namespace: openshift-virtualization-os-images
            name: podman-remote

Build a Quarkus application using Tekton

First, create the PVCs and the Maven settings ConfigMap used to git clone and build the Quarkus application:

cd pipelines 
kubectl apply -f setup/project-pvc.yaml
kubectl apply -f setup/m2-repo-pvc.yaml
kubectl apply -f setup/configmap-maven-settings.yaml

Next, deploy the pipeline and pipelineRun to build the Quarkus application

kubectl delete pipelinerun/quarkus-maven-build-run
kubectl delete pipeline/quarkus-maven-build
kubectl delete task/git-clone
kubectl delete task/maven
kubectl delete task/rm-workspace
kubectl delete task/virtualmachine
kubectl delete task/ls-workspace

kubectl apply -f tasks/rm-workspace.yaml
kubectl apply -f tasks/git-clone.yaml
kubectl apply -f tasks/ls-workspace.yaml
kubectl apply -f tasks/maven.yaml
kubectl apply -f tasks/virtualmachine.yaml
kubectl apply -f pipelines/quarkus-maven-build.yaml
kubectl apply -f pipelineruns/quarkus-maven-build-run.yaml

You can follow the pipeline execution using the following command:

tkn pr logs quarkus-maven-build-run -f

NOTE: If you experience an issue when a container is created by a testcontainer, you can modify the create-remote-container step included within the quarkus-maven-build pipeline and set the debug parameter to true within the quarkus-maven-build-run PipelineRun, as sketched below.
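A minimal sketch of how such a parameter could be set in the PipelineRun; the exact wiring of the debug parameter in quarkus-maven-build-run.yaml may differ:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: quarkus-maven-build-run
spec:
  pipelineRef:
    name: quarkus-maven-build
  params:
    # Hypothetical parameter: turns on verbose output in the
    # create-remote-container step of the pipeline
    - name: debug
      value: "true"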

Replay the pipeline

To replay the Pipeline, you first need to delete the git-cloned project and its PVC; otherwise the step will report this error: [git-clone : clone] fatal: destination path '.' already exists and is not an empty directory.

kubectl delete -f pipelineruns/quarkus-maven-build-run.yaml
kubectl delete -f setup/project-pvc.yaml
kubectl apply -f setup/project-pvc.yaml
kubectl apply -f pipelineruns/quarkus-maven-build-run.yaml
tkn pr logs quarkus-maven-build-run -f

End-to-end test

To play with Kubevirt & Tekton to execute an end-to-end test case where we:

  • Create a Virtual Machine
  • Provision it to install podman, socat
  • Expose the podman daemon using socat
  • Deploy a Tekton pipeline on the cluster and launch it to git clone a Quarkus application and build it
  • Consult the log of the pipeline to verify that the Maven build succeeds, as it uses a testcontainer (e.g. PostgreSQL) and accesses podman remotely

execute this command:

./e2e.sh -v <VM_NAME> -n <NAMESPACE> -p <PUBLIC_KEY_FILE_PATH>

where:

  • <VM_NAME>: name of the virtual machine and also of the OS image to download (e.g. the customized Fedora Cloud image = quay.io/snowdrop/quarkus-dev-vm)
  • <NAMESPACE>: Kubernetes namespace where the scenario should be deployed and tested
  • <PUBLIC_KEY_FILE_PATH>: path to the file containing the public key to be imported within the VM

Access podman remotely

To access podman remotely, you need to expose the podman socket over TCP using socat within the Fedora VM

sh-5.2# socat TCP-LISTEN:2376,reuseaddr,fork,bind=0.0.0.0 unix:/run/user/1000/podman/podman.sock  # rootless for user 1000
sh-5.2# socat TCP-LISTEN:2376,reuseaddr,fork,bind=0.0.0.0 unix:/run/podman/podman.sock            # rootfull

then from a pod running a docker/podman client, you will be able to access the daemon

kubectl apply -n <NAMESPACE> -f resources/podman-pod.yml
kubectl exec -n <NAMESPACE> podman-client  -it -- /bin/sh
sh-5.2# podman -r --url=tcp://<VM_IP>:2376 ps
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

sh-5.2# podman -r --url=tcp://<VM_IP>:2376 images
REPOSITORY  TAG         IMAGE ID    CREATED     SIZE

sh-5.2# podman -r --url=tcp://<VM_IP>:2376 pull hello-world
Resolved "hello-world" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull quay.io/podman/hello:latest...
Getting image source signatures
Copying blob sha256:d08b40be68780d583e8c127f10228743e3e1beb520f987c0e32f4ef0c0ce8020
Copying config sha256:e2b3db5d4fdf670b56dd7138d53b5974f2893a965f7d37486fbb9fcbf5e91d9d
Writing manifest to image destination
e2b3db5d4fdf670b56dd7138d53b5974f2893a965f7d37486fbb9fcbf5e91d9d

GitHub Workflows

This project includes GitHub Actions workflows able to:

  • Build and push the Fedora Cloud image customized with podman, socat to quay.io/snowdrop. See: .github/workflows/build-push-podman-remote-vm.yml
  • Create a kind cluster + KubeVirt and perform the end-to-end scenario using Tekton. See: .github/workflows/kubevirt-podman-remote-quarkus-helloworld.yaml

WARNING: Due to the following JIB build issue, we cannot build and deploy the resources yet using the GitHub workflow - kubevirt-podman-remote-quarkus-helloworld.yaml

NOTE: As the target platform used here is Kubernetes and not OCP, some adjustments are needed. This is the reason why different kustomization.yaml files have been created!
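As a purely illustrative sketch of what such an overlay could look like (the base directory, patch file and target names below are assumptions, not the repository's actual layout):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
patches:
  # Hypothetical adjustment: plain Kubernetes has no Route, so the way the
  # service is exposed is patched differently than on OCP
  - path: service-patch.yaml
    target:
      kind: Service
      name: quarkus-dev-loadbalancer-ssh-service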

NOTE: The workflow that builds the customized image must be executed manually using the GitHub UI or the gh client!

Issues

The step to set up a network bridge is not needed to allow the pods to access the VM within the cluster, as a Kubernetes Service is all that is required in this case. When we tried to use the Network Attachment Definition, we were faced with the following error: 0/6 nodes are available: 3 Insufficient devices.kubevirt.io/kvm, 3 Insufficient devices.kubevirt.io/tun, 3 Insufficient devices.kubevirt.io/vhost-net, 3 node(s) didn't match node selector, 6 Insufficient bridge-cni.network.kubevirt.io/br1. To fix it, follow the instructions described within this ticket: https://bugzilla.redhat.com/show_bug.cgi?id=1727810

TIP: On OCP 4.13.x, don't create the bridge using the UI as documented here. You can also create a service using the virtctl client: virtctl expose vmi quarkus-dev --name=quarkus-dev --port=2376 --target-port=2376


openshift-vm-playground's Issues

Tekton strategy to build the image: jib vs quarkus container build vs buildpacks vs s2i

Discussion

What strategy should we adopt/support in our pipelines to build the container image?

There are different options:

  • Buildah
  • JIB
  • Buildpacks
  • S2I **

**: This option should not be the way to go. RHTAP, RHDH and now project Dance use the Buildah Tekton Task to build Java applications, not S2I at all. Why? Because they push the built image to an external registry and use ArgoCD to deploy the YAML resources (Service, Deployment pointing to the external registry image, and Route) within the target cluster A, B, C, etc. This is why you cannot use the OCP internal registry & ImageStream, as the image should be shared between clusters A, B or C!

Our GitHub workflow and Tekton pipeline currently use the Quarkus build approach with JIB: https://github.com/iocanel/openshift-vm-playground/blob/main/pipelines/pipelines/kustomization.yaml#L4-L18

Examples of tasks

  1. Buildah task example
    - name: buildah
      runAfter:
        - maven-compile
      taskRef:
        kind: Task
        name: buildah
      params:
        - name: DOCKERFILE
          value: src/main/docker/Dockerfile.jvm
        - name: IMAGE
          value: quay.io/ch007m/quarkus-dev-vm
      workspaces:
        - name: source
          workspace: project-dir
        - name: dockerconfig
          workspace: dockerconfig-ws
  2. Quarkus container JIB build example (if you add the quarkus-container-image-jib extension to the pom.xml)
    - name: maven-generate-deploy
      params:
        - name: DOCKER_CONFIG
          value: $(workspaces.dockerconfig.path)/config.json
        - name: GOALS
          value:
            - package
            - '-DskipTests'
            - -B
            - -Dquarkus.container-image.image=quay.io/ch007m/quarkus-helloworld:latest
            - -Dquarkus.kubernetes.deploy=true
            - -Dquarkus.log.level=DEBUG
      taskRef:
        kind: Task
        name: maven
      workspaces:
        - name: maven-settings
          workspace: maven-settings
        - name: project-dir
          workspace: project-dir
        - name: maven-m2-repo
          workspace: maven-m2-repo
        - name: dockerconfig
          workspace: dockerconfig-ws

Tekton substitution does not take place for DOCKER_HOST

Issue

Substitution does not take place, so maven receives the DOCKER_HOST parameter literally as tcp://$(tasks.virtualmachine.results.ip):2376

Code where the result is consumed

apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: maven-run
spec:
  taskRef:
    name: maven
  params:
    - name: DOCKER_HOST
      value: tcp://$(tasks.virtualmachine.results.ip):2376
...

Task code exporting the result: https://github.com/iocanel/openshift-vm-playground/blob/main/pipelines/tasks/virtualmachine.yaml#L27
TaskRun consuming the result: https://github.com/iocanel/openshift-vm-playground/blob/main/pipelines/taskruns/maven-run.yaml#L10

NOTE: To be investigated, of course. It could be related to the fact that results cannot be shared with a standalone TaskRun; they are only propagated between tasks running within a Pipeline.
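For comparison, a minimal sketch of how the result substitution is expected to work when both tasks run inside the same Pipeline, reusing the task and parameter names above:

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: quarkus-maven-build
spec:
  tasks:
    - name: virtualmachine
      taskRef:
        name: virtualmachine
    - name: maven
      runAfter:
        - virtualmachine
      taskRef:
        name: maven
      params:
        # The result reference is resolved by the Pipeline controller, so it
        # is only substituted when both tasks belong to the same Pipeline
        - name: DOCKER_HOST
          value: tcp://$(tasks.virtualmachine.results.ip):2376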

cloud-init[915]: Failed to connect to bus: No medium found

Issue

During the execution of the bash script - https://github.com/iocanel/openshift-vm-playground/blob/fc58851fb75af3d2c1269fad0e2b9cfc5a1305a3/resources/config.sh
and commands:

sudo dnf -y install podman socat
systemctl --user enable --now podman.socket
loginctl enable-linger 1000

cloud-init reports the following error and stops executing the commands:

Nov 10 10:24:49 fedora37 cloud-init[915]: Failed to connect to bus: No medium found

NOTE: I suspect that the issue here is that we want to enable the service for user 1000 (= podman rootless mode), while the script and the bash commands are not executed by user 1000, as we are not logged in as that user!
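A possible workaround, assuming the target user is fedora with UID 1000, is to enable lingering first and then run the user-scoped command with that user's runtime directory set explicitly; this is a sketch, not the fix applied in config.sh:

sudo dnf -y install podman socat
# Start a persistent user manager for UID 1000 so /run/user/1000 exists
loginctl enable-linger 1000
# Run the user-scoped systemctl as the target user, pointing at its runtime dir
sudo -u fedora XDG_RUNTIME_DIR=/run/user/1000 systemctl --user enable --now podman.socket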

Enable maven artifact caching

To speed things up, we need an additional PVC to use as a Maven local repository.
We also need to pass it to the maven task and make sure it is actually used.
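A sketch of how the extra PVC could be bound in the PipelineRun, reusing the maven-m2-repo workspace name that already appears in the maven task above (the claim name is an assumption):

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: quarkus-maven-build-run
spec:
  pipelineRef:
    name: quarkus-maven-build
  workspaces:
    # Persist the local Maven repository between runs to avoid re-downloading
    # artifacts; the claim name below is illustrative
    - name: maven-m2-repo
      persistentVolumeClaim:
        claimName: m2-repo-pvc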

The VM resource contains a fixed macAddress

This means that we are prone to mac address clashing. For example:

  • We can't install the same VM in another namespace in the same cluster.
  • It's possible that we can't install the same definition in another cluster.

Unfortunately, removing the macAddress property from the network interface causes the VM to get stuck in the Starting state. We need to find a better solution.
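For reference, the fragment in question sits in the VM's interface definition; a trimmed, hypothetical version is shown below (the real resources/quarkus-dev-virtualmachine.yml may differ):

spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {}
              # Hard-coded MAC address: the source of the clashes described above
              macAddress: "02:00:00:00:00:01"
      networks:
        - name: default
          pod: {}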

Create a github actions workflow supporting e2e scenarios

Todo

Create a GitHub Actions workflow (like this one: https://github.com/ch007m/test-github-action/actions/runs/7056374345/job/19208247972) supporting our e2e scenario (a rough skeleton is sketched after this list) where we:

  • Create a kubernetes cluster
  • Deploy the Kubevirt operator targeting a specific version (i.e. the version that OCP 4.x packages)
  • Configure virt: add RBAC, enable emulation, etc
  • Create a VM using the Fedora + socat, podman image
  • Wait till podman service replies
  • Install Tekton targeting a specific version (i.e. the version that OCP 4.x packages)
  • Deploy the QShift tasks, pipelines
  • Run the e2e scenario to:
    • Git clone Quarkus HelloWorld
    • Compile and test it: mvn test (=> we will access remotely podman running in the VM)
    • Build the Quarkus image
    • Deploy the Quarkus HelloWorld application
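A heavily simplified skeleton of such a workflow, assuming kind plus the upstream KubeVirt and Tekton release manifests; the action names, pinned versions and file paths below are illustrative, not the final workflow:

name: e2e-scenario
on: workflow_dispatch
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Create a kind cluster
        uses: helm/kind-action@v1
      - name: Install the KubeVirt operator (pinned version)
        run: |
          VERSION=v1.1.0
          kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml
          kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml
          # GitHub runners have no nested virtualization, so enable emulation
          kubectl -n kubevirt patch kubevirt kubevirt --type=merge \
            -p '{"spec":{"configuration":{"developerConfiguration":{"useEmulation":true}}}}'
      - name: Install Tekton Pipelines
        run: kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
      - name: Deploy the tasks and pipeline, then launch the scenario
        run: |
          kubectl apply -f pipelines/tasks/
          kubectl apply -f pipelines/pipelines/quarkus-maven-build.yaml
          kubectl apply -f pipelines/pipelineruns/quarkus-maven-build-run.yaml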

The test pipeline should also perform deployment

The pipeline currently performs a simple maven build.
It would be great if we had a deployment step.

The deployment step could just use ./mvnw quarkus:deploy -Dquarkus.openshift.deploy=true or something like this.

Task to perform a kubectl+jq command fails due to Error executing command: fork/exec /tekton/scripts/script-0-gwvs8: exec format error

Issue

 cd ~/code/openshift/openshift-vm-playground/pipelines
 ./bin/run vm-ip
IP address of the docker host running within the VM not specified as 2nd argument
task.tekton.dev "vm-ip" deleted
taskrun.tekton.dev "vm-ip-run" deleted
task.tekton.dev/vm-ip created
taskrun.tekton.dev/vm-ip-run created
Waiting for Task vm-ip to start...
Task vm-ip has failed.
2023/11/08 16:49:57 Error executing command: fork/exec /tekton/scripts/script-0-gwvs8: exec format error
2023/11/08 16:49:54 Decoded script /tekton/scripts/script-0-gwvs8
2023/11/08 16:49:53 Entrypoint initialization

Deprecate Taskruns folder

Suggestion

As the project supports executing a Pipeline using a PipelineRun and Tasks, I propose to deprecate (= move to sandbox) the folder: pipelines/taskruns

@iocanel

Bypass docker limit issue using config.json file as DOCKER_CONFIG env for the testcontainers

Issue

As users will suffer from the Docker Hub pull limit, we should pass an env var to let Testcontainers load the credentials needed to authenticate against the docker.io registry.

See: https://java.testcontainers.org/supported_docker_environment/#docker-registry-authentication

Something like this should work

  - name: maven-test
    params:
    - name: DOCKER_AUTH_CONFIG
      value: $(workspaces.dockerconfig.path)/config.json
    - name: DOCKER_HOST
      value: tcp://$(tasks.virtualmachine.results.ip):2376
    - name: GOALS
      value:
      - -B
      - test
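One detail worth double-checking against the Testcontainers documentation linked above: DOCKER_AUTH_CONFIG is expected to hold the JSON content itself rather than a path to config.json, so the value may need to be expanded in a script step instead; a sketch, with an illustrative workspace path:

# Sketch: export the file *content*, not the path, before invoking Maven
export DOCKER_AUTH_CONFIG="$(cat /workspace/dockerconfig/config.json)"
./mvnw -B test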

Step VIRT-CUSTOMIZE fails due to: "libguestfs error: inspect_os: mount exited with status 32: mount: /tmp/btrfsgHo4ay: unknown filesystem type 'btrfs'"

Issue

The step running the virt-customize client to customize a VM image fails when executed as part of a task (or using a built image)

STEP-RUN-VIRT-CUSTOMIZE

[   0.0] Examining the guest ...
virt-customize: error: libguestfs error: inspect_os: mount exited with status 32: mount: /tmp/btrfsgHo4ay: unknown filesystem type 'btrfs'.

If reporting bugs, run virt-customize with debugging enabled and include the complete output:

  virt-customize -v -x [...]

HowTo reproduce

Note: I got this remark from Alice Frosi on the kubevirt Slack -->
https://kubernetes.slack.com/archives/C8ED7RKFE/p1701244144049719?thread_ts=1701188876.505219&cid=C8ED7RKFE The problem is that the guest kernel (not the OCP node) used by libguestfs and shipped by CentOS Stream doesn't have support for btrfs. Unfortunately, this is a limitation for all OSes using btrfs, like Fedora.

@iocanel

Tekton job deploying Quarkus app fails due to UNAUTHORIZED

Issue:

The GitHub workflow deploying the Quarkus app fails due to this auth issue:

Error: build-image-deploy : mvn-goals] [ERROR] Caused by: com.google.cloud.tools.jib.api.RegistryAuthenticationFailedException: Failed to authenticate with registry quay.io/snowdrop/quarkus-dev-vm because: 401 UNAUTHORIZED
Error: build-image-deploy : mvn-goals] [ERROR] GET https://quay.io/v2/auth?service=quay.io&scope=repository:snowdrop/quarkus-dev-vm:pull,push
Error: build-image-deploy : mvn-goals] [ERROR] {"errors":[{"code":"UNAUTHORIZED","detail":{},"message":"Could not find robot with username: *** and supplied password."}]}
Error: build-image-deploy : mvn-goals] [ERROR] 

The problem is perhaps related to a wrong base64 command, as the job works locally using the same credentials:

encodeAuth=$(echo "${{ secrets.QUAY_USERNAME }}:${{ secrets.QUAY_ROBOT_TOKEN }}" | base64 -w0)
cat <<EOF > config.json
{
  "auths": {
    "quay.io/${{ env.QUAY_ORG }}": {
      "auth": "$encodeAuth"
    }
  }
}
EOF
kubectl create secret generic dockerconfig-secret --from-file=config.json
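If the base64 invocation is indeed the culprit, one common pitfall is that echo appends a trailing newline which ends up inside the encoded auth value; a hedged sketch of a safer variant:

# printf avoids encoding a trailing newline into the auth string
encodeAuth=$(printf '%s' "${{ secrets.QUAY_USERNAME }}:${{ secrets.QUAY_ROBOT_TOKEN }}" | base64 -w0)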

Testcontainer issue: could not insert 'ip_tables': Operation not permitted

Issue

Tekton task maven fails as testcontainer reports the following error:

2023-11-08 14:38:19,861 INFO  [org.tes.doc.DockerClientProviderStrategy] (build-5) Found Docker environment with Environment variables, system properties and defaults. Resolved dockerHost=tcp://10.128.2.169:2376
2023-11-08 14:38:19,864 INFO  [org.tes.DockerClientFactory] (build-5) Docker host IP address is 10.128.2.169
2023-11-08 14:38:19,961 INFO  [org.tes.DockerClientFactory] (build-5) Connected to docker: 
  Server Version: 4.6.2
  API Version: 1.41
  Operating System: fedora
  Total Memory: 1951 MB
2023-11-08 14:38:19,964 WARN  [org.tes.uti.ResourceReaper] (build-5) 
********************************************************************************
Ryuk has been disabled. This can cause unexpected behavior in your environment.
********************************************************************************
2023-11-08 14:38:19,967 INFO  [org.tes.DockerClientFactory] (build-5) Checking the system...
2023-11-08 14:38:19,968 INFO  [org.tes.DockerClientFactory] (build-5) ✔︎ Docker server version should be at least 1.6.0
2023-11-08 14:38:19,980 INFO  [org.tes.uti.ImageNameSubstitutor] (build-5) Image name substitution will be performed by: DefaultImageNameSubstitutor (composite of 'ConfigurationFileImageNameSubstitutor' and 'PrefixingImageNameSubstitutor')
2023-11-08 14:38:20,013 INFO  [🐳 .io/postgres:14]] (build-5) Pulling docker image: docker.io/postgres:14. Please be patient; this may take some time but only needs to be done once.
2023-11-08 14:38:34,722 INFO  [🐳 .io/postgres:14]] (docker-java-stream--933788147) Starting to pull image
2023-11-08 14:38:35,090 INFO  [🐳 .io/postgres:14]] (docker-java-stream--933788147) Pulling image layers:  5 pending,  1 downloaded,  0 extracted, (0 bytes/? MB)
2023-11-08 14:38:35,270 INFO  [🐳 .io/postgres:14]] (docker-java-stream--933788147) Pulling image layers:  4 pending,  2 downloaded,  0 extracted, (0 bytes/? MB)
2023-11-08 14:38:35,787 INFO  [🐳 .io/postgres:14]] (docker-java-stream--933788147) Pulling image layers:  4 pending,  3 downloaded,  0 extracted, (0 bytes/? MB)
2023-11-08 14:38:36,136 INFO  [🐳 .io/postgres:14]] (docker-java-stream--933788147) Pulling image layers:  4 pending,  4 downloaded,  0 extracted, (7 MB/? MB)
2023-11-08 14:38:36,186 INFO  [🐳 .io/postgres:14]] (docker-java-stream--933788147) Pulling image layers:  3 pending,  5 downloaded,  0 extracted, (7 MB/? MB)
2023-11-08 14:38:36,741 INFO  [🐳 .io/postgres:14]] (docker-java-stream--933788147) Pulling image layers:  3 pending,  6 downloaded,  0 extracted, (7 MB/? MB)
2023-11-08 14:38:37,027 INFO  [🐳 .io/postgres:14]] (docker-java-stream--933788147) Pulling image layers:  2 pending,  7 downloaded,  0 extracted, (23 MB/? MB)
2023-11-08 14:38:38,234 INFO  [🐳 .io/postgres:14]] (docker-java-stream--933788147) Pulling image layers:  5 pending,  8 downloaded,  0 extracted, (34 MB/? MB)
2023-11-08 14:38:39,445 INFO  [🐳 .io/postgres:14]] (docker-java-stream--933788147) Pulling image layers:  4 pending,  9 downloaded,  0 extracted, (48 MB/? MB)
2023-11-08 14:38:39,542 INFO  [🐳 .io/postgres:14]] (docker-java-stream--933788147) Pulling image layers:  3 pending, 10 downloaded,  0 extracted, (48 MB/? MB)
2023-11-08 14:38:39,593 INFO  [🐳 .io/postgres:14]] (docker-java-stream--933788147) Pulling image layers:  2 pending, 11 downloaded,  0 extracted, (48 MB/? MB)
2023-11-08 14:38:42,871 INFO  [🐳 .io/postgres:14]] (docker-java-stream--933788147) Pulling image layers:  1 pending, 12 downloaded,  0 extracted, (119 MB/? MB)
2023-11-08 14:38:54,239 INFO  [🐳 .io/postgres:14]] (docker-java-stream--933788147) Pulling image layers:  0 pending, 13 downloaded,  0 extracted, (119 MB/130 MB)
2023-11-08 14:38:54,327 INFO  [🐳 .io/postgres:14]] (docker-java-stream--933788147) Pulling image layers:  0 pending, 14 downloaded,  0 extracted, (119 MB/130 MB)
2023-11-08 14:38:54,421 INFO  [🐳 .io/postgres:14]] (docker-java-stream--933788147) Pulling image layers:  0 pending, 14 downloaded,  0 extracted, (119 MB/130 MB)
2023-11-08 14:38:54,422 INFO  [🐳 .io/postgres:14]] (docker-java-stream--933788147) Pull complete. 14 layers, pulled in 19s (downloaded 119 MB at 6 MB/s)
2023-11-08 14:38:54,490 INFO  [🐳 .io/postgres:14]] (build-5) Creating container for image: docker.io/postgres:14
2023-11-08 14:38:54,509 WARN  [🐳 .io/postgres:14]] (build-5) Reuse was requested but the environment does not support the reuse of containers
To enable reuse of containers, you must set 'testcontainers.reuse.enable=true' in a file located at /root/.testcontainers.properties
2023-11-08 14:38:54,658 INFO  [🐳 .io/postgres:14]] (build-5) Container docker.io/postgres:14 is starting: 682f4bd0b0ac3ca5ee39b04905f760ac2428bd8998ad6105d8417dbc69964e73
2023-11-08 14:38:55,614 ERROR [🐳 .io/postgres:14]] (build-5) Could not start container: com.github.dockerjava.api.exception.InternalServerErrorException: Status 500: {"cause":"netavark (exit code 1): code: 3, msg: modprobe: ERROR: could not insert 'ip_tables': Operation not permitted\niptables v1.8.8 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)\nPerhaps iptables or your kernel needs to be upgraded.\n","message":"netavark (exit code 1): code: 3, msg: modprobe: ERROR: could not insert 'ip_tables': Operation not permitted\niptables v1.8.8 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)\nPerhaps iptables or your kernel needs to be upgraded.\n","response":500}

        at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.execute(DefaultInvocationBuilder.java:247)
        at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.post(DefaultInvocationBuilder.java:102)
        at org.testcontainers.shaded.com.github.dockerjava.core.exec.StartContainerCmdExec.execute(StartContainerCmdExec.java:31)
        at org.testcontainers.shaded.com.github.dockerjava.core.exec.StartContainerCmdExec.execute(StartContainerCmdExec.java:13)
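The netavark error indicates that the ip_tables kernel module cannot be loaded from the rootless podman session inside the VM. A possible mitigation, assumed rather than verified here, is to pre-load the required modules as root in the VM (for example from the provisioning script):

# Load the iptables-related kernel modules once as root inside the VM
sudo modprobe ip_tables
sudo modprobe iptable_nat
# Make the modules persist across VM reboots
printf 'ip_tables\niptable_nat\n' | sudo tee /etc/modules-load.d/iptables.conf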
