common's Issues

When containers.conf file is overridden by libpod.conf, print warning in `podman info`

This is a placeholder for a bug that isn't yet a bug! ;^) Work is well underway to create a containers.conf file that will be used by Podman, Buildah, Skopeo, and perhaps more container technologies. Under the current design, Podman will initially use the libpod.conf file if both it and the containers.conf file are present on the machine. When that occurs, we should change the output of the podman info command to print something like this at the bottom of its output:

Podman is currently configured to use the libpod.conf file which soon
will be deprecated.

Please move any customizations made to that file to the containers.conf file
which will replace libpod.conf in a future release.

Please see containers.conf(5) for more details.

Or similar wording deemed acceptable. This way the user will see the upcoming change, and when issues are reported, the maintainers will get a good feel for who is using which config file. If only the containers.conf file is present, this warning would not be shown.

Default capabilities too restrictive

Recently, Arch Linux packaged containers.conf into podman's dependencies. Now I cannot run containers as I used to:

$ podman --log-level=debug run --network=host docker.io/adminer:latest
[...]
Error: capset: Operation not permitted: OCI runtime permission denied error

It seems the cause is default_capabilities in be31ba3. Reverting it made the above command work again.

My podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.16.1
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: Unknown
    path: /usr/bin/conmon
    version: 'conmon version 2.0.21, commit: 35a2fa83022e56e18af7e6a865ba5d7165fa2a4a'
  cpus: 12
  distribution:
    distribution: arch
    version: unknown
  eventLogger: journald
  hostname: niflheim
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 10000
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 10000
  kernel: 5.9.10-arch1-1
  linkmode: dynamic
  memFree: 22212132864
  memTotal: 33567657984
  ociRuntime:
    name: crun
    package: Unknown
    path: /usr/bin/crun
    version: |-
      crun version 0.15.1
      commit: eb0145e5ad4d8207e84a327248af76663d4e50dd
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: Unknown
    version: |-
      slirp4netns version 1.1.6
      commit: a995c1642ee9a59607dccf87758de586b501a800
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.4.2
  swapFree: 0
  swapTotal: 0
  uptime: 2h 21m 15.6s (Approximately 0.08 days)
registries:
  search:
  - docker.io
store:
  configFile: /home/rzl/.config/containers/storage.conf
  containerStore:
    number: 5
    paused: 0
    running: 5
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: Unknown
      Version: |-
        fusermount3 version: 3.10.0
        fuse-overlayfs: version 1.1.0
        FUSE library version 3.10.0
        using FUSE kernel interface version 7.31
  graphRoot: /home/rzl/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 8
  runRoot: /run/user/1000/containers
  volumePath: /home/rzl/.local/share/containers/storage/volumes
version:
  APIVersion: 2.0.0
  Built: 1606240380
  BuiltTime: Tue Nov 24 17:53:00 2020
  GitCommit: 9f6d6ba0b314d86521b66183c9ce48eaa2da1de2
  GoVersion: go1.15.5
  OsArch: linux/amd64
  Version: 2.1.1

Packages:

$ pacman -Q containers-common crun podman runc skopeo
containers-common 0.29.0-2
crun 0.15.1-1
podman 2.1.1-2
runc 1.0.0rc92-1
skopeo 1.2.0-2

pkg/config tests are changing containers.conf on the system

The pkg/config tests are changing the host's containers.conf files (/etc/containers/containers.conf for root and ~/.config/containers/containers.conf for ordinary users). Tests shouldn't change host files; instead they should create temporary files and directories.

Unfortunately, I don't have time to look into the issue at the moment.

Add annotations into containers.conf file

With the upcoming containers.conf file, the ability to hard-code "--annotation" OCI hooks in the conf file would be a good idea and a very handy tool to have.

This would open the door for sysadmins, or even users, to hard-code OCI hooks that prepare the environment (for example, bind-mounting GPU drivers based on the nvidia-smi binary), and also to hard-code a cleanup process for Podman/Buildah.
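A hypothetical containers.conf fragment for such a knob might look like this (the key name and value syntax are purely illustrative; the real schema would be decided in the proposal):

```toml
[containers]
# Illustrative only: default annotations applied to every container,
# which OCI hooks could then match on.
annotations = ["example.com/gpu-hook=enabled"]
```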

Include containers-common files in this repository for easier packaging

Summary

Add the configuration files included in the Fedora dist-git repo to this repository.

Background

For my use case, I want to package a new version of skopeo in a RHEL environment in a custom build environment. In official Fedora/RHEL upstream, there is also the containers-common/skopeo-containers package that includes a handful of configuration files that currently live in the Fedora dist-git repo.

Some of these are helpful to have available for custom patches. For example, I want to create a patch to change the registries.conf file to use a private Quay registry by default.

Obviously this could be handled by configuration-management tools, but it would be more straightforward, convenient, and time-saving to create a package that ships the right registries by default, especially if the environment spans thousands of RHEL nodes. This can be done with the Fedora model of packaging by manually copying over the files, but that has a few downsides:

  1. Files can easily become out-of-date from upstream
  2. Custom build systems that don't follow the dist-git model will have a hard time building a package if these files are not tracked in a git repository

Ultimately, shipping these files in a git repository (and possibly even splitting the RPM spec out of skopeo and into its own standalone package) would be an added convenience for downstream users with specific use cases.

Details

Short-term: Include the files in this repository, preferably with a Makefile that installs them in the right places.

Long-term: Split the containers-common package out of skopeo package and build it based on sources in this repo.

Outcome

  • Easier for downstream users to consume these files and customize them to specific needs
  • Enables administrators to ship known-good configurations in a package instead of using config management tools to update them (which can cost a lot of time)

Default `ping_group_range` value causes EINVAL when written from unshared ns

When I attempt to run podman containers from an unshared user namespace and network namespace (related to my adventures described in containers/podman#7774), I get OCI runtime errors caused by a failure to write the default ping_group_range value from containers.conf. I assume this is because the group ID 1 probably isn't included in my subgid map.

The default value changed from 0 1000 to 0 1 in #319, but neither value works for me. Instead, I think we should be using 0 0, since ping_group_range is inclusive per icmp(7). This config is vendored in containers/podman, and I think the change would need to land there to fix my issue. For the moment, it's easy for me to work around by having a per-user containers.conf like:

[containers]
default_sysctls=[]
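For comparison, a per-user containers.conf that keeps ping working by using the inclusive range proposed above might look like this (sketch):

```toml
[containers]
# 0 0 covers only GID 0, since ping_group_range is inclusive per icmp(7).
default_sysctls = ["net.ipv4.ping_group_range=0 0"]
```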

skopeo copy --retry-times does not retry on i/o timeout

We face intermittent networking issues copying an image from Artifactory to ECR. To mitigate them, we tried setting --retry-times, but it appears that an i/o timeout is not considered retryable. This makes --retry-times useless, since we now have to surround it with our own retry logic.

Seems like the root cause of this is the isRetryable helper function in github.com/containers/common. For the purposes of skopeo copy, I would expect all errors to be deemed retryable, since the operation is idempotent. Even auth failures should be retryable, since in the past we have observed Artifactory oddities with spurious auth failures.

On a related note, skopeo copy seems to be using a 30 second timeout, or something like that. It's not clear to me where that is coming from. The docs mention a --command-timeout flag (which we are not using), but they do not make it clear whether that is the timeout of a single retry attempt, or the entire command. They also do not mention any default value. My expectation here is that the timeout should apply to a single retry, or else that again makes --retry-times useless in the face of sporadic network issues.

Integration into CRI-O

Hey,

I'm wondering what the plans for an integration of this project (especially the configuration part) into CRI-O are. Is the idea that CRI-O migrates to libpod in the first place or do we want to create a configuration middle-layer based on containers/common?

check access rights for systemd-related config options

Some config knobs require access to systemd/DBUS etc. (e.g., cgroups-manager, logger, eventer). In those cases, we should check early on if we have access rights to systemd/DBUS.

In the past week we hit two Podman issues (customer portal and upstream) where rootless users did not have the required access rights. One could be solved with su --login, but the other could not.

Checking early will protect users from facing all kinds of errors from all kinds of execution paths, and will allow us to give a meaningful error message to help them resolve the issue (e.g., using --login with su, or pointing out that the user is missing group access).

From the debugging session in containers/podman#10308

@rhatdan PTAL

Version file

Unless I'm missing it in my quick scan, we don't have a version file in this repository and need one.

containers doesn't set up image signing by default

Since containers and podman implement signature policy and image signing, it feels strange that the registries enabled by default, which should have signed images, are treated as unsigned.

This includes registry.fedoraproject.org, registry.access.redhat.com, and registry.centos.org. As a user, I'd expect Fedora to provide the same level of security for its modern tooling as for legacy tooling like yum and dnf, which both check and verify GPG signatures.

As a user, I would have expected the default policy for containers to require signatures for the Fedora, CentOS, and Red Hat registries and, if need be, to ship the public keys necessary for this.

Security error 0.40.0 pulling go dependencies

go mod tidy

go: downloading github.com/containers/image/v5 v5.13.0
verifying github.com/containers/image/[email protected]: checksum mismatch
downloaded: h1:/7tMoRRPJ5ltba/ClmrAaYE3AjTiZ9ELxfpUK0psxV8=
go.sum: h1:iKjIj4VlQsCz/AvdMu9p4nbLwz218aklFRACjprv6z8=

SECURITY ERROR
This download does NOT match an earlier download recorded in go.sum.
The bits may have been replaced on the origin server, or an attacker may
have intercepted the download attempt.

`Login` and `Logout` API is missing unit tests

The API functions are not tested from what I can see:

common/pkg/auth/auth.go

Lines 66 to 68 in 9d34b37

// Login implements a “log in” command with the provided opts and args
// reading the password from opts.Stdin or the options in opts.
func Login(ctx context.Context, systemContext *types.SystemContext, opts *LoginOptions, args []string) error {

common/pkg/auth/auth.go

Lines 204 to 205 in 9d34b37

// Logout implements a “log out” command with the provided opts and args
func Logout(systemContext *types.SystemContext, opts *LogoutOptions, args []string) error {

It also looks like we use ginkgo/gomega-based tests mixed with standard unit tests within the project, so I'm wondering which one you'd prefer.

SVG Logo

Is there an SVG variant of the containers logo? I want to use it in a logo for an internal project. I think this should be fine under the Apache License, or am I wrong?

Allow futex_time64 in default seccomp policy

I'm running an ARM32 rootless container created with podman on an ARM64 host. With seccomp enforced, multiple applications fail with the error:

The futex facility returned an unexpected error code

It appears to be caused by the lack of futex_time64 in the allowed syscalls. As far as I can gather, futex_time64 is futex for 32-bit platforms with 64-bit time, so I think it could be added to the default policy.
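In the format of the default seccomp.json, the change would amount to adding the syscall to an allow rule, roughly like this (illustrative fragment, not the full policy):

```json
{
	"names": ["futex", "futex_time64"],
	"action": "SCMP_ACT_ALLOW"
}
```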

Misleading messages when login/logout with docker-credential-pass

The messages printed when using podman login/logout are misleading.

Steps to reproduce:

The following steps assume that docker-credential-pass from https://github.com/docker/docker-credential-helpers is installed:

  1. create ~/.config/containers/auth.json containing:
{
    "credHelpers": {
        "foo.io": "pass"
    }
}
  2. add the credential to pass with docker-credential-pass:
$ docker-credential-pass store # press enter
{"ServerURL":"foo.io","Username":"your_username","Secret":"your_secret"} # press ctrl+d
  3. then log in with:
$ podman login foo.io
Authenticating with existing credentials...
Existing credentials are valid. Already logged in to foo.io
  4. then log out with:
$ podman logout foo.io
Not logged into foo.io with current tool. Existing credentials were established via docker login. Please use docker logout instead.

Expected results

The messages at steps 3 and 4 should be less specific, or the logic should handle the case when credential helpers are in use.

In fact, when we use docker-credential-pass with podman, we do not have to log in or out. We only need to unlock the pass database.


Notes:

If you get the following error at step 3:

 $ podman login foo.io
Error: error reading auth file: 1 error occurred:
        * error getting credentials - err: exit status 1, out: `exit status 2: gpg: decryption failed: No secret key`

To be able to enter the passphrase that unlocks the pass database, you have to run:

export GPG_TTY=$(tty)

gopass based secret driver

It would be really nice to see a secret driver using gopass/pass, since this would store secrets securely, in contrast to storing them in an unencrypted JSON file.

The implementation could shell out to call gopass directly, or use its codebase to implement the driver without requiring an additional binary on the host.

Here is the project URL for reference: https://github.com/gopasspw/gopass

Defaulting to journald log driver does not pass Podman CI

We recently made journald the default log driver but had to revert it because it does not pass Podman CI. This issue is intended to track (and not forget about) it.

What we need to do is get back to the state before #441, open a PR in Podman, and tackle the CI failures one by one. @rhatdan mentioned that flags like --tail and --since did not work as the tests expected, so we may have off-by-one errors or formatting work to do in Podman's journald driver.

@rhatdan @ashley-cui PTAL

RFE: Support systemd credentials

Secrets sound a lot like systemd "credentials" (though we try a bit harder to lock those down, via ramfs rather than tmpfs). It might be worth making them compatible by setting the CREDENTIALS_PATH env var implicitly in podman. By making podman secrets and systemd credentials compatible, you gain the ability to pass them from podman to systemd running in the container, further down the tree. - via Twitter

freedesktop.org/software/systemd/man/systemd.exec.html#Credentials
https://twitter.com/pid_eins/status/1381731529404071940

/dev/null not (always) owned by root:root on lxd

As it turns out, the following test code is problematic on lxd:

dev, err := DeviceFromPath("/dev/null")
assert.NoError(t, err)
assert.Equal(t, len(dev), 1)
assert.Equal(t, dev[0].Major, int64(1))
assert.Equal(t, dev[0].Minor, int64(3))
assert.Equal(t, string(dev[0].Permissions), "rwm")
assert.Equal(t, dev[0].Uid, uint32(0))
assert.Equal(t, dev[0].Gid, uint32(0))

As you can see from https://discuss.linuxcontainers.org/t/device-node-ownership-guarantees/10325, on rootless lxd device nodes actually appear as nobody:nobody instead of root:root, which breaks this test.

Not sure how much value this test actually provides, though. What are your thoughts?

libimage: pull: don't resolve short names on explicit docker:// reference

… I might be missing something; how do short names like busybox vs explicit docker://docker.io/busybox pulls work? AFAICS this code parses both into a docker://busybox ImageReference, losing the distinction — and then may try to do short name lookup on explicit docker://… formats, which should, I think, never happen — short name lookup is used only for Docker-like non-transport-qualified formats, so that the c/image-transport-qualified strings always mean the same thing in all callers.

Originally posted by @mtrmac in #579 (comment)

Add stop-timeout field

Add a field for the stop timeout: the time to wait before killing a container with SIGKILL. Podman uses a hard-coded default of 10 seconds, which is not enough on some machines (e.g., VMs with low resources).
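In containers.conf, such a knob might look like this (the field name and section are my assumptions for this sketch):

```toml
[engine]
# Seconds to wait after SIGTERM before escalating to SIGKILL.
stop_timeout = 30
```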

Cc: @mheon @rhatdan

How to verify if a runtime supports "--format=json" ?

While trying to use runsc as the runtime for podman, I noticed the option "runtime_supports_json" in containers.conf. If I want to add runsc there, how can I verify that "--format=json" is supported by runsc?
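One practical check, assuming runsc behaves like other OCI runtimes, is to invoke it by hand with the flag (e.g. `runsc state --format=json <container-id>`) and see whether the flag is rejected. If it is accepted, the containers.conf entry might look like this (the list contents are illustrative):

```toml
[engine]
runtime_supports_json = ["crun", "runc", "runsc"]
```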

Two different blob info caches are used

Description

Only 'buildah commit' uses a blob info cache at a different location from other commands.

This seems to be caused by containers/buildah#2903. It seems that most commands still get the old location here.

Also, I'm not sure if the following side-effects are intended:

  • This change also affects root mode.
    (from '/var/lib/containers/storage/cache' to '/var/lib/containers/cache')
  • Blob info caches are not separated by the '--root' flag. Without this change, a cache is created at each graph root.

Steps to reproduce the issue:

  1. Run buildah pull with --log-level debug and see a message starting with:
    'Using blob info cache at ...'
  2. Run buildah commit with --log-level debug and see a message starting with:
    'Using blob info cache at ...'

Describe the results you received:

$ sudo buildah --log-level debug pull quay.io/libpod/busybox
<snip>
DEBU[0000] Using blob info cache at /var/lib/containers/storage/cache/blob-info-cache-v1.boltdb
<snip>

$ sudo buildah --log-level debug commit busybox-working-container mydir/
<snip>
DEBU[0000] Using blob info cache at /var/lib/containers/cache/blob-info-cache-v1.boltdb
<snip>

Describe the results you expected:
All commands use the blob info cache at the same location.

Output of rpm -q buildah or apt list buildah:

buildah-1.21.1-1.fc33.x86_64

Output of buildah version:

Version:         1.21.1
Go Version:      go1.15.12
Image Spec:      1.0.1-dev
Runtime Spec:    1.0.2-dev
CNI Spec:        0.4.0
libcni Version:  
image Version:   5.12.0
Git Commit:      
Built:           Wed Dec 31 19:00:00 1969
OS/Arch:         linux/amd64

Output of cat /etc/*release:

Fedora release 33 (Thirty Three)
NAME=Fedora
VERSION="33 (Workstation Edition)"
ID=fedora
VERSION_ID=33
VERSION_CODENAME=""
PLATFORM_ID="platform:f33"
PRETTY_NAME="Fedora 33 (Workstation Edition)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:33"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f33/system-administrators-guide/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=33
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=33
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Workstation Edition"
VARIANT_ID=workstation
Fedora release 33 (Thirty Three)
Fedora release 33 (Thirty Three)

Output of uname -a:

Linux mylaptop 5.12.15-200.fc33.x86_64 #1 SMP Wed Jul 7 19:56:49 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Output of cat /etc/containers/storage.conf:

# This file is the configuration file for all tools
# that use the containers/storage library.
# See man 5 containers-storage.conf for more information
# The "container storage" table contains all of the server options.
[storage]

# Default Storage Driver, Must be set for proper operation.
driver = "overlay"

# Temporary storage location
runroot = "/run/containers/storage"

# Primary Read/Write location of container storage
graphroot = "/var/lib/containers/storage"

# Storage path for rootless users
#
# rootless_storage_path = "$HOME/.local/share/containers/storage"

[storage.options]
# Storage options to be passed to underlying storage drivers

# AdditionalImageStores is used to pass paths to additional Read/Only image stores
# Must be comma separated list.
additionalimagestores = [
]

# Remap-UIDs/GIDs is the mapping from UIDs/GIDs as they should appear inside of
# a container, to the UIDs/GIDs as they should appear outside of the container,
# and the length of the range of UIDs/GIDs.  Additional mapped sets can be
# listed and will be heeded by libraries, but there are limits to the number of
# mappings which the kernel will allow when you later attempt to run a
# container.
#
# remap-uids = 0:1668442479:65536
# remap-gids = 0:1668442479:65536

# Remap-User/Group is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid or /etc/subgid file.  Mappings are set up starting
# with an in-container ID of 0 and then a host-level ID taken from the lowest
# range that matches the specified name, and using the length of that range.
# Additional ranges are then assigned, using the ranges which specify the
# lowest host-level IDs first, to the lowest not-yet-mapped in-container ID,
# until all of the entries have been used for maps.
#
# remap-user = "containers"
# remap-group = "containers"

# Root-auto-userns-user is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid and /etc/subgid file.  These ranges will be partitioned
# to containers configured to create automatically a user namespace.  Containers
# configured to automatically create a user namespace can still overlap with containers
# having an explicit mapping set.
# This setting is ignored when running as rootless.
# root-auto-userns-user = "storage"
#
# Auto-userns-min-size is the minimum size for a user namespace created automatically.
# auto-userns-min-size=1024
#
# Auto-userns-max-size is the maximum size for a user namespace created automatically.
# auto-userns-max-size=65536

[storage.options.overlay]
# ignore_chown_errors can be set to allow a non privileged user running with
# a single UID within a user namespace to run containers. The user can pull
# and use any image even those with multiple uids.  Note multiple UIDs will be
# squashed down to the default uid in the container.  These images will have no
# separation between the users in the container. Only supported for the overlay
# and vfs drivers.
#ignore_chown_errors = "false"

# Path to an helper program to use for mounting the file system instead of mounting it
# directly.
#mount_program = "/usr/bin/fuse-overlayfs"

# mountopt specifies comma separated list of extra mount options
mountopt = "nodev"

# Set to skip a PRIVATE bind mount on the storage home directory.
# skip_mount_home = "false"

# Size is used to set a maximum size of the container image.
# size = ""

# ForceMask specifies the permissions mask that is used for new files and
# directories.
#
# The values "shared" and "private" are accepted.
# Octal permission masks are also accepted.
#
#  "": No value specified.
#     All files/directories, get set with the permissions identified within the
#     image.
#  "private": it is equivalent to 0700.
#     All files/directories get set with 0700 permissions.  The owner has rwx
#     access to the files. No other users on the system can access the files.
#     This setting could be used with networked based homedirs.
#  "shared": it is equivalent to 0755.
#     The owner has rwx access to the files and everyone else can read, access
#     and execute them. This setting is useful for sharing containers storage
#     with other users.  For instance have a storage owned by root but shared
#     to rootless users as an additional store.
#     NOTE:  All files within the image are made readable and executable by any
#     user on the system. Even /etc/shadow within your image is now readable by
#     any user.
#
#   OCTAL: Users can experiment with other OCTAL Permissions.
#
#  Note: The force_mask Flag is an experimental feature, it could change in the
#  future.  When "force_mask" is set the original permission mask is stored in
#  the "user.containers.override_stat" xattr and the "mount_program" option must
#  be specified. Mount programs like "/usr/bin/fuse-overlayfs" present the
#  extended attribute permissions to processes within containers rather than the
#  "force_mask"  permissions.
#
# force_mask = ""

[storage.options.thinpool]
# Storage Options for thinpool

# autoextend_percent determines the amount by which pool needs to be
# grown. This is specified in terms of % of pool size. So a value of 20 means
# that when threshold is hit, pool will be grown by 20% of existing
# pool size.
# autoextend_percent = "20"

# autoextend_threshold determines the pool extension threshold in terms
# of percentage of pool size. For example, if threshold is 60, that means when
# pool is 60% full, threshold has been hit.
# autoextend_threshold = "80"

# basesize specifies the size to use when creating the base device, which
# limits the size of images and containers.
# basesize = "10G"

# blocksize specifies a custom blocksize to use for the thin pool.
# blocksize="64k"

# directlvm_device specifies a custom block storage device to use for the
# thin pool. Required if you setup devicemapper.
# directlvm_device = ""

# directlvm_device_force wipes device even if device already has a filesystem.
# directlvm_device_force = "True"

# fs specifies the filesystem type to use for the base device.
# fs="xfs"

# log_level sets the log level of devicemapper.
# 0: LogLevelSuppress 0 (Default)
# 2: LogLevelFatal
# 3: LogLevelErr
# 4: LogLevelWarn
# 5: LogLevelNotice
# 6: LogLevelInfo
# 7: LogLevelDebug
# log_level = "7"

# min_free_space specifies the min free space percent in a thin pool require for
# new device creation to succeed. Valid values are from 0% - 99%.
# Value 0% disables
# min_free_space = "10%"

# mkfsarg specifies extra mkfs arguments to be used when creating the base
# device.
# mkfsarg = ""

# metadata_size is used to set the `pvcreate --metadatasize` options when
# creating thin devices. Default is 128k
# metadata_size = ""

# Size is used to set a maximum size of the container image.
# size = ""

# use_deferred_removal marks devicemapper block device for deferred removal.
# If the thinpool is in use when the driver attempts to remove it, the driver
# tells the kernel to remove it as soon as possible. Note this does not free
# up the disk space, use deferred deletion to fully remove the thinpool.
# use_deferred_removal = "True"

# use_deferred_deletion marks thinpool device for deferred deletion.
# If the device is busy when the driver attempts to delete it, the driver
# will attempt to delete device every 30 seconds until successful.
# If the program using the driver exits, the driver will continue attempting
# to cleanup the next time the driver is used. Deferred deletion permanently
# deletes the device and all data stored in device will be lost.
# use_deferred_deletion = "True"

# xfs_nospace_max_retries specifies the maximum number of retries XFS should
# attempt to complete IO when ENOSPC (no space) error is returned by
# underlying storage device.
# xfs_nospace_max_retries = "0"

RFE: Provide a containers.conf.d option like registries.conf.d

In the December 2020 Podman Community Meeting, Neal Gompa (@Conan-Kudo) asked if we could provide a containers.conf.d like the registries.conf.d we already provide. During today's meeting, that was thought to be a good idea and a possible deliverable.

Adding this issue as a reminder; it might be better logged against containers/common.
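By analogy with registries.conf.d, the drop-in layout might look like this (hypothetical paths; presumably later files would override earlier ones, as in registries.conf.d):

```
/etc/containers/containers.conf.d/
├── 10-distro-defaults.conf
└── 99-site-overrides.conf
```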

Align 'containers/common' release versions with per-distribution package versions

There's currently some potential for confusion between the tagged, versioned releases of containers/common on GitHub and the packaged binary releases provided for various distributions via the Kubic repository.

For example:

This seems to be due to the fact that containers/common package version numbers are derived from the skopeo source (where release versions 1.2.0 and 1.2.1 do exist) by way of a GitLab packaging repository.

I don't know exactly what a solution would look like here, but perhaps it would begin with a separate containers-common packaging repository in GitLab, and dependency references between the separated skopeo and containers-common packages as appropriate.

Creating an additional repository under the rhcontainerbot in GitLab may require permissions internal to the containers project.

May be related to (already-resolved) issue #385.

Consider stopping go vendoring for projects like this

I think the main reason for using vendoring was to simplify packaging efforts, which still seems true nowadays. For projects we do not package (like this library), we could consider not vendoring the dependencies any more. WDYT?

"EOF" reading from registry should be retryable in Skopeo sync

Docker Hub requests occasionally do not complete; see the following errors:

time="2020-08-14T06:48:05Z" level=fatal msg="Error copying tag \"docker://registry.hub.docker.com/grafana/grafana:6.2.5\": Error reading config blob sha256:c912f3f026edac2ee4fec5770487187f370e2a085185b691273586a5a7e3a88d: Get \"https://registry.hub.docker.com/v2/grafana/grafana/blobs/sha256:c912f3f026edac2ee4fec5770487187f370e2a085185b691273586a5a7e3a88d\": EOF"
time="2020-08-14T06:07:05Z" level=fatal msg="Error copying tag \"docker://registry.hub.docker.com/library/nginx:1.12.0-perl\": Error initializing source docker://registry.hub.docker.com/library/nginx:1.12.0-perl: Get \"https://registry.hub.docker.com/v2/library/nginx/manifests/1.12.0-perl\": EOF"
time="2020-08-14T06:58:06Z" level=fatal msg="Error copying tag \"docker://registry.hub.docker.com/library/redis:4.0.12\": Error determining manifest MIME type for docker://registry.hub.docker.com/library/redis:4.0.12: Get \"https://registry.hub.docker.com/v2/library/redis/manifests/sha256:a79165a289dd044bb2ea88e20c04a78064849e84832f7d5e8f04ad471d79c10f\": EOF"
time="2020-08-14T06:05:05Z" level=fatal msg="Error copying tag \"docker://registry.hub.docker.com/grafana/grafana:5.2.0-beta3\": Error initializing source docker://registry.hub.docker.com/grafana/grafana:5.2.0-beta3: Get \"https://registry.hub.docker.com/v2/grafana/grafana/manifests/5.2.0-beta3\": EOF"
time="2020-08-14T07:12:05Z" level=fatal msg="Error copying tag \"docker://registry.hub.docker.com/library/redis:5.0-alpine\": Error determining manifest MIME type for docker://registry.hub.docker.com/library/redis:5.0-alpine: Get \"https://registry.hub.docker.com/v2/library/redis/manifests/sha256:abd9d0fc18e163747253aae8d69be344c7e90d6b6d6027fa7f2f2c0e6c20b2b8\": EOF"
time="2020-08-14T06:27:05Z" level=fatal msg="Error copying tag \"docker://registry.hub.docker.com/grafana/grafana:6.5.0\": Error determining manifest MIME type for docker://registry.hub.docker.com/grafana/grafana:6.5.0: Get \"https://registry.hub.docker.com/v2/grafana/grafana/manifests/sha256:ef60084bc64a867225439754c6b46689805b9ecedbd93e1be2e523dc1db438d6\": EOF"

Those are not detected as retryable, and will cause a Skopeo Sync job to fail. Could these be marked as retryable?

latest tag is maybe not correct?

Hey there! I'm using this project as the basis for a containers-common package on Arch Linux.
While tracking the upstream, I realized that you introduced a major version bump (0.31.0 -> 1.31.1), and this is probably accidental.

Debian/Ubuntu packaging doesn't mark /etc/containers/containers.conf as a conffile

Version: 1.2.0~2

(Not sure this is the right place to report, I am using the apt repo at https://download.opensuse.org/repositories/devel?)

The Debian/Ubuntu packages install a symlink from /etc/containers/containers.conf to /usr/share/containers/containers.conf, but do not mark it as a 'conffile', which means that if a file already exists at that location it gets clobbered on package install. This is done for storage.conf, though.

I am also not sure why this symlink exists in the first place - the comments at the top of the file suggest the version under /usr/share will always be read, with the /etc version selectively overriding it if it exists?
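For reference, dpkg only prompts instead of clobbering when a path is listed in the package's conffiles (debhelper does this automatically for regular files under /etc). A sketch of what the packaging could declare, assuming a debian/conffiles override is the right mechanism here:

```
/etc/containers/containers.conf
/etc/containers/storage.conf
```

Note that dpkg conffile handling assumes a regular file rather than a symlink, so the cleaner fix may be to drop the symlink entirely: as the comment header says, /usr/share/containers/containers.conf is always read, with the /etc version overriding it when present.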

make target all fails on osx

Running make on macOS fails.

 make all
GOARCH=amd64 GO111MODULE=on go build -mod=vendor ./...
GOARCH=386 GO111MODULE=on go build -mod=vendor ./...
cmd/go: unsupported GOOS/GOARCH pair darwin/386
make: *** [build-386] Error 2
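Go 1.15 removed darwin/386 as a valid GOOS/GOARCH pair, so one option is to guard the 32-bit target in the Makefile. A sketch (target and recipe assumed from the error output above, not taken from the repo's Makefile):

```makefile
UNAME_S := $(shell uname -s)

build-386:
ifeq ($(UNAME_S),Darwin)
	@echo "skipping 386 build: darwin/386 is not a supported GOOS/GOARCH pair"
else
	GOARCH=386 GO111MODULE=on go build -mod=vendor ./...
endif
```

Alternatively the target could test `go tool dist list` for the pair before building, which would also cover future removals.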

error adding seccomp filter rule for syscall bdflush: permission denied

#573 causes an error on Ubuntu when running any container with cri-o-runc 1.0.0~rc95.1. It also happens (not too unexpectedly) with buildah bud (first identified here). The problem does not occur with the prior version of containers-common (which doesn't contain the #573 changes).

# podman --runtime=runc run -it --rm quay.io/libpod/alpine:latest echo hello
Error: container_linux.go:380: starting container process caused: error adding seccomp filter rule for syscall bdflush: permission denied: OCI permission denied

Conveniently, I have two VM Images built 3-days apart which bisect the problem versions. See test PR containers/podman#10709

  • c4635821094469632 built on the 13th does not reproduce the problem (containers-common 1_15)
  • c4805484248039424 built on the 16th does reproduce the problem (containers-common 1_16)

Hint: If you check out commit 9efcf62 from that PR, you can use hack/get_ci_vm.sh int podman ubuntu-2010 root host to see it break. Commit f4c50f6 will get you the prior version VM (without the problem).

`no-conmon` description wrong

containers.conf currently claims that ...

# `no-conmon` Container engine runs run without conmon

But that's wrong, as we always run with conmon. What it actually means is that no dedicated cgroup is created for conmon.

It caused some confusion for HPC users.
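A possible rewording of the comment so it describes what the option actually does (suggested text only, not a merged fix):

```toml
# `no-conmon` Do not create a dedicated cgroup for conmon. Note that conmon
# itself still runs for every container; only the extra cgroup is skipped.
```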

A question about name value pairs

Lines 63, 64, and 67:

# specified as "name=value",
# for example:"net.ipv4.ping_group_range = 0 0".
#
default_sysctls = [
"net.ipv4.ping_group_range=0 0",
]

Lines 71, 72, and 79:

# A list of ulimits to be set in containers by default, specified as
# "<ulimit name>=<soft limit>:<hard limit>", for example:
# "nofile=1024:2048"
# See setrlimit(2) for a list of resource names.
# Any limit not specified here will be inherited from the process launching the
# container engine.
# Ulimits has limits for non privileged container engines.
#
# default_ulimits = [
# "nofile=1280:2560",
# ]

Should we add spaces or remove them here, so the comments and the example values agree?

seccomp: default profile has several duplicates

While working on updating the profile to include the changes I made in moby/moby#41889 I noticed that several syscalls are included both in the main "allow for all containers" list and in the "allow for containers with these capabilities" list:

  • unshare, mount, umount, and umount2 are allowed for CAP_SYS_ADMIN and for all containers.
  • reboot is permitted for CAP_SYS_BOOT and for all containers.
  • name_to_handle_at is permitted for CAP_SYS_ADMIN, CAP_SYS_NICE(?), and all containers.
  • clone has some complicated rules under CAP_SYS_ADMIN to block CLONE_NEWUSER, but it is enabled for all containers as well.

Should I remove these duplicates when updating the seccomp profile? I imagine these slipped through the net when the original profile was copied from Docker and then modified over time.

make container common windows safe

For the podman-remote Windows client, we need containers/common's func (c *Config) Validate() error to be Windows-safe. At present it chokes on validating /var/lib/... paths; for Windows (and probably networking as well) this should be a no-op.

needed for podman-2

Support for custom/relocatable install prefixes (Conda packages)

I've packaged podman and buildah (plus crun, runc, conmon, fuse-overlayfs, and dependencies) as Conda packages for conda-forge over at https://github.com/conda-forge/staged-recipes/pulls/mbargull. I also updated our build of skopeo over at https://github.com/conda-forge/skopeo-feedstock. (NB: The build PRs for podman, buildah, and updated skopeo are not yet reviewed/merged but the packages will be available soon.)

In the process of that I had to patch podman/buildah/skopeo (well https://github.com/containers/common and https://github.com/containers/image to be exact) to support the peculiarities of installing software in Conda environments:

These patches work and give me everything needed to, e.g., conda install podman and have a user-installed podman available on any (glibc-based) Linux distribution.
I wanted us (conda-forge and Bioconda) to have functional packages first (so we can start making use of the very useful tools you provide :D) and then look into refining/replacing all of that.
Because, frankly, those patches aren't high quality at all -- I went for a minimal changeset, only tested them manually, and, well, I'm no Go developer and haven't dug terribly deep into your code.
And this is why I'm here: to see what your expert thoughts are and to (hopefully) work out what could be done upstream in your projects to support our use cases (with less fiddly patches).


Let me explain what those "peculiarities" we have with Conda installed software are:
The conda package manager lets users install software into "environments" created at arbitrary prefixes, e.g.,

conda create --prefix=/path/to/podman-env podman

will create a directory /path/to/podman-env (separate from the system /usr installation) where you'd find podman at /path/to/podman-env/bin/podman. You can then "activate" that environment (which is really just setting PATH="/path/to/podman-env/bin:${PATH}") with conda activate /path/to/podman-env and have podman readily available.
From the user perspective this is sometimes compared to containers because you have "one installation installed on top of another". Technically, they aren't that similar, of course, because you get (by design) no kind of isolation from the host system. You can, however, compare it to having statically linked binaries installed at /path/to/podman-env/bin. But instead of just the executables, we also have /path/to/podman-env/lib, /path/to/podman-env/share, etc. in that prefix. However, we don't (necessarily) use static linking, but we do work with a binary package manager.

Which leads to this: Conda allows the user (without root privileges and without touching /etc, /usr, etc.) to install precompiled software packages into, e.g., their home folders. The packages thus not only carry relocatable binaries but all other files are also relocatable. Since we don't know the installation prefix at compile time, but do want to install fully functional packages, we have to infer paths to supplementary files at run time (or at least at installation time).


Concrete example for the containers tools: We want to ship default configuration files like etc/containers/containers.conf or etc/containers/policy.json, install them into the prefix at, e.g., /path/to/podman-env/etc/containers/ and have the tools read them for default values.
The package manager of the system would install default configuration files into /etc or /usr/share but these paths are not accessible for us since we are installing rootless.

The makeshift patches above still honor the configs from /etc/containers if they are available, but fall back to the default configurations in our install prefix. For podman and skopeo this simply determines the path of the current executable (which we know is installed at PREFIX/bin/) and uses relative paths, e.g., ../etc/containers/containers.conf. For buildah, which re-execs itself (re-exec in podman seems to be confined to the runtimes themselves), I had to make it even hackier and embed a C string* into the binary, which is replaced at install time with the path to the installation prefix (forgive the ugliness of the patch; I have no experience with Go).
(* NB: The patch carries an empty string, but I replace it in a build script with a ~255 character string to make room for the replacement at install time.)


I look forward to hearing your thoughts on this and whether/how this Conda use case could be incorporated upstream.

Cheers,
Marcel

Alpine package `containers-common` seems to be broken on stable alpine

No matter how I try to install buildah (or just containers-common) using apk I get an error.

$ container=$(buildah from docker.io/alpine:3.13.1)
$ buildah run $container apk update
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
v3.13.1-89-ga861aa92ff [https://dl-cdn.alpinelinux.org/alpine/v3.13/main]
v3.13.1-88-gd924084049 [https://dl-cdn.alpinelinux.org/alpine/v3.13/community]
OK: 13878 distinct packages available
$ buildah run $container apk search containers-common
$ buildah run $container apk add containers-common
ERROR: unable to select packages:
  containers-common (no such package):
    required by: world[containers-common]
error while running runtime: exit status 1
ERRO exit status 1
$ buildah run $container apk add -X http://dl-3.alpinelinux.org/alpine/edge/testing/ --allow-untrusted buildah
ERROR: unable to select packages:
  containers-common (no such package):
    required by: buildah-1.19.4-r0[containers-common]
error while running runtime: exit status 2
ERRO exit status 2

From what I can tell from the website, containers-common should be in the community repo and thus be available. Even so, installing buildah from the testing repository still fails due to containers-common.

Am I missing something or is there something amiss?
