

Kernel Module Management


The Kernel Module Management Operator manages out-of-tree kernel modules in Kubernetes.

Getting started

Install the bleeding-edge Kernel Module Management Operator:

First, install cert-manager, which is a dependency.

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.yaml
kubectl -n cert-manager wait --for=condition=Available deployment \
	cert-manager \
	cert-manager-cainjector \
	cert-manager-webhook

Then install KMM.

kubectl apply -k https://github.com/kubernetes-sigs/kernel-module-management/config/default

Documentation and lab

You can find examples and labs on the documentation website.

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project at:

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

Contributors

bthurber, cdvultur, chr15p, dependabot[bot], dharmit, enriquebelarte, erusso7, fabiendupont, hershpa, iranzo, jongwooo, k8s-ci-robot, mrbobbytables, mresvanis, pcolledg-amd, qbarrand, tomernewman, ybettan, yevgeny-shnaidman


kernel-module-management's Issues

Add an allow-list of Helm repositories

SRO is currently able to download Helm charts from any https://, oci://, or cm:// destination.
To enhance security, it should be possible to define an allow-list of destinations from which the operator can download charts, possibly with wildcards and/or regexes.
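
A minimal sketch of what such an allow-list could look like; the ConfigMap name, the allowedChartRepositories key, and the pattern semantics are all assumptions for illustration, not an existing API:

apiVersion: v1
kind: ConfigMap
metadata:
  name: sro-chart-allow-list            # hypothetical name
  namespace: special-resource-operator
data:
  # hypothetical key: chart URLs must match one of these patterns,
  # otherwise the operator refuses to download the chart
  allowedChartRepositories: |
    - https://charts.example.com/*
    - oci://registry.internal.example.com/charts/*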

Upgrade controller / notification mechanism

SRO currently only notices upgrades when nodes with new kernels are joining the cluster. Because Kubernetes does not offer a standard upgrade facility or API, SRO cannot be notified in advance of upcoming upgrades, nor can it stop them in case it detects that there will be a problem with a SpecialResource it's managing.
Let us think of a system that would provide an abstract enough API for notifying the cluster that an upgrade is incoming or in progress. It would be up to Kubernetes vendors to implement that API.

@yevgeny-shnaidman @zvonkok please don't hesitate to elaborate your ideas in the comments.

Add a validating webhook

Some complex constraints cannot be expressed through kubebuilder / OpenAPI annotations.
To avoid errors at runtime, non-trivial validations are typically implemented in a Validating Admission Webhook. We should implement such a webhook to detect malformed Modules as early as possible; an example of a Module that fails the checks below follows the list.

Checks:

  • all regexes in the Module are valid
  • all kernel mappings should have a non-empty containerImage field, unless .spec.moduleLoader.container.containerImage is defined
  • .spec.moduleLoader.container.modprobe.moduleName is not empty unless both modprobe.rawArgs.load and modprobe.rawArgs.unload are defined
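
A sketch of a Module that such a webhook should reject; the apiVersion follows the modules.kmm.sigs.k8s.io CRD named elsewhere on this page, and the names are made up:

apiVersion: kmm.sigs.k8s.io/v1beta1
kind: Module
metadata:
  name: invalid-module
spec:
  moduleLoader:
    container:
      modprobe:
        moduleName: ""              # rejected: empty, with no rawArgs.load/unload defined
      kernelMappings:
        - regexp: '^(.*\.x86_64$'   # rejected: the regex does not compile
          # rejected: no containerImage here and none at the container level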

Canary Deployment Support

Issue Summary:

Canary deployments should be supported so that users can run tests to make sure an OOT driver works properly on the cluster, and to prevent the cluster from being impacted by severe issues introduced by new OOT driver modules.

Suggested Priority(P1-P3) & Urgent(Urgent, medium, Low):

P2 & Medium

Issue Detail:

A worker node's kernel can be impacted by issues introduced by an OOT driver module. If KMMO deploys the driver container image to all worker nodes in the cluster without careful testing in the user's environment, it might cause major problems for the cluster. KMMO can reduce this risk by supporting canary deployments: the user specifies a limited number of nodes (or some dedicated nodes) on which to deploy the driver container image, then runs canary tests (or real workloads) on those nodes to verify the stability and quality of the driver. Only once fully confident does the user roll the driver container image out to the remaining nodes in the cluster.

Solution Proposal

The user can specify the number of nodes on which to deploy the driver, and KMMO picks the nodes for the user.
The user can also specify dedicated nodes on which KMMO should deploy the driver container image.
The user can enable or disable canary deployment (see the sketch below).
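
A hypothetical sketch of what this could look like in the Module spec; the canary stanza and all of its fields are invented for illustration:

spec:
  moduleLoader:
    canary:                             # hypothetical stanza
      enabled: true
      maxNodes: 2                       # KMMO picks at most two nodes to start with
      nodeSelector:                     # or pin the canary to dedicated nodes
        kmm.example.com/canary: "true"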

Add new CA certs for HTTPS downloads

SRO currently uses the system's CA certificate store (Debian's ca-certificates, which relies on the Mozilla CA list) to trust the servers from which it is downloading Helm charts.
It is desirable to add a mechanism that dynamically loads additional CA certificates (e.g. through a ConfigMap) into SRO, to cover cases where the server offers a certificate signed by an untrusted CA.
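
A sketch of how additional CAs could be supplied through a ConfigMap; the name and the discovery label are assumptions:

apiVersion: v1
kind: ConfigMap
metadata:
  name: sro-extra-ca-certs              # hypothetical name
  namespace: special-resource-operator
  labels:
    sro.example.com/trusted-ca: ""      # hypothetical label SRO could watch for
data:
  internal-ca.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----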

Multiple versions of prebuilt driver container image support

Issue Summary:

The KMM Operator should support multiple versions of pre-built driver container images.

Suggested Priority(P1-P3) & Urgent(Urgent, medium, Low):

P1 & Urgent

Issue Detail:

As OOT drivers keep being upgraded for feature improvements, patches, or bug fixes, we need a variable to keep track of the driver module version. This comes from the idea mentioned in #114.

Solution Proposal:

A new variable, $MODULE_VERSION, needs to be introduced into KMM. We can use this variable as part of the image tag to determine the correct mapping of a driver module version to a specific kernel version. For example:

spec:
  moduleVersion: 0.1
...
kernelMappings:
  - regexp: '^.*\.x86_64$'
    containerImage: quay.io/ocpeng/intel-dgpu-driver-container:$MODULE_VERSION-$KERNEL_FULL_VERSION
...

Cannot deploy KMM via kubectl apply -k

Trying to install KMM using:

kubectl apply -k https://github.com/kubernetes-sigs/kernel-module-management/config/default

One of the pod's two containers remains in ErrImagePull:

# kubectl get pods
NAME                                               READY   STATUS         RESTARTS   AGE
kmm-operator-controller-manager-5f856ccf84-z77nt   1/2     ErrImagePull   0          11m

The events show the operator image pull failing with an HTTP 400 from the registry:

  Normal   Pulled          6m23s                  kubelet            Container image "gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0" already present on machine
  Normal   Created         6m23s                  kubelet            Created container kube-rbac-proxy
  Normal   Started         6m23s                  kubelet            Started container kube-rbac-proxy
  Warning  Failed          5m7s (x5 over 6m21s)   kubelet            Error: ImagePullBackOff
  Normal   Pulling         4m54s (x4 over 6m23s)  kubelet            Pulling image "gcr.io/k8s-staging-kmm/kernel-module-management:main"
  Warning  Failed          4m53s (x4 over 6m21s)  kubelet            Failed to pull image "gcr.io/k8s-staging-kmm/kernel-module-management:main": rpc error: code = Unknown desc = Requesting bearer token: invalid status code from registry 400 (Bad Request)
  Warning  Failed          4m53s (x4 over 6m21s)  kubelet            Error: ErrImagePull
  Normal   BackOff         75s (x20 over 6m21s)   kubelet            Back-off pulling image "gcr.io/k8s-staging-kmm/kernel-module-management:main"

Publish KMM in operatorhub.io

This would allow partners to publish their own operators on operatorhub.io with community-level support.
Publishing early releases on operatorhub.io would advertise KMM to a broader audience. This could be done through additional channels, e.g. a sprint channel for end-of-sprint releases.

Proxy support

It's possible to supply a proxy to the build phase via buildArgs (https://docs.gitlab.com/ee/ci/docker/using_kaniko.html#building-an-image-with-kaniko-behind-a-proxy), but that doesn't seem to affect the image pull phase. For example, my Dockerfile uses "ubuntu:20.04" as its base, but the image pull times out because the node runs in a proxy-enabled/limited network. As a trial, I duplicated the build-phase Pod and added proxy environment variables to it; the image pull then worked fine.

Is there a way to supply proxy details to the image pull phase (or to kaniko in general, I suppose)?
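
For reference, a sketch of the environment variables the duplicated build Pod needs; the Pod name and proxy values are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: kmm-build-with-proxy            # hand-made copy of the generated build pod
spec:
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      env:
        - name: HTTP_PROXY              # kaniko honours the standard proxy variables
          value: http://proxy.example.com:3128
        - name: HTTPS_PROXY
          value: http://proxy.example.com:3128
        - name: NO_PROXY
          value: .cluster.local,10.0.0.0/8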

The modprobe moduleName only allows one kernel module, while rawArgs allows many, which is confusing

The CRD only allows this kind of format, which inserts a single module per CR:

spec:
  moduleLoader:
    container:
      modprobe:
        moduleName: my-kmod

Still, when using rawArgs, multiple modules can be inserted. This contradicts the purpose of allowing just one module.
Also, when adding rawArgs, moduleName still has to be specified because it is mandatory, which makes things even more confusing.

ER: Expose Module name as build environment variable

I find it helpful to have the module name exposed as an environment variable.
This is useful in the build process, where the module name may be needed, and it also helps with YAML reuse.
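
Today the name has to be duplicated by hand, e.g. via the existing buildArgs mechanism; a sketch of the duplication this request would remove (the automatic injection of MODULE_NAME is the requested feature, not current behaviour):

spec:
  moduleLoader:
    container:
      kernelMappings:
        - regexp: '^.*\.x86_64$'
          build:
            buildArgs:
              # today: the value must repeat the Module's metadata.name by hand;
              # the request is for KMM to inject it automatically
              - name: MODULE_NAME
                value: my-kmod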

Reduce the TLS configuration granularity to be global per module.

The more we talk about it, the more complex it becomes.

Currently, the places in which we handle TLS configurations are:

  • moduleLoader.registryTLS
  • moduleLoader.kernelMappings[].registryTLS
  • moduleLoader.kernelMappings[].build.baseImageRegistryTLS
  • moduleLoader.kernelMappings[].sign.unsignedImageRegistryTLS

Some questions to be considered:

  • Where should registryTLS be taken from: moduleLoader or moduleLoader.kernelMappings[]? This adds complexity to the code.
  • What happens if a user doesn't specify a build/sign <base/unsigned>ImageRegistryTLS? Do they expect it to be the same as the configuration in moduleLoader[.kernelMappings[]], or do they expect the default values?
  • How do we maintain a minimal diff with m/s? OCP builds enforce TLS and allow mounting trusted CAs.
    • Does kaniko know where to take the trusted CAs from depending on the OS it is running on, or do we need to mount them?

This issue is for discussion, to understand whether we really need to maintain all these options.

It would be much simpler to set a single TLS configuration for the entire module, without differentiating between kernel mappings and without differentiating between the TLS options used for accessing images in general and those used during the build/sign processes.

The proposed solution (sketched below) is:

  • Having a single TLS configuration per module
  • A user will be able to enable TLS for all access to image registries without granularity, or disable it.
    • Should enabled be the default?
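
A hypothetical sketch of the flattened configuration; the top-level placement is the proposal, not the current API, and the field names mirror the existing registryTLS options:

spec:
  registryTLS:                    # hypothetical: a single setting for the whole Module
    insecure: false               # allow plain-HTTP registries
    insecureSkipTLSVerify: false  # skip server certificate validation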

I will be happy to get some feedback from the community here about potential usage.

I would also love your feedback, @yevgeny-shnaidman @qbarrand @chr15p @enriquebelarte @mresvanis.

Tighter SELinux policy to insert/remove OOT kernel modules.

Issue Summary:
We would like to evaluate whether it is possible to use a tighter SELinux policy to insert/remove OOT kernel modules. A finer-grained SELinux policy would eliminate the unnecessary permissions granted by spc_t.

Issue Detail:
KMMO uses the SELinux type spc_t (super privileged container) for the Module Loader pod, which may be over-privileged for inserting/removing OOT kernel modules. We would like to explore whether there is a tighter SELinux policy we can use instead. One option we would like to try is user-container-selinux. Currently, KMMO does not expose any interface to set a custom SELinux policy for the driver container deployed by the DaemonSet.

Suggested Priority (P1-P3) & Urgent (Urgent, medium, Low):
P1 urgent

Solution Proposal:
Would it be possible to expose an interface in the Module YAML to try this SELinux policy instead of the default spc_t? In the meantime, we are testing whether the user-container-selinux policy can be used by modifying KMMO directly.
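
A hypothetical sketch of such an interface; the seLinuxType field and its value are invented for illustration:

spec:
  moduleLoader:
    container:
      seLinuxType: user_container_t   # hypothetical field; today spc_t is used unconditionally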

Support image push during preflight build verification

Currently, build verification only verifies successful build completion (image creation). We should also allow the customer to specify whether they want the created images to be pushed to the registry. This would allow customers to use preflight as a pre-built image creator.
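
A sketch of what this could look like on the PreflightValidation CR (the preflightvalidations.kmm.sigs.k8s.io CRD appears elsewhere on this page; the pushBuiltImage flag is an assumption for illustration):

apiVersion: kmm.sigs.k8s.io/v1beta1
kind: PreflightValidation
metadata:
  name: preflight-upgrade-check
spec:
  kernelVersion: 5.14.0-70.13.1.el9_0.x86_64
  pushBuiltImage: true   # assumed flag: push images whose build succeeded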

Separate Build and Image existence verification

Currently, in the build flow (the Sync function), we check whether the image exists and then proceed to create/check the build job. Since Sync will also be used from preflight, and maybe from other flows, it is better to extract image verification from the Build API, since each flow will check image existence at its own convenience.

Replace SpecialResource dependency mechanism with Helm's

Currently, a SpecialResource can have several dependencies, although only one level of dependencies is supported.
To specify dependencies, the spec.dependencies field is used.
This feature seems to duplicate Helm's own dependency management system. Should we move to the Helm dependency management system and deprecate the existing one?

Pros

  • All existing features should be supported: values injection, any number of dependencies can be specified;
  • SRO's dependency management code can be removed;
  • Users do not have to create a SpecialResource instance for each dependency that they want to use.

Cons

  • Dependencies need to be downloaded and packaged together with the recipe. This can lead to issues when using SRO's ConfigMap Helm provider, as ConfigMaps can be up to 1MB in size;
  • If a dependency needs to be updated, SRO will re-render the entire bundle (dependencies + recipe) and re-apply it all.

Support Driver (Code) version control

Issue Summary:

KMMO should support upgrading the driver (code) version.

Suggested Priority(P1-P3) & Urgent(Urgent, medium, Low):

P1 & Urgent

Issue Detail:

For an OOT (out-of-tree) driver, the driver (code) version keeps being upgraded to add new features or fix bugs. KMMO should therefore support driver version upgrades and automatically (build and) deploy the new driver version according to the user's configuration.

For pre-build mode, where pre-built and released OOT driver container images are used, KMMO should check whether a newly released (latest) driver image is available and upgrade the existing driver according to the user's configuration. If several OOT driver version images exist, the latest (or a specific) version of the driver container image should be deployed according to the user's configuration.

For on-premise build mode, KMMO should likewise pick the proper (or latest) driver source version from the repo, build the driver, and package the driver binaries into a driver container image with the proper name and version.

Solution Proposal

Let's start with OOT driver image naming and version control. The name pattern below is recommended:
DriverName:DriverVersion-KernelVersion(-MachineType)
MachineType is optional if only a single machine type is supported.
Using KMMO's existing regexp and $KERNEL_FULL_VERSION variable mechanism, KMMO can find the proper driver container image for a specific machine type and kernel version.
By adding DriverVersion to the name pattern, KMMO can find the proper driver version according to the user's configuration.
To achieve this goal:

  1. A new variable, $MODULE_VERSION, needs to be introduced into KMMO.
  2. $MODULE_VERSION should be acquired by KMMO from the driver project's GitHub tags.
  3. A MODULE_PROJECT_URL primitive should be introduced into the Module CRD so that the user can input the URL of the driver project.
  4. A MODULE_VERSION primitive should be introduced into the Module CRD so that the user can select the latest or a specific version of the driver.
  5. In on-premise build mode, build the driver code version specified by DRIVER_VERSION (this can be secured once we decouple the building process from the Module CRD YAML).
  6. KMMO should automatically upgrade the driver image if the user uses the latest driver release.

Example:

Module CR YAML:
spec:
  moduleVersion: Latest
  moduleProjectURL: https://github.com/intel/dgpu
...
kernelMappings:
  - regexp: '^.*\.x86_64$'
    containerImage: quay.io/ocpeng/intel-dgpu-driver-container:$MODULE_VERSION-$KERNEL_FULL_VERSION
...

For this example, KMMO will always use the latest version of the driver code to build the driver container image in on-premise mode; in pre-build mode, it will always find the latest version of the prebuilt driver container image for the proper kernel version. The module version should be acquired by KMMO from the tags of moduleProjectURL: https://github.com/intel/dgpu.

Module CR YAML:
spec:
  moduleVersion: 0.1
  moduleProjectURL: https://github.com/intel/dgpu
...
kernelMappings:
  - regexp: '^.*\.x86_64$'
    containerImage: quay.io/ocpeng/intel-dgpu-driver-container:$MODULE_VERSION-$KERNEL_FULL_VERSION
...

For this example, KMMO will always use version 0.1 of the driver code to build the driver container image in on-premise mode; in pre-build mode, it will always find the 0.1 version of the prebuilt driver container image for the proper kernel version. The module version should be acquired by KMMO from the tags of moduleProjectURL: https://github.com/intel/dgpu.

Subtasks

Obviously, this is a big task, and we should split it into several subtasks to discuss and track:

  • Multiple versions of prebuilt driver container image support
  • Safe removal of the driver container image/kernel module
  • Seamless upgrade support, see seamless upgrades (https://sdk.operatorframework.io/docs/overview/operator-capabilities/#level-2---seamless-upgrades)
  • Build the module version specified by the user
  • Auto-build on a new/latest released module

Log should indicate missing/invalid ImageRepoSecret field

When using in-cluster builds, the repository push configuration is taken from the ImageRepoSecret field.
In case a push fails, the log message must point the customer to check ImageRepoSecret:

  1. that it is configured with an actual secret in the cluster
  2. that the values in the secret (in dockerconfig format) are correct

Moving auth package usage into registry package

Currently, the auth interface is created in the ModuleReconciler and the Preflight reconciler, but then passed to the registry APIs and used there.
This means that any future use of the registry API will require creating the interface and passing it in again.
We should change the implementation so that the auth interface is created inside the registry API methods. This will simplify the use of the registry API in our code.

Create a base runtime image for Module Loader

When creating the Module Loader DaemonSet, KMM currently sets sleep infinity as the container's command.
sleep runs as the main process in the container (PID 1) and is therefore unkillable. That makes the container hang when terminated, because running as PID 1 means that the TERM signal that Kubernetes sends is ignored unless the binary specifically implements a TERM handler (which sleep doesn't).
The following solutions would mitigate that problem:

  1. set the Module Loader pod's shareProcessNamespace property to true, but that implies that other containers in the pod may be able to interact with the Module Loader (although there is a single container in that pod now);
  2. introduce a requirement that pause be present in the Module Loader image - it explicitly handles the TERM signal which is the desired behaviour.

To facilitate option 2, we could release a slim image that includes both modprobe and pause - the two components that KMM requires in the image.

Extract the `Hash` logic to its own API

Both build and sign use hash computation. We should extract the hash logic into its own API and consume it in both places, once in build and once in sign, instead of duplicating the code.

Separate node selector for build process

Feature Summary:
Separate node selector for build process.

Suggested Priority(P1-P3) & Urgent(Urgent, medium, Low):
P2 and medium

Feature Detail:
Currently, KMM uses the same node selector for both the build and the DaemonSet. It would be nice to have a separate node selector for the build process, since the build does not always have to occur on the same node selected for the DaemonSet to modprobe the kernel module. This would allow the build to run on any other node.

Solution Proposal:
Provide a separate node selector for the build process, as sketched below.
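
A hypothetical sketch; a nodeSelector under the existing build section is invented for illustration:

spec:
  selector:                             # nodes where the DaemonSet loads the module
    node-role.kubernetes.io/worker: ""
  moduleLoader:
    container:
      kernelMappings:
        - regexp: '^.*\.x86_64$'
          build:
            nodeSelector:               # hypothetical: nodes where the build job runs
              kmm.example.com/builder: "true"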

Allow unloading in-tree kernel modules

Kernel module management may be used to replace an in-tree module with an out-of-tree version of the same module. This would require calling modprobe -r <module_name> before calling modprobe -d <module_path> <module_name>. This could be configurable per module, with a default of false, so that removing the existing module requires explicit intent.
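
A hypothetical sketch of the per-module switch; the field name is invented:

spec:
  moduleLoader:
    container:
      modprobe:
        moduleName: my-oot-kmod
        removeInTreeModule: true        # hypothetical: defaults to false; runs modprobe -r first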

Error message should give a hint that ImageRepoSecret field is missing

When a user creates a new CR that builds a new kmod and does not set the optional ImageRepoSecret, the operator fails to build the image, and an error like the one below appears in the operator logs:

cannot get keychain from the registry auth getter: could not get the service account's pull secrets

Steps to reproduce:

  1. Create a Module that builds a driver and omit the ImageRepoSecret field.
  2. The containerImage should point to quay.io, or another registry that requires auth for image push/pull. The container image should not exist.

Expected results:
When ImageRepoSecret is missing, some reference to the secret definition should appear in the logs.

Helm chart?

More of a question than an issue: are there Helm charts for KMMO? I couldn't find any myself. If not, are there plans for them?

Decouple the building process from the Module CRD/CR

Issue Summary:

The building process should be decoupled from the Module CRD/CR for the driver version control feature.

Suggested Priority(P1-P3) & Urgent(Urgent, medium, Low):

P2 & Medium

Issue Detail:

Let's start with the roles and players. Normally there are three players in the whole KMMO picture:

  1. Cluster administrators, who use KMMO to (build,) deploy, and manage the OOT drivers.
  2. Driver developers, who develop and release the driver code.
  3. KMMO developers, who develop and maintain the KMMO operator on some K8s platform.

Currently, the driver building process is embedded in the Module CR YAML, and the owner of this CR YAML file is the cluster administrator. But the build process should be maintained by the driver developer, and it may change between driver versions; that is outside the cluster administrator's scope of knowledge. So keeping the CR YAML in sync with each driver version should not be the cluster administrator's job. We should find a way to decouple the building process from the Module CRD/CR so that the build process naturally matches each driver version.

Solution Proposal

Let the driver developer maintain a build process, which can be a Dockerfile or scripts, for each driver version in the driver project. With moduleVersion and moduleProjectURL, KMMO knows which build process to use to build the driver container image, so cluster administrators no longer need to maintain the build process in the Module CRD/CR YAML file.

Remove in-tree module as a prerequisite.

Hello, is there a way to tell KMMO to remove a certain kernel module before we load ours? For an OOT driver we are currently working on, we need to remove the in-tree driver before loading ours. Thanks.
To elaborate: in order to insert our OOT graphics driver, we first need to remove the in-tree drm driver, or else the insert will fail. If there is no current functionality to do this before insert, a way to run prerequisite commands/scripts would be a good idea.

Preflight should clean Build verification resource on deletion

If preflight requires build verification, then upon deletion of the preflight CR, all build jobs (completed or not) should also be deleted. Since each job is labeled with the module name and kernel, this makes it easy to distinguish between module jobs and preflight jobs.

Sanity Checking and Driver State Label support

Issue Summary:

KMMO needs to set and maintain the OOT driver state machine of each worker node, label the nodes accordingly, and report to the K8s cluster properly.

Suggested Priority(P1-P3) & Urgent(Urgent, medium, Low):

P2 & Medium

Issue Detail:

KMMO is in charge of deploying the driver container on the nodes, where the driver module is insmod-ed into the kernel. Other operations may also be needed to enable the driver on the node. It is quite possible that enabling the driver in the kernel hits problems, so KMMO needs to check the result of each driver-enabling operation and make sure every step executes with the proper result. If KMMO finds any issue, it needs to label the node properly and report it to the user. Even after the driver is enabled without errors, a sanity check should be executed to verify that the driver is ready to use, for example a simple check of the driver interfaces and some status from sysfs or procfs. After the sanity test, we can set the driver label to READY; if the sanity test fails, the label should be FAIL.

Solution Proposal

Each driver should have a label that maintains the OOT driver state machine for the node, for example:
intel_dGPU_Driver: None/Available/Ready/Fail
intel_dGPU_Driver on the node goes from "None" to "Available" when the driver container image is deployed on the node and the driver enablement process completes without issues.
intel_dGPU_Driver on the node goes from "Available" to "Ready" when the sanity check completes and verifies that the driver is ready; otherwise it changes to "Fail".
intel_dGPU_Driver on the node changes to "None" once the driver container image is removed from the node and the driver has been rmmod-ed from the kernel.
The user should have a way to configure the sanity check method and use the proper way to check the readiness of the driver after KMMO enables it on the node. This check should be driver-specific.

While removing kmm operator, the kmm.node.kubernetes.io/kernel-version.full label is not cleaned up

While removing kmm operator, the label kmm.node.kubernetes.io/kernel-version.full remains on the nodes.

# kubectl delete -k config/default/
namespace "kmm-operator-system" deleted
customresourcedefinition.apiextensions.k8s.io "modules.kmm.sigs.k8s.io" deleted
customresourcedefinition.apiextensions.k8s.io "preflightvalidations.kmm.sigs.k8s.io" deleted
serviceaccount "kmm-operator-controller-manager" deleted
role.rbac.authorization.k8s.io "kmm-operator-leader-election-role" deleted
clusterrole.rbac.authorization.k8s.io "kmm-operator-manager-role" deleted
clusterrole.rbac.authorization.k8s.io "kmm-operator-metrics-reader" deleted
clusterrole.rbac.authorization.k8s.io "kmm-operator-proxy-role" deleted
rolebinding.rbac.authorization.k8s.io "kmm-operator-leader-election-rolebinding" deleted
clusterrolebinding.rbac.authorization.k8s.io "kmm-operator-manager-rolebinding" deleted
clusterrolebinding.rbac.authorization.k8s.io "kmm-operator-proxy-rolebinding" deleted
configmap "kmm-operator-manager-config" deleted
service "kmm-operator-controller-manager-metrics-service" deleted
deployment.apps "kmm-operator-controller-manager" deleted

# kubectl get node -o json | grep kmm
                    "kmm.node.kubernetes.io/kernel-version.full": "4.18.0-372.19.1.el8_6.x86_64",
                    "kmm.node.kubernetes.io/kernel-version.full": "4.18.0-372.19.1.el8_6.x86_64",
                    "kmm.node.kubernetes.io/kernel-version.full": "4.18.0-372.19.1.el8_6.x86_64",
                    "kmm.node.kubernetes.io/kernel-version.full": "4.18.0-372.19.1.el8_6.x86_64",
                    "kmm.node.kubernetes.io/kernel-version.full": "4.18.0-372.19.1.el8_6.x86_64",
# 

FirmwarePath extra directory level causing FW initialization issue.

Issue Summary:
FirmwarePath extra directory level is causing FW initialization to fail.

Suggested Priority(P1-P3) & Urgent(Urgent, medium, Low):
P1 and Urgent

Issue Detail:
We have added firmware_class.path=/var/lib/firmware as a kernel parameter using MCO to tell the kernel to look for firmware in /var/lib/firmware, since /lib/firmware is read-only on RHCOS.

KMMO copies the firmware to /var/lib/firmware/Module.Name/FirmwarePath/. Due to this extra directory level, FW initialization fails because the kernel cannot find the firmware binaries. When I manually copy the firmware to /var/lib/firmware/Module.Name/, FW initialization works.

Solution Proposal:
Would it be possible for KMMO to copy the firmware to /var/lib/firmware/FirmwarePath/ instead of /var/lib/firmware/Module.Name/FirmwarePath/? This removes Module.Name from the path, so the destination can be customized based on the value the user sets for FirmwarePath.
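
For reference, a sketch of the relevant Module stanza; modprobe.firmwarePath follows the naming of the other modprobe fields on this page, and the comments restate the current vs. proposed destinations:

spec:
  moduleLoader:
    container:
      modprobe:
        moduleName: intel-dgpu
        firmwarePath: /firmware         # location inside the driver container image
        # today:    copied to /var/lib/firmware/<Module.Name>/<FirmwarePath>/
        # proposed: copied to /var/lib/firmware/<FirmwarePath>/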

One other related question: can FirmwarePath be referenced as a variable in the Dockerfile?

The use case is that the user could simply reference FirmwarePath as a variable when copying the firmware from the builder image. It would be an improvement from the user's point of view: the firmware always ends up in the right place, and the user does not have to explicitly list the path again in the Dockerfile.
