
k8s-client's Introduction

Kontena Classic

IMPORTANT! This project is deprecated. Please see Kontena Pharos - The Simple, Solid, Certified Kubernetes Distribution That Just Works!


Kontena Classic is a developer friendly, open source platform for orchestrating applications that run on Docker containers. It simplifies deploying and running containerized applications on any infrastructure. By leveraging technologies such as Docker, Container Linux and Weave Net, it provides a complete solution for organizations of any size.

Kontena Classic is built to maximize developer happiness; it is designed for application developers and therefore does not require an ops team to set up or maintain. This makes it an ideal choice for organizations that do not want to configure and maintain scalable Docker container infrastructure themselves.

Kontena Classic Introduction

To accelerate containerized application development and remove barriers, Kontena Classic features some of the most essential technologies built in, such as:

  • Multi-host, multi-AZ container orchestration
  • Overlay network technology by Weaveworks
  • Zero-downtime dynamic load balancing
  • Abstraction to describe services running in containers
  • Private Docker image repository
  • Kontena Vault - a secure storage for managing secrets
  • VPN access to backend containers
  • Heroku-like application deployment workflow

Kontena Classic supports any application that can run in a Docker container, and can run on any machine that supports CoreOS. You can run Kontena on the cloud provider of your choice or on your own servers. We hope you enjoy!

Learn more about Kontena:

Getting Started

Please see our Quick Start guide.

Contact Us

Found a bug? Suggest a feature? Have a question? Please submit an issue or email us at [email protected].

Follow us on Twitter: @KontenaInc.

Slack: Join the Kontena Community Slack channel.

License

Kontena Classic software is open source, and you can use it for any purpose, personal or commercial. Kontena is licensed under the Apache License, Version 2.0. See LICENSE for full license text.

k8s-client's People

Contributors

jakolehm, jgnagy, jnummelin, kke, lesnyrumcajs, matti, nielsslot, spcomb, theomarkkuspaul

k8s-client's Issues

API discovery needs to deal with HTTP 404/503 errors for apiservice API groups

The apiservice API groups (such as the metrics-server => metrics.k8s.io/v1beta1) seem to be rather fragile, and the kube apiserver often returns HTTP 404 or 503 errors for the /apis/metrics.k8s.io/v1beta1 endpoint.

I suspect this is related to the metrics-server pod backing the API service being broken, so a retry wouldn't necessarily help. The client may need some policy to just skip broken API groups for things like stack purge?
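A rough sketch of such a skip policy; the apis(prefetch_resources:) signature and the K8s::Error::ServiceUnavailable class name are from memory, so treat them as assumptions:

  require 'k8s-client'

  client = K8s::Client.in_cluster_config

  # Collect API resources, skipping any apiservice-backed group that 404s/503s,
  # e.g. metrics.k8s.io/v1beta1 with a broken metrics-server pod behind it.
  resources = client.apis(prefetch_resources: false).flat_map do |api|
    begin
      api.resources
    rescue K8s::Error::NotFound, K8s::Error::ServiceUnavailable
      []
    end
  end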

Using watches and handling timeouts

I've been using https://github.com/abonas/kubeclient library, but I like the flavor of this library better.
I'm a heavy user of the watch API, and when testing out this code:

  begin
    MyLog.log.info "Starting watcher..."
    client.api('v1').resource('namespaces').watch do |watch_event|
      puts "type=#{watch_event.type} namespace=#{watch_event.resource.metadata.name}"
    end
    MyLog.log.info "Exited normally."
  rescue Exception => e
    MyLog.log.error "Watcher error: #{e}"
  end

  MyLog.log.info "Finished watcher..."

After roughly 5 minutes the exception handler reports:

ERROR -- : Watcher error: end of file reached (EOFError)

I sort of expected this; if you run kubectl get namespaces -w, it times out after about the same amount of time.

We are stuck on an older version of Kubernetes (1.8.9), and the Kube API server was getting hammered because the abonas client was not handling the close of the terminated connection; that caused the API server to back up and affected the cluster.

What is the proper way to handle the timeout/EOFError?
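Pending a built-in answer, one workaround is to treat the EOFError as a normal watch timeout and re-establish the watch in a loop. A hedged sketch (resourceVersion bookkeeping is omitted, so events may be re-delivered after a reconnect):

  require 'k8s-client'

  client = K8s::Client.in_cluster_config
  namespaces = client.api('v1').resource('namespaces')

  loop do
    begin
      namespaces.watch do |watch_event|
        puts "type=#{watch_event.type} namespace=#{watch_event.resource.metadata.name}"
      end
    rescue EOFError, Excon::Error::Socket
      # The server closed the long-poll; back off briefly and reconnect.
      sleep 1
    end
  end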

Kubeconfig loader does not accept expiry timestamps

Ref: kontena/mortar#93

If ~/.kube/config / KUBECONFIG contains timestamps, the loading will raise a Psych::DisallowedClass exception.

Example:

  user:
    auth-provider:
      config:
        access-token: 
        cmd-args: config config-helper --format=json
        cmd-path: /usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/gcloud
        expiry: 2019-01-29T14:20:36Z
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp

Time/Date/DateTime should probably be whitelisted.
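For reference, the whitelisting would boil down to something like this (a sketch; the permitted_classes keyword is the Psych >= 3.1 spelling, older Psych takes the class list as a positional argument):

  require 'yaml'
  require 'date'

  # Whitelist Time/Date/DateTime so GCP-style expiry fields parse cleanly.
  raw = File.read(File.expand_path('~/.kube/config'))
  config = YAML.safe_load(raw, permitted_classes: [Time, Date, DateTime])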

Opaque error for text/plain responses

If the HTTP API returns a Content-Type: text/plain response, then the error should show what request failed and with what response.

I, [2018-07-13T13:43:39.737062 #1]  INFO -- Pharos::Kube::Transport<https://167.99.39.233:6443>: GET /apis/extensions/v1beta1 => HTTP 200: <Pharos::Kube::API::MetaV1::APIResourceList> in 0.047s
I, [2018-07-13T13:43:39.787304 #1]  INFO -- Pharos::Kube::Transport<https://167.99.39.233:6443>: GET /apis/extensions/v1beta1/namespaces/kube-system/deployments/kube-dns => HTTP 200: <Pharos::Kube::Resource> in 0.049s
I, [2018-07-13T13:43:39.852477 #1]  INFO -- Pharos::Kube::Transport<https://167.99.39.233:6443>: PUT /apis/extensions/v1beta1/namespaces/kube-system/deployments/kube-dns <Pharos::Kube::Resource> => HTTP 200: <Pharos::Kube::Resource> in 0.063s
    Completed Configure DNS @ 167.99.39.233 in 0.164s
==> Configure Calico network @ 167.99.39.233
Invalid response Content-Type: text/plain; charset=utf-8
/usr/local/bundle/bundler/gems/pharos-kube-client-256e580aa10d/lib/pharos/kube/transport.rb:129:in `parse_response'
/usr/local/bundle/bundler/gems/pharos-kube-client-256e580aa10d/lib/pharos/kube/transport.rb:157:in `request'
/usr/local/bundle/bundler/gems/pharos-kube-client-256e580aa10d/lib/pharos/kube/transport.rb:195:in `get'
/usr/local/bundle/bundler/gems/pharos-kube-client-256e580aa10d/lib/pharos/kube/api_client.rb:41:in `api_resources'
/usr/local/bundle/bundler/gems/pharos-kube-client-256e580aa10d/lib/pharos/kube/api_client.rb:51:in `resource'
/app/lib/pharos/phases/configure_calico.rb:28:in `get_ippool'
/app/lib/pharos/phases/configure_calico.rb:35:in `validate_ippool'
/app/lib/pharos/phases/configure_calico.rb:45:in `validate'
/app/lib/pharos/phases/configure_calico.rb:49:in `call'
/app/lib/pharos/phase_manager.rb:70:in `block in apply'
/app/lib/pharos/phase_manager.rb:40:in `block in run_serial'
/app/lib/pharos/phase_manager.rb:39:in `map'
/app/lib/pharos/phase_manager.rb:39:in `run_serial'
/app/lib/pharos/phase_manager.rb:51:in `run'
/app/lib/pharos/phase_manager.rb:67:in `apply'
/app/lib/pharos/cluster_manager.rb:119:in `apply_phase'
/app/lib/pharos/cluster_manager.rb:99:in `apply_phases'
/app/lib/pharos/up_command.rb:104:in `configure'
/app/lib/pharos/up_command.rb:44:in `block in execute'
/app/lib/pharos/up_command.rb:43:in `chdir'
/app/lib/pharos/up_command.rb:43:in `execute'
/usr/local/bundle/gems/clamp-1.2.1/lib/clamp/command.rb:63:in `run'
/usr/local/bundle/gems/clamp-1.2.1/lib/clamp/subcommand/execution.rb:11:in `execute'
/usr/local/bundle/gems/clamp-1.2.1/lib/clamp/command.rb:63:in `run'
/usr/local/bundle/gems/clamp-1.2.1/lib/clamp/command.rb:132:in `run'
/app/lib/pharos/root_command.rb:16:in `run'
bin/pharos-cluster:12:in `<top (required)>'
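A minimal sketch of a more useful failure mode; every name here is hypothetical rather than the gem's actual internals:

  # Include the request line, status and raw body when the Content-Type is not JSON.
  def check_content_type!(method, path, status, content_type, body)
    return if content_type.to_s.start_with?('application/json')

    raise "#{method} #{path} => HTTP #{status}: invalid response Content-Type " \
          "#{content_type}: #{body.to_s.strip}"
  end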

Support for OIDC auth

With an authentication scheme like

  user:
    auth-provider:
      config:
        client-id: xxx
        client-secret: xxx
        id-token: xxx
        idp-issuer-url: https://accounts.google.com
        refresh-token: xxx
      name: oidc

it breaks with this stack trace:

Traceback (most recent call last):
	12: from /Users/pa/.brew/Cellar/mortar/HEAD-317118c/libexec/bin/mortar:23:in `<main>'
	11: from /Users/pa/.brew/Cellar/mortar/HEAD-317118c/libexec/bin/mortar:23:in `load'
	10: from /Users/pa/.brew/Cellar/mortar/HEAD-317118c/libexec/gems/kontena-mortar-0.3.2/bin/mortar:13:in `<top (required)>'
	 9: from /Users/pa/.brew/Cellar/mortar/HEAD-317118c/libexec/gems/clamp-1.3.0/lib/clamp/command.rb:140:in `run'
	 8: from /Users/pa/.brew/Cellar/mortar/HEAD-317118c/libexec/gems/clamp-1.3.0/lib/clamp/command.rb:66:in `run'
	 7: from /Users/pa/.brew/Cellar/mortar/HEAD-317118c/libexec/gems/clamp-1.3.0/lib/clamp/subcommand/execution.rb:18:in `execute'
	 6: from /Users/pa/.brew/Cellar/mortar/HEAD-317118c/libexec/gems/clamp-1.3.0/lib/clamp/command.rb:66:in `run'
	 5: from /Users/pa/.brew/Cellar/mortar/HEAD-317118c/libexec/gems/kontena-mortar-0.3.2/lib/mortar/fire_command.rb:64:in `execute'
	 4: from /Users/pa/.brew/Cellar/mortar/HEAD-317118c/libexec/gems/kontena-mortar-0.3.2/lib/mortar/mixins/client_helper.rb:7:in `client'
	 3: from /Users/pa/.brew/Cellar/mortar/HEAD-317118c/libexec/gems/kontena-mortar-0.3.2/lib/mortar/mixins/client_helper.rb:14:in `create_client'
	 2: from /Users/pa/.brew/Cellar/mortar/HEAD-317118c/libexec/gems/k8s-client-0.8.2/lib/k8s/client.rb:39:in `config'
	 1: from /Users/pa/.brew/Cellar/mortar/HEAD-317118c/libexec/gems/k8s-client-0.8.2/lib/k8s/transport.rb:79:in `config'
/Users/pa/.brew/Cellar/mortar/HEAD-317118c/libexec/gems/k8s-client-0.8.2/lib/k8s/transport.rb:79:in

It's been a long time since I've done any Ruby, but that would be a nice addition, or at least a decent error message.

I've fallen back to basic/token auth to make it work, but working OIDC auth support would be awesome.

K8s::Client.autoconfig is broken when KUBECONFIG env is set

ArgumentError: wrong number of arguments (given 2, expected 0..1)
  /Users/user/.gem/ruby/2.5.3/gems/k8s-client-0.8.1/lib/k8s/config.rb:113:in `from_kubeconfig_env'
  /Users/user/.gem/ruby/2.5.3/gems/k8s-client-0.8.1/lib/k8s/client.rb:73:in `autoconfig'

excon malformed headers when doing resource update inside of watch

  asia_resource.watch do |asia_watch_event|
    resource = asia_watch_event.resource
    who = resource[:spec][:who]

    resource[:spec][:greeting] = "Hello, #{who}"
    asia_resource.update_resource(resource)
  end

will fail with:

1: from /Users/mpa/.rvm/gems/ruby-2.6.0@operaattori/gems/excon-0.62.0/lib/excon/response.rb:89:in `parse'
/Users/mpa/.rvm/gems/ruby-2.6.0@operaattori/gems/excon-0.62.0/lib/excon/response.rb:186:in `parse_headers': malformed header (Excon::Error::ResponseParse) (Excon::Error::Socket) (Excon::Error::Socket)

while this variant, with the update moved into a separate thread,

  asia_resource.watch do |asia_watch_event|
    resource = asia_watch_event.resource
    who = resource[:spec][:who]

    resource[:spec][:greeting] = "Hello, #{who}"
    Thread.new do
      asia_resource.update_resource(resource)
    end
  end

works

Client#api_groups assumes that `preferredVersion` includes all resources

The Client#api_groups use of K8s::API::MetaV1::APIGroupList#preferredVersion assumes that the preferred API version includes all possible APIResource definitions. That may not be the case: different versions of the same API group can define different, non-overlapping resources, so it needs to be possible to deal with multiple versions of the same API.

An example of this is the batch API in kube 1.11, where the CronJob and Job kinds belong to different versions of the same API group. This prevents stack prune from listing and deleting any CronJob resources. Note that stack apply still works, since the apiVersion: batch/v1beta1 from the resource definition is handled correctly, even though it isn't returned by api_groups.

/apis

    {
      "name": "batch",
      "versions": [
        {
          "groupVersion": "batch/v1",
          "version": "v1"
        },
        {
          "groupVersion": "batch/v1beta1",
          "version": "v1beta1"
        }
      ],
      "preferredVersion": {
        "groupVersion": "batch/v1",
        "version": "v1"
      }
    },

/apis/batch/v1

{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "batch/v1",
  "resources": [
    {
      "name": "jobs",
      "singularName": "",
      "namespaced": true,
      "kind": "Job",
      "verbs": [
        "create",
        "delete",
        "deletecollection",
        "get",
        "list",
        "patch",
        "update",
        "watch"
      ],
      "categories": [
        "all"
      ]
    },
    {
      "name": "jobs/status",
      "singularName": "",
      "namespaced": true,
      "kind": "Job",
      "verbs": [
        "get",
        "patch",
        "update"
      ]
    }
  ]
}

/apis/batch/v1beta1

{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "batch/v1beta1",
  "resources": [
    {
      "name": "cronjobs",
      "singularName": "",
      "namespaced": true,
      "kind": "CronJob",
      "verbs": [
        "create",
        "delete",
        "deletecollection",
        "get",
        "list",
        "patch",
        "update",
        "watch"
      ],
      "shortNames": [
        "cj"
      ],
      "categories": [
        "all"
      ]
    },
    {
      "name": "cronjobs/status",
      "singularName": "",
      "namespaced": true,
      "kind": "CronJob",
      "verbs": [
        "get",
        "patch",
        "update"
      ]
    }
  ]
}
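A hedged sketch of the fix direction: walk every groupVersion returned by /apis instead of only preferredVersion (assuming api_groups returns the APIGroup structs shown above):

  client = K8s::Client.in_cluster_config

  # Discover batch/v1 *and* batch/v1beta1, so CronJob resources are seen too.
  group_versions = client.api_groups.flat_map do |group|
    group.versions.map(&:groupVersion)
  end
  apis = group_versions.map { |group_version| client.api(group_version) }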

Job resource creation raises: POST /apis/batch/v1/jobs => HTTP 405 Method Not Allowed

I'm trying to create a Job via k8s-client, but the server returns a response with status 405. Here is my Ruby code with transport logs:

k8s_client = K8s::Client.config(
    K8s::Config.new(YAML.safe_load(cluster.kube_config)),
    auth_token: Google::Clusters::GetConfigInteraction.run!.dig('credential', 'access_token')
)
job_resource = K8s::Resource.from_files('./hellojob.yaml').first
=> #<K8s::Resource apiVersion="batch/v1", kind="Job", metadata={:name=>"hello-job"}, spec={:parallelism=>1, :template=>{:metadata=>{:name=>"hello-job"}, :spec=>{:restartPolicy=>"Never", :containers=>[{:name=>"hello-job", :image=>"centos:7", :command=>["bin/bash", "-c", "echo 'Hello World'"]}]}}}>

k8s_client.api('batch/v1').resource('jobs').list
I, [2018-12-27T15:49:34.873617 #63]  INFO -- K8s::Transport<https://35.231.138.215>: GET /apis/batch/v1 => HTTP 200: <K8s::API::MetaV1::APIResourceList> in 0.583s
I, [2018-12-27T15:49:35.054349 #63]  INFO -- K8s::Transport<https://35.231.138.215>: GET /apis/batch/v1/jobs => HTTP 200: <K8s::API::MetaV1::List> in 0.179s
=> []

k8s_client.api('batch/v1').resource('jobs').create_resource(job_resource)
W, [2018-12-27T15:49:40.795294 #63]  WARN -- K8s::Transport<https://35.231.138.215>: POST /apis/batch/v1/jobs <K8s::Resource> => HTTP 405 Method Not Allowed in 0.171s
K8s::Error::MethodNotAllowed: POST /apis/batch/v1/jobs => HTTP 405 Method Not Allowed: the server does not allow this method on the requested resource
from /usr/local/bundle/gems/k8s-client-0.6.4/lib/k8s/transport.rb:215:in `parse_response'

It looks weird, because I'm able to create the same Job resource on the same cluster via kubectl (and also via curl with kubectl proxy enabled):

$ kubectl create -f hellojob.yaml
=> job.batch/hello-job created
$ kubectl logs hello-job-mlfv8
=> Hello World

hellojob.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  parallelism: 1
  template:
    metadata:
      name: hello-job
    spec:
      restartPolicy: Never
      containers:
      - name: hello-job
        image: centos:7
        command: ['bin/bash', '-c', "echo 'Hello World'"]

Via k8s-client I am able to create deployments and services, and to list existing jobs, but not to create jobs.

If there were some permission issue, I'd get a 403 error code in the response, not a 405, wouldn't I?
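One thing worth checking: the log shows a POST to the cluster-wide collection /apis/batch/v1/jobs, whereas kubectl posts to the namespaced path /apis/batch/v1/namespaces/default/jobs; the cluster-wide jobs collection only supports list/watch, which would explain a 405 rather than a 403. Since the manifest has no metadata.namespace, passing the namespace explicitly may be enough (namespace name assumed here):

  # Either set metadata.namespace in the manifest, or scope the client call:
  k8s_client.api('batch/v1')
            .resource('jobs', namespace: 'default')
            .create_resource(job_resource)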

Override current-context in K8s::Client.config

According to the documentation, current-context is one of the supported overrides, but I can't seem to get this to work. I have tried various ways of specifying it. What's the correct way to do this?

K8s::Client.config(
  K8s::Config.load_file(config_path),
  :'current-context' => 'minikube'
)

When I have the current-context set to production in my shell, this override does not seem to take effect.

HashDiff dependency causing trouble with webmock 3.6

Bundler could not find compatible versions for gem "hashdiff":
  In Gemfile:
    k8s-client (>= 0.10) was resolved to 0.10.1, which depends on
      hashdiff (~> 0.3.7)

    webmock (~> 3.6) was resolved to 3.6.0, which depends on
      hashdiff (< 2.0.0, >= 0.4.0)

Hashdiff recently released a new version that changes the constant name due to a name clash with another gem (liufengyun/hashdiff#65).

GKE support

The GKE kubeconfig shells out to the gcloud CLI, and the token is somehow mangled inside a service account.

So either implement GKE support, or document how to obtain the required token and the steps involved.

Recursive compact does not seem to be recursive

    # Recursive compact for Hash/Array
    #
    # @param hash_or_array [Hash,Array]
    # @return [Hash,Array]
    def self.recursive_compact(hash_or_array)
      p = proc do |*args|
        v = args.last
        v.delete_if(&p) if v.respond_to?(:delete_if) && !v.is_a?(Array)
        v.nil? || v.respond_to?(:empty?) && (v.empty? && (v.is_a?(Hash) || v.is_a?(Array)))
      end

      hash_or_array.delete_if(&p)
    end

It's weird: the proc does recurse into nested Hashes via v.delete_if(&p), but the !v.is_a?(Array) guard means nested Arrays are never descended into, so their contents are never compacted.
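A hedged sketch of an explicitly recursive replacement with the same keep/drop rules (drop nil values, and drop Hash/Array values that end up empty):

  def recursive_compact(value)
    prune = ->(v) { v.nil? || ((v.is_a?(Hash) || v.is_a?(Array)) && v.empty?) }

    case value
    when Hash
      value.each_value { |v| recursive_compact(v) } # compact children first
      value.delete_if { |_k, v| prune.call(v) }
    when Array
      value.each { |v| recursive_compact(v) }
      value.delete_if { |v| prune.call(v) }
    else
      value
    end
  end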

Create a set of examples

There should be some working examples in examples/, as is the usual convention.

As mentioned in #142, bin/k8s-client is probably out of sync and rarely used, but it can be used as a basis for extracting code snippets for the examples.

Get without exception

It would be nice to have an exceptionless version of get (and maybe of put/patch etc.):

def cluster_id 
  client.api('v1')
    .resource('configmaps', namespace: 'kube-public')
    .get('cluster-info')
    .metadata
    .uid
rescue K8s::Error::NotFound
  nil
end

vs:

def cluster_id
  client.api('v1')
     .resource('configmaps', namespace: 'kube-public')
     .get!('cluster-info')&.metadata&.uid
end

`nil` vs. `[]` vs. not-set-at-all

kubernetes/kubernetes#70281

So if I send something either as nil or as an empty array, the k8s API treats it as not set at all and omits the attribute on the resulting object. This causes problems now that k8s-client saves the config in an annotation and determines the JSON patch operations based on it.

Apply the following with k8s-client:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      tolerations:
      containers:
      - name: myapp
        image: nginx:latest
        ports:
        - containerPort: 80

and after that try to apply e.g.:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      tolerations: []
      containers:
      - name: myapp
        image: nginx:latest
        ports:
        - containerPort: 80

The JSON patch operations sent would be:

[{"op": "replace", "path":"/spec/tolerations", "value":[]}]

Kube API borks with:

Error from server: jsonpatch replace operation does not apply: doc is missing key: /spec/tolerations

That's because the initial sending of "empty" tolerations results in the deployment object not actually having tolerations set at all, so we cannot replace them on later patches either.

I think there's two ways to "fix" this:

  1. "compact" the resource hash before storing it in the las-applied-config annotation. That way whatever was used omits nulls and empty arrays as if they were not given at all
  2. Make the json patch part deal with these. If the patch "calculation" sees the old value as nil/empty array it could transform the operation from replace to add.

I think option 1 might be more future proof and robust.

@jakolehm @kke WDYT?
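For illustration, option 2 could look roughly like this (a toy sketch over plain Hashes; JSON Pointer escaping and array indices are deliberately ignored):

  # Downgrade "replace" to "add" when the live document has no value at the path.
  def fixup_patch_ops(ops, old_doc)
    ops.map do |op|
      next op unless op['op'] == 'replace'

      keys = op['path'].split('/').drop(1)
      present = !keys.reduce(old_doc) { |acc, k| acc.is_a?(Hash) ? acc[k] : nil }.nil?
      present ? op : op.merge('op' => 'add')
    end
  end

  fixup_patch_ops(
    [{ 'op' => 'replace', 'path' => '/spec/tolerations', 'value' => [] }],
    { 'spec' => {} }
  )
  # => [{"op"=>"add", "path"=>"/spec/tolerations", "value"=>[]}]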

Hello from kubeclient, kubernetes-client :wave:

Hi, I was happy to discover a 3rd ruby client besides https://github.com/abonas/kubeclient and https://github.com/kubernetes-client. You have great momentum 👍

I wanted to reach out to ask if you see aspects on which we should collaborate.

Specifically, I feel it could be nice to extract the best parts of parsing kubeconfig and obtaining auth into a separate gem, and get all 3 client gems to use it.

  • This functionality requires catch-up to the reference Go implementation...
  • The interfaces have a lot of arbitrary details, like "what is the param for a custom CA called?", which unfortunately have no de-facto standard in the Ruby ecosystem.
    A single good config interface could make it easier for users to learn/switch between the client gems.

In a similar discussion in kubernetes-client/ruby#20 (comment), some people expressed support. What do you think?

client.update_resource(some_service_resource) fails with missing resourceVersion and empty clusterIP

I'm trying to do the following (should essentially be a no-op, equivalent to kubectl apply with unchanged resources):

resources = K8s::Resource.from_files('/path/to/manifests')
resources.each do |resource|
  begin
    client.update_resource(resource)
  rescue K8s::Error::NotFound
    client.create_resource(resource)
  end
end

This produces an error, because resourceVersion and clusterIP seem to be sent as empty strings to the server.

/Users/d11wtq/.rbenv/versions/2.4.1/lib/ruby/gems/2.4.0/gems/k8s-client-0.3.4/lib/k8s/transport.rb:192:in `parse_response': PUT /api/v1/namespaces/projectname/services/api => HTTP 422 Unprocessable Entity: Service "api" is invalid: [metadata.resourceVersion: Invalid value: "": must be specified for an update, spec.clusterIP: Invalid value: "": field is immutable] (K8s::Error::Invalid)
        from /Users/d11wtq/.rbenv/versions/2.4.1/lib/ruby/gems/2.4.0/gems/k8s-client-0.3.4/lib/k8s/transport.rb:210:in `request'
        from /Users/d11wtq/.rbenv/versions/2.4.1/lib/ruby/gems/2.4.0/gems/k8s-client-0.3.4/lib/k8s/resource_client.rb:218:in `update_resource'
        from /Users/d11wtq/.rbenv/versions/2.4.1/lib/ruby/gems/2.4.0/gems/k8s-client-0.3.4/lib/k8s/client.rb:164:in `update_resource'
        from test.rb:41:in `block in <main>'
        from test.rb:32:in `each'
        from test.rb:32:in `<main>'

There are two resources in the list: a Deployment and a Service (NodePort). Both were originally created with kubectl apply. The Deployment seems to update fine, but the Service always blows up with the error above. I'm not sure if this is expected, or if I'm supposed to change the spec in some way once I've read the manifests off disk.

Is this the correct way to do something along the lines of kubectl apply? I'm happy with anything that more or less performs an upsert-style operation for the resource.

Here is the entire manifest, if it helps:

---
apiVersion: v1
kind: Service
metadata:
  namespace: projectname
  name: api
  labels:
    app: projectname
    service: api
spec:
  type: NodePort
  selector:
    app: projectname
    service: api
  ports:
    - name: http
      port: 8082
      protocol: TCP
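For what it's worth, one pattern that works around the empty resourceVersion/clusterIP is to fetch the live object and merge the local manifest onto it before the PUT, so server-assigned fields are preserved. A sketch, assuming K8s::Client#get_resource returns the live resource and K8s::Resource#merge/#to_h behave as their names suggest:

  resources = K8s::Resource.from_files('/path/to/manifests')
  resources.each do |resource|
    begin
      live = client.get_resource(resource) # carries resourceVersion, clusterIP, ...
      client.update_resource(live.merge(resource.to_h))
    rescue K8s::Error::NotFound
      client.create_resource(resource)
    end
  end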

Instance creation and configuration api should be refactored

Oddities / annoyances:

  • K8s::Client.new should probably not require transport as an argument.
  • K8s::Client.config takes a config and passes it to Transport.config. I think it's a misleading method name.
  • K8s::Transport.config same as above, returns an instance of K8s::Transport, I would expect it to return a config instead of accepting one.
  • Transport.new takes auth token/username/password, but there's K8s::Config::UserAuthProvider, maybe Transport.new should actually take an instance of some kind of Authentication class
  • Transport is a mixture of abstraction for Excon and generic methods.
  • The ! methods such as api_groups! and api_resources! are unintuitive. Perhaps api_groups(force_refresh: true) or something would be better.
  • Method visibility in general has been ignored completely; there are no private methods. (I didn't find any methods that should be private, except that maybe the Util mixin should use module_function.)
  • K8s::Resource.from_files reads a single file or all files in a directory. I would expect it to take an array of paths or IOs.
  • There's a million methods for creating a client instance.
  • The client is a bit cumbersome to mock/stub because of the long method chains. It should probably have some kind of test mode / mock transport.

Other than that, it's actually pretty great.

Server addresses with path-prefix do not work

As discussed in #110, there seems to be some sort of bug in the path-prefix handling.

The path is in the request twice.

client.api('v1').resource('ingresses', namespace: 'default').list()

Tries to make a request to:

/k8s/clusters/c-blahblah/k8s/clusters/c-blahblah/api/v1

And receives NotFound.

`in_cluster_config': uninitialized constant K8s::Error::Config (NameError)

When #in_cluster_config fails, it raises uninitialized constant K8s::Error::Config (NameError).

Repro:

require 'k8s-client'

# Note: This works inside cluster as expected!
K8s::Client.in_cluster_config

This fails with

k8s-client-0.8.1/lib/k8s/transport.rb:101:in `in_cluster_config': uninitialized constant K8s::Error::Config (NameError)

Stack prune assumes read access to all kube resources

K8s::Stack#prune uses client.list_resources to enumerate all resources with the stack label. While the stack apply itself will work with a limited ClusterRole whitelisting the resource types used in the stack, the prune will always fail unless granted something along the lines of cluster-admin:

I, [2018-08-21T13:31:46.507863 #1]  INFO -- K8s::Transport: Using config with server=https://master.terom-pharos-dev.kontena.works:6443
I, [2018-08-21T13:31:46.806386 #1]  INFO -- K8s::Transport<https://master.terom-pharos-dev.kontena.works:6443>: GET /version => HTTP 200: <K8s::API::Version> in 0.298s
I, [2018-08-21T13:31:46.806602 #1]  INFO -- : Kube server version: v1.11.1
I, [2018-08-21T13:31:46.859414 #1]  INFO -- K8s::Transport<https://master.terom-pharos-dev.kontena.works:6443>: GET /apis => HTTP 200: <K8s::API::MetaV1::APIGroupList> in 0.048s
I, [2018-08-21T13:31:47.009313 #1]  INFO -- K8s::Transport<https://master.terom-pharos-dev.kontena.works:6443>: [GET /api/v1, GET /apis/apiregistration.k8s.io/v1, GET /apis/apiregistration.k8s.io/v1beta1, GET /apis/extensions/v1beta1, GET /apis/apps/v1, GET /apis/apps/v1beta2, GET /apis/apps/v1beta1, GET /apis/events.k8s.io/v1beta1, GET /apis/authentication.k8s.io/v1, GET /apis/authentication.k8s.io/v1beta1, GET /apis/authorization.k8s.io/v1, GET /apis/authorization.k8s.io/v1beta1, GET /apis/autoscaling/v1, GET /apis/autoscaling/v2beta1, GET /apis/batch/v1, GET /apis/batch/v1beta1, GET /apis/certificates.k8s.io/v1beta1, GET /apis/networking.k8s.io/v1, GET /apis/policy/v1beta1, GET /apis/rbac.authorization.k8s.io/v1, GET /apis/rbac.authorization.k8s.io/v1beta1, GET /apis/storage.k8s.io/v1, GET /apis/storage.k8s.io/v1beta1, GET /apis/admissionregistration.k8s.io/v1beta1, GET /apis/apiextensions.k8s.io/v1beta1, GET /apis/scheduling.k8s.io/v1beta1, GET /apis/pharos-test.k8s.io/v0, GET /apis/crd.projectcalico.org/v1, GET /apis/certmanager.k8s.io/v1alpha1, GET /apis/metrics.k8s.io/v1beta1] => HTTP [200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200] in 0.129s
I, [2018-08-21T13:31:47.009471 #1]  INFO -- : Apply stack test...
I, [2018-08-21T13:31:47.149323 #1]  INFO -- K8s::Transport<https://master.terom-pharos-dev.kontena.works:6443>: [GET /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/tests.pharos-test.k8s.io, GET /apis/pharos-test.k8s.io/v0/namespaces/default/tests/test] => HTTP [200, 200] in 0.138s
I, [2018-08-21T13:31:47.155711 #1]  INFO -- K8s::Stack<test>: Keep resource apiextensions.k8s.io/v1beta1:CustomResourceDefinition/tests.pharos-test.k8s.io in namespace  with checksum=dc117930cbb575475374f832f73ecb79
I, [2018-08-21T13:31:47.157476 #1]  INFO -- K8s::Stack<test>: Keep resource pharos-test.k8s.io/v0:Test/test in namespace default with checksum=dc117930cbb575475374f832f73ecb79
W, [2018-08-21T13:31:47.469508 #1]  WARN -- K8s::Transport<https://master.terom-pharos-dev.kontena.works:6443>: [GET /api/v1/componentstatuses, GET /api/v1/configmaps, GET /api/v1/endpoints, GET /api/v1/events, GET /api/v1/limitranges, GET /api/v1/namespaces, GET /api/v1/nodes, GET /api/v1/persistentvolumeclaims, GET /api/v1/persistentvolumes, GET /api/v1/pods, GET /api/v1/podtemplates, GET /api/v1/replicationcontrollers, GET /api/v1/resourcequotas, GET /api/v1/secrets, GET /api/v1/serviceaccounts, GET /api/v1/services, GET /apis/apiregistration.k8s.io/v1/apiservices, GET /apis/apiregistration.k8s.io/v1beta1/apiservices, GET /apis/extensions/v1beta1/daemonsets, GET /apis/extensions/v1beta1/deployments, GET /apis/extensions/v1beta1/ingresses, GET /apis/extensions/v1beta1/networkpolicies, GET /apis/extensions/v1beta1/podsecuritypolicies, GET /apis/extensions/v1beta1/replicasets, GET /apis/apps/v1/controllerrevisions, GET /apis/apps/v1/daemonsets, GET /apis/apps/v1/deployments, GET /apis/apps/v1/replicasets, GET /apis/apps/v1/statefulsets, GET /apis/apps/v1beta2/controllerrevisions, GET /apis/apps/v1beta2/daemonsets, GET /apis/apps/v1beta2/deployments, GET /apis/apps/v1beta2/replicasets, GET /apis/apps/v1beta2/statefulsets, GET /apis/apps/v1beta1/controllerrevisions, GET /apis/apps/v1beta1/deployments, GET /apis/apps/v1beta1/statefulsets, GET /apis/events.k8s.io/v1beta1/events, GET /apis/autoscaling/v1/horizontalpodautoscalers, GET /apis/autoscaling/v2beta1/horizontalpodautoscalers, GET /apis/batch/v1/jobs, GET /apis/batch/v1beta1/cronjobs, GET /apis/certificates.k8s.io/v1beta1/certificatesigningrequests, GET /apis/networking.k8s.io/v1/networkpolicies, GET /apis/policy/v1beta1/poddisruptionbudgets, GET /apis/policy/v1beta1/podsecuritypolicies, GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings, GET /apis/rbac.authorization.k8s.io/v1/clusterroles, GET /apis/rbac.authorization.k8s.io/v1/rolebindings, GET /apis/rbac.authorization.k8s.io/v1/roles, GET /apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings, GET /apis/rbac.authorization.k8s.io/v1beta1/clusterroles, GET /apis/rbac.authorization.k8s.io/v1beta1/rolebindings, GET /apis/rbac.authorization.k8s.io/v1beta1/roles, GET /apis/storage.k8s.io/v1/storageclasses, GET /apis/storage.k8s.io/v1beta1/storageclasses, GET /apis/storage.k8s.io/v1beta1/volumeattachments, GET /apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations, GET /apis/admissionregistration.k8s.io/v1beta1/validatingwebhookconfigurations, GET /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions, GET /apis/scheduling.k8s.io/v1beta1/priorityclasses, GET /apis/pharos-test.k8s.io/v0/tests, GET /apis/crd.projectcalico.org/v1/globalnetworksets, GET /apis/crd.projectcalico.org/v1/hostendpoints, GET /apis/crd.projectcalico.org/v1/ippools, GET /apis/crd.projectcalico.org/v1/networkpolicies, GET /apis/crd.projectcalico.org/v1/clusterinformations, GET /apis/crd.projectcalico.org/v1/felixconfigurations, GET /apis/crd.projectcalico.org/v1/globalnetworkpolicies, GET /apis/crd.projectcalico.org/v1/bgpconfigurations, GET /apis/crd.projectcalico.org/v1/bgppeers, GET /apis/certmanager.k8s.io/v1alpha1/clusterissuers, GET /apis/certmanager.k8s.io/v1alpha1/certificates, GET /apis/certmanager.k8s.io/v1alpha1/issuers, GET /apis/metrics.k8s.io/v1beta1/nodes, GET /apis/metrics.k8s.io/v1beta1/pods] => HTTP 403 Forbidden in 0.307s
bundler: failed to load command: bin/k8s-client (bin/k8s-client)
K8s::Error::Forbidden: GET /api/v1/componentstatuses => HTTP 403 Forbidden: componentstatuses is forbidden: User "system:serviceaccount:default:deploy-test" cannot list componentstatuses at the cluster scope
  /app/lib/k8s/transport.rb:192:in `parse_response'
  /app/lib/k8s/transport.rb:243:in `block in requests'
  /app/lib/k8s/transport.rb:239:in `map'
  /app/lib/k8s/transport.rb:239:in `requests'
  /app/lib/k8s/transport.rb:284:in `gets'
  /app/lib/k8s/resource_client.rb:50:in `list'
  /app/lib/k8s/client.rb:109:in `list_resources'
  /app/lib/k8s/stack.rb:120:in `prune'
  /app/lib/k8s/stack.rb:107:in `apply'
  bin/k8s-client:324:in `<top (required)>'

Similarly to #13, the K8s::Stack#prune => K8s::Client#list_resources could just ignore HTTP 403 errors: if the API user is not allowed to list a resource type, then it's unlikely to have access to create any resources of that type, never mind delete them.
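Something along these lines inside the prune enumeration, as a loose sketch (the stack label key and the discovery calls here are assumptions based on the log above):

  # Enumerate prunable resources, skipping anything this ServiceAccount cannot list.
  prunable = client.apis(prefetch_resources: true).flat_map do |api|
    api.resources.flat_map do |resource_client|
      begin
        resource_client.list(labelSelector: { 'k8s.kontena.io/stack' => 'test' })
      rescue K8s::Error::Forbidden
        [] # cannot list this type => cannot own stack resources of it either
      end
    end
  end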

Incompatibility issues with active_support

When you have both the deep_merge and active_support gems installed, you get:

> K8s::Resource.new(...).merge(metadata: { namespace: 'hello' })
ArgumentError: wrong number of arguments (given 2, expected 1)
gems/activesupport-5.2.2/lib/active_support/core_ext/hash/deep_merge.rb:23:in `deep_merge!'
k8s-client-0.6.4/lib/k8s/resource.rb:67:in `merge'

Probably dependent on load order, looking at the deep_merge sources.

Delete fails on OpenShift when trying to parse version

OpenShift returns 1.11.0+d4cacc0 as the gitVersion from the GET /version endpoint, which can't be parsed by Gem::Version.

I, [2019-06-04T19:00:07.060615 #62646]  INFO -- K8s::Transport<https://master.devel-311.3sca.net:8443>: GET /version => HTTP 200: <K8s::API::Version> in 0.042s
ArgumentError: Malformed version number string 1.11.0+d4cacc0
from /Users/mikz/.rubies/ruby-2.4.2/lib/ruby/2.4.0/rubygems/version.rb:208:in `initialize'
 "/Users/mikz/.rubies/ruby-2.4.2/lib/ruby/2.4.0/rubygems/version.rb:208:in `initialize'",
 "/Users/mikz/.rubies/ruby-2.4.2/lib/ruby/2.4.0/rubygems/version.rb:199:in `new'",
 "/Users/mikz/.rubies/ruby-2.4.2/lib/ruby/2.4.0/rubygems/version.rb:199:in `new'",
 "/Users/mikz/.gem/ruby/2.4.2/gems/k8s-client-0.10.0/lib/k8s/transport.rb:355:in `need_delete_body?'",
 "/Users/mikz/.gem/ruby/2.4.2/gems/k8s-client-0.10.0/lib/k8s/transport.rb:275:in `request'",
 "/Users/mikz/.gem/ruby/2.4.2/gems/k8s-client-0.10.0/lib/k8s/resource_client.rb:312:in `delete'",
 "/Users/mikz/.gem/ruby/2.4.2/gems/k8s-client-0.10.0/lib/k8s/resource_client.rb:346:in `delete_resource'",
 "/Users/mikz/.gem/ruby/2.4.2/gems/k8s-client-0.10.0/lib/k8s/client.rb:267:in `delete_resource'",

Resource unused Forwardable / Comparable

module K8s
  class Resource < RecursiveOpenStruct
    extend Forwardable
    include Comparable

  • There are no def_delegator/def_delegators calls, so it appears Forwardable is not actually used.
  • The <=> method has not been implemented, so Comparable is doing nothing.

K8s::Error also extends Forwardable but does not forward anything.

Stack apply fails for stacks containing both CRD + instances of that custom resource

The K8s::Stack#apply => client.get_resources attempts to perform API discovery for all stack resources before starting to POST the resources. This fails for custom resources whose CRDs are defined in the same stack, because the CRDs do not exist yet, so the API discovery returns an HTTP 404.

GET /apis/certmanager.k8s.io/v1alpha1 => HTTP 404 Not Found: 404 page not found
/usr/local/bundle/gems/k8s-client-0.3.0/lib/k8s/transport.rb:171:in `parse_response'
/usr/local/bundle/gems/k8s-client-0.3.0/lib/k8s/transport.rb:215:in `block in requests'
/usr/local/bundle/gems/k8s-client-0.3.0/lib/k8s/transport.rb:213:in `map'
/usr/local/bundle/gems/k8s-client-0.3.0/lib/k8s/transport.rb:213:in `requests'
/usr/local/bundle/gems/k8s-client-0.3.0/lib/k8s/transport.rb:254:in `gets'
/usr/local/bundle/gems/k8s-client-0.3.0/lib/k8s/client.rb:72:in `apis'
/usr/local/bundle/gems/k8s-client-0.3.0/lib/k8s/client.rb:123:in `get_resources'
/usr/local/bundle/gems/k8s-client-0.3.0/lib/k8s/stack.rb:72:in `apply'
/app/lib/pharos/addon.rb:205:in `apply_resources'
/app/lib/pharos/addon.rb:171:in `apply_install'
/app/lib/pharos/addon.rb:157:in `apply'
/app/lib/pharos/cluster_manager.rb:126:in `block in apply_addons'
/app/lib/pharos/addon_manager.rb:74:in `block in each'
/app/lib/pharos/addon_manager.rb:86:in `block in with_enabled_addons'
/app/lib/pharos/addon_manager.rb:83:in `each'
/app/lib/pharos/addon_manager.rb:83:in `with_enabled_addons'
/app/lib/pharos/addon_manager.rb:70:in `each'
/app/lib/pharos/cluster_manager.rb:123:in `apply_addons'
/app/lib/pharos/up_command.rb:107:in `configure'
/app/lib/pharos/up_command.rb:44:in `block in execute'
/app/lib/pharos/up_command.rb:43:in `chdir'
/app/lib/pharos/up_command.rb:43:in `execute'
/usr/local/bundle/gems/clamp-1.2.1/lib/clamp/command.rb:63:in `run'
/usr/local/bundle/gems/clamp-1.2.1/lib/clamp/subcommand/execution.rb:11:in `execute'
/usr/local/bundle/gems/clamp-1.2.1/lib/clamp/command.rb:63:in `run'
/usr/local/bundle/gems/clamp-1.2.1/lib/clamp/command.rb:132:in `run'
/app/lib/pharos/root_command.rb:16:in `run'
bin/pharos-cluster:12:in `<main>'

The API discovery at this point should just ignore the 404 errors and assume that those resources do not exist yet.
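A sketch of that behavior in the discovery step, treating the 404 as "this API group does not exist server-side yet" rather than as fatal (assuming client.get_resource raises K8s::Error::NotFound here):

  # Resolve stack resources against the server, tolerating not-yet-created CRD APIs.
  server_resources = resources.map do |resource|
    begin
      client.get_resource(resource)
    rescue K8s::Error::NotFound
      nil # the CRD defined in this same stack has not been POSTed yet
    end
  end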

K8s::APIClient#client_for_resource => NoMethodError: undefined method `kind' for nil:NilClass

Broken error handling for K8s::APIClient#client_for_resource.

/usr/local/bundle/gems/k8s-client-0.3.1/lib/k8s/client.rb:107:in `client_for_resource'
/usr/local/bundle/gems/k8s-client-0.3.1/lib/k8s/client.rb:113:in `create_resource'
/usr/local/bundle/gems/k8s-client-0.3.1/lib/k8s/stack.rb:86:in `block in apply'
/usr/local/bundle/gems/k8s-client-0.3.1/lib/k8s/stack.rb:74:in `map'
/usr/local/bundle/gems/k8s-client-0.3.1/lib/k8s/stack.rb:74:in `apply'
/app/lib/pharos/addon.rb:205:in `apply_resources'
/app/lib/pharos/addon.rb:171:in `apply_install'
/app/lib/pharos/addon.rb:157:in `apply'
/app/lib/pharos/cluster_manager.rb:126:in `block in apply_addons'
/app/lib/pharos/addon_manager.rb:74:in `block in each'
/app/lib/pharos/addon_manager.rb:86:in `block in with_enabled_addons'
/app/lib/pharos/addon_manager.rb:83:in `each'
/app/lib/pharos/addon_manager.rb:83:in `with_enabled_addons'
/app/lib/pharos/addon_manager.rb:70:in `each'
/app/lib/pharos/cluster_manager.rb:123:in `apply_addons'
/app/lib/pharos/up_command.rb:107:in `configure'
/app/lib/pharos/up_command.rb:44:in `block in execute'
/app/lib/pharos/up_command.rb:43:in `chdir'
/app/lib/pharos/up_command.rb:43:in `execute'
/usr/local/bundle/gems/clamp-1.2.1/lib/clamp/command.rb:63:in `run'
/usr/local/bundle/gems/clamp-1.2.1/lib/clamp/subcommand/execution.rb:11:in `execute'
/usr/local/bundle/gems/clamp-1.2.1/lib/clamp/command.rb:63:in `run'
/usr/local/bundle/gems/clamp-1.2.1/lib/clamp/command.rb:132:in `run'
/app/lib/pharos/root_command.rb:16:in `run'
bin/pharos-cluster:12:in `<main>'

Client#watch returns an unhandled error message on Forbidden response

Let

  lolcats = K8s::Client.in_cluster_config
    .api('something/v1')
    .resource('lolcats', namespace: 'default')

Calling lolcats.watch gives a misleading error message when the real cause is permission denied:

[applikator-6469bf9b68-n5k2p applikator] I, [2019-01-09T09:09:30.623639 #1]  INFO -- K8s::Transport<https://10.33.0.1:443>: GET /apis/something/v1 => HTTP 200: <K8s::API::MetaV1::APIResourceList> in 0.014s
[applikator-6469bf9b68-n5k2p applikator] /usr/local/bundle/gems/dry-struct-0.5.1/lib/dry/struct/class_interface.rb:208:in `rescue in new': [K8s::API::MetaV1::WatchEvent.new] :type is missing in Hash input (Dry::Struct::Error) (Excon::Error::Socket)

Calling lolcats.list, by contrast, gives an error message that tells you the reason:

WARN -- K8s::Transport<https://10.33.0.1:443>: GET /apis/something/v1/namespaces/default/lolcats => HTTP 403 Forbidden in 0.004s
[operator-6478f68bcd-t6w75 operator] /usr/local/bundle/gems/k8s-client-0.6.4/lib/k8s/transport.rb:211:in `parse_response': GET /apis/something/v1/namespaces/default/lolcats => HTTP 403 Forbidden: lolcats.xyz is forbidden: User "system:serviceaccount:kube-system:operator" cannot list resource "lolcats" in API group "xyz" in the namespace "default" (K8s::Error::Forbidden)

It took me some time to figure out the reason without a reasonable failure message.

ApiClient exceptions should clear caches

The find_api_resource and client_for_resource methods will raise K8s::Error::UndefinedResource when something is not found. The collections are cached indefinitely, so the error will be raised until a new client instance is created, even if the resource becomes available.

If the cache was cleared before raising the exception, then the user could do something like:

begin
  client.find_api_resource(xyz)
rescue K8s::Error::UndefinedResource
  sleep 2
  retry
end
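A sketch of clearing the cache on the way out (the instance variable and method internals are guesses, not the gem's actual code):

  def find_api_resource(resource_name)
    found = api_resources.find { |api_resource| api_resource.name == resource_name }

    unless found
      @api_resources = nil # drop the stale discovery cache so a retry re-fetches
      raise K8s::Error::UndefinedResource, "Unknown resource #{resource_name}"
    end

    found
  end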

Travis specs fail on ruby-head

Worker information
hostname: ac1d91a1-a7e2-4fb1-8c00-897740fe2d9e@1.production-1-worker-com-gce-8nts
version: v6.2.0 https://github.com/travis-ci/worker/tree/5e5476e01646095f48eec13196fdb3faf8f5cbf7
instance: travis-job-fecb1aee-f09f-4100-b475-dfa04ec934d2 travis-ci-garnet-trusty-1512502259-986baf0 (via amqp)
startup: 6.389735813s
system_info
Build system information
Build language: ruby
Build group: stable
Build dist: trusty
Build id: 100908923
Job id: 177732223
Runtime kernel version: 4.4.0-101-generic
travis-build version: 7675838a8
Build image provisioning date and time
Tue Dec  5 19:58:13 UTC 2017
Operating System Details
Distributor ID:	Ubuntu
Description:	Ubuntu 14.04.5 LTS
Release:	14.04
Codename:	trusty
Cookbooks Version
7c2c6a6 https://github.com/travis-ci/travis-cookbooks/tree/7c2c6a6
git version
git version 2.15.1
bash version
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
gcc version
gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
docker version
Client:
 Version:      17.09.0-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:42:38 2017
 OS/Arch:      linux/amd64
Server:
 Version:      17.09.0-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:41:20 2017
 OS/Arch:      linux/amd64
 Experimental: false
clang version
clang version 5.0.0 (tags/RELEASE_500/final)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /usr/local/clang-5.0.0/bin
jq version
jq-1.5
bats version
Bats 0.4.0
shellcheck version
0.4.6
shfmt version
v2.0.0
ccache version
ccache version 3.1.9
Copyright (C) 2002-2007 Andrew Tridgell
Copyright (C) 2009-2011 Joel Rosdahl
This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; either version 3 of the License, or (at your option) any later
version.
cmake version
cmake version 3.9.2
CMake suite maintained and supported by Kitware (kitware.com/cmake).
heroku version
heroku-cli/6.14.39-addc925 (linux-x64) node-v9.2.0
imagemagick version
Version: ImageMagick 6.7.7-10 2017-07-31 Q16 http://www.imagemagick.org
md5deep version
4.2
mercurial version
Mercurial Distributed SCM (version 4.2.2)
(see https://mercurial-scm.org for more information)
Copyright (C) 2005-2017 Matt Mackall and others
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
mysql version
mysql  Ver 14.14 Distrib 5.6.33, for debian-linux-gnu (x86_64) using  EditLine wrapper
openssl version
OpenSSL 1.0.1f 6 Jan 2014
packer version
Packer v1.0.2
Your version of Packer is out of date! The latest version
is 1.1.2. You can update by downloading from www.packer.io
postgresql client version
psql (PostgreSQL) 9.6.6
ragel version
Ragel State Machine Compiler version 6.8 Feb 2013
Copyright (c) 2001-2009 by Adrian Thurston
subversion version
svn, version 1.8.8 (r1568071)
   compiled Aug 10 2017, 17:20:39 on x86_64-pc-linux-gnu
Copyright (C) 2013 The Apache Software Foundation.
This software consists of contributions made by many people;
see the NOTICE file for more information.
Subversion is open source software, see http://subversion.apache.org/
The following repository access (RA) modules are available:
* ra_svn : Module for accessing a repository using the svn network protocol.
  - with Cyrus SASL authentication
  - handles 'svn' scheme
* ra_local : Module for accessing a repository on local disk.
  - handles 'file' scheme
* ra_serf : Module for accessing a repository via WebDAV protocol using serf.
  - using serf 1.3.3
  - handles 'http' scheme
  - handles 'https' scheme
sudo version
Sudo version 1.8.9p5
Configure options: --prefix=/usr -v --with-all-insults --with-pam --with-fqdn --with-logging=syslog --with-logfac=authpriv --with-env-editor --with-editor=/usr/bin/editor --with-timeout=15 --with-password-timeout=0 --with-passprompt=[sudo] password for %p:  --without-lecture --with-tty-tickets --disable-root-mailer --enable-admin-flag --with-sendmail=/usr/sbin/sendmail --with-timedir=/var/lib/sudo --mandir=/usr/share/man --libexecdir=/usr/lib/sudo --with-sssd --with-sssd-lib=/usr/lib/x86_64-linux-gnu --with-selinux
Sudoers policy plugin version 1.8.9p5
Sudoers file grammar version 43
Sudoers path: /etc/sudoers
Authentication methods: 'pam'
Syslog facility if syslog is being used for logging: authpriv
Syslog priority to use when user authenticates successfully: notice
Syslog priority to use when user authenticates unsuccessfully: alert
Send mail if the user is not in sudoers
Use a separate timestamp for each user/tty combo
Lecture user the first time they run sudo
Root may run sudo
Allow some information gathering to give useful error messages
Require fully-qualified hostnames in the sudoers file
Visudo will honor the EDITOR environment variable
Set the LOGNAME and USER environment variables
Length at which to wrap log file lines (0 for no wrap): 80
Authentication timestamp timeout: 15.0 minutes
Password prompt timeout: 0.0 minutes
Number of tries to enter a password: 3
Umask to use or 0777 to use user's: 022
Path to mail program: /usr/sbin/sendmail
Flags for mail program: -t
Address to send mail to: root
Subject line for mail messages: *** SECURITY information for %h ***
Incorrect password message: Sorry, try again.
Path to authentication timestamp dir: /var/lib/sudo
Default password prompt: [sudo] password for %p: 
Default user to run commands as: root
Value to override user's $PATH with: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
Path to the editor for use by visudo: /usr/bin/editor
When to require a password for 'list' pseudocommand: any
When to require a password for 'verify' pseudocommand: all
File descriptors >= 3 will be closed before executing a command
Environment variables to check for sanity:
	TZ
	TERM
	LINGUAS
	LC_*
	LANGUAGE
	LANG
	COLORTERM
Environment variables to remove:
	RUBYOPT
	RUBYLIB
	PYTHONUSERBASE
	PYTHONINSPECT
	PYTHONPATH
	PYTHONHOME
	TMPPREFIX
	ZDOTDIR
	READNULLCMD
	NULLCMD
	FPATH
	PERL5DB
	PERL5OPT
	PERL5LIB
	PERLLIB
	PERLIO_DEBUG 
	JAVA_TOOL_OPTIONS
	SHELLOPTS
	GLOBIGNORE
	PS4
	BASH_ENV
	ENV
	TERMCAP
	TERMPATH
	TERMINFO_DIRS
	TERMINFO
	_RLD*
	LD_*
	PATH_LOCALE
	NLSPATH
	HOSTALIASES
	RES_OPTIONS
	LOCALDOMAIN
	CDPATH
	IFS
Environment variables to preserve:
	JAVA_HOME
	TRAVIS
	CI
	DEBIAN_FRONTEND
	XAUTHORIZATION
	XAUTHORITY
	PS2
	PS1
	PATH
	LS_COLORS
	KRB5CCNAME
	HOSTNAME
	HOME
	DISPLAY
	COLORS
Locale to use while parsing sudoers: C
Directory in which to store input/output logs: /var/log/sudo-io
File in which to store the input/output log: %{seq}
Add an entry to the utmp/utmpx file when allocating a pty
PAM service name to use
PAM service name to use for login shells
Create a new PAM session for the command to run in
Maximum I/O log sequence number: 0
Local IP address and netmask pairs:
	10.240.0.28/255.255.255.255
	172.17.0.1/255.255.0.0
Sudoers I/O plugin version 1.8.9p5
gzip version
gzip 1.6
Copyright (C) 2007, 2010, 2011 Free Software Foundation, Inc.
Copyright (C) 1993 Jean-loup Gailly.
This is free software.  You may redistribute copies of it under the terms of
the GNU General Public License <http://www.gnu.org/licenses/gpl.html>.
There is NO WARRANTY, to the extent permitted by law.
Written by Jean-loup Gailly.
zip version
Copyright (c) 1990-2008 Info-ZIP - Type 'zip "-L"' for software license.
This is Zip 3.0 (July 5th 2008), by Info-ZIP.
Currently maintained by E. Gordon.  Please send bug reports to
the authors using the web page at www.info-zip.org; see README for details.
Latest sources and executables are at ftp://ftp.info-zip.org/pub/infozip,
as of above date; see http://www.info-zip.org/ for other sites.
Compiled with gcc 4.8.2 for Unix (Linux ELF) on Oct 21 2013.
Zip special compilation options:
	USE_EF_UT_TIME       (store Universal Time)
	BZIP2_SUPPORT        (bzip2 library version 1.0.6, 6-Sept-2010)
	    bzip2 code and library copyright (c) Julian R Seward
	    (See the bzip2 license for terms of use)
	SYMLINK_SUPPORT      (symbolic links supported)
	LARGE_FILE_SUPPORT   (can read and write large files on file system)
	ZIP64_SUPPORT        (use Zip64 to store large files in archives)
	UNICODE_SUPPORT      (store and read UTF-8 Unicode paths)
	STORE_UNIX_UIDs_GIDs (store UID/GID sizes/values using new extra field)
	UIDGID_NOT_16BIT     (old Unix 16-bit UID/GID extra field not used)
	[encryption, version 2.91 of 05 Jan 2007] (modified for Zip 3)
Encryption notice:
	The encryption code of this program is not copyrighted and is
	put in the public domain.  It was originally written in Europe
	and, to the best of our knowledge, can be freely distributed
	in both source and object forms from any country, including
	the USA under License Exception TSU of the U.S. Export
	Administration Regulations (section 740.13(e)) of 6 June 2002.
Zip environment options:
             ZIP:  [none]
          ZIPOPT:  [none]
vim version
VIM - Vi IMproved 7.4 (2013 Aug 10, compiled Nov 24 2016 16:43:18)
Included patches: 1-52
Extra patches: 8.0.0056
Modified by [email protected]
Compiled by buildd@
Huge version without GUI.  Features included (+) or not (-):
+acl             +farsi           +mouse_netterm   +syntax
+arabic          +file_in_path    +mouse_sgr       +tag_binary
+autocmd         +find_in_path    -mouse_sysmouse  +tag_old_static
-balloon_eval    +float           +mouse_urxvt     -tag_any_white
-browse          +folding         +mouse_xterm     -tcl
++builtin_terms  -footer          +multi_byte      +terminfo
+byte_offset     +fork()          +multi_lang      +termresponse
+cindent         +gettext         -mzscheme        +textobjects
-clientserver    -hangul_input    +netbeans_intg   +title
-clipboard       +iconv           +path_extra      -toolbar
+cmdline_compl   +insert_expand   -perl            +user_commands
+cmdline_hist    +jumplist        +persistent_undo +vertsplit
+cmdline_info    +keymap          +postscript      +virtualedit
+comments        +langmap         +printer         +visual
+conceal         +libcall         +profile         +visualextra
+cryptv          +linebreak       +python          +viminfo
+cscope          +lispindent      -python3         +vreplace
+cursorbind      +listcmds        +quickfix        +wildignore
+cursorshape     +localmap        +reltime         +wildmenu
+dialog_con      -lua             +rightleft       +windows
+diff            +menu            -ruby            +writebackup
+digraphs        +mksession       +scrollbind      -X11
-dnd             +modify_fname    +signs           -xfontset
-ebcdic          +mouse           +smartindent     -xim
+emacs_tags      -mouseshape      -sniff           -xsmp
+eval            +mouse_dec       +startuptime     -xterm_clipboard
+ex_extra        +mouse_gpm       +statusline      -xterm_save
+extra_search    -mouse_jsbterm   -sun_workshop    -xpm
   system vimrc file: "$VIM/vimrc"
     user vimrc file: "$HOME/.vimrc"
 2nd user vimrc file: "~/.vim/vimrc"
      user exrc file: "$HOME/.exrc"
  fall-back for $VIM: "/usr/share/vim"
Compilation: gcc -c -I. -Iproto -DHAVE_CONFIG_H     -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1      
Linking: gcc   -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,--as-needed -o vim        -lm -ltinfo -lnsl  -lselinux  -lacl -lattr -lgpm -ldl    -L/usr/lib/python2.7/config-x86_64-linux-gnu -lpython2.7 -lpthread -ldl -lutil -lm -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions      
iptables version
iptables v1.4.21
curl version
curl 7.35.0 (x86_64-pc-linux-gnu) libcurl/7.35.0 OpenSSL/1.0.1f zlib/1.2.8 libidn/1.28 librtmp/2.3
wget version
GNU Wget 1.15 built on linux-gnu.
rsync version
rsync  version 3.1.0  protocol version 31
gimme version
v1.2.0
nvm version
0.33.6
perlbrew version
/home/travis/perl5/perlbrew/bin/perlbrew  - App::perlbrew/0.80
phpenv version
rbenv 1.1.1-25-g6aa70b6
rvm version
rvm 1.29.3 (latest) by Michal Papis, Piotr Kuczynski, Wayne E. Seguin [https://rvm.io]
default ruby version
ruby 2.4.1p111 (2017-03-22 revision 58053) [x86_64-linux]
CouchDB version
couchdb 1.6.1
ElasticSearch version
5.5.0
Installed Firefox version
firefox 56.0.2
MongoDB version
MongoDB 3.4.10
PhantomJS version
2.1.1
Pre-installed PostgreSQL versions
9.2.24
9.3.20
9.4.15
9.5.10
9.6.6
RabbitMQ Version
3.6.14
Redis version
redis-server 4.0.6
riak version
2.2.3
Pre-installed Go versions
1.7.4
ant version
Apache Ant(TM) version 1.9.3 compiled on April 8 2014
mvn version
Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z)
Maven home: /usr/local/maven-3.5.2
Java version: 1.8.0_151, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-8-oracle/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "4.4.0-98-generic", arch: "amd64", family: "unix"
gradle version
------------------------------------------------------------
Gradle 4.0.1
------------------------------------------------------------
Build time:   2017-07-07 14:02:41 UTC
Revision:     38e5dc0f772daecca1d2681885d3d85414eb6826
Groovy:       2.4.11
Ant:          Apache Ant(TM) version 1.9.6 compiled on June 29 2015
JVM:          1.8.0_151 (Oracle Corporation 25.151-b12)
OS:           Linux 4.4.0-98-generic amd64
lein version
Leiningen 2.8.1 on Java 1.8.0_151 Java HotSpot(TM) 64-Bit Server VM
Pre-installed Node.js versions
v4.8.6
v6.12.0
v6.12.1
v8.9
v8.9.1
phpenv versions
  system
  5.6
* 5.6.32 (set by /home/travis/.phpenv/version)
  7.0
  7.0.25
  7.1
  7.1.11
  hhvm
  hhvm-stable
composer --version
Composer version 1.5.2 2017-09-11 16:59:25
Pre-installed Ruby versions
ruby-2.2.7
ruby-2.3.4
ruby-2.4.1
git.checkout
0.61s$ git clone --depth=50 https://github.com/kontena/k8s-client.git kontena/k8s-client
Cloning into 'kontena/k8s-client'...
remote: Enumerating objects: 502, done.
remote: Counting objects: 100% (502/502), done.
remote: Compressing objects: 100% (257/257), done.
remote: Total 502 (delta 274), reused 411 (delta 224), pack-reused 0
Receiving objects: 100% (502/502), 129.19 KiB | 4.45 MiB/s, done.
Resolving deltas: 100% (274/274), done.
$ cd kontena/k8s-client
0.40s$ git fetch origin +refs/pull/105/merge:
remote: Enumerating objects: 22, done.
remote: Counting objects: 100% (22/22), done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 7 (delta 3), reused 6 (delta 3), pack-reused 0
Unpacking objects: 100% (7/7), done.
From https://github.com/kontena/k8s-client
 * branch            refs/pull/105/merge -> FETCH_HEAD
$ git checkout -qf FETCH_HEAD
rvm
0.08s$ command curl -sSL https://rvm.io/mpapis.asc | gpg2 --import -
gpg: key D39DC0E3: "Michal Papis (RVM signing) <[email protected]>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
0.52s$ command curl -sSL https://rvm.io/pkuczynski.asc | gpg2 --import -
gpg: key 39499BDB: public key "Piotr Kuczynski <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
Setting up latest ruby-head
$ export ruby_alias=`rvm alias show ruby-head 2>/dev/null`
0.19s$ rvm alias delete ruby-head
Deleting alias: ruby-head...
4.49s$ rvm remove ${ruby_alias:-ruby-head} --gems
ruby-head - #already gone
Using /home/travis/.rvm/gems/ruby-2.4.1
** Updating RubyGems to the latest compatible version for security reasons. **
** If you need an older version, you can downgrade with 'gem update --system OLD_VERSION'. **
0.77s$ rvm remove ruby-head --gems --fuzzy
ruby-head - #already gone
Using /home/travis/.rvm/gems/ruby-2.4.1
25.18s$ rvm install ruby-head --binary
curl: (22) The requested URL returned error: 404 Not Found
curl: (22) The requested URL returned error: 404 Not Found
Searching for binary rubies, this might take some time.
Found remote file https://rubies.travis-ci.org/ubuntu/14.04/x86_64/ruby-head.tar.bz2
Checking requirements for ubuntu.
Requirements installation successful.
ruby-head - #configure
ruby-head - #download
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 14.6M  100 14.6M    0     0  3005k      0  0:00:04  0:00:04 --:--:-- 3471k
No checksum for downloaded archive, recording checksum in user configuration.
ruby-head - #validate archive
ruby-head - #extract
ruby-head - #validate binary
ruby-head - #setup
ruby-head - #gemset created /home/travis/.rvm/gems/ruby-head@global
ruby-head - #importing gemset /home/travis/.rvm/gemsets/global.gems..............there was an error installing gem gem-wrappers
............there was an error installing gem rubygems-bundler
.......................there was an error installing gem rake
..............there was an error installing gem rvm
.
ruby-head - #generating global wrappers..............
Error running 'run_gem_wrappers_regenerate',
please read /home/travis/.rvm/log/1550139831_ruby-head/gemset.wrappers.global.log
ruby-head - #gemset created /home/travis/.rvm/gems/ruby-head
ruby-head - #importing gemset /home/travis/.rvm/gemsets/default.gems.............there was an error installing gem rake
.........
ruby-head - #generating default wrappers..............
Error running 'run_gem_wrappers_regenerate',
please read /home/travis/.rvm/log/1550139833_ruby-head/gemset.wrappers.default.log
0.39s$ rvm use ruby-head
Using /home/travis/.rvm/gems/ruby-head
$ export BUNDLE_GEMFILE=$PWD/Gemfile
ruby.versions
$ ruby --version
ruby 2.7.0dev (2019-02-14 trunk 67073) [x86_64-linux]
$ rvm --version
rvm 1.29.3 (latest) by Michal Papis, Piotr Kuczynski, Wayne E. Seguin [https://rvm.io]
$ bundle --version
Bundler version 2.0.1
$ gem --version
3.1.0.pre1
before_install
3.39s$ gem install bundler -v 1.16.2
Fetching bundler-1.16.2.gem
Successfully installed bundler-1.16.2
Parsing documentation for bundler-1.16.2
Installing ri documentation for bundler-1.16.2
Done installing documentation for bundler after 2 seconds
1 gem installed
8.46s$ bundle install --jobs=3 --retry=3
Fetching gem metadata from https://rubygems.org/...........
Fetching gem metadata from https://rubygems.org/.
Resolving dependencies...
Bundler could not find compatible versions for gem "bundler":
  In Gemfile:
    bundler (~> 1.16)
  Current Bundler version:
    bundler (2.0.1)
This Gemfile requires a different version of Bundler.
Perhaps you need to update Bundler by running `gem install bundler`?
Could not find gem 'bundler (~> 1.16)' in any of the relevant sources:
  the local ruby installation
The command "eval bundle install --jobs=3 --retry=3 " failed. Retrying, 2 of 3.
[retries 2 and 3 repeat the identical "Bundler could not find compatible versions" output]
The command "eval bundle install --jobs=3 --retry=3 " failed 3 times.
The command "bundle install --jobs=3 --retry=3" failed and exited with 6 during .
Your build has been stopped.
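
For what it's worth, the failure above is the well-known Bundler 1/2 split: ruby-head's RubyGems (3.1.0.pre1) ships Bundler 2.0.1 as the default, the gemspec requires bundler (~> 1.16), and the bundle command resolves to the newest installed Bundler rather than the 1.16.2 installed in before_install. One possible fix (an assumption here, not something taken from this log) is to widen the constraint so either major version satisfies it:

    # k8s-client.gemspec (hypothetical change; the "~> 1.16" constraint is
    # inferred from the Bundler error output above)
    Gem::Specification.new do |spec|
      # ... existing fields ...
      spec.add_development_dependency 'bundler', '>= 1.16', '< 3'
    end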

Add a changelog

It's rare to see a gem without a CHANGES.txt or similar in the root.

There are probably services or tools that can update it automatically when commits land on master.

Automatic configuration

We have something like this in several tools that use k8s-client:

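    # Note: these excerpts assume `require 'base64'` and `require 'k8s-client'`;
    # `signal_usage_error` comes from the tools' CLI framework (Clamp defines a
    # method with this name), not from k8s-client.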
    def create_client
      if ENV['KUBE_TOKEN'] && ENV['KUBE_CA'] && ENV['KUBE_SERVER']
        K8s::Client.new(K8s::Transport.config(build_kubeconfig_from_env))
      elsif ENV['KUBECONFIG']
        K8s::Client.config(K8s::Config.load_file(ENV['KUBECONFIG']))
      elsif File.exist?(File.join(Dir.home, '.kube', 'config'))
        K8s::Client.config(K8s::Config.load_file(File.join(Dir.home, '.kube', 'config')))
      else
        K8s::Client.in_cluster_config
      end
    end

    # @return [K8s::Config]
    def build_kubeconfig_from_env
      token = ENV['KUBE_TOKEN']
      token = Base64.strict_decode64(token)

      K8s::Config.new(
        clusters: [
          {
            name: 'kubernetes',
            cluster: {
              server: ENV['KUBE_SERVER'],
              certificate_authority_data: ENV['KUBE_CA']
            }
          }
        ],
        users: [
          {
            name: 'mortar',
            user: {
              token: token
            }
          }
        ],
        contexts: [
          {
            name: 'mortar',
            context: {
              cluster: 'kubernetes',
              user: 'mortar'
            }
          }
        ],
        preferences: {},
        current_context: 'mortar'
      )
    rescue ArgumentError
      signal_usage_error "KUBE_TOKEN env doesn't seem to be base64 encoded!"
    end

I think this should live in k8s-client itself, and K8s::Config should know how to merge multiple configs when KUBECONFIG references multiple files, as documented here.
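
A minimal sketch of what that could look like, assuming a hypothetical Config#merge that deep-merges clusters/users/contexts and keeps the first occurrence of each named entry (as the kubeconfig merging rules require):

    require 'k8s-client'

    module K8s
      class Config
        # KUBECONFIG can list several files, separated by ':' (';' on Windows).
        # Load each one and fold them together, left to right.
        def self.from_kubeconfig_env(value = ENV['KUBECONFIG'])
          paths = value.to_s.split(File::PATH_SEPARATOR).reject(&:empty?)
          paths.map { |path| load_file(path) }
               .reduce { |merged, config| merged.merge(config) } # hypothetical #merge
        end
      end
    end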

Excon 0.62.0 issue

Hi! Thanks for this lovely gem. It's the backbone of https://github.com/bgeesaman/inspec-k8s (the train-kubernetes plugin, specifically). I noticed that when other gems install excon they pull in 0.64.0, while k8s-client pulls in 0.62.0. When using inspec, this can lead to both 0.62.0 and 0.64.0 being installed, and the two versions then conflict when running inspec.

Would you consider changing the gem dependency to >= 0.62.0 and making a new versioned release?

Thanks!
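
Concretely, the requested change is a one-line edit to the gemspec. A sketch, assuming the current declaration uses a pessimistic pin on 0.62.0:

    # k8s-client.gemspec (sketch; the assumed current pin is '~> 0.62.0')
    Gem::Specification.new do |spec|
      # ... existing fields ...
      spec.add_runtime_dependency 'excon', '>= 0.62.0'
    end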

Replace YAJL with a pure ruby solution

It's the only dependency with a native extension, so it requires build tools at install time. It would be nice to replace it with a pure-Ruby alternative if one exists or is released.

I believe the only reason it's used is that the watch API didn't work without a streaming JSON parser.
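
Since the watch endpoint streams newline-delimited JSON (one object per line), a pure-Ruby replacement may not need a true streaming parser at all: buffering chunks and splitting on newlines with the stdlib JSON module could be enough. A minimal sketch (class and method names are illustrative, not from the gem):

    require 'json'

    # Accumulates raw HTTP chunks and yields each complete JSON object.
    class NDJSONBuffer
      def initialize(&block)
        @buffer = +''
        @on_object = block
      end

      # Feed a chunk as it arrives from the socket; any complete lines
      # buffered so far are parsed and handed to the callback.
      def <<(chunk)
        @buffer << chunk
        while (newline = @buffer.index("\n"))
          line = @buffer.slice!(0..newline).strip
          @on_object.call(JSON.parse(line)) unless line.empty?
        end
        self
      end
    end

    # Usage: feed response chunks as they arrive, e.g.
    #   buffer = NDJSONBuffer.new { |event| puts event['type'] }
    #   response_chunks.each { |chunk| buffer << chunk }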

Tail logs

Perhaps it is just undocumented, but is there a way to tail the logs of a pod using k8s-client?
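
For context, the raw Kubernetes endpoint is GET /api/v1/namespaces/{namespace}/pods/{name}/log, with follow=true for tailing. If the client grew a helper for it, usage might look like the sketch below; the logs method is hypothetical and not part of the gem today:

    require 'k8s-client'

    client = K8s::Client.config(
      K8s::Config.load_file(File.join(Dir.home, '.kube', 'config'))
    )

    # Hypothetical helper wrapping GET .../pods/my-pod/log?follow=true
    client.api('v1')
          .resource('pods', namespace: 'default')
          .logs('my-pod', follow: true) do |line| # not a real method today
      puts line
    end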
