
kubernetes-client's Introduction

Kubernetes & OpenShift Java Client

Join the chat at https://gitter.im/fabric8io/kubernetes-client

This client provides access to the full Kubernetes & OpenShift REST APIs via a fluent DSL.


The following modules are published to Maven Central, each with accompanying Javadocs:

Clients

  • kubernetes-client
  • openshift-client

Extensions

  • knative-client
  • tekton-client
  • servicecatalog-client
  • chaosmesh-client
  • volumesnapshot-client
  • volcano-client
  • istio-client
  • open-cluster-management-client

Usage

Creating a client

The easiest way to create a client is:

KubernetesClient client = new KubernetesClientBuilder().build();

DefaultOpenShiftClient implements both the KubernetesClient and OpenShiftClient interfaces, so if you need the OpenShift extensions, such as Builds, simply do:

OpenShiftClient osClient = new KubernetesClientBuilder().build().adapt(OpenShiftClient.class);

Configuring the client

This will use settings from different sources in the following order of priority:

  • System properties
  • Environment variables
  • Kube config file
  • Service account token & mounted CA certificate

System properties are preferred over environment variables. The following system properties & environment variables can be used for configuration:

| Property / Environment Variable | Description | Default value |
| --- | --- | --- |
| kubernetes.disable.autoConfig / KUBERNETES_DISABLE_AUTOCONFIG | Disable automatic configuration (the client will not look in ~/.kube/config, the mounted ServiceAccount, environment variables or system properties for Kubernetes cluster information) | false |
| kubernetes.master / KUBERNETES_MASTER | Kubernetes master URL | https://kubernetes.default.svc |
| kubernetes.api.version / KUBERNETES_API_VERSION | API version | v1 |
| openshift.url / OPENSHIFT_URL | OpenShift master URL | Kubernetes master URL value |
| kubernetes.oapi.version / KUBERNETES_OAPI_VERSION | OpenShift API version | v1 |
| kubernetes.trust.certificates / KUBERNETES_TRUST_CERTIFICATES | Trust all certificates | false |
| kubernetes.disable.hostname.verification / KUBERNETES_DISABLE_HOSTNAME_VERIFICATION | Disable hostname verification | false |
| kubernetes.certs.ca.file / KUBERNETES_CERTS_CA_FILE | CA certificate file | |
| kubernetes.certs.ca.data / KUBERNETES_CERTS_CA_DATA | CA certificate data | |
| kubernetes.certs.client.file / KUBERNETES_CERTS_CLIENT_FILE | Client certificate file | |
| kubernetes.certs.client.data / KUBERNETES_CERTS_CLIENT_DATA | Client certificate data | |
| kubernetes.certs.client.key.file / KUBERNETES_CERTS_CLIENT_KEY_FILE | Client key file | |
| kubernetes.certs.client.key.data / KUBERNETES_CERTS_CLIENT_KEY_DATA | Client key data | |
| kubernetes.certs.client.key.algo / KUBERNETES_CERTS_CLIENT_KEY_ALGO | Client key encryption algorithm | RSA |
| kubernetes.certs.client.key.passphrase / KUBERNETES_CERTS_CLIENT_KEY_PASSPHRASE | Client key passphrase | |
| kubernetes.auth.basic.username / KUBERNETES_AUTH_BASIC_USERNAME | Basic authentication username | |
| kubernetes.auth.basic.password / KUBERNETES_AUTH_BASIC_PASSWORD | Basic authentication password | |
| kubernetes.auth.serviceAccount.token / KUBERNETES_AUTH_SERVICEACCOUNT_TOKEN | Name of the service account token file | /var/run/secrets/kubernetes.io/serviceaccount/token |
| kubernetes.auth.tryKubeConfig / KUBERNETES_AUTH_TRYKUBECONFIG | Configure client using Kubernetes config | true |
| kubeconfig / KUBECONFIG | Name of the kubernetes config file to read | ~/.kube/config |
| kubernetes.auth.tryServiceAccount / KUBERNETES_AUTH_TRYSERVICEACCOUNT | Configure client from Service account | true |
| kubernetes.tryNamespacePath / KUBERNETES_TRYNAMESPACEPATH | Configure client namespace from Kubernetes service account namespace path | true |
| kubernetes.auth.token / KUBERNETES_AUTH_TOKEN | Bearer token | |
| kubernetes.watch.reconnectInterval / KUBERNETES_WATCH_RECONNECTINTERVAL | Watch reconnect interval in ms | 1000 |
| kubernetes.watch.reconnectLimit / KUBERNETES_WATCH_RECONNECTLIMIT | Number of reconnect attempts (-1 for infinite) | -1 |
| kubernetes.connection.timeout / KUBERNETES_CONNECTION_TIMEOUT | Connection timeout in ms (0 for no timeout) | 10000 |
| kubernetes.request.timeout / KUBERNETES_REQUEST_TIMEOUT | Read timeout in ms | 10000 |
| kubernetes.upload.connection.timeout / KUBERNETES_UPLOAD_CONNECTION_TIMEOUT | Pod upload connection timeout in ms | 10000 |
| kubernetes.upload.request.timeout / KUBERNETES_UPLOAD_REQUEST_TIMEOUT | Pod upload request timeout in ms | 120000 |
| kubernetes.request.retry.backoffLimit / KUBERNETES_REQUEST_RETRY_BACKOFFLIMIT | Number of retry attempts (-1 for infinite) | 10 |
| kubernetes.request.retry.backoffInterval / KUBERNETES_REQUEST_RETRY_BACKOFFINTERVAL | Retry initial backoff interval in ms | 100 |
| kubernetes.rolling.timeout / KUBERNETES_ROLLING_TIMEOUT | Rolling timeout in ms | 900000 |
| kubernetes.logging.interval / KUBERNETES_LOGGING_INTERVAL | Logging interval in ms | 20000 |
| kubernetes.scale.timeout / KUBERNETES_SCALE_TIMEOUT | Scale timeout in ms | 600000 |
| kubernetes.websocket.timeout / KUBERNETES_WEBSOCKET_TIMEOUT | Websocket timeout in ms | 5000 |
| kubernetes.websocket.ping.interval / KUBERNETES_WEBSOCKET_PING_INTERVAL | Websocket ping interval in ms | 30000 |
| kubernetes.max.concurrent.requests / KUBERNETES_MAX_CONCURRENT_REQUESTS | Maximum concurrent requests | 64 |
| kubernetes.max.concurrent.requests.per.host / KUBERNETES_MAX_CONCURRENT_REQUESTS_PER_HOST | Maximum concurrent requests per host | 5 |
| kubernetes.impersonate.username / KUBERNETES_IMPERSONATE_USERNAME | Impersonate-User HTTP header value | |
| kubernetes.impersonate.group / KUBERNETES_IMPERSONATE_GROUP | Impersonate-Group HTTP header value | |
| kubernetes.tls.versions / KUBERNETES_TLS_VERSIONS | TLS versions separated by , | TLSv1.2,TLSv1.3 |
| kubernetes.truststore.file / KUBERNETES_TRUSTSTORE_FILE | Truststore file | |
| kubernetes.truststore.passphrase / KUBERNETES_TRUSTSTORE_PASSPHRASE | Truststore passphrase | |
| kubernetes.keystore.file / KUBERNETES_KEYSTORE_FILE | Keystore file | |
| kubernetes.keystore.passphrase / KUBERNETES_KEYSTORE_PASSPHRASE | Keystore passphrase | |
| kubernetes.backwardsCompatibilityInterceptor.disable / KUBERNETES_BACKWARDSCOMPATIBILITYINTERCEPTOR_DISABLE | Disable the BackwardsCompatibilityInterceptor | true |
| no.proxy / NO_PROXY | Comma-separated list of domain extensions the proxy should not be used for | |
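
These settings can also be provided programmatically before the client is created. A minimal sketch, assuming a placeholder master URL (the property names are the ones from the table above; system properties take precedence over environment variables and the kube config file):

// The URL below is a placeholder for your own cluster endpoint.
System.setProperty("kubernetes.master", "https://mycluster.example.com:6443");
System.setProperty("kubernetes.trust.certificates", "true"); // dev only

KubernetesClient client = new KubernetesClientBuilder().build();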

Alternatively you can use the ConfigBuilder to create a config object for the Kubernetes client:

Config config = new ConfigBuilder().withMasterUrl("https://mymaster.com").build();
KubernetesClient client = new KubernetesClientBuilder().withConfig(config).build();
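
The ConfigBuilder exposes setters for most of the options listed above. A minimal sketch (the URL and namespace are placeholders; trusting all certificates should only be used in development):

Config config = new ConfigBuilder()
    .withMasterUrl("https://mymaster.com")  // placeholder master URL
    .withNamespace("default")               // default namespace for operations
    .withTrustCerts(true)                   // skip certificate validation (dev only)
    .build();
KubernetesClient client = new KubernetesClientBuilder().withConfig(config).build();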

Using the DSL is the same for all resources.

List resources:

NamespaceList myNs = client.namespaces().list();

ServiceList myServices = client.services().list();

ServiceList myNsServices = client.services().inNamespace("default").list();

Get a resource:

Namespace myns = client.namespaces().withName("myns").get();

Service myservice = client.services().inNamespace("default").withName("myservice").get();

Delete:

client.namespaces().withName("myns").delete();

client.services().inNamespace("default").withName("myservice").delete();

Editing resources uses the inline builders from the Kubernetes Model:

Namespace myns = client.namespaces().withName("myns").edit(n -> new NamespaceBuilder(n)
                   .editMetadata()
                     .addToLabels("a", "label")
                   .endMetadata()
                   .build());

Service myservice = client.services().inNamespace("default").withName("myservice").edit(s -> new ServiceBuilder(s)
                     .editMetadata()
                       .addToLabels("another", "label")
                     .endMetadata()
                     .build());

In the same spirit you can inline builders to create:

Namespace myns = client.namespaces().create(new NamespaceBuilder()
                   .withNewMetadata()
                     .withName("myns")
                     .addToLabels("a", "label")
                   .endMetadata()
                   .build());

Service myservice = client.services().inNamespace("default").create(new ServiceBuilder()
                     .withNewMetadata()
                       .withName("myservice")
                       .addToLabels("another", "label")
                     .endMetadata()
                     .build());

You can also set the apiVersion of the resource, as in the case of SecurityContextConstraints:

SecurityContextConstraints scc = new SecurityContextConstraintsBuilder()
		.withApiVersion("v1")
		.withNewMetadata().withName("scc").endMetadata()
		.withAllowPrivilegedContainer(true)
		.withNewRunAsUser()
		.withType("RunAsAny")
		.endRunAsUser()
		.build();

Following events

Use io.fabric8.kubernetes.api.model.Event as T for Watcher:

client.events().inAnyNamespace().watch(new Watcher<Event>() {

  @Override
  public void eventReceived(Action action, Event resource) {
    System.out.println("event " + action.name() + " " + resource.toString());
  }

  @Override
  public void onClose(KubernetesClientException cause) {
    System.out.println("Watcher close due to " + cause);
  }

});
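
watch() returns a Watch handle that keeps the underlying connection open; close it when you no longer need events. A minimal sketch:

Watch watch = client.events().inAnyNamespace().watch(new Watcher<Event>() {

  @Override
  public void eventReceived(Action action, Event resource) {
    System.out.println("event " + action.name() + " " + resource.toString());
  }

  @Override
  public void onClose(KubernetesClientException cause) {
    System.out.println("Watcher close due to " + cause);
  }

});

// ... later, stop receiving events and release the connection
watch.close();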

Working with extensions

The Kubernetes API defines a number of extension resources, such as DaemonSets, Jobs and Ingresses, which are all usable through their respective DSL entry points:

e.g. to list the jobs...

JobList jobs = client.batch().jobs().list();
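
Other resource groups follow the same pattern. For example, a sketch listing DaemonSets and Ingresses (entry point names as in recent client versions; the namespaces are placeholders):

DaemonSetList daemonSets = client.apps().daemonSets().inNamespace("kube-system").list();
IngressList ingresses = client.network().v1().ingresses().inNamespace("default").list();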

Loading resources from external sources

There are cases where you want to read a resource from an external source, rather than defining it using the client's DSL. For those cases the client allows you to load the resource from:

  • A file (Supports both java.io.File and java.lang.String)
  • A url
  • An input stream

Once the resource is loaded, you can treat it just as if you had created it yourself.

For example, let's read a Pod from a YAML file and work with it:

Pod refreshed = client.load("/path/to/a/pod.yml").fromServer().get();
client.load("/workspace/pod.yml").delete();
LogWatch handle = client.load("/workspace/pod.yml").watchLog(System.out);
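
Loading from an input stream (or a URL) works the same way. A minimal sketch, assuming the same placeholder path, a client in scope, and an enclosing method that declares IOException (imports for FileInputStream and the model types are omitted, as in the other snippets; recent client versions use items() instead of the list-returning get()):

try (InputStream is = new FileInputStream("/workspace/pod.yml")) {
    // load() accepts any InputStream and returns the parsed resources
    List<HasMetadata> resources = client.load(is).get();
    Pod pod = (Pod) resources.get(0);
}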

Passing a reference of a resource to the client

In the same spirit you can use an object created externally (either a reference or its string representation).

For example:

Pod pod = someThirdPartyCodeThatCreatesAPod();
client.resource(pod).delete();
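
The resource() handle supports the other CRUD operations too. A sketch creating (or updating) the externally-built Pod (the namespace is a placeholder; createOrReplace is superseded by server-side apply in newer client versions):

Pod pod = someThirdPartyCodeThatCreatesAPod();
client.resource(pod).inNamespace("default").createOrReplace();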

Adapting the client

The client supports pluggable adapters. An example is the OpenShift adapter, which allows adapting an existing KubernetesClient instance to an OpenShiftClient one.

For example:

KubernetesClient client = new KubernetesClientBuilder().build();

OpenShiftClient oClient = client.adapt(OpenShiftClient.class);

The client also supports the isAdaptable() method, which checks whether the adaptation is possible and returns true if it is.

KubernetesClient client = new KubernetesClientBuilder().build();
if (client.isAdaptable(OpenShiftClient.class)) {
    OpenShiftClient oClient = client.adapt(OpenShiftClient.class);
} else {
    throw new Exception("Adapting to OpenShiftClient not support. Check if adapter is present, and that env provides /oapi root path.");
}

Adapting and close

Note that when using adapt() both the adaptee and the target will share the same resources (underlying http client, thread pools etc). This means that close() is not required to be used on every single instance created via adapt. Calling close() on any of the adapt() managed instances or the original instance, will properly clean up all the resources and thus none of the instances will be usable any longer.
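
Since KubernetesClient is Closeable, try-with-resources is a convenient way to make sure the shared resources are released exactly once:

try (KubernetesClient client = new KubernetesClientBuilder().build()) {
    OpenShiftClient oClient = client.adapt(OpenShiftClient.class);
    // use either client here; the shared resources are
    // released when the block exits
}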

Mocking Kubernetes

Along with the client this project also provides a kubernetes mock server that you can use for testing purposes. The mock server is based on https://github.com/square/okhttp/tree/master/mockwebserver but is empowered by the DSL and features provided by https://github.com/fabric8io/mockwebserver.

The Mock Web Server has two modes of operation:

  • Expectations mode
  • CRUD mode

Expectations mode

This is the typical mode, where you first define the expected HTTP requests and the responses that should be returned for each of them. More details on usage can be found at: https://github.com/fabric8io/mockwebserver

This mode has been extensively used for testing the client itself. Make sure you check kubernetes-test.

To add a Kubernetes server to your test:

@Rule
public KubernetesServer server = new KubernetesServer();
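
A minimal sketch of an expectations-mode test, assuming the @Rule setup above and a static import of assertEquals (the path and pod name are illustrative):

@Test
public void testListPods() {
  // Expect exactly one GET on this path and return a canned PodList
  server.expect().withPath("/api/v1/namespaces/test/pods")
      .andReturn(200, new PodListBuilder()
          .addNewItem().withNewMetadata().withName("pod1").endMetadata().endItem()
          .build())
      .once();

  KubernetesClient client = server.getClient();
  PodList podList = client.pods().inNamespace("test").list();
  assertEquals(1, podList.getItems().size());
}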

CRUD mode

Defining every single request and response can become tiresome. Given that in most cases the mock web server is used to perform simple CRUD-based operations, a CRUD mode has been added. When using CRUD mode, the mock web server will store, read, update and delete Kubernetes resources using an in-memory map, behaving like a real API server.

To add a Kubernetes server in CRUD mode to your test:

@Rule
public KubernetesServer server = new KubernetesServer(true, true);

Then you can use the server like:

@Test
public void testInCrudMode() {
    KubernetesClient client = server.getClient();
    final CountDownLatch deleteLatch = new CountDownLatch(1);
    final CountDownLatch closeLatch = new CountDownLatch(1);

    //CREATE
    client.pods().inNamespace("ns1").create(new PodBuilder().withNewMetadata().withName("pod1").endMetadata().build());

    //READ
    PodList podList = client.pods().inNamespace("ns1").list();
    assertNotNull(podList);
    assertEquals(1, podList.getItems().size());

    //WATCH
    Watch watch = client.pods().inNamespace("ns1").withName("pod1").watch(new Watcher<Pod>() {
        @Override
        public void eventReceived(Action action, Pod resource) {
            switch (action) {
                case DELETED:
                    deleteLatch.countDown();
                    break;
                default:
                    throw new AssertionFailedError(action.toString().concat(" isn't recognised."));
            }
        }

        @Override
        public void onClose(KubernetesClientException cause) {
            closeLatch.countDown();
        }
    });

    //DELETE
    client.pods().inNamespace("ns1").withName("pod1").delete();

    //READ AGAIN
    podList = client.pods().inNamespace("ns1").list();
    assertNotNull(podList);
    assertEquals(0, podList.getItems().size());

    assertTrue(deleteLatch.await(1, TimeUnit.MINUTES));
    watch.close();
    assertTrue(closeLatch.await(1, TimeUnit.MINUTES));
}

JUnit5 support through extension

You can use the KubernetesClient mocking mechanism with JUnit5. Since it doesn't support @Rule and @ClassRule, there is a dedicated annotation, @EnableKubernetesMockClient. If you would like to create an instance of the mocked KubernetesClient for each test (the JUnit4 @Rule equivalent), declare an instance field of KubernetesClient as shown below.

@EnableKubernetesMockClient
class ExampleTest {

    KubernetesClient client;

    @Test
    public void testInStandardMode() {
            // ...
    }
}

If you would like to define a static instance of the mocked server shared across all tests (the JUnit4 @ClassRule equivalent), declare a static field of KubernetesClient as shown below. You can also enable CRUD mode by using the crud annotation field.

@EnableKubernetesMockClient(crud = true)
class ExampleTest {

    static KubernetesClient client;

    @Test
    public void testInCrudMode() {
            // ...
    }
}

Testing Against real Kubernetes API Server with Kube API Test

For testing against a real Kubernetes API, the project provides a lightweight approach that starts up the Kubernetes API Server and etcd binaries.

@EnableKubeAPIServer
class KubeAPITestSample {

  static KubernetesClient client;
  
  @Test
  void testWithClient() {
    // test using the client against real K8S API Server   
  }
}

For details see docs for Kube API Test.

Compatibility

Kubernetes

Starting from v5.5, the Kubernetes Client should be compatible with any supported Kubernetes cluster version. We provide DSL methods (for example client.pods(), client.namespaces(), and so on) for the most commonly used Kubernetes resources. If the resource you're looking for is not available through the DSL, you can always use the generic client.resource() method to interact with it. You can also open a new issue to request the addition of a new resource to the DSL.
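
For example, a sketch of reading an arbitrary custom resource through the generic API (the apiVersion, kind, namespace and name are placeholders; the genericKubernetesResources entry point is available in recent client versions):

GenericKubernetesResource cr = client
    .genericKubernetesResources("stable.example.com/v1", "CronTab")
    .inNamespace("default")
    .withName("my-crontab")
    .get();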

We provide Kubernetes Java model types (for example Pod) and their corresponding builders (for example PodBuilder) for every vanilla Kubernetes resource (and some extensions). If you don't find a specific resource, and you think that it should be part of the Kubernetes Client, please open a new issue.

OpenShift

Starting from v5.5, the OpenShift Client should be compatible with any OpenShift cluster version currently supported by Red Hat. The Fabric8 Kubernetes Client is one of the few Kubernetes Java clients that provides full support for any supported OpenShift cluster version. If you find any incompatibility or something missing, please open a new issue.

Major Changes in Kubernetes Client 4.0.0

All resource objects used here follow OpenShift 3.9.0 and Kubernetes 1.9.0, and expose all the fields defined by those versions.

  • SecurityContextConstraints has been moved from the Kubernetes client to the OpenShift client
  • Job DSL is in both batch and extensions (extensions is deprecated)
  • DaemonSet DSL is in both apps and extensions (extensions is deprecated)
  • Deployment DSL is in both apps and extensions (extensions is deprecated)
  • ReplicaSet DSL is in both apps and extensions (extensions is deprecated)
  • NetworkPolicy DSL is in both network and extensions (extensions is deprecated)
  • StorageClass moved from the client base DSL to the storage DSL
  • PodSecurityPolicies moved from the client base DSL and extensions to extensions only
  • ThirdPartyResource has been removed.

Who uses Kubernetes & OpenShift Java client?

Extensions:

Frameworks/Libraries/Tools:

CI Plugins:

Build Tools:

Platforms:

Proprietary Platforms:

As our community grows, we would like to keep track of our users. Please send a PR with your organization/community name.

Tests we run for every new Pull Request

GitHub Actions and Jenkins run these tests for every new Pull Request; you can also view all the recent builds there.

To get the updates about the releases, you can join https://groups.google.com/forum/embed/?place=forum/fabric8-devclients

Kubectl Java Equivalents

This table provides kubectl to Fabric8 Kubernetes Client mappings. Most of the mappings are straightforward one-liner operations, but some require slightly more code to achieve the same result:

| kubectl | Fabric8 Kubernetes Client |
| --- | --- |
| kubectl config view | ConfigViewEquivalent.java |
| kubectl config get-contexts | ConfigGetContextsEquivalent.java |
| kubectl config current-context | ConfigGetCurrentContextEquivalent.java |
| kubectl config use-context minikube | ConfigUseContext.java |
| kubectl config view -o jsonpath='{.users[*].name}' | ConfigGetCurrentContextEquivalent.java |
| kubectl get pods --all-namespaces | PodListGlobalEquivalent.java |
| kubectl get pods | PodListEquivalent.java |
| kubectl get pods -w | PodWatchEquivalent.java |
| kubectl get pods --sort-by='.metadata.creationTimestamp' | PodListGlobalEquivalent.java |
| kubectl run | PodRunEquivalent.java |
| kubectl create -f test-pod.yaml | PodCreateYamlEquivalent.java |
| kubectl exec my-pod -- ls / | PodExecEquivalent.java |
| kubectl attach my-pod | PodAttachEquivalent.java |
| kubectl delete pod my-pod | PodDelete.java |
| kubectl delete -f test-pod.yaml | PodDeleteViaYaml.java |
| kubectl cp /foo_dir my-pod:/bar_dir | UploadDirectoryToPod.java |
| kubectl cp my-pod:/tmp/foo /tmp/bar | DownloadFileFromPod.java |
| kubectl cp my-pod:/tmp/foo -c c1 /tmp/bar | DownloadFileFromMultiContainerPod.java |
| kubectl cp /foo_dir my-pod:/tmp/bar_dir | UploadFileToPod.java |
| kubectl logs pod/my-pod | PodLogsEquivalent.java |
| kubectl logs pod/my-pod -f | PodLogsFollowEquivalent.java |
| kubectl logs pod/my-pod -c c1 | PodLogsMultiContainerEquivalent.java |
| kubectl port-forward my-pod 8080:80 | PortForwardEquivalent.java |
| kubectl get pods --selector=version=v1 -o jsonpath='{.items[*].metadata.name}' | PodListFilterByLabel.java |
| kubectl get pods --field-selector=status.phase=Running | PodListFilterFieldSelector.java |
| kubectl get pods --show-labels | PodShowLabels.java |
| kubectl label pods my-pod new-label=awesome | PodAddLabel.java |
| kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq | PodAddAnnotation.java |
| kubectl get configmap cm1 -o jsonpath='{.data.database}' | ConfigMapJsonPathEquivalent.java |
| kubectl create -f test-svc.yaml | LoadAndCreateService.java |
| kubectl create -f test-deploy.yaml | LoadAndCreateDeployment.java |
| kubectl set image deploy/d1 nginx=nginx:v2 | RolloutSetImageEquivalent.java |
| kubectl scale --replicas=4 deploy/nginx-deployment | ScaleEquivalent.java |
| kubectl scale statefulset --selector=app=my-database --replicas=4 | ScaleWithLabelsEquivalent.java |
| kubectl rollout restart deploy/d1 | RolloutRestartEquivalent.java |
| kubectl rollout pause deploy/d1 | RolloutPauseEquivalent.java |
| kubectl rollout resume deploy/d1 | RolloutResumeEquivalent.java |
| kubectl rollout undo deploy/d1 | RolloutUndoEquivalent.java |
| kubectl create -f test-crd.yaml | LoadAndCreateCustomResourceDefinition.java |
| kubectl create -f customresource.yaml | CustomResourceCreateDemo.java |
| kubectl create -f customresource.yaml | CustomResourceCreateDemoTypeless.java |
| kubectl get ns | NamespaceListEquivalent.java |
| kubectl create namespace test | NamespaceCreateEquivalent.java |
| kubectl apply -f test-resource-list.yml | CreateOrReplaceResourceList.java |
| kubectl get events | EventsGetEquivalent.java |
| kubectl top nodes | TopEquivalent.java |
| kubectl auth can-i create deployment.apps | CanIEquivalent.java |
| kubectl create -f test-csr-v1.yml | CertificateSigningRequestCreateYamlEquivalent.java |
| kubectl certificate approve my-cert | CertificateSigningRequestApproveYamlEquivalent.java |
| kubectl certificate deny my-cert | CertificateSigningRequestDenyYamlEquivalent.java |
| kubectl create -f quota.yaml --namespace=default | CreateResourceQuotaInNamespaceYamlEquivalent.java |
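
As a flavor of what these equivalents look like, a minimal sketch of the kubectl get pods row (the namespace is a placeholder):

try (KubernetesClient client = new KubernetesClientBuilder().build()) {
    // list pods in a namespace and print their names
    client.pods().inNamespace("default").list().getItems()
        .forEach(pod -> System.out.println(pod.getMetadata().getName()));
}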

kubernetes-client's People

Contributors

andreatp, bbeaudreault, carlossg, dependabot-preview[bot], dependabot-support, dependabot[bot], fabian-k, fusesource-ci, gastaldi, geoand, honnix, iocanel, jglick, jimmidyson, jorsol, jstrachan, kunal-kushwaha, laxmikantbpandhare, lburgazzoli, manusa, metacosm, muzairs15, nicolaferraro, oscerd, piyush-garg, rawlingsj, rohankanojia, sgitario, shawkins, vlatombe


kubernetes-client's Issues

Can't close a watch

I am continuing to work on the problem described in #205. It's not possible to close a watch because it discards the client's internal ThreadPoolExecutor; see #205 for all the details.

The original fix is still not working: OkHttpClient's clone method does a shallow copy, so the cloned instance shares the same ThreadPoolExecutor. Thus, closing one of these two client instances causes the second one to be discarded as well.

However, this needs a conceptual solution. My code is the following:

client.endpoints().withLabels(labels).watch(watcher)

In fact, the watch method now clones the client. This is tricky.

My current workaround is to always create a new client when I need to watch something. Actually, IMHO this is probably the best solution. But how can a user find out that this is the correct way to do watching?

Watch method: invalid status code 200

I'm currently working on a Camel-Kubernetes component.
What I'm working on now is the consumer part of my component, and I want to use the Kubernetes client watch feature.
My example code is related to pod creation:

getEndpoint().getKubernetesClient().namespaces().withName("default").watch(new Watcher<Namespace>() {

            @Override
            public void eventReceived(
                    io.fabric8.kubernetes.client.Watcher.Action action,
                    Namespace resource) {
                System.out.println("action " +  action + " resource "  + resource);

            }

            @Override
            public void onClose(KubernetesClientException cause) {
                cause.printStackTrace();

            }});
.
.
.
.
.

EditablePod podCreating = new PodBuilder().withNewMetadata()
                .withName(podName).withLabels(labels).endMetadata()
                .withSpec(podSpec).build();
pod = getEndpoint().getKubernetesClient().pods()
                .inNamespace(namespaceName).create(podCreating);

When I run a test, I'm getting the following stacktrace:

io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
    at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:53)
    at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.watch(BaseOperation.java:414)
    at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.watch(BaseOperation.java:407)
    at org.apache.camel.component.kubernetes.producer.KubernetesPodsProducer.doCreatePod(KubernetesPodsProducer.java:144)
    at org.apache.camel.component.kubernetes.producer.KubernetesPodsProducer.process(KubernetesPodsProducer.java:85)
    at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)
    at org.apache.camel.processor.SendProcessor$2.doInAsyncProducer(SendProcessor.java:169)
    at org.apache.camel.impl.ProducerCache.doInAsyncProducer(ProducerCache.java:341)
    at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:164)
    at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:460)
    at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190)
    at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190)
    at org.apache.camel.component.direct.DirectProducer.process(DirectProducer.java:62)
    at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190)
    at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:109)
    at org.apache.camel.processor.UnitOfWorkProducer.process(UnitOfWorkProducer.java:68)
    at org.apache.camel.impl.ProducerCache$2.doInProducer(ProducerCache.java:412)
    at org.apache.camel.impl.ProducerCache$2.doInProducer(ProducerCache.java:380)
    at org.apache.camel.impl.ProducerCache.doInProducer(ProducerCache.java:270)
    at org.apache.camel.impl.ProducerCache.sendExchange(ProducerCache.java:380)
    at org.apache.camel.impl.ProducerCache.send(ProducerCache.java:238)
    at org.apache.camel.impl.DefaultProducerTemplate.send(DefaultProducerTemplate.java:128)
    at org.apache.camel.impl.DefaultProducerTemplate.send(DefaultProducerTemplate.java:115)
    at org.apache.camel.impl.DefaultProducerTemplate.request(DefaultProducerTemplate.java:297)
    at org.apache.camel.component.kubernetes.KubernetesPodsProducerTest.createAndDeletePod(KubernetesPodsProducerTest.java:137)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
    at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
    at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Invalid Status Code 200
    at com.ning.http.client.providers.netty.future.NettyResponseFuture.done(NettyResponseFuture.java:220)
    at com.ning.http.client.providers.netty.handler.WebSocketProtocol.handle(WebSocketProtocol.java:102)
    at com.ning.http.client.providers.netty.handler.Processor.messageReceived(Processor.java:88)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.handler.codec.http.HttpClientCodec.handleUpstream(HttpClientCodec.java:92)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Invalid Status Code 200
    at com.ning.http.client.ws.WebSocketUpgradeHandler.onCompleted(WebSocketUpgradeHandler.java:76)
    at com.ning.http.client.ws.WebSocketUpgradeHandler.onCompleted(WebSocketUpgradeHandler.java:29)
    at com.ning.http.client.providers.netty.future.NettyResponseFuture.getContent(NettyResponseFuture.java:177)
    at com.ning.http.client.providers.netty.future.NettyResponseFuture.done(NettyResponseFuture.java:214)
    ... 32 more
2015-09-20 11:09:11,742 [ I/O worker #65] WARN  WebSocketProtocol              - onError {}
java.lang.IllegalStateException: Invalid Status Code 200
    at com.ning.http.client.ws.WebSocketUpgradeHandler.onCompleted(WebSocketUpgradeHandler.java:76)
    at com.ning.http.client.providers.netty.handler.WebSocketProtocol.handle(WebSocketProtocol.java:100)
    at com.ning.http.client.providers.netty.handler.Processor.messageReceived(Processor.java:88)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.handler.codec.http.HttpClientCodec.handleUpstream(HttpClientCodec.java:92)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
2015-09-20 11:09:11,748 [ I/O worker #65] ERROR WebSocketProtocol              - onError
java.lang.IllegalStateException: Invalid Status Code 200
    at com.ning.http.client.ws.WebSocketUpgradeHandler.onCompleted(WebSocketUpgradeHandler.java:76)
    at com.ning.http.client.providers.netty.handler.WebSocketProtocol.onError(WebSocketProtocol.java:186)
    at com.ning.http.client.providers.netty.handler.Processor.exceptionCaught(Processor.java:179)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:112)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.exceptionCaught(FrameDecoder.java:377)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:112)
    at org.jboss.netty.handler.codec.http.HttpClientCodec.handleUpstream(HttpClientCodec.java:92)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.handler.ssl.SslHandler.exceptionCaught(SslHandler.java:626)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:112)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireExceptionCaught(Channels.java:525)
    at org.jboss.netty.channel.AbstractChannelSink.exceptionCaught(AbstractChannelSink.java:48)
    at org.jboss.netty.channel.DefaultChannelPipeline.notifyHandlerException(DefaultChannelPipeline.java:658)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:566)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.handler.codec.http.HttpClientCodec.handleUpstream(HttpClientCodec.java:92)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

All the code is in a Camel component, so part of the stack trace can be ignored. I'm running all my tests using the OpenShift Vagrant image https://github.com/fabric8io/fabric8-installer/tree/master/vagrant/openshift

The Kubernetes client (getKubernetesClient()) works fine without a Watch instantiated: I'm able to create pods, services, replication controllers, service accounts, secrets and so on. When I try to use a Watch object, this problem arises.

I'm using the kubernetes-client version 1.3.35. If I go back to version 1.3.2 the same code works fine and I get:

action MODIFIED resource Namespace(apiVersion=v1, kind=Namespace, metadata=ObjectMeta(annotations={openshift.io/sa.initialized-roles=true, openshift.io/sa.scc.mcs=s0:c3,c2, openshift.io/sa.scc.uid-range=1000010000/10000}, creationTimestamp=2015-09-19T09:18:41Z, deletionTimestamp=null, generateName=null, generation=null, labels=null, name=default, namespace=null, resourceVersion=3280, selfLink=/api/v1/namespaces/default, uid=6b296287-5eaf-11e5-b2f4-080027a295a0, additionalProperties={}), spec=NamespaceSpec(finalizers=[kubernetes, openshift.io/origin], additionalProperties={}), status=NamespaceStatus(phase=Active, additionalProperties={}), additionalProperties={})
action MODIFIED resource Namespace(apiVersion=v1, kind=Namespace, metadata=ObjectMeta(annotations={openshift.io/sa.scc.mcs=s0:c5,c0, openshift.io/sa.scc.uid-range=1000020000/10000}, creationTimestamp=2015-09-19T09:18:43Z, deletionTimestamp=null, generateName=null, generation=null, labels=null, name=openshift, namespace=null, resourceVersion=105, selfLink=/api/v1/namespaces/openshift, uid=6c80d00c-5eaf-11e5-b2f4-080027a295a0, additionalProperties={}), spec=NamespaceSpec(finalizers=[kubernetes, openshift.io/origin], additionalProperties={}), status=NamespaceStatus(phase=Active, additionalProperties={}), additionalProperties={})
action MODIFIED resource Namespace(apiVersion=v1, kind=Namespace, metadata=ObjectMeta(annotations={openshift.io/sa.scc.mcs=s0:c1,c0, openshift.io/sa.scc.uid-range=1000000000/10000}, creationTimestamp=2015-09-19T09:18:43Z, deletionTimestamp=null, generateName=null, generation=null, labels=null, name=openshift-infra, namespace=null, resourceVersion=95, selfLink=/api/v1/namespaces/openshift-infra, uid=6c617e40-5eaf-11e5-b2f4-080027a295a0, additionalProperties={}), spec=NamespaceSpec(finalizers=[kubernetes, openshift.io/origin], additionalProperties={}), status=NamespaceStatus(phase=Active, additionalProperties={}), additionalProperties={})

Thank you.

Andrea

Reuse http client when adapting client

Currently, when we adapt a client to e.g. an OpenShiftClient, we create a new instance of OkHttpClient. This creates a separate ThreadPoolExecutor, which may be wasteful considering the client is using the same config to create the OkHttpClient instance.

One intricacy to deal with if we do this is making sure we don't close the underlying ThreadPoolExecutor until every client using it is closed. Could get tricky.

Thoughts @iocanel?

show a nicer error message if the client is not authenticated

the following is a little cryptic:

Caused by: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'Unauthorized': was expecting ('true', 'false' or 'null')
 at [Source: io.fabric8.kubernetes.client.internal.org.jboss.netty.buffer.ChannelBufferInputStream@73a19967; line: 1, column: 14]
    at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1581)
    at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:533)
    at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._reportInvalidToken(UTF8StreamJsonParser.java:3448)
    at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._handleUnexpectedValue(UTF8StreamJsonParser.java:2607)
    at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._nextTokenNotInObject(UTF8StreamJsonParser.java:841)
    at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:737)
    at com.fasterxml.jackson.databind.ObjectReader._initForReading(ObjectReader.java:378)
    at com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1494)
    at com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1102)
    at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.handleResponse(BaseOperation.java:433)
    at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.handleGet(BaseOperation.java:463)
    at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.get(BaseOperation.java:100)

wonder if we can give the user a hint that they need to log in via oc login or kubectl login?

if using kubernetes (and not openshift) then we need to use a different URL to access templates / buildconfigs

so kubernetes does not have templates or buildconfigs. So right now we have a shim service to implement those entities in a service called 'templates'
https://github.com/fabric8io/quickstarts/blob/master/apps/templates/src/main/java/io/fabric8/templates/TemplatesService.java#L60

so if not on openshift, we need to not use this:

        config.setOpenShiftUrl(config.getMasterUrl() + "oapi/" + config.getOapiVersion() + "/");

but something like

        config.setOpenShiftUrl(config.getMasterUrl() + "api/" + config.getApiVersion() + "proxy/namespaces/default/services/templates/oapi/" + config.getOapiVersion() + "/");

etc

regression breaking mvn fabric8:apply: io.fabric8.kubernetes.client.KubernetesClientException: the namespace of the provided object does not match the namespace sent on the request

I now get the following in 2.2.35 when trying to do a mvn fabric8:apply

[INFO] --- fabric8-maven-plugin:2.2.35:apply (default-cli) @ jamesthingy ---
[INFO] Using kubernetes at: https://kubernetes.default.svc/ in namespace gogsadmin-jamesthingy-staging
[INFO] Kubernetes JSON: /var/jenkins_home/workspace/gogsadmin-jamesthingy/target/classes/kubernetes.json
[INFO] Is OpenShift: true
[INFO] Creating a namespace gogsadmin-jamesthingy-staging
[INFO] Created namespace: target/fabric8/applyJson/gogsadmin-jamesthingy-staging/namespace-gogsadmin-jamesthingy-staging.json
[INFO] Creating a template from kubernetes.json namespace default name jamesthingy
[ERROR] Failed to template entity from kubernetes.json. io.fabric8.kubernetes.client.KubernetesClientException: the namespace of the provided object does not match the namespace sent on the request. Template(apiVersion=v1, kind=Template, labels={}, metadata=ObjectMeta(annotations={description=Camel route using CDI in a standalone Java Container, fabric8.jamesthingy/summary=Camel route using CDI in a standalone Java Container, fabric8.jamesthingy/iconUrl=https://cdn.rawgit.com/fabric8io/fabric8/master/fabric8-maven-plugin/src/main/resources/icons/camel.svg}, creationTimestamp=null, deletionTimestamp=null, generateName=null, generation=null, labels={}, name=jamesthingy, namespace=gogsadmin-jamesthingy-staging, resourceVersion=null, selfLink=null, uid=null, additionalProperties={}), objects=[Service(apiVersion=v1, kind=Service, metadata=ObjectMeta(annotations={prometheus.io/port=9779, prometheus.io/scrape=true}, creationTimestamp=null, deletionTimestamp=null, generateName=null, generation=null, labels={container=java, component=jamesthingy, provider=fabric8, project=jamesthingy, version=1.0.1, group=quickstarts}, name=qs-java-camel-cdi, namespace=null, resourceVersion=null, selfLink=null, uid=null, additionalProperties={}), spec=ServiceSpec(clusterIP=None, deprecatedPublicIPs=[], portalIP=null, ports=[ServicePort(name=null, nodePort=null, port=1, protocol=null, targetPort=null, additionalProperties={})], selector={container=java, project=jamesthingy, component=jamesthingy, provider=fabric8, group=quickstarts}, sessionAffinity=null, type=null, additionalProperties={}), status=null, additionalProperties={}), ReplicationController(apiVersion=v1, kind=ReplicationController, metadata=ObjectMeta(annotations={fabric8.io/build-id=1}, creationTimestamp=null, deletionTimestamp=null, generateName=null, generation=null, labels={container=java, component=jamesthingy, provider=fabric8, project=jamesthingy, version=1.0.1, group=quickstarts}, name=jamesthingy, namespace=null, resourceVersion=null, selfLink=null, uid=null, additionalProperties={}), spec=ReplicationControllerSpec(replicas=1, selector={container=java, component=jamesthingy, provider=fabric8, project=jamesthingy, version=1.0.1, group=quickstarts}, template=PodTemplateSpec(metadata=ObjectMeta(annotations={}, creationTimestamp=null, deletionTimestamp=null, generateName=null, generation=null, labels={container=java, component=jamesthingy, provider=fabric8, project=jamesthingy, version=1.0.1, group=quickstarts}, name=null, namespace=null, resourceVersion=null, selfLink=null, uid=null, additionalProperties={}), spec=PodSpec(activeDeadlineSeconds=null, containers=[Container(args=[], command=[], env=[EnvVar(name=KUBERNETES_NAMESPACE, value=null, valueFrom=EnvVarSource(fieldRef=ObjectFieldSelector(apiVersion=null, fieldPath=metadata.namespace, additionalProperties={}), additionalProperties={}), additionalProperties={})], image=docker.io/fabric8/jamesthingy:1.0.1, imagePullPolicy=null, lifecycle=null, livenessProbe=null, name=jamesthingy, ports=[ContainerPort(containerPort=8778, hostIP=null, hostPort=null, name=jolokia, protocol=null, additionalProperties={})], readinessProbe=null, resources=null, securityContext=SecurityContext(capabilities=null, privileged=null, runAsNonRoot=null, runAsUser=null, seLinuxOptions=null, additionalProperties={}), stdin=null, terminationMessagePath=null, tty=null, volumeMounts=[], workingDir=null, additionalProperties={})], dnsPolicy=null, host=null, hostNetwork=null, imagePullSecrets=[], nodeName=null, nodeSelector={}, 
restartPolicy=null, serviceAccount=null, serviceAccountName=null, terminationGracePeriodSeconds=null, volumes=[], additionalProperties={}), additionalProperties={}), additionalProperties={}), status=null, additionalProperties={})], parameters=[], additionalProperties={})
io.fabric8.kubernetes.client.KubernetesClientException: the namespace of the provided object does not match the namespace sent on the request
    at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.assertResponseCode(BaseOperation.java:493)
    at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.handleResponse(BaseOperation.java:506)
    at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.handleCreate(BaseOperation.java:520)
    at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.create(BaseOperation.java:208)
    at io.fabric8.kubernetes.api.Controller.doCreateTemplate(Controller.java:355)
    at io.fabric8.kubernetes.api.Controller.installTemplate(Controller.java:346)
    at io.fabric8.kubernetes.api.Controller.applyTemplate(Controller.java:308)
    at io.fabric8.maven.ApplyMojo.applyTemplates(ApplyMojo.java:256)
    at io.fabric8.maven.ApplyMojo.execute(ApplyMojo.java:197)

Question: Does JSON to DSL or rawRequest API make sense?

In many cases a client application already has a data model that can be serialized into a payload that can be sent to the Kubernetes API. Would it be useful to allow making raw requests using this client library? If not, can that JSON be fed into this client code for mapping it into a callable DSL API (something like a DSL builder from JSON)? The latter may not make too much sense, but if you think of the reusable code this client library has, it does make sense to me.

provide an equivalent to .get() which fails if the resource is missing with a meaningful error message describing the namespace/name/labels being filtered

I like the default of say

kubernetes.replicationControllers().withName("cheese").get()

returning null if not found.

It would be nice to add a version of this which returns a non-null value or throws a meaningful exception.

e.g. if you do something like...

ReplicationController rc = kubernetes.replicationControllers().withName("cheese").require()

if cheese doesn't exist it would be nice to get some exception like new ResourceNotFoundException("ReplicationController cheese is not found in default namespace").

It then avoids doing these kinds of assertions at a higher level. These are kinda handy when writing test cases etc

kubernetes client with watcher doesn't seem to work on vanilla kubernetes

I get this in the hubot-notifier when I remove the use of the custom $KUBERNETES_SERVICE_HOST env var pointing at DNS.

Exception in thread "main" org.jboss.weld.exceptions.DeploymentException: Exception List with 1 exceptions:
Exception 0 :
io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
    at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:53)
    at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.get(BaseOperation.java:110)
    at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.getIfExists(BaseOperation.java:117)
    at io.fabric8.kubernetes.api.KubernetesHelper.getServiceURL(KubernetesHelper.java:1261)
    at io.fabric8.cdi.Services.toServiceUrl(Services.java:38)
    at io.fabric8.cdi.producers.ServiceUrlProducer.produce(ServiceUrlProducer.java:47)
    at io.fabric8.cdi.producers.ServiceUrlProducer.produce(ServiceUrlProducer.java:26)
    at io.fabric8.cdi.bean.ProducerBean.create(ProducerBean.java:42)
    at org.jboss.weld.context.AbstractContext.get(AbstractContext.java:96)
    at org.jboss.weld.bean.ContextualInstanceStrategy$DefaultContextualInstanceStrategy.get(ContextualInstanceStrategy.java:101)
    at org.jboss.weld.bean.ContextualInstance.get(ContextualInstance.java:50)
    at org.jboss.weld.manager.BeanManagerImpl.getReference(BeanManagerImpl.java:761)
    at org.jboss.weld.manager.BeanManagerImpl.getInjectableReference(BeanManagerImpl.java:861)
    at org.jboss.weld.injection.ParameterInjectionPointImpl.getValueToInject(ParameterInjectionPointImpl.java:76)
    at org.jboss.weld.injection.ConstructorInjectionPoint.getParameterValues(ConstructorInjectionPoint.java:150)
    at org.jboss.weld.injection.ConstructorInjectionPoint.newInstance(ConstructorInjectionPoint.java:75)
    at org.jboss.weld.injection.producer.AbstractInstantiator.newInstance(AbstractInstantiator.java:28)
    at org.jboss.weld.injection.producer.BasicInjectionTarget.produce(BasicInjectionTarget.java:116)
    at org.jboss.weld.injection.producer.BeanInjectionTarget.produce(BeanInjectionTarget.java:179)
    at org.jboss.weld.bean.ManagedBean.create(ManagedBean.java:158)
    at org.jboss.weld.context.AbstractContext.get(AbstractContext.java:96)
    at org.jboss.weld.bean.ContextualInstanceStrategy$DefaultContextualInstanceStrategy.get(ContextualInstanceStrategy.java:101)
    at org.jboss.weld.bean.ContextualInstanceStrategy$ApplicationScopedContextualInstanceStrategy.get(ContextualInstanceStrategy.java:141)
    at org.jboss.weld.bean.ContextualInstance.get(ContextualInstance.java:50)
    at org.jboss.weld.bean.proxy.ContextBeanInstance.getInstance(ContextBeanInstance.java:99)
    at org.jboss.weld.bean.proxy.ProxyMethodHandler.getInstance(ProxyMethodHandler.java:125)
    at io.fabric8.hubot.notifier.KubernetesHubotNotifier$Proxy$_$$_WeldClientProxy.toString(Unknown Source)
    at io.fabric8.cdi.eager.EagerCDIExtension.afterDeploymentValidation(EagerCDIExtension.java:42)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.jboss.weld.injection.StaticMethodInjectionPoint.invoke(StaticMethodInjectionPoint.java:88)
    at org.jboss.weld.injection.MethodInvocationStrategy$SpecialParamPlusBeanManagerStrategy.invoke(MethodInvocationStrategy.java:144)
    at org.jboss.weld.event.ObserverMethodImpl.sendEvent(ObserverMethodImpl.java:306)
    at org.jboss.weld.event.ExtensionObserverMethodImpl.sendEvent(ExtensionObserverMethodImpl.java:121)
    at org.jboss.weld.event.ObserverMethodImpl.sendEvent(ObserverMethodImpl.java:284)
    at org.jboss.weld.event.ObserverMethodImpl.notify(ObserverMethodImpl.java:262)
    at org.jboss.weld.event.ObserverNotifier.notifySyncObservers(ObserverNotifier.java:271)
    at org.jboss.weld.event.ObserverNotifier.notify(ObserverNotifier.java:260)
    at org.jboss.weld.event.ObserverNotifier.fireEvent(ObserverNotifier.java:154)
    at org.jboss.weld.event.ObserverNotifier.fireEvent(ObserverNotifier.java:148)
    at org.jboss.weld.bootstrap.events.AbstractContainerEvent.fire(AbstractContainerEvent.java:54)
    at org.jboss.weld.bootstrap.events.AbstractDeploymentContainerEvent.fire(AbstractDeploymentContainerEvent.java:35)
    at org.jboss.weld.bootstrap.events.AfterDeploymentValidationImpl.fire(AfterDeploymentValidationImpl.java:28)
    at org.jboss.weld.bootstrap.WeldStartup.validateBeans(WeldStartup.java:447)
    at org.jboss.weld.bootstrap.WeldBootstrap.validateBeans(WeldBootstrap.java:90)
    at org.jboss.weld.environment.se.Weld.initialize(Weld.java:143)
    at org.jboss.weld.environment.se.StartMain.go(StartMain.java:48)
    at org.jboss.weld.environment.se.StartMain.main(StartMain.java:58)
Caused by: java.util.concurrent.ExecutionException: java.net.UnknownHostException: kubernetes.default.svc.cluster.local: Name or service not known
    at com.ning.http.client.providers.netty.future.NettyResponseFuture.abort(NettyResponseFuture.java:231)
    at com.ning.http.client.providers.netty.request.NettyRequestSender.abort(NettyRequestSender.java:420)
    at com.ning.http.client.providers.netty.request.NettyRequestSender.sendRequestWithNewChannel(NettyRequestSender.java:288)
    at com.ning.http.client.providers.netty.request.NettyRequestSender.sendRequestWithCertainForceConnect(NettyRequestSender.java:140)
    at com.ning.http.client.providers.netty.request.NettyRequestSender.sendRequest(NettyRequestSender.java:115)
    at com.ning.http.client.providers.netty.NettyAsyncHttpProvider.execute(NettyAsyncHttpProvider.java:87)
    at com.ning.http.client.AsyncHttpClient.executeRequest(AsyncHttpClient.java:517)
    at com.ning.http.client.AsyncHttpClient$BoundRequestBuilder.execute(AsyncHttpClient.java:229)
    at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.handleResponse(BaseOperation.java:466)
    at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.handleGet(BaseOperation.java:499)
    at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.get(BaseOperation.java:108)
    ... 48 more
Caused by: java.net.UnknownHostException: kubernetes.default.svc.cluster.local: Name or service not known
    at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:922)
    at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1316)
    at java.net.InetAddress.getAllByName0(InetAddress.java:1269)
    at java.net.InetAddress.getAllByName(InetAddress.java:1185)
    at java.net.InetAddress.getAllByName(InetAddress.java:1119)
    at java.net.InetAddress.getByName(InetAddress.java:1069)
    at com.ning.http.client.NameResolver$JdkNameResolver.resolve(NameResolver.java:28)
    at com.ning.http.client.providers.netty.request.NettyRequestSender.remoteAddress(NettyRequestSender.java:356)
    at com.ning.http.client.providers.netty.request.NettyRequestSender.connect(NettyRequestSender.java:367)
    at com.ning.http.client.providers.netty.request.NettyRequestSender.sendRequestWithNewChannel(NettyRequestSender.java:281)
    ... 56 more

    at org.jboss.weld.bootstrap.events.AbstractDeploymentContainerEvent.fire(AbstractDeploymentContainerEvent.java:37)
    at org.jboss.weld.bootstrap.events.AfterDeploymentValidationImpl.fire(AfterDeploymentValidationImpl.java:28)
    at org.jboss.weld.bootstrap.WeldStartup.validateBeans(WeldStartup.java:447)
    at org.jboss.weld.bootstrap.WeldBootstrap.validateBeans(WeldBootstrap.java:90)
    at org.jboss.weld.environment.se.Weld.initialize(Weld.java:143)
    at org.jboss.weld.environment.se.StartMain.go(StartMain.java:48)
    at org.jboss.weld.environment.se.StartMain.main(StartMain.java:58)

SubjectAccessReviewSupport

The openshift-elasticsearch-plugin needs to use a SubjectAccessReview (SAR) to determine whether a user has cluster-admin rights.
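A rough sketch of what SAR support could look like via the OpenShift client; the subjectAccessReviews() entry point, the builder fields, and the response accessor below are assumptions based on the OpenShift SubjectAccessReview model, and the user/verb/resource values are placeholders:

import io.fabric8.openshift.api.model.SubjectAccessReview;
import io.fabric8.openshift.api.model.SubjectAccessReviewBuilder;
import io.fabric8.openshift.api.model.SubjectAccessReviewResponse;
import io.fabric8.openshift.client.DefaultOpenShiftClient;
import io.fabric8.openshift.client.OpenShiftClient;

public class SarCheck {
    public static void main(String[] args) {
        try (OpenShiftClient client = new DefaultOpenShiftClient()) {
            // Ask whether a given user may perform a cluster-admin-ish action (assumed field names)
            SubjectAccessReview sar = new SubjectAccessReviewBuilder()
                    .withUser("someuser")
                    .withVerb("list")
                    .withResource("nodes")
                    .build();
            SubjectAccessReviewResponse response = client.subjectAccessReviews().create(sar);
            System.out.println("allowed: " + response.getAllowed());
        }
    }
}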

Rolling update image for multi container pod

Is there any way to do a rolling update on the Docker image for a multi-container pod?

I was looking at FullExample.java in the tests where it does a rolling update at https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-examples/src/main/java/io/fabric8/kubernetes/examples/FullExample.java#L146-L153.

However, there is no example showing how to update a multi-container pod. From my understanding, the way to do this would be to:

  • Read the entire pod spec first (store it in an object)
  • Update the spec with the new image (find the container by name)
  • Do a rolling update with the new spec

PodSpec podSpec = client.replicationControllers().inNamespace("thisisatest")
    .withName("nginx-controller").get().getSpec().getTemplate().getSpec();

// Iterate, find the container by name and update its image
List<Container> containers = podSpec.getContainers();
for (Container container : containers) {
    if ("nginx".equals(container.getName())) {
        container.setImage("nginx:1.9"); // new image tag (placeholder)
    }
}

client.replicationControllers().inNamespace("thisisatest").withName("nginx-controller")
    .rolling()
    .edit()
    .editSpec()
    .editTemplate()
    .withNewSpecLike(podSpec)
    .endSpec()
    .endTemplate()
    .endSpec()
    .done();

Is there a better way to do this?
For example,

client.replicationControllers().inNamespace("thisisatest").withName("nginx-controller")
.rolling()
.updateImageForContainerName("nginx", "container-name");

Kubernetes client should use cloned OkHttpClient in watch method

I wanted to watch kubernetes endpoints using the java client:

watch = client.endpoints().withLabels(labels).watch(watcher);
// Once I'm done watching, discard the watch
watch.close();

This code snippet worked fine until it performed a second watch. The following exception was thrown:

java.util.concurrent.RejectedExecutionException: Task com.squareup.okhttp.Call$AsyncCall@35ca42f1 rejected from java.util.concurrent.ThreadPoolExecutor@72f61470[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047) ~[?:1.8.0_45-internal]
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823) ~[?:1.8.0_45-internal]
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369) ~[?:1.8.0_45-internal]
    at com.squareup.okhttp.Dispatcher.enqueue(Dispatcher.java:110) ~[okhttp-2.5.0.jar:?]
    at com.squareup.okhttp.Call.enqueue(Call.java:113) ~[okhttp-2.5.0.jar:?]
    at com.squareup.okhttp.OkHttpClient$1.callEnqueue(OkHttpClient.java:133) ~[okhttp-2.5.0.jar:?]
    at com.squareup.okhttp.ws.WebSocketCall.enqueue(WebSocketCall.java:113) ~[okhttp-ws-2.5.0.jar:?]
    at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager.runWatch(WatchConnectionManager.java:105) ~[kubernetes-client-1.3.57.jar:?]
    at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager.<init>(WatchConnectionManager.java:75) ~[kubernetes-client-1.3.57.jar:?]
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.watch(BaseOperation.java:438) ~[kubernetes-client-1.3.57.jar:?]
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.watch(BaseOperation.java:433) ~[kubernetes-client-1.3.57.jar:?]

The root cause of this exception is the terminated ThreadPoolExecutor of the dispatcher. How could that happen?

watch.close();

The close method just calls ...

clonedClient.getDispatcher().getExecutorService().shutdown();

... which means that the dispatcher is now discarded.

According to the watch method source code:

  public Watch watch(String resourceVersion, final Watcher<T> watcher) throws KubernetesClientException {
    try {
      return new WatchConnectionManager(client, this, resourceVersion, watcher, config.getWatchReconnectInterval(), config.getWatchReconnectLimit());
    } catch (MalformedURLException | InterruptedException | ExecutionException e) {
      throw KubernetesClientException.launderThrowable(e);
    }
  }

and the ctor ...

public WatchConnectionManager(final OkHttpClient clonedClient ...

... I would say that the client should be cloned e.g.:

return new WatchConnectionManager(client.clone() ...

since the clone method is already implemented.

Creation using RC/Service/Namespace templates

I was looking at the creation example where builders are used to create k8s components.
I wanted to know if there's a way to pass JSON/YAML to the constructor for creation.

If I was creating a service whose spec had many ports, it would be painful to define all of these in the builder. The amount of code would be very large.

Instead, a better option would be to pass the JSON, and internally an ObjectMapper could be used to map it onto the k8s model.
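For reference, the client did grow an entry point along these lines; a minimal sketch using the load(...) method that current releases expose (the file name is a placeholder):

import java.io.FileInputStream;
import java.io.FileNotFoundException;

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class CreateFromManifest {
    public static void main(String[] args) throws FileNotFoundException {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // Parses the manifest (JSON or YAML) into model objects and creates them;
            // internally this does the kind of mapping suggested above
            client.load(new FileInputStream("service.yaml")).createOrReplace();
        }
    }
}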

Pod exec("ls","-al") shows ANSI escape codes for color-enabled output

Example output below:

Config:io.fabric8.kubernetes.client.EditableConfig@4c873330
onOpen: Response{protocol=http/1.1, code=101, message=Switching Protocols, url=https://ip:6443/api/v1/namespaces/default/pods/service-demo-erusf/exec?command=ls&command=-al&tty=true&stdin=true&stdout=true&stderr=true}
total 8092
drwxr-xr-x 4 root root 4096 Dec 27 2014 �[1;34m.�[0m
drwxr-xr-x 37 root root 4096 Jan 6 02:13 �[1;34m..�[0m
-rwxr-xr-x 1 root root 8267160 Dec 27 2014 �[1;32mgo-app�[0m
drwxr-xr-x 4 root root 4096 Dec 27 2014 �[1;34mpublic�[0m
drwxr-xr-x 2 root root 4096 Dec 27 2014 �[1;34mtemplates�[0m

Change to immutable DSL steps

Currently, not every step in the DSL returns a new instance, which means that steps aren't reusable. Today you'd have to do:

client.replicationControllers().inNamespace("default").withName("test").get();
client.replicationControllers().inNamespace("default").withName("test").get();

Would be much nicer to be able to do:

NamespaceAwareResourceList defaultRCs = client.replicationControllers().inNamespace("default");
defaultRCs.withName("test").get();
defaultRCs.withName("test").get();

Cannot handle non-base64 encoded cert/key data

io.fabric8.kubernetes.client.internal.CertUtils.getInputStreamFromDataOrFile(String data, String file)
and io.fabric8.kubernetes.client.internal.CertUtils.decodePem(InputStream keyInputStream) are inconsistent:
getInputStreamFromDataOrFile does not require a "-----BEGIN" header, but decodePem does.

Has anyone come across this issue?

Support exec endpoint

Something like:

client.pods().inNamespace(ns).withName(n).exec().command(whatever).inContainer(cntr).run()

Discussed on IRC: @iocanel doesn't like how it reads, which is fair enough, so let's nail it down before implementing.
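For reference, later releases settled on a form roughly like the following (writingOutput/exec are the eventual entry points; pod and container names are placeholders):

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.dsl.ExecWatch;

public class ExecExample {
    public static void main(String[] args) throws InterruptedException {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // Run `ls -al` in a container, streaming its output to this process
            try (ExecWatch watch = client.pods().inNamespace("default")
                    .withName("my-pod")
                    .inContainer("my-container")
                    .writingOutput(System.out)
                    .writingError(System.err)
                    .exec("ls", "-al")) {
                Thread.sleep(5_000); // crude wait for the command to complete
            }
        }
    }
}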

Tests failing on windows

Looks like an issue with java nio file handling on windows.

Illegal char <:> at index 2: /C:/Dev/git/kubernetes-client/kubernetes-client/target/test-classes/test-kubeconfig
Tests run: 6, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 0.046 sec <<< FAILURE! - in io.fabric8.kubernetes.client.ConfigTest
testWithKubeConfigAndSytemPropertiesAndBuilder(io.fabric8.kubernetes.client.ConfigTest)  Time elapsed: 0.037 sec  <<< ERROR!
java.nio.file.InvalidPathException: Illegal char <:> at index 2: /C:/Dev/git/kubernetes-client/kubernetes-client/target/test-classes/test-kubeconfig
        at sun.nio.fs.WindowsPathParser.normalize(WindowsPathParser.java:182)
        at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:153)
        at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:77)
        at sun.nio.fs.WindowsPath.parse(WindowsPath.java:94)
        at sun.nio.fs.WindowsFileSystem.getPath(WindowsFileSystem.java:255)
        at java.nio.file.Paths.get(Paths.java:84)
        at io.fabric8.kubernetes.client.Config.tryKubeConfig(Config.java:236)
        at io.fabric8.kubernetes.client.Config.<init>(Config.java:106)
        at io.fabric8.kubernetes.client.ConfigBuilder.<init>(ConfigBuilder.java:17)
        at io.fabric8.kubernetes.client.ConfigTest.testWithKubeConfigAndSytemPropertiesAndBuilder(ConfigTest.java:174)

The remaining tests (testWithKubeConfigAndSystemProperties, testWithBuilderAndSystemProperties, testWithKubeConfig, testWithBuilder and testWithSystemProperties) fail the same way: each throws the same java.nio.file.InvalidPathException from io.fabric8.kubernetes.client.Config.tryKubeConfig(Config.java:236), with a stack trace identical to the one above.

Rolling updates for RC

This should scale down the existing RC while scaling up the new RC until the old RC has 0 pods and the new RC has the required number of replicas.

This should also support updating just the image rather than the whole RC, as this is a common thing to do (and is supported by kubectl); see the sketch below.
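A sketch of the image-only variant, assuming a rolling() entry point on the RC resource (close to what the client later shipped); the RC name and image tag are placeholders:

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class RollingImageUpdate {
    public static void main(String[] args) {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // Scales the old RC down and the new RC up until only new-image pods remain
            client.replicationControllers()
                  .inNamespace("default")
                  .withName("nginx-controller")
                  .rolling()
                  .updateImage("nginx:1.9");
        }
    }
}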

Can't watch specific resource entity

When I try to set the watch for a specific resource entity, like pod:

    Watch watch = client.pods().inNamespace("default")
            .withName(podName).watch(new Watcher<Pod>() {

                @Override
                public void eventReceived(Action action, Pod pod) {
                    // handle ADDED / MODIFIED / DELETED events
                }

                @Override
                public void onClose(KubernetesClientException cause) {
                    // handle the watch being closed
                }
            });

I see an error in the log:
watch.onClose io.fabric8.kubernetes.client.KubernetesClientException: Connection unexpectedly closed

Most probably this happens because watch can be set as a query parameter only on a resource list, not on a specific resource; to watch a specific resource a different URL has to be used, like:
http://host:port/api/v1/watch/namespaces/default/pods/pod_name

Add additional options for getting pod logs that the Kubernetes REST API already provides

I have been trying to get the logs of a running pod with the fabric8 client.

As I understand it, we can get logs as follows:

String log = kube.pods().inNamespace().withName(<pod_name>).getLog(true);

Through the getLog() method we only get snapshot logs, but the Kubernetes REST API gives us more flexibility:

String url = "https://10.245.1.2/api/v1/namespaces//pods/<pod_name>/log?container=tomcat-server&follow=false&previous=false&timestamps=false";

With these URL parameters we have more flexibility over the output (see the sketch after this list), letting us:

  • Get previous terminated pod logs
  • Stream the logs of pods
  • Display only the most recent #of lines of pods
  • Show all logs from pods written in the certain time period
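
A minimal sketch of how those options could surface in the DSL, assuming the terminated()/tailingLines()/sinceSeconds()/usingTimestamps()/watchLog() methods that later releases expose (the pod name is a placeholder):

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.dsl.LogWatch;

public class PodLogOptions {
    public static void main(String[] args) throws InterruptedException {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // Logs of the previous (terminated) instance of the pod's container
            String previous = client.pods().inNamespace("default").withName("my-pod")
                    .terminated().getLog();
            System.out.println(previous);

            // Only the most recent 100 lines from the last hour, with timestamps
            String recent = client.pods().inNamespace("default").withName("my-pod")
                    .tailingLines(100).sinceSeconds(3600).usingTimestamps().getLog();
            System.out.println(recent);

            // Stream (follow) the logs, like `kubectl logs -f`
            try (LogWatch watch = client.pods().inNamespace("default").withName("my-pod")
                    .watchLog(System.out)) {
                Thread.sleep(10_000);
            }
        }
    }
}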

Integrate the client DSL with a mocking framework

Ideally one should be able to do:

client.services().inNamespace("default").withName("myservice").get().andReturn(myRef).once();
client.services().inNamespace("default").withName("myservice").get().andReturn(myRef).anyTimes();
client.services().inNamespace("default").withName("myservice").get().andThrow(Error);

client.replay();
//do things with the client
client.verify();
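
For reference, the mocking support that eventually landed (the kubernetes-server-mock module) sets expectations on HTTP paths rather than on the DSL itself; a minimal JUnit 4 sketch:

import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.server.mock.KubernetesServer;
import org.junit.Rule;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class MockedClientTest {

    @Rule
    public KubernetesServer server = new KubernetesServer();

    @Test
    public void getsCannedPod() {
        Pod pod = new PodBuilder().withNewMetadata().withName("myservice-pod").endMetadata().build();
        // Serve the canned pod for exactly one GET on this path
        server.expect().get()
                .withPath("/api/v1/namespaces/default/pods/myservice-pod")
                .andReturn(200, pod)
                .once();

        KubernetesClient client = server.getClient();
        assertEquals("myservice-pod",
                client.pods().inNamespace("default").withName("myservice-pod").get()
                        .getMetadata().getName());
    }
}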

Changing the order of kubernetes workflow DSL command results in kubernetes-client validation exception

Two examples that are identical except that in the failing one, withSecret comes after both withEnvVar calls.

working

node('kubernetes'){
    echo 'worked'
    kubernetes.pod('buildpod').withImage('fabric8/builder-openshift-client').withPrivileged(true).withHostPathMount('/var/run/docker.sock','/var/run/docker.sock').withEnvVar('DOCKER_CONFIG','/home/jenkins/.docker/').withSecret('jenkins-docker-cfg','/home/jenkins/.docker').withEnvVar('DOCKER_CONFIG','/home/jenkins/.docker/').inside {
      ...
    }
}

fails

node('kubernetes'){
    echo 'worked'
    kubernetes.pod('buildpod').withImage('fabric8/builder-openshift-client').withPrivileged(true).withHostPathMount('/var/run/docker.sock','/var/run/docker.sock').withEnvVar('DOCKER_CONFIG','/home/jenkins/.docker/').withEnvVar('DOCKER_CONFIG','/home/jenkins/.docker/').withSecret('jenkins-docker-cfg','/home/jenkins/.docker').inside {
      ...
    }
}

io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://kubernetes.default.svc/api/v1/namespaces/default/pods. Message: Pod "buildpod-694cc974-885c-4b62-85e9-fc56b1ee587e" is invalid: [spec.containers[0].env[1].name: invalid value 'jenkins-docker-cfg', Details: must be a C identifier (matching regex [A-Za-z_][A-Za-z0-9_]*): e.g. "my_name" or "MyName", spec.containers[0].env[3].name: invalid value 'jenkins-docker-cfg', Details: must be a C identifier (matching regex [A-Za-z_][A-Za-z0-9_]*): e.g. "my_name" or "MyName"]. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.containers[0].env[1].name, message=invalid value 'jenkins-docker-cfg', Details: must be a C identifier (matching regex [A-Za-z_][A-Za-z0-9_]*): e.g. "my_name" or "MyName", reason=FieldValueInvalid, additionalProperties={}), StatusCause(field=spec.containers[0].env[3].name, message=invalid value 'jenkins-docker-cfg', Details: must be a C identifier (matching regex [A-Za-z_][A-Za-z0-9_]*): e.g. "my_name" or "MyName", reason=FieldValueInvalid, additionalProperties={})], kind=Pod, name=buildpod-694cc974-885c-4b62-85e9-fc56b1ee587e, retryAfterSeconds=null, additionalProperties={}), kind=Status, message=Pod "buildpod-694cc974-885c-4b62-85e9-fc56b1ee587e" is invalid: [spec.containers[0].env[1].name: invalid value 'jenkins-docker-cfg', Details: must be a C identifier (matching regex [A-Za-z_][A-Za-z0-9_]*): e.g. "my_name" or "MyName", spec.containers[0].env[3].name: invalid value 'jenkins-docker-cfg', Details: must be a C identifier (matching regex [A-Za-z_][A-Za-z0-9_]*): e.g. "my_name" or "MyName"], metadata=ListMeta(resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:263)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:234)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:207)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:185)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:475)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:206)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation$1.visit(BaseOperation.java:223)
    at io.fabric8.kubernetes.api.model.DoneablePod.done(DoneablePod.java:20)
    at io.fabric8.kubernetes.workflow.KubernetesFacade.createPod(KubernetesFacade.java:128)
    at io.fabric8.kubernetes.workflow.PodStepExecution.start(PodStepExecution.java:63)
    at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:136)
    at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:112)
    at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:45)
    at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
    at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
    at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:15)
    at io.fabric8.kubernetes.workflow.Kubernetes$Pod.inside(jar:file:/var/jenkins_home/plugins/kubernetes-workflow/WEB-INF/lib/kubernetes-workflow.jar!/io/fabric8/kubernetes/workflow/Kubernetes.groovy:100)
    at io.fabric8.kubernetes.workflow.Kubernetes.node(jar:file:/var/jenkins_home/plugins/kubernetes-workflow/WEB-INF/lib/kubernetes-workflow.jar!/io/fabric8/kubernetes/workflow/Kubernetes.groovy:30)
    at io.fabric8.kubernetes.workflow.Kubernetes$Pod.inside(jar:file:/var/jenkins_home/plugins/kubernetes-workflow/WEB-INF/lib/kubernetes-workflow.jar!/io/fabric8/kubernetes/workflow/Kubernetes.groovy:99)
    at WorkflowScript.run(WorkflowScript:3)
    at Unknown.Unknown(Unknown)
    at ___cps.transform___(Native Method)
    at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:69)
    at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:106)
    at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:79)
    at sun.reflect.GeneratedMethodAccessor139.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
    at com.cloudbees.groovy.cps.impl.ClosureBlock.eval(ClosureBlock.java:40)
    at com.cloudbees.groovy.cps.Next.step(Next.java:58)
    at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:145)
    at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:164)
    at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:274)
    at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$000(CpsThreadGroup.java:74)
    at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:183)
    at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:181)
    at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:47)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
    at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Finished: FAILURE

does update work or is the Controller in kubernetes-api now broken?

We see this when folks run mvn fabric8:apply:

mvn fabric8:apply
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Jenkins 2.3-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- fabric8-maven-plugin:2.2.23:apply (default-cli) @ jenkins ---
[INFO] Using kubernetes at: https://vagrant.f8:8443/api/v1/ in namespace default
[INFO] Kubernetes JSON: /Users/chmoulli/Fuse/Fuse-projects/fabric8/fabric8-devops/jenkins/target/classes/kubernetes.json
[INFO] Is OpenShift: true
[INFO] Creating a namespace default
[INFO] No property defined for template parameter: fabric8.apply.JENKINS_GOGS_USER
[INFO] No property defined for template parameter: fabric8.apply.JENKINS_WORKFLOW_GIT_REPOSITORY
[INFO] No property defined for template parameter: fabric8.apply.DOMAIN
[INFO] No property defined for template parameter: fabric8.apply.GPG_PASSPHRASE
[INFO] No property defined for template parameter: fabric8.apply.JENKINS_GOGS_EMAIL
[INFO] No property defined for template parameter: fabric8.apply.JENKINS_GOGS_PASSWORD
[INFO] No property defined for template parameter: fabric8.apply.SONATYPE_USERNAME
[INFO] No property defined for template parameter: fabric8.apply.DOCKER_REGISTRY_PASSWORD
[INFO] No property defined for template parameter: fabric8.apply.DOCKER_REGISTRY_USERNAME
[INFO] No property defined for template parameter: fabric8.apply.JENKINS_SLAVE_IMAGE
[INFO] No property defined for template parameter: fabric8.apply.SONATYPE_PASSWORD
[INFO] No property defined for template parameter: fabric8.apply.FABRIC8_DEFAULT_ENVIRONMENTS
[INFO] No property defined for template parameter: fabric8.apply.SEED_GIT_URL
[INFO] Creating a template from kubernetes.json namespace default name jenkins
[INFO] Created template: jenkins/target/fabric8/applyJson/default/template-jenkins.json
[INFO] Updating a service account from kubernetes.json
[INFO] Updated service account: jenkins/target/fabric8/applyJson/default/serviceaccount-jenkins.json
[INFO] Updating a service from kubernetes.json
[ERROR] Failed to update controller from kubernetes.json. io.fabric8.kubernetes.client.KubernetesClientException: Cannot update read-only resources. Service(apiVersion=v1, kind=Service, metadata=ObjectMeta(annotations={}, creationTimestamp=null, deletionTimestamp=null, generateName=null, generation=null, labels={provider=fabric8, component=jenkins}, name=jenkins, namespace=null, resourceVersion=null, selfLink=null, uid=null, additionalProperties={}), spec=ServiceSpec(clusterIP=null, deprecatedPublicIPs=[], portalIP=null, ports=[ServicePort(name=null, nodePort=null, port=80, protocol=TCP, targetPort=IntOrString(IntVal=8080, Kind=null, StrVal=null, additionalProperties={}), additionalProperties={})], selector={provider=fabric8, component=jenkins}, sessionAffinity=null, type=LoadBalancer, additionalProperties={}), status=null, additionalProperties={})
io.fabric8.kubernetes.client.KubernetesClientException: Cannot update read-only resources

Either that or maybe the Controller code in kubernetes-api needs some work?

have a client.namespace(ns) method to default the namespace (returning a new client object maybe)

It's kinda boring to have to keep typing .inNamespace(foo) on every single line of code that uses the KubernetesClient when the namespace is usually the exact same value.

With the old fabric8 client you could just set the namespace once (e.g. in fabric8-arquillian) and you were done.

Can we try to come up with a way to default the namespace somehow?

e.g. something like

x = client.namespace("x");
y = client.namespace("y");
ypods = y.pods().get("foo");
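
For reference, the client ended up with something very close to this: NamespacedKubernetesClient.inNamespace(...) returns a client scoped to that namespace. A minimal sketch (namespace and pod names are placeholders):

import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.NamespacedKubernetesClient;

public class DefaultNamespaceExample {
    public static void main(String[] args) {
        try (NamespacedKubernetesClient client = new KubernetesClientBuilder().build()
                .adapt(NamespacedKubernetesClient.class)) {
            // Each call returns a client whose operations default to that namespace
            NamespacedKubernetesClient x = client.inNamespace("x");
            NamespacedKubernetesClient y = client.inNamespace("y");
            Pod foo = y.pods().withName("foo").get();
            System.out.println(foo);
        }
    }
}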
