
kubeapps's Introduction

Kubeapps

Main Pipeline Full Integration Pipeline CodeQL Netlify Status

Overview

Kubeapps is an in-cluster web-based application that enables users with a one-time installation to deploy, manage, and upgrade applications on a Kubernetes cluster.

With Kubeapps you can:

  • Browse and deploy Helm charts from public or private chart repositories
  • Customize deployments through an intuitive user interface
  • Upgrade, manage and delete the applications that are deployed in your Kubernetes cluster

Note: Kubeapps 2.0 and onwards supports Helm 3 only. While only the Helm 3 API is supported, in most cases, charts made for Helm 2 will still work.

Getting started with Kubeapps

Installing Kubeapps is as simple as:

helm repo add bitnami https://charts.bitnami.com/bitnami
kubectl create namespace kubeapps
helm install kubeapps --namespace kubeapps bitnami/kubeapps

See the Getting Started Guide for detailed instructions on how to install and use Kubeapps.
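Once the chart is deployed, the dashboard can be reached with a port-forward; a minimal sketch, assuming the default release and service names from the commands above:

kubectl port-forward -n kubeapps svc/kubeapps 8080:80
# then open http://127.0.0.1:8080 in a browser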

Kubeapps is deployed using the official Bitnami Kubeapps chart from the separate Bitnami charts repository. Although the Kubeapps repository also defines a chart, this is intended for development purposes only.

Documentation

Complete documentation is available in the Kubeapps documentation section, including tutorials, how-to guides, and reference material for configuring and developing Kubeapps.

To get started with Kubeapps, refer to the Getting Started Guide above.

See how to deploy and configure Kubeapps on VMware Tanzu™ Kubernetes Grid™

Troubleshooting

If you encounter issues, please check the troubleshooting docs, review our project board, file an issue, or talk to the Kubeapps maintainers in the #kubeapps channel on the Kubernetes Slack server.

Contributing

If you are ready to jump in and test, add code, or help with documentation, follow the instructions in the start contributing documentation for guidance on how to set up Kubeapps for development.

Changelog

Take a look at the list of releases to stay tuned for the latest features and changes.


kubeapps's Issues

Proposal: enable management of Helm CLI-created releases in Dashboard

Problem

Currently, we have isolated Tiller (see security issues) by running it as a sidecar of the Monocular API pod. This allows the Monocular API to talk to Tiller over gRPC without the need to expose the Tiller service throughout the cluster.

However, we have also configured this isolated Tiller to record releases (stored as ConfigMaps) in the kubeapps namespace, rather than kube-system. This means that a Tiller installed using helm init is not able to discover releases installed through the Dashboard, and vice versa. This is actually a significant limitation and deviation from what people expect to happen.

One example of the limitations of this current approach:

  • Install Ghost chart from Dashboard
  • Notice in the NOTES that in order to complete the installation you have to run helm upgrade and pass in an extra value (in the Ghost example, you need to provide a ghostHost so the application can be correctly configured)
  • Try to run the command as printed out in the NOTES from the UI and notice that Helm complains about the release not existing

Solution

I suggest we keep our sidecar Tiller service, but configure it to record releases in the kube-system namespace to be compatible with the Helm client's Tiller. If users want to just use the dashboard to manage charts, they still can in a secure way. If they want to use the Helm CLI, they will need to explicitly run helm init (we will document this), but they will be able to manage the chart releases from both the Dashboard and the Helm CLI interchangeably.
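A minimal sketch of the resulting CLI flow, assuming the sidecar Tiller is pointed at kube-system via its standard TILLER_NAMESPACE environment variable:

# one-time, documented step for Helm CLI users
helm init
# both Tillers now read and write release ConfigMaps in kube-system,
# so releases created from the Dashboard show up here too
helm list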

Note that Tiller is not a controller; clients connect to it directly. It is a stateless service (all state is stored in ConfigMaps), so it is either not possible, or at least very unlikely, for multiple Tiller services to interfere with each other.

Alternative solution

To be more compatible with the Helm CLI, we could install Tiller the way helm init does, but without creating the Service that exposes it to the whole cluster, either restricting it to localhost (the Helm CLI will port-forward) or generating a certificate (more work). IMO this wouldn't necessarily reduce Tiller's attack surface: a hostile service might know to look for tiller-deploy in the kube-system namespace and port-forward to get access to Tiller, but it would need a service account with permission to port-forward to pods.

cc @anguslees @arapulido @sebgoa @migmartri

Offline (Disconnected) Installation?

This isn't uncommon for newer products, but my installation must be done offline. I can pull containers and push them into my local registry, but I don't see a parameter that can be passed to the installer for the registry location. Is there any consideration of adding an option to change the pull location for the containers?

I haven't looked at the source version yet, just tried the compiled binary for a quick test, so I'll see what I can hack together to get it up. But I was curious whether an add-on option to pass a registry location was being considered.
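For reference, the chart-based installation described at the top of this page covers this through a value; a sketch, assuming the Bitnami chart's standard global.imageRegistry setting (the registry hostname is a placeholder):

helm install kubeapps bitnami/kubeapps --namespace kubeapps \
  --set global.imageRegistry=registry.example.internal
# all container images are then pulled from the private registry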

The --namespace flag is ignored

As the manifest is embedded and it points to the kubeapps namespace, the --namespace flag is ignored.

If we are not going to use it, we should remove it.

`kubeapps up` fails on k8s 1.8 gke

$ ./kubeapps up
INFO[0001] Updating namespaces kubeapps                 
INFO[0001]  Creating non-existent namespaces kubeapps   
INFO[0001] Updating namespaces kubeless                 
INFO[0001]  Creating non-existent namespaces kubeless   
Error: strconv.Atoi: parsing "8+": invalid syntax
Usage:
  kubeapps up FLAG [flags]

Flags:
      --dry-run            Provides output to be submitted to the server
  -h, --help               help for up
      --namespace string   Specify namespace for the Kubeapps components (default "default")

ERRO[0002] strconv.Atoi: parsing "8+": invalid syntax 

ERRO[0000] error upgrading connection: unable to upgrade connection: pod not found

After running kubeapps up, the status output shows everything Pending:

NAMESPACE NAME DESIRED CURRENT
kubeless statefulsets/kafka 1 1
kubeless statefulsets/zoo 1 1

NAMESPACE NAME STATUS
kubeapps pod/default-http-backend-786cc69958-47fdj Pending
kubeapps pod/kubeapps-dashboard-api-56fb98f459-f6dfd Pending
kubeapps pod/kubeapps-dashboard-ui-7685fdc9cf-pfprl Pending
kubeapps pod/kubeless-ui-587f9b9cfd-cwfz8 Pending
kubeapps pod/mongodb-f55447769-m6tsn Pending
kubeapps pod/nginx-ingress-controller-78db565655-l9qbt Pending

You can run kubectl get all --all-namespaces -l created-by=kubeapps to check the status of the Kubeapps components.

And then:

ERRO[0000] error upgrading connection: unable to upgrade connection: pod not found ("nginx-ingress-controller-78db565655-l9qbt_kubeapps")

Garbage collection fails sometimes for kubeapps up

This might be a bug in Kubecfg

Sometimes, when doing kubeapps up you get the following error:

INFO[0000] Updating namespaces kubeapps
INFO[0000]  Creating non-existent namespaces kubeapps
INFO[0000] Updating namespaces kubeless
INFO[0004] Garbage collecting deployments kube-system.sealed-secrets-controller (extensions/v1beta1)
INFO[0004] Garbage collecting deployments kube-system.tiller-deploy (extensions/v1beta1)
INFO[0006] Garbage collecting deployments kube-system.sealed-secrets-controller (apps/v1beta1)
INFO[0006] Garbage collecting deployments kube-system.tiller-deploy (apps/v1beta1)
INFO[0006] Garbage collecting deployments kube-system.sealed-secrets-controller (apps/v1beta2)
INFO[0007] Garbage collecting deployments kube-system.tiller-deploy (apps/v1beta2)
INFO[0008] Garbage collecting clusterrolebindings kubeless-controller-deployer (rbac.authorization.k8s.io/v1)
INFO[0008] Garbage collecting clusterrolebindings sealed-secrets-controller (rbac.authorization.k8s.io/v1)
INFO[0009] Garbage collecting clusterroles kubeless-controller-deployer (rbac.authorization.k8s.io/v1)
INFO[0009] Garbage collecting clusterroles secrets-unsealer (rbac.authorization.k8s.io/v1)
INFO[0009] Garbage collecting rolebindings kube-system.sealed-secrets-controller (rbac.authorization.k8s.io/v1)
INFO[0010] Garbage collecting roles kube-system.sealed-secrets-key-admin (rbac.authorization.k8s.io/v1)
INFO[0011] Garbage collecting customresourcedefinitions functions.k8s.io (apiextensions.k8s.io/v1beta1)
INFO[0011] Garbage collecting customresourcedefinitions sealedsecrets.bitnami.com (apiextensions.k8s.io/v1beta1)
Error: the server could not find the requested resource (get functions.k8s.io)
Usage:
  kubeapps up FLAG [flags]

Flags:
      --dry-run   Provides output to be submitted to the server
  -h, --help      help for up

ERRO[0011] the server could not find the requested resource (get functions.k8s.io)

Add ability to edit the kubernetes manifests

It would be nice if the user were able to inspect/modify the manifests deployed with the kubeapps up command. A possible solution is to introduce a kubeapps dump command to dump the complete manifest YAML, which could then be edited and deployed using kubectl.

This would be helpful in getting kubeapps installed on clusters where the default kubeapps manifest doesn't "just work".

Sample usage:

$ kubeapps dump | kubectl create -f -
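Until a dedicated dump command exists, the --dry-run flag (described in the help output elsewhere on this page as "Provides output to be submitted to the server") may already enable a similar flow; a sketch, assuming --dry-run prints the generated manifests to stdout:

kubeapps up --dry-run > kubeapps-manifest.yaml
# edit kubeapps-manifest.yaml as needed, then:
kubectl create -f kubeapps-manifest.yaml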

Use different debug levels

Right now the logs are a bit confusing. For example:

▶ ./kubeapps up
INFO[0000] Updating namespaces kubeapps
INFO[0000]  Creating non-existent namespaces kubeapps
INFO[0000] Updating namespaces kubeless
INFO[0002] Updating namespaces kubeless
INFO[0002] Updating namespaces kubeapps

Why is it updating a namespace several times?

It also seems that we could benefit from using different log levels. Right now everything is at the "INFO" level. A typical deployment doesn't require that level of detail, so we could use "DEBUG" for most of the current logs and only print them when --verbose is used.

BTW, it would be nice to have a final message once everything is installed, like:

... 
[INFO] Finished deployment. Execute `kubectl proxy` and open the Kubeapps Dashboard at <calculated_url>

(or ... Execute "kubeapps dashboard" ... when that command is ready)

Please release windows binary

I have Kubeapps running in Ubuntu in WSL on Windows, but the dashboard does not work. I'm guessing I will need a real Windows binary for that to work.

Upgrade path

We should provide instructions on how to upgrade not only the client but also the components installed in the cluster. I am assuming that kubeapps up does that at the moment.

Ideally, in my opinion, we would have a kubeapps status that shows the list of components, their versions, and whether they are up to date, plus a kubeapps upgrade command.

But in the meantime, just mentioning that upgrades can be done via (1) update the client, then (2) run kubeapps up should be enough; see the sketch below.
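A sketch of that interim flow, reusing the release-download pattern shown later on this page (the version tag is a placeholder):

sudo curl -L https://github.com/kubeapps/installer/releases/download/vX.Y.Z/kubeapps-linux-amd64 \
  -o /usr/local/bin/kubeapps && sudo chmod +x /usr/local/bin/kubeapps
kubeapps up   # re-applies the manifests with the new client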

Kubeapps up fails in a default GKE cluster

Steps to reproduce:

  1. Build a default GKE cluster with gcloud container clusters create my-cluster
  2. Run ./kubeapps up

You get the following error:

ERRO[0009] Error updating clusterroles kubeless-controller-deployer: clusterroles.rbac.authorization.k8s.io "kubeless-controller-deployer" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["patch"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["patch"]} PolicyRule{Resources:["deployments"], APIGroups:["apps"], Verbs:["create"]} PolicyRule{Resources:["deployments"], APIGroups:["apps"], Verbs:["get"]} PolicyRule{Resources:["deployments"], APIGroups:["apps"], Verbs:["delete"]} PolicyRule{Resources:["deployments"], APIGroups:["apps"], Verbs:["list"]} PolicyRule{Resources:["deployments"], APIGroups:["apps"], Verbs:["update"]} PolicyRule{Resources:["deployments"], APIGroups:["apps"], Verbs:["patch"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["create"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["get"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["delete"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["update"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["patch"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["functions"], APIGroups:["k8s.io"], Verbs:["get"]} PolicyRule{Resources:["functions"], APIGroups:["k8s.io"], Verbs:["list"]} PolicyRule{Resources:["functions"], APIGroups:["k8s.io"], Verbs:["watch"]}] user=&{[email protected]  [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]
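This is standard GKE RBAC behaviour: your Google account cannot grant privileges it does not itself hold, so it needs an explicit cluster-admin binding first. A sketch of the usual workaround, run before kubeapps up:

kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user="$(gcloud config get-value account)"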

Error: stat /home/user/.kube/config: no such file or directory

After

sudo curl -L https://github.com/kubeapps/installer/releases/download/v0.0.2/kubeapps-linux-amd64 -o /usr/local/bin/kubeapps && sudo chmod +x /usr/local/bin/kubeapps

Then

kubeapps up

But it threw errors:

user@user:~$ /usr/local/bin/kubeapps up
Error: stat /home/user/.kube/config: no such file or directory
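The installer reads the standard kubeconfig, so the fix is to create one before running kubeapps up; a sketch for GKE (cluster name and zone are placeholders), assuming the installer honours the standard kubeconfig loading rules:

gcloud container clusters get-credentials my-cluster --zone us-central1-a
# or, if the config lives in a non-default location:
export KUBECONFIG=/path/to/kubeconfig
kubeapps up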

The first time you run kubeapps dashboard the browser opens to a blank page

Steps to reproduce:

  1. kubeapps up
  2. kubeapps dashboard

Expected results:

The command waits a couple of seconds for the proxy to be ready and then opens the page successfully.

Actual results:

The browser tab opens too fast, before the proxy is ready. You get a server error and need to refresh the page to reach the dashboard. (A possible fix is sketched below.)
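A minimal sketch of the proposed fix, expressed in shell (the URL is illustrative; the dashboard command would poll whatever proxy address it sets up internally):

kubectl proxy --port=8001 &
# poll until the proxy answers before opening the browser
until curl -s -o /dev/null http://127.0.0.1:8001/; do sleep 1; done
xdg-open http://127.0.0.1:8001/   # or open(1) on macOS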

cycling through `kubeapps up` and `down` leaves roles in bad state

I did a kubeapps up, then a kubeapps down; everything seemed to have been deleted.
Then I did a kubeapps up again. Everything restarted, but the Kubeless UI got into a bad state and could not read the pods corresponding to a function.

The logs of the kubeless ui showed:

Wed, 29 Nov 2017 23:54:01 GMT app:server Error: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"services \"foo\" is forbidden: User \"system:serviceaccount:kubeapps:kubeless-ui\" cannot get services in the namespace \"default\": Unknown user \"system:serviceaccount:kubeapps:kubeless-ui\"","reason":"Forbidden","details":{"name":"foo","kind":"services"},"code":403}

Reduce size of binary

We should spend a bit of time reducing the size of the binary.

Not only basic compression, but maybe Go compilation options? Or reducing some of the deps? (See the sketch below.)
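A sketch of the usual first steps (standard Go link flags; upx is optional and trades startup time for size):

go build -ldflags '-s -w' -o kubeapps .   # strip the symbol table and DWARF debug info
upx --best kubeapps                       # optional: compress the stripped binary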

git clone doesn't work on windows

C:\GoPath\src\github.com\kubeapps>git clone https://github.com/kubeapps/kubeapps
Cloning into 'kubeapps'...
remote: Counting objects: 9445, done.
remote: Total 9445 (delta 0), reused 0 (delta 0), pack-reused 9445
Receiving objects: 100% (9445/9445), 16.86 MiB | 42.00 KiB/s, done.
Resolving deltas: 100% (3869/3869), done.
fatal: cannot create directory at 'vendor/github.com/docker/distribution/contrib/docker-integration/generated_certs.d/localregistry:5440': Invalid argument
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry the checkout with 'git checkout -f HEAD'

Reason: distribution/distribution#1690

Bumping the revision of github.com/docker/distribution would fix the issue, but we don't want to break the dependencies, as that package isn't imported directly via glide; its revision is derived from other packages (probably client-go or apimachinery). A possible local workaround is sketched below.
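Until the dependency is bumped, a sparse checkout that skips the offending vendored path may work on Windows; a sketch, assuming Git Bash (the pattern syntax is the same as .gitignore):

git clone --no-checkout https://github.com/kubeapps/kubeapps
cd kubeapps
git config core.sparseCheckout true
printf '/*\n!vendor/github.com/docker/distribution/contrib/\n' > .git/info/sparse-checkout
git read-tree -mu HEAD   # checks out everything except the excluded path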

Controls missing

I've run into an issue a couple of times in which the in-cluster functionality seemed to be disabled (no control at the top, nor on-click deployment on the charts page) until I force-refresh.

I believe @nomisbeme ran into this issue as well; could you confirm?


Add deployment order

There is a dependency between some of the elements of Kubeapps (for example, the ratesvc will fail until the MongoDB database is ready). This causes exponential backoff delays. The deployment would be faster if we implemented a basic dependency system in which a deployment that depends on another doesn't start until the one before it has finished. (A minimal version is sketched below.)

It would also be nice to have some kind of progress info.

Right now this is just a wrapper around deployment manifests; I would expect an installer to have enough intelligence to know which components should be installed in which order, and also to give information about installation progress.
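A minimal version of that ordering, expressed with stock kubectl (the deployment names are taken from the logs elsewhere on this page):

# create MongoDB first and block until it has rolled out...
kubectl -n kubeapps rollout status deployment/mongodb
# ...and only then submit the manifests for components that need the
# database, such as kubeapps-hub-ratesvc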

installation options?

It would be nice to offer a --storage-class option. In my particular setup I require a storage class name for Heketi to provision a volume. (A chart-based workaround is sketched below.)

Is there a recommended setup/transition for users who are already running Helm/Tiller on their own?
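For reference, the chart-based installation described at the top of this page exposes this through a value; a sketch, assuming the Bitnami chart's standard global.storageClass setting (the class name is a placeholder):

helm install kubeapps bitnami/kubeapps --namespace kubeapps \
  --set global.storageClass=my-heketi-class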

ingress controller created even if already existing

I am trying HEAD on minikube, which has the ingress addon enabled, and it looks like kubeapps starts another ingress controller:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY     STATUS              RESTARTS   AGE
kube-system   default-http-backend-7658dd6b68-2x6qh       0/1       ContainerCreating   0          1m
kube-system   default-http-backend-lg6vj                  0/1       ContainerCreating   0          1m

kubeapps dashboard cmd misc improvements

If you run kubeapps dashboard before kubeapps has been initialized, the message you get is:

$ kubeapps dashboard
ERRO[0000] nginx ingress controller pod not found, run kubeapps up first 

In my opinion this message should be more user friendly, hiding the implementation details of how we detect it. So just say something like: kubeapps not initialized, run kubeapps up first.

The second issue is that checking for a pod with the label name=nginx-ingress-controller in that namespace might not be enough to decide whether Kubeapps is initialized and running. We could think about having some kind of healthcheck, or even rely on the existing ones (see the sketch below). That could also help to achieve #45 if we want to.
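A sketch of such a check, using only stock kubectl and checking readiness rather than implementation details:

# wait for every kubeapps deployment to finish rolling out
for d in $(kubectl -n kubeapps get deployments -o name); do
  kubectl -n kubeapps rollout status "$d"
done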

Windows binary

Kubeapps CLI should have a working binary for Windows

MongoDB is not actually being secured

We're setting the root password and user password, but not setting a MONGODB_USER, so the authentication configuration is being ignored and MongoDB allows unrestricted access (demonstrated below).

One issue is that, according to the docs, if we create a user and database, the user only has access to that one database. In the dashboard we intend to use multiple databases, so we will either need to run a job to set up users and databases after starting MongoDB, or simply connect as the root user.

In the meantime, I will change everything to authenticate correctly as the root user (that's still better than the current unrestricted access), but we should look into configuring the MongoDB image to create users and databases for each service.
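The unrestricted access is easy to demonstrate; a sketch (the pod name is a placeholder; MongoDB listens on its default port per the service listing elsewhere on this page):

kubectl -n kubeapps port-forward mongodb-<pod-suffix> 27017:27017 &
# with no MONGODB_USER configured, an unauthenticated client can run admin commands
mongo --eval 'db.adminCommand({ listDatabases: 1 })'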

go get doesn't build correctly

If you get the project with go get, you get the following error:

package github.com/kubeapps/installer/generated/statik: cannot find package "github.com/kubeapps/installer/generated/statik" in any of:
	/usr/local/Cellar/go/1.9.2/libexec/src/github.com/kubeapps/installer/generated/statik (from $GOROOT)
	/Users/ara/go/src/github.com/kubeapps/installer/generated/statik (from $GOPATH)
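The generated/statik package is produced at build time, so a plain go get cannot see it; a sketch of the workaround, assuming the repository's Makefile (re)generates it as part of the default target:

git clone https://github.com/kubeapps/installer "$GOPATH/src/github.com/kubeapps/installer"
cd "$GOPATH/src/github.com/kubeapps/installer"
make   # regenerates generated/statik, then builds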

`minikube dashboard` not working

I tried a fresh build from HEAD with a fresh minikube. It took at least 10 minutes for everything to get running.

Then I tried minikube dashboard and it fails with a port-forward error:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY     STATUS             RESTARTS   AGE
kube-system   default-http-backend-7658dd6b68-2x6qh       1/1       Running            0          14m
kube-system   heapster-b8q9k                              1/1       Running            0          15m
kube-system   influxdb-grafana-krxpp                      2/2       Running            0          14m
kube-system   kube-addon-manager-minikube                 1/1       Running            0          15m
kube-system   kube-dns-6fc954457d-88vd8                   3/3       Running            0          14m
kube-system   kubernetes-dashboard-mlb5c                  1/1       Running            0          15m
kube-system   nginx-ingress-controller-6d5987d948-s86dq   0/1       CrashLoopBackOff   7          14m
kube-system   sealed-secrets-controller-cd4f586fc-lk8xl   1/1       Running            0          14m
kube-system   tiller-deploy-84b97f465c-58flx              1/1       Running            0          14m
kubeapps      kubeapps-dashboard-api-7d49749d58-85gxb     0/1       Running            5          14m
kubeapps      kubeapps-dashboard-api-7d49749d58-p5759     0/1       Running            6          14m
kubeapps      kubeapps-dashboard-ui-7c4ffcb5b5-djkcj      1/1       Running            0          14m
kubeapps      kubeapps-dashboard-ui-7c4ffcb5b5-w9xqh      1/1       Running            0          14m
kubeapps      kubeless-ui-56f98b485b-8mst6                2/2       Running            0          14m
kubeapps      mongodb-787c7b54bb-mkwjk                    1/1       Running            1          14m
kubeless      kafka-0                                     1/1       Running            1          14m
kubeless      kubeless-controller-8599864f8d-5lr5p        1/1       Running            0          14m
kubeless      zoo-0                                       1/1       Running            0          14m
sebgoa@foobar examples (master) $ kubectl logs nginx-ingress-controller-6d5987d948-s86dq -n kube-system
I1126 16:29:15.958342       1 launch.go:113] &{NGINX 0.9.0-beta.15 git-a3e86f2 https://github.com/kubernetes/ingress}
I1126 16:29:15.958398       1 launch.go:116] Watching for ingress class: nginx
I1126 16:29:15.958633       1 launch.go:291] Creating API client for https://10.0.0.1:443
I1126 16:29:16.572738       1 launch.go:304] Running in Kubernetes Cluster version v1.8 (v1.8.0) - git (dirty) commit 0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4 - platform linux/amd64
F1126 16:29:16.574956       1 launch.go:138] no service with name kube-system/default-http-backend found: services "default-http-backend" not found
sebgoa@foobar examples (master) $ kubectl get svc --all-namespaces
NAMESPACE     NAME                        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
default       kubernetes                  ClusterIP   10.0.0.1     <none>        443/TCP             16m
kube-system   heapster                    NodePort    10.0.0.158   <none>        80:32352/TCP        15m
kube-system   kube-dns                    ClusterIP   10.0.0.10    <none>        53/UDP,53/TCP       15m
kube-system   kubernetes-dashboard        NodePort    10.0.0.217   <none>        80:30000/TCP        15m
kube-system   monitoring-grafana          NodePort    10.0.0.250   <none>        80:30002/TCP        15m
kube-system   monitoring-influxdb         ClusterIP   10.0.0.68    <none>        8083/TCP,8086/TCP   15m
kube-system   nginx-ingress               ClusterIP   10.0.0.82    <none>        80/TCP              15m
kube-system   sealed-secrets-controller   ClusterIP   10.0.0.161   <none>        8080/TCP            14m
kube-system   tiller-deploy               ClusterIP   10.0.0.244   <none>        44134/TCP           15m
kubeapps      kubeapps-dashboard-api      ClusterIP   10.0.0.22    <none>        80/TCP              15m
kubeapps      kubeapps-dashboard-ui       ClusterIP   10.0.0.218   <none>        80/TCP              15m
kubeapps      kubeless-ui                 ClusterIP   10.0.0.64    <none>        3000/TCP            14m
kubeapps      mongodb                     ClusterIP   10.0.0.105   <none>        27017/TCP           14m
kubeless      broker                      ClusterIP   None         <none>        9092/TCP            15m
kubeless      kafka                       ClusterIP   10.0.0.176   <none>        9092/TCP            15m
kubeless      zoo                         ClusterIP   None         <none>        9092/TCP,3888/TCP   15m
kubeless      zookeeper                   ClusterIP   10.0.0.127   <none>        2181/TCP            15m

mongodb doesn't start properly

The mongodb container seems to be failing with:

Error executing 'postInstallation': Group '2000' not found

Full output:

$ kubectl log  -n kubeapps -p po/mongodb-77bdcd694b-sl2rt
W1211 15:18:10.287621   70223 cmd.go:392] log is DEPRECATED and will be removed in a future version. Use logs instead.

Welcome to the Bitnami mongodb container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
Send us your feedback at [email protected]

nami    INFO  Initializing mongodb
Error executing 'postInstallation': Group '2000' not found

second `kubeapps up` run should be a no-op

A second kubeapps up run in a row garbage collects most (all?) objects instead of just being a no-op:

$ ./kubeapps up
[... 1st run ok ...]

$ ./kubeapps up
INFO[0000] Updating namespaces kubeapps
INFO[0000] Updating namespaces kubeless
INFO[0000] Garbage collecting deployments kube-system.sealed-secrets-controller (extensions/v1beta1)
INFO[0000] Garbage collecting deployments kube-system.tiller-deploy (extensions/v1beta1)
INFO[0000] Garbage collecting deployments kubeapps.kubeapps-hub-api (extensions/v1beta1)
INFO[0000] Garbage collecting deployments kubeapps.kubeapps-hub-prerender (extensions/v1beta1)
INFO[0000] Garbage collecting deployments kubeapps.kubeapps-hub-ratesvc (extensions/v1beta1)
INFO[0000] Garbage collecting deployments kubeapps.kubeapps-hub-ui (extensions/v1beta1)
INFO[0000] Garbage collecting deployments kubeapps.mongodb (extensions/v1beta1)
INFO[0000] Garbage collecting deployments kubeless.kubeless-controller (extensions/v1beta1)
INFO[0000] Garbage collecting ingresses kubeapps.kubeapps-hub (extensions/v1beta1)
INFO[0001] Garbage collecting deployments kube-system.sealed-secrets-controller (apps/v1beta1)
INFO[0001] Garbage collecting deployments kube-system.tiller-deploy (apps/v1beta1)
INFO[0001] Garbage collecting deployments kubeapps.kubeapps-hub-api (apps/v1beta1)
INFO[0001] Garbage collecting deployments kubeapps.kubeapps-hub-prerender (apps/v1beta1)
INFO[0001] Garbage collecting deployments kubeapps.kubeapps-hub-ratesvc (apps/v1beta1)
[...]

Full session log: https://gist.github.com/jjo/88687300eb7d4f64c38ce7224172e929

Update client-go to 4.0.0 or greater

In order to use client-go to create a portforward connection, we need the remotecommand or spdy package. remotecommand moved to client-go in kubernetes/client-go@20e59c6#diff-8656ac6cd2859b6ba2961d35c31735b4 and previously this was in kubernetes/kubernetes. This is how kubectl implements portforward today with the spdy package (https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/portforward.go#L102) and previously with remotecommand (https://github.com/kubernetes/kubernetes/blob/2612e0c78ad18ac87bbd200d547100cf99f36089/pkg/kubectl/cmd/portforward.go#L104).

So, in order to avoid adding kubernetes/kubernetes as a dependency here, we could upgrade client-go. The problem is that this pulls in a newer version of apimachinery which doesn't seem to be compatible with kubecfg. As usual, we're in Kubernetes dependency hell...

There might be another way to implement this, but I couldn't find any other documentation than what kubectl is doing.

For now, we can have the dashboard command execute kubectl port-forward, but ideally we're able to use the client-go library to create this portforward.
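A sketch of that interim approach (the pod label is the one the dashboard command already looks for, per the earlier issue on this page):

POD=$(kubectl -n kubeapps get pod -l name=nginx-ingress-controller \
      -o jsonpath='{.items[0].metadata.name}')
kubectl -n kubeapps port-forward "$POD" 8002:80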

support jsonnet manifests

Currently the installer supports YAML only. It should be extended to deploy applications via jsonnet files, as sketched below.
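A sketch of what that could look like (the -f flag on kubeapps is hypothetical and not implemented today; kubecfg, which the installer builds on, already evaluates jsonnet directly):

kubeapps up -f my-app.jsonnet   # hypothetical flag, for illustration only
kubecfg update my-app.jsonnet   # what kubecfg itself offers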

Dev looking to help

Not sure where to post this; I don't see a Slack or anything. Windows and Visual Studio/VS Code are my primary development environments. I've used a lot of others, like Xcode, Eclipse, and a bunch more over my 20 years of programming. I also have pretty deep knowledge of many orchestrators, including K8s, Swarm, and Service Fabric, and tons of container experience as well. I triple-boot Windows, Mac, and Linux so I can build and test in any environment. I also have access to a lot of Azure resources such as AKS (managed Kubernetes on Azure); I can pretty much test out anything needed in that environment and cover the costs. I would really love to help on this project if you're interested; this seems like a great project.

Thanks.

Kubeapps installs components into different namespaces

I was expecting Kubeapps to install everything into a single namespace, but:

  • Kubeless goes into kubeless
  • Monocular into kubeapps
  • SealedSecrets into kube-system

I found that a bit messy, especially kube-system, as I already had a lot installed there and I had to play spot-the-difference to see what was added after I ran kubeapps up. (The label selector shown below helps.)
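The label the installer applies does make it possible to list everything it created in one shot, as mentioned earlier on this page:

kubectl get all --all-namespaces -l created-by=kubeapps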

kubeapps-dashboard-ui crash loop on CentOS deployed kubernetes 1.8.5

Kubernetes was deployed on AWS with kops, and a CentOS image was specified for the machines, per https://github.com/kubernetes/kops/blob/master/docs/images.md

kubectl -n kubeapps logs kubeapps-dashboard-ui-5dc47b8d6d-s4qbk

Welcome to the Bitnami nginx container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-nginx
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-nginx/issues
Send us your feedback at [email protected]

nami INFO Initializing nginx
Error executing 'postInstallation': EEXIST: file already exists, symlink '/bitnami/nginx/conf' -> '/opt/bitnami/nginx/conf'

kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-16T03:16:50Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.5", GitCommit:"cce11c6a185279d037023e02ac5249e14daa22bf", GitTreeState:"clean", BuildDate:"2017-12-07T16:05:18Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Ingress controller

Is there any way to configure the ingress controller service to be exposed via ELB/NodePort at deployment time? If not, what's the recommended way to do it? (One option is sketched below.)

Would it make sense to allow the user to choose an existing ingress controller?
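One option with what exists today is to change the controller's Service type after install; a sketch (the service name is taken from the kubectl get svc listing in an earlier issue on this page):

kubectl -n kube-system patch svc nginx-ingress -p '{"spec":{"type":"NodePort"}}'
# or "type":"LoadBalancer" on AWS to get an ELB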
