Giter VIP home page Giter VIP logo

cloud-provider-alibaba-cloud's Introduction

Kubernetes (K8s)

CII Best Practices Go Report Card GitHub release (latest SemVer)


Kubernetes, also known as K8s, is an open source system for managing containerized applications across multiple hosts. It provides basic mechanisms for the deployment, maintenance, and scaling of applications.

Kubernetes builds upon a decade and a half of experience at Google running production workloads at scale using a system called Borg, combined with best-of-breed ideas and practices from the community.

Kubernetes is hosted by the Cloud Native Computing Foundation (CNCF). If your company wants to help shape the evolution of technologies that are container-packaged, dynamically scheduled, and microservices-oriented, consider joining the CNCF. For details about who's involved and how Kubernetes plays a role, read the CNCF announcement.


To start using K8s

See our documentation on kubernetes.io.

Take a free course on Scalable Microservices with Kubernetes.

To use Kubernetes code as a library in other applications, see the list of published components. Use of the k8s.io/kubernetes module or k8s.io/kubernetes/... packages as libraries is not supported.

To start developing K8s

The community repository hosts all information about building Kubernetes from source, how to contribute code and documentation, who to contact about what, etc.

If you want to build Kubernetes right away there are two options:

You have a working Go environment.
git clone https://github.com/kubernetes/kubernetes
cd kubernetes
make
You have a working Docker environment.
git clone https://github.com/kubernetes/kubernetes
cd kubernetes
make quick-release

For the full story, head over to the developer's documentation.

Support

If you need support, start with the troubleshooting guide, and work your way through the process that we've outlined.

That said, if you have questions, reach out to us one way or another.

Community Meetings

The Calendar has the list of all the meetings in the Kubernetes community in a single location.

Adopters

The User Case Studies website has real-world use cases of organizations across industries that are deploying/migrating to Kubernetes.

Governance

Kubernetes project is governed by a framework of principles, values, policies and processes to help our community and constituents towards our shared goals.

The Kubernetes Community is the launching point for learning about how we organize ourselves.

The Kubernetes Steering community repo is used by the Kubernetes Steering Committee, which oversees governance of the Kubernetes project.

Roadmap

The Kubernetes Enhancements repo provides information about Kubernetes releases, as well as feature tracking and backlogs.

cloud-provider-alibaba-cloud's People

Contributors

andrewsykim avatar aoxn avatar bswang avatar cheyang avatar crazykev avatar cuericlee avatar damdo avatar ddbmh avatar devkanro avatar elmiko avatar gujingit avatar ialidzhikov avatar jia-jerry avatar jovizhangwei avatar k8s-ci-robot avatar letty5411 avatar lyt99 avatar mel3c avatar mirake avatar mitingjin avatar ringtail avatar sunyuan3 avatar tengattack avatar testwill avatar xh4n3 avatar xichengliudui avatar xuancheng131 avatar yang-wang11 avatar yuzhiquan avatar zyecho avatar

Stargazers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

Watchers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

cloud-provider-alibaba-cloud's Issues

创建的service一直处于pending状态

环境:k8s 1.17.5
安装方式: RKE
操作步骤:

  1. 机器hosts添加
    127.0.0.1 cn-beijing.i-2ze67rrrqh0b566qg4ud
  2. 调整kubelet参数
    kubelet: image: "" extra_args: hostname-override: cn-beijing.i-2ze67rrrqh0b566qg4ud provider-id: cn-beijing.i-2ze67rrrqh0b566qg4ud cloud-provider: external
  3. 创建RAM子账号并根据脚本授权,取得秘钥,替换,后来有人说要base64也尝试过
  4. 创建configmap,分配角色,修改了 appsVersion,都改成v1版本
  5. 创建svc
    `
    apiVersion: v1
    kind: Service
    metadata:
    annotations:
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: intranet
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: lb-2ze62jfr5iu0wxwdo2sxq
    #service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners: "true"
    name: nginx
    namespace: default
    spec:
    ports:
  • port: 80
    protocol: TCP
    targetPort: 80
    selector:
    run: nginx
    type: LoadBalancer
    `
    结果:
    IP一直处理pendign状态
    image

不知道要怎么排查了,大侠们帮帮忙

建议可通过添加annotation等方式允许master节点作为LoadBalancer的后端

if _, isMaster := node.Labels[LabelNodeRoleMaster]; isMaster {
return false
}

是否可以允许通过设置annotation的方式,允许master节点也能作为slb的后端节点?

对于一些在阿里云上自建的集群,可以更灵活,比如使用k3s的创建轻量单节点测试环境。

Every node becomes network-unavailable while using cloud-provider-alibaba-cloud

Behaviour

Any pod can not be scheduled to run as 3 node(s) had taint {node.kubernetes.io/network-unavailable: } after adding the --cloud-provider=external --provider-id=xxx to kubelet command.

Node description:

spec:
  providerID: cn-shenzhen.i-wz9idb0tjek3dpel235a
  taints:
  - effect: NoSchedule
    key: node.kubernetes.io/network-unavailable
    timeAdded: "2020-07-16T07:57:45Z"

It adds this taint immediately even though I deleted this taint manually. - --configure-cloud-routes=false is ineffective.

Environment

Kubernetes Version: v1.18.4
cloud-controller-manager Version: registry.cn-shenzhen.aliyuncs.com/acs/cloud-controller-manager-amd64:v1.9.3.239-g40d97e1-aliyun

NetworkUnavailable -> RouteController failed to create a route

I tried deploy aliyun-cloud-provider. The kubernetes nodes consistently come up with NetworkUnavailable. When running kubectl describe pod nginx-example-5b65f94fcc-vmp5p I get the following:

Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  104s (x25 over 3m11s)  default-scheduler  0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.

kubectl get node -o yaml | grep taints -A 3

    taints:
    - effect: NoSchedule
      key: node.kubernetes.io/network-unavailable
      timeAdded: 2019-04-23T08:31:10Z
--
    taints:
    - effect: NoSchedule
      key: node.kubernetes.io/network-unavailable
      timeAdded: 2019-04-23T08:31:10Z

Official docker image

Hello,

Do you have plan to have official docker image in Dockerhub or k8s.gcr.io?

Thanks

CCM 同时摘,异步做,导致的连接异常

业务结构: 阿里云SLB + IPVS + POD
在delete pod 时候,我们发现,这个时候访问SLB的请求会有部分connection refused,后续发现在delete命令发送后,IPVS几乎瞬间删除配置, SLB会进行变配,但是需要时间在2秒-10秒左右,导致流量投个SLB传到了ECS上,但是ECS没有映射规则,而refused.

建议delete pod 同步执行,先从SLB摘除ECS,然后再删IPVS。

flag "service-account-private-key-file" does not exist

[root@master cloud-balancer]# kubectl logs -f cloud-controller-manager-cdsdz -n kube-system
W0317 03:42:37.091169 1 options.go:65] add flags error: flag "service-account-private-key-file" does not exist
Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances.
I0317 03:42:37.141162 1 clientmgr.go:100] alicloud: ak mode to authenticate user. without token and role assume
I0317 03:42:37.141340 1 clientmgr.go:140] wait for token ready
I0317 03:42:37.182267 1 alicloud.go:178] Using vpc region: region=cn-hangzhou, vpcid=vpc-bp18l81vb7ob7zp16glj5
W0317 03:42:37.294399 1 ccm.go:165] detected a cluster without a ClusterID. A ClusterID will be required in the future. Please tag your cluster to avoid any future issues
E0317 03:42:37.294594 1 cloudprovider-alibaba-cloud.go:49] Run CCM error: verify ccm config: create client error: error loading config file "/etc/kubernetes/cloud-controller-manager.conf": yaml: line 16: mapping values are not allowed in this context


1、cloud-controller-manger的配置文件如下:

cat /etc/kubernetes/cloud-controller-manager.conf

kind: Config
contexts:

  • context:
    cluster: kubernetes
    user: system:cloud-controller-manager
    name: system:cloud-controller-manager@kubernetes
    current-context: system:cloud-controller-manager@kubernetes
    users:
  • name: system:cloud-controller-manager
    user:
    tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    apiVersion: v1
    clusters:
  • cluster:
    certificate-authority-data:LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1ETXdNVEEwTURJek5sb1hEVE14TURJeU56QTBNREl6Tmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDJwCmhhOUc3VFNHWmVPc1U2U1Y1eXBJd2JFeFdaMHorVlVRdlgzM2gyek5ZbUtHVDl2ODlpLzdLMHRPSS9Zd3NEK1kKb3ViTlFBMFlNQ1R4ZGc1SjREbXFtY0loN0VRRER4d3NEQ0xMT2o3ZWh6Z3RXbEZ3d1hVNktrb0g4OUV6L3g4YQpFVzMveXhIYzkvbzE4Yi93NGNzWEwzanV1ZGZ4ZnIvNnpnY3UwOURpQzY5bFFxdTZsZ24yeWVwWnJPM1hzUzlwCjhYbVBSTXQrLzR4ZE1WUnNxMnl0QzRXdXVJNVZsODNoemQxSVZ0bUZrbCtOWjMwYllRNlV3Nm96RFVCNERKZ3cKMFZaWTUzN3hPTndGM1puSzNPU213S041a0lZZDg0UXpjaUtMY2JmQUpkMXFYQzJYTW93czRBU1BQdFBVSDlveQpKbDZSWkt1TFZnbFlhYkdHZlBFQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHTm9JTk5IVVl2WVZxczJJaERHRlh0OW9DZEkKbTVLWlhVQmxGSWlqY3NHQ1BpK3RCQ2tBWnRPeFNmb3p1eEJ5UnNHeXdGcWtCREZOWDhVM3V5VEhWVXZjdFZMdgp6MWt1RmxVVEtFNlJtTFFPUWpaWVNORERTdWJaTFI3b0pqUmY2UE1aZ2MvcVdicVd4ZmxycCtPTFlFMFVLL2FjClhkOG9rbU5VREhNL2ZMYWsrMjF4QU9YcmYwTzFTQ1RxZENIcmlJSWdNckJtTU5TYVc0VFp3ZGxjaG5QSkV5Q1YKdjFMdE5mRFl1Q3NLS1NJQllzKzVicWxLRFdXajV4VzJTUHVwZEFkNld3VnBFeUtETHJmdndXL3NCZGF5b3JlNwp0N25uRlBJN2xTdWYzU1RmRTVqdjRNRXZvdFpORjdiWHhmcEU2Tlh1OXVnbXgwY00wdENNdER2TTIraz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://172.16.0.48:6443
    name: kubernetes

2、$ca_data的数据也是正确的:
cat /etc/kubernetes/pki/ca.crt|base64 -w 0
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1ETXdNVEEwTURJek5sb1hEVE14TURJeU56QTBNREl6Tmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDJwCmhhOUc3VFNHWmVPc1U2U1Y1eXBJd2JFeFdaMHorVlVRdlgzM2gyek5ZbUtHVDl2ODlpLzdLMHRPSS9Zd3NEK1kKb3ViTlFBMFlNQ1R4ZGc1SjREbXFtY0loN0VRRER4d3NEQ0xMT2o3ZWh6Z3RXbEZ3d1hVNktrb0g4OUV6L3g4YQpFVzMveXhIYzkvbzE4Yi93NGNzWEwzanV1ZGZ4ZnIvNnpnY3UwOURpQzY5bFFxdTZsZ24yeWVwWnJPM1hzUzlwCjhYbVBSTXQrLzR4ZE1WUnNxMnl0QzRXdXVJNVZsODNoemQxSVZ0bUZrbCtOWjMwYllRNlV3Nm96RFVCNERKZ3cKMFZaWTUzN3hPTndGM1puSzNPU213S041a0lZZDg0UXpjaUtMY2JmQUpkMXFYQzJYTW93czRBU1BQdFBVSDlveQpKbDZSWkt1TFZnbFlhYkdHZlBFQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHTm9JTk5IVVl2WVZxczJJaERHRlh0OW9DZEkKbTVLWlhVQmxGSWlqY3NHQ1BpK3RCQ2tBWnRPeFNmb3p1eEJ5UnNHeXdGcWtCREZOWDhVM3V5VEhWVXZjdFZMdgp6MWt1RmxVVEtFNlJtTFFPUWpaWVNORERTdWJaTFI3b0pqUmY2UE1aZ2MvcVdicVd4ZmxycCtPTFlFMFVLL2FjClhkOG9rbU5VREhNL2ZMYWsrMjF4QU9YcmYwTzFTQ1RxZENIcmlJSWdNckJtTU5TYVc0VFp3ZGxjaG5QSkV5Q1YKdjFMdE5mRFl1Q3NLS1NJQllzKzVicWxLRFdXajV4VzJTUHVwZEFkNld3VnBFeUtETHJmdndXL3NCZGF5b3JlNwp0N25uRlBJN2xTdWYzU1RmRTVqdjRNRXZvdFpORjdiWHhmcEU2Tlh1OXVnbXgwY00wdENNdER2TTIraz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=

service-account-private-key-file这个东西是哪里来的?

about `kube-apiserver and kube-controller-manager MUST NOT specify the --cloud-provider flag`

I saw that the project documentation is to configure the api and controller --cloud-provider flag , but the official kubernetes documentation says it can't be configured. Is the version problem?

running-cloud-controller

kube-apiserver and kube-controller-manager MUST NOT specify the --cloud-provider flag. This ensures that it does not run any cloud specific loops that would be run by cloud controller manager. In the future, this flag will be deprecated and removed.

Why should node name has to be overriden?

To make the controller works, we need to specify --hostname-override [region].[instanceid]. This is not required in other CCM like AWS or GCP. Why should we put this constraint?

How to get slb loadbalancer id when ccm auto create slb?

hi. if i create service in k8s while ccm auto create slb, can i get service response with loadbalancer id?

`apiVersion: v1
kind: Service
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"nginx111","namespace":"default"},"spec":{"ports":[{"port":80,"protocol":"TCP","targetPort":80}],"selector":{"run":"nginx"},"type":"LoadBalancer"}}
creationTimestamp: "2020-11-06T09:23:36Z"
name: nginx111
namespace: default
resourceVersion: "86036998"
selfLink: /api/v1/namespaces/default/services/nginx111
uid: bfc3cb7b-2011-11eb-a8e5-00163e2cb1df
spec:
clusterIP: 10.21.5.89
externalTrafficPolicy: Cluster
ports:

  • nodePort: 32466
    port: 80
    protocol: TCP
    targetPort: 80
    selector:
    run: nginx
    sessionAffinity: None
    type: LoadBalancer
    status:
    loadBalancer:
    ingress:
    • ip: 39.106.68.98`

cannot start cloud-controller-manager

version 1.14.6
error on initialize
cloud-controller-manager-8qgwc 1/1 Running 0 2m12s

kubectl logs cloud-controller-manager-7mqng -n kube-system
Error from server: no preferred addresses found; known addresses: []

vpc创建路由失败

I1019 08:03:32.355043       1 controller.go:368] Creating route for node cn-beijing.i-2zeaxxxxxxwimgxn 10.244.0.0/20 with hint cn-beijing.i-2zeaxxxxxximgxn 
I1019 08:03:32.355061       1 alicloud.go:556] Alicloud.CreateRoute(", &{Name: TargetNode:cn-beijing.i-2zexxxxx3wimgxn DestinationCIDR:10.244.0.0/20 Blackhole:false}") 
I1019 08:03:32.514105       1 routes.go:193] CreateRoute:[vtb-2zeiksxxxxxxnzifrf] start to create route, 10.244.0.0/20 -> i-2zeagh7rzys1k3wimgxn 
E1019 08:03:32.902558       1 controller.go:376] Backoff creating route: WaitCreate: ceate route for table vtb-2zeixxxxxxrenzifrf error, Aliyun API Error: RequestId: 28E4BFA8-A9DB-46DA-B778-00D4DA256D3B Status Code: 400 Code: OperationFailed.InvalidNexthop Message: vpc multi scope route must has a enable nexthop. 

相同问题 #77

按照工单的技术说,创建route不应该带区域cn-beijing前缀?

alibaba account for minikube

Hello I am one of the minikube maintainers, and we have many users in china who do not have access to gcs buckets (where we store our preloaded images) and our chinnese users have been using work arrounds to download minikube binaries from alibaba from user-managed accounts

we would like to publish minikube binaries and preloaded images to alibaba account directly.
we also like to potentially run integration test in alibaba VMs as well.

is there a way we could get an alibaba account ?

failed to initialize a master node with aliyun-cloud-provider

hi guys,

I wanna create a highly-available kubernetes master cluster with aliyun-cloud-provider, but kubeadm hangs and report:

sudo kubeadm init --config=kubeadm-config.yaml
[init] using Kubernetes version: v1.12.7
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [cn-zhangjiakou.i-8vb65t2rftc0vf9zf5rh localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [cn-zhangjiakou.i-8vb65t2rftc0vf9zf5rh localhost] and IPs [192.168.100.8 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [cn-zhangjiakou.i-8vb65t2rftc0vf9zf5rh kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.16.0.1 192.168.100.8 192.168.101.55 192.168.100.8 192.168.101.53 192.168.100.9 192.168.101.55]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Adding extra host path mount "localtime" to "kube-apiserver"
[controlplane] Adding extra host path mount "localtime" to "kube-controller-manager"
[controlplane] Adding extra host path mount "localtime" to "kube-scheduler"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster

kubelet logs as follow:

sudo journalctl -xeu kubelet
Apr 16 14:05:59 hb3-test-dev-001 kubelet[21490]: E0416 14:05:59.654285   21490 kubelet.go:2236] node "cn-zhangjiakou.i-8vb65t2rftc0vf9zf5rh" not found
Apr 16 14:05:59 hb3-test-dev-001 kubelet[21490]: E0416 14:05:59.754451   21490 kubelet.go:2236] node "cn-zhangjiakou.i-8vb65t2rftc0vf9zf5rh" not found
Apr 16 14:05:59 hb3-test-dev-001 kubelet[21490]: E0416 14:05:59.854657   21490 kubelet.go:2236] node "cn-zhangjiakou.i-8vb65t2rftc0vf9zf5rh" not found
Apr 16 14:05:59 hb3-test-dev-001 kubelet[21490]: E0416 14:05:59.954827   21490 kubelet.go:2236] node "cn-zhangjiakou.i-8vb65t2rftc0vf9zf5rh" not found
Apr 16 14:06:00 hb3-test-dev-001 kubelet[21490]: E0416 14:06:00.055024   21490 kubelet.go:2236] node "cn-zhangjiakou.i-8vb65t2rftc0vf9zf5rh" not found
Apr 16 14:06:00 hb3-test-dev-001 kubelet[21490]: E0416 14:06:00.155198   21490 kubelet.go:2236] node "cn-zhangjiakou.i-8vb65t2rftc0vf9zf5rh" not found
Apr 16 14:06:00 hb3-test-dev-001 kubelet[21490]: E0416 14:06:00.255358   21490 kubelet.go:2236] node "cn-zhangjiakou.i-8vb65t2rftc0vf9zf5rh" not found
Apr 16 14:06:00 hb3-test-dev-001 kubelet[21490]: E0416 14:06:00.355559   21490 kubelet.go:2236] node "cn-zhangjiakou.i-8vb65t2rftc0vf9zf5rh" not found
Apr 16 14:06:00 hb3-test-dev-001 kubelet[21490]: E0416 14:06:00.455724   21490 kubelet.go:2236] node "cn-zhangjiakou.i-8vb65t2rftc0vf9zf5rh" not found
Apr 16 14:06:00 hb3-test-dev-001 kubelet[21490]: E0416 14:06:00.555911   21490 kubelet.go:2236] node "cn-zhangjiakou.i-8vb65t2rftc0vf9zf5rh" not found
Apr 16 14:06:00 hb3-test-dev-001 kubelet[21490]: E0416 14:06:00.656133   21490 kubelet.go:2236] node "cn-zhangjiakou.i-8vb65t2rftc0vf9zf5rh" not found
Apr 16 14:06:00 hb3-test-dev-001 kubelet[21490]: E0416 14:06:00.756317   21490 kubelet.go:2236] node "cn-zhangjiakou.i-8vb65t2rftc0vf9zf5rh" not found
Apr 16 14:06:00 hb3-test-dev-001 kubelet[21490]: E0416 14:06:00.856469   21490 kubelet.go:2236] node "cn-zhangjiakou.i-8vb65t2rftc0vf9zf5rh" not found
Apr 16 14:06:00 hb3-test-dev-001 kubelet[21490]: E0416 14:06:00.956643   21490 kubelet.go:2236] node "cn-zhangjiakou.i-8vb65t2rftc0vf9zf5rh" not found

kubeadm config details as follow:

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
bootstrapTokens:
- token: feec19.a15e50482a60eb1b
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external
  name: cn-zhangjiakou.i-8vb65t2rftc0vf9zf5rh
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: v1.12.7
controlPlaneEndpoint: 192.168.101.55:6443    # intranet SLB address
apiServerCertSANs:
  - 192.168.100.8                                                 # current node
  - 192.168.101.53    
  - 192.168.100.9
  - 192.168.101.55
apiServerExtraArgs:
  cloud-provider: external
  apiserver-count: "3"
clusterName: kubernetes
controllerManagerExtraArgs:
  cloud-provider: external
  horizontal-pod-autoscaler-use-rest-clients: "false"
  node-cidr-mask-size: "20"
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 172.16.0.0/20
apiServerExtraVolumes:
- hostPath: /etc/localtime
  mountPath: /etc/localtime
  name: localtime
controllerManagerExtraVolumes:
- hostPath: /etc/localtime
  mountPath: /etc/localtime
  name: localtime
schedulerExtraVolumes:
- hostPath: /etc/localtime
  mountPath: /etc/localtime
  name: localtime

gometalinter fix

gometalinter --disable-all --skip vendor -E ineffassign -E misspell -d ./...

  • alicloud_test.go
  • alicloud.go
  • clientmgr.go
  • controller
  • framework.go
  • instance_test.go
  • instances.go
  • instances_mock.go
  • listener_test.go
  • listeners.go
  • loadbalancer.go
  • loadbalancer_mock.go
  • loadbalancer_test.go
  • options.go
  • privatezone.go
  • routes.go
  • routes_mock.go
  • routes_test.go
  • utils
  • utils.go
  • vgroups.go

无法创建 loadbalancer

2m48s Warning CreatingLoadBalancerFailed service/hello Error creating load balancer (will retry): failed to ensure load balancer for service jet/hello: Aliyun API Error: RequestId: D91A5516-2523-41AC-B49A-11E757B275C7 Status Code: 400 Code: BackendServer.ServerIdEmpty Message: The specified ServerId can not be empty.

[bug]阿里云kubernetes版不检查loadbalancer service port,导致流量被异常转发

背景信息

registry-vpc.cn-shenzhen.aliyuncs.com/acs/cloud-controller-manager-amd64:v1.9.3.106-g3f39653-aliyun

bug

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/alicloud-loadbalancer-id: lb-a
  name: a
spec:
  externalTrafficPolicy: Local
  ports:
  - port: 9200
    protocol: TCP
    targetPort: restful
  selector:
    app: elasticsearch
    elasticsearch-role: data
  sessionAffinity: None
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/alicloud-loadbalancer-id: lb-a
  name: b
spec:
  externalTrafficPolicy: Local
  ports:
  - port: 9200
    protocol: TCP
    targetPort: restful
  selector:
    app: elasticsearch
    elasticsearch-role: master
  sessionAffinity: None
  type: LoadBalancer  

如上所属,a,b selector虽然不同,但绑定了同样的lb,使用了同样的svc port.

假设slb ip为0.0.0.1,a对应的SLB监听端口为1,也就是说通过0.0.0.1:1能够访问到kubernetes内部的a服务;同理,通过0.0.0.1:2访问b服务.

但是实际上,因为二义性的这个bug,访问0.0.0.1:1有可能访问到b服务.

问题的关键

cloud-provider-alibaba-cloud 这个组件没有检查已有的端口冲突,导致了有问题的svc留存.

slb 不能自动挂载后端服务器

slb 可以正常创建,但是不能挂载后端服务器 controller 报错如下,请问需要怎么解决呢 谢谢
/Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"cn-beijing.iz2zeiaj62cegx74yfmgamz", UID:"cn-beijing.iz2zeiaj62cegx74yfmgamz", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FailedToCreateRoute' Could not create route ec932f20-9bab-11e9-bf08-00163e0a652b 10.42.2.0/24 for node cn-beijing.iz2zeiaj62cegx74yfmgamz after 65.769913ms: instance not found

部署后启动异常

I0323 06:47:40.990888 1 clientmgr.go:140] wait for token ready
I0323 06:47:41.018636 1 alicloud.go:178] Using vpc region: region=cn-shanghai, vpcid=vpc-uf6bzcnx08y2ntl0grnf9
E0323 06:47:41.132691 1 cloudprovider-alibaba-cloud.go:49] Run CCM error: verify ccm config: cloud provider could not be initialized: could not init cloud provider "alicloud": set vpc info error: alicloud: multiple vpc found by id[vpc-uf6bzcnx08y2ntl0grnf9], length(vpcs)=0

Fail to update existing Loadbalancer

When I update CCM to latest version, some times I found old loadbalancer's listeners are gone. Even I tried to delete the pod, the listeners are not back. The old listeners are strange looks like tcp_443 Errors were reported as:

I1111 10:08:51.688081       1 vgroups.go:317] ensure vserver group: 2 vgroup need to be processed.
I1111 10:08:51.688118       1 vgroups.go:592] [Local] mode service: k8s/32198/addons-nginx-ingress-controller/kube-system/shoot--garden--alicloud
I1111 10:08:51.688138       1 utils.go:155] [kube-system/addons-nginx-ingress-controller]: reconcile , Subset index=0, port len=2, ready len=0, not ready len=1
I1111 10:08:51.750642       1 vgroups.go:33] [kube-system/addons-nginx-ingress-controller]update: backend vgroupid [rsp-gw8wvy5ro36gu]
I1111 10:08:51.814525       1 vgroups.go:33] [kube-system/addons-nginx-ingress-controller]update: apis[[]], node[[]]
I1111 10:08:51.814554       1 vgroups.go:33] [kube-system/addons-nginx-ingress-controller]update: no backend need to be added for vgroupid [rsp-gw8wvy5ro36gu]
I1111 10:08:51.814566       1 vgroups.go:33] [kube-system/addons-nginx-ingress-controller]EnsureGroup: id=[rsp-gw8wvy5ro36gu], Name:[k8s/32198/addons-nginx-ingress-controller/kube-system/shoot--garden--alicloud], LoadBalancerId:[lb-gw8rcl022hklkdfdisufc]
I1111 10:08:51.814577       1 vgroups.go:592] [Local] mode service: k8s/30917/addons-nginx-ingress-controller/kube-system/shoot--garden--alicloud
I1111 10:08:51.814592       1 utils.go:155] [kube-system/addons-nginx-ingress-controller]: reconcile , Subset index=0, port len=2, ready len=0, not ready len=1
I1111 10:08:51.876629       1 vgroups.go:33] [kube-system/addons-nginx-ingress-controller]update: backend vgroupid [rsp-gw8q0r7lf3xtz]
I1111 10:08:51.949083       1 vgroups.go:33] [kube-system/addons-nginx-ingress-controller]update: apis[[]], node[[]]
I1111 10:08:51.949366       1 vgroups.go:33] [kube-system/addons-nginx-ingress-controller]update: no backend need to be added for vgroupid [rsp-gw8q0r7lf3xtz]
I1111 10:08:51.949401       1 vgroups.go:33] [kube-system/addons-nginx-ingress-controller]EnsureGroup: id=[rsp-gw8q0r7lf3xtz], Name:[k8s/30917/addons-nginx-ingress-controller/kube-system/shoot--garden--alicloud], LoadBalancerId:[lb-gw8rcl022hklkdfdisufc]
I1111 10:08:51.949432       1 log.go:14] [kube-system/addons-nginx-ingress-controller]not user defined loadbalancer[lb-gw8rcl022hklkdfdisufc], start to apply listener.
I1111 10:08:51.949442       1 listeners.go:64] transfor protocol, empty annotation 443/TCP
I1111 10:08:51.949458       1 listeners.go:64] transfor protocol, empty annotation 80/TCP
W1111 10:08:51.949466       1 listeners.go:538] alicloud: error parse listener description[]. ListenerName Format Error: k8s/${port}/${service}/${namespace}/${clusterid} format is expected. Got []
W1111 10:08:51.949472       1 listeners.go:538] alicloud: error parse listener description[]. ListenerName Format Error: k8s/${port}/${service}/${namespace}/${clusterid} format is expected. Got []
I1111 10:08:51.949480       1 log.go:14] [kube-system/addons-nginx-ingress-controller]found listener with port & protocol match, do update k8s/80/addons-nginx-ingress-controller/kube-system/shoot--garden--alicloud
I1111 10:08:51.949500       1 log.go:14] [kube-system/addons-nginx-ingress-controller]found listener with port & protocol match, do update k8s/443/addons-nginx-ingress-controller/kube-system/shoot--garden--alicloud
I1111 10:08:51.949506       1 log.go:14] [kube-system/addons-nginx-ingress-controller]ensure listener: 2 updates for lb-gw8rcl022hklkdfdisufc
I1111 10:08:51.949514       1 listeners.go:233] apply UPDATE listener for k8s/80/addons-nginx-ingress-controller/kube-system/shoot--garden--alicloud with trans protocol tcp
I1111 10:08:52.014595       1 log.go:14] [kube-system/addons-nginx-ingress-controller]tcp listener 80 status is running.
I1111 10:08:52.014632       1 listeners.go:274] found: key=k8s/30917/addons-nginx-ingress-controller/kube-system/shoot--garden--alicloud, groupid=rsp-gw8q0r7lf3xtz, try use vserver group mode.
I1111 10:08:52.014643       1 log.go:14] [kube-system/addons-nginx-ingress-controller]tcp listener did not change, skip [update], port=[80], nodeport=[30917]
I1111 10:08:52.014650       1 listeners.go:233] apply UPDATE listener for k8s/443/addons-nginx-ingress-controller/kube-system/shoot--garden--alicloud with trans protocol tcp
I1111 10:08:52.087701       1 log.go:14] [kube-system/addons-nginx-ingress-controller]tcp listener 443 status is running.
I1111 10:08:52.087729       1 listeners.go:274] found: key=k8s/32198/addons-nginx-ingress-controller/kube-system/shoot--garden--alicloud, groupid=rsp-gw8wvy5ro36gu, try use vserver group mode.
I1111 10:08:52.087737       1 log.go:14] [kube-system/addons-nginx-ingress-controller]tcp listener did not change, skip [update], port=[443], nodeport=[32198]
I1111 10:08:52.247080       1 log.go:14] [kube-system/addons-nginx-ingress-controller]alicloud: fallback to find loadbalancer by tags [[{"TagKey":"kubernetes.do.not.delete","TagValue":"a3c731f7d45c411e99f509624265875a"}]]
I1111 10:08:52.532770       1 log.go:14] [kube-system/addons-nginx-ingress-controller]port 80 has legacy listener, apply default backend server group. []
I1111 10:08:52.532800       1 log.go:14] [kube-system/addons-nginx-ingress-controller]update default backend server group```

部署后启动异常

I0323 06:47:40.990888 1 clientmgr.go:140] wait for token ready
I0323 06:47:41.018636 1 alicloud.go:178] Using vpc region: region=cn-shanghai, vpcid=vpc-uf6bzcnx08y2ntl0grnf9
E0323 06:47:41.132691 1 cloudprovider-alibaba-cloud.go:49] Run CCM error: verify ccm config: cloud provider could not be initialized: could not init cloud provider "alicloud": set vpc info error: alicloud: multiple vpc found by id[vpc-uf6bzcnx08y2ntl0grnf9], length(vpcs)=0

failed to update lock

❯ oc version
Client Version: version.Info{Major:"4", Minor:"1+", GitVersion:"v4.1.0+b4261e0", GitCommit:"b4261e07ed", GitTreeState:"clean", BuildDate:"2019-10-06T23:21:44Z", GoVersion:"go1.13.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.6+a8d983c", GitCommit:"a8d983c", GitTreeState:"clean", BuildDate:"2019-12-23T12:16:26Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

The CCM log shows:
failed to update lock: operation cannot be fulfilled on endpoints "ccm": the object has been modified; please apply your changes to the latest version and try again

Taints:node.kubernetes.io/network-unavailable:NoSchedule

Name: kube-node-02
Roles:
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=kube-node-02
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: 172.16.99.94/20
projectcalico.org/IPv4IPIPTunnelAddr: 192.168.236.64
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 03 Apr 2020 14:02:50 +0800
Taints: node.kubernetes.io/network-unavailable:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: kube-node-02
AcquireTime:
RenewTime: Thu, 06 Aug 2020 15:46:49 +0800
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message


NetworkUnavailable True Thu, 28 May 2020 11:48:00 +0800 Thu, 28 May 2020 11:48:00 +0800 NoRouteCreated RouteController failed to create a route
MemoryPressure False Thu, 06 Aug 2020 15:44:36 +0800 Sun, 05 Apr 2020 13:32:12 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 06 Aug 2020 15:44:36 +0800 Sun, 05 Apr 2020 13:32:12 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 06 Aug 2020 15:44:36 +0800 Sun, 05 Apr 2020 13:32:12 +0800 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 06 Aug 2020 15:44:36 +0800 Sun, 05 Apr 2020 13:32:12 +0800 KubeletReady kubelet is posting ready status
Addresses:

I installed the CCM in my self-built cluster and then removed it, but the taint on one of the nodes cannot be deleted: every time I remove it, it is re-added automatically. How can this be resolved?
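Assuming the CCM (and any controller that re-applies the taint) has been fully removed first, the leftover taint can be cleared manually. A minimal sketch; the node name `kube-node-02` is taken from the describe output below and should be replaced with your own:

```shell
# Node with the stale taint (hypothetical; substitute your own node name).
NODE="kube-node-02"

# The trailing '-' tells kubectl to REMOVE the taint.
# Printed as a dry run here; drop the 'echo' to run it against a live cluster.
CMD="kubectl taint nodes ${NODE} node.kubernetes.io/network-unavailable:NoSchedule-"
echo "${CMD}"
```

If the taint keeps coming back, something is still re-applying it (typically a route controller that never succeeds in creating routes), and that component must be removed or fixed first.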

Error: host IP unknown; known addresses: [] for calico

version 1.13.10
Error on initializing:

kube-system   pod/calico-kube-controllers-5945d76f69-fgqzn                     0/1     Pending                      0          3m9s
kube-system   pod/calico-node-hf4q9                                            0/1     CreateContainerConfigError   0          3m17s
kube-system   pod/calico-node-mskm6                                            0/1     CreateContainerConfigError   0          3m17s
kube-system   pod/calico-node-w5gl8                                            0/1     CreateContainerConfigError   0          3m17s
kube-system   pod/coredns-6b57ffd7cc-bxz7m                                     0/1     Pending                      0          2m20s
kube-system   pod/dns-autoscaler-69778967c8-gh5rb                              0/1     Pending                      0          2m16s
kube-system   pod/kube-apiserver-k8s-01            1/1     Running                      0          4m48s
kube-system   pod/kube-apiserver-k8s-02            1/1     Running                      0          5m38s
kube-system   pod/kube-apiserver-k8s-03            1/1     Running                      0          4m49s
kube-system   pod/kube-apiserver-k8s-01    1/1     Running                      0          4m48s
kube-system   pod/kube-apiserver-k8s-02   1/1     Running                      0          5m37s
kube-system   pod/kube-apiserver-k8s-03   1/1     Running                      0          4m49s
kube-system   pod/kube-proxy-dlm7q                                             1/1     Running                      0          4m18s
kube-system   pod/kube-proxy-fdkrr                                             1/1     Running                      0          4m18s
kube-system   pod/kube-proxy-mjw52                                             1/1     Running                      0          4m18s
kube-system   pod/kube-apiserver-k8s-01            1/1     Running                      0          4m48s
kube-system   pod/kube-apiserver-k8s-02            1/1     Running                      0          5m46s
kube-system   pod/kube-apiserver-k8s-03           1/1     Running                      0          4m49s

When describing a calico-node pod, I find the error:

Error: host IP unknown; known addresses: []

Is there any restriction on the calico version?

InvalidRegionId.NotFound

I created the DaemonSet and then, following the docs, created the nginx-example test service. It failed with the following error:
I0818 10:15:46.260798 1 alicloud.go:167] Alicloud.EnsureLoadBalancer(kubernetes, default/nginx-example, cn-hangzhou, [10.111.163.38 10.111.163.29 10.111.163.30 10.111.163.37])
I0818 10:15:46.260827 1 instances.go:83] alicloud: slb backend server label does not specified, skip filter nodes by label.
I0818 10:15:46.260840 1 alicloud.go:173] alicloud: ensure loadbalancer with final nodes list , [10.111.163.38 10.111.163.29 10.111.163.30 10.111.163.37]
I0818 10:15:46.260885 1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx-example", UID:"dc7974f3-c19d-11e9-ad4f-00163e106bd5", APIVersion:"v1", ResourceVersion:"20107735", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
debug: find endpoint from local cache: https://ecs-cn-hangzhou.aliyuncs.com
E0818 10:15:46.301369 1 instances.go:187] alicloud: calling DescribeInstances error. region=10, instancename=, message=[Aliyun API Error: RequestId: 302DE4A6-FB69-4FCC-A9E9-6BA43217F8B9 Status Code: 404 Code: InvalidRegionId.NotFound Message: The specified RegionId does not exist.]
I0818 10:15:46.301442 1 service_controller.go:431] next retry delay is 80000000000
E0818 10:15:46.301451 1 service_controller.go:1123] Failed to process service default/nginx-example. Retrying in 1m20s: failed to ensure load balancer for service default/nginx-example: Aliyun API Error: RequestId: 302DE4A6-FB69-4FCC-A9E9-6BA43217F8B9 Status Code: 404 Code: InvalidRegionId.NotFound Message: The specified RegionId does not exist.
I0818 10:15:46.301463 1 service_controller.go:1097] Finished syncing service "default/nginx-example" (40.768884ms)

I0818 10:15:46.301534 1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx-example", UID:"dc7974f3-c19d-11e9-ad4f-00163e106bd5", APIVersion:"v1", ResourceVersion:"20107735", FieldPath:""}): type: 'Warning' reason: 'CreatingLoadBalancerFailed' Error creating load balancer (will retry): failed to ensure load balancer for service default/nginx-example: Aliyun API Error: RequestId: 302DE4A6-FB69-4FCC-A9E9-6BA43217F8B9 Status Code: 404 Code: InvalidRegionId.NotFound Message: The specified RegionId does not exist.

alicloud: unable to split instanceid and region from nodename

I created a K8s cluster on Alibaba Cloud ECS and now need to use SLB. What should I do? Do I have to reset the cluster?

Does that mean every node name must be of the form "${REGION_ID}.${INSTANCE_ID}"?

Log:
service-controller Error creating load balancer (will retry): failed to ensure load balancer for service default/nginx: alicloud: unable to split instanceid and region from nodename, error unexpected nodename=kube-node-01

kubelet.service
ExecStart=/usr/bin/kubelet \
  --hostname-override=cn-hangzhou.i-bp11bzoz5i207p0g \
  --provider-id=cn-hangzhou.i-bp11bzoz5i20y3r0g \
  --cloud-provider=external
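The error above comes from the CCM splitting the node name on the first dot, so the node must be registered as `${REGION_ID}.${INSTANCE_ID}`. A sketch of constructing that name; the region and instance IDs below are hypothetical placeholders:

```shell
# Hypothetical values. On a real ECS instance they can be fetched from the
# instance metadata service, e.g.:
#   REGION_ID=$(curl -s http://100.100.100.200/latest/meta-data/region-id)
#   INSTANCE_ID=$(curl -s http://100.100.100.200/latest/meta-data/instance-id)
REGION_ID="cn-hangzhou"
INSTANCE_ID="i-bp11bzoz5i20example"

# The CCM expects every node to be registered as ${REGION_ID}.${INSTANCE_ID}.
NODE_NAME="${REGION_ID}.${INSTANCE_ID}"
echo "--hostname-override=${NODE_NAME} --provider-id=${NODE_NAME}"
```

With the node name in this form, no cluster reset is needed; restart the kubelet with the overridden hostname so the node re-registers under the expected name.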

NetworkUnavailable >>>> RouteController failed to create a route

The startup flags are already configured:
- command:
- /cloud-controller-manager
- --leader-elect=false
- --cloud-provider=alicloud
- --allow-untagged-cloud=true
- --cluster-cidr=10.228.0.0/16
- --allocate-node-cidrs=false
- --configure-cloud-routes=false
- --route-reconciliation-period=30s
- --use-service-account-credentials=true
But it still keeps reporting that no route has been created and taints the nodes with network-unavailable.
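With `--configure-cloud-routes=false` the route controller never programs any routes, so nothing ever clears the NetworkUnavailable condition. One hedged fix, assuming you actually want the CCM to manage VPC routes, is to enable route management; a sketch of the flag combination (the CIDR is the one from the args above, adjust for your cluster):

```shell
# Sketch of CCM args with cloud-route management enabled, written to a temp
# file so the fragment can be inspected. Flag values are assumptions based on
# the args pasted above; verify against your CCM version.
cat > /tmp/ccm-args.yaml <<'EOF'
- command:
  - /cloud-controller-manager
  - --leader-elect=false
  - --cloud-provider=alicloud
  - --allow-untagged-cloud=true
  - --cluster-cidr=10.228.0.0/16
  - --allocate-node-cidrs=true
  - --configure-cloud-routes=true
  - --route-reconciliation-period=30s
  - --use-service-account-credentials=true
EOF
grep -q 'configure-cloud-routes=true' /tmp/ccm-args.yaml && echo ok
```

If you intentionally keep cloud routes disabled (e.g. an overlay CNI handles pod routing), the taint has to be cleared by some other component instead.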

Failure creating route table entries on VPC with multiple route tables

It seems that the Alibaba Cloud Cloud Controller Manager can't work on a VPC that contains more than one route table. Here's the error log:

E0628 04:58:58.708289       1 route_controller.go:137] Couldn't reconcile node routes: RouteTables: alicloud: multiple route table or no route table found in vpc vpc-k1armXXXXXXXXXXXXXXXX, [[vtb-k1ageXXXXXXXXXXXXXXXX vtb-k1atyXXXXXXXXXXXXXXXX]]

CCM version used is v1.9.3.106-g3f39653-aliyun.

It seems we can work around this by configuring the cloud-config ConfigMap in the kube-system Namespace. However, this ConfigMap does not exist on Managed Kubernetes clusters, so routing between pods located on different nodes won't work.

Is there any way to fix this on Managed Kubernetes clusters?
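For reference, the workaround mentioned above amounts to pinning the CCM to a single route table via its cloud-config. A sketch only: the key name `routeTableIDs` is an assumption and may differ by CCM version, so verify it against the cloud-config schema your version ships; the route table ID is a placeholder:

```shell
# Hedged sketch of a cloud-config fragment pinning the CCM to one route table.
# ASSUMPTION: the key name "routeTableIDs" -- check your CCM version's schema.
# The vtb- ID below is a placeholder.
cat > /tmp/cloud-config.json <<'EOF'
{
  "Global": {
    "routeTableIDs": "vtb-k1ageXXXXXXXXXXXXXXXX"
  }
}
EOF
grep -q 'routeTableIDs' /tmp/cloud-config.json && echo ok
```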

Load balancer bandwidth will not change when new value equals default value

How to reproduce it:
Step 1. Create a Service of type LoadBalancer with the annotation
service.beta.kubernetes.io/alicloud-loadbalancer-bandwidth=20
Step 2. Change the annotation to
service.beta.kubernetes.io/alicloud-loadbalancer-bandwidth=100 (100 is the default bandwidth value)

What you expected to happen:
The SLB's bandwidth should be changed to 100, but nothing changed.
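The reproduction steps above can be sketched as a manifest; the selector and ports are hypothetical filler, only the annotation matters for this bug (changing it from 20 to 100 is what fails to propagate when 100 happens to be the provider-side default):

```shell
# Minimal Service manifest for the reproduction steps above, written to a
# temp file for inspection. Selector/ports are hypothetical; the annotation
# is the part under test.
cat > /tmp/nginx-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-example
  annotations:
    service.beta.kubernetes.io/alicloud-loadbalancer-bandwidth: "20"
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF
grep -q 'alicloud-loadbalancer-bandwidth' /tmp/nginx-svc.yaml && echo ok
```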

Create a SECURITY_CONTACTS file.

As per the email sent to kubernetes-dev[1], please create a SECURITY_CONTACTS
file.

The template for the file can be found in the kubernetes-template repository[2].
A description for the file is in the steering-committee docs[3], you might need
to search that page for "Security Contacts".

Please feel free to ping me on the PR when you make it, otherwise I will see when
you close this issue. :)

Thanks so much, let me know if you have any questions.

(This issue was generated from a tool, apologies for any weirdness.)

[1] https://groups.google.com/forum/#!topic/kubernetes-dev/codeiIoQ6QE
[2] https://github.com/kubernetes/kubernetes-template-project/blob/master/SECURITY_CONTACTS
[3] https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance-template-short.md

Fix gometalinter error

Fix the gometalinter errors reported by:

gometalinter --disable-all --skip vendor -E goconst -E gofmt -E ineffassign -E goimports -E golint -E misspell -E vet -d ./...

Is PV supported?

Looking at the code, there doesn't seem to be any functionality for creating PVs. Is that currently unsupported?
