
googlecloudplatform / kubernetes-engine-samples


Sample applications for Google Kubernetes Engine (GKE)

Home Page: https://cloud.google.com/kubernetes-engine/docs/tutorials/

License: Apache License 2.0

JavaScript 1.68% PHP 0.23% HTML 0.35% Shell 3.46% Python 11.06% Go 3.05% Dockerfile 3.17% C# 0.29% Java 0.29% Ruby 0.02% HCL 70.91% Procfile 0.01% Smarty 0.41% PureBasic 2.85% Jupyter Notebook 2.21%
samples


kubernetes-engine-samples's People

Contributors

aburhan, ahmetb, ahrarmonsur, askmeegs, bourgeoisor, brv158, charlieyu1996, daniel-sanche, dependabot[bot], ganochenkodg, jeffmendoza, kenthua, kurtisvg, martinmaly, mathieu-benoit, melissakendall, minherz, nimjay, pjh, prajaktaw-google, pwschuurman, rbarberop, renovate-bot, shabirmean, shannonxtreme, theemadnes, therealspaceship, thiagotnunes, thompsonmax, xtineskim


kubernetes-engine-samples's Issues

SIGTERM not handled by direct-to-sd custom metrics example

Hi Guys/Girls! =)

I was trying to use the custom-metrics-autoscaling/direct-to-sd/ example in my cluster, but I noticed that it does not handle SIGTERM (and other signals...) when a pod is terminated. I made this change in a fork and it now seems to work properly: alissonperez@ee577db

Please, take a look, if you agree I can open a PR with these changes!

Thank you!

[cloudsql] MountVolume.SetUp failed for volume "cloudsql-instance-credentials"

I'm getting this error when deploying using Helm.

MountVolume.SetUp failed for volume "cloudsql-instance-credentials" : secrets "cloudsql-instance-credentials" not found

I have created the service account and DB credentials and haven't faced any errors.

kubectl get secrets
NAME                            TYPE                                  DATA      AGE
cloudsql-db-credentials         Opaque                                2         25m
cloudsql-instance-credentials   Opaque                                1         29m

My cloudsql deployment is exactly the same as in the examples.
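A common cause here (a hedged guess, since the Helm chart isn't shown) is that Helm installs the release into a namespace other than the one where the secret was created; secrets are only visible to pods in the same namespace. A minimal sketch of checking and recreating it, with the namespace and key-file path as placeholders:

kubectl get secret cloudsql-instance-credentials --namespace <release-namespace>
kubectl create secret generic cloudsql-instance-credentials \
    --from-file=credentials.json=<path-to-service-account-key>.json \
    --namespace <release-namespace>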

Can't connect to application using external IP

While following the Kubernetes Engine quickstart, I found that I'm not able to connect to the application using its external IP address.

Steps to reproduce:

  1. kubectl run hello-web --image=gcr.io/google-samples/hello-app:1.0 --port=8080
  2. kubectl expose deployment hello-web --type="LoadBalancer"
  3. Attempt to visit the app's external IP address, gathered from kubectl get service hello-web
  4. Observe that the site times out.

I'm not sure if this is an issue with how the application is written or if it's an issue with how its Service is configured. Also, I haven't tested from another environment, so I'd be interested to learn whether another user can reproduce.
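One thing worth checking, assuming the Service was created exactly as in step 2: kubectl expose without --port reuses the container port, so the load balancer would be listening on 8080 rather than 80. A hedged sketch of both workarounds (EXTERNAL_IP is a placeholder):

curl http://EXTERNAL_IP:8080
# or recreate the Service with an explicit port mapping:
kubectl delete service hello-web
kubectl expose deployment hello-web --type=LoadBalancer --port 80 --target-port 8080
kubectl get service hello-web --watch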

Documentation issue

In the URL (https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps), the names of the Pods are incorrect and lead to confusion.
Here is a manifest for a Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: metrics
      department: sales
  replicas: 3
  template:
    metadata:
      labels:
        app: metrics
        department: sales
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
Copy the manifest to a file named my-deployment.yaml, and create the Deployment:

kubectl apply -f my-deployment.yaml
Verify that three Pods are running:

kubectl get pods
The output shows three running Pods:

NAME READY STATUS RESTARTS AGE
service-how-to-76699757f9-h4xk4 1/1 Running 0 4s
service-how-to-76699757f9-tjcfq 1/1 Running 0 4s
service-how-to-76699757f9-wt9d8 1/1 Running 0
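For comparison, with the manifest above the Pod names would be expected to start with the Deployment's name; a quick check (label values taken from the manifest, output shape only illustrative):

kubectl get pods -l app=metrics,department=sales
# expected names of the form my-deployment-<replicaset-hash>-<pod-hash>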


I can't connect to my Cloud SQL instance from cloudsql-proxy

Hi guys,

I followed the official tutorial on using the Cloud SQL proxy to connect my application, but my app can't connect to the database.

When I try to use the following command:

root@web-app-689b6bd75d-bwmxj:/var/www# mysql -h 127.0.0.1 -p 3306 -u proxyuser -p

After entering my proxyuser password, or trying another user added manually, I get the following error:

ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0

This is my yaml deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web-app
spec:
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: app
          image: gcr.io/potent-zodiac-172621/teste-app:latest
          ports:
            - containerPort: 80
          env:
            - name: DB_HOST
              value: 127.0.0.1
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
            - name: DB_DATABASE
              value: admin_teste
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=potent-zodiac-172621:southamerica-east1:teste-instance=tcp:3306",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials

It is important to note that the MySQL port is hardcoded to 3306 in the application.

Could someone help me?
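A useful first debugging step (a sketch; the deployment and container names are taken from the manifest above) is to read the proxy sidecar's logs, which say whether connections arrive on 3306 and whether the service account is authorized for the instance:

kubectl logs deployment/web-app -c cloudsql-proxy
# or for the specific pod shown above:
kubectl logs web-app-689b6bd75d-bwmxj -c cloudsql-proxy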

Mention --ip_address_types Cloud SQL proxy flag

@kurtisvg should we mention the --ip_address_types flag in the deployment yamls:

https://github.com/GoogleCloudPlatform/kubernetes-engine-samples/blob/master/cloudsql/postgres_deployment.yaml
https://github.com/GoogleCloudPlatform/kubernetes-engine-samples/blob/master/cloudsql/mysql_wordpress_deployment.yaml

Perhaps in the command args array, add something like:

# Uncomment below to force the pod to use the private IP address of the Cloud SQL instance.
# The instance must have a private IP on the same network and in the same region.
# "-ip_address_types=PRIVATE",

[cloudsql] Error 403: The client is not authorized to make this request., notAuthorized with Project > Editor role

I've been attempting to perform a sample WordPress deployment using Google Cloud SQL by following [1].

The original documentation I followed is [2]. The doc states that even if I don't have the Cloud SQL > Cloud SQL Client role, I can alternatively use the Editor role to perform this deployment [3]. My account has the Editor role when I check in IAM.

Yet, I am experiencing the following error which has already been pointed out previously in [4].

Any help or suggestions are appreciated.

2018/05/12 14:45:34 couldn't connect to "<connectionName>": ensure that the account has access to "<connectionName>" (and make sure there's no typo in that name). Error during createEphemeral for <connectionName>: googleapi: Error 403: The client is not authorized to make this request., notAuthorized

[1]: Using Google Cloud SQL from a WordPress Deployment

[2]: Connecting from Kubernetes Engine
[3]: Create a service account
[4]: [cloudsql] Error 403: The client is not authorized to make this request., notAuthorized
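When the IAM roles look correct, another common cause of this 403 is the Cloud SQL Admin API not being enabled in the project the proxy authenticates against. A hedged check (PROJECT_ID is a placeholder):

gcloud services list --enabled --project PROJECT_ID | grep sqladmin
gcloud services enable sqladmin.googleapis.com --project PROJECT_ID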

Provide a sample using Cloud Pub/Sub Notifications for Cloud Storage

Hello, is it possible to provide a sample using Cloud Pub/Sub Notifications for Cloud Storage?

✅ The quotes app deals with storing and retrieving objects from Cloud Storage.
✅ The pub/sub app allows you to publish messages and then receive the published messages from Cloud Pub/Sub.

But how do you set up and use Cloud Pub/Sub Notifications for Cloud Storage using Service Catalog? 🤔

Are there any additional parameters to set in the cloud-storage ServiceInstance manifest

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: storage-instance
  namespace: storage-quotes
spec:
  clusterServiceClassExternalName: cloud-storage
  clusterServicePlanExternalName: beta
  parameters:
    location: us-central1
    bucketId: quotes-bucket

and/or cloud-pubsub ServiceInstance manifest

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: pubsub-instance
  namespace: pubsub
spec:
  clusterServiceClassExternalName: cloud-pubsub
  clusterServicePlanExternalName: beta
  parameters:
    topicId: quotes-bucket-topic
    ## what kind of additional parameters can go here ?

in order to add a notification configuration to a bucket that sends notifications to a Pub/Sub topic?
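For reference, outside Service Catalog the same notification configuration can be created directly with gsutil; a sketch using the bucket and topic names from the manifests above (assuming they already exist):

gsutil notification create -t quotes-bucket-topic -f json gs://quotes-bucket
gsutil notification list gs://quotes-bucket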

whereami app for GKE networking examples

@theemadnes and I would like to submit https://github.com/theemadnes/gke-whereami to this repo so it can exist in gcr.io/google-samples and be referenced by GCP documentation as examples.

context

We are expanding our GKE and Anthos Ingress, Service, and general networking examples/recipes for reference from GKE documentation (similar to what was done for network policy). We have recipes for internal Ingress, external Ingress, internal L4 load balancing, custom HTTP headers, sticky sessions, weight-based load balancing, multi-cluster ingress, custom gRPC health checks .... and more.

challenge

We would like to use a common app for all examples. The challenge is that we require an app that replies with enough information about itself to demonstrate whatever key feature is being focused on. The current apps in gcr.io/google-samples are either enormously complex (looking at you, hipster shop) or they don't provide enough of a response to be useful (hello-app).

whereami is a single Pod Flask app (easy to understand and deploy for the user) that provides the following as a response:

# easy to use
curl $VIP
{"cluster_name":"blog","node_name":"gke-blog-default-pool-9980a7d5-fihi.c.church-243723.internal","pod_ip":"10.4.1.8","pod_name":"whereami-64b9cc85b-g24tx","pod_name_emoji":"๐Ÿ‘จโ€โš•๏ธ","pod_namespace":"default","pod_service_account":null,"project_id":"church-243723","timestamp":"2020-07-14T01:52:31","zone":"us-central1-c"}

# and easy to pull out just the info you need
curl -s $VIP | jq .pod_name_emoji -r
👨‍⚕️

This makes it easy for demos to show scaling, proximity-based load balancing between regions, sticky sessions, etc., and the simplicity of a plain JSON response also makes it great for documentation examples (no pictures required). Would it be possible to get whereami imported into this repo and pushed to google-samples so it can be referenced and used by GKE documentation?

cc @askmeegs

Add jq, tcpdump, and iputils to whereami container image

Taking https://github.com/nicolaka/netshoot as an example, this container has a bunch of networking tools built into the image. While I don't think we need all of these, I do think that jq, tcpdump, and iputils (for ping) would be very helpful.

I realized this when I wanted to do the following from one whereami pod to another whereami pod:

kubectl exec -it foo-5cc954d898-7q7r5 -- sh

$ curl foo-svc:80 | jq '.pod_name'
sh: 1: jq: not found

@theemadnes @nicolaka

Merge mysql wordpress deployment files

Deployment files:

The two files above are almost identical, except that the latter is linked to this tutorial and uses persistent disks.
Some consideration is needed to decide how best to merge them, or keep only one of them, to avoid this duplication.

direct-to-sd doesn't build

When trying to modify and docker build the custom-metrics-autoscaling direct-to-Stackdriver example, the build fails with "no buildable Go source files in /go/src/google.golang.org/api/transport/http/internal/propagation".

$ docker build -t direct-to-sd .
Sending build context to Docker daemon  12.29kB
Step 1/9 : FROM golang:1.7.3
 ---> ef15416724f6
Step 2/9 : ADD . /go/src/direct-to-sd
 ---> Using cache
 ---> 425ccb5c64d0
Step 3/9 : RUN go get cloud.google.com/go/compute/metadata
 ---> Using cache
 ---> 4e85f56c9172
Step 4/9 : RUN go get golang.org/x/oauth2
 ---> Using cache
 ---> 0651c69fb6f1
Step 5/9 : RUN go get google.golang.org/api/monitoring/v3
 ---> Running in 896d9b622d40
src/google.golang.org/api/transport/http/dial.go:30:2: no buildable Go source files in /go/src/google.golang.org/api/transport/http/internal/propagation
The command '/bin/sh -c go get google.golang.org/api/monitoring/v3' returned a non-zero code: 1

I'm getting a volume mount error for Cloud SQL set-up

Hi there, I've followed these instructions step-by-step and have cross-referenced the configuration in this codebase, but I'm getting the following error when trying to start my container:

MountVolume.SetUp failed for volume "kubernetes.io/secret/2abd4716-bb10-11e7-82b8-42010a84010c-cloudsql-instance-credentials" (spec.Name: "cloudsql-instance-credentials") pod "2abd4716-bb10-11e7-82b8-42010a84010c" (UID: "2abd4716-bb10-11e7-82b8-42010a84010c") with: secrets "cloudsql-instance-credentials" not found

Public IPs change every few minutes

I'm using Kubernetes Engine and I have multiple nginx ingress controllers, and their public IPs change all the time. I can't use this in production because it is not the correct behaviour.

Is it normal? Can you help me?
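Ephemeral external IPs can change when the underlying forwarding rule is recreated; if the address must stay stable, one common approach (a sketch with placeholder names, not a confirmed diagnosis of this case) is to reserve a regional static IP and pin the Service to it:

gcloud compute addresses create nginx-ingress-ip --region us-central1
gcloud compute addresses describe nginx-ingress-ip --region us-central1 --format='value(address)'
kubectl patch service <ingress-controller-service> -p '{"spec":{"loadBalancerIP":"RESERVED_IP"}}'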

cloud-sql-postgres ServiceClass support

Is there any prioritization for including Cloud SQL Postgres as part of the default service classes? If not, I am curious what would be required to add Postgres support. The CloudFoundry project has had Postgres support for quite a while now, but it is not clear from documentation on how to build your own service broker/catalog from source. Any suggestions or tips would be greatly appreciated, thank you!

Invalid JSON File (credentials.json)

I've been following your instructions, but keep getting the following error when applying the deployment for the cloudsql proxy:

"2017/12/18 20:56:29 invalid json file "/secrets/cloudsql/credentials.json": invalid character '\n' in string literal"
-> I created the secret from the JSON file as you describe.

Is this a common issue? Couldn't find anything about it...
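One way to confirm whether the stored key was mangled (a sketch; the secret and key names are the ones from the tutorial, the file path is a placeholder) is to decode it and compare with the original file, then recreate the secret from the file rather than pasting its contents:

kubectl get secret cloudsql-instance-credentials -o jsonpath='{.data.credentials\.json}' | base64 --decode | head
kubectl delete secret cloudsql-instance-credentials
kubectl create secret generic cloudsql-instance-credentials --from-file=credentials.json=/path/to/key.json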

Whereami not displaying istio headers properly

Hi,

I deployed whereami:v1.1.1 today behind Istio. I enabled headers but I don't see them in the output.

My deployment manifest and ConfigMap look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: whereami
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whereami
  template:
    metadata:
      labels:
        app: whereami
    spec:
      containers:
      - name: whereami
        image: gcr.io/google-samples/whereami:v1.1.1
        ports:
          - name: http
            containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
        env:
          - name: METADATA #An arbitrary metadata field that can be used to label JSON responses
            valueFrom:
              configMapKeyRef:
                name: whereami-configmap
                key: METADATA
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: whereami-configmap
data:
  METADATA: "echo_headers_enabled"
  ECHO_HEADERS: "True"

The output is missing the headers

➜  istio-ingress-e2e-tls git:(master) ✗ curl https://gke1.abdel.cloud -s | jq .
{
  "cluster_name": "gke-1",
  "host_header": "gke1.abdel.cloud",
  "metadata": "echo_headers_enabled",
  "node_name": "gke-gke-1-default-pool-94ebed87-q2z9.c.lbg-project-278414.internal",
  "pod_name": "whereami-8454b9bbf8-5czfx",
  "pod_name_emoji": "๐Ÿ‘ฉโ€โค๏ธโ€๐Ÿ‘ฉ",
  "project_id": "lbg-project-278414",
  "timestamp": "2020-12-02T09:46:44",
  "zone": "us-west1-a"
}

Even though METADATA is set to echo_headers_enabled.

What I expect is something similar to the echoserver, deployed the same way, whose output looks like:

curl https://gke1.abdel.cloud -s
CLIENT VALUES:
client_address=127.0.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://gke1.abdel.cloud:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=/
content-length=0
host=gke1.abdel.cloud
user-agent=curl/7.72.0
via=1.1 google
x-b3-parentspanid=f43fa3a13fd64a8c
x-b3-sampled=1
x-b3-spanid=67c62c3cedef3d7b
x-b3-traceid=1b2af99f21ce5da6f43fa3a13fd64a8c
x-client-data=CgSL6ZsV
x-cloud-trace-context=7a4858982a3a0949ff73adfc32cc202e/4550108360082774779
x-envoy-attempt-count=1
x-envoy-external-address=130.211.2.42
x-forwarded-client-cert=By=spiffe://cluster.local/ns/default/sa/default;Hash=15b8e7eb324184a1aa7ae9598db6e500eaedee9caa7e5da6120499c3dbd95775;Subject="";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account
x-forwarded-for=80.216.72.128, 34.120.200.50,130.211.2.42
x-forwarded-proto=https
x-request-id=136c88d9-cbe2-95d3-afd3-e0084eda08e6
BODY:
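One thing that stands out in the Deployment above (a hedged observation, not a confirmed diagnosis): only METADATA is wired into the container via configMapKeyRef, so the ECHO_HEADERS key in the ConfigMap never reaches the pod's environment. A minimal sketch of injecting the whole ConfigMap as environment variables:

kubectl set env deployment/whereami --from=configmap/whereami-configmap
kubectl set env deployment/whereami --list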

GKE nodes restarting due to soft lockup - CPU

Nodes on the GKE cluster are restarting with error watchdog: BUG: soft lockup - CPU#2 stuck for XXs.

The above error will trigger the kernel panic Kernel panic - not syncing: softlockup: hung tasks and then the node reboots after 10sec.

GKE version: v1.14.10-gke.37

PR to allow cloud sql with Persistent disks

please see PR #96

This is for an update requested by the PM director to allow this tutorial to use Cloud SQL. We are adding a file, not changing existing ones. This is time sensitive, so please review.
Please contact panto@, ferrarim@, aminamansour@.

Multi-Attach error

Hello, I am new to Kubernetes.
I think I have found a problem and a fix for it.
Following "Contributing A Patch", I am opening an issue first.

Problem

I got a Multi-Attach error at Step 7: Updating application images at Using Persistent Disks with WordPress and MySQL.

This is the console log. You can see the Multi-Attach error in the Message column.

$kubectl apply -f wordpress.yaml
deployment.apps/wordpress configured

$kubectl get deployment
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mysql       1         1         1            1           1h
wordpress   1         2         1            1           47m

$kubectl get replicasets
NAME                   DESIRED   CURRENT   READY   AGE
mysql-5bfd5f74dd       1         1         1       1h
wordpress-78c9b8d684   1         1         1       48m
wordpress-8648c887dc   1         1         0       4m

$kubectl describe pod wordpress
~~OMISSION~~
Name:               wordpress-8648c887dc-2hcxf
~~OMISSION~~
Containers:
  wordpress:
    Container ID:
    Image:          wordpress:5.1.1
~~OMISSION~~
Events:
  Type     Reason              Age                 From                                                          Message
  ----     ------              ----                ----                                                          -------
  Normal   Scheduled           9m9s                default-scheduler                                             Successfully assigned default/wordpress-8648c887dc-2hcxf to gke-persistent-disk-tuto-default-pool-f0cd3ac1-htwh
  Warning  FailedAttachVolume  9m9s                attachdetach-controller                                       Multi-Attach error for volume "pvc-740f57ab-496c-11e9-aa81-42010a92023f" Volume is already used by pod(s) wordpress-78c9b8d684-vr9zd
  Warning  FailedMount         21s (x4 over 7m6s)  kubelet, gke-persistent-disk-tuto-default-pool-f0cd3ac1-htwh  Unable to mount volumes for pod "wordpress-8648c887dc-2hcxf_default(2a78750e-4986-11e9-aa81-42010a92023f)": timeout expired waiting for volumes to attach or mount for pod "default"/"wordpress-8648c887dc-2hcxf". list of unmounted volumes=[wordpress-persistent-storage]. list of unattached volumes=[wordpress-persistent-storage default-token-9swck]

The cause

The accessModes of wordpress-volumeclaim is ReadWriteOnce,
but Kubernetes tries to attach the volume to both Pods during the rolling update.

Solution

Change the Deployment's update strategy.
Please see the code.
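For reference, a hedged sketch of that change: switching the Deployment to the Recreate strategy so the old Pod releases the ReadWriteOnce volume before the new one starts.

kubectl patch deployment wordpress -p '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'
kubectl rollout status deployment/wordpress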

[Wordpress Setup] Trouble setting up persistent disks

I am trying to set up a Wordpress website using Kubernetes Engine. I am following the official documentation. When trying to do kubectl apply -f $WORKING_DIR/wordpress-volumeclaim.yaml, I get the following error:

error: unable to recognize "/home/username/kubernetes-engine-samples/wordpress-persistent-disks/wordpress-volumeclaim.yaml": Get https://34.68.148.241/api?timeout=32s: x509: certificate signed by unknown authority

I am not sure what is happening, nor does the error make sense to me. I am trying this on a fresh GCP account, so I am not sure what this IP address is either. It is definitely not the cluster IP, as far as I can tell in the GCP UI. The command I used to create the cluster is below:

 gcloud beta container --project "test-project" clusters create "wordpress" --zone "us-central1-a" --no-enable-basic-auth --cluster-version "1.14.7-gke.14" --machine-type "f1-micro" --image-type "COS" --disk-type "pd-standard" --disk-size "30" --metadata disable-legacy-endpoints=true --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" --num-nodes "3" --enable-stackdriver-kubernetes --enable-ip-alias --network "projects/savewiz-qa/global/networks/default" --subnetwork "projects/savewiz-qa/regions/us-central1/subnetworks/default" --default-max-pods-per-node "110" --addons HorizontalPodAutoscaling,HttpLoadBalancing --enable-autoupgrade --enable-autorepair

Any guidance or support documents to troubleshoot this? I am new to Kubernetes, and I am not on paid support, so I cannot raise my queries via tickets.
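The x509 error usually means kubectl is talking to an endpoint it has no credentials or CA certificate for; a hedged first step (cluster name and zone taken from the create command above, project ID is a placeholder) is to refresh the kubeconfig entry and confirm which cluster kubectl points at:

gcloud container clusters get-credentials wordpress --zone us-central1-a --project <your-project>
kubectl cluster-info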

Issue while connecting to mysql instance from application deployed on azure cluster

I tried to follow the instructions with the cloudsql-mysql sample.
It works perfectly if the cluster is a GCP cluster and the service broker is the gcp-broker.
However, if I try the same on an Azure cluster, I run into the following issue.

When I tried the post-deployment step of curling the URL, it gave a timeout error.
Then I tried
kubectl get pods -n cloud-mysql

which gave
NAME                         READY   STATUS             RESTARTS   AGE
musicians-586557cdf5-ksv9z   1/2     CrashLoopBackOff   3          8m26s
Logs give the below details.
kubectl logs musicians-586557cdf5-ksv9z -n cloud-mysql --all-containers=true

Output
2019/03/19 10:35:37 DB_PASSWORD environment variable unspecified or ''
[mysql] 2019/03/19 10:35:37 packets.go:36: unexpected EOF
[mysql] 2019/03/19 10:35:37 packets.go:36: unexpected EOF
[mysql] 2019/03/19 10:35:37 packets.go:36: unexpected EOF
2019/03/19 10:35:37 Failed to create a database: driver: bad connection
2019/03/19 09:54:10 using credential file for authentication; email=cloudsql-user-service-account@kubeservicebroker27.iam.gserviceaccount.com
2019/03/19 09:54:10 Listening on 127.0.0.1:3306 for kubeservicebroker27:us-central1:musicians-20326
2019/03/19 09:54:10 Ready for new connections
2019/03/19 09:59:30 New connection for "kubeservicebroker27:us-central1:musicians-20326"
2019/03/19 09:59:31 couldn't connect to "kubeservicebroker27:us-central1:musicians-20326": ensure that the account has access to "kubeservicebroker27:us-central1:musicians-20326" (and make sure there's no typo in that name). Error during createEphemeral for kubeservicebroker27:us-central1:musicians-20326: googleapi: Error 403: The client is not authorized to make this request., notAuthorized
2019/03/19 09:59:31 New connection for "kubeservicebroker27:us-central1:musicians-20326"
2019/03/19 09:59:31 Throttling refreshCfg(kubeservicebroker27:us-central1:musicians-20326): it was only called 713.896µs ago
2019/03/19 09:59:31 couldn't connect to "kubeservicebroker27:us-central1:musicians-20326": ensure that the account has access to "kubeservicebroker27:us-central1:musicians-20326" (and make sure there's no typo in that name). Error during createEphemeral for kubeservicebroker27:us-central1:musicians-20326: googleapi: Error 403: The client is not authorized to make this request., notAuthorized

Can somebody please help me figure out what I am missing?

How to run elasticsearch pod in gke autopilot mode

Hi there, can someone please tell me why the pod deployed through GKE Autopilot mode throws this error, when the describe events suggest that the init containers were successfully created, started, and completed the task of running "sysctl -w vm.max_map_count=262144"?

max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]

The GKE cluster throws the stack-trace error below for the pod deployed in GKE Autopilot mode. These are the logs of elasticsearch-pod-0:

ubuntu@ubuntu$ kubectl logs  elasticsearch-pod-0

[2021-03-04T18:57:19,302][INFO ][o.e.n.Node               ] [] initializing ...
[2021-03-04T18:57:19,520][INFO ][o.e.e.NodeEnvironment    ] [gbOK4rP] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sdb)]], net usable_space [9.7gb], net total_space [9.7gb], spins? [unknown], types [ext4]
[2021-03-04T18:57:19,521][INFO ][o.e.e.NodeEnvironment    ] [gbOK4rP] heap size [1.9gb], compressed ordinary object pointers [true]
[2021-03-04T18:57:19,523][INFO ][o.e.n.Node               ] [gbOK4rP] node name [gbOK4rP] derived from node ID; set [node.name] to override
[2021-03-04T18:57:19,525][INFO ][o.e.n.Node               ] [gbOK4rP] version[5.0.0], pid[1], build[253032b/2016-10-26T05:11:34.737Z], OS[Linux/5.4.49+/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_111/25.111-b14]
[2021-03-04T18:57:22,709][INFO ][o.e.p.PluginsService     ] [gbOK4rP] loaded module [aggs-matrix-stats]
[2021-03-04T18:57:22,710][INFO ][o.e.p.PluginsService     ] [gbOK4rP] loaded module [ingest-common]
[2021-03-04T18:57:22,710][INFO ][o.e.p.PluginsService     ] [gbOK4rP] loaded module [lang-expression]
[2021-03-04T18:57:22,710][INFO ][o.e.p.PluginsService     ] [gbOK4rP] loaded module [lang-groovy]
[2021-03-04T18:57:22,772][INFO ][o.e.p.PluginsService     ] [gbOK4rP] loaded module [lang-mustache]
[2021-03-04T18:57:22,772][INFO ][o.e.p.PluginsService     ] [gbOK4rP] loaded module [lang-painless]
[2021-03-04T18:57:22,772][INFO ][o.e.p.PluginsService     ] [gbOK4rP] loaded module [percolator]
[2021-03-04T18:57:22,773][INFO ][o.e.p.PluginsService     ] [gbOK4rP] loaded module [reindex]
[2021-03-04T18:57:22,773][INFO ][o.e.p.PluginsService     ] [gbOK4rP] loaded module [transport-netty3]
[2021-03-04T18:57:22,773][INFO ][o.e.p.PluginsService     ] [gbOK4rP] loaded module [transport-netty4]
[2021-03-04T18:57:22,774][INFO ][o.e.p.PluginsService     ] [gbOK4rP] no plugins loaded
[2021-03-04T18:57:23,193][WARN ][o.e.d.s.g.GroovyScriptEngineService] [groovy] scripts are deprecated, use [painless] scripts instead
[2021-03-04T18:57:31,491][INFO ][o.e.n.Node               ] [gbOK4rP] initialized
[2021-03-04T18:57:31,492][INFO ][o.e.n.Node               ] [gbOK4rP] starting ...
[2021-03-04T18:57:32,001][INFO ][o.e.t.TransportService   ] [gbOK4rP] publish_address {10.42.0.130:9300}, bound_addresses {[::]:9300}
[2021-03-04T18:57:32,008][INFO ][o.e.b.BootstrapCheck     ] [gbOK4rP] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
ERROR: bootstrap checks failed
max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
[2021-03-04T18:57:32,086][INFO ][o.e.n.Node               ] [gbOK4rP] stopping ...
[2021-03-04T18:57:32,184][INFO ][o.e.n.Node               ] [gbOK4rP] stopped
[2021-03-04T18:57:32,184][INFO ][o.e.n.Node               ] [gbOK4rP] closing ...
[2021-03-04T18:57:32,208][INFO ][o.e.n.Node               ] [gbOK4rP] closed

Here are the events of the deployed Elasticsearch pod:
kubectl describe pod/elasticsearch-0

ubuntu@ubuntu$ kubectl describe pod/elasticsearch-0

Name:         elasticsearch-0
Namespace:    default
Priority:     0
Node:         gk3-autopilot-cluster-1-nap-1535p6rr-5687b4d4-2w8t/10.142.0.22
Start Time:   Fri, 05 Mar 2021 00:25:11 +0530
Labels:       app=elasticsearch
              controller-revision-hash=elasticsearch-6c57546b4b
              statefulset.kubernetes.io/pod-name=elasticsearch-0
Annotations:  seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:       Running
IP:           10.42.0.130
IPs:
  IP:           10.42.0.130
Controlled By:  StatefulSet/elasticsearch
Init Containers:
  increase-vm-max-map:
    Container ID:  containerd://bfda3431afee52e71a789cb7d6f612f4bb2ea5d81f9cb74bdc0733d8aa64a29f
    Image:         busybox
    Image ID:      docker.io/library/busybox@sha256:c6b45a95f932202dbb27c31333c4789f45184a744060f6e569cc9d2bf1b9ad6f
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      sysctl
      -p
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 05 Mar 2021 00:25:30 +0530
      Finished:     Fri, 05 Mar 2021 00:25:30 +0530
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:                750m
      ephemeral-storage:  1Gi
      memory:             3000Mi
    Requests:
      cpu:                750m
      ephemeral-storage:  1Gi
      memory:             3000Mi
    Environment:          <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p7f8t (ro)
Containers:
  elasticsearch:
    Container ID:   containerd://ea6d1f14ec03b683fab51feb0cb0e540f7299250d973fcfc07301bd51128f865
    Image:          elasticsearch:5.0.0
    Image ID:       docker.io/library/elasticsearch@sha256:29f6b68a0088238f4a108e6c725163130e382a0f34ed62159c82e3961f0639fa
    Ports:          9200/TCP, 9300/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Terminated
      Reason:       Error
      Exit Code:    78
      Started:      Fri, 05 Mar 2021 00:27:15 +0530
      Finished:     Fri, 05 Mar 2021 00:27:32 +0530
    Last State:     Terminated
      Reason:       Error
      Exit Code:    78
      Started:      Fri, 05 Mar 2021 00:26:31 +0530
      Finished:     Fri, 05 Mar 2021 00:26:49 +0530
    Ready:          False
    Restart Count:  3
    Limits:
      cpu:                750m
      ephemeral-storage:  1Gi
      memory:             3000Mi
    Requests:
      cpu:                750m
      ephemeral-storage:  1Gi
      memory:             3000Mi
    Environment Variables from:
      elastic-config  ConfigMap  Optional: false
    Environment:         <none>
    Mounts:
      /usr/share/elasticsearch/config/elasticsearch.yml from elasticsearch-configfile (rw,path="elasticsearch.yml")
      /usr/share/elasticsearch/data from elastic-pvc (rw,path="elastic-data")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p7f8t (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  elastic-pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  elastic-pvc-elasticsearch-0
    ReadOnly:   false
  elasticsearch-configfile:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      elasticsearch-config
    Optional:  false
  kube-api-access-p7f8t:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                    From                                   Message
  ----     ------                  ----                   ----                                   -------
  Normal   TriggeredScaleUp        3m43s                  cluster-autoscaler                     pod triggered scale-up: [{https://content.googleapis.com/compute/v1/projects/lexical-pattern-305514/zones/us-east1-c/instanceGroups/gk3-autopilot-cluster-1-nap-1535p6rr-5687b4d4-grp 0->1 (max: 1000)}]
  Warning  FailedScheduling        3m34s (x3 over 4m39s)  gke.io/optimize-utilization-scheduler  0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory.
  Warning  FailedScheduling        3m2s (x2 over 3m2s)    gke.io/optimize-utilization-scheduler  0/3 nodes are available: 1 Insufficient ephemeral-storage, 2 Insufficient cpu, 2 Insufficient memory.
  Warning  FailedScheduling        2m42s (x3 over 2m54s)  gke.io/optimize-utilization-scheduler  0/3 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 2 Insufficient cpu, 2 Insufficient memory.
  Normal   Scheduled               2m32s                  gke.io/optimize-utilization-scheduler  Successfully assigned default/elasticsearch-0 to gk3-autopilot-cluster-1-nap-1535p6rr-5687b4d4-2w8t
  Normal   SuccessfulAttachVolume  2m17s                  attachdetach-controller                AttachVolume.Attach succeeded for volume "pvc-d9ab9683-a408-421f-96f3-dd4eba1b871d"
  Normal   Pulling                 2m13s                  kubelet                                Pulling image "busybox"
  Normal   Pulled                  2m13s                  kubelet                                Successfully pulled image "busybox"
  Normal   Created                 2m13s                  kubelet                                Created container increase-vm-max-map
  Normal   Started                 2m12s                  kubelet                                Started container increase-vm-max-map
  Normal   Pulling                 2m12s                  kubelet                                Pulling image "elasticsearch:5.0.0"
  Normal   Pulled                  2m4s                   kubelet                                Successfully pulled image "elasticsearch:5.0.0"
  Normal   Created                 27s (x4 over 2m1s)     kubelet                                Created container elasticsearch
  Normal   Started                 27s (x4 over 2m1s)     kubelet                                Started container elasticsearch
  Normal   Pulled                  27s (x3 over 103s)     kubelet                                Container image "elasticsearch:5.0.0" already present on machine
  Warning  BackOff                 10s (x4 over 86s)      kubelet                                Back-off restarting failed container


Here's the Elasticsearch StatefulSet YAML resource:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  selector:
    matchLabels:
      app: elasticsearch
  serviceName: "elasticsearch"
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: elasticsearch:5.0.0
        resources:
          limits:
            memory: "4000Mi"
            cpu: 1000m
          requests:
            memory: "3000Mi"
            cpu: 600m
        imagePullPolicy: IfNotPresent
        envFrom:
          - configMapRef:
              name: fx-elastic-config
        ports:
        - containerPort: 9200
          name: http-port
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: elastic-pvc
          mountPath: /usr/share/elasticsearch/data
          subPath: elastic-data
        - mountPath: "/usr/share/elasticsearch/config/elasticsearch.yml"
          subPath: elasticsearch.yml
          name: elasticsearch-configfile
             
      volumes: 
        - name: elasticsearch-configfile
          configMap:
            name: elasticsearch-config
           
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: elastic-data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', 'sysctl', '-w', 'vm.max_map_count=262144']
        command: ["sh", "-c", "echo", "vm.max_map_count=262144", ">>", "/etc/sysctl.conf"]
        command: ["sh", "-c", "sysctl", "-p"]
        securityContext:
          privileged: false

      - name: increase-fd-ulimit
        image: busybox
        command: ['sh', '-c', 'ulimit -n 65536']
        securityContext:
          privileged: false
       
       
  volumeClaimTemplates:
  - metadata:
      name: elastic-pvc
#      annotations:
#        ...
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "premium-rwo"
      resources:
        requests:
          storage: 10Gi
---

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  type: ClusterIP  
  selector:
   app: elasticsearch
  ports:
    - port: 9200
      targetPort: 9200
      name: elasticsearch-http
    - port: 9300
      targetPort: 9300


The same Elasticsearch StatefulSet resource file, when deployed in GKE Standard mode, runs successfully without any errors.

Can someone please tell me why this behaviour occurs and how to fix it?
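One hedged observation about the increase-vm-max-map init container above (not necessarily the whole story on Autopilot): when a mapping contains several command: keys only the last one survives parsing, and with sh -c only the first argument after -c is treated as the script, so none of the variants actually runs sysctl -w vm.max_map_count=262144 (which is why the init container "succeeds" yet the bootstrap check still fails). A corrected sketch would be a single command, though vm.max_map_count is a node-level sysctl that generally needs a privileged container, which Autopilot may refuse:

      initContainers:
      - name: increase-vm-max-map
        image: busybox
        command: ["sh", "-c", "sysctl -w vm.max_map_count=262144"]
        securityContext:
          privileged: true   # likely required for this sysctl; may be rejected on Autopilot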

[cloudsql] Error 403: The client is not authorized to make this request., notAuthorized

I followed the tutorial connect-container-engine and also stumbled on this directory. Unfortunately, I can't get it to work as the logs show

2017-03-07T13:53:46.175990859Z 2017/03/07 13:53:46 New connection for "ory-cloud:europe-west1:ory-cloud-platform-sql"
2017-03-07T13:53:46.176035707Z 2017/03/07 13:53:46 Throttling refreshCfg(ory-cloud:europe-west1:ory-cloud-platform-sql): it was only called 15.001350074s ago
2017-03-07T13:53:46.176097862Z 2017/03/07 13:53:46 couldn't connect to "ory-cloud:europe-west1:ory-cloud-platform-sql": ensure that the account has access to "ory-cloud:europe-west1:ory-cloud-platform-sql" (and make sure there's no typo in that name). Error during createEphemeral for ory-cloud:europe-west1:ory-cloud-platform-sql: googleapi: Error 403: The client is not authorized to make this request., notAuthorized

I confirmed that the name ory-cloud:europe-west1:ory-cloud-platform-sql is the right one:

(screenshot)

I additionally checked that the client has the right privileges:

(screenshot)
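For reference, a hedged sketch of granting the proxy's service account the Cloud SQL Client role (project ID and service-account email are placeholders):

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/cloudsql.client"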

HTTP/2 issue for hello-app-tls

Hi,

There seems to be an issue with using HTTP/2 for the hello-app-tls application. I'm getting a 404 response.

HTTPS is working fine.

Any ideas?

$ curl -v --insecure https://35.190.89.42/
*   Trying 35.190.89.42...
* TCP_NODELAY set
* Connected to 35.190.89.42 (35.190.89.42) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=self-signed.ignore
*  start date: Jun  7 10:38:17 2018 GMT
*  expire date: May 12 10:38:17 2023 GMT
*  issuer: CN=self-signed.ignore
*  SSL certificate verify result: self signed certificate (18), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x559f5e676dc0)
> GET / HTTP/1.1
> Host: 35.190.89.42
> User-Agent: curl/7.52.1
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 404
< date: Thu, 07 Jun 2018 11:10:55 GMT
< content-length: 21
< content-type: text/plain; charset=utf-8
< via: 1.1 google
< alt-svc: clear
<
* Curl_http_done: called premature == 0
* Connection #0 to host 35.190.89.42 left intact

dev shm size option works in google kubernetes engine 1.12 but doesn't work on higher versions

Sample Deployment File:
apiVersion: v1
kind: Pod
metadata:
  name: myvolumes-pod1
spec:
  containers:
  - image: alpine
    imagePullPolicy: IfNotPresent
    name: myvolumes-container-1
    command: ['sh', '-c', 'echo The Bench Container 1 is Running ; sleep 3600']
    volumeMounts:
    - mountPath: /dev/shm
      name: dshm
  - image: alpine
    imagePullPolicy: IfNotPresent
    name: myvolumes-container-2
    command: ['sh', '-c', 'echo The Bench Container 2 is Running ; sleep 3600']
    volumeMounts:
    - mountPath: /dev/shm
      name: dshm
  - image: alpine
    imagePullPolicy: IfNotPresent
    name: myvolumes-container-3
    command: ['sh', '-c', 'echo The Bench Container 3 is Running ; sleep 3600']
    volumeMounts:
    - mountPath: /dev/shm
      name: dshm
  volumes:
  - name: dshm
    emptyDir:
      medium: Memory
      sizeLimit: "1024Mi"

Need to increase /dev mount point size along with /dev/shm size.

GKE Cluster hangs

This is very weird, since we have been using Kubernetes for a very long time and had never encountered this issue before. Today I was creating multiple node pools and downsizing the default node pool so that all the application pods would move to the new node pool. Unfortunately, I configured a small disk size, which put pressure on the VM when I tried to schedule a new pod on it, and the cluster started to hang as the default pool kept autoscaling itself in a loop and repeatedly threw unknown errors. Can someone from the GKE developers help me with this?

missing --zone in "gcloud container clusters create" command

When creating the cluster for the persistent storage example the command in the doc is:
gcloud container clusters create persistent-disk-tutorial --num-nodes=3

I received an error telling me that I had to specify a zone, so I ran this (successfully):
gcloud container clusters create persistent-disk-tutorial --num-nodes=3 --zone=us-central1-a

Have a good day, and thanks for the examples!

Support gRPC

We should add gRPC support to the whereami binary (including support for the standard gRPC health check API)

Any way to download plaintext logs?

We are running a suite of tests on GKE. We are using the default logging location (Stackdriver).

I want to get only the text payloads from my logs, e.g. the output of this command:
gcloud logging read 'resource.type=k8s_container resource.labels.project_id=xl-ml-test resource.labels.location=us-central1-b resource.labels.cluster_name=xl-ml-test resource.labels.namespace_name=automated resource.labels.pod_name:pt-nightly-resnet50-convergence-v3-8-manual-gd258 timestamp>="2020-01-01T00:00:00Z"' --limit 10000000000000 --order asc --format "value(textPayload)" > log.txt && sed -i '/^$/d' log.txt

I can generate URLs to see those same logs in Stackdriver Logging UI but there's no way to download them.

How would you recommend we structure our GKE logging to make it easier to access plaintext logs from our containers?
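One hedged option for bulk access is to route the container logs into a Cloud Storage bucket with a logging sink and download the files from there; the sink name, bucket, and filter below are placeholders:

gcloud logging sinks create gke-text-logs storage.googleapis.com/my-log-bucket \
    --log-filter='resource.type="k8s_container" resource.labels.cluster_name="xl-ml-test"'
gsutil ls gs://my-log-bucket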

html output

If you check the output of the app in a browser, it's not very readable since it's JSON.

Would it be possible to have an HTML output on a different path than /, for instance /html?

Thanks

Ingress apply command wrong for hello-app-tls

The kubectl apply ingress command is wrong in the hello-app-tls' README.md.
kubectl apply -f manifests/helloweb-service.yaml should be
kubectl apply -f manifests/helloweb-ingress-tls.yaml

Can I run the cloudsql-proxy as a daemon set?

I am trying to optimize the resources needed. The sidecar-container pattern creates quite a lot of redundant cloudsql-proxy pods. I came across the idea of running the cloudsql-proxy behind a service, described in the other issue.

I wonder if we can run the cloudsql proxy as a DaemonSet. I suppose the pods could share the same socket created by the cloudsql proxy. I asked on SO with the tags suggested in the previous issue, but there was no response, so I am creating this issue.

Unable to delete Kubernetes Cluster

Hi team,

As an owner of the project on GCP, I'm trying to delete a Kubernetes cluster but am facing the permission issues below. Could anyone please help me resolve this issue quickly? Thanks.

(1) Google Compute Engine: Required 'compute.firewalls.delete' permission for 'projects/gcp-api-demo/global/firewalls/gke-bootcamp-a99e9ef1-ssh'
(2) Google Compute Engine: Required 'compute.firewalls.delete' permission for 'projects/gcp-api-demo/global/firewalls/gke-bootcamp-a99e9ef1-vms'
(3) Google Compute Engine: Required 'compute.firewalls.delete' permission for 'projects/gcp-api-demo/global/firewalls/gke-bootcamp-a99e9ef1-all'
(4) Google Compute Engine: Required 'compute.instanceGroupManagers.delete' permission for 'projects/gcp-api-demo/zones/us-west1-a/instanceGroupManagers/gke-bootcamp-default-pool-d420c78b-grp'
(5) (1) Google Compute Engine: Required 'compute.routes.list' permission for 'projects/gcp-api-demo' (2) Google Compute Engine: Required 'compute.projects.get' permission for 'projects/gcp-api-demo'

Node labels get deleted when a node gets re-created

We bind our pods to nodes with node labels. For the past couple of days a few of the pods were not getting scheduled, and then we found that 4 of our K8s nodes had age=4 days (the others had 20 days). When we described the new nodes, we found that the labels we had set were not present. This can lead to unwanted outages of services.
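For context, labels added to nodes with kubectl are not preserved when GKE repairs or recreates a node; labels that must survive recreation are usually set on the node pool itself. A sketch with placeholder names:

gcloud container node-pools create labeled-pool \
    --cluster my-cluster --zone us-central1-a \
    --node-labels=dedicated=frontend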

frontend service: Error syncing load balancer

I am running the guestbook example on a Google Container Engine installation (tutorial).
While creating the frontend service, I got an error:
Error syncing load balancer: failed to ensure load balancer (failed to create forwarding rule for load balancer)


NAME       TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
frontend   LoadBalancer   X.X.X.X      <pending>     80:31463/TCP   11m

External IP is pending.

Error in Kubernetes Engine > Services:

Error syncing load balancer: failed to ensure load balancer: failed to create forwarding rule for load balancer (a5ec51ef24a914395ac55474dbf1a76b(default/frontend)): googleapi: Error 412: Constraint constraints/compute.restrictLoadBalancerCreationForTypes violated for projects/test-03-10-05-2020. Forwarding Rule projects/test-03-10-05-2020/regions/us-east1/forwardingRules/a5ec51ef24a914395ac55474dbf1a76b of type EXTERNAL_NETWORK_TCP_UDP is not allowed., conditionNotMet
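The 412 points at an organization policy rather than at the Service itself; a hedged way to inspect the constraint for this project:

gcloud resource-manager org-policies describe compute.restrictLoadBalancerCreationForTypes \
    --project test-03-10-05-2020 --effective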

Autoscale of pubsub subscriber is failing with error

I followed the article to scale the subscriber: https://cloud.google.com/kubernetes-engine/docs/tutorials/autoscaling-metrics#pubsub_8. However, the HPA is failing with the error below. I am using a private cluster (1.15.12-gke.2), which should not be an issue as per my understanding.

ScalingActive False FailedGetExternalMetric the HPA was unable to compute the replica count: unable to get external metric default/pubsub.googleapis.com|subscription|num_undelivered_messages/&LabelSelector{MatchLabels:map[string]string{resource.labels.subscription_id: echo-read,},MatchExpressions:[],}: unable to fetch metrics from external metrics API: pubsub.googleapis.com|subscription|num_undelivered_messages.external.metrics.k8s.io is forbidden: User "system:serviceaccount:kube-system:horizontal-pod-autoscaler" cannot list resource "pubsub.googleapis.com|subscription|num_undelivered_messages" in API group "external.metrics.k8s.io" in the namespace "default"

  • I could see the metrics in Stackdriver; however, the HPA seems to be failing.
  • I tried adding a ClusterRoleBinding for the given user, however with no success :( (see the checks sketched below)
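A couple of hedged checks for whether the external metrics API is actually being served; the custom-metrics-stackdriver-adapter from the tutorial normally runs in the custom-metrics namespace:

kubectl get apiservices | grep external.metrics
kubectl get pods -n custom-metrics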

POSTGRES_DB_PASSWORD to POSTGRES_PASSWORD

With Cloud SQL database version PostgreSQL 11, this line of code throws the error:
You must specify POSTGRES_PASSWORD to a non-empty value for the superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".

The proposed code fix is to change POSTGRES_DB_PASSWORD to POSTGRES_PASSWORD and POSTGRES_DB_USER to POSTGRES_USER:

        - name: POSTGRES_DB_HOST
          value: 127.0.0.1:5432
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              key: username
              name: cloudsql-user-credentials
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              key: password
              name: cloudsql-user-credentials

Can I run the cloudsql-proxy as a standalone service?

Hi,
I'm trying to connect my GKE to my Cloud SQL instance.
The docs speak about a sidecar-container using the b.gcr.io/cloudsql-docker/gce-proxy:1.05 image to proxy the requests from 127.0.0.1 through to my Cloud SQL instance - this works like a charm.

But I'm asking myself whether it would be more efficient to have one pod behind a Service doing the proxying for all other pods, instead of one sidecar container in every pod that wants to access the database.

Tried setting that up, but the IP of the sqlproxy-service isn't allowed to access the Cloud SQL instance:

connect ECONNREFUSED <ip of proxy service>:3306

Is there a way around this or should I just stick to the one sidecar-container in every pod approach?

Thanks.
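For what it's worth, when the proxy runs behind a Service instead of as a sidecar it also has to listen on all interfaces rather than its default 127.0.0.1; a hedged sketch of the relevant instance flag (project, region, and instance are placeholders):

/cloud_sql_proxy -instances=PROJECT:REGION:INSTANCE=tcp:0.0.0.0:3306 \
    -credential_file=/secrets/cloudsql/credentials.json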
