
Kubernetes Examples

This directory contains a number of examples of how to run real applications with Kubernetes.

Refer to the Kubernetes documentation for how to execute the tutorials.

Maintained Examples

Maintained Examples are expected to be updated with every Kubernetes release: to use the latest features, follow current guidelines and best practices, and refresh command syntax, output, and prerequisites as needed.

| Name | Description | Notable Features Used | Complexity Level |
|------|-------------|-----------------------|------------------|
| Guestbook | PHP app with Redis | Deployment, Service | Beginner |
| Guestbook-Go | Go app with Redis | Deployment, Service | Beginner |
| WordPress | WordPress with MySQL | Deployment, Persistent Volume with Claim | Beginner |
| Cassandra | Cloud Native Cassandra | Daemon Set, Stateful Set, Replication Controller | Intermediate |

Note: Please add examples that are maintained to the list above.

See Example Guidelines for a description of what goes in this directory, and what examples should contain.

Contributing

Please see CONTRIBUTING.md for instructions on how to contribute.


Contributors

a-robinson, ahmetb, artfulcoder, bgrant0607, brendandburns, chrislovecnm, dchen1107, deads2k, derekwaynecarr, eparis, erictune, humblec, j3ffml, jayunit100, k8s-ci-robot, lavalamp, mattf, mikedanese, mwielgus, nikhiljindal, roberthbailey, rootfs, satnam6502, sebgoa, smarterclayton, sttts, sub-mod, thockin, vmarmol, zmerlynn


Issues

When creating the example Redis cluster, scaling gives unexpected replica counts

Hi, when I create the Redis cluster I use steps like this:

kubectl scale rc redis --replicas=3
kubectl scale rc redis-sentinel --replicas=3
This will create two additional replicas of the redis server and two additional replicas of the redis sentinel.

but it created two additional replicas of the redis server and three additional replicas of the redis sentinel.
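A quick way to compare the desired and observed replica counts is to look at the replication controllers directly (a minimal sketch; the rc names are taken from the commands above, and matching the sentinel pods by name rather than by a label selector is an assumption):

# Desired vs. current replicas for both controllers
kubectl get rc redis redis-sentinel
# List the pods that actually exist
kubectl get pods | grep redis-sentinel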

CrashLoopBackOff for mysql-deployment.yaml

I am following the instructions to create the wordpress-mysql deployment. The only difference from the instructions is that I created the PersistentVolume with NFS. I can see the PersistentVolumeClaim successfully bound to the PersistentVolume, but 'describe pod' shows CrashLoopBackOff. The following is the console output:

Name:		wordpress-mysql-1894417608-vljsm
Namespace:	default
Node:		rh4/8.0.0.8
Start Time:	Thu, 27 Jul 2017 14:59:02 +0000
Labels:		app=wordpress
		pod-template-hash=1894417608
		tier=mysql
Annotations:	kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"wordpress-mysql-1894417608","uid":"2071b96d-72dc-11e7-aafb-001dd...
Status:		Running
IP:		10.233.109.6
Controllers:	ReplicaSet/wordpress-mysql-1894417608
Containers:
  mysql:
    Container ID:	docker://586fb71c34351a58504629aa59457620c418273bb8e00751f02e9095bdf7ae45
    Image:		mysql:5.6
    Image ID:		docker-pullable://mysql@sha256:2897982d4c086b03586a1423d0cbf33688960ef7534b7bb51b9bcfdb6c3597e7
    Port:		3306/TCP
    State:		Waiting
      Reason:		CrashLoopBackOff
    Last State:		Terminated
      Reason:		Error
      Exit Code:	1
      Started:		Thu, 27 Jul 2017 15:25:23 +0000
      Finished:		Thu, 27 Jul 2017 15:25:24 +0000
    Ready:		False
    Restart Count:	10
    Environment:
      MYSQL_ROOT_PASSWORD:	<set to the key 'password.txt' in secret 'mysql-pass'>	Optional: false
    Mounts:
      /var/lib/mysql from mysql-persistent-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-n94rj (ro)
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	False 
  PodScheduled 	True 
Volumes:
  mysql-persistent-storage:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	mysql-pv-claim
    ReadOnly:	false
  default-token-n94rj:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-n94rj
    Optional:	false
QoS Class:	BestEffort
Node-Selectors:	<none>
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath		Type		Reason		Message
  ---------	--------	-----	----			-------------		--------	------		-------
  29m		29m		1	default-scheduler				Normal		Scheduled	Successfully assigned wordpress-mysql-1894417608-vljsm to rh4
  29m		29m		1	kubelet, rh4		spec.containers{mysql}	Normal		Created		Created container with docker id 9f305c7fe737; Security:[seccomp=unconfined]
  29m		29m		1	kubelet, rh4		spec.containers{mysql}	Normal		Started		Started container with docker id 9f305c7fe737
  29m		29m		1	kubelet, rh4		spec.containers{mysql}	Normal		Created		Created container with docker id ba26b6fb0afc; Security:[seccomp=unconfined]
  29m		29m		1	kubelet, rh4		spec.containers{mysql}	Normal		Started		Started container with docker id ba26b6fb0afc
  29m		29m		2	kubelet, rh4					Warning		FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "mysql" with CrashLoopBackOff: "Back-off 10s restarting failed container=mysql pod=wordpress-mysql-1894417608-vljsm_default(2074c5c0-72dc-11e7-aafb-001dd8008f63)"

  28m	28m	1	kubelet, rh4	spec.containers{mysql}	Normal	Started		Started container with docker id d76e629b9548
  28m	28m	1	kubelet, rh4	spec.containers{mysql}	Normal	Created		Created container with docker id d76e629b9548; Security:[seccomp=unconfined]
  28m	28m	2	kubelet, rh4				Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "mysql" with CrashLoopBackOff: "Back-off 20s restarting failed container=mysql pod=wordpress-mysql-1894417608-vljsm_default(2074c5c0-72dc-11e7-aafb-001dd8008f63)"

  28m	28m	1	kubelet, rh4	spec.containers{mysql}	Normal	Started		Started container with docker id fcae4f1e7f86
  28m	28m	1	kubelet, rh4	spec.containers{mysql}	Normal	Created		Created container with docker id fcae4f1e7f86; Security:[seccomp=unconfined]
  28m	28m	3	kubelet, rh4				Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "mysql" with CrashLoopBackOff: "Back-off 40s restarting failed container=mysql pod=wordpress-mysql-1894417608-vljsm_default(2074c5c0-72dc-11e7-aafb-001dd8008f63)"

  27m	27m	1	kubelet, rh4	spec.containers{mysql}	Normal	Started		Started container with docker id 5ab0b825b247
  27m	27m	1	kubelet, rh4	spec.containers{mysql}	Normal	Created		Created container with docker id 5ab0b825b247; Security:[seccomp=unconfined]
  27m	26m	6	kubelet, rh4				Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "mysql" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=mysql pod=wordpress-mysql-1894417608-vljsm_default(2074c5c0-72dc-11e7-aafb-001dd8008f63)"

  26m	26m	1	kubelet, rh4	spec.containers{mysql}	Normal	Created		Created container with docker id 1fd6c0d3ac9d; Security:[seccomp=unconfined]
  26m	26m	1	kubelet, rh4	spec.containers{mysql}	Normal	Started		Started container with docker id 1fd6c0d3ac9d
  26m	23m	13	kubelet, rh4				Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "mysql" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=mysql pod=wordpress-mysql-1894417608-vljsm_default(2074c5c0-72dc-11e7-aafb-001dd8008f63)"

  23m	23m	1	kubelet, rh4	spec.containers{mysql}	Normal	Created		Created container with docker id 10d68e7ac5b3; Security:[seccomp=unconfined]
  23m	23m	1	kubelet, rh4	spec.containers{mysql}	Normal	Started		Started container with docker id 10d68e7ac5b3
  18m	18m	1	kubelet, rh4	spec.containers{mysql}	Normal	Started		Started container with docker id 06848eb5a40d
  18m	18m	1	kubelet, rh4	spec.containers{mysql}	Normal	Created		Created container with docker id 06848eb5a40d; Security:[seccomp=unconfined]
  13m	13m	1	kubelet, rh4	spec.containers{mysql}	Normal	Started		Started container with docker id b685ed1a8f06
  13m	13m	1	kubelet, rh4	spec.containers{mysql}	Normal	Created		Created container with docker id b685ed1a8f06; Security:[seccomp=unconfined]
  8m	2m	2	kubelet, rh4	spec.containers{mysql}	Normal	Started		(events with common reason combined)
  8m	2m	2	kubelet, rh4	spec.containers{mysql}	Normal	Created		(events with common reason combined)
  29m	2m	11	kubelet, rh4	spec.containers{mysql}	Normal	Pulled		Container image "mysql:5.6" already present on machine
  23m	1s	109	kubelet, rh4				Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "mysql" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=mysql pod=wordpress-mysql-1894417608-vljsm_default(2074c5c0-72dc-11e7-aafb-001dd8008f63)"

  29m	1s	135	kubelet, rh4	spec.containers{mysql}	Warning	BackOff	Back-off restarting failed docker container
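The events above only show the back-off; the reason mysqld keeps exiting is usually visible in the crashed container's own output. A minimal check, assuming the pod name from the describe output above:

# Output of the previous (crashed) attempt of the mysql container
kubectl logs wordpress-mysql-1894417608-vljsm --previous
# Add -c mysql if the pod ever gains more than one container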

Cassandra image v12 still has Cassandra 3.9

Running nodetool version against gcr.io/google-samples/cassandra:v12 gets me:

nodetool version
ReleaseVersion: 3.9

I'm wondering if perhaps a bad build was pushed and has not been refreshed since?

Elasticsearch production cluster "403: cannot get endpoints" errors

Following the docs for creating a production cluster, I get the following 403 endpoints errors from the master, client, and data pod logs:

[2017-10-16 20:59:07,180][WARN ][io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider] [Annihilus] Exception caught during discovery javax.ws.rs.WebApplicationException : HTTP 403 User "system:serviceaccount:default:elasticsearch" cannot get endpoints in the namespace "default".
javax.ws.rs.WebApplicationException: HTTP 403 User "system:serviceaccount:default:elasticsearch" cannot get endpoints in the namespace "default".
	at io.fabric8.kubernetes.api.ExceptionResponseMapper.fromResponse(ExceptionResponseMapper.java:25)
	at io.fabric8.kubernetes.api.ExceptionResponseMapper.fromResponse(ExceptionResponseMapper.java:16)
	at org.apache.cxf.jaxrs.client.ClientProxyImpl.checkResponse(ClientProxyImpl.java:302)
	at org.apache.cxf.jaxrs.client.ClientProxyImpl.handleResponse(ClientProxyImpl.java:725)
	at org.apache.cxf.jaxrs.client.ClientProxyImpl.doChainedInvocation(ClientProxyImpl.java:683)
	at org.apache.cxf.jaxrs.client.ClientProxyImpl.invoke(ClientProxyImpl.java:224)
	at com.sun.proxy.$Proxy29.endpointsForService(Unknown Source)
	at io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider.getNodesFromKubernetesSelector(K8sUnicastHostsProvider.java:123)
	at io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider.buildDynamicNodes(K8sUnicastHostsProvider.java:106)
	at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:313)
	at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.ping(UnicastZenPing.java:219)
	at org.elasticsearch.discovery.zen.ping.ZenPingService.ping(ZenPingService.java:146)
	at org.elasticsearch.discovery.zen.ping.ZenPingService.pingAndWait(ZenPingService.java:124)
	at org.elasticsearch.discovery.zen.ZenDiscovery.findMaster(ZenDiscovery.java:1007)
	at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:361)
	at org.elasticsearch.discovery.zen.ZenDiscovery.access$6100(ZenDiscovery.java:86)
	at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1384)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

This seems related to #105, but this cluster doesn't use RBAC. I decided to go ahead and apply the rbac.yaml anyway, and now everything is working happily!

kubectl create -f staging/elasticsearch/rbac.yaml
[2017-10-16 21:04:27,542][INFO ][cluster.service          ] [X-Cutioner] detected_master [Magik][lL30c9mgShuRPZuJZXMB7Q][es-master-0blnl][inet[/10.244.1.195:9300]]{data=false, master=true}, added {[Magik][lL30c9mgShuRPZuJZXMB7Q][es-master-0blnl][inet[/10.244.1.195:9300]]{data=false, master=true},}, reason: zen-disco-receive(from master [[Magik][lL30c9mgShuRPZuJZXMB7Q][es-master-0blnl][inet[/10.244.1.195:9300]]{data=false, master=true}])
[2017-10-16 21:04:27,560][INFO ][cluster.service          ] [X-Cutioner] added {[Annihilus][7pHiQCwuSX6tD4t_TqG1Lg][es-client-6kxsj][inet[/10.244.1.196:9300]]{data=false, master=false},}, reason: zen-disco-receive(from master [[Magik][lL30c9mgShuRPZuJZXMB7Q][es-master-0blnl][inet[/10.244.1.195:9300]]{data=false, master=true}])

This leads me to suspect that the current production cluster example may be broken.

Questions

  1. Why does applying rbac.yaml fix the production cluster example when RBAC is not installed on the cluster?

  2. Should the application of rbac.yaml be added to the production cluster example?

Thank you!
Jay
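For reference, a minimal sketch of the kind of RBAC rule that makes the "cannot get endpoints" error go away; the Role/RoleBinding names here are made up, the namespace and service account come from the log above, and the repository's staging/elasticsearch/rbac.yaml remains the authoritative version:

cat <<EOF | kubectl create -f -
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: elasticsearch
  namespace: default
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: elasticsearch
  namespace: default
subjects:
- kind: ServiceAccount
  name: elasticsearch
  namespace: default
roleRef:
  kind: Role
  name: elasticsearch
  apiGroup: rbac.authorization.k8s.io
EOF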

Guestbook tutorial doesn't work

https://kubernetes.io/docs/tutorials/stateless-application/guestbook/

Following the guestbook tutorial line for line results in the following error:

Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:168
Stack trace:
#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(97): Predis\Connection\AbstractConnection->onConnectionError('php_network_get...', 0)
#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(58): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters))
#2 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(94): Predis\Connection\StreamConnection->createResource()
#3 /usr/local/lib/php/Predis/Connection/StreamConnection.php(158): Predis\Connection\AbstractConnection->connect()
#4 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(193): Predis\Connection\StreamConnection->connect()
#5 /usr/local/lib/php/Predis/Connection/StreamConnection.php(184): Predis\Connection\AbstractConnection->getResour in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 168
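The getaddrinfo failure means the frontend pod cannot resolve the redis-slave service name, so the first things to check are that the service exists and that cluster DNS resolves it. A minimal sketch, assuming the service names from the tutorial:

# Both backend services must exist before the frontend can reach them
kubectl get svc redis-master redis-slave
# Resolve the name from inside the cluster with a throwaway pod
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup redis-slave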

HTTP 403 User "system:serviceaccount:default:elasticsearch" cannot get endpoints in the namespace "default".

Regarding: https://github.com/kubernetes/examples/tree/master/staging/elasticsearch

I get:

Exception caught during discovery javax.ws.rs.WebApplicationException : HTTP 403 User "system:serviceaccount:default:elasticsearch" cannot get endpoints in the namespace "default".

Full:

tyrion@Stephans-MBP ~/d/s/s/elastic> kubectl logs -f elastic-f7qxr
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
[2017-10-02 13:26:23,995][INFO ][node                     ] [D'Ken] version[1.7.1], pid[7], build[b88f43f/2015-07-29T09:54:16Z]
[2017-10-02 13:26:23,995][INFO ][node                     ] [D'Ken] initializing ...
[2017-10-02 13:26:24,219][INFO ][plugins                  ] [D'Ken] loaded [cloud-kubernetes], sites []
[2017-10-02 13:26:24,280][INFO ][env                      ] [D'Ken] using [1] data paths, mounts [[/data (/dev/xvda9)]], net usable_space [11.6gb], net total_space [28.6gb], types [ext4]
[2017-10-02 13:26:27,985][INFO ][node                     ] [D'Ken] initialized
[2017-10-02 13:26:27,985][INFO ][node                     ] [D'Ken] starting ...
[2017-10-02 13:26:28,186][INFO ][transport                ] [D'Ken] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.2.6.148:9300]}
[2017-10-02 13:26:28,215][INFO ][discovery                ] [D'Ken] myesdb/EUCn0wAqQjSOUVpqrC4lZQ
[2017-10-02 13:26:30,737][WARN ][io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider] [D'Ken] Exception caught during discovery javax.ws.rs.WebApplicationException : HTTP 403 User "system:serviceaccount:default:elasticsearch" cannot get endpoints in the namespace "default".

I have no idea how to fix this. I did everything as described in the GitHub README.
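One way to confirm whether the service account is really missing the permission, rather than guessing from the stack trace (a sketch that assumes a kubectl version with the auth can-i subcommand and impersonation available):

kubectl auth can-i get endpoints --namespace default \
  --as system:serviceaccount:default:elasticsearch
# "no" means an RBAC rule is needed, e.g. the one in staging/elasticsearch/rbac.yaml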

Example Cassandra Daemon set fails to deploy

What keywords did you search in Kubernetes issues before filing this one? cassandra
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Kubernetes version (use kubectl version):
BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: GKE 1.6.4
  • OS (e.g. from /etc/os-release): COS
  • Kernel (e.g. uname -a): Linux gke-guestbook-default-pool-38a0d458-0gjw 4.4.35+ #1 SMP Wed Apr 5 13:00:57 PDT 2017 x86_64 Intel(R) Xeon(R) CPU @ 2.30GHz GenuineIntel GNU/Linux
  • Install tools: GKE
  • Others:

What happened:
Tried to deploy Cassandra daemon set example:
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/storage/cassandra/cassandra-daemonset.yaml

What you expected to happen:
Cassandra starts on every node.

How to reproduce it (as minimally and precisely as possible):
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/storage/cassandra/cassandra-daemonset.yaml

Anything else we need to know:
kubectl logs -f cassandra-t531v

Starting Cassandra on 10.0.5.4
CASSANDRA_CONF_DIR /etc/cassandra
CASSANDRA_CFG /etc/cassandra/cassandra.yaml
CASSANDRA_AUTO_BOOTSTRAP true
CASSANDRA_BROADCAST_ADDRESS 10.0.5.4
CASSANDRA_BROADCAST_RPC_ADDRESS 10.0.5.4
CASSANDRA_CLUSTER_NAME 'Test Cluster'
CASSANDRA_COMPACTION_THROUGHPUT_MB_PER_SEC
CASSANDRA_CONCURRENT_COMPACTORS
CASSANDRA_CONCURRENT_READS
CASSANDRA_CONCURRENT_WRITES
CASSANDRA_COUNTER_CACHE_SIZE_IN_MB
CASSANDRA_DC
CASSANDRA_DISK_OPTIMIZATION_STRATEGY ssd
CASSANDRA_ENDPOINT_SNITCH SimpleSnitch
CASSANDRA_GC_WARN_THRESHOLD_IN_MS
CASSANDRA_INTERNODE_COMPRESSION
CASSANDRA_KEY_CACHE_SIZE_IN_MB
CASSANDRA_LISTEN_ADDRESS 10.0.5.4
CASSANDRA_LISTEN_INTERFACE
CASSANDRA_MEMTABLE_ALLOCATION_TYPE
CASSANDRA_MEMTABLE_CLEANUP_THRESHOLD
CASSANDRA_MEMTABLE_FLUSH_WRITERS
CASSANDRA_MIGRATION_WAIT 1
CASSANDRA_NUM_TOKENS 32
CASSANDRA_RACK
CASSANDRA_RING_DELAY 30000
CASSANDRA_RPC_ADDRESS 0.0.0.0
CASSANDRA_RPC_INTERFACE
CASSANDRA_SEEDS cassandra-t531v
CASSANDRA_SEED_PROVIDER io.k8s.cassandra.KubernetesSeedProvider
changed ownership of '/etc/cassandra/cassandra-env.sh' from root to cassandra
changed ownership of '/etc/cassandra/cassandra.yaml' from root to cassandra
changed ownership of '/etc/cassandra/jvm.options' from root to cassandra
changed ownership of '/etc/cassandra/logback.xml' from root to cassandra
changed ownership of '/etc/cassandra' from root to cassandra
OpenJDK 64-Bit Server VM warning: Cannot open file /usr/local/apache-cassandra-3.9/logs/gc.log due to No such file or directory

INFO  22:52:14 Configuration location: file:/etc/cassandra/cassandra.yaml
INFO  22:52:15 Node configuration:[allocate_tokens_for_keyspace=null; authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_bootstrap=true; auto_snapshot=true; batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; broadcast_address=10.0.5.4; broadcast_rpc_address=10.0.5.4; buffer_pool_use_heap_if_exhausted=true; cas_contention_timeout_in_ms=1000; cdc_enabled=false; cdc_free_space_check_interval_ms=250; cdc_raw_directory=null; cdc_total_space_in_mb=null; client_encryption_options=<REDACTED>; cluster_name=Test Cluster; column_index_cache_size_in_kb=2; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_compression=null; commitlog_directory=/cassandra_data/commitlog; commitlog_max_compression_buffers_in_pool=3; commitlog_periodic_queue_size=-1; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_batch_window_in_ms=null; commitlog_sync_period_in_ms=10000; commitlog_total_space_in_mb=null; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_compactors=null; concurrent_counter_writes=32; concurrent_materialized_view_writes=32; concurrent_reads=32; concurrent_replicates=null; concurrent_writes=32; counter_cache_keys_to_save=2147483647; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; credentials_cache_max_entries=1000; credentials_update_interval_in_ms=-1; credentials_validity_in_ms=2000; cross_node_timeout=false; data_file_directories=[Ljava.lang.String;@11dc3715; disk_access_mode=auto; disk_failure_policy=stop; disk_optimization_estimate_percentile=0.95; disk_optimization_page_cross_chance=0.1; disk_optimization_strategy=ssd; dynamic_snitch=true; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; enable_scripted_user_defined_functions=false; enable_user_defined_functions=false; enable_user_defined_functions_threads=true; encryption_options=null; endpoint_snitch=SimpleSnitch; file_cache_size_in_mb=null; gc_log_threshold_in_ms=200; gc_warn_threshold_in_ms=1000; hinted_handoff_disabled_datacenters=[]; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; hints_compression=null; hints_directory=/cassandra_data/hints; hints_flush_period_in_ms=10000; incremental_backups=false; index_interval=null; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; initial_token=null; inter_dc_stream_throughput_outbound_megabits_per_sec=200; inter_dc_tcp_nodelay=false; internode_authenticator=null; internode_compression=all; internode_recv_buff_size_in_bytes=null; internode_send_buff_size_in_bytes=null; key_cache_keys_to_save=2147483647; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=10.0.5.4; listen_interface=null; listen_interface_prefer_ipv6=false; listen_on_broadcast_address=false; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; max_hints_file_size_in_mb=128; max_mutation_size_in_kb=null; max_streaming_retries=3; max_value_size_in_mb=256; memtable_allocation_type=heap_buffers; memtable_cleanup_threshold=null; memtable_flush_writers=1; memtable_heap_space_in_mb=null; memtable_offheap_space_in_mb=null; min_free_space_per_drive_in_mb=50; native_transport_max_concurrent_connections=-1; native_transport_max_concurrent_connections_per_ip=-1; native_transport_max_frame_size_in_mb=256; native_transport_max_threads=128; native_transport_port=9042; 
native_transport_port_ssl=null; num_tokens=32; otc_coalescing_strategy=TIMEHORIZON; otc_coalescing_window_us=200; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_cache_max_entries=1000; permissions_update_interval_in_ms=-1; permissions_validity_in_ms=2000; phi_convict_threshold=8.0; prepared_statements_cache_size_mb=null; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_scheduler_id=null; request_scheduler_options=null; request_timeout_in_ms=10000; role_manager=CassandraRoleManager; roles_cache_max_entries=1000; roles_update_interval_in_ms=-1; roles_validity_in_ms=2000; row_cache_class_name=org.apache.cassandra.cache.OHCProvider; row_cache_keys_to_save=2147483647; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=0.0.0.0; rpc_interface=null; rpc_interface_prefer_ipv6=false; rpc_keepalive=true; rpc_listen_backlog=50; rpc_max_threads=2147483647; rpc_min_threads=16; rpc_port=9160; rpc_recv_buff_size_in_bytes=null; rpc_send_buff_size_in_bytes=null; rpc_server_type=sync; saved_caches_directory=/cassandra_data/saved_caches; seed_provider=io.k8s.cassandra.KubernetesSeedProvider{seeds=cassandra-t531v}; server_encryption_options=<REDACTED>; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=false; storage_port=7000; stream_throughput_outbound_megabits_per_sec=200; streaming_socket_timeout_in_ms=86400000; thrift_framed_transport_size_in_mb=15; thrift_max_message_length_in_mb=16; thrift_prepared_statements_cache_size_mb=null; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; transparent_data_encryption_options=org.apache.cassandra.config.TransparentDataEncryptionOptions@69930714; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; unlogged_batch_across_partitions_warn_threshold=10; user_defined_function_fail_timeout=1500; user_defined_function_warn_timeout=500; user_function_timeout_policy=die; windows_timer_interval=1; write_request_timeout_in_ms=2000]
INFO  22:52:15 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO  22:52:15 Global memtable on-heap threshold is enabled at 128MB
INFO  22:52:15 Global memtable off-heap threshold is enabled at 128MB
Exception (org.apache.cassandra.exceptions.ConfigurationException) encountered during startup: io.k8s.cassandra.KubernetesSeedProvider
Fatal configuration error; unable to start server.  See log for stacktrace.
org.apache.cassandra.exceptions.ConfigurationException: io.k8s.cassandra.KubernetesSeedProvider
Fatal configuration error; unable to start server.  See log for stacktrace.
        at org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:782)
        at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:125)
        at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:576)
        at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:730)
ERROR 22:52:15 Exception encountered during startup
org.apache.cassandra.exceptions.ConfigurationException: io.k8s.cassandra.KubernetesSeedProvider
Fatal configuration error; unable to start server.  See log for stacktrace.
        at org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:782) ~[apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:125) ~[apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:576) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:730) [apache-cassandra-3.9.jar:3.9]
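The ConfigurationException usually means the class named in seed_provider is not on Cassandra's classpath inside the image. A hedged way to check, inspecting the image directly since the pod itself is crash-looping (the image tag and lib path are assumptions based on the log above):

docker run --rm --entrypoint ls gcr.io/google-samples/cassandra:v12 \
  /usr/local/apache-cassandra-3.9/lib | grep -i -E 'kubernetes|seed'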

staging/psp/rbac: update usage of local-up-cluster.sh

I see that the example usage of local-up-cluster.sh could be updated (a simplified invocation is sketched after this list):

  • We don't need to specify the ALLOW_ANY_TOKEN=true variable because it was removed: kubernetes/kubernetes#49045 (the RBAC example "relies on the ALLOW_ANY_TOKEN setting", so perhaps it doesn't work at this time; the PR message says what to use instead: "users of the flag should use impersonation headers instead for debugging").
  • We don't need to specify the ENABLE_RBAC=true variable anymore, as it is enabled by default: kubernetes/kubernetes#49323
  • We don't need to specify RUNTIME_CONFIG="extensions/v1beta1=true,extensions/v1beta1/podsecuritypolicy=true" anymore, as it has been enabled by default since 1.6: kubernetes/kubernetes#39743
  • We should note that default PSP policies are now created automatically: kubernetes/kubernetes#39301
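A sketch of the simplified invocation implied by the points above; variable handling changes between releases, so this should be double-checked against the current hack/local-up-cluster.sh:

cd $GOPATH/src/k8s.io/kubernetes
# RBAC and the policy API groups are now on by default, so no ENABLE_RBAC,
# RUNTIME_CONFIG or ALLOW_ANY_TOKEN overrides should be needed
./hack/local-up-cluster.sh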

CC @pweil- @liggitt

Redis Example: Which host to use as Sentinel IP?

https://github.com/kubernetes/examples/tree/master/staging/storage/redis

I am following the example above, but there is something I couldn't understand and I am confused about.
We create a redis-sentinel service, but without type=NodePort, so this service is not open to the outside world. Is it kept internal because Redis Sentinel has no authentication?

When I try the "redis-sentinel" service's ClusterIP (which is the default type for a service), it works!
Is this the standard way to do this?

What if the ClusterIP changes during scaling, or something else happens that changes it; what should I do?

For a live scenario there will be DNS, so it won't be an issue, but in my local development environment every time I restart the PC I get a new IP address; the ClusterIP changes as well, and I need to update the code to connect to the new redis-sentinel ClusterIP.

# Requires the redis-py package (pip install redis)
from redis import RedisError
from redis.sentinel import Sentinel

try:
    sentinel = Sentinel([('10.100.111.64', 26379)], socket_timeout=1)
    master = sentinel.master_for('mymaster', socket_timeout=1)
    slave = sentinel.slave_for('mymaster', socket_timeout=1)
    visits = master.incr("counter")
except RedisError as e:
    visits = "<i>cannot connect to Redis, counter disabled</i>"
    print(e)
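Inside the cluster the sentinel service also has a stable DNS name, which avoids hard-coding the ClusterIP; a quick check that the name resolves (service name and namespace assumed from the example):

kubectl get svc redis-sentinel
kubectl run -it --rm dns-test --image=busybox --restart=Never -- \
  nslookup redis-sentinel.default.svc.cluster.local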

Clarification about container policy

I am reading this:

Docker images are pre-built, and source is contained in a subfolder.
Source is the Dockerfile and any custom files needed beyond the upstream app being packaged.
Images are pushed to gcr.io/google-samples. Contact @jeffmendoza to have an image pushed
Images are tagged with a version (not latest) that is referenced in the example config.

This has been a challenge in the past, since getting an update in takes way too long. Are we going to implement a system to automatically push new containers? Are we going to require all containers' Dockerfiles to exist in this repo?

Cassandra: seed provider lists no seeds.

Hi,

Following the instruction of the Cassandra example leads to the following error message:

The seed provider lists no seeds.
WARN  14:15:51 Seed provider couldn't lookup host cassandra-0.cassandra.default.svc.cluster.local
Exception (org.apache.cassandra.exceptions.ConfigurationException) encountered during startup: The seed provider lists no seeds.
ERROR 14:15:51 Exception encountered during startup: The seed provider lists no seeds.

It looks like a chicken-and-egg situation:

  • Container starts
  • readiness probe shows cassandra-0 is not ready
  • Cassandra checks the seed, cassandra-0.cassandra.default.svc.cluster.local does not resolve
  • Container stops with fatal error

If I comment out the readinessProbe, the StatefulSet works.

Shouldn't this use the KubernetesSeedProvider?

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.3", GitCommit:"0480917b552be33e2dba47386e51decb1a211df6", GitTreeState:"clean", BuildDate:"2017-05-10T15:48:59Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Cassandra: example does not use KubernetesSeedProvider while doc says it does

From the Kubernetes documentation (https://kubernetes.io/docs/tutorials/stateful-application/cassandra/), we can read this statement:

In this instance, a custom Cassandra SeedProvider enables Cassandra to discover new Cassandra nodes as they join the cluster.

However the example does not seem to use the KubernetesSeedProvider.

On the KubernetesSeedProvider Readme:

This provider is bundled with the Docker provided in this example.

Therefore I set the environment variable as follows in the statefulset.yaml, to try to force it to use the KubernetesSeedProvider:

- name: CASSANDRA_SEED_PROVIDER
  value: "org.apache.cassandra.locator.KubernetesSeedProvider"

I got this error:

java.lang.ClassNotFoundException: org.apache.cassandra.locator.KubernetesSeedProvider

Can we either fix the documentation or fix the code so that it uses the KubernetesSeedProvider? It seems we are not making use of it.
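To see which seed provider a running pod is actually configured with, one option is to read the rendered config inside the container (a sketch; the pod name and config path are taken from the example's own logs, and it assumes the pod is running with the default settings):

kubectl exec cassandra-0 -- grep -A 3 'seed_provider' /etc/cassandra/cassandra.yaml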

MySQL Wordpress PV example giving errors.

What happened:
I was following the Stateful Applications example - Wordpress MySQL persistent volume
In the Deploy MySQL step, I encountered an error. I ran describe on the pod, and it showed this error:

SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "mysql-pv-claim", which is unexpected.

The full output of kubectl describe pod is here. I also checked whether the PersistentVolume was created and whether a PersistentVolumeClaim was created - here

What you expected to happen:
I expected the MySQL pod to be up as the tutorial suggested. At the least, the tutorial should mention any such known issues and link to an explanation, if possible.

How to reproduce it (as minimally and precisely as possible):

gcloud container clusters create wp --num-nodes=3
gcloud compute disks create --size=20GB wordpress-1 wordpress-2
export KUBE_REPO=https://raw.githubusercontent.com/kubernetes/examples/master
kubectl create -f $KUBE_REPO/mysql-wordpress-pd/gce-volumes.yaml
tr --delete '\n' <password.txt >.strippedpassword.txt && mv .strippedpassword.txt password.txt
kubectl create secret generic mysql-pass --from-file=password.txt
kubectl create -f $KUBE_REPO/mysql-wordpress-pd/mysql-deployment.yaml

Running kubectl get pods now shows the status as CrashLoopBackOff or Error within a few minutes.
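Before the MySQL pod can start, the claim has to be bound, so checking the claim and its events usually narrows this down quickly (resource names taken from the example above):

kubectl get pv
kubectl get pvc mysql-pv-claim
# Events here explain why a claim stays unbound
kubectl describe pvc mysql-pv-claim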

Environment:

  • Kubernetes version (use kubectl version):
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: GKE
  • OS (e.g. from /etc/os-release):
BUILD_ID=9202.64.0
NAME="Container-Optimized OS"
GOOGLE_CRASH_ID=Lakitu
VERSION_ID=57
BUG_REPORT_URL=https://crbug.com/new
PRETTY_NAME="Container-Optimized OS from Google"
VERSION=57
GOOGLE_METRICS_PRODUCT_ID=26
HOME_URL="https://cloud.google.com/compute/docs/containers/vm-image/"
ID=cos
  • Kernel (e.g. uname -a):
Linux gke-wp-default-pool-a7a7b2f3-0cbs 4.4.35+ #1 SMP Wed Apr 5 13:00:57 PDT 2017 x86_64 Intel(R) Xeon(R) CPU @ 2.50GHz GenuineIntel GNU/Linux

issue with spark-ui-proxy

Hi,

I followed all steps until Step 2:

$ kubectl create -f staging/spark/spark-ui-proxy-controller.yaml
replicationcontroller "spark-ui-proxy-controller" created

$ kubectl create -f staging/spark/spark-ui-proxy-service.yaml
service "spark-ui-proxy" created

$ kubectl get svc spark-ui-proxy -o wide
NAME             TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE       SELECTOR
spark-ui-proxy   LoadBalancer   10.0.0.38    <pending>     80:31945/TCP   9m        component=spark-ui-proxy

I don't get the external IP; why is that? I waited 10 minutes between creating spark-ui-proxy and running kubectl get svc spark-ui-proxy -o wide...
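EXTERNAL-IP stays pending until the cloud provider actually provisions a load balancer; on clusters without one (bare metal, minikube, etc.) it never resolves and the NodePort shown under PORT(S) has to be used instead. A hedged way to see what is going on:

# Cloud-provider events (quota problems, unsupported LoadBalancer, ...) show up here
kubectl describe svc spark-ui-proxy
# Fallback: reach the proxy through any node's address on the NodePort (31945 above)
curl http://<node-ip>:31945/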

Ceph RBD: support for multiple StorageClasses

I tested mounting a volume in a pod through a PVC on a Ceph StorageClass; the pod must exist in the same namespace as the secret. Now I want a multi-namespace environment, so each namespace needs to create its own StorageClass.

kubectl create ns like
ceph osd pool create like 128
ceph auth get-or-create client.like mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=like'

I created the secret ceph-secret-like in namespace like; the StorageClass ceph-like uses pool like and the secret ceph-secret-like:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph-like
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.110.17.56:6789,10.110.17.57:6789,10.110.17.58:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: like
  userId: like
  userSecretName: ceph-secret-like
  imageFormat: "1"
When I create a PVC in namespace like, its status is Pending, and the log shows this:
Warning ProvisioningFailed 3m (x17 over 7m) persistentvolume-controller (combined from similar events): Failed to provision volume with StorageClass "ceph-like": failed to create rbd image: exit status 1, command output: 2017-11-24 03:08:55.020934 7fe5c270d780 -1 did not load config file, using default settings.
2017-11-24 03:08:55.032621 7fe5c270d780 0 librados: client.admin authentication error (1) Operation not permitted
rbd: couldn't connect to the cluster!
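The librados "client.admin authentication error" points at the admin secret the provisioner uses rather than at the per-namespace user secret. A hedged check that the Kubernetes secret really contains the cluster's admin key (secret name and namespace taken from the StorageClass above):

# The key as Ceph knows it
ceph auth get-key client.admin
# The key stored in the secret the rbd provisioner reads
kubectl get secret ceph-secret --namespace default -o jsonpath='{.data.key}' | base64 -d; echo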

https-nginx go issue

Following the https-nginx docs.

Getting a golang error.


$ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json
# The CName used here is specific to the service specified in nginx-app.yaml.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/nginx.key -out /tmp/nginx.crt -subj "/CN=nginxsvc/O=nginxsvc"
Generating a 2048 bit RSA private key
......+++
..+++
unable to write 'random state'
writing new private key to '/tmp/nginx.key'
-----
go run make_secret.go -crt /tmp/nginx.crt -key /tmp/nginx.key > /tmp/secret.json
# command-line-arguments
./make_secret.go:63: cannot use "k8s.io/apimachinery/pkg/apis/meta/v1".ObjectMeta literal (type "k8s.io/apimachinery/pkg/apis/meta/v1".ObjectMeta) as type "k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/apis/meta/v1".ObjectMeta in field value
./make_secret.go:69: cannot use api.Codecs.LegacyCodec(api.Registry.EnabledVersions()...) (type "k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime".Codec) as type "k8s.io/apimachinery/pkg/runtime".Encoder in argument to "k8s.io/apimachinery/pkg/runtime".EncodeOrDie:
	"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime".Codec does not implement "k8s.io/apimachinery/pkg/runtime".Encoder (wrong type for Encode method)
		have Encode("k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime".Object, io.Writer) error
		want Encode("k8s.io/apimachinery/pkg/runtime".Object, io.Writer) error
./make_secret.go:69: cannot use secret (type *api.Secret) as type "k8s.io/apimachinery/pkg/runtime".Object in argument to "k8s.io/apimachinery/pkg/runtime".EncodeOrDie:
	*api.Secret does not implement "k8s.io/apimachinery/pkg/runtime".Object (wrong type for DeepCopyObject method)
		have DeepCopyObject() "k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime".Object
		want DeepCopyObject() "k8s.io/apimachinery/pkg/runtime".Object
make: *** [secret] Error 2

Location not ignored for kubernetes.io/azure-disk

According to the azure-disk provisioner:

If a storage account is provided (in the same resource group as the cluster), the location should be ignored. But creating a storage class like this:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azuredisk-low
provisioner: kubernetes.io/azure-disk
parameters:
  storageAccount: my-account   # same rg as acs

And a persistent volume claim using this storage class:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: volume-claim-name
  namespace: my-namespace
  annotations:
    volume.beta.kubernetes.io/storage-class: azuredisk-low
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Persistent volume claim fails with:

Failed to provision volume with StorageClass "azuredisk-low": AzureDisk - location() and account(my-account) must be both empty or specified for dedicated kind, only one value specified is not allowed
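A hedged workaround sketch based only on the error text: either drop storageAccount entirely and let the provisioner pick an account, or specify the location alongside it (the region below is just an example value and must match the cluster's resource group):

cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azuredisk-low
provisioner: kubernetes.io/azure-disk
parameters:
  storageAccount: my-account
  location: westeurope   # assumed value
EOF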

minio distributed service stuck and failed to load

Kubernetes v1.6.6

https://github.com/kubernetes/examples/tree/master/staging/storage/minio#step-1-create-minio-headless-service

Minio Distributed Server Deployment: the non-headless service fails to come up, which causes the pods to fail and eventually the PVCs to fail.

 $ kubectl get svc minio-service
 NAME                       CLUSTER-IP     EXTERNAL-IP       PORT(S)             AGE
 minio-service             10.45.215.47     pending       9000:31852/TCP         24m

I created the two services and the statefulset using the YAML below:
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-distributed-headless-service.yaml?raw=true
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-distributed-statefulset.yaml?raw=true
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-distributed-service.yaml?raw=true

guestbook: consider splitting service file into two

Currently we're telling people:

spec:
  # comment or delete the following line if you want to use a LoadBalancer
  type: NodePort 
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ...

Instead of this, we should have two files that are ready to be deployed, configured with NodePort and LoadBalancer respectively.

This way we can say: "if you're on minikube, deploy frontend-service.yaml; if you're on a cloud provider, deploy frontend-service-external.yaml."
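A sketch of what the two ready-to-deploy variants could look like; the file names follow the suggestion above and the labels/selector are assumed to match the existing guestbook frontend service:

cat > frontend-service.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
EOF
# frontend-service-external.yaml would be identical except for:
#   type: LoadBalancer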

cc: @cody-clark

Cassandra: statefulset and volume constraint prevents Cassandra pods from healing

Running Cassandra in AWS but I imagine the same applies elsewhere (Kubernetes 1.7.4). Consider:

3 Cassandra nodes as defined by the Statefulset.

  • cassandra-0 is in AZ "a"
  • cassandra-1 is in AZ "b"
  • cassandra-2 is in AZ "c"

AZ stands for Availability Zone (same as a Google zone, I heard). It's like a rack (although we don't have a Kubernetes Cassandra snitch, so to Cassandra it looks like everything is in one rack).

I shut down all Kubernetes nodes in AZ "a" and prevent creating new nodes in AZ "a".
Therefore cassandra-0 cannot start, as its EBS volume (PV) is tied to AZ "a".

This simulates 1 Availability Zone (zone) down.

During this time cassandra-2 fails and needs to be restarted elsewhere.
A StatefulSet won't self-heal cassandra-2 and also prevents creating further cassandra-3, cassandra-4, etc.
If we also lose cassandra-1 (maybe the host is gone because of AWS/cloud autoscaling), then we have no Cassandra node left. The longer the first zone is down, the more we risk having to re-create the other Cassandra nodes (the infrastructure is elastic and hosts can go wrong), which does not work unless AZ "a" comes back online. A 30-minute outage of AZ "a" will be fine, but then the clock is ticking...

From Statefulset doc:
"Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready."

Some questions

  • Is that expected behaviour?
  • Shouldn't we allow Cassandra to scale nodes in other AZs even if the first one is down?
  • Is a StatefulSet the right choice then? Would a Deployment + replicas work better? (As far as I know we need the seeds to start first; thereafter we don't care about the order.)
  • Should we use 3 StatefulSets (1 per zone/rack) joining the same cluster? How would we do that? How do Cassandra nodes recognise they are in the same cluster? Is there any issue with this approach?

How to access mysql(statefulsets) from outside of cluster

Hi, thanks for sharing this info.

Is there any way to access MySQL from outside of the cluster?
Developers need to access MySQL directly for design and debugging.

I tried port-forward and using SQL Workbench on the 3306 client port only;
unfortunately it doesn't work well.

Please give me an idea of how to support the developers.
Thanks in advance.

For your info, this is the YAML I am working on:

#
# MariaDB 10.1 Galera Cluster
#
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: galera
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: mysql
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config-vol
data:
  galera.cnf: |
    [galera]
    user = mysql
    bind-address = 0.0.0.0

    default_storage_engine = InnoDB
    binlog_format = ROW
    innodb_autoinc_lock_mode = 2
    innodb_flush_log_at_trx_commit = 0
    query_cache_size = 0
    query_cache_type = 0

    # MariaDB Galera settings
    wsrep_on=ON
    wsrep_provider=/usr/lib/galera/libgalera_smm.so
    wsrep_sst_method=rsync

    # Cluster settings (automatically updated)
    wsrep_cluster_address=gcomm://
    wsrep_cluster_name=galera
    wsrep_node_address=127.0.0.1
  mariadb.cnf: |
    [client]
    default-character-set = utf8
    [mysqld]
    character-set-server  = utf8
    collation-server      = utf8_general_ci
    # InnoDB tuning
    innodb_log_file_size  = 50M
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "galera"
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: ausov/k8s-mariadb-cluster
        ports:
        - containerPort: 3306
          name: mysql
        - containerPort: 4444
          name: sst
        - containerPort: 4567
          name: replication
        - containerPort: 4568
          name: ist
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql
              key: password
        readinessProbe:
          exec:
            command: ["bash", "-c", "mysql -uroot -p\"${MYSQL_ROOT_PASSWORD}\" -e 'show databases;'"]
          initialDelaySeconds: 20
          timeoutSeconds: 5
        volumeMounts:
        - name: config
          mountPath: /etc/mysql/conf.d
        - name: datadir
          mountPath: /var/lib/mysql
      volumes:
      - name: config
        configMap:
          name: mysql-config-vol
          items:
            - path: "galera.cnf"
              key: galera.cnf
            - path: "mariadb.cnf"
              key: mariadb.cnf
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
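A minimal sketch of a debugging-only NodePort Service that would expose the Galera members above outside the cluster; it reuses the app: mysql label from the StatefulSet, and the nodePort value is an arbitrary choice inside the default range. For anything beyond debugging it is usually better to keep MySQL internal and use port-forward or a bastion host:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: mysql-external
spec:
  type: NodePort
  selector:
    app: mysql
  ports:
  - name: mysql
    port: 3306
    targetPort: 3306
    nodePort: 30306   # assumed free port in the 30000-32767 range
EOF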

Openshift: use of external Kubernetes is no longer supported

I can no longer use the openshift-origin recipe to deploy OpenShift over an existing Kubernetes cluster.

docker run --privileged -v /Users/l.abruce/proj/git/lmgitlab.hlsdev.local/l.lmil/lmil_sysadmin/demos/JMorenz/2017-03-MDE-ATO/poc/localdata/openshift/config:/config --name default-mde-ato-openshift-origin-openshift-master-run openshift/origin start master --kubeconfig=/config/kubeconfig --master=https://localhost:8443 --public-master=https://192.168.98.100:17082 --etcd=http://etcd:2379

W0715 18:12:48.465226       1 start_master.go:294] Warning: assetConfig.loggingPublicURL: Invalid value: "": required to view aggregated container logs in the console, master start will continue.
W0715 18:12:48.465357       1 start_master.go:294] Warning: assetConfig.metricsPublicURL: Invalid value: "": required to view cluster metrics in the console, master start will continue.
W0715 18:12:48.465371       1 start_master.go:294] Warning: serviceAccountConfig.publicKeyFiles: Invalid value: "": no service account tokens will be accepted by the API, which will prevent builds and deployments from working, master start will continue.
W0715 18:12:48.465381       1 start_master.go:294] Warning: auditConfig.auditFilePath: Required value: audit can not be logged to a separate file, master start will continue.
F0715 18:12:48.465397       1 start_master.go:115] KubernetesMasterConfig is required to start this server - use of external Kubernetes is no longer supported.

elasticsearch: No up-and-running site-local (private) addresses found

k8s 1.6.4
flannel

[2017-10-30T09:35:24,499][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [397f514e-3be8-4d89-9256-f1f215d0dbd8] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: No up-and-running site-local (private) addresses found, got [name:lo (lo), name:eth0 (eth0)]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:136) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:123) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:67) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) ~[elasticsearch-5.6.2.jar:5.6.2]
Caused by: java.lang.IllegalArgumentException: No up-and-running site-local (private) addresses found, got [name:lo (lo), name:eth0 (eth0)]
at org.elasticsearch.common.network.NetworkUtils.getSiteLocalAddresses(NetworkUtils.java:187) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.common.network.NetworkService.resolveInternal(NetworkService.java:246) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.common.network.NetworkService.resolveInetAddresses(NetworkService.java:220) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.common.network.NetworkService.resolveBindHostAddresses(NetworkService.java:130) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.transport.TcpTransport.bindServer(TcpTransport.java:720) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.transport.netty4.Netty4Transport.doStart(Netty4Transport.java:173) ~[?:?]
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:69) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.transport.TransportService.doStart(TransportService.java:209) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:69) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.node.Node.start(Node.java:694) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:278) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:351) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:132) ~[elasticsearch-5.6.2.jar:5.6.2]

Not able to run spark-submit interactively in the zeppelin container

I have set up a Spark cluster in our k8s infrastructure, and I was trying to submit a Spark program through spark-submit in the zeppelin container as follows:

kubectl exec zeppelin-controller-nqz79 -it spark-submit --class com.ibm.cedp.spark.JavaSparkPi /home/sidartha/bazel/bazel-bin/java/com/ibm/cedp/spark/SparkPi_deploy.jar

And I get the following error:

Error: unknown flag: --class
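The "unknown flag: --class" message comes from kubectl itself, not from spark-submit: without a "--" separator, kubectl tries to parse the flags that were meant for the command inside the container. A hedged re-run of the same command:

kubectl exec zeppelin-controller-nqz79 -it -- spark-submit \
  --class com.ibm.cedp.spark.JavaSparkPi \
  /home/sidartha/bazel/bazel-bin/java/com/ibm/cedp/spark/SparkPi_deploy.jar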

NFS example needs updates

Is this now the best place to raise kubernetes/kubernetes#48161?

The NFS example is not on the list of supported examples, but if you're looking for volunteer maintainers... I am pretty sure I am going to need this NFS example to work, personally, and I hope to keep up with Kubernetes upgrades across several dev/prod environments, so I might be a good fit as a supporting maintainer.

Cassandra Statefulset: How to expose the cluster?

I am trying out the cassandra-statefulset example and can see that all three pods joined the cluster:

kubectl exec cassandra-0 -- nodetool status
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load       Tokens       Owns (effective)  Host ID                               Rack
UN  10.44.0.4  65.63 KiB  32           59.3%             91152a3e-c802-45bf-a4a9-33187a97b4c4  Rack1-K8Demo
UN  10.45.0.1  65.63 KiB  32           68.9%             ca7d39f4-341a-43d6-a180-23f89b124334  Rack1-K8Demo
UN  10.44.0.3  65.59 KiB  32           71.9%             b9075133-61ed-4fde-97c7-c2fd8d2b4234  Rack1-K8Demo

The output of kubectl get services is:

kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
cassandra    None         <none>        9042/TCP   10s
kubernetes   10.96.0.1    <none>        443/TCP    50m

Now I want to expose it and use it from outside the cluster.

Any help will be highly appreciated.
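The cassandra service in this example is headless (CLUSTER-IP is None), so it is only resolvable inside the cluster. A hedged sketch of an additional NodePort Service for CQL access from outside; the app: cassandra selector is assumed to match the StatefulSet's pod labels:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: cassandra-external
spec:
  type: NodePort
  selector:
    app: cassandra
  ports:
  - name: cql
    port: 9042
    targetPort: 9042
EOF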

StatefulSet Cassandra not able to mount PVC at /cassandra_data

/kind bug
/sig apps
/area example

In the Cassandra StatefulSet example I am not able to mount the PVC at /cassandra_data, but I am able to mount it at, say, /cassandra_test.

I am getting the below error in the logs:

kubectl logs cassandra-33dl9
Starting Cassandra on 172.16.30.2
CASSANDRA_CONF_DIR /etc/cassandra
CASSANDRA_CFG /etc/cassandra/cassandra.yaml
CASSANDRA_AUTO_BOOTSTRAP true
CASSANDRA_BROADCAST_ADDRESS 172.16.30.2
CASSANDRA_BROADCAST_RPC_ADDRESS 172.16.30.2
CASSANDRA_CLUSTER_NAME 'Test Cluster'
CASSANDRA_COMPACTION_THROUGHPUT_MB_PER_SEC
CASSANDRA_CONCURRENT_COMPACTORS
CASSANDRA_CONCURRENT_READS
CASSANDRA_CONCURRENT_WRITES
CASSANDRA_COUNTER_CACHE_SIZE_IN_MB
CASSANDRA_DC
CASSANDRA_DISK_OPTIMIZATION_STRATEGY ssd
CASSANDRA_ENDPOINT_SNITCH SimpleSnitch
CASSANDRA_GC_WARN_THRESHOLD_IN_MS
CASSANDRA_INTERNODE_COMPRESSION
CASSANDRA_KEY_CACHE_SIZE_IN_MB
CASSANDRA_LISTEN_ADDRESS 172.16.30.2
CASSANDRA_LISTEN_INTERFACE
CASSANDRA_MEMTABLE_ALLOCATION_TYPE
CASSANDRA_MEMTABLE_CLEANUP_THRESHOLD
CASSANDRA_MEMTABLE_FLUSH_WRITERS
CASSANDRA_MIGRATION_WAIT 1
CASSANDRA_NUM_TOKENS 32
CASSANDRA_RACK
CASSANDRA_RING_DELAY 30000
CASSANDRA_RPC_ADDRESS 0.0.0.0
CASSANDRA_RPC_INTERFACE
CASSANDRA_SEEDS cassandra-33dl9
CASSANDRA_SEED_PROVIDER io.k8s.cassandra.KubernetesSeedProvider
changed ownership of '/etc/cassandra/cassandra-env.sh' from root to cassandra
changed ownership of '/etc/cassandra/cassandra.yaml' from root to cassandra
changed ownership of '/etc/cassandra/jvm.options' from root to cassandra
changed ownership of '/etc/cassandra/logback.xml' from root to cassandra
changed ownership of '/etc/cassandra' from root to cassandra
OpenJDK 64-Bit Server VM warning: Cannot open file /usr/local/apache-cassandra-3.9/logs/gc.log due to No such file or directory
INFO  16:46:04 Configuration location: file:/etc/cassandra/cassandra.yaml
INFO  16:46:05 Node configuration:[allocate_tokens_for_keyspace=null; authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_bootstrap=true; auto_snapshot=true; batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; broadcast_address=172.16.30.2; broadcast_rpc_address=172.16.30.2; buffer_pool_use_heap_if_exhausted=true; cas_contention_timeout_in_ms=1000; cdc_enabled=false; cdc_free_space_check_interval_ms=250; cdc_raw_directory=null; cdc_total_space_in_mb=null; client_encryption_options=<REDACTED>; cluster_name=Test Cluster; column_index_cache_size_in_kb=2; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_compression=null; commitlog_directory=/cassandra_data/commitlog; commitlog_max_compression_buffers_in_pool=3; commitlog_periodic_queue_size=-1; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_batch_window_in_ms=null; commitlog_sync_period_in_ms=10000; commitlog_total_space_in_mb=null; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_compactors=null; concurrent_counter_writes=32; concurrent_materialized_view_writes=32; concurrent_reads=32; concurrent_replicates=null; concurrent_writes=32; counter_cache_keys_to_save=2147483647; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; credentials_cache_max_entries=1000; credentials_update_interval_in_ms=-1; credentials_validity_in_ms=2000; cross_node_timeout=false; data_file_directories=[Ljava.lang.String;@11dc3715; disk_access_mode=auto; disk_failure_policy=stop; disk_optimization_estimate_percentile=0.95; disk_optimization_page_cross_chance=0.1; disk_optimization_strategy=ssd; dynamic_snitch=true; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; enable_scripted_user_defined_functions=false; enable_user_defined_functions=false; enable_user_defined_functions_threads=true; encryption_options=null; endpoint_snitch=SimpleSnitch; file_cache_size_in_mb=null; gc_log_threshold_in_ms=200; gc_warn_threshold_in_ms=1000; hinted_handoff_disabled_datacenters=[]; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; hints_compression=null; hints_directory=/cassandra_data/hints; hints_flush_period_in_ms=10000; incremental_backups=false; index_interval=null; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; initial_token=null; inter_dc_stream_throughput_outbound_megabits_per_sec=200; inter_dc_tcp_nodelay=false; internode_authenticator=null; internode_compression=all; internode_recv_buff_size_in_bytes=null; internode_send_buff_size_in_bytes=null; key_cache_keys_to_save=2147483647; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=172.16.30.2; listen_interface=null; listen_interface_prefer_ipv6=false; listen_on_broadcast_address=false; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; max_hints_file_size_in_mb=128; max_mutation_size_in_kb=null; max_streaming_retries=3; max_value_size_in_mb=256; memtable_allocation_type=heap_buffers; memtable_cleanup_threshold=null; memtable_flush_writers=1; memtable_heap_space_in_mb=null; memtable_offheap_space_in_mb=null; min_free_space_per_drive_in_mb=50; native_transport_max_concurrent_connections=-1; native_transport_max_concurrent_connections_per_ip=-1; native_transport_max_frame_size_in_mb=256; native_transport_max_threads=128; native_transport_port=9042; 
native_transport_port_ssl=null; num_tokens=32; otc_coalescing_strategy=TIMEHORIZON; otc_coalescing_window_us=200; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_cache_max_entries=1000; permissions_update_interval_in_ms=-1; permissions_validity_in_ms=2000; phi_convict_threshold=8.0; prepared_statements_cache_size_mb=null; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_scheduler_id=null; request_scheduler_options=null; request_timeout_in_ms=10000; role_manager=CassandraRoleManager; roles_cache_max_entries=1000; roles_update_interval_in_ms=-1; roles_validity_in_ms=2000; row_cache_class_name=org.apache.cassandra.cache.OHCProvider; row_cache_keys_to_save=2147483647; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=0.0.0.0; rpc_interface=null; rpc_interface_prefer_ipv6=false; rpc_keepalive=true; rpc_listen_backlog=50; rpc_max_threads=2147483647; rpc_min_threads=16; rpc_port=9160; rpc_recv_buff_size_in_bytes=null; rpc_send_buff_size_in_bytes=null; rpc_server_type=sync; saved_caches_directory=/cassandra_data/saved_caches; seed_provider=io.k8s.cassandra.KubernetesSeedProvider{seeds=cassandra-33dl9}; server_encryption_options=<REDACTED>; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=false; storage_port=7000; stream_throughput_outbound_megabits_per_sec=200; streaming_socket_timeout_in_ms=86400000; thrift_framed_transport_size_in_mb=15; thrift_max_message_length_in_mb=16; thrift_prepared_statements_cache_size_mb=null; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; transparent_data_encryption_options=org.apache.cassandra.config.TransparentDataEncryptionOptions@69930714; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; unlogged_batch_across_partitions_warn_threshold=10; user_defined_function_fail_timeout=1500; user_defined_function_warn_timeout=500; user_function_timeout_policy=die; windows_timer_interval=1; write_request_timeout_in_ms=2000]
INFO  16:46:05 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO  16:46:06 Global memtable on-heap threshold is enabled at 128MB
INFO  16:46:06 Global memtable off-heap threshold is enabled at 128MB
Exception (java.lang.IllegalArgumentException) encountered during startup: Out of range: 2199023255551
java.lang.IllegalArgumentException: Out of range: 2199023255551
	at com.google.common.primitives.Ints.checkedCast(Ints.java:91)
	at org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:553)
	at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:125)
	at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:576)
	at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:730)
ERROR 16:46:06 Exception encountered during startup
java.lang.IllegalArgumentException: Out of range: 2199023255551
	at com.google.common.primitives.Ints.checkedCast(Ints.java:91) ~[guava-18.0.jar:na]
	at org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:553) ~[apache-cassandra-3.9.jar:3.9]
	at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:125) ~[apache-cassandra-3.9.jar:3.9]
	at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:576) [apache-cassandra-3.9.jar:3.9]
	at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:730) [apache-cassandra-3.9.jar:3.9]

Environment Details:

kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4", GitTreeState:"clean", BuildDate:"2017-09-29T05:56:06Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

How to set up a Hadoop cluster in a Kubernetes cluster?

I would like to set up a Hadoop cluster inside a Kubernetes cluster, which has 4 nodes.
I create a pod for the Hadoop master on one node and three pods for the Hadoop slaves on the other 3 nodes. All of these pods need to be able to reach each other on a shared internal network, even though they run on different nodes. What should I do? Any help is appreciated!
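
For context, every pod already gets an IP on the cluster-wide pod network, so pods on different nodes can reach each other directly by IP. What usually helps for a Hadoop-style setup is a headless Service, so the master and slave pods can also discover each other by DNS. A minimal sketch, assuming a hypothetical app: hadoop label on the Hadoop pods (the label and port are illustrative, not part of the original question):

apiVersion: v1
kind: Service
metadata:
  name: hadoop
spec:
  clusterIP: None      # headless: DNS resolves to the pod IPs instead of a virtual IP
  selector:
    app: hadoop        # assumed label carried by the hadoop master and slave pods
  ports:
  - port: 9000         # illustrative HDFS port; adjust to your Hadoop configuration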

Selenium example

It seems that geckodriver is not installed in the google/python-hello image, so the Python code won't work as expected.

Another image, inodb/python-selenium, can support the example.
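
A quick way to verify the claim is to run the image and look for the binary on the PATH; a sketch (the pod name is arbitrary):

kubectl run geckodriver-check --rm -it --restart=Never --image=google/python-hello --command -- sh -c 'command -v geckodriver || echo "geckodriver not found"'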

OMS daemonset takes a long time to get a successful heartbeat

Hi,

I've deployed the OMS Agent daemonset to a k8s cluster and configured it with the correct OMS ID and key, after following this OMS Agent setup.

omsagent-f35cw 1/1 Running 0 15m 10.244.0.189 k8s-agentpool1-41733846-0
omsagent-v9dr1 1/1 Running 0 18m 10.244.1.10 k8s-master-41733846-0

After deploying, the pod running in the k8s agentpool1 takes around 1 hour to get a successful heartbeat:

2017-09-07 10:44:34 +0000 [error]: Unable to resolve the IP of ‘omsagent-f35cw’: getaddrinfo: Servname not supported for ai_socktype
2017-09-07 10:44:34 +0000 [warn]: Failed to get the IP for omsagent-f35cw.
2017-09-07 11:58:21 +0000 [info]: Heartbeat success

Sometimes SSHing into the node and running
docker restart k8s_omsagent_omsagent-w48t1_default_d3d99e53-9312-11e7-99dd-000d3a260024_0
solves the issue and the logs start to show up in OMS.

Any help on what might be going on?

Thank you
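
If restarting the agent is what unblocks it, a kubectl-level restart avoids SSHing to the node: deleting the pod lets the DaemonSet recreate it. A sketch, using the pod name from the listing above (the replacement pod will get a new name):

kubectl delete pod omsagent-f35cw
kubectl get pods -o wide | grep omsagent            # find the replacement pod
kubectl logs <new-omsagent-pod> | grep -i heartbeat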

Redis: READONLY You can't write against a read only slave.

Hi, can someone help me with a problem with the Redis cluster?

For example, I use Google Cloud and I have created the Redis cluster with the redis-sentinel svc. I also deleted the master pod and it was recreated by itself, as described in the example.

But does anyone have an idea of how to use this cluster?

I created an internal load balancer svc:

apiVersion: v1
kind: Service
metadata:
  name: redis-loadbalancer
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    name: redis
spec:
  type: LoadBalancer
  loadBalancerIP:
  ports:
  - port: 6379
    protocol: TCP
  selector:
    name: redis

The good thing is I can connect to the cluster, but when I try to insert some data it gives me an error:

redis-cli -h 10.128.0.11 -p 6379
10.128.0.11:6379> set test test
(error) READONLY You can't write against a read only slave.
10.128.0.11:6379> 
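
The LoadBalancer above selects every pod labeled name: redis, so connections can land on a slave, and slaves reject writes with exactly this READONLY error. One workaround is a write service scoped to the master, sketched below on the assumption that the master pod carries the role: master label as in the example manifests (note the pod holding that role can change after a sentinel failover):

apiVersion: v1
kind: Service
metadata:
  name: redis-master-lb
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    name: redis
spec:
  type: LoadBalancer
  ports:
  - port: 6379
    protocol: TCP
  selector:
    name: redis
    role: master       # assumed label on the current master pod

The more robust pattern is to let a sentinel-aware client ask the sentinel service (port 26379) for the current master instead of hard-coding a write endpoint.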

redis-sentinel didn't complete the election

After executing “kubectl delete pods redis-master”:
kubectl logs -f redis-sentinel:
12:X 26 Sep 01:10:02.266 # +monitor master mymaster 172.17.109.163 6379 quorum 2
12:X 26 Sep 01:10:02.267 * +slave slave 192.168.250.49:6379 192.168.250.49 6379 @ mymaster 172.17.109.163 6379
12:X 26 Sep 01:10:03.401 * +sentinel sentinel 489c70f345a51146d384a39830b0b7c069605f59 172.17.109.145 26379 @ mymaster 172.17.109.163 6379
12:X 26 Sep 01:10:03.742 * +sentinel sentinel aa7ee4d01c938fca50204f49e3f9a2a8dd73b537 172.17.109.163 26379 @ mymaster 172.17.109.163 6379
12:X 26 Sep 01:10:05.243 * +sentinel sentinel 5f5328394e1ba2ddbe002d2b3d35cc244a4b16f6 172.17.109.110 26379 @ mymaster 172.17.109.163 6379
12:X 26 Sep 01:11:02.337 # +sdown slave 192.168.250.49:6379 192.168.250.49 6379 @ mymaster 172.17.109.163 6379
12:X 26 Sep 01:12:44.216 # +sdown master mymaster 172.17.109.163 6379
12:X 26 Sep 01:12:44.217 # +sdown sentinel aa7ee4d01c938fca50204f49e3f9a2a8dd73b537 172.17.109.163 26379 @ mymaster 172.17.109.163 6379
12:X 26 Sep 01:12:44.367 # +new-epoch 1
12:X 26 Sep 01:12:44.420 # +vote-for-leader 489c70f345a51146d384a39830b0b7c069605f59 1
12:X 26 Sep 01:12:45.296 # +odown master mymaster 172.17.109.163 6379 #quorum 3/2
12:X 26 Sep 01:12:45.296 # Next failover delay: I will not start a failover before Tue Sep 26 01:18:44 2017
12:X 26 Sep 01:18:44.691 # +new-epoch 2
12:X 26 Sep 01:18:44.691 # +try-failover master mymaster 172.17.109.163 6379
12:X 26 Sep 01:18:44.733 # +vote-for-leader 67c6de61fc6be502e9fd0b2ae1be056ddc192aea 2
12:X 26 Sep 01:18:44.822 # 489c70f345a51146d384a39830b0b7c069605f59 voted for 67c6de61fc6be502e9fd0b2ae1be056ddc192aea 2
12:X 26 Sep 01:18:44.840 # 5f5328394e1ba2ddbe002d2b3d35cc244a4b16f6 voted for 67c6de61fc6be502e9fd0b2ae1be056ddc192aea 2
12:X 26 Sep 01:18:44.850 # +elected-leader master mymaster 172.17.109.163 6379
12:X 26 Sep 01:18:44.850 # +failover-state-select-slave master mymaster 172.17.109.163 6379
12:X 26 Sep 01:18:44.914 # -failover-abort-no-good-slave master mymaster 172.17.109.163 6379
12:X 26 Sep 01:18:44.985 # Next failover delay: I will not start a failover before Tue Sep 26 01:24:45 2017
12:X 26 Sep 01:24:45.127 # +new-epoch 3
12:X 26 Sep 01:24:45.187 # +vote-for-leader 489c70f345a51146d384a39830b0b7c069605f59 3
12:X 26 Sep 01:24:45.187 # Next failover delay: I will not start a failover before Tue Sep 26 01:30:45 2017

kubectl logs -f redis-td4wq:
14:S 26 Sep 01:24:57.030 * MASTER <-> SLAVE sync started
14:S 26 Sep 01:25:58.153 # Timeout connecting to the MASTER...
14:S 26 Sep 01:25:58.153 * Connecting to MASTER 172.17.109.163:6379
14:S 26 Sep 01:25:58.153 * MASTER <-> SLAVE sync started


How to pass access token in spark-submit to connect to k8s cluster

I have tried to submit a Spark job using spark-submit in our k8s cluster. The authentication failed. I have also tried with kubectl proxy; it did not work. Has anyone tried to add an access token to spark-submit for k8s?

bin/spark-submit  --deploy-mode cluster   --class org.apache.spark.examples.SparkPi   --master k8s://http://127.0.0.1:8003 & KUBERNETES_TRUST_CERTIFICATES=true.  --kubernetes-namespace sidartha-spark-cluster   --conf spark.executor.instances=2   --conf spark.app.name=spark-pi   --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.1.0-rc1   --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.1.0-rc1   examples/jars/spark-examples_2.11-2.1.0-k8s-0.1.0-SNAPSHOT.jar 1000

I'm getting this error:

Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: http://127.0.0.1:8003/api/v1/namespaces/sidartha-spark-cluster/pods. Message: Forbidden! User sidartha-spark-cluster doesn't have permission..
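
For what it's worth, the Spark-on-Kubernetes submission client exposes authentication options that can carry a bearer token directly, so kubectl proxy isn't required. Whether these exact keys apply depends on the Spark build in use, so treat this as a sketch with placeholder values:

bin/spark-submit \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  --master k8s://https://<apiserver-host>:<port> \
  --kubernetes-namespace sidartha-spark-cluster \
  --conf spark.kubernetes.authenticate.submission.oauthToken=<service-account-token> \
  --conf spark.kubernetes.authenticate.submission.caCertFile=/path/to/ca.crt \
  --conf spark.executor.instances=2 \
  --conf spark.app.name=spark-pi \
  --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.1.0-rc1 \
  --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.1.0-rc1 \
  examples/jars/spark-examples_2.11-2.1.0-k8s-0.1.0-SNAPSHOT.jar 1000

The "Forbidden! User ... doesn't have permission" message also suggests the identity behind the token needs RBAC permission to create pods in that namespace, which is a separate problem from how the token is passed.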

guestbook: consider using single-instance redis deployment

Currently the guestbook deployment consists of:

  • frontend, 3 replicas (+service)
  • redis-master, 1 replica (+service)
  • redis-slave, 2 replicas (+service)

Redis reads go to the slaves, writes go to the master. However, a slave/master setup seems like overkill for this tutorial:

  • it makes the tutorial ~30% longer and we have to explain why we have a slave/master setup
  • we have to maintain a custom redis image with an entrypoint pointing to a custom script
  • the same story can be told with a single-instance redis image from Docker Hub (see the sketch below).
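
For reference, a minimal single-instance sketch using the stock redis image from Docker Hub (names and labels are illustrative, not the current guestbook manifests; apps/v1 assumes a reasonably recent cluster):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis           # stock image, no custom entrypoint script to maintain
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
  - port: 6379

The frontend would then read and write against the single redis Service, removing the slave deployment and the custom image entirely.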

cc: @jeffmendoza do you have any context you can provide?

Update Cassandra Example

I have a PR in the core repo that I need to move over here. Also, how are we handling owners here?

Redis needs a service?

I just set up Redis in my cluster using this guide: https://github.com/kubernetes/examples/tree/master/staging/storage/redis#tl-dr

The only service this creates is the sentinel at port 26379.
So when connecting to Redis from the application pods, should they be referencing redis-sentinel:26379, or should we be creating a second service like the following for our applications to connect to Redis?

apiVersion: v1
kind: Service
metadata:
  labels:
    name: redis
    role: service
  name: redis
spec:
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    name: redis
    role: master
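
Either approach can work, as long as writes end up at the current master. A sentinel-aware client asks the sentinel service for the master's address instead of relying on a fixed label; as a manual sanity check, using the example's default master name mymaster and a placeholder pod name:

kubectl exec <redis-sentinel-pod> -- redis-cli -p 26379 sentinel get-master-addr-by-name mymaster

A plain redis service with a role: master selector, like the one above, only stays correct if that label keeps tracking whichever pod sentinel has promoted.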
